Suppose the set S contains n positive integers. If the mean μ of the elements is known, is there a method for finding the maximum possible value of the standard deviation σ of S?
I know there are only finitely many collections of n positive integers with mean μ: each element must be at most the sum of all the elements, nμ, so there are at most (nμ)^n possibilities for S. However, I can't think of a method that reliably maximizes the spread of S. For context, I'm specifically working with a set of 1000 positive integers with μ = 10, so brute forcing it is not an option. Any help is appreciated.
Answer
The most extreme case occurs when n−1 of the integers take the minimum value 1 and the remaining integer takes the value nμ−n+1. In any other case the standard deviation can be increased by moving a pair of values further apart: decreasing some x by 1 and increasing some y ≥ x by 1 keeps the mean fixed and raises the sum of squared deviations by 2(y−x)+2 > 0.
This has mean μ and standard deviation (using the population 1/n method) of (μ−1)√(n−1).
(If you insist on using the sample 1/(n−1) method, then the standard deviation would instead be (μ−1)√n.)
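To verify the population formula from the definition: the deviations from the mean are μ−1 for each of the n−1 ones and nμ−n+1−μ = (n−1)(μ−1) for the large value, so

```latex
\sigma^2
= \frac{1}{n}\Bigl[(n-1)(\mu-1)^2 + \bigl((n-1)(\mu-1)\bigr)^2\Bigr]
= \frac{(n-1)(\mu-1)^2 \bigl(1 + (n-1)\bigr)}{n}
= (n-1)(\mu-1)^2,
\qquad
\sigma = (\mu-1)\sqrt{n-1}.
% Dividing by n-1 instead of n gives \sigma^2 = n(\mu-1)^2, i.e. \sigma = (\mu-1)\sqrt{n}.
```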
In your example with n = 1000 and μ = 10, this would be 999 values of 1 and one value of 9001, with a standard deviation of about 284.463 (or 284.605 with the sample method).
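Here is a minimal numerical check of this example in Python, using only the standard library (the values n = 1000 and μ = 10 are taken from the question):

```python
import math
import statistics

n, mu = 1000, 10  # values from the question

# Extreme configuration: n-1 ones and one value of n*mu - n + 1
data = [1] * (n - 1) + [n * mu - n + 1]

assert statistics.mean(data) == mu  # the mean is exactly mu

# Population (1/n) standard deviation vs. the closed form (mu-1)*sqrt(n-1)
print(statistics.pstdev(data))        # ~284.4627
print((mu - 1) * math.sqrt(n - 1))    # same value

# Sample (1/(n-1)) standard deviation vs. (mu-1)*sqrt(n)
print(statistics.stdev(data))         # ~284.6050
print((mu - 1) * math.sqrt(n))        # same value
```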