Multiply list of floats to retain maximum precision on a modern CPU

I have a list of many single-precision IEEE floating-point numbers. How does one go through the list and multiply all the numbers so as to retain maximum precision and avoid over/underflow? Assume that multiplying the numbers in arbitrary precision and then truncating to float would produce no over/underflow.

Can one look at a list of floats and determine the optimal order without actually multiplying the numbers, or must we start multiplying and only then search for the next factor that best retains precision, in a sort of feedback (next-step search) algorithm? Can we limit ourselves to a sort based only on the exponents, read off in a single pass through the list?
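For concreteness, the exponent-only idea can be tried without performing any multiplications up front: `math.frexp` reads off each number's binary exponent. Below is one greedy heuristic (a sketch only, not a proven-optimal order; the function name and the small-end/large-end pairing rule are my own) that sorts by exponent and then multiplies from whichever end pushes the running product's exponent back toward zero:

```python
import math

def product_by_exponent_pairing(xs):
    """Multiply floats, picking small- or large-magnitude factors so the
    running product's exponent stays near zero (hypothetical heuristic)."""
    # Sort by binary exponent, extracted without multiplying anything.
    order = sorted(xs, key=lambda x: math.frexp(x)[1])
    lo, hi = 0, len(order) - 1
    prod = 1.0
    while lo <= hi:
        # If the running product's exponent is positive, shrink it with a
        # small factor; otherwise grow it with a large one.
        if math.frexp(prod)[1] > 0:
            prod *= order[lo]
            lo += 1
        else:
            prod *= order[hi]
            hi -= 1
    return prod
```

On an input like `[1e300, 1e300, 1e-300, 1e-300]` a naive left-to-right product overflows to `inf`, while this ordering keeps the intermediate results finite. It does not, however, say anything about significand rounding, which is the other half of the question.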

Consider i uniformly distributed floating-point numbers with an m-bit mantissa and an n-bit exponent, on a CPU that stores multiplication results in a register with an r-bit mantissa and an s-bit exponent and truncates back to m+n bits on read. Assuming that multiplying these numbers in arbitrary precision and truncating to the original (m+n)-bit format would produce no over/underflow, what are the chances that multiplying in the finite (r+s)-bit register produces no overflow? And in the case of no overflow, what precision can I expect from this operation as a function of i, n, m, r, and s?

A great partial answer would address just floats, doubles, and various common register sizes, for small, medium, large, and very large i.

• What do you mean by “uniformly distributed floating point numbers of width m bits for the mantissa and n for the exponent”? Are you saying the m bits for the significand and the n bits for the exponent are chosen uniformly (that is, each with equal probability for 0 or 1, independently of the others)? Or do you mean the represented values are uniform over some interval? If the former, this does not seem like a practical problem: Numbers used in practice are not distributed this way. If the latter, there are additional issues of floating-point characteristics affecting the distribution. Oct 14, 2018 at 13:53
• I don't hold onto any definition in particular, this was just to direct the discussion in a productive direction - if you think any specific non-trivial distribution makes it easier to answer the question, use that instead. Oct 14, 2018 at 14:26
• I doubt the probability aspects are interesting. The problem of whether underflow or overflow occurs is largely a random walk problem; the known random walk solutions will apply with minor variations due to rounding effects. As for accuracy, multiplication does not suffer from the cancellation problem the way addition does. I think the average error and the worst possible error for n multiplications will tend to be (1+e)^(n−1)−1, where e is the average or worst error for one multiplication for the format. Using a wider accumulator register would reduce e. Oct 15, 2018 at 0:25
• Exponent overflow/underflow could be handled in various ways, such as rescaling when the bounds are approached (and tracking the rescalings in auxiliary integers) or by sorting by magnitude and picking from the small or large end according to the current exponent of the accumulator. I am not aware of methods for manipulating the products to reduce errors such as selecting which factors to multiply to avoid errors—I can think of some crude ideas (picking those with favorable significands somehow), but they seem like more work than they would be worth. Oct 15, 2018 at 0:28
• With a wider accumulator, I think the final error would be the sum of (1+e)^(n−1)−1 using the e for the accumulator and the rounding error when converting from the accumulator format to the final format. One or the other might dominate, depending on circumstances. (Do note that I am just spitballing; on average, errors will cancel somewhat, and I have not accounted for that.) Oct 15, 2018 at 0:31