I'm looking for an equivalent of `(int) log2(X)` that does not suffer from floating-point accuracy problems.

I have found several discussions about this problem (both on Stack Overflow and elsewhere), but none were able to satisfy all of the following three conditions:

- Portability: Should be able to run on multiple operating systems and architectures.
- Efficiency: Constant time would be best.
- Accuracy: Should not have accuracy errors.

A few solutions I have found or thought of include (for the time complexities I'm using `N` for the number of bits, not the value of `X`):

- Use GCC's `__builtin_clz`. This solution is compiler-dependent.
- Use the `bsr` instruction. This solution is architecture-dependent.
- Iterate through the powers of 2. This solution is `O(N)`.
- Binary search through the powers of 2. This solution is `O(log(N))`. Also, it might add some overhead compared to a simpler solution.
- Use C's `log2` to get an initial guess and iterate through the powers of 2 from there. I'm not sure if this solves the accuracy problem, but maybe it would work. Still, going through floating point via `log2` probably wouldn't beat a simpler integer-only solution.
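For concreteness, the binary-search idea can be written with only integer shifts and compares, so it is portable and runs in a fixed number of steps for a given word size (this is just a sketch for 32-bit inputs; `ilog2_32` is my own name, and it assumes `X` is nonzero):

```c
#include <stdint.h>

/* Binary search over the bits of x: halve the remaining range
   of possible bit positions at each step.
   Returns floor(log2(x)); x must be nonzero.
   Portable C, exactly five compare-and-shift steps for uint32_t. */
static int ilog2_32(uint32_t x)
{
    int r = 0;
    if (x >= (uint32_t)1 << 16) { x >>= 16; r += 16; }
    if (x >= (uint32_t)1 << 8)  { x >>= 8;  r += 8;  }
    if (x >= (uint32_t)1 << 4)  { x >>= 4;  r += 4;  }
    if (x >= (uint32_t)1 << 2)  { x >>= 2;  r += 2;  }
    if (x >= (uint32_t)1 << 1)  {           r += 1;  }
    return r;
}
```

Since it uses only unsigned integer arithmetic, there are no accuracy issues, and the step count is constant rather than proportional to `log2(X)`.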

Maybe I have overlooked some aspect of the proposed solutions, but each of them seems to break at least one of the three conditions.

Is there a solution that will satisfy all three conditions? (Note that I'm not necessarily asking for an easy/simple solution, although that would be preferred)

`O` notation is very problematic here: it is useless for understanding the performance of short loops (e.g. here we have a few dozen steps in the most naive implementation, or just a couple when it's optimized). Compilers will unroll naive loops, and processors will parallelize micro-operations that don't have data dependencies; those effectively act like `O(1)`. A call to `log2` will be the slowest option. Code that does more work but has fewer data dependencies will be faster than "simpler" code that has to be serialized by the CPU. All naive "textbook" approaches to estimating complexity/cost are basically no better than random guessing at predicting actual performance on real hardware. Your idea of cost applies on an Arduino, not on desktop/mobile CPUs.
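To illustrate how small these "loops" really are: the most naive shift-and-count version runs at most 31 iterations for a 32-bit input, and a compiler is free to unroll it entirely (a sketch with a name of my own choosing; it assumes a nonzero input):

```c
#include <stdint.h>

/* Naive O(N)-in-bits version: shift right until x reaches 1,
   counting the shifts. At most 31 iterations for uint32_t;
   x must be nonzero. Trivial for a compiler to unroll. */
static int ilog2_naive(uint32_t x)
{
    int r = 0;
    while (x >>= 1)
        r++;
    return r;
}
```

The asymptotic label `O(N)` says nothing about whether this beats a "smarter" version once unrolling and instruction-level parallelism are in play; only measurement does.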