Hacker News

Both methods give you one bit of information per iteration. The number of distinct floating-point numbers between l and r equals the number of integers between the reinterpretations of l and r as integers.

(assuming starting l,r were appropriate)
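A minimal sketch of the reinterpretation (Python, assuming IEEE 754 doubles and non-negative l, r; the helper name is mine):

```python
import struct

def float_to_ordered_int(x: float) -> int:
    # Reinterpret a double's bits as a 64-bit integer. For non-negative
    # floats the bit pattern is monotonic: adjacent floats differ by
    # exactly 1 in this representation.
    (bits,) = struct.unpack('<q', struct.pack('<d', x))
    return bits

# Number of distinct doubles strictly between l and r:
l, r = 1.0, 2.0
count = float_to_ordered_int(r) - float_to_ordered_int(l) - 1
print(count)  # 2**52 - 1 doubles strictly between 1.0 and 2.0
```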



Suppose l=1, r=3. So m=(1+3)/2=2. There are twice as many floating-point numbers between 1 and 2 as between 2 and 3. So if you refine from [1 3] to [1 2], you've only managed to prune 1/3 of your search space, not 1/2.
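The 2:1 density and the 1/3 pruning can be checked by counting bit patterns (a sketch, assuming IEEE 754 doubles):

```python
import struct

def bits(x: float) -> int:
    # Bit pattern of a double as an integer (a valid ordering for positive floats).
    return struct.unpack('<q', struct.pack('<d', x))[0]

n12 = bits(2.0) - bits(1.0)  # doubles in [1, 2): 2**52
n23 = bits(3.0) - bits(2.0)  # doubles in [2, 3): 2**51
print(n12 // n23)            # 2: twice as many in [1, 2) as in [2, 3)
print(n23 / (n12 + n23))     # fraction pruned by refining [1, 3] to [1, 2]: 1/3
```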

(Also: integer halving breaks when l and r have opposite signs.)

Perhaps this will make it clearer: suppose we have l=0 and r=1, and the number we're searching for is the smallest float, 2^-1074. If we use integer halving, we get one bit per iteration; floats have 64 bits, so we require no more than 64 iterations. But if we use floating-point halving, each midpoint (0 + r)/2 only halves r, so it takes 1074 iterations to get from [0 1] to [0 2^-1074].
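The two iteration counts can be verified directly (a sketch, assuming IEEE 754 doubles):

```python
import math
import struct

target = 2.0 ** -1074  # smallest (subnormal) double

# Floating-point halving: m = (l + r) / 2 with l = 0 just halves r,
# so it takes one step per power of two between 1 and 2**-1074.
l, r = 0.0, 1.0
fp_steps = 0
while r > target:
    r = (l + r) / 2
    fp_steps += 1
print(fp_steps)  # 1074

# Integer halving: the search space is the integer span of [0.0, 1.0]
# under bit reinterpretation, so ~62 bisection steps suffice.
span = struct.unpack('<q', struct.pack('<d', 1.0))[0]
print(math.ceil(math.log2(span)))  # 62
```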


Oh yeah, nice. Thanks, the example did make it clear.



