Hacker News

Makes sense for dollars, but for anything like graphics or physics I'd consider a power of two like 1,024 as the fixed-point factor instead.

My intuition tells me that "x * 1000 / 1000 == x" might not be true for all numbers if you're using floats.
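That intuition is right: a quick sketch (Python, sample values mine) of why a power-of-two factor is the safer choice. Multiplying or dividing a double by 1024 only shifts the exponent, so the round trip is exact barring overflow/underflow, whereas a decimal factor like 1000 rounds on both the multiply and the divide.

```python
# Sketch: why a power-of-two fixed-point factor is attractive for floats.
# (x * 1024) / 1024 == x exactly for any x that doesn't overflow/underflow,
# because scaling by a power of two only adjusts the exponent.

def roundtrip_exact(x: float, factor: float) -> bool:
    return (x * factor) / factor == x

# Power-of-two factor: exact for every sample here.
samples = [0.1, 0.3, 4.35, 1/3, 2**-30, 123456.789]
assert all(roundtrip_exact(x, 1024.0) for x in samples)

# A decimal factor like 1000 usually survives the round trip too, but the
# guarantee is gone: x * 1000 and the division back each round to the
# nearest double. A visible symptom of that same decimal rounding:
assert 0.1 + 0.2 != 0.3
```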



A sure sign of an inexperienced programmer in numerical computing is when they check for equality to zero of a floating-point number as

if (x == 0) ...

instead of something like

if (abs(x) < eps) ...

where eps is a suitably defined small number.
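A minimal illustration of why the `== 0` check bites (Python; the eps value is arbitrary and context-dependent): a quantity that is exactly zero in real arithmetic lands a few ulps away after each literal and operation rounds to the nearest double.

```python
# (0.1 + 0.2) - 0.3 is exactly zero mathematically, but every literal and
# every operation rounds, leaving a tiny nonzero residue.
x = (0.1 + 0.2) - 0.3

assert x != 0          # the naive equality check "fails"
eps = 1e-12            # a "suitably defined small number"; the right
                       # choice depends on the scale of your data
assert abs(x) < eps    # the tolerance check behaves as intended
```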


Sometimes it is fine. For example, reference BLAS will check if the input scalars in DGEMM are exactly zero, for

    C <- alpha*AB + beta*C 
If beta is exactly 0, you don’t have to read C, just write to it.

The key here is that beta is likely to be an exact value that is entered as a constant, and detecting it allows for a worthwhile optimization.
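A toy sketch of that point (Python; the function and its shape-handling are mine, not the reference BLAS): when beta is exactly 0, the old contents of C are never read, which also means uninitialized garbage or NaN in C is harmless.

```python
import math

def gemm(alpha, A, B, beta, C):
    """Toy dense C <- alpha*A*B + beta*C on lists of lists, mimicking the
    reference-BLAS special case: if beta == 0 exactly, overwrite C
    without reading it."""
    n, k, m = len(A), len(B), len(B[0])
    for i in range(n):
        for j in range(m):
            acc = alpha * sum(A[i][p] * B[p][j] for p in range(k))
            if beta == 0.0:
                C[i][j] = acc                 # never reads the old C[i][j]
            else:
                C[i][j] = acc + beta * C[i][j]
    return C

# With beta == 0, even NaN garbage in C is overwritten cleanly; the
# generic "acc + beta * C[i][j]" path would propagate the NaN instead.
C = [[math.nan, math.nan], [math.nan, math.nan]]
gemm(1.0, [[1.0, 0.0], [0.0, 1.0]], [[2.0, 3.0], [4.0, 5.0]], 0.0, C)
assert C == [[2.0, 3.0], [4.0, 5.0]]
```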


I would guess that even most of the time, people using epsilon don't understand it. It's not like there's one universal error constant for floating-point numbers. Saying "just use epsilon" is not much better than x == 0, and it can make bugs harder to find when the check sometimes works and other times doesn't.
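Agreed, and the failure mode is easy to show: a fixed absolute eps is scale-dependent. A sketch (Python) where abs(a - b) < eps "fails" at large magnitudes even though the two values differ by only a couple of ulps; a relative tolerance, as in `math.isclose`, handles it.

```python
import math

a = 1e16 * (0.1 + 0.2)
b = 1e16 * 0.3

# The absolute error is enormous by "tiny epsilon" standards...
assert abs(a - b) > 1e-9   # a fixed small eps declares these "different"

# ...but the relative error is a couple of ulps, so a relative tolerance
# (math.isclose defaults to rel_tol=1e-9) accepts them as equal.
assert math.isclose(a, b)
```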


Funnily enough, I think a sure sign of an inexperienced programmer in bigco application programming is the opposite: they learn the wrong mental model of "floating point is approximate, never ever do ==" in school.
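To that point: == on floats is perfectly well-defined, and sometimes it's exactly the right tool. A sketch (Python, examples mine) of the legitimate cases: sentinel constants, unmodified copies, and arithmetic that happens to be exact in binary.

```python
# Cases where float == is sound:

# 1. A sentinel/flag constant that was stored and never touched by
#    arithmetic (the DGEMM beta == 0 check upthread).
beta = 0.0
assert beta == 0.0

# 2. A value compared against an exact copy of itself. 0.1 isn't exactly
#    one tenth, but it IS one specific, reproducible double.
x = 0.1
y = x
assert y == x

# 3. Arithmetic that incurs no rounding: small integers and sums of
#    dyadic fractions are represented and computed exactly.
assert 2.0 + 2.0 == 4.0
assert 0.5 + 0.25 == 0.75
```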



