
Someone correct me if I'm wrong because it's been years since my EE class.

Basic math ends at division. Heck, if you learn how math works on computers, it feels like a number trick. Those operations are reliable.

The issue comes when you need to track never-ending decimal expansions. Even if every transistor were dedicated to your problem, you'd still need to round. So even if you have 32 bits of decimals, there will be rounding sometimes.

A human has to decide what to do. Rather than just converting the binary to decimal and calling it done, it's more important to be consistent and predictable.

The issues described in the article aren't 5/2 = 2 billion. The article talks about rounding always giving you even numbers, which you can imagine leads to issues if you're using floats together with logic that involves even and odd numbers.

Edit: can someone correct me or add details? I'm very interested, but I'm not taking ENG 240 again... :p
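To make the round-half-to-even behavior concrete, here's a quick sketch in Python, whose built-in round() happens to use the same tie-breaking rule as the IEEE 754 default rounding mode:

```python
# Python's built-in round() uses round-half-to-even ("banker's rounding"),
# the same default tie-breaking rule as IEEE 754 floating point.
# All of these .5 values are exactly representable in binary, so what
# we see below is purely the rounding rule, not representation error.
halves = [0.5, 1.5, 2.5, 3.5, 4.5]
rounded = [round(x) for x in halves]
print(rounded)  # [0, 2, 2, 4, 4] -- ties always land on the even neighbor
```

So half of the ties round down and half round up, which avoids a systematic upward bias, but it does mean the results cluster on even numbers.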



Exactly this. The floor function works on binary numbers but has to round them to a decimal place. It's not simple, it just appears to be.


Floor has nothing to do with decimal.


The period in "3.14" is called a "decimal point." The digits after the point are enumerated as "decimal places." What each decimal place "means" depends on the numeric base.

So, "decimal places" or "rounding to a decimal place" has nothing to do with base 10 (which is what I think you are implying): decimal and "decimal places" exist in all bases, including base 2, and the meaning of each "decimal place" depends on the numeric base (base 10, base 2, etc.).
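A tiny illustration of that last point: the same digit string after the point means different values in different bases. Here's base 2 worked out in Python:

```python
# In base 2, the places after the point are halves, quarters, eighths, ...
# So 0.101 (binary) = 1/2 + 0/4 + 1/8 = 0.625 (base 10).
value = int("101", 2) / 2**3  # treat "101" as the fractional digits, scale by 2^-3
print(value)  # 0.625
```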


Sometimes people use "binary point" to distinguish from decimal point and avoid the confusion of the dec- prefix.


"Radix point" or "radix place" are the generic terms I've seen most often.


"Fraction point" and "fraction (or fractional) places" might be a decent alternative that clearly works in any base, while also conveying its purpose.


This is why the big-rational data type is so useful on occasion. When you just want to get one calculation exactly right, and performance doesn't matter so much because you aren't doing huge numbers of repeated operations (as in ML), use a big rational with its arbitrary-precision numerator and denominator, and you don't need to worry about even epsilon error.
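For example, Python ships a big-rational type in the standard library, fractions.Fraction, which makes the contrast with binary floats easy to see:

```python
from fractions import Fraction

# In binary floating point, 0.1 + 0.2 famously isn't exactly 0.3...
print(0.1 + 0.2 == 0.3)  # False

# ...but with arbitrary-precision rationals the same arithmetic is exact.
a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))  # True
print(a)                     # 3/10
```

The trade-off is speed and memory: the numerator and denominator can grow without bound as you chain operations, which is exactly why it's a poor fit for tight numeric loops.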


> because you aren't doing huge numbers of repeated operations (as in ML)

In ML, precision doesn't matter so much. It's common to train at 32 bits and deploy the model at 16 bits. In fact, stochasticity is useful, and we add it in via dropout and other techniques. You can drop any connection (or millions of them) in a neural net and get almost the same result. There have been papers showing that reducing the number of bits used to represent weights can even improve the network, because it has a regularising effect. The real brain is also stochastic.


Except if you run exp, log, sqrt, sin, cos etc. Then you immediately leave the rationals.
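Right, and this shows up immediately if you try it with Python's Fraction: the transcendental functions can accept a rational but can only hand back a float approximation, since values like sqrt(2) have no exact rational representation.

```python
from fractions import Fraction
import math

two = Fraction(2)

# math.sqrt accepts a Fraction (via float conversion) but returns an
# ordinary float, because sqrt(2) is irrational.
root = math.sqrt(two)
print(type(root).__name__)  # float
print(root * root == two)   # False -- the rounding error is back
```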


Yes, it definitely depends on what you're doing with the numbers, but personally I don't recall ever having needed trigonometric functions in my decade-plus working life doing standard line-of-business type stuff.


I can't tell if you are dismissing this, but as an FYI trig is used in Engineering.


I said "it definitely depends on what you're doing with the numbers". That's not a dismissal; I'm simply pointing out that I, like many if not most other software engineers, don't use trig on a frequent basis. I'm well aware that it's used in actual engineering, but that's just not what most of us here do.



