OK, so I regularly fill in forms in Adobe; it automatically totals the columns, except the final one, where the last digit comes out as above.
How very queer.
MM_Dandy wrote:For example, Microsoft implemented what they call a floating decimal point type, which allows for exact decimal representation up to however many decimal digits can be stored in memory.
Мастер wrote:MM_Dandy wrote:For example, Microsoft implemented what they call a floating decimal point type, which allows for exact decimal representation up to however many decimal digits can be stored in memory.
I did not know that. I assume Microsoft is not making chips these days, so this is strictly something they have in a software library somewhere?
Microsoft wrote:The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96-1) is equal to MinValue, and 2^96-1 is equal to MaxValue.
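For anyone who wants to see that layout concretely, here's a little Python sketch of it. The function name `decode_decimal` and its field names are mine, not a Microsoft API; it just reconstructs the value from the three stored fields that the quote describes, using `Fraction` so the division by the power of ten stays exact:

```python
from fractions import Fraction

def decode_decimal(sign, mantissa, scale):
    """Rebuild the numeric value from the three fields of the layout above."""
    assert sign in (0, 1)             # 1-bit sign
    assert 0 <= mantissa < 2**96      # 96-bit unsigned integer
    assert 0 <= scale <= 28           # exponent on the implicit base 10
    value = Fraction(mantissa, 10**scale)
    return -value if sign else value

# 1.15 would be stored as mantissa=115 with scale=2 (i.e. 115 / 10^2),
# so decimal fractions like this come out exactly, unlike binary floats.
print(decode_decimal(0, 115, 2))   # 23/20, i.e. exactly 1.15
print(decode_decimal(1, 300, 2))   # -3
```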
Мастер wrote:Do you know how they do it? One thought that occurs to me is to represent all numbers as integers, then include some additional bits to indicate how many decimal places things should be shifted over. I once wrote a piece of software with a primitive version of that: all numbers were represented as integers, implicitly divided by one hundred, so the number "three" was stored as the integer 300. It was financial software, representing dollars and cents.

I feel like I could implement something like this very quickly myself if all we need is addition and subtraction: just declare a data type with an integer and a "shift" short integer or some such thing, and write addition and subtraction routines. Seems like a few minutes' work. Multiplication would be quite easy also. Division, OK, that might be a little trickier if you want to retain the accuracy. A lot of other functions (trigonometric, exponential, logarithmic, etc.) are not going to be exact anyway, so you could just convert to normal numbers, perform the operation, then change them back. But if you want some special cases to come out exact (for example, if you want the base-ten logarithm of 0.001 to be exactly -3), it might be a little tricky.
Мастер wrote:I think you would still have issues if you tried to do things like add 17,000,000,000,000,000 and 0.000000000000000142 though.
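Right: that sum needs about 35 significant digits, and a 96-bit mantissa only holds about 28-29. You can see the effect with Python's `decimal` module (an illustration, not the .NET type itself) by capping the precision at 28 digits:

```python
from decimal import Decimal, getcontext

# Emulate roughly the 28 significant digits a 96-bit mantissa provides.
getcontext().prec = 28

big   = Decimal("17000000000000000")        # 1.7e16
tiny  = Decimal("0.000000000000000142")     # 1.42e-16
total = big + tiny

# The exact sum, 17000000000000000.000000000000000142, needs ~35
# significant digits, so the tiny addend is rounded away entirely.
print(total == big)   # True
```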
MM_Dandy wrote:Microsoft wrote:The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96-1) is equal to MinValue, and 2^96-1 is equal to MaxValue.
MM_Dandy wrote:As far as the type is concerned, I thought it worked something along those lines as well, and the StackOverflow entry that I referenced implied as much, but according to Microsoft that is definitely not the case. Since it is a binary representation, however, if your problem deals with numbers that cannot be represented in binary, or that need to be both very large and very precise, you'd still have to roll your own solution.