dmitry.nadezhin at gmail.com
Fri Feb 7 10:44:47 UTC 2014
I think that a better name is BigBinary.
Both BigDecimal and BigBinary are floating-point numbers,
with radix 10 and radix 2 respectively.
A more general approach is an abstract class or interface Rational
which has implementation subclasses.
Each nonzero rational number P/Q can be represented as
P/Q = p/q * 2^e , where e is an integer, p and q are odd integers, and GCD(p, q) = 1.
Then BigBinary is the subclass with q = 1.
Arithmetic operations on Rationals are implemented by a general algorithm
when the arguments are true
rationals (q != 1) and by specific algorithms when they are binaries (q = 1).
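A minimal sketch of what such a BigBinary subclass could look like. The class name, field names, and normalization policy are my own illustrative choices, not an existing JDK API; it only shows why radix-2 arithmetic avoids BigDecimal's decimal scaling and rounding:

```java
import java.math.BigInteger;

// Hypothetical BigBinary: value = significand * 2^exponent,
// normalized so a nonzero significand is odd (the unique p in p * 2^e).
final class BigBinary {
    final BigInteger significand;
    final int exponent;

    BigBinary(BigInteger significand, int exponent) {
        // Factor all powers of two out of the significand into the exponent.
        int shift = significand.signum() == 0 ? 0 : significand.getLowestSetBit();
        this.significand = significand.shiftRight(shift);
        this.exponent = exponent + shift;
    }

    // (p1 * 2^e1) * (p2 * 2^e2) = (p1 * p2) * 2^(e1 + e2):
    // no rounding, no multiplication by a power of ten.
    BigBinary multiply(BigBinary other) {
        return new BigBinary(significand.multiply(other.significand),
                             exponent + other.exponent);
    }

    // Addition aligns the operands with a cheap binary shift
    // instead of a scaling multiplication.
    BigBinary add(BigBinary other) {
        BigBinary lo = exponent <= other.exponent ? this : other;
        BigBinary hi = exponent <= other.exponent ? other : this;
        BigInteger shifted = hi.significand.shiftLeft(hi.exponent - lo.exponent);
        return new BigBinary(lo.significand.add(shifted), lo.exponent);
    }

    // Exact only when the normalized exponent is non-negative.
    BigInteger toBigIntegerExact() {
        return significand.shiftLeft(exponent);
    }
}
```

A Rational superclass would dispatch to this fast path when both operands have q = 1 and fall back to general p/q arithmetic otherwise.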
This is elaborated here:
On Thu, Feb 6, 2014 at 9:11 PM, Tim Buktu <tbuktu at hotmail.com> wrote:
> now that BigInteger deals better with large numbers, it would be nice
> for that to translate into an improvement in BigDecimal performance
> because BigDecimal is essentially a wrapper around BigInteger.
> Unfortunately, BigDecimal is still slower than BigInteger because it has
> to scale and round.
> I don't see a way to fix this without breaking the
> BigDecimal=BigInteger*10^n paradigm, but it could be done by introducing
> something like a BigFloat class that wraps a BigInteger such that
> BigFloat=BigInteger*2^n. I would expect the code to be less complex than
> BigDecimal because the only places it would have to deal with powers of
> ten would be conversion from and to String or BigDecimal. It would also
> be faster than BigDecimal for the same reason, but the downside is that
> it wouldn't accurately represent decimal fractions (just like float and double).
> Is this something that would be beneficial in the real world?
> I also did a little experiment to see how long a computation would take
> using BigDecimals vs the same computation using fixed-point BigInteger
> arithmetic. I wrote two programs that calculate pi to a million digits.
> The BigInteger version took 3 minutes; the BigDecimal version took 28
> minutes (both single-threaded).
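The fixed-point BigInteger style Tim benchmarked can be sketched as follows. The names (FixedPointDemo, SHIFT, fromLong, mul, div) and the 64-bit precision are illustrative assumptions, not his code; the point is that rescaling is a binary shift rather than a decimal divide-and-round:

```java
import java.math.BigInteger;

// Fixed-point arithmetic: an integer x represents the real number x / 2^SHIFT.
// Multiplication needs one BigInteger multiply plus one shift, whereas
// BigDecimal must divide by a power of ten and round to restore its scale.
public class FixedPointDemo {
    static final int SHIFT = 64; // fractional precision in bits (arbitrary here)

    static BigInteger fromLong(long n) {
        return BigInteger.valueOf(n).shiftLeft(SHIFT);
    }

    static BigInteger mul(BigInteger a, BigInteger b) {
        return a.multiply(b).shiftRight(SHIFT); // rescale with a cheap shift
    }

    static BigInteger div(BigInteger a, BigInteger b) {
        return a.shiftLeft(SHIFT).divide(b);    // pre-scale, then one division
    }

    public static void main(String[] args) {
        BigInteger half = div(fromLong(1), fromLong(2));  // 0.5
        BigInteger quarter = mul(half, half);             // 0.25
        System.out.println(quarter.equals(fromLong(1).shiftRight(2))); // true
    }
}
```

A pi computation like the one described would run its series entirely in this representation and convert to decimal digits only once, at the end.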
More information about the core-libs-dev mailing list