BigDecimal performance

Tim Buktu tbuktu at
Sun Feb 9 10:00:05 UTC 2014

Hi Joe,

> I'm not sure which algorithm you used to compute pi, but I would be
> very surprised by a ~8x speed difference between BigInteger and
> BigDecimal if the same computation was being done on the BigDecimal
> values, meaning the BigIntegers held in the BigDecimals were the same
> as the BigIntegers in the BigInteger computation.

The BigIntegers are not identical because one is rounded to some number
of bits whereas the other is rounded to some number of decimal digits.
They should be the same order of magnitude, though.
It's possible I made a mistake, of course. The two programs are on GitHub:

> A BigDecimal object is a fairly small wrapper around a BigInteger and
> updating the scale/exponent field isn't too much additional work.

It's not updating the exponent that slows BigDecimal down; it's rounding
to a number of decimal digits, because that requires dividing the
underlying BigInteger by a power of 10.
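To make the asymmetry concrete, here is a small sketch (my own example, not from either program under discussion): decimal rounding via setScale divides the unscaled BigInteger by a power of ten, while dropping the same amount of precision from a binary fixed-point value is just a shift.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class RoundingCost {
    public static void main(String[] args) {
        BigInteger big = BigInteger.ONE.shiftLeft(100_000); // about 30,103 decimal digits

        // Decimal rounding: setScale drops digits by dividing the
        // unscaled BigInteger by 10^10000 -- a genuine big division.
        BigDecimal rounded = new BigDecimal(big)
                .setScale(-10_000, RoundingMode.HALF_EVEN);

        // Binary "rounding": discarding low-order bits is a cheap shift.
        BigInteger shifted = big.shiftRight(33_220); // ~10,000 digits' worth of bits

        System.out.println(rounded.scale());      // -10000
        System.out.println(shifted.bitLength());  // 66781
    }
}
```

Both calls discard roughly the same amount of precision, but only the BigDecimal path pays for a division.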


> On 02/07/2014 02:44 AM, Dmitry Nadezhin wrote:
>> I think that a better name is BigBinary.
>> Both BigDecimal and BigBinary are floating-point numbers,
>> with radix=10 and radix=2 respectively.
>> A more general approach is an abstract class or interface Rational
>> which has implementation subclasses.
>> Each nonzero rational number P/Q can be represented as
>> P/Q = p/q * 2^e, where e is an integer, p and q are odd integers,
>> and GCD(p,q) = 1.
>> Then BigBinary is the subclass with q=1.
>> Arithmetic operations on Rationals are implemented by a general
>> algorithm when the arguments are true rationals (q != 1) and by
>> specific algorithms when they are BigBinaries (q=1).
>> This is elaborated here:
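The normalization Dmitry describes can be sketched as follows (class and field names are mine, purely illustrative; zero and sign handling are omitted):

```java
import java.math.BigInteger;

// Sketch: every nonzero rational P/Q normalized to p/q * 2^e with
// p and q odd and GCD(p, q) = 1; the binary floating-point case is q = 1.
public class RationalSketch {
    final BigInteger p, q;
    final int e;

    RationalSketch(BigInteger P, BigInteger Q) {
        // Strip common factors so that GCD(p, q) = 1.
        BigInteger g = P.gcd(Q);
        P = P.divide(g);
        Q = Q.divide(g);
        // Pull all factors of two out of numerator and denominator.
        int ep = P.getLowestSetBit();
        int eq = Q.getLowestSetBit();
        p = P.shiftRight(ep);
        q = Q.shiftRight(eq);
        e = ep - eq;
    }

    // The "BigBinary" case: a true binary floating-point number.
    boolean isBinary() { return q.equals(BigInteger.ONE); }

    public static void main(String[] args) {
        // 12/8 = 3/2 = 3/1 * 2^-1, so this value is binary.
        RationalSketch r =
            new RationalSketch(BigInteger.valueOf(12), BigInteger.valueOf(8));
        System.out.println(r.p + "/" + r.q + " * 2^" + r.e); // 3/1 * 2^-1
    }
}
```

An arithmetic operation would then dispatch on isBinary(): shift-based fast paths when both operands have q=1, general rational arithmetic otherwise.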
>> On Thu, Feb 6, 2014 at 9:11 PM, Tim Buktu <tbuktu at> wrote:
>>> Hi,
>>> now that BigInteger deals better with large numbers, it would be nice
>>> for that to translate into an improvement in BigDecimal performance
>>> because BigDecimal is essentially a wrapper around BigInteger.
>>> Unfortunately, BigDecimal is still slower than BigInteger because
>>> it has to scale and round.
>>> I don't see a way to fix this without breaking the
>>> BigDecimal=BigInteger*10^n paradigm, but it could be done by
>>> introducing something like a BigFloat class that wraps a BigInteger
>>> such that BigFloat=BigInteger*2^n. I would expect the code to be
>>> less complex than BigDecimal because the only places it would have
>>> to deal with powers of ten would be conversion from and to String
>>> or BigDecimal. It would also be faster than BigDecimal for the same
>>> reason, but the downside is that it wouldn't accurately represent
>>> decimal fractions (just like float and double).
>>> Is this something that would be beneficial in the real world?
>>> I also did a little experiment to see how long a computation would take
>>> using BigDecimals vs the same computation using fixed-point BigInteger
>>> arithmetic. I wrote two programs that calculate pi to a million digits.
>>> The BigInteger version took 3 minutes; the BigDecimal version took 28
>>> minutes (both single-threaded).
>>> Tim
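For what it's worth, the BigFloat idea quoted above could look roughly like this (a sketch with hypothetical names; half-up rounding is an arbitrary choice, and the carry case is left unnormalized):

```java
import java.math.BigInteger;

// Sketch of BigFloat = unscaled * 2^exp. Rounding to a bit count is a
// shift, so powers of ten never appear outside of String/BigDecimal
// conversion (not shown here).
public class BigFloatSketch {
    final BigInteger unscaled;
    final int exp;

    BigFloatSketch(BigInteger unscaled, int exp) {
        this.unscaled = unscaled;
        this.exp = exp;
    }

    // Round to at most 'bits' significant bits -- a cheap shift,
    // unlike BigDecimal's divide-by-10^k rounding.
    BigFloatSketch roundToBits(int bits) {
        int drop = unscaled.bitLength() - bits;
        if (drop <= 0) return this;
        // Round half up by adding half of the dropped ulp first.
        // Note: a carry can overflow into an extra bit (e.g. 15 -> 8);
        // a real implementation would renormalize afterwards.
        BigInteger u = unscaled
                .add(BigInteger.ONE.shiftLeft(drop - 1))
                .shiftRight(drop);
        return new BigFloatSketch(u, exp + drop);
    }

    BigFloatSketch multiply(BigFloatSketch other, int bits) {
        return new BigFloatSketch(unscaled.multiply(other.unscaled),
                                  exp + other.exp).roundToBits(bits);
    }

    public static void main(String[] args) {
        // 13 * 2^0 rounded to 3 significant bits: 13 = 0b1101 -> 0b111 * 2^1
        BigFloatSketch f =
            new BigFloatSketch(BigInteger.valueOf(13), 0).roundToBits(3);
        System.out.println(f.unscaled + " * 2^" + f.exp); // 7 * 2^1
    }
}
```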

More information about the core-libs-dev mailing list