Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
mick.jordan at oracle.com
Thu Jan 7 16:40:49 UTC 2016
On 1/7/16 8:19 AM, christian.humer at gmail.com wrote:
> As you wish. There is a nice R OO introduction. The draft versions
> take a bit. I am myself not a very experienced R developer.
> http://adv-r.had.co.nz/OO-essentials.html
It's amazing to me how we are still using ancient benchmarks like Towers
and Richards in 2016! My first exposure to Towers was in 1974 as a grad
student at Cambridge. Later that decade, I ported Richards, a
simulation of the Tripos operating system by Martin Richards (the
inventor of BCPL), from BCPL to Modula-2 and other languages. As I
recall, Richards essentially tests how fast you can do a virtual
function call.
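To make that claim concrete, here is a minimal sketch (names and structure are mine, not the actual Richards source) of the kind of operation such a benchmark hammers: a call through a base type whose receiver varies at runtime, which the VM must handle with virtual dispatch, inline caches, or devirtualization.

```java
// Hypothetical sketch of a Richards-style hot loop: repeated virtual
// calls through an abstract base class with varying receiver types.
abstract class Task {
    abstract int run(int packet);
}

final class IdleTask extends Task {
    int run(int packet) { return packet + 1; }
}

final class WorkerTask extends Task {
    int run(int packet) { return packet * 2; }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Task[] tasks = { new IdleTask(), new WorkerTask() };
        int state = 1;
        // Each iteration is a virtual call whose target alternates,
        // so the JIT cannot trivially bind it to one method.
        for (int i = 0; i < 10; i++) {
            state = tasks[i % tasks.length].run(state);
        }
        System.out.println(state); // prints 94
    }
}
```

A fast implementation of such a benchmark says mostly that dispatch is cheap, not that idiomatic code in the language is fast.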
That aside, I think the main problem we have is captured by Christian's
remark, "I am myself not a very experienced R developer". I would agree
with that sentiment. It is a little worrying that our group is
implementing FastR absent any serious experience with R (unlike the
core group). [OTOH, Java was originally implemented by a bunch of
people with no prior Java experience.]
Ironically, given all the focus on multi-language integration, it is
perhaps odd that we would even consider writing an algorithm like
Towers or Richards in R. Aren't we supposed to be using the "right"
language for a given part of the overall problem? R was most definitely
not designed with such algorithms as its primary focus, but, hey, it's
Turing complete, so it's possible. The question is whether the result
has any useful validity for real apps. That goes for the shootout
benchmarks as well, IMHO.
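For reference, the algorithmic kernel at issue is tiny. A sketch of the classic Towers of Hanoi recursion, written here in Java purely for illustration (the benchmark suites use their own per-language ports, so the names below are mine):

```java
// Towers of Hanoi: move n disks from one peg to another using a
// spare peg; returns the number of single-disk moves (2^n - 1).
public class Towers {
    static long solve(int n, char from, char to, char via) {
        if (n == 0) return 0;
        long moves = solve(n - 1, from, via, to); // clear the top n-1 disks
        moves += 1;                               // move disk n: from -> to
        moves += solve(n - 1, via, to, from);     // restack the n-1 disks
        return moves;
    }

    public static void main(String[] args) {
        System.out.println(solve(10, 'A', 'C', 'B')); // prints 1023
    }
}
```

Any language with recursion can express this in a dozen lines, which is exactly why a fast Towers score says little about performance on real, idiomatic R workloads.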
It's true that there is a serious lack of representative benchmarks for
R, which is why we are concentrating mostly on real apps written by R
developers, when we can find them. Since the real apps are rather
large, extracting parts of them into micro-benchmarks makes a lot of
sense, both for regression testing and as vehicles for analysis with,
say, IGV, and we continue to do that.
I haven't looked closely at these relatively new benchmarks, but I
believe they are likely to be more representative of real code. I doubt
we could write such benchmarks ourselves, given our lack of background
experience.
More information about the graal-dev mailing list