Project Proposal: Trinity

Volker Simonis volker.simonis at
Mon Nov 14 16:49:45 UTC 2016

Hi Karthik,

We had Project "Sumatra" [1] for this, but it has been inactive for quite some time.
We also have Project "Panama" [2] which, as far as I understand, is
also looking into auto-parallelization/vectorization. See, for example,
the "Vectors for Java" presentation from JavaOne, which describes ideas
very similar to yours.

What justifies the creation of yet another project instead of doing
this work in the context of the existing projects?
How does your approach differ from the one described in [3], which is
already, at least partially, implemented in Project Panama?



On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan
<karthik.ganesan at> wrote:
> Hi,
> I would like to propose the creation of a new Project: Project Trinity.
> This Project would explore enhanced execution of bulk aggregate calculations
> over Streams by offloading them to hardware accelerators.
> Streams allow developers to express calculations such that data parallelism
> can be efficiently exploited. Such calculations are prime candidates for
> leveraging enhanced data-oriented instructions on CPUs (such as SIMD
> instructions) or offloading to hardware accelerators (such as the SPARC Data
> Accelerator co-processor, further referred to as DAX [1]).
> To identify a path to improved performance and power efficiency, Project
> Trinity will explore how libraries like Streams can be enhanced to leverage
> data-processing hardware features so that Streams execute more efficiently.
> Directions for exploration include:
> - Building a streams-like library optimized for offload to
> -- hardware accelerators (such as DAX), or
> -- a GPU, or
> -- SIMD instructions;
> - Optimizations in the Graal compiler to automatically transform suitable
> Streams pipelines, taking advantage of data processing hardware features;
> - Explorations with Project Valhalla to expand the range of effective
> acceleration to Streams of value types.
> Success will be evaluated based upon:
> (1) speedups and resource efficiency gains achieved for a broad range of
> representative streams calculations under offload,
> (2) ease of use of the hardware acceleration capability, and
> (3) ensuring that there is no time or space overhead for non-accelerated
> calculations.
> May I please request the support of the Core Libraries Group as the
> Sponsoring Group, with myself as the Project Lead?
> Warm Regards,
> Karthik Ganesan
> [1]
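For readers unfamiliar with the kind of workload the proposal targets, here is a minimal sketch (not from the proposal itself; the class and method names are my own) of a bulk aggregate calculation over a Stream. Because each element is processed independently, such map/filter/reduce pipelines are the "prime candidates" Karthik mentions for SIMD execution or accelerator offload:

```java
import java.util.stream.IntStream;

public class StreamOffloadCandidate {

    // Sum of squares of elements greater than 50. Every element is
    // processed independently of the others, so the filter/map/sum
    // pipeline is data-parallel: a JIT (or an offload-aware runtime)
    // could in principle lower it to SIMD instructions or ship it to
    // an accelerator without changing its observable result.
    static long sumOfLargeSquares(int[] data) {
        return IntStream.of(data)
                        .parallel()                     // exploit data parallelism
                        .filter(x -> x > 50)            // predicate per element
                        .mapToLong(x -> (long) x * x)   // independent per-element map
                        .sum();                         // associative reduction
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i % 100;
        }
        System.out.println(sumOfLargeSquares(data));
    }
}
```

Today this runs on the fork/join common pool; the proposal's question is whether such pipelines can instead be recognized and handed to hardware like DAX or a GPU transparently.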
