Project Proposal: Trinity

Karthik Ganesan karthik.ganesan at
Tue Nov 15 05:57:58 UTC 2016

Hi Volker,

Thanks for your comments and the relevant questions. We have reviewed 
projects Sumatra and Panama and talked to members who are familiar with 
the projects.

Project Sumatra aimed to translate Java bytecode for execution on GPUs, 
which was an ambitious goal and a challenging task to take up. In this 
project, we instead aim to provide APIs targeting the most common 
analytics operations so that they can be offloaded to accelerators 
transparently. Most of the information needed for offload to the 
accelerator is expected to be conveyed directly by the API semantics, 
thereby avoiding the need for tedious bytecode analysis.

While the Vector API (part of Panama) brings much-wanted abstraction 
for vectors, it is still loop based and is most useful for 
superword-style operations that leverage SIMD units on general-purpose 
cores. The aim of this proposed project is to provide a more abstract 
API (similar to the Streams API) that works directly on streams of 
data and transparently accommodates a wider set of heterogeneous 
accelerators, such as DAX and GPUs, underneath. Initially, the project 
will focus on defining a complete set of APIs, relevant input/output 
formats, and optimized data structures and storage formats that can 
serve as building blocks for high-performance analytics applications 
and frameworks in Java. Simple examples of such operations include 
scan, select, filter, lookup, transcode, merge, and sort. 
Additionally, the project will address supporting functionality such 
as operating system library calls and handling garbage collection 
needs during offload.
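To make the idea concrete, here is a minimal sketch of what such a bulk-operation API might look like. This is purely illustrative: the class and method names (`OffloadSketch`, `select`, `scan`) are hypothetical, not part of any proposed API, and the bodies show only the plain-Java fallback path that an accelerator-aware runtime could replace with a DAX or GPU offload, since each operation is expressed as a single whole-column call.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// Hypothetical sketch of bulk analytics operations expressed as
// single whole-column calls. Because the operation's semantics are
// carried by the call itself, a runtime could dispatch it to an
// accelerator without any bytecode analysis; here we show only the
// plain-Java fallback.
public class OffloadSketch {

    // "select": keep the elements of a column that exceed a threshold.
    static int[] select(int[] column, int threshold) {
        // Fallback: an ordinary Java stream pipeline.
        return IntStream.of(column).filter(v -> v > threshold).toArray();
    }

    // "scan": running (prefix) sum over the column.
    static int[] scan(int[] column) {
        int[] out = new int[column.length];
        int acc = 0;
        for (int i = 0; i < column.length; i++) {
            acc += column[i];
            out[i] = acc;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] col = {3, 8, 1, 9, 4};
        System.out.println(Arrays.toString(select(col, 3))); // [8, 9, 4]
        System.out.println(Arrays.toString(scan(col)));      // [3, 11, 12, 21, 25]
    }
}
```

The key design point is that each call names a complete operation over a whole column of data, rather than an element-at-a-time loop, which is what gives the runtime latitude to choose SIMD, GPU, or DAX execution underneath.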

The artifacts provided by Project Panama, including the code snippets 
(or even the Vector API), along with value types from Project 
Valhalla, will be leveraged wherever applicable in this project. 
Overall, I feel that the goals of this project and the work it 
requires are different from what the Vector API is targeting. Hope 
this answers your questions.

On 11/14/2016 10:49 AM, Volker Simonis wrote:
> Hi Karthik,
> we had project "Sumatra" [1] for this which is inactive since quite some time.
> We also have project "Panama" [2] which, as far as I understand, is
> also looking into auto-parallelization/vectorization. See for example
> the "Vectors for Java" presentation from JavaOne which describes some
> very similar ideas to yours.
> What justifies the creation of yet another project instead of doing
> this work in the context of the existing projects?
> What in your approach is different to the one described in [3] which
> is already, at least partially, implemented in project Panama?
> Thanks,
> Volker
> [1]
> [2]
> [3]
> On Mon, Nov 14, 2016 at 5:23 PM, Karthik Ganesan
> <karthik.ganesan at> wrote:
>> Hi,
>> I would like to propose the creation of a new Project: Project Trinity.
>> This Project would explore enhanced execution of bulk aggregate calculations
>> over Streams through offloading calculations to hardware accelerators.
>> Streams allow developers to express calculations such that data parallelism
>> can be efficiently exploited. Such calculations are prime candidates for
>> leveraging enhanced data-oriented instructions on CPUs (such as SIMD
>> instructions) or offloading to hardware accelerators (such as the SPARC Data
>> Accelerator co-processor, further referred to as DAX [1]).
>> To identify a path to improving performance and power efficiency, Project
>> Trinity will explore how libraries like Streams can be enhanced to leverage
>> data processing hardware features to execute Streams more efficiently.
>> Directions for exploration include:
>> - Building a streams-like library optimized for offload to
>> -- hardware accelerators (such as DAX), or
>> -- a GPU, or
>> -- SIMD instructions;
>> - Optimizations in the Graal compiler to automatically transform suitable
>> Streams pipelines, taking advantage of data processing hardware features;
>> - Explorations with Project Valhalla to expand the range of effective
>> acceleration to Streams of value types.
>> Success will be evaluated based upon:
>> (1) speedups and resource efficiency gains achieved for a broad range of
>> representative streams calculations under offload,
>> (2) ease of use of the hardware acceleration capability, and
>> (3) ensuring that there is no time or space overhead for non-accelerated
>> calculations.
>> Can I please request the support of the Core Libraries Group as the
>> Sponsoring Group with myself as the Project Lead.
>> Warm Regards,
>> Karthik Ganesan
>> [1]
