Embed VLC video output into a Scene Graph

David DeHaven david.dehaven at oracle.com
Tue Jul 17 10:20:00 PDT 2012

> I'm trying to embed a VLC player in a JavaFX 2.2 application.
> For that I'm using the libvlc.dll library's API.
> VLC provides two ways to render video:
> 1. Call an API function that takes an HWND: libvlc_media_player_set_hwnd(libvlc_media_player_t *p_mi, void *drawable) sets a Win32/Win64 window handle (HWND) where the media player should render its video output.
> 2. Register a callback that gets called every time a frame should be displayed: libvlc_video_display_cb(void *opaque, void *picture) is the callback prototype for displaying a picture.
> With the HWND approach, I managed to extract an HWND pointer for the Scene using the Glass framework, but the video renders over the whole scene and no other nodes are visible.
> With the callback approach, using the new javafx.scene.canvas.Canvas node in JavaFX 2.2, I was able to push pixels to the canvas, but this turns out to be very inefficient in terms of memory and thread usage.
> I would love a solution like AWT + JNA: there, the HWND can be extracted from a java.awt.Canvas component (using JNA) and passed to the libvlc API function, and that canvas can be placed anywhere in the frame without occupying the entire window.
> Hard Guidelines:
> 1. Integrating AWT/Swing with FX application is not an option, I want it to be a pure JavaFX application.
> 2. Using JavaFX Media API for playback is not an option, since I have a requirement to modify the media player's source code.
> 3. The video must not occupy the entire window, allowing other components to be visible.
> A possible solution is to create my own node or control that wraps a Glass View with an HWND, but I don't know how to bind it and make it work with the other Glass components.

If you start digging into private APIs, you're dooming your app to crash and burn at the next update. That's especially true for 3.0, where a lot of under-the-hood changes are expected.

The Canvas approach is interesting, but wasteful, as it allocates far more resources than necessary.

If you get a raster from VLC, then just create an image from the raster. That's what Jasper did for JavaOne last year in the video preview portion of his Microsoft Kinect demo: he created a new image for each frame. That sounds wasteful, I know, but it's not really much different from how MediaView works internally. When you create an image, it gets uploaded to a D3D/GL texture if you're using hardware rendering, so the actual rendering doesn't suffer any performance penalty.

I think you may need to look at WritableImage for this, as I'm not sure the method used for JavaOne is public or even still exists. WritableImage lets you change the image contents on the fly, so it should give you what you need without all the overhead of a Canvas node.
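To make the WritableImage route concrete, here is a minimal sketch (my own illustration, not code from this thread). It assumes libvlc has been configured to hand the display callback 32-bit BGRA frames (e.g. via libvlc_video_set_format); the class and method names (VideoSurface, renderFrame, frameBufferSize) are hypothetical. The JavaFX calls shown (WritableImage, PixelWriter.setPixels, Platform.runLater) are standard public API as of 2.2:

```java
import java.nio.ByteBuffer;

import javafx.application.Platform;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelWriter;
import javafx.scene.image.WritableImage;

public class VideoSurface {

    private final int width;
    private final int height;
    private final WritableImage image;
    private final PixelWriter writer;

    public VideoSurface(int width, int height) {
        this.width = width;
        this.height = height;
        this.image = new WritableImage(width, height);
        this.writer = image.getPixelWriter();
    }

    /** Bytes per frame for 32-bit BGRA (4 bytes per pixel). */
    public static int frameBufferSize(int width, int height) {
        return width * height * 4;
    }

    /** An ordinary node: place it anywhere in the scene graph,
        so the video does not occupy the whole window. */
    public ImageView createView() {
        return new ImageView(image);
    }

    /** Call from the libvlc display callback with a BGRA frame.
        Note: libvlc recycles its picture buffers, so in practice
        you should copy the buffer before posting it here.
        Scene graph access must happen on the FX application thread,
        hence Platform.runLater. */
    public void renderFrame(final ByteBuffer bgraFrame) {
        Platform.runLater(new Runnable() {
            @Override public void run() {
                writer.setPixels(0, 0, width, height,
                        PixelFormat.getByteBgraPreInstance(),
                        bgraFrame, width * 4); // scanline stride in bytes
            }
        });
    }
}
```

The ImageView returned by createView() behaves like any other node, which should satisfy the requirement that other components remain visible alongside the video.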

What I'm more interested in is why you need to use VLC at all. What's missing from the media stack? File feature requests in JIRA and we can investigate including them in future releases...

