Infrastructure for Virtual Reality Viewers

The emphasis of this design stage was the integration of the market-leading HMDs (head-mounted displays) into virtual reality content production tools.

This was accomplished through the development of a system that supports the HTC Vive and the Oculus Rift in laboratory applications, as well as in external free-software systems.

We have designed and implemented the "HMD Bridge SDK". Here is the project's description: "This library is intended to allow for testing different HMD SDKs from C, C++ and Python applications. The so-called bridge is a high-level wrapper that abstracts the implementation of each of the supported SDKs."

The HMD Bridge SDK enabled us to natively support both of the aforementioned devices (Vive and Oculus) in a unified way. Once a program incorporates the library, it no longer needs to treat each device (and future devices) individually.
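
A minimal sketch of that unified-wrapper idea is shown below. All class and method names are illustrative assumptions, not the library's actual API; the point is that the application asks the bridge for "an HMD" and never branches on the concrete device:

```python
# A minimal sketch of the unified-wrapper idea behind the HMD Bridge SDK.
# All class and method names here are illustrative assumptions, not the
# library's actual API.

class HMDBackend:
    """Interface each supported SDK (OpenVR, OVR, ...) implements once."""
    name = "base"

    def init(self):
        """Connect to the device; return True on success."""
        return False


class OpenVRBackend(HMDBackend):  # would wrap OpenVR (HTC Vive)
    name = "openvr"
    def init(self):
        return True


class OVRBackend(HMDBackend):  # would wrap OVR (Oculus)
    name = "ovr"
    def init(self):
        return True


def open_hmd():
    """Return the first backend that initializes, so the application
    never needs to branch on the concrete device (or future devices)."""
    for backend_cls in (OpenVRBackend, OVRBackend):
        hmd = backend_cls()
        if hmd.init():
            return hmd
    raise RuntimeError("no supported HMD found")


print("using backend:", open_hmd().name)
```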

Until then, independent applications had to rely on the individual SDKs (OpenVR for the Vive and OVR for the Oculus), on open-source solutions that do not yet match them (OpenHMD and OSVR were analyzed during the research), or on more complex tools such as Unity or Unreal Engine.

It is noteworthy that solutions such as Unity or Unreal Engine bring a number of advantages to application development and are robust enough to meet most of the laboratory's needs. Still, a solution like the HMD Bridge SDK allows for independent prototypes and a better understanding of the new technologies on the market, even if only during the prototyping stage.

The project code is available on GitHub (https://github.com/dfelinto/hmd_sdk_bridge), as well as in the Visgraf repository.

Blender and Oculus

The buzz about VR (Virtual Reality) is far from over. And just as with regular stereo 3D movie pipelines, we want to work in VR as early in the pipeline as possible.

That doesn’t necessarily mean sculpting, animating, and grease-penciling all in VR. But we should at least have a quick way to preview our work in VR – before, after, and during every single one of those steps.

At some point we may want special interactions when exploring the scene in VR, but for the scope of this post I will stick to exploring a way to toggle in and out of VR mode.

That raises the question: how do we “see” in VR? Basically we need two “renders” of the scene, each one with its own projection matrix and a modelview matrix that reflects the head tracking and the in-Blender camera transformations.

These should be updated as often as possible (e.g., at 75Hz), and must be updated even if nothing changes in your Blender scene (since the head can always move around). Just to be clear, by “render” here I mean the same real-time render we see in the viewport.
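
As a rough illustration of what those two renders need, the per-eye matrices could be composed as in the sketch below. The IPD and projection numbers are placeholders; a real HMD SDK reports the exact eye offsets and fields of view itself:

```python
# Sketch of the per-eye matrices behind the two renders described above.
# The IPD and projection values are illustrative; a real HMD SDK reports
# the exact eye offsets and field of view itself.
import numpy as np

IPD = 0.064  # interpupillary distance in meters (illustrative)

def eye_modelview(head_modelview, eye):
    """Shift the head-tracked modelview sideways by half the IPD
    (sign conventions vary between SDKs)."""
    offset = np.eye(4)
    offset[0, 3] = IPD / 2 if eye == "left" else -IPD / 2
    return offset @ head_modelview

def eye_projection(fov_y=1.9, aspect=0.9, near=0.05, far=100.0):
    """Standard OpenGL-style perspective projection for one eye."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Every frame (~75 times per second), even when the scene is static:
head = np.eye(4)  # would combine head tracking and the Blender camera
for eye in ("left", "right"):
    mv, proj = eye_modelview(head, eye), eye_projection()
    # ... draw the viewport with (proj, mv) into that eye's buffer ...
```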

There are different ways of accomplishing this, but I would like to see an addon approach, to make it as flexible as possible to adapt to new upcoming HMDs.

At this very moment, some of this is doable with the “Virtual Reality Viewport Addon”. I’m using a third-party Python wrapper of the Oculus SDK (generated partly with ctypesgen) that uses ctypes to access the library directly. Some details of this approach are discussed below.

Fig - Virtual Reality Viewport Addon in action – sample scene from Creature Factory 2 by Andy Goralczyk

Not supporting Direct Mode (nor the latest Direct Driver Mode) seems to be a major drawback of this approach (Extended Mode is deprecated in the latest SDKs). The positive points are: cross-platform support, non-intrusiveness, and (potential) HMD agnosticism.
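
To make the ctypes technique concrete, here is the general pattern, shown against the C math library as a runnable stand-in (the Oculus library itself is proprietary); the addon applies the same pattern to the OVR functions:

```python
# General pattern of calling into a native library with ctypes, as the
# Virtual Reality Viewport Addon does with the Oculus SDK. libm is used
# here only as a runnable stand-in (POSIX systems).
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declaring the C signatures is essential, otherwise ctypes would
# mis-marshal doubles and pointers on most platforms.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed directly by the C library
```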

The opposite approach would be to integrate the Oculus SDK directly into Blender. We could create the FBOs, gather the tracking data from the Oculus, force a drawing update every frame (~75Hz), and send the frame to the Oculus via Direct Mode. The downsides of this solution mirror the strengths of the previous one: it is intrusive, tied to a single vendor’s SDK, and limited to the platforms that SDK supports.

All things considered, this is not a bad solution, and it may be the easiest one to implement. In fact, once we go this route, the same solution could be implemented in the Blender Game Engine.

That said, I would like to see a compromise: a solution that could eventually be expanded to different HMDs and other OSs (operating systems). Thus the ideal scenario would be to implement it as an addon. I like the idea of using ctypes with the Oculus SDK, but we would still need the following changes in Blender: an extension of the OpenGL Wrapper, a way to render the main viewport off-screen from Python, and a matching change in the BGE (Blender Game Engine).

The OpenGL Wrapper change should be straightforward – I’ve done this a few times myself. The main off-screen rendering change may be self-contained enough to be incorporated in Blender without much hassle. The function should receive a projection matrix and a modelview matrix as input, as well as the resolution and the FBO bind id.
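
A sketch of what that function could look like from the Python side follows; the name and module placement are hypothetical, mirroring only the inputs listed above:

```python
# Hypothetical sketch of the proposed off-screen rendering hook.
# The function name is an assumption; the point is its inputs:
# a projection matrix, a modelview matrix, the resolution, and the FBO id.

def draw_viewport_offscreen(projection_matrix, modelview_matrix,
                            width, height, fbo_bind_id):
    """Would render the current 3D viewport with the given matrices into
    the framebuffer object bound at fbo_bind_id (implemented in C)."""
    raise NotImplementedError("placeholder for the proposed C function")

# The addon would then call it twice per frame, once per eye:
#   draw_viewport_offscreen(proj_left, mv_left, eye_w, eye_h, fbo_left)
#   draw_viewport_offscreen(proj_right, mv_right, eye_w, eye_h, fbo_right)
```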

The BGE change would be a nice addition and would illustrate the strength of this approach. Given that the heavy lifting is still done in C, Python shouldn’t affect performance much, and this could work in a game environment as well. The other advantage is that multiple versions of the SDK can be kept, in order to keep supporting OSX and Linux until a new cross-platform SDK is shipped.
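
Selecting among those side-by-side SDK versions could be as simple as the sketch below; the version strings and module layout are illustrative assumptions:

```python
# Sketch of keeping multiple SDK versions side by side, so OSX and
# Linux remain supported while the newest (Windows-only) SDK is used
# where available. Version strings here are illustrative.
import sys

def pick_oculus_backend():
    if sys.platform == "win32":
        return "ovr_0.7"  # newer SDK: Direct Driver Mode, Windows-only
    return "ovr_0.5"      # older SDK that still shipped OSX/Linux builds

print("loading backend:", pick_oculus_backend())
```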

Dalai Felinto, Sept. 2015

Immersive Storyboarding

Part of the challenge of stereo movie making is to work in 3D as early as possible in your pipeline. This is the main reason the Multi-View implementation ranges from the 3D viewport all the way to the sequencer.

VR (Virtual Reality) movie making is no different. Even more so if we consider the uniqueness of the immersive experience.

So what if… What if we could preview our work in VR from the very first stroke of the storyboard?

In the video we show the Oculus Addon for Blender, developed by Dalai Felinto as part of our research in virtual reality.

Notice that the experience shows work done by Daniel “Pepeland” Lara in his demo file.

The applications of this addon are various, but it mainly focuses on supporting HMDs (head-mounted displays) in the Blender viewport.

At the moment support is restricted to Oculus (Rift and DK2), and it works best on Windows, since the fastest direct mode is only supported by Oculus’s latest (Windows-only) SDK.

Dalai Felinto, Nov 2015

ProtoViz

In addition, the plugin developed for Blender has been used in various projects. One highlight in particular is Oculus Story Studio. Oculus is one of the market leaders, acquired by the giant Facebook, and the mission of its studio is to make interactive films in virtual reality. For one of their ongoing projects, they needed storyboarding tools suited to this new medium. Thanks to the features of the program and the advances made possible by our plugin, Blender became a pioneering tool for this area, which they named "ProtoViz". We consider this application a recognition of the work of Visgraf and of the relevance of the research applications.

Applications

We advanced in two areas:

The virtual reality plugin for the free Blender 3D program was restructured to use the HMD SDK Bridge. This brought huge visibility to the project, and allowed artists to create content for immersive platforms such as Google Cardboard, Oculus and Vive, as well as simply to use stereo 3D devices to extend the traditional 3D work experience.

The panorama viewer was created for internal research at Visgraf, led by Professor Luiz Velho, with research by Aldo Zang. You can see a more recent iteration of the panorama viewer project in the video below:

Media Convergence

The integration of different projects from the laboratory is a test of new media. With these tools we were able to explore the same environment through different media (virtual reality viewers, a 3D projection table, and a panoramic screen). A video of the result can be found at the link below: