
NeRF Unity for Oculus Quest 2 is a project based on the code of kwea123/nerf_Unity, which uses a volumetric rendering technique to visualize 3D scenes generated with NeRF inside Unity. This version adds support for the Oculus Quest 2 virtual reality headset and new tools for importing volume data as PNG sequences. The project includes several scenes that demonstrate different ways of using this rendering technique, such as rendering 3D models and volumes. It requires Unity 2022.2.1f1, plus additional assets that must be downloaded and imported into Unity. It also includes two custom shaders: an older one (VolumeShad2) and a new one (VolumeColorRenderingShader) for volume rendering.

Can you imagine creating realistic 3D objects from 2D photos? That’s what NeRF does: it is a neural rendering technique that uses neural networks to represent and render 3D scenes from a collection of 2D images. NeRF can fill in gaps and correct errors made while capturing the photos.

In this article, I’ll tell you how the idea for this project came about, what NeRF is and how it differs from photogrammetry, and what you can do with this project if you’re interested in trying it out.

Neural Radiance Fields (NeRF) is a neural rendering technique that uses neural networks to represent and render 3D scenes from only a limited number of 2D images of those scenes. The neural network learns to represent the scene as a function that maps a 3D position and viewing direction to the color and density (opacity) at that point in space. Using this function, the network can render new images of the scene from any angle or distance.
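
In the original NeRF paper, this function is written as a neural field F_Θ, and new views are rendered by integrating it along camera rays. The standard formulation, summarized here for reference, is:

```latex
F_\Theta : (\mathbf{x}, \mathbf{d}) \longmapsto (\mathbf{c}, \sigma)

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds \right)
```

Here r(t) = o + t·d is a camera ray, σ is the volume density, c is the view-dependent color, and T(t) is the accumulated transmittance along the ray. In practice the integral is approximated with a finite number of samples per ray.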

The main difference between NeRF and photogrammetry is that the former uses neural networks to generate 3D scenes, while the latter uses geometric algorithms to reconstruct scenes from photos. The advantage of NeRF is that it can produce more realistic and detailed results with less data, and with today’s powerful graphics cards it can generate a scene very quickly.

I became interested in NeRF when I saw the impressive results that could be obtained with this technique, such as generating realistic 3D objects from 2D photos. I thought it was a very innovative and powerful way to create 3D content without the need for special cameras or scanners.

However, I realized that there was no easy and accessible way to use NeRF in virtual reality (VR). That’s why I decided to create the NeRF Unity for Oculus Quest 2 project, a fork of kwea123/nerf_Unity that adapts the original code to work with the Oculus Quest 2 VR headset. The Oculus Quest 2 is one of the most popular and affordable VR headsets on the market, offering high graphics quality and a wireless experience.

The project includes a new shader and tools for importing 3D volumetric data from PNG sequences. The shader is based on volumetric rendering using raymarching, a technique that simulates the behavior of light passing through a medium with varying density and optical properties. Instead of calculating direct intersections between light rays and 3D objects, raymarching advances each ray in small steps and samples the volume (or, in surface raymarching, a distance function) at each step. This makes it possible to render complex, realistic scenes with effects such as soft shadows, refraction, and transparency.
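
To make that concrete, here is a minimal sketch of a front-to-back volume raymarch through a 3D texture in HLSL. The property names _DataTex, _MinVal, and _MaxVal mirror the ones used in the project, but the step count, ray setup, and transfer function are simplified assumptions rather than the project’s actual code:

```hlsl
// Minimal sketch of a front-to-back volume raymarch through a 3D texture.
// _DataTex, _MinVal, and _MaxVal mirror the project's property names;
// step count, ray setup, and transfer function are simplified assumptions.
#define NUM_STEPS 128

sampler3D _DataTex;
float _MinVal;
float _MaxVal;

float4 Raymarch(float3 rayStart, float3 rayDir, float rayLength)
{
    float stepSize = rayLength / NUM_STEPS;
    float4 accum = float4(0, 0, 0, 0);          // accumulated color and alpha

    [loop]
    for (int i = 0; i < NUM_STEPS; i++)
    {
        // Current sample position in normalized texture space [0, 1]^3.
        float3 pos = rayStart + rayDir * (stepSize * i);
        float density = tex3Dlod(_DataTex, float4(pos, 0)).r;

        // Skip samples outside the user-selected density window.
        if (density < _MinVal || density > _MaxVal)
            continue;

        // Simple grayscale transfer function: density drives color and opacity.
        float4 src = float4(density, density, density, density);

        // Front-to-back alpha compositing.
        accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
        accum.a   += (1.0 - accum.a) * src.a;

        // Early exit once the ray is almost fully opaque.
        if (accum.a >= 0.99)
            break;
    }
    return accum;
}
```

Front-to-back compositing lets the loop stop early once the accumulated alpha is close to 1, which is a common optimization in volume raymarchers.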

The most interesting part of the project is the shader I created, which combines elements from the unity-volume-rendering project’s shader and kwea123/nerf_Unity. The shader uses a 3D texture (_DataTex) and raymarches through it to produce the volumetric rendering. It exposes several adjustable properties to customize the result, such as minimum and maximum values (_MinVal, _MaxVal), transition and alpha factor (_alphaTransition, _alphaFactor), depth factor (_deptFactor), lighting and light factor (_lighting, _lightFactor), and noise and noise factor (_Noise, _noiseFactor).
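
For reference, a ShaderLab Properties block exposing tunables like these would look roughly as follows. This is a sketch built from the property names above; the types and default values are assumptions, not the project’s actual declarations:

```shaderlab
Shader "Custom/VolumeColorRenderingSketch"
{
    // Sketch of a Properties block for the tunables described above.
    // Types and default values are assumptions, not the project's actual code.
    Properties
    {
        _DataTex ("Data Texture (3D)", 3D) = "" {}
        _MinVal ("Min Value", Range(0.0, 1.0)) = 0.0
        _MaxVal ("Max Value", Range(0.0, 1.0)) = 1.0
        _alphaTransition ("Alpha Transition", Range(0.0, 1.0)) = 0.5
        _alphaFactor ("Alpha Factor", Range(0.0, 5.0)) = 1.0
        _deptFactor ("Depth Factor", Range(0.0, 5.0)) = 1.0
        _lighting ("Lighting", Range(0.0, 1.0)) = 0.0
        _lightFactor ("Light Factor", Range(0.0, 5.0)) = 1.0
        _Noise ("Noise Texture", 2D) = "white" {}
        _noiseFactor ("Noise Factor", Range(0.0, 1.0)) = 0.0
    }
    // SubShader, Pass, and the raymarching code are omitted here;
    // their structure is outlined in the list below.
}
```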

The code is divided into several parts:

  • The Properties section defines the variables that can be modified from outside the code, such as the number of iterations, the 3D data texture (_DataTex), minimum and maximum values, ranges for x, y, and z, and other parameters.
  • The SubShader section defines the tags for rendering the transparent object and sets the LOD (level of detail), Cull (face culling), ZTest (depth testing), ZWrite (depth writing), and Blend (color blending).
  • The Pass section defines the vertex and fragment shaders, including input and output structures, as well as uniform variables for 3D and 2D textures, lighting, number of iterations, and noise.
  • The RayInfo section defines the structure that stores the information of the ray passing through the texture. It also defines the intersectAABB function (which calculates the intersections of the ray with an axis-aligned box; a sketch of this kind of slab test appears after this list), getRayBack2Front (which returns the direction and start of the ray for a given vertex), and getViewRayDir (which calculates the direction of the ray from the camera).
  • The main part consists of the vertex shader and fragment shader functions. The vertex shader function simply returns the position of the vertex on the screen and its coordinate in the 3D texture. The fragment shader function performs the raymarch through the volumetric texture, sampling the texture at each step and accumulating the color and alpha. It also applies lighting and noise effects according to the shader properties. Finally, it returns the resulting color.
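
To make the ray setup concrete, here is a sketch of the slab-method ray/AABB intersection that a helper like intersectAABB typically performs; the exact signature used in the project may differ:

```hlsl
// Sketch of a slab-method ray vs. axis-aligned-box intersection, returning the
// near and far hit distances along the ray. The project's intersectAABB likely
// follows the same idea; the signature here is an assumption.
float2 IntersectAABB(float3 rayOrigin, float3 rayDir, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rayDir;
    float3 t0 = (boxMin - rayOrigin) * invDir;
    float3 t1 = (boxMax - rayOrigin) * invDir;
    float3 tSmall = min(t0, t1);
    float3 tBig   = max(t0, t1);
    float tNear = max(max(tSmall.x, tSmall.y), tSmall.z);   // latest entry across the three slabs
    float tFar  = min(min(tBig.x, tBig.y), tBig.z);         // earliest exit across the three slabs
    return float2(tNear, tFar);                             // no intersection when tNear > tFar
}
```

The fragment shader can then clamp the raymarch to the [tNear, tFar] interval so that samples are only taken inside the volume’s bounding box.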

You can view the project on my GitHub.