Accelerating Neural Radiance Fields with Parallel C++ Implementation

A project proposal for 15-418/618 - Parallel Computer Architecture and Programming

Team Members

Kevin Huang (kzh)
Changyang Wu (changyaw)

Summary

We propose to implement a parallel version of Neural Radiance Fields (NeRF) in C++ to accelerate scene rendering. The project will explore both multi-core CPU and GPU parallelization strategies—using frameworks such as OpenMP and CUDA—to optimize the computationally intensive neural network inference and volumetric integration inherent to NeRF. This work will not only yield a high-performance rendering tool but also provide insight into effective mapping of irregular workloads to modern parallel architectures.

Background

Neural Radiance Fields (NeRF) have emerged as a powerful representation for photorealistic scene synthesis. At its core, the method uses a deep neural network to predict color and density at continuous 3D locations, integrating these predictions along camera rays to render images. While highly expressive, the method is notoriously computationally expensive, requiring hundreds of network evaluations per pixel.

Our approach leverages the inherent parallelism in NeRF's rendering process: every pixel's ray can be traced independently of all others, and the many network evaluations along each ray can be batched, making the workload a natural fit for both multi-core CPUs and GPUs.

The Challenge

The primary challenges of this project include:

  • The sheer volume of neural network evaluations required per frame, which dominates rendering time
  • Irregular per-ray workloads (rays terminate at different depths), which complicates load balancing across cores and GPU threads
  • Minimizing CPU–GPU data transfer and kernel launch overhead in the CUDA version
  • Validating that the parallel implementations remain numerically consistent with the serial baseline

Resources

Hardware:

  • A multi-core CPU machine for developing and benchmarking the OpenMP implementation
  • A CUDA-capable NVIDIA GPU for the GPU-accelerated version

Software and Libraries:

  • C++ for the core implementation
  • OpenMP for multi-core CPU parallelization
  • The CUDA toolkit for the GPU implementation
  • Profiling tools to identify hotspots and measure speedups

Goals and Deliverables

Primary Goals (Must-Achieve)

  • A correct serial C++ baseline implementation of NeRF rendering
  • A CPU-parallel version using OpenMP, validated against the baseline and benchmarked for speedup
  • A GPU-accelerated version using CUDA with optimized data transfers and kernel launches
  • A performance evaluation comparing the serial, OpenMP, and CUDA versions, with graphs and analysis

Extra Goals (Nice-to-Have)

Schedule

Week 1 (Now – March 31)
  • Conduct a detailed literature review on NeRF and existing parallel implementations
  • Finalize the project design and outline the architecture
  • Set up the development environment
Week 2 (April 1 – April 7)
  • Develop a baseline serial implementation in C++
  • Begin profiling the serial code to identify hotspots
Week 3 (April 8 – April 14)
  • Parallelize the CPU version using OpenMP
  • Validate correctness and perform initial performance benchmarking
  • Prepare progress updates for the milestone report
Milestone Report (due April 15, 11:59pm)
  • Submit a detailed milestone report covering the current implementation, performance benchmarks, and an updated project schedule
Week 4 (April 16 – April 22)
  • Develop the GPU-accelerated version using CUDA
  • Optimize data transfers and kernel launches
  • Begin integrating performance measurement tools
Week 5 (April 23 – April 28)
  • Complete performance evaluations for both CPU and GPU versions
  • Prepare graphs and detailed analyses for the final report
  • Finalize the code base and documentation
Final Report (due April 28, 11:59pm)
  • Compile the final report (approx. 10 pages, including figures and analysis)
Poster Session (April 29)
  • Present the project via a poster session