irath96 / focal-guiding

Implementation of "Focal Path Guiding for Light Transport Simulation" (SIGGRAPH 2023)

Home Page: https://graphics.cg.uni-saarland.de/publications/rath-2023-focal-guiding.html

License: GNU General Public License v3.0


Focal Path Guiding for Light Transport Simulation

🍿 Check out our SIGGRAPH 2023 presentation on YouTube!
🔥 Try our interactive guiding demo or our interactive visualizer!


This repository contains the authors' Mitsuba implementation of the Focal Path Guiding algorithm. We have implemented our algorithm in a recursive path tracer, which can be found under mitsuba/src/integrators/path/focal_path.cpp. The underlying data structure, as described in our paper, can be found under mitsuba/src/integrators/path/focal_guiding.h.

Scenes

You can download all scenes tested in our paper from this link. The renders from our paper can be viewed with our interactive online viewer. By default (e.g., by running mitsuba camera-obscura.xml without further arguments), the scenes render using focal path guiding with the same settings as in our paper. You can switch between integrators using the $integrator variable (e.g., by passing -Dintegrator=bdpt for bidirectional path tracing). Note that you will need to tune the number of samples by hand for integrators that do not support equal-time rendering. Also be aware that equal-time results may look different on your hardware if it is more or less powerful than our testing hardware. The shared link also contains reference renders, albeit with some residual outliers due to the enormous complexity of the light transport featured in the scenes.
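For reference, the invocations described above might look as follows. camera-obscura.xml is the only scene file named in this README; substitute any scene from the downloaded archive.

```shell
# Render with focal path guiding using the settings from the paper (the default)
mitsuba camera-obscura.xml

# Switch integrators via the $integrator scene variable,
# e.g. bidirectional path tracing:
mitsuba -Dintegrator=bdpt camera-obscura.xml
```

For integrators without equal-time rendering support, remember to tune the sample count by hand in the scene file.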

Parameters

⚠️ Please note that this implementation does not support Russian roulette. Common variants of Russian roulette are prone to undoing the benefits of guiding. Since Russian roulette is orthogonal to our method, we leave implementing more sophisticated variants as future work.

Our algorithm performs multiple training iterations before producing the final render. For more details, please refer to our paper. The following parameters are supported by our integrator:

budget (default 120)

The time (in seconds) allocated for rendering.

iterationCount (default 15)

The number of training iterations to be performed.

iterationBudget (default 6)

The time (in seconds) that each training iteration takes. A good initial guess is to dedicate half of the time to training and the other half to rendering [Müller et al. 2017]. To this end, make sure that iterationCount times iterationBudget equals half of the budget.

dumpScene (default false)

Dumps the geometry of the scene as a single PLY mesh file, used by our visualizer.

orth.threshold (default 1e-3)

Controls the resolution of the guiding octrees. Smaller values will yield finer resolution, at the expense of higher computational cost per sample. If many focal points are present, consider lowering this value. Please consult our paper for a study on this parameter.
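Putting the parameters above together, an integrator declaration in a scene file might look like the following sketch. The plugin name focal_path is an assumption inferred from the source file name (mitsuba/src/integrators/path/focal_path.cpp); consult the bundled scenes for the exact syntax.

```xml
<!-- Sketch only: the plugin name "focal_path" is assumed from its source
     file name. All values shown are the documented defaults. -->
<integrator type="focal_path">
    <float name="budget" value="120"/>          <!-- total time (seconds) allocated for rendering -->
    <integer name="iterationCount" value="15"/> <!-- number of training iterations -->
    <float name="iterationBudget" value="6"/>   <!-- time (seconds) per training iteration -->
    <boolean name="dumpScene" value="false"/>   <!-- export scene geometry as a single PLY mesh -->
    <float name="orth.threshold" value="1e-3"/> <!-- resolution of the guiding octrees -->
</integrator>
```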

Compilation

To compile the Mitsuba code, please follow the instructions from the Mitsuba documentation (sections 4.1.1 through 4.6). Since our new code uses C++11 features, a slightly more recent compiler and dependencies than those reported in the Mitsuba documentation may be required. We only support compiling Mitsuba with the SCons build system, but we do support Python 3.
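Following those instructions, a typical build might look like this sketch. It assumes all dependencies are installed and that a platform configuration file has been chosen as per the Mitsuba documentation; the config file name below is one example and varies by platform and compiler.

```shell
# Select a build configuration for your platform (see Mitsuba docs, section 4.1.1)
cp build/config-linux-gcc.py config.py

# Compile with SCons (parallel build with 8 jobs)
scons -j 8

# Add the freshly built binaries to the current shell's environment
source setpath.sh
```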

We tested our Mitsuba code on

  • macOS (Ventura, arm64)
  • Linux (Ubuntu 22.04, x64)

License

The new code introduced by this project is licensed under the GNU General Public License (Version 3). Please consult the bundled LICENSE file for the full license text.
