Comments (7)
First light test:
The Google Colab notebook to generate the sensitivities data is here: https://colab.research.google.com/drive/18X8ICpZ6MZmWxOzXNpUF0ehb_emfUOf-
It makes me wonder if you have a build of the renderer (or recipe) that works on the Colab VMs? That could be useful! If not, I will work on one when I have some spare cycles.
Note that I have an almost exactly 1-stop energy loss between the CMFS and the camera sensitivities that I need to investigate, but it reminds me of #14. The issue is most likely between the chair and the screen.
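For what it's worth, one quick sanity check for a figure like that is to compare the integrated energy of the two sets of curves under a flat illuminant. Here is a minimal numpy sketch; the Gaussian lobes and the exact factor of two are made-up stand-ins, not the actual CIE or camera data:

```python
import numpy as np

# Illustrative stand-ins for the real curves (NOT the actual CIE 1931 CMFS
# or any real camera data): Gaussian lobes on a uniform wavelength grid.
wl = np.arange(380.0, 781.0, 5.0)
dwl = 5.0

def lobe(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([lobe(600, 40), lobe(550, 40), lobe(450, 30)], axis=1)
camera = 0.5 * cmfs  # hypothetical camera exactly one stop less sensitive

# Total energy captured under a flat (equal-energy) illuminant,
# integrated with a simple Riemann sum on the uniform grid.
e_cmfs = cmfs.sum() * dwl
e_cam = camera.sum() * dwl
print(f"difference: {np.log2(e_cmfs / e_cam):.2f} stops")  # 1.00 here
```

If the ratio between the real curves is not close to a round number like this, the loss is probably in the data rather than in normalisation.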
from mitsuba2.
Another related question: any recommendations on how to output spectral images? :)
Discussing here is fine. But what is a "CMFS"? :-)
Hi @wjakob,
First, congratulations are in order, and I should have started there! Quite an awesome release, I'm super excited :)
CMFS stands for Colour Matching Functions; I would like to replace the CIE 1931 2 Degree Standard Observer data with camera sensitivities.
I'm part of an Academy working group trying to come up with a gamut mapping algorithm for scene-referred data, and I would like to generate spectral imagery from multiple camera models.
I have been using existing hyper-spectral images and integrated them with narrow-band LEDs and various camera sensitivities here: https://academy-vwg-gm-hyper-spectral-images.imfast.io/ The problem with that approach (even though it generates the offending data we require) is that it does it too well: the entire image is affected! I would like to generate synthetic data in Mitsuba under mixed lighting, and that is where the different sensitivities come into play.
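The integration step described above can be sketched roughly like this. It is a hypothetical minimal version assuming the spectral cube and the sensitivities have been resampled to the same uniform wavelength grid; the function name, shapes, and toy data are all illustrative, not part of any actual pipeline:

```python
import numpy as np

def spectral_to_camera(cube, sens, dwl):
    """Integrate an (H, W, N) spectral radiance cube against (N, C)
    camera sensitivities sampled on the same uniform wavelength grid."""
    return np.einsum('hwn,nc->hwc', cube, sens) * dwl

# Toy data: a 2x2 "image" with a flat spectrum over 31 samples (400-700 nm).
wl = np.linspace(400.0, 700.0, 31)
dwl = wl[1] - wl[0]
cube = np.ones((2, 2, 31))
sens = np.ones((31, 3)) * np.array([1.0, 0.5, 0.25])  # fake RGB curves
rgb = spectral_to_camera(cube, sens, dwl)
```

Swapping the `sens` table per camera model is then enough to produce one rendering of the same scene per sensitivity set.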
Hope it makes sense!
Cheers,
Thomas
Hi @KelSolaar,
that makes sense. As a quick-and-dirty solution, you could simply replace the CIE XYZ tables in src/libcore/spectrum.cpp by something else. A cleaner solution, and a technique for outputting an arbitrary number of spectral channels, should not be too hard: implement a new meta-integrator that calls a nested "sub-integrator". This is already done in aov.cpp and would just need some small changes to integrate against different spectral response functions.
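To illustrate the idea (this is not Mitsuba's actual API, just a hedged numpy sketch of what such a meta-integrator would do with the nested integrator's spectral samples): each output channel is a Monte Carlo estimate of the radiance integrated against one response curve.

```python
import numpy as np

def accumulate_channels(radiance, wl_samples, wl_grid, responses):
    """radiance: (S,) spectral samples from a nested integrator at
    wavelengths wl_samples; responses: (N, C) curves tabulated on wl_grid.
    Returns a (C,) Monte Carlo estimate per channel, assuming wavelengths
    are drawn uniformly over the grid's range."""
    span = wl_grid[-1] - wl_grid[0]
    out = np.empty(responses.shape[1])
    for c in range(responses.shape[1]):
        # Look up this channel's response at each sampled wavelength.
        r = np.interp(wl_samples, wl_grid, responses[:, c])
        out[c] = span * np.mean(radiance * r)
    return out

# Toy check: flat radiance against flat unit responses over 400-700 nm.
rng = np.random.default_rng(0)
wl_grid = np.linspace(400.0, 700.0, 61)
samples = rng.uniform(400.0, 700.0, 4096)
est = accumulate_channels(np.ones(4096), samples, wl_grid, np.ones((61, 2)))
```

The real implementation would of course importance-sample wavelengths and reuse the renderer's existing sampling machinery; this only shows the per-channel accumulation.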
Best,
Wenzel
As a quick-and-dirty solution, you could simply replace the CIE XYZ tables in src/libcore/spectrum.cpp by something else.
Yeah, that is what I have been doing for now (re-compiling is long, though).
I reckon those would be two useful new features!
It makes me wonder if you have a build of the renderer (or recipe) that works on the Colab VMs?
Not really. It would be great to have! Feel free to open a PR to add a recipe to the documentation, that would be really useful!
I am going to close this, as it is not really an issue.