
lightmapper's People

Contributors: aidinabedi, ands

lightmapper's Issues

Noisy lightmaps

I'm having a weird problem that I can't figure out. You can see a screenshot and the generated lightmap here: https://imgur.com/a/2NqIT

It's like some of the samples are just wrong (usually black instead of grayish, but sometimes white), as if depending on the hemisphere orientation the light is either visible or not visible, but that's wild speculation. I tried turning off interpolation to render all the hemispheres, but that didn't help.

How to pass lightmap data between passes

What's the correct way to transfer the lighting results of Pass N to Pass N+1?

From looking at the sample, it seems like the results of each pass are converted to a plain old texture, and then the next pass uses that texture as emissive data. But that seems wrong. It seems like you wouldn't want to convert the HDR lightmap data to an actual texture until the very end of all your passes/bounces. You'd want to keep the original floating-point 32-bits-per-channel array you pass to lmSetTargetLightmap() around for use in the subsequent pass, rather than rely on a clamped/exposed/processed 8-bits-per-channel texture.

So am I just doing it wrong?

Baking a thickness texture

I would like to use lightmapper to bake a "thickness" texture, i.e. something that approximates the thickness of a closed mesh beneath each surface point. Blender has an option to negate the sample vectors in its AO shader node in order to sample "inside" the mesh. Substance Painter also has a thickness baker, documented here.

What modifications of lightmapper.h would have to be made to make this possible?

Any further documentation ?

Hello, I am currently looking for a way to bake simple global-illumination lightmaps, and this seems to be the most popular solution for that task. I have been struggling my way through the example application and all the available info for a while now; since I mainly work in C++, some of the C aspects were unfamiliar to me, and it was generally quite hard to grasp how the library works. Is there any further documentation, or any absolutely minimal examples, including some for baking global illumination and not just ambient occlusion? How can lights be added to the trace, rather than just a single background color?

Duplicate Normalize line in the lightmapper code

Hi,

Been looking through the code and came across this in lightmapper.h

569: ctx->meshPosition.sample.direction = lm_normalize3(lm_add3(n0, lm_add3(lm_scale3(nv2, uv.x), lm_scale3(nv1, uv.y))));
570: ctx->meshPosition.sample.direction = lm_normalize3(ctx->meshPosition.sample.direction);

Unless I'm reading this wrong, it looks like a typo: direction is being normalized twice.
So I imagine line 570 can just be removed or commented out.

Slow?

Baking becomes slow as more geometry/objects are added to the scene.

hemispheres float texture always NaN if glDisable( GL_BLEND ); not issued beforehand

Hi,

I do not know if this project is still active, but I'd like to suggest a fix for an issue where nothing shows up when lightmapper.h is included in an existing codebase that has already set a few OpenGL states.

In particular, if glEnable( GL_BLEND ); is active, every texel generated comes out as NaN.

Maybe you could force-disable it in lmCreate (though I'm not sure, in case you decide to defer the render), or maybe in renderscene, where you already force-set the depth test.

Cheers,
Dado.

Radiosity Normal Mapping?

How would someone achieve radiosity normal mapping?
According to some Half-Life 2 papers, we need to generate three lightmaps, each with a different directional component. How would that be achieved?

Shadow baking

Hello, I am studying your code. I tried to modify the bakeScene shader, but it seems that shadows cannot be baked properly. Can you give me an idea, or provide some reference code? Sorry, I am a novice. Thank you!

Normal oriented

This is not an issue but a request.

Could you make lightmapping normal-oriented? That is, make the camera rotation normal-aware; I think it would reproduce results similar to Blender's:

http://i.imgur.com/F5KYC7z.png

The way it is at the moment is bad for smooth shading.

Lighting setup guide

Is there a quick-start guide on how to set up different types of lighting (position, color, etc.)?


Normals get transformed twice

When calling lmSetGeometry there is an option to pass in vertex normal data.
If no normal data is passed (type is set to LM_NONE), the normals will internally be calculated from the triangle vertices.

However, the normal seems to be calculated from the triangle vertices AFTER the triangle has already been transformed by the model matrix, so the normals are already in world space.
They then get transformed again by the normal matrix (the inverse transpose of the model matrix).

ctx->meshPosition.triangle.p[i] = lm_transformPosition(ctx->mesh.modelMatrix, p);

// ...
// decode uv coordinates
// ...

lm_vec3 flatNormal = lm_cross3(
	lm_sub3(ctx->meshPosition.triangle.p[1], ctx->meshPosition.triangle.p[0]),
	lm_sub3(ctx->meshPosition.triangle.p[2], ctx->meshPosition.triangle.p[0]));

for (int i = 0; i < 3; i++)
{
	// decode and pre-transform vertex normal
	const void *nPtr = ctx->mesh.normals + vIndices[i] * ctx->mesh.normalsStride;
	lm_vec3 n;
	switch (ctx->mesh.normalsType)
	{
		// TODO: signed formats
	case LM_FLOAT: {
		n = *(const lm_vec3*)nPtr;
	} break;
	case LM_NONE: {
		n = flatNormal;
	} break;
	default: {
		assert(LM_FALSE);
	} break;
	}
	ctx->meshPosition.triangle.n[i] = lm_normalize3(lm_transformNormal(ctx->mesh.normalMatrix, n));
}

It would seem to me that the transformation with the normal matrix should only be done if the normalsType is not LM_NONE.
If I could get some confirmation that this is indeed a bug, I'd be glad to make a pull request.

Note: Great library. Small, yet super effective. It does a great job!

Small request for help - lightmapper in Qt

Small request for help ... :)

I am trying to port/run lightmapper in Qt with OpenGL 2. It started to work, but the results are broken. Maybe you could point me to where to start with this problem: what could cause these artifacts?

Thank you in advance
Pawel

Results:

(attached screenshots: result, debug_interpolation)

and shaders:

// hemisphere shader (weighted downsampling of the 3x1 hemisphere layout to a 0.5x0.5 square)
{
	const char *vs =
		"#version 120\n"

        "#extension GL_EXT_gpu_shader4 : require\n"

		"const vec2 ps[4] = vec2[](vec2(1, -1), vec2(1, 1), vec2(-1, -1), vec2(-1, 1));\n"
		"void main()\n"
		"{\n"
			"gl_Position = vec4(ps[gl_VertexID], 0, 1);\n"
		"}\n";
	const char *fs =
		"#version 120\n"

        "#extension GL_EXT_gpu_shader4 : require\n"

        "uniform sampler2D hemispheres;\n"
		"uniform sampler2D weights;\n"
		"uniform ivec2 weightsTextureSize;\n"

        "vec4 texelFetch(sampler2D tex, ivec2 size, ivec2 coord)\n"
        "{\n"
        "    vec2 fCoord = vec2((2.0*coord.x + 1.0)/(2.0*float(size.x)),(2.0*coord.y + 1.0)/(2.0*float(size.y)));\n"
        "    return texture2D(tex, fCoord);\n"
        "}\n"

        "vec4 weightedSample(ivec2 h_uv, ivec2 w_uv, ivec2 quadrant)\n"
		"{\n"
			"vec4 sample = texelFetch(hemispheres, h_uv + quadrant, ivec2(0));\n"
			"vec2 weight = texelFetch(weights, w_uv + quadrant, ivec2(0)).rg;\n"
			"return vec4(sample.rgb * weight.r, sample.a * weight.g);\n"
		"}\n"

		"vec4 threeWeightedSamples(ivec2 h_uv, ivec2 w_uv, ivec2 offset)\n"
		"{\n" // horizontal triple sum
			"vec4 sum = weightedSample(h_uv, w_uv, offset);\n"
			"offset.x += 2;\n"
			"sum += weightedSample(h_uv, w_uv, offset);\n"
			"offset.x += 2;\n"
			"sum += weightedSample(h_uv, w_uv, offset);\n"
			"return sum;\n"
		"}\n"

		"void main()\n"
		"{\n" // this is a weighted sum downsampling pass (alpha component contains the weighted valid sample count)
			"vec2 in_uv = (gl_FragCoord.xy - vec2(0.5)) * vec2(6.0, 2.0) + vec2(0.01);\n"
			"ivec2 h_uv = ivec2(in_uv);\n"
			"ivec2 w_uv = ivec2(mod(in_uv, vec2(weightsTextureSize)));\n" // there's no integer modulo :(
			"vec4 lb = threeWeightedSamples(h_uv, w_uv, ivec2(0, 0));\n"
			"vec4 rb = threeWeightedSamples(h_uv, w_uv, ivec2(1, 0));\n"
			"vec4 lt = threeWeightedSamples(h_uv, w_uv, ivec2(0, 1));\n"
			"vec4 rt = threeWeightedSamples(h_uv, w_uv, ivec2(1, 1));\n"
			"gl_FragColor = lb + rb + lt + rt;\n"
		"}\n";

Right-Handed Projection Matrix

The projection matrix code in lm_setView is right-handed. When used in a renderer that's left-handed, the culling has to change based on whether it's a lightmap render or a regular scene render. Could you add a setting, perhaps in lmCreate, where left/right-handedness could be specified?

Also, does the library make that assumption, or any other assumption about handedness/winding order/culling anywhere else? I ask because I switched the projection function to left-handed and the result was darker leading me to believe the library relies on that handedness to some degree.

Proper obj format

Hi, I'm trying to test other objs, or just simply open the current example gazebo.obj model in Blender or another modelling tool and export it again. The new output model crashes the example program with this error.

File: C:\ands\example\example.c, Line 319
Expression: !"unknown identifier"

Any ideas?

Can I decide which lightmap gets baked?

I have several meshes. Each mesh has its own unique lightmap.
I want to bake lighting for each mesh separately, but shadows of other meshes should be visible on the currently baked lightmap.

Can lightmapper do this?

License

You should probably add a license to this repo, for legal reasons.

Is TrianglePacker necessary?

Hi, I am currently integrating this lib into my renderer, but I am new to this. I found that the example in the development branch uses TrianglePacker but the master branch does not. I want to know if this is necessary for baking? And if you could explain the theory a little, that would be great!

Assert fail on boundary check

Hi, very cool and useful project; I am integrating it into my 3D modeling software.

I may have found a bug while testing: it sometimes crashes on this assert,

assert(x >= 0 && x < ctx->lightmap.width && y >= 0 && y < ctx->lightmap.height);

Assertion failed: (x >= 0 && x < ctx->lightmap.width && y >= 0 && y < ctx->lightmap.height), function lm_getLightmapPixel, file src/qtlightmapper.h, line 472.
Printing description of ctx->lightmap:
(lm_context::(anonymous struct)) lightmap = {
  width = 512
  height = 512
  channels = 4
  data = 0x000000010de6b000
}
...
x = 512
...
Printing description of ctx->meshPosition:
(lm_context::(anonymous struct)) meshPosition = {
  ...
  rasterizer = (minx = 510, miny = 482, maxx = 512, maxy = 488, x = 512, y = 482)
  ...
}

I guess the max bound of x/y should be the width/height minus one?

lightmapper/lightmapper.h

Lines 1059 to 1060 in 85fafed

ctx->meshPosition.rasterizer.maxx = lm_mini((int)bbMax.x + 1, ctx->lightmap.width);
ctx->meshPosition.rasterizer.maxy = lm_mini((int)bbMax.y + 1, ctx->lightmap.height);

I changed it to,

ctx->meshPosition.rasterizer.maxx = lm_mini((int)bbMax.x + 1, ctx->lightmap.width - 1);
ctx->meshPosition.rasterizer.maxy = lm_mini((int)bbMax.y + 1, ctx->lightmap.height - 1);

and it worked quite well, but I am not sure if this is the right fix, or whether I did something wrong. I have ported the library to the Qt environment to make use of QOffscreenSurface, wrapping all the functions in a class and changing the shader code to work with GLSL version 120, but left all the other code untouched.

About Spherical Harmonics

Hello,

Thanks for this great library. I have been investigating the library and the example for quite some time. The static lightmaps the library generates are great and really realistic. I wonder, is it possible to calculate SH coefficients with this library, to implement something like Unity's light probes for approximating real-time global illumination? Do you have a reference resource you can recommend? I know this is a long path, but I really want to dive deep into this kind of stuff.

John

Preparing obj

Hello! :) How can I prepare my obj so it has no separately indexed vertex attributes, using UVAtlasTool?

Thank You)

Run entirely offline?

I'm exploring this project as a potential offline lightmapper for a WebGL level editor. I'm looking through the source code now but I'm new to C. How easy would it be to make this run entirely offline / in the background?
