This is very good and useful; I'll have to update my ray-tracer accordingly.

One thing not discussed though is what to do about values that don't fit in the zero-to-one range? In 3-D rendering, there is no maximum intensity of light, so what's the ideal strategy to truncate to the needed range?



> what's the ideal strategy to truncate to the needed range?

This depends on your aesthetic goals. There’s no single right answer here.

There’s been a large amount of academic research into “high dynamic range imaging”. If you do a Google Scholar search, you can find hundreds of papers on the subject; I recommend starting with the most cited ones and then following the citation graph wherever your interests lead you.

Or start with Wikipedia, https://en.wikipedia.org/wiki/High-dynamic-range_imaging


Mostly my goal is just "do the simplest thing that doesn't look bad". My naive approach is to just clamp the values to not exceed 1.0, but it occurs to me that it might be worth asking if there's something else I should be doing instead.

I'm more interested in the computational and algorithmic side of ray-tracing, so I care more about things like constructing optimal bounding volume hierarchies than getting all the physically-correct-rendering details right to produce the absolute best possible output. I just don't want the output to be ugly for easily fixable reasons.


"getting all the physically-correct-rending details right"

So, the short answer to your real question is 'tone mapping' which...is kind of dumb, imho. Clamping is probably fine.

The important thing to remember is that, while ray tracing is cool and fun to code, it has no basis in physical reality. It's no more or less of a hack than scanline polygon rendering (which is to say, you could possibly look at them as approximate solutions to the 'rendering equation' with important things variably ignored? but that's like saying y=x is a cheap approximation of y=x^2...)

One cool hack to take a description of a scene graph as a set of polygons and end up with an image of the scene is to be like "ok, what polygon is under this pixel in the target view? Which way is it facing? Which way are the lights facing relative to it? What color is this material? Ok, multiply that crap together, make the pixel that color". That's good old fashioned scanline 'computer graphics'. Another cool hack is "well, what if we followed a 'ray' out from each pixel, did angle-of-incidence-equals-angle-of-reflection, did some CSG for intersecting the rays with surfaces, see if we end up with a light at the end, and then multiply through the light and the material colors blah blah blah", but it's also just a hack.
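To make the two hacks concrete, here's a minimal Python sketch of the core calculations being described: the "how much does the surface face the light" multiply, and the angle-of-incidence-equals-angle-of-reflection bounce. All the names are made up for illustration; this isn't anyone's actual renderer.

    # Illustrative sketch only: the two shading "hacks" described above.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def lambert_shade(normal, to_light, light_color, material_color):
        # Scanline-style shading: multiply material by light, scaled by
        # how much the surface faces the light (clamped at zero).
        facing = max(0.0, dot(normal, to_light))
        return [m * l * facing for m, l in zip(material_color, light_color)]

    def reflect(direction, normal):
        # Mirror bounce: r = d - 2(d . n)n
        d_dot_n = dot(direction, normal)
        return [d - 2.0 * d_dot_n * n for d, n in zip(direction, normal)]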

I mean, it takes some loose inspiration from the real world I guess, but it's not physically correct at all.

I mention this because I totally get where you are coming from. You might want to check out some techniques that are physically-based though, because they also have interesting implementations (mlt, photon mapping, radiosity)....you might even find it useful to drive your physically-based renderer's sampling bias from intermediate output of your ray tracer!


Plain Whitted-style ray tracing has a lot of shortcomings and in general doesn't look great compared with what people expect from modern graphics, but I don't think it's fair to say that ray tracing is "just a hack". Global illumination methods such as path tracing, Metropolis light transport, and photon mapping are much more accurate and are all fundamentally based on ray tracing. (Radiosity is not, but then radiosity is tremendously slow and doesn't handle non-diffuse surfaces.)

My goal is real-time rendering. I've met with some success; it's definitely nowhere close to the fastest ray tracers around, but I can manage to pull off around 30 fps on a well-behaved static scene with maybe a few tens of thousands of triangles at 720x480 on a dual-socket Broadwell Xeon setup with 24 cores. This means that it's fast enough to make simple interactive simulations and/or games, which is what I mostly care about.

Ray tracing has a lot of advantages when it comes to building applications. I can define my own geometric primitives that are represented by compact data structures. I can do CSG. I can create "portals" that teleport any ray that hits them to another part of the scene and use that to build scenes that violate conventional geometry. I can trace rays to do visibility calculations, collision detection, and to tell me what I just clicked on. I can even click on the reflection of a thing and still be able to identify the object. There may be ways to do some of these things in scanline renderers, but I find it satisfying to be able to do them purely in software with a relatively simple codebase.

I don't have the CPU resources or the skill at optimization to attempt global illumination in real time, but there are other projects that are working on that sort of thing. I have done non-real-time photon mapping before in an earlier incarnation of my ray-tracer; maybe I'll port that forward some day.

(In case anyone is curious, my ray-tracer minus a lot of changes I've made in the last month or so and haven't bothered to push yet can be found here: https://github.com/jimsnow/glome)


I'd say scanline rendering is "more of a hack" than ray tracing, but both can produce useful looking results, of course. They are both crude approximations of reality, but ray tracing is a closer model and it makes non-local phenomena easier to model (especially if you do things in a physically-correct way, respecting the law of conservation of energy etc.) Arguably, you would end up writing a "Universe simulator" if you wanted to accurately model absolutely everything that's happening in the real world :)

By the way, I've written about this exact same topic in my initial ray tracing post:

http://blog.johnnovak.net/2016/04/28/the-nim-raytracer-proje...


One way is to spread the too-high values across neighbouring pixels, so-called bloom. This helps create a sensation of brightness even though, of course, no pixel actually has a value higher than 1.0.

The easiest way is to make a copy of the image, subtract 1.0 from the copy, blur that a bit and then add it on top of the original. This should of course be done before you go into gamma space, at which point you do clamp to 1.0.
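A rough sketch of that recipe in Python/NumPy, assuming a linear-light float image of shape (H, W, 3); the blur radius and function names are arbitrary choices, not a prescription:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_bloom(img, sigma=5.0):
        # img: linear-light float image, shape (H, W, 3); values may exceed 1.0.
        overflow = np.maximum(img - 1.0, 0.0)                      # keep only the part above 1.0
        halo = gaussian_filter(overflow, sigma=(sigma, sigma, 0))  # blur it a bit (not across channels)
        return img + halo                                          # add it back on top

    def to_display(img, gamma=2.2):
        # After bloom: clamp to [0, 1], then encode for display.
        return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)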


Just clamping at 1.0 is fine. HDR tonemapping trying to emulate human vision is mostly bullshit and generally looks awful anyway.

As long as you're rendering in a linear space (which simply means you've converted all your gamma-encoded inputs to linear) and displaying with a display gamma applied, then you're fine.

Beyond that you might choose to emulate a "film look" by applying an S-curve to your image.

EDIT: also, you really don't want to be clamping at all when saving images - just write out a floating-point OpenEXR file.
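For what it's worth, the linear workflow described above boils down to something like this (Python/NumPy sketch; it uses a plain 2.2 power curve rather than the exact sRGB transfer function, and a toy S-curve just to show where a "film look" would slot in):

    import numpy as np

    def decode_input(gamma_encoded):
        # Inputs (textures, color constants) are usually gamma-encoded; bring them to linear.
        return gamma_encoded ** 2.2

    def film_s_curve(x, strength=0.7):
        # Toy S-curve via smoothstep blending; purely illustrative, not a standard curve.
        s = x * x * (3.0 - 2.0 * x)
        return (1.0 - strength) * x + strength * s

    def encode_output(linear, use_s_curve=False):
        x = np.clip(linear, 0.0, 1.0)
        if use_s_curve:
            x = film_s_curve(x)
        return x ** (1.0 / 2.2)   # display gamma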


> HDR tonemapping trying to emulate human vision is mostly bullshit and generally looks awful anyway.

This is just because most of the academic researchers in the field (basically mathematicians/programmers) have horrible aesthetic taste and poor understanding of human vision, not because the concept of local contrast control is inherently bad.

HDR “tonemapping” when well done doesn’t call attention to itself, and you won’t notice it per se.

This is a problem with image processing in general. Researchers are looking for something to demo at SIGGRAPH to non-artists. They want something very obvious and fancy on a few hand-picked images for a 5 minute talk. They aren’t necessarily trying to build general-purpose tools for artists.

Real photographers use all kinds of trickery to control contrast and dynamic range. Ansel Adams tweaked the hell out of his photographs at every stage from capture to print, using pre-exposed negatives, non-standard chemistry timings, contrast-adjusting masks sandwiched with negatives, tons of dodging and burning, etc.


Right, but what you are talking about is a very involved, creative process (what I would call color grading), not an automatic tone-mapping operator which you would apply on an image and hope it doesn't look like crap.


I think “color grading”, “color correction”, “photo retouching”, “image editing”, etc. are all pretty terrible names. Unfortunately there isn’t a great alternative name.

The old term was “printing”, but to a layman that now connotes pressing a button to get an inkjet or whatever.

* * *

You can certainly come up with automatic operators which look better than “hard clip everything brighter than 1.0”.
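For example, the classic Reinhard-style rolloff x / (1 + x) already keeps some separation in bright highlights where a hard clip flattens them (a minimal sketch, not an endorsement of that particular operator):

    def reinhard(x):
        # Maps [0, inf) smoothly into [0, 1); bright values roll off instead of clipping.
        return x / (1.0 + x)

    # Hard clip: 0.9 -> 0.9,   1.5 -> 1.0,  8.0 -> 1.0   (all highlight detail gone)
    # Reinhard:  0.9 -> ~0.47, 1.5 -> 0.6,  8.0 -> ~0.89 (highlights still distinguishable)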

I agree though that there should be more work put into making usable tools for artists instead of “let’s magically fix the picture in one click”.


Maybe you're referring to "The HDR Look", the one with halos and high saturation.

Check out these photos - https://imgur.com/a/cisJY - this was shot with a DSLR with 14 stops of dynamic range. My eyes couldn't see much in the dark spots when I was looking at the bright buildings.

These same RAW photos look horrible (and I mean really, unbearably bad) with "linear mapping".


Edit using a scene-referred application and select a decent display-referred transform.

ACES has a decent enough transform.
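If you just want a single curve to try, there's a widely circulated analytic approximation of the ACES display transform, usually attributed to Krzysztof Narkowicz; it's a curve fit, not the real scene-referred RRT+ODT pipeline, so treat this sketch accordingly:

    import numpy as np

    def aces_approx(x):
        # Commonly quoted fit to the ACES filmic curve (approximation only).
        a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
        return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)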


Please read up on scene-referred to display-referred transforms. You can read about them at http://cinematiccolor.com, which has a Visual Effects Society-endorsed PDF available.

Ultimately it depends on what aesthetic you are aiming for and the context you are displaying it.

You don't truncate, you map.


That PDF looks excellent, thanks! I will link to it in the further reading section.


Check out these couple of articles for some nice tone-mapping functions: http://filmicgames.com/archives/category/tonemapping
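The curve most people take away from those articles is the "Uncharted 2" filmic operator; from memory it looks roughly like this (check the posts themselves for the exact constants and the white-point handling):

    def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
        # Filmic curve in the style of John Hable's "Uncharted 2" operator (from memory).
        return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

    def filmic_tonemap(color, exposure=2.0, white_point=11.2):
        # Normalize so the chosen white point maps to 1.0.
        return hable(exposure * color) / hable(white_point)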



