From the post: "Ray tracing was invented by Turner Whitted around 1980."
I'm sure Turner did some great things, but somehow I don't believe he invented ray tracing. Ray tracing has been around for so long in physics...
Whitted published the first paper applying recursive ray tracing to the problem of rendering an image. Quibbling over this is like saying no one invented computer graphics because Renaissance artists knew how projection worked.
I checked this out because I expected to learn about the brand-new RT Core accelerated ray tracing. It turns out this tutorial doesn't touch that topic. It's awesome either way, but https://devblogs.nvidia.com/nvidia-optix-ray-tracing-powered... might be more relevant for people like me.
Can someone explain ray tracing to me like I'm five?
Assume this five-year-old loves playing many different kinds of games and would be glad if his favorite developers ended up making better experiences for him using ray tracing.
Ray tracing is relatively easy to understand; it's not the tracing itself that is complicated. Ray tracing is simply sending out rays from the virtual camera, seeing what they hit along the way, and calculating the lighting effects of the materials and light sources at those hit points. In the end you've simulated one sample of the light that hit the camera. Repeat a couple million times and you've got an accurate picture of what the virtual scene looks like.
"Normal" computer graphics is comprised of doing every dirty shortcut trick you can imagine to avoid tracing hundreds of millions of rays per second while still having some semblance of lighting effects in the virtual scene.
Also, don't forget that the nature of the material influences how light behaves (scattering, reflection, refraction, etc.), and depending on the scene those computations can get quite expensive...
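To make "send out a ray, see what it hits, calculate the lighting" concrete, here's a minimal sketch in Python (not from the article; the scene, names, and single point light are my own): one camera ray tested against one sphere, shaded with a simple Lambert term.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray from a camera at the origin, looking down -z at a sphere.
camera = (0.0, 0.0, 0.0)
ray_dir = (0.0, 0.0, -1.0)
center, radius = (0.0, 0.0, -3.0), 1.0
light = (5.0, 5.0, 0.0)

t = hit_sphere(camera, ray_dir, center, radius)
if t is not None:
    hit = tuple(camera[i] + t * ray_dir[i] for i in range(3))
    normal = tuple((hit[i] - center[i]) / radius for i in range(3))
    to_light = tuple(light[i] - hit[i] for i in range(3))
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = tuple(x / norm for x in to_light)
    # Lambert shading: brightness is the cosine between normal and light direction.
    brightness = max(0.0, sum(normal[i] * to_light[i] for i in range(3)))
    print(round(brightness, 3))
```

That's one sample for one pixel; a real renderer repeats this millions of times with far more elaborate material models.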
Most games use scanline rendering. The program starts with a list of all the objects in the scene, and one by one calculates where they go on the screen.
Raytracing starts with all the pixels on the screen, and one by one calculates what goes in each pixel.
Adding shadows with scanline rendering is hard. Checking every object and calculating where its shadow goes is complicated. (Look up volumetric shadows and shadow maps if you want to know more.) Raytracing shadows is easy. For each pixel, you check if anything is blocking the light. If yes, it's in shadow. If no, it's lit.
Adding reflections with scanline rendering is almost impossible (ignoring hacks that only work in some situations). Since one object can have many reflections, you can't just go down the list of objects and calculate a reflection for each one. Raytracing reflections is easy. For each pixel on a reflective object, you check to see what would cast a reflection there.
Now, nobody is actually building fully raytraced games. "Raytraced" games first draw everything with scanline rendering, then go back and use raytracing to add shadows and reflections.
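The "check if anything is blocking the light" step above can be sketched like this (a toy Python example; the scene layout and function names are made up):

```python
import math

def blocks(origin, direction, max_t, center, radius):
    """True if the sphere intersects the ray strictly between origin and max_t."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / (2 * a)
    return 1e-4 < t < max_t   # small epsilon avoids shadowing yourself

def in_shadow(point, light, occluder_center, occluder_radius):
    to_light = [light[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    return blocks(point, direction, dist, occluder_center, occluder_radius)

# A point on the floor, a light above it, and a sphere hanging in between.
print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))   # sphere blocks the light
print(in_shadow((3, 0, 0), (3, 10, 0), (0, 5, 0), 1.0))   # off to the side: lit
```

One shadow ray per light per pixel; compare that with maintaining shadow maps or shadow volumes per caster in a rasterizer.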
I still feel like that was more like an eli10 than an eli5, but good enough. :-)
Imagine light rays coming out of your eyes. For every ray that hits an object, draw rays to every light source from that hit point.
Light in reverse.
In-between those two steps you can figure out intensity and colour of light, the colour of objects, roughness, transparency, refraction, reflectivity, etc etc. You can even bounce multiple rays to other objects for global illumination and whatever.
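The reflection and refraction mentioned above come down to two small vector formulas (sketched in Python; the function names are mine, the formulas are the standard mirror-reflection and Snell's-law ones):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(v, n):
    """Mirror direction v about unit normal n: v - 2(v.n)n."""
    d = dot(v, n)
    return tuple(v[i] - 2 * d * n[i] for i in range(3))

def refract(v, n, eta):
    """Snell's law for a unit incoming direction v and unit normal n facing v.
    eta is the ratio of refractive indices; returns None on total internal reflection."""
    cos_theta = min(-dot(v, n), 1.0)
    perp = tuple(eta * (v[i] + cos_theta * n[i]) for i in range(3))
    k = 1.0 - dot(perp, perp)
    if k < 0:
        return None   # total internal reflection: all light bounces back
    return tuple(perp[i] - math.sqrt(k) * n[i] for i in range(3))

s = 1 / math.sqrt(2)
print(reflect((s, -s, 0.0), (0.0, 1.0, 0.0)))   # a 45-degree ray bounces up at 45 degrees
```

Everything else (roughness, colour, intensity) is about how you weight and perturb these secondary rays.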
With ray tracing, you cast an imaginary ray from your eye through each pixel of your monitor and you follow that ray until it hits some object in the scene behind it.
And then when you hit that object, you cast more rays from that intersection hit point:
- to light sources and see whether or not there's some other object between that intersection and the light source. This determines shadows.
- to the reflected position of the initial ray, to calculate reflections.
- ...
It's a recursive process where you gather light values as you go, the combination of which gives you a final color for the pixel through which you shot that initial ray.
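The recursive gather described above can be sketched in miniature (Python; a made-up two-sphere scene and my own names, with only a shadow ray and a mirror bounce, where a real renderer does far more):

```python
import math

# Scene: list of (center, radius, base_color, reflectivity). Values are arbitrary.
SPHERES = [((0, 0, -3), 1.0, (0.8, 0.2, 0.2), 0.5),
           ((0, -101, -3), 100.0, (0.4, 0.4, 0.4), 0.0)]
LIGHT = (5, 5, 0)

def hit(origin, direction):
    """Nearest (t, sphere) intersection along a unit-length direction, or None."""
    best = None
    for s in SPHERES:
        center, radius = s[0], s[1]
        oc = [origin[i] - center[i] for i in range(3)]
        b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4 * c          # direction is unit length, so a == 1
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2
        if t > 1e-4 and (best is None or t < best[0]):
            best = (t, s)
    return best

def trace(origin, direction, depth=0):
    found = hit(origin, direction)
    if found is None or depth > 2:
        return (0.2, 0.2, 0.2)        # background colour / recursion cutoff
    t, (center, radius, color, refl) = found
    p = tuple(origin[i] + t * direction[i] for i in range(3))
    n = tuple((p[i] - center[i]) / radius for i in range(3))
    # Shadow ray: is the light visible from p?
    to_l = [LIGHT[i] - p[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in to_l))
    l = [x / dist for x in to_l]
    shadow = hit(p, l)
    lit = 0.0 if (shadow and shadow[0] < dist) else max(0.0, sum(n[i] * l[i] for i in range(3)))
    direct = tuple(color[i] * lit for i in range(3))
    if refl == 0.0:
        return direct
    # Mirror bounce: recurse along the reflected direction and blend the results.
    d_dot_n = sum(direction[i] * n[i] for i in range(3))
    r = tuple(direction[i] - 2 * d_dot_n * n[i] for i in range(3))
    bounced = trace(p, r, depth + 1)
    return tuple((1 - refl) * direct[i] + refl * bounced[i] for i in range(3))

pixel = trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
print(pixel)
```

Each recursion level answers "what colour arrives here from that direction?", and the answers combine on the way back up into the final pixel colour.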
There are two common ways to draw stuff in 3D computer graphics. The most common one in games is "rasterization" which means: "For each object, figure out what pixels it covers." Movies (and soon games) use a mixture of rasterization and ray tracing. Ray tracing means "For each pixel figure out what objects cover it."
You can do simple ray tracing by looping over each pixel, defining a ray that starts from the viewpoint and passes through the pixel on the screen, then looping over every object in the whole scene to see if and where that ray hits each object. That'll work, but it'll be very slow. Then the fun begins figuring out data structures and algorithms to make that process faster.
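The brute-force version just described looks something like this sketch (resolution and scene are arbitrary, names are mine):

```python
import math

SPHERES = [((0.0, 0.0, -5.0), 1.0), ((2.0, 0.0, -6.0), 1.0)]
WIDTH, HEIGHT = 8, 8

def nearest_hit(origin, direction):
    """Test the ray against every object: O(number of objects) per ray."""
    best_t = None
    for center, radius in SPHERES:
        oc = [origin[i] - center[i] for i in range(3)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / (2 * a)
            if t > 0 and (best_t is None or t < best_t):
                best_t = t
    return best_t

hits = 0
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Map the pixel to a point on a virtual screen one unit in front of the eye.
        u = (x + 0.5) / WIDTH * 2 - 1
        v = 1 - (y + 0.5) / HEIGHT * 2
        if nearest_hit((0.0, 0.0, 0.0), (u, v, -1.0)) is not None:
            hits += 1
print(hits)
```

Every pixel loops over every object, which is why acceleration structures like BVHs, which cut each ray's search to roughly log(objects), are where the real engineering goes.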
The advantage of rasterization is that it can be very fast, because it works with small amounts of data at a time, with fairly predictable access patterns and good locality between pixels. But that's also its downside: it doesn't work well with information that is not local to a single point on an object (for example, how close are other nearby objects? That's hard for a rasterizer).
The advantage of ray tracing is that it is built from the ground up around querying the entire environment, so features like shadows, reflections and ambient occlusion become much easier to get working. That's also its downside: querying the environment is difficult to make fast. So even though the features are easy to write, they are still a challenge to keep under the frame time budget.
As the topic is about Ray Tracing in One Weekend and the book admits it's "technically a path tracer" here's a great Disney video introducing path tracing: https://youtu.be/frLwRLS_ZR0
A ray is a vector with a starting point and a direction (a half-line in 3D space).
Tracing means asking: "where does this ray intersect an object?"
So there is nothing special about ray tracing. But GPUs can now compute those intersections very fast, and by tracing rays we can simulate how real light behaves. So today we can have photorealistic images in seconds or even in real time.
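For a sphere, that intersection question reduces to a quadratic: a point on the ray is origin + t*direction, and substituting it into the sphere equation |P - C|^2 = r^2 gives a quadratic in t (a Python sketch with my own names):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center|^2 == radius^2, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)                      # |D|^2
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))  # 2 (O - C) . D
    c = sum(x * x for x in oc) - radius * radius           # |O - C|^2 - r^2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None            # ray misses the sphere entirely
    for t in ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)):
        if t > 0:
            return t           # nearest intersection in front of the origin
    return None

print(ray_sphere_t((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))   # enters the sphere at t = 4
```

A negative discriminant means a miss, two positive roots mean the ray enters and exits, and the smaller positive root is the visible surface.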
What's wrong with the code? I have CUDA 10 installed and all of Nvidia's shipped samples work fine, while this doesn't. I'm new and totally lost; how can this be debugged?
1. Don't assume it's someone else's fault - "What's wrong with the code" is the wrong first question.
2. Read the original article, in its entirety. Fortunately people are still nice enough to help you, but the combo of "how is this screwed up" and "I didn't thoroughly read the article" is off-putting to people who would otherwise love to assist you.
The relevant section:
> If you start with my Makefile, note that I build for a GTX 1070 card using specific -gencode flags for that card (-gencode arch=compute_60,code=sm_60). You will want to adjust the architecture and feature settings for the GPU or GPUs you will be running on.
The programming model for GPUs is pretty different from CPUs. (Branching is discouraged on GPUs, simpler hardware, highly parallel, etc.). I think the right approach is to have separate languages for the two.
Yes, and the question was whether it makes sense for Rust to target the GPU, not whether it's possible. I think most people would agree that in an ideal world you would have a different programming language for the GPU than you have for the CPU because the hardware is so different. CUDA uses C++ (and C and Fortran) because there is already lots of code written in these languages and it makes it easy to port code to run on the GPU.
"Ray Tracing in One Weekend/The Next Week/The Rest of Your Life" have recently switched to DRM-free, "Pay What You Want" pricing http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-we...