There are quite a lot of rendering engines out there, especially raytracers, since it is possible to write a simple raytracer in a weekend. Modern raytracers, on the other hand, are complex and use many techniques to generate “photorealistic” images. Many of these images cannot be distinguished from real photos – at least not without taking a close look at them.
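To illustrate how little is needed for the core of such a weekend raytracer, here is a minimal sketch of the one operation everything else builds on: intersecting a ray with a sphere by solving a quadratic. The function name and the pure-Python tuple math are my own illustrative choices, not taken from any particular engine.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2,
    which is a quadratic a*t^2 + b*t + c = 0 in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0 else None  # only hits in front of the origin count

# A ray from the origin along the z-axis hits a unit sphere
# centered at (0, 0, 5) on its near surface, at distance 4.
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

Everything on top of this – shading, reflections, shadows – is essentially repeated application of such intersection tests plus the approximations discussed below.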
This complexity arises because computers are still slow, so you have to use suitable heuristics and approximations of reality. But that makes implementing and maintaining a raytracer more complicated, so I wonder whether there is some project that tries to create a raytracer which is physically correct rather than fast. One could argue that this would not be useful, since – as I already wrote – computers are too slow, and that it would be better to focus on improving the current raytracers. On the other hand, exploring such questions is what computer science is about in general.
Compare this to filesystems, where in the beginning you basically had numbered data slots, and then 11-byte filenames under DOS. Having 11-byte filenames is still better than just having 4-byte integers; it is an approximation, a tradeoff between efficiency and convenience. But with larger and faster hard disks this became obsolete, and meanwhile, in modern filesystems, the lengths of filenames are only bound by “natural” boundaries like number ranges, and most filesystems allow filenames of at least 255 characters.
Or compare it to programming languages. In the past people coded in raw processor instructions, then in assembly languages, which is more convenient but takes more system resources; then compilers were invented, which produce even less efficient code but help create larger projects much faster. And meanwhile we have scripting languages, intermediate code, etc., because they are simply fast enough in many cases while being very convenient.
So in general, having approximations in the beginning is a good thing, but with approximations stacking on other approximations, systems can get complicated, and so, as hardware gets better, the acceptability of the tradeoffs accompanying this stacking will decrease. I think this is what will happen in the world of raytracers, too. While raytracing is now one of the few practically relevant problems in this category of complexity, with multicore processing and cloud computing the practical problems of raytracing have already changed a bit. And I am sure they will change further, up to the point where approximations no longer make sense in the way they do now.
Having a raytracer that tries to be physically correct, including interference and optical quantum effects, would make it easier to create photorealistic graphics – because you wouldn't have to know about the proper approximations – would also allow generating artificial pictures (for example by adding materials that are not physically realistic), and – above all – would be interesting.
I already asked my former computer graphics lecturer about this, but he also didn't know of such a project. Maybe for some simulations in physics there is already software doing this. I don't know; I haven't found one yet.