I've been playing around with making a real-time raytracer using OpenGL (and SFML for window management). I got a pretty simple scene down but the camera is broken. Here's a very brief video:
I'm using the same camera system I use for my OpenGL projects, but I think there's an issue with the coordinate system. The camera rotation isn't working, but movement appears to work decently (at least forward and backward). I think I need to transform the objects' vertices into view space using the ModelViewProjection matrix, which is what I would normally do in OpenGL:
#version 330 core

layout(location = 0) in vec3 VerticePosition;

uniform mat4 uMVP;

void main()
{
    gl_Position = uMVP * vec4(VerticePosition, 1.0);
}
However, I'm not sure how to emulate this in a purely-GLSL raytracer. I found an example on GitHub which I'm reverse-engineering for educational purposes.
I have my code on GitHub if anyone's interested in checking it out. I used the C++ raytracer in the same repo as a guideline. The GLSL version isn't optimal; it's focused on getting the job done for testing purposes. I'm also interested in the idea of learning compute shaders and maybe making a raytraced Pong or something, which could be cool.
I'm open to any comments or criticisms, of course!