
I think leaving the "old-school" out of the title doesn't do this justice.

My first thought was: with modern engines, or even raw OpenGL, that isn't that hard... I did it in a weekend for my "Advanced Computer Graphics" elective in college using OpenGL, and that was 2008. But this is kind of cool... it's more of a tiny ray tracer and... well, to borrow the title... an old-school arcade game.



> leaving the ‘old-school’ out

He added it, but now it unfortunately looks like it says “build your own school shooter in a weekend...”


> in college using OpenGL and that was 2008

Wolf3D was 1992, 16 years before. SGI released OpenGL a few months later. IIRC, the first gfx cards were thousands of dollars. The first successful consumer gfx was 3DFX Voodoo2 in 1996. We take so much for granted nowadays.


It was just 3dfx voodoo, the 2 came later in 1998 :)


I'm not sure what your point is here. My point was that it was easy in 2008, and I'm sure it's as easy or easier now, so a first-person game isn't that impressive. I have massive respect for the people who do it from scratch, especially the ones who were pioneers in the '90s and earlier.

Programming a game from scratch, even a simple-looking one, without the aid of modern APIs is certainly much more difficult. Which is why I was saying this posting is amiss in excluding from the title the fact that this GitHub project uses older techniques.


One big advantage this has over a simple opengl engine is that it doesn't obscure how the rendering occurs.


Ok, let's put the old school back in there.


Title was edited later to include "old school" and my original comment now makes no sense.


> it's more of a tiny ray tracer

It's a ray caster, where the rays are sent out from the camera to intersect the map. With ray tracers, the rays are sent out from the light source, IIRC.

Your point on OpenGL is valid, but that just removes all the learning from it. OpenGL does so much of the grunt work for you. This kind of old-school game engine is a great learning experience.


> It's a ray caster, where the rays are sent out from the camera to intersect the map. With ray tracers, the rays are sent out from the light source, IIRC.

Not correct. I see this pedantry often, but it's factually wrong. If you look at the Wikipedia articles for both ray casting [0] and ray tracing [1], they BOTH describe the methods as sending rays from the camera/eye.

From what I can tell, the difference between ray casting and ray tracing is that in ray casting, only the primary ray is traced. The ray does not get reflected, refracted, or even traced to light sources for checking shadows. At most, surface normals are dealt with for lighting and texture mapping is applied.

[0] https://en.wikipedia.org/wiki/Ray_casting

[1] https://en.wikipedia.org/wiki/Ray_tracing_(graphics)


It's all a mess.

However, in video game rendering, ray casting typically refers to the specific technique where you only trace rays against a 2D scene for a single scan-line, then draw each entire column of pixels based on that result, as is done in Wolfenstein.
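To make that column-by-column idea concrete, here's a toy sketch (not the linked project's code; the map, step size, and screen dimensions are all made up for illustration). One ray per screen column is marched through a 2D grid, and the hit distance sets the wall-column height:

```python
import math

# Hypothetical 2D grid map: '#' is a wall, '.' is empty space.
MAP = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, max_dist=20.0, step=0.01):
    """March a ray from (px, py) until it enters a '#' cell; return distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

def column_heights(px, py, facing, fov=math.pi / 3, width=10, screen_h=64):
    """One ray per screen column; nearer walls give taller columns."""
    heights = []
    for col in range(width):
        ray_angle = facing - fov / 2 + fov * col / (width - 1)
        d = cast_ray(px, py, ray_angle)
        # correct fish-eye by projecting onto the view direction
        d *= math.cos(ray_angle - facing)
        heights.append(min(screen_h, int(screen_h / max(d, 1e-6))))
    return heights

# Facing a flat wall: after fish-eye correction, all columns come out
# roughly equal, since the perpendicular distance to the wall is constant.
print(column_heights(2.5, 2.0, 0.0))
```

Real raycasters use a DDA grid walk instead of this small fixed step, but the structure (one ray per column, height inversely proportional to corrected distance) is the same.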

Ray tracing really can refer to anything that intersects rays against a scene, but it's typically used for a set of techniques that at least involve sending out primary rays from the camera; with "ray-tracing" GPUs, though, we are seeing it used more specifically for secondary rays.

"Path tracing" is generally the set of techniques that end up with a full path from camera to light.

Now, there isn't really much in rendering that sends out rays only from the light source; it's still just too computationally impractical.

However, there is "bidirectional path tracing", which generally sends out rays from both the camera and the light source, then tries to join them in the middle. It's a bit more complicated, but generally converges quicker than other Monte Carlo renderers.

Anyways, as I said, it's a big mess, partially because it's a big continuum, and there are generally renderers that exhibit properties from multiple of these categories.


> With ray tracers, the rays are sent out from the light source, IIRC.

Are there any implementations which send rays from the light source (a.k.a. forward ray tracing)? This is astoundingly inefficient, as most rays will never intersect the camera.

I've never seen one, other than in brief academic discussions. What you can use forward ray tracing for is to compute shadows.


I had considered making a ray tracer that worked that way, but with the slight difference that rays wouldn't have to hit the camera, just a point within line of sight of the camera. Obviously there would be massive gaps between pixels, but I would fill them in with a Voronoi diagram [0], or perhaps shade them with Delaunay triangulation [1]. This renderer would be nothing more than a toy or proof of concept, not intended for real usage.

The classic FOSS ray tracer, POV-Ray, can actually do this. You can define a light and an object, and it will shoot rays from the light to the object, tracing each ray through refraction and reflection. With this, you can simulate the way ripples in a pool concentrate light on the pool floor [2], or the way light bends and refracts [3], without manually calculating it and adding extra light sources.

[0] https://en.wikipedia.org/wiki/Voronoi_diagram

[1] https://en.wikipedia.org/wiki/Delaunay_triangulation

[2] http://www.antoniosiber.org/bruno_pauns_caustic_en.html

[3] http://www.povray.org/documentation/view/3.6.2/424/


Yeah, there are multiple techniques that do this sort of thing. Photon mapping and bidirectional path tracing are on opposite ends of the spectrum.


There are two-pass approaches that do this, such as photon mapping. In the first pass, light is emitted from light sources and allowed to scatter throughout the scene, and each "hit" of a photon on a medium is stored in a large data structure. Then, in the second pass, rays are traced out from the camera, and for each medium a ray intersects, nearby "hits" are used to estimate the light coming from that spot. This produces effects like caustics efficiently.
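A minimal sketch of that two-pass idea, flattened to a 1D "floor" for brevity (all names and numbers here are invented for illustration; real implementations use a 3D scene and a k-d tree for the photon lookup):

```python
import bisect
import math
import random

def emit_photons(n, light_h=2.0, max_angle=0.6, seed=1):
    """Pass 1: photons leave a point light at height light_h in random
    downward directions; record where each one lands on the floor (y = 0)."""
    rng = random.Random(seed)
    hits = [light_h * math.tan(rng.uniform(-max_angle, max_angle))
            for _ in range(n)]
    hits.sort()  # a sorted list stands in for the usual k-d tree
    return hits

def radiance_estimate(hits, x, radius=0.1):
    """Pass 2: the density of stored photon hits near x approximates
    the brightness a camera ray would see at that point."""
    lo = bisect.bisect_left(hits, x - radius)
    hi = bisect.bisect_right(hits, x + radius)
    return (hi - lo) / (2 * radius * len(hits))

photons = emit_photons(20000)
# Hits bunch up directly under the light, so the estimate is highest there.
print(radiance_estimate(photons, 0.0) > radiance_estimate(photons, 1.0))
```

The caustics effect falls out of the same mechanism: anywhere the first pass concentrates photon hits (e.g. after refraction through water), the second pass's density estimate reports extra brightness.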


It's not really ray tracing, but some engines do something analogous to render shadows. Rendering the scene from the point of view of the main light source, and just recording the depth value at each pixel, yields a shadow map. Now for the main render, for each pixel rendered, re-project its coordinates into the light source's camera and compare the distance to the shadow map. If it's further, it's in shadow.
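A toy sketch of that shadow-map comparison, reduced to a 1D scene with a straight-down directional light (the scene, texel count, and bias value are all made up; real engines do this per pixel on the GPU with a depth texture):

```python
LIGHT_Y = 10.0  # hypothetical directional light shining straight down (-y)

def scene_top_height(x):
    """Height of the first surface the light hits at this x: an opaque
    slab floating at y=5 over x=2..4, otherwise the floor at y=0."""
    return 5.0 if 2.0 <= x <= 4.0 else 0.0

def build_shadow_map(x_min=0.0, x_max=8.0, n=64):
    """Pass 1: render from the light's view, storing depth per texel."""
    xs = [x_min + (x_max - x_min) * i / (n - 1) for i in range(n)]
    return xs, [LIGHT_Y - scene_top_height(x) for x in xs]

def in_shadow(xs, depths, px, py, bias=1e-3):
    """Pass 2: re-project (px, py) into the light's view and compare its
    distance from the light against the stored shadow-map depth."""
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - px))  # nearest texel
    return (LIGHT_Y - py) > depths[i] + bias

xs, depths = build_shadow_map()
print(in_shadow(xs, depths, 3.0, 0.0))  # floor under the slab -> True
print(in_shadow(xs, depths, 6.0, 0.0))  # open floor -> False
```

The small bias term is the standard fix for "shadow acne": without it, a surface compared against its own stored depth flickers in and out of shadow due to depth quantization.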


Sure, you could compute shadows with forward ray-tracing, but it's still generally more efficient to compute shadows with rays originating from the camera end of the path.

Of course, this all starts to get a little more complicated when trying to compute global illumination.


Oh, apparently ray casting is considered a form of ray tracing: "Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing." [1]

1: https://en.wikipedia.org/wiki/Ray_casting


What you mean is forward ray tracing. But "ray tracing" usually refers to the variant sending rays from the camera, as forward ray tracing is almost never used.

Ray casting, on the other hand, is basically ray tracing without any reflections or shadows; in other words, each ray stops at the first collision.


Ray-tracing does not necessarily mean rays are cast from the light source.


Thanks everyone for the corrections, obviously my understanding was flawed there!



