Here is a summary I wrote on my attempt to render balls using only a fragment shader. I am pretty happy with the result, and I've included a breakdown of the creation process.
In trying to understand how the rotation was implemented, it took me longer than it probably should have to realize that the normal map (which I think of in terms of lighting) is actually used as a radial position on the sphere.
I had to go back and forth between the final and original code a few times, and finally noticed that "n" gets renamed to "point" when it becomes an argument to colorLookup.
This dual usage of the normal, for the lighting brightness calculation and as a radial position on the sphere surface when looking up the color, makes sense in retrospect, but it's also a somewhat subtle aspect that you might want to address more explicitly in the text instead of just silently renaming the variable in the code.
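To spell out how I now read it, here's a rough sketch in my own words (made-up uniform names, not your actual code; colorLookup stands in for whatever the article does):

    // Sketch of my reading, not the article's exact shader.
    uniform vec3 lightDir;   // normalized light direction
    uniform mat3 rotation;   // the ball's current rotation

    vec3 colorLookup(vec3 point) {   // placeholder for the stripe/number lookup
        return vec3(1.0);
    }

    void shadeBall(vec2 p) {         // p: position inside the unit circle
        // The same unit vector plays two roles:
        vec3 n = vec3(p, sqrt(1.0 - dot(p, p)));

        // 1) as a surface normal, for the diffuse brightness
        float diffuse = max(dot(n, lightDir), 0.0);

        // 2) as a radial position on the sphere, rotated into the ball's
        //    frame and renamed "point" when handed to colorLookup
        vec3 point = rotation * n;
        vec3 albedo = colorLookup(point);

        gl_FragColor = vec4(albedo * diffuse, 1.0);
    }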
In the final video, there are still some texture artifacts whenever the number is near the edge (e.g. the 4 ball at 5 seconds). Maybe because of mipmaps? I can't really think of a reason why it happens. It even happens at the bottom of the 4, even though that part is white in the texture.
Nice work! My only recommendation is to add a pow(color, 1.0/2.2) or similar at the end, and then turn down your light brightness. Your light falloff has the distinctive look of gamma-space lighting, which makes it look dated and less realistic. Doing your lighting in linear space should help.
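Concretely, something like this at the very end of the shader, assuming "color" holds your linear-space lighting result:

    // Keep the lighting math in linear space; convert to gamma only at output.
    // (A cheap approximation of the sRGB transfer curve; note GLSL's pow
    // wants matching argument types, hence the vec3 exponent.)
    gl_FragColor = vec4(pow(color, vec3(1.0 / 2.2)), 1.0);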
Great article! I would be interested in more info about the physics and ball path mapping that you mentioned in the beginning.
Over the last few months, I’ve started competing in billiards, and I find the roll vs. slide, english, tangent lines, and even forced follow to be incredibly interesting.
Is any of that involved in the physics simulations that you’ve worked out? If it is, I would really like to know more.
Cool, I recently started with GLSL, and it’s interesting that there are many roads that lead to a similar solution (how I draw circles or find the center is a tad different).
I was around and writing OpenGL when we started getting the first programmable cards. Nvidia shot out first with their Cg language, but I held back and waited for the "Orange Book" to come out. I bought it right away and learned from it and by looking at other people's shaders.
With a good Phong it's all about the dot products and the specular highlight (if you're using highlights). Make sure you clamp the dot products and give yourself a minimum ambient light value of around 0.15 to 0.2.
The thing that got me: in a classic Phong, you have to test whether the dot product > 0.0; it really won't work outside of the front hemisphere. The clamp isn't enough.
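Roughly what I mean, as a sketch with made-up names:

    uniform vec3 ambientColor;   // the ~0.15-0.2 floor mentioned above
    uniform float shininess;

    vec3 phong(vec3 n, vec3 l, vec3 v, vec3 diffuseColor, vec3 specColor) {
        float ndotl = max(dot(n, l), 0.0);             // clamp the diffuse term
        vec3 color = ambientColor + diffuseColor * ndotl;

        // Only add specular on the front hemisphere; the clamp alone
        // still lets the highlight leak past the terminator.
        if (ndotl > 0.0) {
            vec3 r = reflect(-l, n);
            color += specColor * pow(max(dot(r, v), 0.0), shininess);
        }
        return color;
    }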
In the '90s, texture maps were a luxury for anything. As a result, shaders (for offline rendering; real-time shading languages didn't exist yet) did everything procedurally.
Here is the only image I could find of the RenderMan Shading Language VIDI SPORTS SHADERS that Makina Works sold at the time (you got no source, just bytecode understood by Pixar's implementation at the time, AFAIK):
What is interesting here is that the shaders did everything on a sphere primitive.
All RenderMan implementations at the time (there were more than just Pixar's) were micropolygon renderers. I.e. displacement was almost free in terms of compute overhead.
Even the football operated on a sphere that the shader deformed into the respective shape. All patterns, stripes, etc. were procedurally generated. Each shader had texture slots to apply custom logos.
This reminds me of the same rendering technique used in Hustle Kings for the PS3. There was a write-up done; however, the shader code was not included.
That's very interesting. Thank you for the link. From what I read and see in the images, I gather he's done the same thing I have, as you guessed as well. Obviously he's going much further. The author is using a subtle bump map (simulating ball scratches and imperfections), a reflection map with a gradient and incident angle, and what looks to be a faux subsurface-scattering effect.
Also he is placing his balls in a 3D environment with nice shadows that really help to sell the illusion.
That's a good find and an interesting read. Thank you.
It might be interesting to explore with the students why this would not be quite right for a general 3D view, and how it could be fixed. Conic sections, etc.
I guessed correctly by the title that the article is about coding a 3D image, but I was waiting for the classic "two balls and a cylinder" scene, which college students naturally come up with in path-tracing assignments in computer graphics courses.
I'm just getting my feet wet with shaders myself. Could this fragment shader be used in a "real" 3D game? At the moment, the sphere is orthographically projected onto the screen, so it will always appear as a circle rather than an ellipse. How easy would it be to use this shader in place of a spherical mesh as you might find in a typical geometry-based 3D renderer? I.e. could you use the same technique to render multiple balls on the screen with positions in 3D space, rendered correctly taking into account any distortion caused by the camera?
I'm also new to shaders, so I might be wrong: camera distortion (i.e. perspective) is done in the vertex shader, and this post presents a fragment shader. So if you take a quad and, in the vertex shader, rotate it so it faces the camera (i.e. all vertices have the same z/depth value in clip space), you can then probably use this fragment shader to render the ball.
You have to rotate the quad to face the camera, since it has to cover the area where the ball may render (imagine the extreme case: if the quad is edge-on to the camera, it looks like a line, and you could only render the ball within that line; it wouldn't protrude to the left or right).
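One way to do that rotation is to expand the quad corners in view space, so the quad is always perpendicular to the view direction. A rough sketch (attribute and uniform names are made up, not from the article):

    // Vertex shader: expand a camera-facing quad around the ball centre.
    uniform mat4 viewMatrix;
    uniform mat4 projMatrix;
    uniform float ballRadius;

    attribute vec3 ballCenter;   // same for all four corners of the quad
    attribute vec2 corner;       // (-1,-1), (1,-1), (-1,1), (1,1)

    varying vec2 vCorner;        // handed to the ball fragment shader

    void main() {
        // Move the centre into view space, then offset in the view-space
        // x/y plane so all four corners end up at the same depth.
        vec4 centerView = viewMatrix * vec4(ballCenter, 1.0);
        centerView.xy += corner * ballRadius;
        vCorner = corner;
        gl_Position = projMatrix * centerView;
    }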
It can be incorporated into a 3D scene as long as you can make a square billboard. You may have problems with clipping that you need to address manually, but it might be worth it, as you can get better visuals, assuming you are trying to render spheres.
As far as I know there shouldn't be any distortion issues. A sphere will always look like a circle from any perspective. Well, that is unless it's really, really big, or you are really, really tiny, and given the size difference you are relatively close to its surface. At that point it just looks like a flat plane.
Parent may or may not know this, but "billboards" are textured quads that always face the camera in a 3D scene. Trees and foliage are things that usually turn into billboards in a typical game if they are far away.
Think of it as similar to an impostor: in world space a sphere is always a sphere, and you should do all your lighting in world space, so you don't need to account for the distortion. You do need to project into the camera's view, typically via the projection matrix. And since a sphere has the same silhouette under rotation, you can rotate the quad to face the camera and still draw a circle inside it, though the rotation math then has to look at a different part of the sphere. To integrate with other objects, you would likely turn on depth writing, use gl_FragDepth to write the sphere's front depth into the buffer, and discard fragments outside of the silhouette.
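As a rough sketch of the depth-write part, assuming the quad passes its interpolated corner coordinates and the ball's view-space centre to the fragment shader (all names are illustrative, not from the article):

    uniform mat4 projMatrix;
    uniform float ballRadius;

    varying vec2 vCorner;       // position within the quad, in [-1, 1]
    varying vec3 vCenterView;   // ball centre in view space

    void main() {
        float r2 = dot(vCorner, vCorner);
        if (r2 > 1.0) discard;               // outside the silhouette

        // Reconstruct the front surface point of the sphere in view space.
        float z = sqrt(1.0 - r2);
        vec3 posView = vCenterView + vec3(vCorner, z) * ballRadius;

        // Project it and write the true depth so other geometry intersects correctly.
        vec4 clip = projMatrix * vec4(posView, 1.0);
        float ndcDepth = clip.z / clip.w;
        gl_FragDepth = ndcDepth * 0.5 + 0.5;  // assuming the default [0,1] depth range

        gl_FragColor = vec4(1.0);             // actual shading omitted in this sketch
    }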
Well, first off, you get infinite smoothness and better lighting (and reflections, if you add them) because there is no underlying geometry. It's just math. It's also much faster, as you're only giving the GPU the four vertices defining the square. This can be sped up even further by using point sprites, which consume only one vertex.
"It's just math." But math can be expensive, and in the fragment shader you are calculating everything per-pixel, whereas if you provide your geometry to the vertex shader, you only have to calculate something once per vertex, which is usually much less.
Furthermore, depending on the GPU and the exact workload, it can be that vertex and fragment shading can run in parallel. In that case, it is better to spread the load over both, instead of letting the fragment shaders do everything.
The same goes for texturing. Instead of calculating most of the ball's color pattern in the fragment shader, and only using texture lookups for the number itself, if you just have a texture covering the whole ball, you basically offload a lot of computation to the texture units.
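I.e. instead of evaluating the stripe/number pattern in code, map the sphere point to UVs and let the texture unit do the work. A sketch (the texture and the equirectangular mapping are my own choice, not the article's):

    // Equirectangular lookup: the whole ball pattern baked into one texture.
    uniform sampler2D ballTexture;
    const float PI = 3.14159265358979;

    vec3 ballColor(vec3 point) {            // 'point' = unit vector on the sphere
        vec2 uv = vec2(atan(point.y, point.x) / (2.0 * PI) + 0.5,
                       acos(clamp(point.z, -1.0, 1.0)) / PI);
        return texture2D(ballTexture, uv).rgb;
    }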
That said, for a pool game where the balls are one of the most important items on the screen, it's probably worth it to get the perfect smoothness and lighting, as you mentioned.
I've been advised not to use point sprites. Apparently, most GPU implementations merely convert them into quads in the backend, and gl_PointSize seems to have a soft maximum that depends on the implementation.
I've found a mechanism that uses the "interleaved attribs" feature of transform feedback to spit out 4 verts at a time (as six, to draw two GL_TRIANGLES) into a buffer that a quad-drawing shader program consumes.
Using GL_POINTS/gl_PointCoord is probably better just for teaching children the fundamentals, though; I just wish it actually worked properly in practice.
Even after 10 years I still get anxiety looking at OpenGL code. I still don't understand why we needed to write a ray tracer during a BSc course, starting from a blinking cursor. The professor tried to defend it at the time, but I firmly believe it is the wrong curriculum for a 20-year-old CS student. Computer graphics were and are niche programming and should not be taught as mainstream. Out of 100 graduates maybe one will ever use this; why not use some common sense and teach something useful?
Because graphics is fun. It's a good way to engage students. Besides, what's important is not the outcome but the journey. You learn to read documentation, to design, to debug, and more importantly you learn to learn, which is what higher education is about.
Education is about exercising the mind. I don't think many play with inflated balls in their adulthood, yet it's very important to teach children hand-eye coordination, team play, and other concepts so that they become better functioning humans.
There are problems with the curriculum, though, one of them being that it plays catch-up with the state of the world. IT in particular moves very fast. Graphics programming is not one of those things, however. For example, if you've done the courses and experienced just how hard it is to create anything in 3D, you can better appreciate the work that goes into modern 3D engines. Look at this beautiful breakdown of the inner workings of the GTA V engine:
inb4 you get 10 replies along the lines of "wow, I wish my professors let me do something cool like ray tracing. I spent 4 years implementing cache-inefficient data structures that nobody uses, in an ancient version of some programming language that nobody uses, in a crappy environment that doesn't even have a real debugger"
Not that I can relate. I dropped out of uni well before I encountered anything that would discourage me from going into software :)
Not really, that's more related to the fixed precision of IEEE 754 floating-point numbers. In any fixed-precision representation there's always a particular value that happens to be the best approximation of a given constant like pi. That's the one you would pick.
However, you could perhaps infer from the article that IEEE 754 double-precision numbers are sufficient for most physical calculations.
Other way around, they use that value because that's the most precision you get with IEEE-754 64-bit floating-point numbers, which is the representation also used by JS for floating-point.
The value was greater than mathematical pi (as the best float approximation is, though the best double approximation is not), so normalizing 2*PI into [0, 2*pi) with standard trigonometric functions (which use accurate approximations of pi) would not preserve it, and would instead yield a value close to 4.2e-10, which could cause surprises (for example, if angular ranges are defined as (start, end) and not (start, span)).
I'm surprised you'd have to define pi in a fragment shader at all. Surely such a commonly used constant would be defined in the language/headers already?
I've always thought about that myself. Can someone find the answer to why the GLSL designers / video card driver people didn't build PI into the language?
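In the meantime, the usual workaround is just to declare it yourself at the top of the shader, e.g.:

    // GLSL has no built-in PI, so shaders commonly define their own constant;
    // excess digits are simply rounded to the nearest representable float.
    const float PI = 3.1415926535897932384626433832795;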
Not to pee on your party, but I wouldn't want the users who click on that title expecting...something else to be disappointed. Therefore I have made the minimum viable edit.
(For anyone wondering: the submitted title was "Rendering my balls in a fragment shader" and the article title was changed from "Rendering pool balls" to that.)
The actual article was titled "Rendering Pool Balls". He fixed it by updating the article title rather than the HN submission (which I don't think you even can).