Blog

Qt and OpenGL 3.3

Some time ago I stumbled upon a great e-book on OpenGL programming: Learning Modern 3D Graphics Programming. The best thing about it is that it teaches the modern approach to graphics programming, based on OpenGL 3.3 with programmable shaders, and not the "fixed pipeline" known from OpenGL 1.x, which is now considered obsolete. I already know a lot about vectors, matrices and all the basics, and I have some general idea about how shaders work, but this book describes everything in a very organized fashion and allows me to broaden my knowledge.

When I first learned OpenGL over 10 years ago, it was all about a bunch of glBegin/glVertex/glEnd calls and that's how Grape3D, my first 3D graphics program, actually worked. Fraqtive, which also has a 3D mode, used the incredibly advanced technique of glVertexPointer and glDrawElements, which dates back to OpenGL 1.1.

A lot has changed since then. OpenGL 2.0 introduced shaders, but they were still closely tied to the fixed pipeline state objects, such as materials and lights. The idea was that shaders could be used, when supported, to improve graphical effects, for example by using per-pixel Phong lighting instead of the per-vertex Gouraud shading provided by the fixed pipeline. Since many graphics cards didn't support shaders at that time, OpenGL would gracefully fall back to the fixed pipeline functionality, and everything would still be rendered correctly.

Nowadays all decent graphics cards support shaders, so in OpenGL 3.x the entire fixed pipeline became obsolete and using shaders is the only "right" way to go. There is even a special mode called the "Core profile" which enforces this by disabling all of the old-style API. This means that without a proper graphics chipset the program will simply no longer work. I don't consider this a big issue. All modern games require a chipset compatible with DirectX 10, so why should a program dedicated to rendering 3D graphics be any different? Functionally, OpenGL 3.3 is more or less the equivalent of DirectX 10, so it seems like a reasonable choice.

I was happy to learn that Qt supports the Core profile, only to discover that it doesn't actually work because of an unresolved bug. Besides, the article mentions that "some drivers may incur a small performance penalty when using the Core profile". This sounds like a major WTF to me, because the whole idea of the Core profile was to simplify and optimize things, right? Anyway, I decided to use OpenGL 3.3 without enforcing the Core profile for now, but to try to implement everything as if I were using that profile.

Another problem I faced is that my laptop is three years old, and even though its graphics chipset was pretty good for its time (NVIDIA Quadro NVS 140M), I discovered that the supported OpenGL version was only 2.1. I couldn't find any newer drivers from Lenovo, so I installed the latest generic drivers from NVIDIA and now I have OpenGL 3.3. Yay! So I modified my Descend prototype to use version 3.30 shaders and QGLBuffer objects (which are wrappers for Vertex Buffer Objects and Index Buffer Objects), but I will write more about that in the next post.

Filed under: Blog

To blog or not to blog?

When I created this website over six years ago, it was simply a place where I could publish my development projects and components. Over time I started adding photo galleries and posting some personal notes once in a while, but I thought that the whole blogging business was simply for people with too much free time. But things have changed since then. Ex-bloggers are using Facebook nowadays, and modern technical blogs are one of the most important sources of specialist knowledge for us programmers. So it's not a matter of having fans, regular readers, etc. It's rather a matter of feeding Google's spiders with information that someone, someday may find useful.

Writing has always been the most natural form of communication for me, especially about technical things. In the past I constantly published various open source components (I will return to this topic in a while), but that requires a lot of time. I found it easier to write short technical articles, and I can't deny that they actually started forming a blog. So today I tagged all posts and placed a nice "tag cloud" in the sidebar to make the whole thing look a bit more like Web 2.0 (or is it 3.0 already? I'm always lagging behind ;>). And now that I'm slowly starting to work on Descend, you can expect more about 3D graphics and compiler programming in the near future. I think it's worth doing even if it's just for the purpose of archiving, and because writing things down helps my thinking process.

Another change that I finally made was adding previous/next links to images in the gallery. I should've done this a long time ago; it was simply a matter of copying some code from the forum module to the image module. All right, I should've upgraded this whole website a long time ago, but I've made so many customizations that manually patching the code here and there became easier than migrating the whole thing.

Speaking of legacy code, let me return to the topic of open source components. Those for MFC haven't been updated for years and I'm no longer able to maintain them even if I wanted to. Besides, who uses MFC today? Obviously those who have to maintain legacy code, but no sane person would start a new project with it. So as part of the cleaning process, I moved all those components to a single place. The documentation is still available, obviously, but moved off this website. I've also been thinking about deleting all the related forums, but I decided to leave them for now for archival purposes. And by the way, even more out-of-date versions of these articles are still available on CodeGuru.com, where I initially published them.

Finally, I renamed the "articles" section to "components", because that's what they actually are. In a sense, my blog posts are more like articles. Anyway... I also have to publish new versions of the Qt components, because the code has been finished for a long time and I just have to update the documentation and demo projects.

Filed under: Blog

Adaptive Tessellation

The Descend project is now officially reactivated and yesterday I committed the current version of the prototype into the SVN repository. The UI is very basic, but it does its job of drawing a 3D surface based on mathematical equations. So far it consists of three important parts:

  • A compiler and interpreter of Misc, a programming language designed for calculating geometry with high performance. You can read more about it in the previous post and I will keep returning to it as it's quite an interesting subject.
  • An adaptive tessellator, which is described below in more detail.
  • Vertex and pixel shaders that perform per-pixel Phong shading, which looks nice on the Barbie-pink surface :).

A parametric surface is described by a function V(p, q), where V is the vector describing the location of a point in 3D space and p and q are parameters (usually in [0, 1] or some other range). Obviously the surface consists of an infinite number of points, so we must calculate a number of samples and join the resulting points into triangles. This process is called tessellation. If the triangles are small enough, the Phong shading will create an illusion that the surface is smooth and curved.
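To make this concrete, here is a minimal sketch in plain C++ (not actual Descend code; the parameter conventions are my own) of such a function V(p, q) for the unit sphere, with p sweeping the longitude and q the latitude:

```cpp
#include <cmath>

// A point in 3D space.
struct Vec3 {
    double x, y, z;
};

// Hypothetical example surface: the unit sphere parameterized by
// p, q in [0, 1]. p sweeps the longitude, q the latitude.
Vec3 sphere(double p, double q) {
    const double PI = 3.14159265358979323846;
    double lon = 2.0 * PI * p;        // 0..2*pi around the equator
    double lat = PI * q - PI / 2.0;   // -pi/2 at one pole, +pi/2 at the other
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}

double length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```

Sampling this function at a set of (p, q) values and joining the resulting points into triangles is exactly the tessellation step described above.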

The only difficulty is determining what "small enough" means. Some surfaces are flat or almost flat and need just a few triangles to look good. Other surfaces are very curved and require thousands of triangles. In practice most surfaces are flatter in some areas and more curved in others. Take a sphere for example. Its curvature is the same everywhere, but we must remember that our samples are not distributed uniformly on its surface. Imagine a globe: meridians are located much closer to each other near the poles than near the equator. So in practice the distance between two samples located near the equator is greater, and the surface needs to be divided into more triangles there. This way, the size of all triangles will be more or less the same. Without adaptive tessellation, triangles would be closely packed near the poles and very large near the equator.

The tessellation algorithm works by first calculating four points at the corners of the surface. Wait, where does a sphere have corners? Just unwrap it mentally into a rectangular map, transforming it from (x, y, z) space into the (p, q) space. This gives us a square divided diagonally into two triangles. Then we calculate a point in the middle of the diagonal and divide each triangle into two smaller triangles. This process can be repeated recursively until the desired quality is reached.
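The recursive split can be sketched like this in plain C++ (illustrative only, not Descend's implementation: it omits the curvature test and simply counts the triangles produced at a fixed depth). Bisecting the hypotenuse a-b at its midpoint m yields two smaller right triangles whose hypotenuses are a-c and b-c:

```cpp
#include <cstddef>

// A 2D point in the (p, q) parameter domain.
struct P2 { double p, q; };

// Recursively bisect the hypotenuse a-b at its midpoint m; the two
// children are right triangles with hypotenuses c-a and b-c. In the
// real algorithm the recursion would stop when the surface is flat
// enough; here we just count leaf triangles at a fixed depth.
std::size_t subdivide(P2 a, P2 b, P2 c, int depth) {
    if (depth == 0)
        return 1;                              // emit triangle (a, b, c)
    P2 m = { (a.p + b.p) / 2.0, (a.q + b.q) / 2.0 };
    return subdivide(c, a, m, depth - 1)       // child with hypotenuse c-a
         + subdivide(b, c, m, depth - 1);      // child with hypotenuse b-c
}
```

Starting from the two diagonal halves of the unit square, each level of recursion doubles the number of triangles, so depth 3 yields 16 triangles in total.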

How do we measure the quality? The simplest method is to calculate the distance between the "new" point and the line that we are attempting to divide. The greater the distance, relative to the length of the line, the more curved the surface. If this distance is smaller than some threshold value, we simply assume that the point lies on the line and discard it. The smaller the threshold, the more accurate the tessellation and the more triangles we get.
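A sketch of this distance test in plain C++ (the function name and threshold convention are mine, not Descend's):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static V3 sub(const V3& a, const V3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(const V3& a, const V3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double len(const V3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Distance-based flatness test: the midpoint sample m is considered to
// lie on the edge a-b when its distance from the line, relative to the
// edge length, falls below the threshold.
bool isFlatEnough(const V3& a, const V3& b, const V3& m, double threshold) {
    V3 ab = sub(b, a);
    double edge = len(ab);
    // distance from m to the infinite line through a and b
    double dist = len(cross(sub(m, a), ab)) / edge;
    return dist / edge < threshold;
}
```

When isFlatEnough returns true, the midpoint sample is discarded and the edge is not split any further.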

Unfortunately there are situations when this gives wrong results. If the curvature of the surface between two points resembles a sinusoid, then the third point in between appears to be located very near the line drawn between those two points. The tessellation algorithm will assume that the surface is not curved in this area. This produces very ugly artifacts.

So I came up with a method which produces much more accurate results. In order to render the surface with lighting, we need to calculate normal vectors at every point. For the Phong shading to look nice, those normals must be calculated very accurately. So two more points are calculated at a very small distance from the original one, and the resulting triangle is used to calculate the normal. Note that the angle between normals is a very accurate measure of the curvature. An algorithm which compares the angle between the normals of the two endpoints and the normal of the "new" point with a threshold angle can handle situations like the above much better. It's also more computationally expensive, because we must calculate three samples before we can decide whether the point is rejected or not.
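Here is an illustrative version of both steps in plain C++ (the example surface, step size and function names are hypothetical, not taken from Descend):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static V3 sub(const V3& a, const V3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(const V3& a, const V3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 normalize(const V3& v) {
    double l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Hypothetical example surface: the unit sphere, with p as longitude
// and q as latitude, both in [0, 1].
V3 surface(double p, double q) {
    const double PI = 3.14159265358979323846;
    double lon = 2.0 * PI * p, lat = PI * q - PI / 2.0;
    return { std::cos(lat) * std::cos(lon),
             std::cos(lat) * std::sin(lon),
             std::sin(lat) };
}

// Approximate the normal at (p, q) from two extra samples taken a tiny
// step away: the three points span a small triangle whose plane
// approximates the tangent plane.
V3 approxNormal(double p, double q) {
    const double eps = 1e-5;
    V3 o = surface(p, q);
    V3 dp = sub(surface(p + eps, q), o);
    V3 dq = sub(surface(p, q + eps), o);
    return normalize(cross(dp, dq));
}

// The curvature measure: angle (in radians) between two unit normals,
// to be compared against the threshold angle.
double angleBetween(const V3& n1, const V3& n2) {
    double d = dot(n1, n2);
    if (d > 1.0) d = 1.0;      // guard acos against rounding
    if (d < -1.0) d = -1.0;
    return std::acos(d);
}
```

For the unit sphere the approximated normal at any point should be (up to sign) the radial direction, which gives a quick sanity check of the finite-difference step.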

Of course this method can also be fooled in some specific cases, but in combination with the first one it works accurately in most cases. Experimentation shows that the threshold angle of 5° gives excellent results for every reasonable surface I was able to come up with.

In practice we also have to impose a minimum and maximum number of divisions. Up to a certain point we simply keep dividing the grid into smaller triangles without even measuring the curvature, because otherwise the results would be very inaccurate. And since the curvature may be infinite at some points, we also must have some upper limit.

Final notes: adaptive tessellation of parametric surfaces is the subject of many PhD dissertations and my algorithm is very simplistic, but it's fast and accurate enough for the purposes of Descend. Also, it should not be confused with the adaptive tessellation used for displacement mapping, which is a different concept.

Filed under: Blog

Misc

The long gap since my last post indicates that a lot has happened. In case you didn't notice in the photo gallery, my son Adam was born on January 11th at about 3 a.m. I just uploaded some more recent photos to the gallery. He's changing a lot: he already smiles, makes various vowel sounds, and tries to catch objects and hold them in his hand. It's a great experience to be a father, although it requires a lot of patience. Well, it's never a bad time to learn something new, and I personally find it very inspiring :).

In the meantime I released an update for WebIssues, changed the layout of my web sites a bit and also made some improvements to Saladin (you can expect a new version soon). I also made pretty good progress with my novel. I will write more about it shortly, but it's generally a story about hackers, magic and virtual worlds. The nature of my day job is rather destructive to my imagination (and to the amount of time I can spend on other things), but it turns out that I'm still able to use it and I have some interesting ideas from time to time.

I'm also very proud that after so many years I successfully revived the idea of Descend. In a nutshell, it's the successor of a program for drawing parametric surfaces which I wrote over 10 years ago. Last year I wrote an adaptive tessellator and made a working prototype of the program using Qt's built-in JavaScript engine. It worked nicely, but it wasn't very fast. Qt uses the JavaScriptCore engine from WebKit, which is currently developed by Apple and powers the Safari browser. It is one of the fastest JS engines, with a register-based virtual machine and a built-in JIT compiler (you can find more interesting information about it in this post). However, JavaScript is an object-oriented, dynamically typed language, best suited for creating complex AJAX-based applications. It's not optimized for number crunching.

So I designed my own language, which is statically typed and has only four types: boolean, float, 4D vector and 4x4 matrix. I called it Misc, although it doesn't have much to do with the old Misc engine that I wrote 7 years ago (which was dynamically typed and supported complex types like objects and arrays). I wrote a relatively simple compiler which produces bytecode and doesn't perform any fancy optimizations, and a stack-based virtual machine with dedicated instructions for vector and matrix operations, trigonometric functions, etc. As soon as I got it working, I performed some benchmarks.
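To illustrate the general idea of such a machine, here is a toy sketch in C++; the opcodes and value layout below are invented for this example and have nothing to do with the real Misc bytecode:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Every stack slot is a 4D vector; scalars would live in one component.
using Vec4 = std::array<double, 4>;

// A made-up instruction set: PUSH carries a literal vector operand,
// the rest operate on the top of the stack.
enum Op { PUSH, ADD, MUL, SIN };

struct Instr {
    Op op;
    Vec4 arg;   // used by PUSH only
};

// Execute the bytecode and return the value left on top of the stack.
Vec4 run(const std::vector<Instr>& code) {
    std::vector<Vec4> stack;
    for (const Instr& in : code) {
        switch (in.op) {
        case PUSH:
            stack.push_back(in.arg);
            break;
        case ADD: {                       // component-wise vector add
            Vec4 b = stack.back(); stack.pop_back();
            Vec4& a = stack.back();
            for (int i = 0; i < 4; ++i) a[i] += b[i];
            break;
        }
        case MUL: {                       // component-wise multiply
            Vec4 b = stack.back(); stack.pop_back();
            Vec4& a = stack.back();
            for (int i = 0; i < 4; ++i) a[i] *= b[i];
            break;
        }
        case SIN: {                       // a dedicated trigonometric instruction
            Vec4& a = stack.back();
            for (int i = 0; i < 4; ++i) a[i] = std::sin(a[i]);
            break;
        }
        }
    }
    return stack.back();
}
```

The point of the dedicated vector instructions is that one dispatched opcode does four component operations, so the per-instruction interpretation overhead is amortized.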

I created three versions of Descend: one based on JavaScriptCore, another one using the new Misc engine, and the third one with equations hard-coded in C++. All three versions were compiled using Visual C++ 2008 with full optimizations and link-time code generation. I measured the total time of generating a deformed sphere which consisted of 43,000 vertices and required calculating 135,000 samples (the tessellator needs 3 samples per vertex in order to accurately calculate normal vectors).

The results are very encouraging: the version using Misc is not only 5 times faster than the one based on JavaScriptCore, but also just 3.5 times slower than the one using native code! Obviously this doesn't mean that Misc code is almost as fast as native code, but the overhead added by the Misc interpreter is not very big, compared to the cost of the tessellator itself and the cost of calculating the trigonometric functions (which is the same for all three versions). In contrast, the overhead of JavaScriptCore is much more noticeable.

Surely there's still some room for improvement, but I doubt I would be able to squeeze more than a few CPU cycles without making the code significantly more complex. So I'd rather start working on the UI to make it possible to draw something more than a single surface. I will publish the first version once it's usable, which may take a few months depending on how much time I will be able to dedicate to it.

Filed under: Blog

Sixth anniversary

This year's anniversary of the mimec.org website coincides with two very important events. First, I'm finally releasing version 1.0 of WebIssues. It's been the longest development cycle I've ever gone through, as it took about 2.5 years; in that time I (and other contributors) made about 1,000 commits to SVN and published nine pre-release versions. The popularity of this project is also continuously growing, reaching about 15,000 downloads this year, and a few weeks ago WebIssues was one of the featured projects on the SourceForge.net main page. But the most important thing is that I managed to put all the long-awaited features into it, and it became a really unique, innovative project which can compete even with commercial software.

The second, even more important event, which my wife and I are impatiently awaiting, is the birth of our first son. All indications are that it's going to happen right after Christmas. This is really going to be the biggest and most important "project" of the next few years :). Starting next year, you can expect some new photo galleries to appear on this site, especially since we're going to continue visiting various interesting places around the world, as it seems to be the most reasonable way to spend money in these uncertain times.

Hopefully I'm still going to find some time to continue working on various other things:

  • I'm planning to update the mimec.org sites, refresh their appearance a bit and update some content, for example by publishing new versions of some articles.
  • I'm also going to continue developing Saladin, as it's a great, promising project that also deserves a bit of attention.
  • There's still a lot that can be done with WebIssues, so after some time off I'm also going to return to it. I'm also seriously thinking about starting some professional services related to WebIssues, such as paid hosting, support, etc., but time will tell what will come of it.
  • There are a few other ideas that keep circling in my head, for example a bazillionth version of the Misc computing language interpreter, a novel that I recently started to write, etc.

Filed under: Blog