The technology behind NoesisGui

February 8, 2012 @ 4:37 | In Noesis, Programming | 7 Comments

As many of you know, in recent years I have been working at Noesis Technologies, in charge of the technology. We are very excited these days because we are ready to release our first public software: our internal user interface technology, packaged as a middleware called NoesisGui. This week we are starting to roll out the SDK to participants in the private beta program, and since many of you have asked about technical details, I thought it would be a good idea to post here a brief overview of the technical challenges we faced in NoesisGui.

At the beginning of the development of NoesisEngine, back in 2009, we evaluated the different options we had for the user interface, and none of them really satisfied us. We were looking for a GUI that was fully portable, Flash-like and fully GPU accelerated. After some discussion we decided to develop, in parallel with the rest of our projects, our own technology for rendering user interfaces. One of the big problems we were aware of was the editor. We didn't have the resources to implement one, so we started evaluating already established formats. Our first option was parsing SWF files, but we didn't like it: Flash without ActionScript is useless, and we didn't want to interpret ActionScript because we were looking for a native C++ solution. Apart from that, SWF is a proprietary binary format, and depending on a proprietary tool for editing our resources didn't appeal to us at all. Other options were equally insufficient for us: Qt, XUL, wxWidgets, Windows Forms, etc. And then we discovered XAML. XAML is an open, declarative, XML-based language developed by Microsoft, used extensively in several Microsoft products like WPF, Silverlight or the Metro UI. Thanks to its great versatility, we finally chose XAML for NoesisGui. There are excellent editors like Microsoft Expression Blend, Visual Studio, Kaxaml or XamlPad, and being XML it can also be edited by hand, allowing easy integration with external tools and pipelines.

And that is basically NoesisGui: a XAML interpreter that renders through the GPU. Let's dive into the details.

XAML Interpreter

XAML is a declarative, XML-based format with powerful features like routed events and dependency properties. It is so expressive that you can write user interfaces with animations, reactions to events or visual transitions without touching a single line of code.

The first step to visualize a XAML file in NoesisGui is converting it to our native format. In Noesis we have a universal object model based on reflection, and every object loaded at runtime must be stored in that format. This conversion is usually performed offline, and the result is a compact binary blob that loads very fast on each target platform.

At runtime we maintain two trees. The first one represents the logical part; it is the one you interact with, and it is stored directly inside the binary blob. The second tree stores the visual representation: it contains the nodes you actually see on screen, and it is the result of expanding the logical tree with visual templates. For example, depending on its skin, a button may be represented with a rounded rectangle or with an ellipse. This template mechanism is what allows skinning, and it is usually applied at load time. Once the visual tree is ready it contains all the instructions to render itself on screen. The rendering itself is done by our vector graphics library. This is where the interesting stuff begins.

Vector Graphics Library

VGL is our SVG-compliant renderer. We experimented with different implementations, like OpenVG and Direct2D, but we finally opted for an implementation on top of our DirectX/OpenGL GPU renderer, which ended up being the most portable and flexible solution. In this implementation all calculations run on the GPU through different shaders; a combination of approximately one hundred shaders is needed to cover all cases.

The most important challenge here is that everything can be animated: geometries may need tessellation every frame, or gradient ramp stops may be animated. To reduce the stress on the communication between CPU and GPU, different caching and compression mechanisms were implemented. For example, gradient ramps are heavily compressed on the CPU side, and geometry vertices are stored as float16 whenever possible.

Another important aspect is minimizing the number of batches sent to the graphics card. For that purpose our vector library uses a very aggressive batching mechanism: you can draw elements in any order you desire, because the algorithm will sort the primitives to minimize the number of draw calls.


After the visual tree is updated we need to generate primitives the GPU can consume, that is, triangles. This is performed in two steps:

  • Flattening: where curves are converted into straight segments.
  • Triangulation: where contours coming from the flattening step are converted into triangles.

The triangulation step is critical and can easily become a bottleneck. We tried different implementations:

  • An implementation using the stencil buffer. We discarded this technique because it is very fill-rate hungry and caching the results in images was not memory efficient. Besides, we were already using the stencil buffer for other purposes.
  • Other GPU alternatives, which we found too complex and unable to implement all the SVG features. For portability reasons we also did not want to rely on complex shaders.
  • In the end we settled on a pure CPU solution: a polygon tessellation algorithm based on libtess, with several memory optimizations. This solution allows us to cache static geometry very efficiently.

Threading Architecture

The task of rendering a single frame in NoesisGui is subdivided into many small jobs that are processed by a task scheduler:

  • All the logic always happens in the caller's thread. Logic actions that change the visual representation (for example, clicking a button) are packed into a job.
  • The job that calculates the new visual representation needs each changed path to be tessellated, so a job is created for each tessellation.
  • Render commands are sent to the GPU with a job.

Dependencies between jobs form a graph that represents the execution of the full frame. The more cores available in the host, the faster the graph is consumed. All this complexity is hidden from the SDK client, which sees a single API with two asynchronous methods, Update() and Render().

For the task scheduler we use TBB, an Intel library that also comes with an interesting scalable allocator.


Antialiasing

High visual quality is a very important aspect: here we compete with traditional software rasterizers that render with an almost perfect antialiasing algorithm. NoesisGui offers two solutions for fighting aliasing:

  • NoesisGui is compatible with MSAA surfaces. We take sub-pixel information into account to properly flatten curves, render glyphs or displace paths. MSAA is usually very fast on modern GPUs, although not so fast on mobile devices, and it requires a lot more memory. That is why we experimented with other options.
  • When an MSAA surface is not available, or when MSAA is too inefficient, we extrude the contours of the path to approximate the coverage at each pixel. This technique is mentioned in the CodeItNow blog. Its drawback is that shapes are slightly altered.

Definitely, antialiasing is an open problem that needs more experimentation to get 100% satisfactory results when MSAA is disabled. What is really frustrating is that it could be easily solved if we had access, in the pixel shader, to the pixel coverage calculated by the rasterizer.

Font Rendering

The appearance of fonts is crucial to the quality of any GUI. FreeType is the third-party library we use for rendering font glyphs. We experimented with different text rendering techniques and ended up implementing the following:

  • We opted for disabling hinting as much as possible, because hinting deforms the shape of the font. In NoesisGui only vertical hinting is applied, while horizontal sub-pixel displacements are allowed. The reasoning behind this is perfectly explained in a document from the excellent AGG project.
  • When rendering on LCD displays, sub-pixel font rendering is used. This technique can be implemented in hardware, as detailed in the Beyond3D forums.
  • As with any other path, stroking can be applied to text. For example, by stroking text without filling the interior you can achieve an outline effect.


I have highlighted the most interesting features of NoesisGui. Of course, we also offer the rest of the features you would expect from a decent SDK (profiling information, memory hooking, filesystem abstraction and so on), which I am not covering here because I do not want to bore you.

After all the time spent working on this part of the Noesis engine, we are confident that NoesisGui is one of the most powerful GUIs, if not the most powerful, for real-time applications. We encourage you to test it and give us feedback. The beta program is already available for free! And as always, comments are welcome here or on my Twitter account.


  1. Hi!

    Nice article, but are you sure you are “fighting antialiasing”?
    Shouldn’t you fight aliasing? :-)

    Comment by Laury
    February 9, 2012 @ 13:03 #

  2. hehe! yes, thanks!! ;)

    Comment by ent
    February 9, 2012 @ 15:43 #

  3. Finally it is here!

    Comment by Zalo
    February 10, 2012 @ 23:47 #

  4. Really good job. Please “bore” us with more :)

    Comment by gyakoo
    February 15, 2012 @ 13:12 #

  5. [...] Access to the beta as well as further information can be found on their website: If you are looking for technical details on how their libraries works, check out that blog entry: Technology behind NoesisGUI [...]

    Pingback by New Ogre binding for Noesis GUI available « OGRE – Open Source 3D Graphics Engine
    November 11, 2012 @ 12:05 #

  6. Nice work and article, though we want more technical in depth :)
    Did you try texture caching of results Vs direct rendering ? (mipmapping/lod problems ?)
    Did you try to combine it with texture atlassing ? ( kindof virtual texture 2D)
    Did you plan/try nvidia path rendering ? (

    Comment by tuan kuranes
    November 12, 2012 @ 9:46 #

  7. Hi!

    Yes, we texture cache in specific scenarios (text and opacity groups) but we are going to add that feature for all the elements. This way, you manually mark the vectors you want to be cached. (feature for v1.1)

    About atlases, we had an automatic mechanism but we finally ended up leaving it as an automatic process (in fact, we developed a plugin for

    And, yes, we played with the nvidia path API and will probably use it when our opengl implementation is ready.

    Yes, definitely I am going to write a new article about all this and a lot more. Meanwhile we can discuss about this in our forums for betatesters if you want.

    Thanks for reading!

    Comment by ent
    November 15, 2012 @ 15:58 #
