
May update

It has been a while since the last update. Since then I have mostly worked on engine-related stuff, but there have also been some improvements in the UI:

Leonardo is getting closer and closer to the point where you can use it for real work, but there are still a few things missing:

  1. HUD-notifications (worked a long time ago but the code has decayed)
  2. Custom brush-stamps as opposed to just ellipses.
  3. Adding, removing and rearranging items in the brush-preset list.
  4. Merging and changing Z-order of layers.
  5. Better UI for layers.
  6. Drag-n-drop from the export bar.

Categories: Chatter, Leonardo.

Review 3 (0.3.3)

Two days ago, Max and I traveled to Gävle to conduct three user tests on a couple of art school students. Here are my (Henning's) observations:

All user tests were conducted in full-screen mode, which meant that no user could access the menu bar (this should be changed before the next user test so that the menu bar “auto hides” in full-screen mode).

  1. All participants requested layers (we have layers but no UI for it, yet).
  2. Although no participants explicitly said so, all of them would have enjoyed a brush preview when they were adjusting the brush settings.
  3. All of the participants instantly understood how to use the new color picker (wheel + triangle) and two of the participants loved it more than any other color picker they had ever used.
  4. All participants seemed to like the new default eraser (its own context plus pressure-sensitive diameter).
  5. Two of the participants complained that the diameter slider was “too sensitive to get the exact diameter you want” and one of them requested being able to enter the diameter numerically from the keyboard (current diameter slider: 207 pixels wide, from 0.5 to 1500 with gamma 2.5).
  6. Two of the participants did not know what the difference between Flow and Opacity is and could not figure it out even after some testing.
  7. All of the participants used the Hardness parameter quite a lot.
  8. Two of the participants requested being able to change the background color (currently possible, but only from the console).
  9. Two of the participants asked if it would be possible to select and move a portion of the canvas (not currently possible).
  10. Two of the participants tried to hit the Z key when they were told to zoom (doesn’t work).
  11. One participant tried to “double click” the space-bar when told to snap the view (doesn’t work).
  12. None of the participants noticed the “zoom box” at the lower left corner even after they were asked to zoom.
  13. All participants seemed to like the rotate-canvas feature and understood how to move the pivot point.
  14. One of the participants asked “what is the current DPI?” (he knew he was at 100% zoom but he didn’t realize that’s how you are supposed to know the resolution you are at).
  15. All of the participants loved the draw-straight-line feature but requested more advanced functionality. One of the participants (Kalle) requested being able to draw curves in a similar fashion and told us it’s very important to be able to offset the curve, without changing it, to draw repeating patterns.
  16. Two of the participants requested being able to get some sort of “A4-paper reference” that just hovered above the canvas.
  17. None of the participants realized that you could move around the color picker and brush settings window.
  18. We forgot to test if the user realized that it was possible to pan around the canvas using the rulers.
  19. One of the participants thought the F key would be more logical for flip-horizontal than the H key.

Categories: Leonardo, User Experience.

Review 2 (0.2.2)

Two weeks ago we did a user test of Leonardo on Martin Piazzolla. These are the main things we came up with:

  1. Martin loved being able to rotate the canvas but didn’t like the shortcut Space+Shift. Martin usually rotated the canvas in a way that made it possible to move the hand in a back-and-forth (as opposed to left-right) direction while painting.
  2. Martin loved the draw straight line tool and was able to find the shortcut himself (Shift).
  3. Martin used the eraser a lot, both as a painting effect and as a way of undoing mistakes. Martin likes to work with an eraser with a large diameter as well as a smaller one.
  4. Martin's brush setting of choice for sketching was a hard round 24-pixel-wide brush at 36% zoom with a light blue color.
  5. Martin liked to outline his sketch with a hard black brush with a diameter that depends on pressure (not possible on the current version).
  6. Martin requested the “multiply blending mode” between layers.

Categories: Leonardo, User Experience.

Review 1

This is how Leonardo currently looks:

Yesterday, Daniel & Max were over and we did a user test on Daniel. Here is a list of things that probably should get addressed:

  1. Daniel's initial reaction after launching the application was something along the lines of: “What tool, brush and color do I currently have and how do I change it?” Daniel eventually figured all of this out, but it was not obvious from the start.
  2. When Max was experimenting around in Leonardo he zoomed out to ~15% and continued painting without realizing that he was at ~15% zoom. This was never a problem for Daniel though.
  3. Daniel wanted to draw a straight line but never figured out how to do it (it is obviously impossible for the user to know to hold down Q). Once I told him how to do it he instantly loved the way the “draw straight line tool” worked.
  4. When Daniel got the task: “convert your current painting to gray-scale” he did not realize that the command “Luminosity” under Filters does this.
  5. Both Daniel and Max wanted to create a new canvas and started searching for “New…” under the File menu, but it is not there…
  6. By mistake Max eyedropped white, he then resumed painting but did not realize he was painting with white and asked why nothing was happening.
  7. When Daniel was switching to Finite Canvas he thought it was strange that the canvas boundary did not get aligned with his current view.
  8. When Daniel was asked to flip the image he flipped it vertically when he was supposed to flip it horizontally (an icon should probably make this much more obvious).

Categories: Leonardo, User Experience.

Content Creation Software

This is an interesting way of categorizing content creation software:

(applications that export data that is intended for the end-user)

  • Image compositor, exports PNG, JPEG etc. (this is Leonardo)
  • Video compositor, exports AVI, MPEG etc.
  • Document compositor, exports PDF.
  • Code compositor, exports EXE.

(applications that collect data from the outside world)

  • Image collector, imports RAW etc.
  • Video collector, imports R3D etc.

(applications that export data that is not intended for the end-user but rather to be imported by some compositor)

  • Surface modeler, exports OBJ etc.
  • Voxel modeler, exports OBJ with corresponding textures (this is Michelangelo).
  • Texture painting, exports textures.
  • Model animation, exports OBJs etc.
  • 2D vector graphic editor, exports SVG etc.
  • Spreadsheet editor, exports XML.
(this list is incomplete)

Categories: Chatter.

Single “Project” Interface

I have changed my mind: We are now making Leonardo a “Single Document Interface” program. The reasons for this include:

  1. Users rarely have more than one tab open, which makes a tab interface just look dumb.
  2. There is already a “tab-interface” inside the OS: the task-bar! (Of course, a Chrome-style tab system can do things the task-bar can’t, but the point is that handling more than one project really is something that should be done by the OS or, alternatively, with a multi-process architecture as Google Chrome does it)
  3. It would be possible to use Ctrl + Tab to swap between layers instead of projects (tabs).
  4. It’s easier to develop an SDI application than a TDI one.
Below is a sketch of Leonardo with all panels visible, except console:

Categories: Chatter.

Definitions & Constraints

Here are some definitions & constraints about Xade and Leonardo:

  1. Xade applications are defined by what they export.
  2. Xade applications' internal file formats should be viewed as project files with revision control.
  3. Xade applications should be able to import every format that could be of interest for producing what the application exports.

So, what is Leonardo?

  1. Leonardo only exports image files like PNG, JPEG, TIFF, TGA, BMP and OpenEXR.
  2. But Leonardo can import a whole host of formats, including 2D- and 3D-vector graphics files. These formats get imported as special read-only layers that can be used either as reference material or rasterised to an image.

This divides Leonardo into two parts:

  1. A still image, layer based compositor.
  2. A raster based painting system.

And just to be clear, Leonardo will not do the following:

  1. Export vector-graphics or video.
  2. Manipulate vector-graphics or video.
  3. Have specialized tools for photo editing.
  4. Have node based compositing.
  5. Paint on 3D-objects.

Categories: Chatter.


It is important to have crystal clear definitions so everybody knows what you are talking about. I have therefore compiled a list of the most common definitions, which you can find under the ‘definition’ page to the right.

Categories: Chatter.

Back from India!

Back from India (one week ago now).

I have created three new pages to aggregate some good ideas (look to the right under ‘pages’)

Categories: Chatter.

State of the Application 2

On Saturday I will travel to India for 4 weeks. So I thought it’s time to give you The State of the Application!

User Interface
This is how Xade Leonardo currently looks:

(The Swedish flag should not be there in the final version ;-) )

Since I want to minimize UI clutter I have thought long and hard about what should be visible at all times. I have come up with the following things: current tool, tool help, current color, file size, zoom level, currently selected layer and whether Leonardo is currently doing some background work. You don’t need to show the current brush and radius since the cursor already contains that information.

Although the Leonardo engine can handle layers, the UI doesn’t show you any information about them right now. So far, my best idea for this is to put small tabs on the right-hand side of the canvas where the layer name is written vertically to save space. I am planning on making layers a “first class citizen” so that it will be possible to drag-n-drop layers between tabs, and to drop files in and out as layers, including recently exported files from the Export bar (not shown in the picture above).

One thing that hit me recently is that you want to avoid having sliders on the left and upper parts of the screen. This becomes apparent when you use a tablet with a built-in display: your arm will cover most of the screen while you are adjusting the slider, something you obviously want to avoid.


Color Spaces and Gamma
A couple of weeks back I spent some time teaching myself about color spaces and gamma correction. I had some prior knowledge of this, but if you had asked me: “Why doesn’t a standard HSV-Hue shift preserve luminosity?” I would have had no good answer. Now I know the answer and I am planning on addressing it in Leonardo, among a whole host of other issues. What is amazing to me is that not even Photoshop manages to do all this correctly. I guess they know all this at Adobe, but they are stuck with what they have because of backward compatibility issues.

My current thinking is to have Leonardo work in an absolute color space like sRGB or AdobeRGB, as opposed to just “random”-RGB, and to store pixels in linear space as opposed to gamma space. I also hope to be able to do some of the pixel operations (like Hue shifts) in LUV-1976 space, which I find a really nice color space, although there might be some problems with out-of-gamut colors.

Everybody is familiar with a tone histogram (the one you get in a digital camera or under Photoshop's Levels), which is mostly used for setting the black and white points of an image. A couple of days ago I had a crazy idea of taking this to the next level with a density or contour plot of the chromaticity of an image. I think this will be an awesome visualization for Hue/Saturation and color-correction style adjustments, and it will make it obvious to a novice user why a Hue-shift is a modular adjustment.


Destructive vs. Non-destructive editing
Over the past 5 months I must have spent over 60 hours just thinking about destructive vs. non-destructive editing (my favorite occupation while taking a walk along the lake). Now, Leonardo is primarily a destructive image editor, but since you want some form of synthesis between different layers, the question is how far you go down the path of non-destructive editing. Do you allow blend modes? Do you allow procedurally generated layers? Do you allow vector layers? Do you allow non-destructive adjustment layers? Do you allow non-destructive warps? Do you allow visibility masks? Do you allow “layer styles”? All of these are still open questions…

Another problem related to this is my personal disgust with “blending modes”. I understand they are extremely powerful for the expert user, but even I, with a strong mathematical background, can’t use them intuitively! On the other hand, I haven’t come up with a good alternative :-(


Node recursion
In a previous blog post I talked about switching to a fixed-root system. Well, I switched back! I found a way to solve the problem while still keeping a non-fixed root, which I am really satisfied with.

While we are on the topic of node recursions, this is one of the most beautiful things I have ever written:

struct node_s {
    unsigned int cb : 4;            /* child-bits: one bit per existing child */
    unsigned int id : 28;
    struct node_s *childs[0];       /* struct-hack: compact child pointer list */
};

node = node->childs[ bitcount[ node->cb & ((1<<c) - 1) ] ];

(node->childs is a compact child pointer list, node->cb is child-bits, bitcount is a precomputed popcount table and c is the child number you want to step to)

The node data has a very small footprint (notice the struct-hack), it’s super fast and yet the whole thing is relatively simple. Storing your node metadata in this way takes only a fraction of the space it otherwise would! Unfortunately I use quite big nodes these days (128×128 pixels) so this doesn’t really matter that much anymore :-(

Categories: Leonardo, Technical.

User cases

(these also appear on the ‘user cases’ page)

I have defined 3 primary user cases for Leonardo:

  1. Lean Forward – the user sits in front of his PC or Mac with his right hand on the Wacom and the left hand on the keyboard. The user might drag-n-drop tabs and files, use the built-in console and do other “advanced” stuff.
  2. Lean Backward – the user sits in a sofa with a wireless Wacom in his lap while doodling on his big screen TV. The user might not have “easy” access to the keyboard and the TV might be far away which makes small things harder to see.
  3. Doodling in a public place – the user is doing some rough sketching on his iPad on the subway, in a cafe or in a public park. When the user gets back home he wants to continue on his PC or Mac with minimum fuss.

I believe the second user case really is the future of digital doodling and less hardcore painting. I realized this after seeing two of my friends throw out their stationary PCs; they now do everything in front of their TVs with a wireless keyboard and mouse.

This has convinced me to buy a big screen TV, a Mac Book Air, an iPad and a wireless Intuos.

Categories: Leonardo, User Experience.

Node Size

A couple of weeks ago I researched all the different possible node sizes, from 32×32 all the way up to 256×256. There are three places where you store nodes: on disk, in memory and on the GPU. In memory you generally want small nodes so you don’t bloat the footprint (a pixel in memory takes 4*sizeof( float ) = 16 bytes) and stay relatively cache-coherent. But small nodes generate a ton of disk IO, which is costly. The optimal node size for the renderer is a non-issue since you can always bundle nodes together into atlases, and a pixel on the GPU takes only 4 bytes.
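To put numbers on that tradeoff, here is a quick sketch using the byte counts above (16 bytes per pixel in main memory, 4 on the GPU):

```c
/* Per-node footprint at the byte costs given above: 16 bytes/pixel in
   main memory (4 floats), 4 bytes/pixel on the GPU. */
static unsigned long node_bytes(unsigned side, unsigned bytes_per_pixel)
{
    return (unsigned long)side * side * bytes_per_pixel;
}

/* How many nodes a canvas region touches at a given node size; small
   nodes mean many more IO requests for the same painted area. */
static unsigned long nodes_covering(unsigned w, unsigned h, unsigned side)
{
    return (unsigned long)((w + side - 1) / side) * ((h + side - 1) / side);
}
```

A 128×128 node costs 256 KB of main memory but only 64 KB on the GPU, and a 4096×4096 region is 1,024 nodes at 128×128 versus 16,384 at 32×32.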

So the problem is disk vs. main memory. Which one should you optimize for? Or can you find a workaround?

After spending a couple of days trying to come up with a good solution for minimizing disk IO while maintaining small nodes (32×32 or 64×64), I finally gave up, since all my solutions were immensely complex and the disk IO was still pretty bad. Instead I decided to set the node size to whatever is optimal for disk and then try to work around the problems that would arise in memory. I knew the optimal node size for disk was either 128×128 or 256×256. What was surprising, though, was that 256×256 only generated about 60% of the IO requests compared to 128×128, where you naively would have expected ~25%, and since the bandwidth and file size go up with 256×256 I decided against it and set the final node size to 128×128.

To solve the memory footprint problem I introduced nodes in different pixel precisions. Doing it this way means that nodes that should only be displayed can stay in the same precision as on disk, while nodes that should be edited can be converted to RGBA_FP (the main reason I use RGBA_FP is to spare myself all the fixed-point math headaches during development). If you start running low on memory you can start converting nodes back to file precision before you finally clean and unmap them.
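A minimal sketch of that promotion/demotion step, assuming a file precision of 8 bits per channel (the post doesn't pin down the on-disk format):

```c
#include <stdint.h>
#include <stddef.h>

/* Promote a node from file precision (assumed 8-bit RGBA here) to
   RGBA_FP for editing. */
static void node_to_fp(const uint8_t *src, float *dst, size_t pixels)
{
    for (size_t i = 0; i < pixels * 4; i++)
        dst[i] = src[i] / 255.0f;
}

/* Demote back to file precision when memory runs low: clamp to [0,1]
   and round to the nearest 8-bit value. */
static void node_to_file(const float *src, uint8_t *dst, size_t pixels)
{
    for (size_t i = 0; i < pixels * 4; i++) {
        float v = src[i] < 0.0f ? 0.0f : (src[i] > 1.0f ? 1.0f : src[i]);
        dst[i] = (uint8_t)(v * 255.0f + 0.5f);
    }
}
```

Because the demotion rounds to nearest, a promote/demote round trip is lossless for pixels that were not edited in between.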

A side effect of using large node sizes is that the “TOC” (Table of Contents) for each file gets a lot smaller, which allows you to store hash values in it (somewhat similar to a .torrent file). This turns out to be extremely convenient if the user stores his data in the cloud, since everything can then be fetched from a local cache file, eliminating 99% of all downstream traffic. And since Amazon doesn’t charge you for upstream traffic, you have effectively eliminated 99% of the bandwidth cost of using Amazon S3! (Of course, if the user changes host, all data must be downloaded again, but that should be quite rare.)

Categories: Leonardo, Technical.

Xade Software from the business side of things

“You can only connect the dots looking backwards” — Steve Jobs

In the last couple of weeks it has started to crystallize for me exactly what I have been doing for over a year now: I have researched and developed a kick-ass quadtree/octree engine, and soon it is time to tap the potential of this engine by starting a company and developing a product around it. The engine can be used for at least two different products with at least three different platforms each:

  1. Xade Leonardo – pixel painting and sketching for the PC, Mac and iPad.
  2. Xade Michelangelo - voxel sculpting and painting for the PC, Mac and iPad.

On top of this, the engine allows for very efficient streaming of data between client and server, so a cloud-based storage solution would be possible with a revenue model similar to DropBox. Now, of course, I will not build all the above permutations at once. The Leonardo-PC version has the highest priority, mainly because pixels are easier than voxels and I am more familiar with the PC compared to the Mac and iPad.

But the core value of Xade Software is in the engine technology, which, to my knowledge, is unique in the following areas (at least compared to Photoshop):

  1. Infinite canvas — just pan the view in any direction and keep painting.
  2. ‘Zero’ load/save time — even if the file is thousands of MB it loads and saves in under a second.
  3. Zero lag for brushes/filters — even if you use a complex brush that is hundreds of pixels in diameter there are no hiccups or lag.
  4. Cloud storage — My streaming data technology means you can work over a relatively slow network connection (ex. 3G) without any lag which means storing your data in the cloud would be possible.
  5. Flexible canvas history – You don’t lose the history when you close a file, and the history is branched instead of just linear.

On top of this there are of course tons of minor features, and I am also working hard on the non-engine stuff…

So, if a technical person asks “what is Xade Software?”, I will answer:

“We are experts in quad-tree/octree technology, which we use to build software that handles large data sets of raster-based graphics. Our first product is Xade Leonardo, a painting application that focuses on the experience of digital painting.”

And if a non-technical investor/banker asks, I will answer:

“We have developed a truly revolutionary technology, which we use to build an image editor, streamlined for painting and sketching, that is better than anything else out there, including Photoshop.”

And if a customer asks, I will answer:

“We are the creators of Xade Leonardo, a revolutionary new painting and sketching application for the PC, Mac and iPad. Our primary focus for Leonardo is the experience of using it, as opposed to a gazillion features. Just try it for 5 minutes: it is super intuitive, and you will never want to go back!”

Categories: Business, Leonardo, Marketing.

Prototype delay :(

Last week I spent 6 days trekking in northern Sweden, probably my only vacation this summer. Now it’s 100% xade going forward :)

I’m starting to realize that having a finished prototype before midsummer is probably a little too optimistic, but I might have an engine proof-of-concept by then…

Categories: Chatter.

2 major breakthroughs…

For the last ~3 weeks I have struggled to get the progressive rasterization code to work, and today I am happy to tell you that I finally got it to work! This was the result of 2 major breakthroughs in the last 2 days:

1. Fixed the bug where future snapshot node references got screwed during node spawn.

2. Swapped to a fixed root system.

The first breakthrough did not come until I finally drew a 1D version (a binary tree) of the problem on paper and walked through each step that needed to happen.

The second breakthrough allowed me to use Morton keys to identify nodes, which has tons of benefits when we need to locate all nodes that share a particular place in space. Last spring, during my master's thesis on octrees, I did try the fixed-root/Morton-key implementation but decided against it. This time around it looks much more promising, for the following reasons:

1. Each node stores a grid of pixels/voxels, so the quadtree/octree never has to map down to individual pixels/voxels.

2. We don’t use any ray tracing now, which was the biggest reason for keeping the height of the quadtree/octree small.

3. With a quadtree and 64^2 pixels per node we can fit a Morton key inside a single 32-bit word.

One small drawback of using a fixed-root system is that the canvas is not technically “infinite” anymore. My current implementation clamps the canvas size to 2 million by 2 million pixels, which should be enough for most users (going higher than this would, among other things, require me to switch to doubles instead of floats inside the engine).
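The 32-bit Morton key mentioned above is just the node's two grid coordinates with their bits interleaved. A sketch of the standard bit-twiddling version (not necessarily the engine's exact code):

```c
#include <stdint.h>

/* Spread the low 16 bits of x so there is a zero bit between each
   (the classic "part by 1" step of Morton/Z-order encoding). */
static uint32_t part1by1(uint32_t x)
{
    x &= 0x0000ffff;
    x = (x | (x << 8)) & 0x00ff00ff;
    x = (x | (x << 4)) & 0x0f0f0f0f;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

/* Interleave two 16-bit node coordinates into one 32-bit Morton key.
   With 64x64-pixel nodes, 16 bits per axis comfortably covers the
   canvas sizes discussed above. */
static uint32_t morton_key(uint32_t node_x, uint32_t node_y)
{
    return part1by1(node_x) | (part1by1(node_y) << 1);
}
```

For example, morton_key(3, 5) interleaves binary 011 and 101 into 100111, i.e. 39; neighboring nodes end up with nearby keys, which is what makes locating all nodes in a region cheap.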

There is at least another week of engine work and then maybe 2~3 weeks of client work before I am finally done with my Leonardo prototype! Stay tuned!

Categories: Leonardo, Technical.

I’m alive!

A couple of weeks after the last blog post I decided to stop writing this blog, but I think I have changed my mind and will now continue to write it… But maybe not that often…

This is the current look of Xade Leonardo:

(the edges around the xade drawing look bad because of the current flawed implementation of alpha compositing)

Here are some of the features not visible on the above picture:

  1. You can toggle a Quake style console between the tab strip and the menu.
  2. Below the status bar you can toggle a Chrome-like “download bar” where recently exported files appear; these can then be dragged-n-dropped to other applications.
  3. You can toggle a pie menu that appears centered around the cursor with the most basic features for the current tool.

The UI is inspired by, among others, Chrome, Silo, Quake, Nuke, Mari, Google Docs & Facebook.

The engine code is starting to come together for the most basic functionality, but it will probably be many more weeks before the more complex stuff is working.

Right now there is only one tool: the paint brush! In the next couple of weeks I will add a few more. My current plan is to first add the tools/filters that are most likely to break the current engine architecture and save the “easy” ones for last. I will probably implement them in the following order:

  1. Export tool (should work as a background process for large images).
  2. Eyedropper (pretty easy, but one of the few tools that needs a round trip to the server).
  3. Selection tool (very different from all other tools).
  4. Gaussian blur filter (a good way to stress test the engine in adaptivity and speed).
  5. Clone tool (probably needs reading and writing to the same snapshot which might be tricky with SMP).
  6. Smudge tool (needs some kind of sampling from the current snapshot)
  7. Liquify tool (very different from other tools since it works on polygons instead of pixels).

Categories: Leonardo, Technical.

New Website

New web site up at

I thought a lot about the choice of font. Finally I decided on Georgia with all letters uppercase and the first letter in every word a bit larger than the rest. This, together with some extra space between each letter, works great and gives a certain Renaissance feel to the text.

Categories: Aesthetics, Website.

Xade Leonardo

As I wrote in the previous post, I have decided on making a digital painting application named Xade Leonardo. Today most digital painting is done in Photoshop, with Corel Painter and SketchBook Pro a distant second and third. All of these programs (especially the first two) have been around for a long time and are not really streamlined for today's hardware and working methods. I believe I can make a better application, suited for the 21st century…

What I want to accomplish with Xade Leonardo over Photoshop in painting is what:
Silo accomplished over Maya in modeling or,
Mari accomplished over Modo in texture painting or,
Chrome accomplished over IE in browsing or,
Lightroom accomplished over Photoshop in photo editing.
In other words: A streamlined lightweight easy-to-use yet powerful application with minimum UI clutter.

Here is a list of some of my loose goals for Xade Leonardo:

  1. I will only focus on the painting side of things, as opposed to photo editing and vector-based graphics.
  2. The feeling while working with Leonardo is the most important measure of how good it is.
  3. Leo will be designed around a Wacom tablet. The artist should be able to lean back in their chair and never use the keyboard.
  4. Minimize chrome. Almost the entire screen should be the painting area. This will be accomplished with heavy use of pie-menus and an infinite canvas.
  5. The artist should never be forced to wait on anything. Initializing, loading, saving, complex filters etc. should all be handled as a background process.

Some of the more technical ideas that I have in mind (a lot of them are what I developed during the fall but converted to 2D):

  • Using a hybrid of an SDI and TDI application, à la Google Chrome.
  • Use Sqlite for the .leo file format and store everything as a quadtree with a BLOB for each node.
  • A Quadtree image format would allow for an infinite canvas.
  • Each tab in the xade UI corresponds to a module DLL. This makes it possible in the future to add a sculpting module that can be opened in the same instance of xade.
  • Leonardo is split into 3 parts: Client (collecting user input and rendering), Server (applying user commands to the canvas) and Database (file IO). All of these run in separate threads.
  • The Server handles all image operations in a mipmap fashion and feeds the Client whatever it has finished for the moment. This allows the client thread to never stall and always display a representation of the canvas, although possibly a blurry one.
  • Rendering everything including UI with OpenGL.

Categories: Leonardo.

UI work and some random thoughts

Since my last post I have done a lot of work on the UI framework. Below is a screen dump of how it currently looks:

As you can see, I have borrowed the main theme from Google Chrome, e.g. each open file is a tab and it’s possible to drag and drop tabs in and out of xade.

In my last post a lot of what xade is going to be was in flux. Since then I have nailed down some of the issues:

1. I will make a painting/sketching application first, as opposed to doing the voxel sculpting application. The primary reason behind this is minimizing risk. I guesstimate that a painting application will only take half the time compared to the sculpting application, and at least 50% of the code from the painting application can be reused in the sculpting application.

2. The name Xade will be used as an umbrella name for my future company and my current idea is to name the individual products after renaissance artists (or the Turtles) like Xade Leonardo and Xade Michelangelo.

3. Even though Xade won’t be able to edit all 3D asset types, it should still be able to display a whole host of file formats, like all the major image and mesh formats.

4. I will focus on a minimalistic UI, and the most important measure of the application is how good it feels to work with, as opposed to the quantity of features.

Going up against Photoshop with a painting application might seem like complete suicide, but the more I think about it, the more it feels like the right thing to do. In a future post I will talk about what I have in mind…

Categories: Chatter.

Xade Philosophy

I have compiled a list of my software and programming philosophies. My biggest inspiration for these lists, besides 12 years of programming, comes from the philosophies of UNIX, Google and John Carmack.

My philosophy on software:

1. No save button.
2. No latency or hiccups.
3. Focus the application on one specific task.
4. Focus on the beginner and the advanced user.
5. Predictability is more important than features.
6. Minimize chrome and use mostly different shades of gray.
7. Keep both the interface and implementation simple.
8. Make it beautiful but not distracting.
9. Satisfy only 90% of your users.

My philosophy on programming:

1. Fight code entropy.
2. Data structures, not algorithms, are central to programming.
3. All the data structures you need are: arrays, linked lists, hash tables and simple trees.
4. If you come up with a solution that must be tweaked to get robust, start all over!
5. If the interface is clean, well documented and the module is small enough, the internals can look like shit.
6. A fast compile, start-up and shutdown time is essential to keeping programming fun.
7. Try to make all your bugs a loud deterministic crash.
8. Debug all your code, even if it seems to be working.
9. Unit and regression tests make you sleep better.

(some of these rules should be taken with a grain of salt)

Categories: Chatter.

Back from vacation

I’m back in Sweden after 8 weeks in South Asia. During my trip I came to a big conclusion concerning the xade project:

Maybe it’s better to do a concept-art drawing application before I do the voxel sculpting application.

Now this might seem like a crazy idea, but it actually has a lot going for it:

To succeed with a voxel sculpting application it is really important to nail the 3D painting aspect, which should be similar to its 2D counterpart. Now, I have never done a painting program of any kind, so starting with a 3D-painting/sculpting program might be a little risky. Therefore it might be better to first do a painting program for 2D (specifically designed for concept art) and then use what I learned to create the 3D-sculpting program.

Categories: Chatter.

State of the Application 1

Tomorrow I travel to Nepal and Thailand for 7 weeks. Below are my accomplishments over the last 9 weeks and what I will try to accomplish when I get back to Sweden.

Voxel Engine
I have built a voxel engine around the concept of sparse octrees where each leaf stores a 16^3 grid of voxels. Each voxel is represented by a 16-bit “iso” value and then 8 bits each for R, G, B and specular. This works great, and by storing an array of voxels in each leaf we avoid the pressure of minimizing overhead in the node data structure.
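Packed as described (a 16-bit iso value plus 8 bits each for R, G, B and specular is 6 bytes per voxel), a 16^3 leaf carries about 24 KB of voxel data. A minimal sketch of that layout, with names of my own invention:

```python
import struct

VOXELS_PER_LEAF = 16 ** 3                     # 4096 voxels per octree leaf
BYTES_PER_VOXEL = struct.calcsize("<HBBBB")   # 16-bit iso + R, G, B, specular

def pack_voxel(iso, r, g, b, spec):
    """Pack one voxel: 16-bit iso value, then 8 bits each for R, G, B, specular."""
    return struct.pack("<HBBBB", iso, r, g, b, spec)

def unpack_voxel(data):
    return struct.unpack("<HBBBB", data)

leaf_bytes = VOXELS_PER_LEAF * BYTES_PER_VOXEL  # whole-leaf payload: 24576 bytes
```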

Right now the voxel engine is split into a server side and a client side, which run on separate threads. The server side is responsible for disk read/write, voxel edits and tessellation. The client runs in the same thread as the rest of the application and handles rendering and ray tracing. The main reason for this split is to avoid hiccups in the thread that runs the graphics, but it also has some really nice side benefits like parallelization.

Disk storage
I have already rewritten this part of the code 3 times. Right now I use a solution based on SQLite, which gives me atomic commits. This is really a must when we handle files of several hundred MB. I don’t really use any of the RDBMS stuff that comes with SQL; I just want to store tens of thousands of records of varying size, each with a unique 64-bit identifier, in a single file with transactions. I tried to write this kind of system myself first, and although it’s easy to get it to work, it’s very difficult to guarantee that data never gets corrupted on a crash. It is really convenient to use a 64-bit node identifier over a 32-bit one, since we then never have to reuse any identifiers.
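The whole scheme boils down to one table mapping a 64-bit node id to a variable-size blob, with every batch of writes wrapped in a transaction. A minimal sketch (table and function names are my own, not from the actual engine):

```python
import sqlite3

# One table: 64-bit node id -> variable-size blob. SQLite's INTEGER
# PRIMARY KEY is a signed 64-bit value, so it fits the node identifier.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, data BLOB)")

def write_nodes(records):
    """Write a batch of (id, blob) records atomically: all or nothing."""
    with db:  # commits on success, rolls back on any exception
        db.executemany("INSERT OR REPLACE INTO nodes VALUES (?, ?)", records)

def read_node(node_id):
    row = db.execute("SELECT data FROM nodes WHERE id = ?", (node_id,)).fetchone()
    return row[0] if row else None
```

If the process crashes mid-batch, SQLite’s journal rolls the file back to the last committed state, which is exactly the corruption guarantee that is hard to get right by hand.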

One of the main things I have researched the last 9 weeks is the rendering. I have come to the somewhat boring conclusion that Marching Cubes is probably the most efficient and artifact-free solution. Voxel ray tracing looks promising at first, but ray marching directly against the voxels is waaaaay too slow, and converting the voxels to a discrete SVO gives us the ugly Lego artifact. Building a renderer based on splats is possible, but a Marching Cubes based solution is not much more expensive and looks a lot better. Since we use an octree to store the nodes, it should still be possible to ray trace against our Marching Cubes triangles in real time using GPGPU.

Right now I use vertex buffer objects and store all the indexes in CPU memory. This minimizes the number of draw calls we have to make but increases the data traffic over the PCI bus. This gives us about 0.7 million polys at 60 Hz. When I get back I will try to push this higher by experimenting with index buffers and glMultiDrawElements. Since I’m planning to rely only on vertex colors for texturing, we want to be able to render more triangles than there are pixels on screen.

One big advantage of raster-based data over vector data is downsampling. This works really well, but when you do it adaptively with Marching Cubes you get pixel cracks between different resolutions. Hopefully this can be fixed by introducing “skirts” on all cubes, but I haven’t tested this yet.

To calculate normals you can either use the voxel gradient or calculate them from the triangles. I have tried both and come to the conclusion that calculating them from the triangles is probably better, but I’m not 100% sure yet.
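The voxel-gradient approach can be sketched with central differences over the iso field; the `sample` callback here is a stand-in for whatever leaf lookup the engine actually does:

```python
import math

def gradient_normal(sample, x, y, z, h=1.0):
    """Estimate a surface normal from the voxel field via central differences.
    `sample(x, y, z)` returns the iso value at a grid position."""
    gx = sample(x + h, y, z) - sample(x - h, y, z)
    gy = sample(x, y + h, z) - sample(x, y - h, z)
    gz = sample(x, y, z + h) - sample(x, y, z - h)
    length = math.sqrt(gx * gx + gy * gy + gz * gz) or 1.0
    return (gx / length, gy / length, gz / length)
```

The triangle-based alternative instead averages the face normals of the Marching Cubes triangles that share a vertex; the gradient version is cheaper per vertex but inherits any noise in the voxel data.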

I have written a widget system for UI rendering that works amazingly well. One of the core features is how I handle the XYWH coordinates for all the widgets. It basically works like this: all coordinates are relative to the widget’s parent. There are 2 types of coordinates, one for the window area and one for the client area. These coordinates can be mounted to the left or right edge. Finally, we store all the coordinates in a file and allow them to be changed through the UI, which lets you try different UI layouts without ever recompiling or restarting the application.
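A minimal sketch of the parent-relative resolution, reduced to the X axis; the mounting rule here is my reading of the description (a left-mounted coordinate is an offset from the parent’s left edge, a right-mounted one from its right), and all names are hypothetical:

```python
class Widget:
    def __init__(self, x, w, mount_right=False, parent=None):
        self.x, self.w = x, w           # parent-relative coordinate and width
        self.mount_right = mount_right  # offset measured from the right edge?
        self.parent = parent

    def absolute_x(self):
        """Resolve the absolute X by walking up the parent chain."""
        base = self.parent.absolute_x() if self.parent else 0
        if self.mount_right and self.parent:
            return base + self.parent.w - self.x - self.w
        return base + self.x

root = Widget(x=0, w=800)
panel = Widget(x=10, w=200, mount_right=True, parent=root)  # docked right
```

The nice property is that resizing `root` automatically keeps `panel` glued to the right edge, and since the stored numbers are just data they can live in a file and be edited at runtime.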

Below is a list of what remains to be researched when I get back home:

  • Procedural texturing.
  • Automatic UV-mapping in tessellation code.
  • Prioritize tessellation front-to-back.
  • Marching Cube skirts.
  • Use an odd number of voxels with redundancy per node?
  • Parallelization of tessellation.
  • Locally change voxel resolution.
  • Optimize away solid and empty spaces.
  • Parallelization of voxel edits.
  • GPGPU voxel edits.
  • Adaptive voxel edits.
  • The Level Set Method for deformation.
  • More than one voxel object per file.
  • Solidbits.

Categories: Technical.


The last couple of days I have spent some time thinking about the scope of xade and how it should fit into the current production pipeline for film & games. This is what I came up with:

So when you use xade you can either start from scratch or import a mesh, and when you are done you export a UV mapped mesh with corresponding textures.

I’m thinking of a ribbon-style UI where each stage in the xade pipeline has its own tab. This will clean up the UI and make things easier for the user.

Categories: Chatter.

Marching cubes works great

The last couple of days I have, among other things, implemented the Marching Cubes algorithm for rendering and it works great! Tessellation is pretty fast, and since we don’t require any texture bindings it should be possible to render the entire model with very few draw calls, maybe only 1 (!).

Using triangles for rendering is kind of boring compared to ray tracing, but for the moment it seems to work out a lot better. Ray casting directly against the voxel array is slow, and using an SVO is kind of ugly if you don’t use post-process blurring.

This means that Goal 1 of this project has to change a little. Here is an updated list of the features the future prototype should have:

1. A Storage Engine that can handle thin surfaces and normal calculations well.
2. Basic tools for voxelizing solids such as boxes and spheres.
3. Basic carve, sculpt and smooth brushes.
4. A renderer built on Marching Cubes with no visual artifacts.
5. A development framework: console, command prompt and basic UI.

This prototype is still a couple of weeks away. Hopefully I will make it before I travel to Nepal (middle of November).

The storage engine is still built around an octree, where each node stores an 8^3 voxel grid. This gives tons of benefits: mipmapping, recursive culling, efficient space mapping, fast ray tracing etc.

Categories: Technical.

SE version 2: Check!

After spending 10 hours fixing a bug I finally got my SE version 2 to work. However, the ray tracing speed seems to be a big disappointment, about 0.070 to 0.150 MR/s on 1 CPU core, which is of course way too slow for rendering. Moving this over to OpenCL might make it 10 to 50 times faster, but that’s still pretty slow. Another problem is that the data structures are kind of messy, and octree ray tracing requires recursion, which GPGPUs don’t handle all that well.

All this has led me to consider some other solutions:

1. A two-level grid for storing the voxels. The big advantage is that we can use simple line-drawing algorithms for ray tracing, the data structures are super basic, and we don’t require any recursion.

2. Triangle rasterisation with Marching Cubes.
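The line-drawing traversal in option 1 is essentially a 3D DDA in the style of Amanatides and Woo; a sketch for a single-level grid, which a two-level version would first run over the coarse grid and then repeat inside each occupied brick:

```python
def traverse(origin, direction, grid_size):
    """Step through all grid cells pierced by a ray (3D DDA).
    Yields integer cell coordinates until the ray leaves the grid."""
    cell = [int(c) for c in origin]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if direction[i] > 0:
            step.append(1)
            t_max.append((cell[i] + 1 - origin[i]) / direction[i])
        elif direction[i] < 0:
            step.append(-1)
            t_max.append((cell[i] - origin[i]) / direction[i])
        else:
            step.append(0)
            t_max.append(float("inf"))
        t_delta.append(abs(1 / direction[i]) if direction[i] else float("inf"))
    while all(0 <= cell[i] < grid_size for i in range(3)):
        yield tuple(cell)
        axis = t_max.index(min(t_max))  # cross the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```

No recursion, no stack, just three comparisons and an add per step, which is exactly what makes the two-level grid attractive on a GPU.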

Categories: Chatter.

SE version 2

Ok, time to rewrite the voxel Storage Engine; it will be fun to see how many times I have to do this during development. Version 1 of the storage engine was super basic, with just one big array storing all the voxels.

To create the smoothing tool we need to be able to store 2 versions of the volume, and I have come to the conclusion that the best way to solve this is by introducing an undo-redo history scheme. To be able to use really large filter kernels with our smoothing we want to use separable convolution, which needs a temporary buffer to store the intermediate data; this can also be solved with the undo-redo history scheme.
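A minimal sketch of such a scheme, assuming each edit snapshots only the leaves a tool touched (so the previous version of the volume stays readable while smoothing); the class and field names are my own:

```python
class VolumeHistory:
    """Undo-redo history over voxel edits; each entry stores old and new
    data for exactly the leaves one tool stroke touched."""
    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    def record(self, touched):
        """`touched` maps leaf id -> (old_data, new_data)."""
        self.undo_stack.append(touched)
        self.redo_stack.clear()  # a new edit invalidates the redo chain

    def undo(self, volume):
        if not self.undo_stack:
            return
        touched = self.undo_stack.pop()
        for leaf_id, (old, _new) in touched.items():
            volume[leaf_id] = old
        self.redo_stack.append(touched)

    def redo(self, volume):
        if not self.redo_stack:
            return
        touched = self.redo_stack.pop()
        for leaf_id, (_old, new) in touched.items():
            volume[leaf_id] = new
        self.undo_stack.append(touched)
```

The same structure doubles as the temporary buffer for separable convolution: the first (say, X) pass reads the `old` snapshots while writing intermediate data into the volume, and the final pass overwrites it in place.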

To ray trace the current storage engine we just step along the ray and check for an ISO surface. This works great and creates minimal artifacts, but it is dependent on the world size, which is completely unacceptable. In Baerentzen’s PhD thesis he uses a two-level grid to solve the problem, but I think an octree is a much better solution. The leaves in my octree will contain a 16^3 grid of voxels or something similar to that.

So version 2 of my storage engine will have the following features:
1. Octree based, with voxel grids in leafs.
2. Undo-redo history with support for temporary data.
3. Disk storage.
4. Basic statistics.

I want to minimize code entropy during the research phase of the project, and therefore I will exclude the following for now:
1. No equivalent to Photoshop Layers.
2. Only one voxel file open at one time.
3. All voxels must fit in memory.
4. No OpenCL/CUDA acceleration.

Categories: Chatter.

Probabilistic Risk Assessment

I have analysed the risks in the xade development; I focus mostly on technical risks here:

Likelihood means the percentage chance that I can’t get something to work, and Severity is how bad that would be for the project. With this graph in mind it’s easy to prioritize the order in which things should be researched.

Categories: Chatter.

Voxel and UI work

6 days of work on xade so far, but only 6 to 8 hours per day since there is always a lot of shit you have to take care of when you come home from a big trip.

I have been working on 2 parallel lines: one is the hardcore voxel code and the other is the UI. I really like splitting my work like this, since if I get stuck or really bored in one of the lines I can switch to the other.

On the hardcore voxel side of things I have found a wonderful PhD thesis by Andreas Baerentzen covering a lot of the things I need to research. His thesis introduced me to a couple of things I hadn’t thought much about:

1. Having the voxel value represent the signed distance to the closest ISO surface. This introduces some advantages, e.g. it’s easy to find the foot point. But there is also at least one drawback, which is that after you have done operations on the volume the voxel distances are probably fucked, and you have to run an algorithm called the Fast Marching Method to correct them.

2. The Level-Set Method, which is a more mathematically correct way of doing deformation sculpting. The theory is quite complex, but as I understand it, it comes down to 2 voxel look-ups and one gradient calculation per voxel, and then running the FMM algorithm on all the affected voxels.
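The foot-point property from point 1 can be sketched directly: for a signed distance field d, the closest surface point from p is p minus d(p) times the normalized gradient of d at p. The sphere field below is just an illustration, not anything from the engine:

```python
import math

def foot_point(p, distance, gradient):
    """Project p onto the iso surface using the signed-distance property:
    foot = p - d(p) * normalized gradient of d at p."""
    d = distance(p)
    g = gradient(p)
    length = math.sqrt(sum(c * c for c in g)) or 1.0
    return tuple(p[i] - d * g[i] / length for i in range(3))

# Illustration: signed distance to a unit sphere centered at the origin.
sphere_d = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
sphere_g = lambda p: p  # gradient of the sphere field points radially out
```

This only holds while the field is a true distance field, which is exactly why the distances have to be repaired with the Fast Marching Method after edits.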

When it comes to rendering the voxel volume, the way I see it there are 4 realistic solutions:

1. Convert the volume to triangles with the Marching Cubes algorithm and then let OpenGL render it.
2. Convert the volume to splats and then let OpenGL render it with the GL_POINTS and the GL_DISTANCE_ATTENUATION_EXT extension.
3. Ray casting directly against the voxel volume.
4. Convert the volume to a SVO and ray cast against it.

I will probably implement all of the above and compare the results but my guess is that it’s the two ray cast solutions that will win in the end.

On the UI side of things, I’m thinking of rendering everything with OpenGL. This has some big advantages, like speed, easy porting and that you don’t have to touch crummy OS APIs. The only disadvantage I can come up with is that translating the application to a non-ANSI character set will probably be a lot more difficult.

I have thought a lot about the UI design and have come up with a solution which borrows ideas from Photoshop, Mari, Chrome, Modo and Lightroom. In my opinion the ZBrush and Mudbox interfaces are ugly, so there’s not much to learn there.

In a later post I will talk more about the UI…

Categories: Chatter.

Goal 1

My first goal is to build a prototype with the following features:
1. A Storage Engine that can handle thin surfaces and normal calculations well.
2. Basic tools for voxelizing solids such as boxes and spheres.
3. Basic carve, sculpt and smooth brushes.
4. Basic paint brushes.
5. A CPU renderer based on ray tracing SVOs, with no visual artifacts, that should be easy to port to OpenCL.
6. Integrate the ray-traced image well with OpenGL, e.g. setting up the Z-buffer correctly.
7. A development framework: console, command prompt and basic UI.

The following things will NOT be in the prototype, “Focus is a matter of deciding what not to do”:
1. All voxels must fit in memory
2. Only one voxel file open at a time.

After the prototype is done I will research the following areas:
1. Try making a renderer based on splats.
2. Adaptive voxelization of solids.
3. Adaptive deformation with brushes.
4. Marching cubes with unique texturing.
5. A free transform tool like Photoshop’s.

Categories: Chatter.

First post

Hello World!

On this blog I (Henning Tegen) will blog about the development of xade, a voxel sculpting application.

Categories: Chatter.