Friday, November 28, 2014

Progressive LOD

We still have to apply this to entire scenes, but here you can see how it is possible to have progressive level-of-detail transitions with voxel data.
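The idea, in broad strokes: the object exists at several voxel resolutions at once, and as the camera moves, coarse voxels are split into their eight finer children (or merged back) instead of swapping the whole model at once. Here is a rough Python sketch of the level selection; the distance thresholds and level count are made up for illustration:

    import math

    # A rough sketch of distance-driven LOD selection for a voxel object.
    # Thresholds and the number of levels are invented for illustration.
    def pick_lod(distance, base_distance=10.0, max_level=11):
        """Drop one level of detail every time the distance doubles."""
        if distance <= base_distance:
            return 0
        level = int(math.log2(distance / base_distance)) + 1
        return min(level, max_level)

    # The transition is progressive: moving from level n to n-1 splits each
    # coarse voxel into its 2x2x2 children, so detail appears gradually.
    for d in (5, 15, 40, 160, 1000):
        print(f"distance {d:4} -> LOD level {pick_lod(d)}")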

Below you can see the same mushroom from the earlier post going through several LODs:


Here is another example, this time using a more regular object so you can better follow the voxel splits:


Wednesday, November 26, 2014

The Missing Dimension

I believe that when you combine voxels with procedural generation you get something that goes well beyond the sum of these two parts. You can be very successful at either of the two in isolation, but it is when you mix them that you open up a whole set of possibilities. I came to this realization only recently.

I was watching a TV series the other night. Actors were filmed against a green screen and the whole fantasy environment was computer generated. I noticed something about the ruins in this place. The damage was clearly done by an artist's hand. Look at the red arrows:


The way the bricks are broken (left arrow) reminds me more of careful chisel work than anything else. The rubble (right arrow) is carefully arranged and placed around the floor. Also, we should see smaller fragments of rock and dust.

While the artists were clearly talented, it seems they did not have the budget to create physically plausible damage by hand. The problem with the series environment was not that it was computer generated. It wasn't computer generated enough.

Consider physically based rendering. It is used everywhere now, but there was a time when artists had to solve the illumination problem by hand. Computing photons is no different from computing rolling stones. You may call it procedural generation when it is about stones and rendering when it is about photons, but these are the same thing.

As we move forward, I see physically based generation becoming a thing. But there is a problem. Until now we have been too focused on rendering. Most virtual worlds (like game scenes) are described only as a surface. You cannot perform physically based generation in a world that is only a surface. We are missing the inner dimension.

Our world is 4D. This is not your usual "time is the fourth dimension" pickup line. The fourth dimension is the what, as when you ask what is inside a box. Rendering has focused mostly on where the what turns from air into solid, which is a 3D surface. While 3D is good enough for physically based rendering, we need 4D for a physically plausible world.

Is it bad that we are not 4D? In games this translates to static worlds, or scripted destruction at best. You may be holding the most powerful weapon in the universe, but it won't make a dent in the floor. It shows everywhere: as poor art, as implausible placement of rocks, snow, debris and damage, and as a lack of detail in much larger features like cities, castles and landscapes.

If you want worlds that can be changed by their inhabitants, or if you want to generate content by simulation, you need to know your world as a volumetric entity. Voxels are a very simple way to achieve this.

Going 4D with your content is a bit of a problem. Many of the assets you may already have will not work. Not every mesh defines a volume. Often, meshes have holes in them. They do not show because they are hidden by other parts of the object. These are not holes like the center of a doughnut; they are cuts that make the mesh just a surface in 3D space, not a closed volume.
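There is a quick way to detect this kind of open mesh: in a closed, watertight mesh every edge is shared by exactly two triangles. Here is a minimal sketch of that test (my own toy code, not from any particular library):

    from collections import Counter

    def is_watertight(triangles):
        """triangles: list of (i, j, k) vertex index tuples. In a closed
        mesh, every undirected edge is shared by exactly two faces."""
        edges = Counter()
        for i, j, k in triangles:
            for a, b in ((i, j), (j, k), (k, i)):
                edges[(min(a, b), max(a, b))] += 1
        return all(count == 2 for count in edges.values())

    # A single triangle is open: each of its edges has only one face.
    print(is_watertight([(0, 1, 2)]))  # False

    # A tetrahedron is the simplest closed mesh: every edge is shared twice.
    print(is_watertight([(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]))  # True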

Take a look at the following asset:

The stem of this mushroom is not volumetric: it is missing its cap. This does not show because the top of the mushroom is sunk into the stem and the hole is completely hidden from sight. If you tried to voxelize this stem on its own, you would get unpredictable results. The hole is a singularity to the voxelization; it may produce all sorts of artifacts.

We have voxelization that can deal with this. If you voxelize the top and bottom together, the algorithm is robust enough to realize the hole is capped by the other pieces. But we just got lucky in this case; the same does not apply to every open mesh.

Even if you get meshes that are closed and topologically correct, you are still only describing a surface. What happens when you scratch that surface? If I cut the mushroom with a knife, it should reveal some sort of mushy, moist material. Where does this information come from? Whoever creates the asset has to put it there. The same applies to the bricks, rocks, plants, even the living beings of your virtual world.
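One simple way to carry that information is to describe the asset not as a surface but as a mapping from points in space to materials. Here is a toy sketch of a mushroom stem defined this way; the materials and dimensions are invented for the example:

    # A toy volumetric description: every point in space maps to a material,
    # so a cut anywhere still knows what it exposes. All values are invented.
    AIR, SKIN, FLESH = 0, 1, 2

    def stem_material(x, y, z):
        """A cylinder of radius 1 up to height 3, with a thin outer skin."""
        r = (x * x + z * z) ** 0.5
        if 0.0 <= y <= 3.0 and r <= 1.0:
            return SKIN if r > 0.9 else FLESH
        return AIR

    # Slice the stem at y = 1.5: interior samples report flesh, not an
    # empty shell, so a knife cut can reveal plausible inner material.
    print(stem_material(0.0, 1.5, 0.0))   # 2 (FLESH)
    print(stem_material(0.95, 1.5, 0.0))  # 1 (SKIN)
    print(stem_material(2.0, 1.5, 0.0))   # 0 (AIR)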

I think we have reached a turning point. Virtual worlds will remain static and very expensive to build unless we can make physically correct decisions about the objects in them. Whether we want to destroy them or to enhance them, we need to know what they are made of, what is inside.

Tuesday, November 25, 2014

Instance Voxelization

We are finishing the voxelization features in Voxel Studio. Here is how it looks:


At only 40x80x40 voxels, it is a good reproduction of the Buddha. You can still see the smile and the toes.

This computes 12 levels of detail, so when the object is distant we can resort to a much smaller representation. If you know what texture mipmaps are, you will see this is a very similar concept.
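The analogy is quite literal: each coarser level can be built by collapsing blocks of 2x2x2 voxels into one, halving the resolution every time, just like a mipmap chain. A minimal sketch of the idea follows; the real pipeline does more than this (for instance, it has to handle odd dimensions so the chain can continue):

    import numpy as np

    def next_lod(grid):
        """Collapse 2x2x2 blocks of a boolean voxel grid into single voxels.
        A coarse voxel is solid if any of its eight children is solid."""
        x, y, z = grid.shape
        blocks = grid.reshape(x // 2, 2, y // 2, 2, z // 2, 2)
        return blocks.any(axis=(1, 3, 5))

    def build_lod_chain(grid):
        """Keep halving the grid, mipmap style, while all dimensions are even."""
        chain = [grid]
        while all(d % 2 == 0 for d in chain[-1].shape):
            chain.append(next_lod(chain[-1]))
        return chain

    # Stand-in data: a solid box inside a 40x80x40 grid.
    grid = np.zeros((40, 80, 40), dtype=bool)
    grid[10:30, 5:75, 10:30] = True
    for level, g in enumerate(build_lod_chain(grid)):
        print(f"level {level}: {g.shape}")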

The LOD slider produces a very cool effect when you move it quickly: you can watch the model sweep from high to low resolution.

And here is the Dragon at 80x80x50 voxels:


Saturday, November 15, 2014

Cantor City

The Cantor set (or rather how you obtain it) is probably one of the simplest fractals out there. It was one of the first examples I encountered, and it helped me greatly in understanding whatever came after. Here is how I like to explain it:

1. We start with a horizontal segment.
2. We divide the segment into three equally sized segments.
3. We remove the segment in the middle.
4. We repeat from step (2) for the two remaining segments.



This produces a set of segments that puzzled mathematicians for a while. I am more interested in the fractal properties of the whole construct: it shows how a simple rule operating at multiple scales can generate complex structures. The set is too primitive to be used for realistic content, but it is the seed of many of the techniques you will need to achieve that.

This is how you would code the Cantor set using our grammars:

This is the real world, so we need to make sure the recursion stops. An easy way is to add another definition of "cantordust" for when the size is small enough. This definition does nothing, thus stopping any further subdivision:
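If grammars are not your thing, here is roughly the same pair of rules as a plain Python sketch (the minimum size of 1 is an arbitrary pick):

    def cantor_dust(x, size, depth=0, min_size=1.0):
        # The second, empty definition: segments that are small enough
        # do nothing, which stops any further subdivision.
        if size < min_size:
            return
        print(" " * depth + f"segment at {x:.2f}, size {size:.2f}")
        third = size / 3.0
        cantor_dust(x, third, depth + 1, min_size)                # left third
        cantor_dust(x + 2.0 * third, third, depth + 1, min_size)  # right third
        # The middle third is simply never emitted.

    cantor_dust(0.0, 27.0)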


Here you can see the output:


Let's make it more interesting. First let's add depth to it. Instead of segments we will use 3D boxes:


This already looks as if we were growing little towers. We can make the towers taller when there is enough room:



Let's add a random element. We will not split into three equal spaces anymore; instead, we will pick a random split for each iteration:
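In the Python sketch from before, this variation amounts to picking the split points at random and letting each box grow taller when it has more room (the ranges are invented for the example):

    import random

    def cantor_city(x, size, boxes, min_size=1.0):
        """Cantor dust with boxes: keep two outer parts of random widths,
        drop the middle, and let height grow with the available room."""
        if size < min_size:
            return  # the empty rule stops the subdivision
        height = size * random.uniform(1.5, 3.0)  # taller where there is room
        boxes.append((x, size, height))
        left = random.uniform(0.25, 0.4) * size   # random splits, not thirds
        right = random.uniform(0.25, 0.4) * size
        cantor_city(x, left, boxes, min_size)
        cantor_city(x + size - right, right, boxes, min_size)

    random.seed(1)
    boxes = []
    cantor_city(0.0, 81.0, boxes)
    print(f"{len(boxes)} boxes, for example {boxes[0]}")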


For a microsecond this image can fool you into thinking you have seen a city. I think that is pretty cool, considering it is only a few lines of grammar code.

Of course this is not a replacement for a city or buildings. The Cantor grammars are probably the simplest you can have. You will need a lot more code to produce something that can pass as a city. But odds are it will be mostly variations of the Cantor dust.

Sunday, November 9, 2014

Life without a debugger

Some programmers love the more literary aspects of coding. They enjoy learning new languages and the new structures of thought they bring. You will often hear words like "terse" and "art" from them. Some programmers like learning different frameworks, while others live and die by the not-invented-here rule. Some see themselves more as civil engineers: they have little regard for the nuances of coding, frameworks are just bureaucracy to them, and whatever they build must stand on its own merits.

I have been a member of all these camps at one moment or another, so really, no judgment here. But what probably unites all coders is that if you write a program, you will likely have to debug it.

If you have not stepped through your code at least once, you probably do not know what it is doing. Yes, you can unit test all you want, but even then you should also step through your test code.

Karma dictates that if you put a new language out there, you must also give people a way to trace and debug it. That is what we just did with our L-System and grammar language:




It turned out to be quite surprising. We are very used to tracing imperative languages like C++ and Java, but tracing the execution of an L-System is unlike anything I had tried before.

I understand the power of L-Systems and context-sensitive grammars better now. The system feels as if it had the ability to foresee, to plan features ahead. You can see it happening in the video: empty boxes often appear in many different spots, as if the program were testing different alternatives. That is in fact what is happening.
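You can get a taste of this with a toy L-System: all symbols are rewritten in parallel, so placeholders for future structure show up in the trace long before the rules that flesh them out have fired. The rules below are invented for illustration:

    # A toy L-System trace: every symbol is rewritten in parallel each step.
    rules = {
        "C": "BT",  # a city block becomes a base with a tower on top
        "B": "bB",  # the base keeps adding floors
        "T": "tT",  # the tower keeps adding floors too
    }

    state = "C"
    for step in range(5):
        print(f"step {step}: {state}")
        state = "".join(rules.get(symbol, symbol) for symbol in state)

Note how the tower symbol T is present from the very first rewrite even though its floors only materialize in later steps: the trace shows a plan being refined, not a list of instructions being executed.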

It looks amazing that end features like the tip of a tower may appear even before the base. In reality the program has already decided there will be a base, but its specific details are still a bit fuzzy, so they come up much later. Once they do appear, everything connects as it should and all the predictions line up properly.

None of that required planning from the programmer. In this case the code is not a series of instructions; it is more like a declaration of what the structure should be. The blocks fall into place, and the debugger just shows how.



It reminds me of another type of code at work: DNA. Now that would be one really cool debugger.