Sunday, December 17, 2017

Making the Citadel

We just put up a video of how the Magic Citadel demo for UE4 was built:


The demo is not available yet; we are still working on the game side of it in UE4, but the Citadel model is pretty much complete at this point. I would like to cover a couple of aspects of this experience that I find interesting.

A question I often get is why use voxels at all. I usually point at the obvious bits: if you want to do real-time constructive solid geometry (CSG), pretty much anything else is too slow. CSG is what allows you to create game mechanics like harvesting, tunneling, destruction and building new things. Also, if you are doing procedural generation of anything that goes beyond heightmaps, voxels make it much easier to express and realize your procedural objects into something you can render using traditional engines like UE and Unity.
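To make that first point concrete, here is a minimal sketch in plain C++ (nothing Voxel Farm specific, just an illustration) of why voxel CSG is cheap: carving a shape out of an occupancy grid is constant work per voxel, with no topology to repair afterwards.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch (not Voxel Farm's actual API): a dense occupancy grid
// where carving a sphere is a trivial per-voxel test. Polygon-based CSG
// would need plane splitting and retriangulation instead.
struct Grid {
    int size;                     // voxels per axis
    std::vector<uint8_t> solid;   // 1 = material, 0 = air
    Grid(int s) : size(s), solid(s * s * s, 1) {}
    uint8_t& at(int x, int y, int z) { return solid[(z * size + y) * size + x]; }
};

// Subtract a sphere: constant work per voxel touched, no topology to fix up.
void carveSphere(Grid& g, float cx, float cy, float cz, float r) {
    for (int z = 0; z < g.size; z++)
        for (int y = 0; y < g.size; y++)
            for (int x = 0; x < g.size; x++) {
                float dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx * dx + dy * dy + dz * dz < r * r)
                    g.at(x, y, z) = 0; // carve to air
            }
}
```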

What I rarely say is that once you work with voxels, your mind changes. I let people figure this out by themselves; I do not want to be that weird guy saying you really need to try LSD. You change because you begin seeing your entire project as a single fabric of content. You feel more like you are working on a canvas. There is no difference between a tower roof and terrain you have terraformed. It is a really distinct feel, one that cannot be explained, only experienced.

If you have developed for UE4 or Unity before, think of how you would approach a project like this Citadel. While it is possible, you would be building it out of a myriad of objects placed in your scene. You would have an object for the terrain, static meshes for the towers and walls; even the rocks making up your cliffs would be a bunch of instanced meshes clearly intersecting each other. Simply put, there is no canvas; instead, you have a collection of things.

If you want to have large organic shapes, like a massive spiral tower that slowly unravels over hundreds of meters, you would need to carefully plan how to deal with all this unique geometry. The image below shows an example of this from the Citadel:


It gets messy. This often leads to not having unique geometry at all, as it is too much trouble. That is unfortunate, because unique geometry can take your content to a whole new level. Once you have experienced it for a while, going back to the traditional instance-based approach is immersion breaking, at least it is for me now.

When you build out of individual small pieces, even if they have LODs of their own, their agglomeration cannot be trivially condensed into single objects that will LOD efficiently. Serious consideration needs to go into which objects you use to build the world, how large they can be, and how you can reuse them and create cheap variations of them. All this planning takes a lot of work and, above all, a great deal of experience.

This is why it takes a Triple-A team to produce complex scenes and rich open worlds. Even though there are plenty of very talented artists out there, the slew of tricks you need to apply remains a veiled, mysterious art. We should not need GDC talks. The current state of the industry is as if Microsoft Word limited the kind of novel you could write with it, and only those versed in Word's options and macros could create compelling fiction with it.

As I see it, it is really about the "fabric" that makes the virtual world. Once it becomes an organic canvas, you can automate tricks like LODs, culling and visibility sets in simple, robust ways. Let the computer do the hacks for you.

The other advantage of developing a virtual world as if it were a canvas is that your workflow becomes closer to what you experience working in Photoshop, versus the Maya-Blender experience. This is one of my favorite bits in the video above; it starts around the 2:54 mark. The artist first defines the basic volumes and then continues to refine them. I find this very intuitive and close to how people create in pixel-based systems like Photoshop.

Speaking of artists, this Citadel project was possible thanks to Ben, who became part of the Voxel Farm team early this year. The amount of work he was able to put into this Citadel is incredible, as is the quality of his work. Ben caught everyone's attention as a player-builder in Landmark, under the Ginsan alias. Here is one voxel beauty he created back then:

Screenshot from Landmark (SOE/Daybreak)

A true Renaissance man, Ben also created the superb music for the video above. He often tweets about his progress in new Voxel Farm projects; if you are curious about what he is working on, make sure to follow him: https://twitter.com/adamiseve

Tuesday, August 29, 2017

Is voxel data bigger than polygon data?

We just got some fresh measurements that I would like to share.

Voxels and polygons are alternative forms of storing and visualizing 3D information. They are pretty much equivalent in the sense that both can represent the same information; the key difference is the penalties attached to each method.

For instance, if you want to change the world in real time, like making holes, cutting pieces or merging different shapes, voxels are likely to outperform polygons. The same applies if you want to merge layers of procedural content in real time. This is fast because voxels are a much simpler representation of the content. If you were doing this with polygons, you would have to use more complex and slower methods.
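As an illustration of just how simple these merge operations are, assume layers stored as density fields in [0, 1], where higher means more solid (a common convention, not necessarily the exact one we use). Merging layers is then a single per-voxel pass:

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative only: merging two procedural layers stored as density
// fields is one pass of per-voxel min/max. Union keeps the denser
// sample; subtraction clamps against the inverted second layer.
void unionLayers(float* dst, const float* a, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; i++)
        dst[i] = std::max(a[i], b[i]);        // union: solid if either is solid
}

void subtractLayer(float* dst, const float* a, const float* b, std::size_t n) {
    for (std::size_t i = 0; i < n; i++)
        dst[i] = std::min(a[i], 1.0f - b[i]); // subtract b from a
}
```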

On the other hand, polygons can represent and reproduce some surfaces more economically. This is the reason why the graphics industry adopted polygons so early.

One aspect where we can do an apples-to-apples comparison is data size. The experiment would be this: get a fairly large scene, store it both as voxels and as polygons, and see which dataset is larger. We would be measuring the final size of the package, that is, how much data you need to download to have a complete scene.

This is what we did. We used Ben's work-in-progress scene, which features a massive citadel. The following video shows a character running around this place. You do not have to watch all of it to realize it is a pretty big place:


(Please ignore the rough edges in the video, this is an un-optimized test aimed to get a feeling of the scale of the place.)

Everything you see there is voxel content. There are no props or instances. This is all unique geometry, forming a watertight mesh:


Here are the core stats about the scene:

54,080,225 triangles
2,203,456,000 voxels

This is the first takeaway. It takes 2.2 billion voxels to represent the same content as 54 million polygons. You need roughly 40 times more voxels than polygons.

Is the voxel dataset 40 times the size of the polygon dataset?

That, as you guessed, depends on how much smaller a voxel is than a polygon, and on the overhead of storing each. Let's talk about that.

We store meshes as:
  • a list of vertex coordinates (3 x 32bit float)
  • a list of faces, where each face is three indices into the vertex list (3 x 32bit int)
  • a list of UV pairs, one per each vertex in a face (2 x 32bit float)
  • a list of material identifiers, one per each face (16 bit)
For the entire scene, the final compressed version of this data is 527 MB.
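As a back-of-the-envelope check, this is roughly what the layout above amounts to per element. The struct names are mine, for illustration only; they are not Voxel Farm's actual format:

```cpp
#include <cstdint>

// Rough sketch of the per-element layout described above.
struct Vertex {
    float x, y, z;          // 3 x 32-bit float = 12 bytes
};

struct Face {
    uint32_t v[3];          // 3 x 32-bit index  = 12 bytes
    float    uv[3][2];      // 3 UV pairs        = 24 bytes
    uint16_t material;      //                     2 bytes
};                          // 38 logical bytes per face (padding ignored)

// 54,080,225 faces * 38 bytes ~= 2.05 GB raw before counting vertices;
// the 527 MB figure above implies roughly 4:1 compression.
```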

Voxels, on the other hand, store:
  • attributes (empty, has material, has UV, etc. 8bit int)
  • one 3D point (3 x 8bit float)
  • up to 12 UV entries with surface properties (each 64bit)
  • inner material (16bit int)
The compressed final version of the voxel data is 1,210 MB.
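And the same exercise for the voxel record (again, the names are mine and the layout is a simplification):

```cpp
#include <cstdint>

// Sketch of the voxel record described above. The UV list is variable:
// only voxels that cross a surface carry UV entries, which is why the
// dataset stays surface-bound in practice.
struct VoxelRecord {
    uint8_t  attributes;    // empty / has material / has UV flags
    uint8_t  point[3];      // one 3D point, 8 bits per axis inside the cell
    uint16_t innerMaterial; // 16-bit material id
    uint8_t  uvCount;       // 0..12 entries actually stored
    uint64_t uv[12];        // surface properties, 64 bits each
};

// A fully "busy" voxel is ~104 logical bytes, but an empty one can be a
// single attribute byte, so the average sits far below the worst case.
```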

It seems the voxel data takes twice the space. This somehow feels right: considering everything we have heard about voxels versus polygons, it is no surprise that voxels take twice the space of polygons for the same content.

But there is a little problem with this test. It is not really apples-to-apples. Here is why:

The polygon version of the content captures only the visible surfaces, that is, where solid materials meet air. These are the portions of the model you can actually see.

The voxel version of the content also captures hidden surfaces. While you cannot see these initially, they may become exposed later due to changes made by the viewer to the scene, for instance, while destroying or building things.

This image shows why these two sets of surfaces are different:


The red arrows point to surfaces that appear in the voxel set but are not included in the polygon set.

Luckily for us, we can change the contouring rules and also produce these surfaces in the polygon dataset. After collecting a new set of stats for this configuration, the new polygon count is 122,470,300 triangles. Once this is compressed, the final storage is 1,105 MB.

Now this comes very close to the voxel dataset size. Does this make any sense?

Maybe what is most surprising is that we expected the sizes to be different at all. In both cases, we are capturing surfaces. Even if they are fully volumetric, voxels only really get "busy" around surfaces. This is not much different from polygons.

Of course, there are nuances in how the information is compressed. In each case, we could be using tailored compression schemes. But at this point we would be getting diminishing returns, and the ratio between voxel data and polygon data is not likely to change much.

If you have questions or opinions about these measurements, I'd love to discuss them. Just post a comment below.

Friday, June 30, 2017

Unity versus Unreal

This topic is as divisive as the US 2016 presidential election, so I'll tread carefully.

As a middleware maker, it makes no sense for us to have favorites. We do our best to keep the integrations of Voxel Farm on par so we reach as many users as possible. As an individual, I see no problem stating I prefer Unreal, but this is only because it is an all-C++ environment. It is not a rational thing.

This post, however, is not about how I feel. It is rather about the state of the two engines and how much they facilitate procedural generation and working with voxel data. I think many of the issues we have encountered over the past few years are common if you are doing a similar type of work with these engines. Hopefully, our story can help.

Let's start with the visuals. Both Unity and Unreal are capable of rendering beautiful scenes. Both are also able to render at very high frame rates, even for fairly complex content. This has likely been the lion's share of their R&D for years now. Unity has one crucial advantage over Unreal: it natively supports texture arrays. Unreal almost supports them; in fact, we managed to make them work in a custom branch of UE4 with little effort. However, this is not possible with the out-of-the-box Unreal distribution. That is a dealbreaker if your middleware is to be used as a plugin, as is our case.

Texture Arrays in Unity allow precise filtering and high detail

Texture arrays make a big difference if you need complex materials where many different types of surfaces must be splatted in a single draw call. When an engine lacks texture array support, you must fall back to 2D atlasing. This raises a whole host of issues, like having to pick mip levels yourself and wasting precious memory padding textures to avoid bleeding. When you hit this low point, you begin to seriously question your career choices.
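To give a feel for the padding cost, here is a toy computation with illustrative numbers only. A common rule of thumb is that each atlas tile needs a gutter of 2^(m-1) texels to survive m mip levels without filtering bleeding into the neighboring tile:

```cpp
#include <cstdio>

// Illustrative numbers only: the gutter doubles with every extra mip
// level, so the wasted memory grows quickly with the mip count.
int main() {
    const int tile = 256;                  // usable texels per tile side
    for (int mips = 1; mips <= 5; mips++) {
        int gutter = 1 << (mips - 1);      // texels lost on each side
        int padded = tile + 2 * gutter;
        double waste = 1.0 - double(tile) * tile / (double(padded) * padded);
        printf("%d mips: %d+%d texels per side, %.1f%% wasted\n",
               mips, tile, 2 * gutter, waste * 100.0);
    }
    return 0;
}
```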

If your application uses procedural generation, it likely means the contents of the scene are not known while the application is in design mode. This is at odds with how these engines have evolved to work. If your application allows users to change the world, it only gets worse. For the most part, both engines expect you to manage new chunks of content in the main thread. This is something that, if left unattended, can cause severe spikes in your framerate.
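The pattern that saved us, sketched below in generic C++ (this is not the plugin's actual code), is to keep the expensive contouring and meshing on a worker thread and leave only the cheap hand-off to the engine on the main thread:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal sketch: heavy mesh generation runs on a worker thread; only
// the cheap "hand the buffer to the engine" step stays on the main
// thread, where both Unity and Unreal require it.
class ChunkWorker {
public:
    ChunkWorker() : worker([this] { run(); }) {}
    ~ChunkWorker() {
        { std::lock_guard<std::mutex> l(m); done = true; }
        cv.notify_all();
        worker.join();
    }
    void submit(std::function<void()> buildJob) {
        { std::lock_guard<std::mutex> l(m); jobs.push(std::move(buildJob)); }
        cv.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> l(m);
                cv.wait(l, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty()) return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job(); // expensive contouring/meshing happens off the main thread
        }
    }
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> jobs;
    bool done = false;
    std::thread worker; // declared last so it starts after the other members
};
```

On the main thread, you would then poll a queue of finished buffers once per frame and feed them to the engine, which keeps the per-frame cost bounded.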

There are multiple aspects involved in maintaining a dynamic world. First, you must update the meshes you use to render the world. This is fairly quick in both engines, but it does not come free. Then, you must create collision models for the new geometry. Here, Unreal does better. Since you have closer access to the PhysX implementation, you can submit a much simpler version of the content. In Unity, you may be stuck using the same geometry for rendering and collision. (EDIT: I was wrong about this, see the comments section.) From reading their latest update, I see this motivated the Card Life developers to ditch PhysX collisions altogether.

Card Life, made in Unity, features a hi-res voxel world

Voxel Farm allows players to cut arbitrary chunks of the world, which then become subject to physics. Unity was able to take fragments of any complexity and properly simulate physics for them. Unreal, on the other hand, would model each fragment as a cube. Apparently, PhysX is not able to compute convex hulls, so for any object subject to physics you must supply a simplified model. Unity appears to create these on the fly. For Unreal, we had to plug in a separate convex hull generation algorithm. Only then could we get the ball rolling, literally.

When it comes to AI and pathfinding, both engines appear to use Recast, a third-party navigation mesh library. Recast uses voxels under the hood (go voxels!), but this aspect is not exposed by its interface. For a voxel system like ours, it is a bit awkward to be submitting meshes to Recast, which are then voxelized again and ultimately contoured back into navigation meshes. But this is not bad, just messy. There is one key difference here between Unreal and Unity: Unreal will not let you change the scope of the nav-mesh solution in real time. That means you cannot have the nav-mesh scope follow the player across a large open world. It is unfortunate, since this is a tiny correction if you can modify the source code, but again, for a plugin like Voxel Farm that is not an option.

Dynamic nav-mesh in UE4

This brings me to the last issue in this post, which is the fact that Unreal is open source while Unity is closed. As a plugin developer, I find myself surprised to think a closed-source system may be more amicable for plugin development. Here is my rationale: so far, the open-source model has been great at allowing us to discover why a given feature will not work in the official distribution. You can clearly see the brick wall you are about to hit. For application developers, open source works better because you can always fork the engine code and remove the brick wall. The problem is this takes the pressure off, and the brick wall stays there for longer. In Unity, both application and middleware developers must use the same version of the engine. I believe this creates an incentive for a more complete interface.

I'm sure there is more to add to this topic. There are some key aspects we still need to cover for both engines, like multiplayer. If you find any of our issues to be unjustified, I would love to be proven wrong, for the betterment of our little engine. Just let me know by dropping a comment.

Tuesday, May 2, 2017

Plugin Status

I'm happy to see our team of excellent developers chez Voxel Farm has made quick and significant improvements to our plugins for Unreal Engine 4 and Unity 5.

The UE4 plugin is now a proper UE4 plugin, not just an integration example anymore. This opened up a whole new set of possibilities. In a very short time, we were able to put together this video from different scenes and interaction modes within UE4:


The new Voxel Farm UI makes it quite simple to add Voxel Farm to any existing or new project. There is a button that will do that for you, requiring you to just point to the target project:


The plugin already offers blueprint access for typical tasks like block editing, voxelization and physics. The threading model is much better, resulting in a smoother experience.

There is a new demo for UE4, now including the plugin:


If you want to get a feeling of how the plugin is used in UE4, these topics will help.

If you are thinking Unity gets no love, you would be wrong. Most of our recent efforts went into the Unity plugin. I will cover this in my next post.


Thursday, March 23, 2017

Destroy The Block

"Destroy The Block" is a new demo we put together to showcase the new Unreal Engine 4 plugin. In this post, I will go over what this took.

The demo will soon be included in Voxel Farm's demo package; meanwhile, here is a video:



If 20 min of that was not enough, here is an earlier video of just driving around town in different cars:



We did not create this town model. It was a Minecraft import. Following a comment on this blog by Piotr Kucharczyk, I took a look at Minecraft's Anvil format. It turned out to be quite reasonable and easy to use.

After a few hours of work, I was able to see Minecraft levels in Voxel Studio. I started with the King's Landing model; I was curious whether our systems would be OK with such a complex model. It turned out to be alright:


This model was not a good option for several reasons; mainly, it was too crowded for any first-person gameplay to happen. Maybe riding a voxel dragon and setting the city on fire, but that would be too obvious.

So we settled on the town level. The natural environment, which is not blocky but rather smooth, was created with a single Smart Biome in Voxel Studio in a few minutes:


We then imported this project into Unreal Engine 4, using the new Voxel Farm plugin. It took some time to figure out the right scale for the scene. Since Voxel Farm's voxels are much smaller, the default configuration felt closer to a Godzilla/Kaiju simulator. That would have been a nice demo, but I was looking for a more human-scale experience.

Minecraft levels may appear simple in the mind's eye, but a level like this town is insanely complex. All buildings, without exception, have intricate interiors. Here you can see a cross section of a residential tower:


As you can see, each apartment is fully defined; they even have little beds!

To further complicate things, the draw distance needed to be insanely high so that details like windows would appear when viewed from far away:


This tall building is 1.5 km away, but it is still rendered in full detail. The player can use the sniper scope at any time, and the switch must be immediate. There is not enough time to load a higher-definition version of the building.

Mesh optimization really helped here. Any surface that contains multiple voxels of the same material can be heavily optimized. The following image shows how this makes a big difference in triangle counts:


I do not think vanilla Minecraft does this. Just thinking about how many triangles they need to push gave me a new sense of respect for their rendering engine.
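For the curious, here is a sketch of the merging idea, often called greedy meshing. I am not claiming this is Voxel Farm's exact algorithm, but it captures why a flat wall of N x M identical blocks collapses into a single quad instead of N*M of them:

```cpp
#include <cstdint>
#include <vector>

// One merged rectangle of same-material faces within a slice.
struct Quad { int x, y, w, h; uint8_t mat; };

// Greedy meshing over one 2D slice of exposed faces: grow each quad as
// wide, then as tall, as the material allows, and mark cells as used.
std::vector<Quad> greedyMesh(const std::vector<uint8_t>& slice, int W, int H) {
    std::vector<uint8_t> used(W * H, 0);
    std::vector<Quad> quads;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            uint8_t m = slice[y * W + x];
            if (m == 0 || used[y * W + x]) continue;
            // Grow to the right while the material matches.
            int w = 1;
            while (x + w < W && !used[y * W + x + w] && slice[y * W + x + w] == m) w++;
            // Grow downward while every cell in the next row matches.
            int h = 1;
            bool ok = true;
            while (ok && y + h < H) {
                for (int i = 0; i < w; i++)
                    if (used[(y + h) * W + x + i] || slice[(y + h) * W + x + i] != m) { ok = false; break; }
                if (ok) h++;
            }
            // Mark the merged rectangle so its cells are not emitted again.
            for (int j = 0; j < h; j++)
                for (int i = 0; i < w; i++) used[(y + j) * W + x + i] = 1;
            quads.push_back({x, y, w, h, m});
        }
    return quads;
}
```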

The main goal of this demo was to tune the UE4 plugin and in particular the physics. We also spent some effort making sure the whole scene, including terrain, would load in 10 seconds or so. The demo's behavior and interaction with the Voxel Farm plugin were done using UE's blueprints. There is not a single line of C++ in this project. 

I can say the demo is quite fun. My girls have spent countless hours just driving around and destroying stuff. At the beginning, they were quite afraid of breaking anything, as if the police would come after them. Once they realized there were no consequences, they were able to fully unleash their destructive instincts.

One last thing I would like to point out in these videos is what you cannot see: LOD changes and framerate hitches. This is achieved thanks to our new scene management system, which I began to cover here and here (third and final part of this series coming next).

The Minecraft import should become a standard Voxel Farm feature soon, also depending on the interest we see around it. If you would like to do the same for one of your projects, just let us know.

It was great that this entire exercise was triggered by a reader's comment to another post. As usual, I look forward to your comments and feedback.

Thursday, February 2, 2017

The very-far-away

The previous post described a new system that allows rendering rich surfaces we call "meta-materials" using low-resolution geometry. Meta-materials cover viewing ranges from 10 meters out to 100 meters. What about anything more distant than that, that is, the range from tens of kilometers down to 100 meters?

It turns out the same system applies. You can think of this as "meta-meta-materials", we just do not call them that because one "meta" in a name is already too much. We have multiple objects that do fit that description. A terrain biome is one example.

In this post, you can see the results of applying this method to biome objects. All images are in faux solid color, which we use to make sure feature placement is correct.

Here is a single biome and the amount of geometry it takes to represent it:


In order to capture the detail, this biome also uses 1024x1024 texture maps for diffuse color, normals and other maps required for physically based rendering. Terrain voxels, which are generated on the fly, emit UV coordinate pairs which link the voxel's position in the world with the right section of these texture maps.
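The UV emission itself can be very simple. As a hypothetical example (the names below are made up for illustration, not our actual pipeline), the coordinates are just the voxel's normalized offset within the biome's footprint:

```cpp
// Illustrative only: map a terrain voxel's world position to its biome
// texture coordinates as a normalized offset into the biome footprint.
struct UV { float u, v; };

UV biomeUV(float worldX, float worldZ,
           float biomeOriginX, float biomeOriginZ, float biomeSize) {
    return { (worldX - biomeOriginX) / biomeSize,
             (worldZ - biomeOriginZ) / biomeSize };
}
```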

Here you can see multiple biomes in the same image, again in faux color, covering an area of approximately 3000 square kilometers:


Since most detail is contained in the textures, it is possible to use much coarser geometry. The following images show that we can crank up the mesh simplification and still obtain fairly good-looking features:


As a creator of worlds, you do not need to think about any of this; the feature is entirely transparent. These detail textures are generated automatically. Granted, all the content in these images was generated by our procedural algorithms, but even if you had custom-made maps, you would not need to be concerned with creating and maintaining the detail textures.

Like I said in the previous post, this is a technique frequently used in modern polygon-based terrain. The key here is that it now works on voxel terrains. These environments can be modified in real time by players. They can harvest materials, dig trenches, even blow out entire craters in real time.

Monday, January 30, 2017

Prettier, Faster Terrains

We will be updating the terrain systems in Voxel Farm soon. Hopefully, it will get a lot prettier and faster. It is not often you get improvements in these two, for the most part, opposite directions. In this case, it seems we got lucky.

It was thanks to a synergy between two existing aspects of the engine that get to play together really well. One is UV-mapped voxels, the other is meta-materials.

Here is how it works: a single meta-material describes a type of terrain, for instance, a mountain cliff. Within this single meta-material you may find different materials. In the case of a cliff, that could be exposed rock, mossy rock, grass, dislodged stones, dirt, etc. An artist gets to define how the meta-material surface is broken down into these sub-materials. The meta-material also has a volumetric definition, which is a displacement map that can be carefully tied to the sub-material map.

When you are close to the meta-material's surface, it must be rendered as full geometry. This is because features in the meta-material, let's say a rock that sticks out, can measure up to dozens of meters. This content must be made of actual voxels so it can be harvested, destroyed, etc. It is not just a GPU displacement trick.

As you move farther away from these features, using geometry to capture detail becomes expensive. You face the hard choice of keeping a high geometry density, or dialing down the geometry and losing detail.

The new terrain system can dial down the geometry but keep the appearance of detail by using automatically generated textures for the meta-material. For the close range, it still uses geometry to capture detail, but at a certain distance the meta-material displacement can be represented with just a normal map. High-resolution sub-material textures for grass, rock, etc. are not needed anymore. A single color map is able to capture the look of the meta-material from this distance. These are only a few extra maps that can be reused anywhere in the scene where the meta-material appears.
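Baking that normal map from the displacement map is standard image processing. Here is a minimal sketch using central differences; this is the generic technique, not necessarily the exact filter we use:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bake per-texel normals from a heightmap/displacement map using
// central differences. Generic technique, shown for illustration.
struct N3 { float x, y, z; };

std::vector<N3> bakeNormals(const std::vector<float>& height, int W, int H,
                            float texelWorldSize) {
    std::vector<N3> out(W * H);
    auto h = [&](int x, int y) {
        x = std::max(0, std::min(W - 1, x));   // clamp at the borders
        y = std::max(0, std::min(H - 1, y));
        return height[y * W + x];
    };
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            float dx = (h(x + 1, y) - h(x - 1, y)) / (2 * texelWorldSize);
            float dy = (h(x, y + 1) - h(x, y - 1)) / (2 * texelWorldSize);
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            out[y * W + x] = { -dx / len, -dy / len, 1.0f / len };
        }
    return out;
}
```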

The following image shows a single meta-material that uses geometry for the close range and texture maps for the medium-far ranges:


The colors in the wireframe view show where each method is applied. Just by comparing the triangle densities you can see this saves a massive amount of geometry:


This method is not new in terrain rendering; however, it is quite new in voxel terrain. It is all possible thanks to the fact that our voxels can have UV coordinates. Voxels output by the terrain component in the green area have UV coordinates. These coordinates make sure the normal, diffuse and other maps created to render the meta-material at this distance match the volumetric profile and sub-material patterns of the meta-material up close.

The beauty of it is that this works with any type of terrain topology, not just heightmaps. If you are doing caves, the cave walls, ceilings and ground are very distinct meta-materials, and they would all benefit from this method. And it should be all automatic: we can turn this system on and it will not require artists to create any new assets.

We are still figuring out how to solve some kinks in the system, but so far I am very pleased with the results. I will be posting more pictures and videos eventually.

Monday, January 23, 2017

Boxes to Voxels to Boxes

MagicaVoxel is an incredibly fun and popular voxel editor. It allows you to recreate that pristine volumetric 8-bit look that never was, but that we all remember so fondly. We are often asked whether we could support that kind of look in Voxel Farm scenes, so the team quickly wrote an importer for MagicaVoxel models:


The following video shows the experience, from importing MagicaVoxel models to running a full environment inside Unreal Engine 4:


As you can see from the video, once we began importing these models into the scenes, we figured it would be nice if the entire world was blocky too. This was now outside MagicaVoxel's realm, but it turned out to be quite simple to switch off the smooth surfaces in Voxel Farm and make it all look blocky.

A very interesting turn was how to handle the LOD. As I was posting work-in-progress images of this on Twitter I got some interesting feedback. Jens Blomquist, who wrote Blockscape, mentioned he chose not to use LODs in his block game since the larger blocks in the distance produced confusing distance cues. What he meant can be clearly seen in the following image:


Here the boxes in area A should be smaller than boxes in area B.

Paniq (Leonard Ritter) suggested an interesting experiment: what if we used smooth surfaces for distant LODs but kept blocks for the near range? It did not turn out well, but it was worth a try:


I found another illusion-based trick that did introduce some improvement: decrease the LOD near the player (+2), increase the LOD for buildings (-2), and leave the far-away LOD unchanged (+0). Thanks to the adaptive scene density, the architecture did not need to be degraded and could remain at the best LOD possible:


Then I realized all of these were solutions to a problem we should not have had in the first place. We were equating voxels to boxes, and that was wrong. Instead, we should be using the variable-sized voxels to encode constant-sized boxes. Unfortunately, I did not get to test that theory because lunch was served. When I was done, it was time to go back to non-boxy things. Maybe some day in the near future...

Tuesday, January 17, 2017

When Mountains Move

There is an aspect of procedural generation I do not see discussed often. What happens when you have layers of procedural content, add hand-made content on top of it, and then go on to improve the procedural generation?

Last week we released an update to our tools that makes it really simple to create terrain biomes. You can see it in action here:


This system can produce fairly good-looking terrain with just a few clicks; however, there are still some aspects of it I wanted to improve.

One key issue, which you can see in this video, is that it tends to produce straight lines. There is still some "diamond" symmetry we need to address in the core algorithm. The image below shows a really bad case, where the produced heightmap has parallel features forming an almost perfect square:


On the left, you can see the generated heightmap; the right panel shows a render of the same heightmap.

This did not take long to fix. Instead of running the generation algorithm on a grid made of squares, switching to irregular quadrilaterals is enough to break up most of the parallel lines; a small sketch of the idea follows.
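Here is a sketch of that kind of fix: jitter each lattice vertex with a deterministic hash so the generator no longer samples a perfectly square grid. The hash and jitter amount below are illustrative choices, not our production values:

```cpp
#include <cstdint>

// Deterministic per-vertex hash in [0, 1): the same (x, y, seed) always
// jitters the same way, so the terrain stays reproducible.
static float hash01(int x, int y, uint32_t seed) {
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u + seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h ^ (h >> 16)) / 4294967296.0f;
}

// Displace a lattice vertex by up to +/- 35% of a cell in each axis,
// turning the square grid into irregular quadrilaterals.
void jitterVertex(int ix, int iy, float cell, float& outX, float& outY) {
    const float amount = 0.35f; // fraction of a cell; keeps quads non-degenerate
    outX = (ix + (hash01(ix, iy, 1u) - 0.5f) * 2.0f * amount) * cell;
    outY = (iy + (hash01(ix, iy, 2u) - 0.5f) * 2.0f * amount) * cell;
}
```

The following image shows the results after the fix: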


A simple fix or improvement like this leaves us with bigger questions: how do we make this backward compatible? Do we make it backward compatible at all, or do we just apologize and ask human creators to relocate their data?

We could provide an option to turn this improvement off. That would be the most diplomatic approach. But then what about all the other improvements we have planned for the future, do they each get their own setting? At this point, it seems we would be complicating the UI with many options that are meaningless to new users. These would be just switches to make the algorithms behave in more primitive ways.

Whenever algorithms create content alongside humans, it really becomes a muddy, gray area for me. I still haven't figured this one out.