Tuesday, December 10, 2013

Voxel Physics

Voxels are typically associated with creating content. As it turns out, they are quite useful when the time comes for destruction.

Voxels make it easier to break things. Imagine you fire a rocket into a column. You can blast a hole where the rocket hit. Using the column's voxels, you could create several fragments of debris. If the column's ability to stand or support other things is compromised, voxels can tell you that. At this point you get even more fragments, which could impact other voxels, generating more fragments, and so on.

Since you are looking at volumetric data, computing the mass and other dynamic properties of these fragments is much easier. Imagine you have a very irregular shape made of many different materials. For a proper physics simulation you need to figure out how much the thing weighs. This is a trivial process if you are using voxels. Each voxel has a material assigned to it, and the material's density tells you how much the voxel weighs. The weight of the fragment is the sum of the weights of its voxels. And it is more than that: you can figure out where the center of mass is. Imagine a fragment that is half rock, half styrofoam. The object's center of gravity is where the rock is concentrated. The styrofoam adds very little weight.
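Here is a minimal sketch of that computation, assuming a fragment keeps its voxels as a list of grid positions with material IDs (the names and layout are mine, not the engine's):

```cpp
#include <cstdint>
#include <vector>

struct Voxel {
    int x, y, z;        // grid position within the fragment
    uint8_t material;   // index into a material table
};

struct Fragment {
    std::vector<Voxel> voxels;
    float voxelSize;    // edge length of one voxel, in meters
};

struct MassProperties {
    float mass;         // total mass in kg
    float cx, cy, cz;   // center of mass in fragment coordinates
};

// density[] holds kg/m^3 per material, so one voxel weighs
// density * voxelSize^3. The center of mass is the mass-weighted
// average of the voxel centers.
MassProperties computeMass(const Fragment& f, const float* density)
{
    const float voxelVolume = f.voxelSize * f.voxelSize * f.voxelSize;
    MassProperties p = {0, 0, 0, 0};
    for (const Voxel& v : f.voxels) {
        float m = density[v.material] * voxelVolume;
        p.mass += m;
        p.cx += m * (v.x + 0.5f) * f.voxelSize;
        p.cy += m * (v.y + 0.5f) * f.voxelSize;
        p.cz += m * (v.z + 0.5f) * f.voxelSize;
    }
    if (p.mass > 0) {
        p.cx /= p.mass;
        p.cy /= p.mass;
        p.cz /= p.mass;
    }
    return p;
}
```

In the rock-and-styrofoam example, the rock voxels contribute hundreds of times more mass per voxel, so the weighted average naturally lands inside the rock half.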

Here is a video showing a little bit of destruction. This is still in an early stage but hopefully you will get an idea of the potential.


This video was captured on my old PC, with an Intel i5 and an ATI 4770.

Wednesday, December 4, 2013

More statues

Here are some other screenshots showing statues. I did not create this cat statue; it was a model I found free for download on the web. In each case the model was voxelized and "pasted" into the scene.




Bringing in components others have made beats sculpting them yourself. The question is how simple we can make this process for the average player.


Sunday, December 1, 2013

Golem

I was wondering how easy it would be to create a statue inside the demo, using only a spherical brush. I thought it would be nice if the creatures you sculpt could come to life. They could help you build or defend you against other creatures. I decided to give it a quick try.

I found it was not very easy. You certainly miss the toolset from programs like ZBrush, Mudbox or 3D-Coat. I think we need better tools for a first-person editor. We will certainly be working on that.

Meanwhile, here is the stone golem I created. It took me around 20 minutes to build.




Tuesday, October 22, 2013

Water bodies

There is a good reason why I have postponed doing water. It is not because I consider it particularly difficult. Water is bound by simpler rules than other systems I have already looked into. I had a different reason.

Water is less dense than buildings and terrain. A dam or a mountain will determine where water goes. In my mind, the heavy stuff must be there first; then comes the water. You may say water does have a significant effect on terrain features. That is true, but as you will see in my next post, I think there is a way around that.

Anyway, I wanted to have a better idea about my buildings and terrain first. I did not want the water solution getting in the way. I think I have reached that point now; it is time to look into water generation again.

This is about generating large water bodies like rivers and lakes. It is not about water flowing downhill. I think I will have two different systems for this. Even if the water looks the same, it will be handled by different parts of the code. Ideally you would make no distinction between a lake and the water inside a bucket. In reality it is all the same, but sadly our hardware is eons away from that. As usual, we need some clever hacks.

I am trying this system now:

You start from a heightmap. Your terrain does not have to be heightmap-based; it could be full voxel with caves, overhangs, etc. The heightmap defines the surface visible to the water simulation, and it must register with the terrain volume. The heightmap looks like this:


Nothing fancy here, this is your run-of-the-mill heightmap. But how you section this work is important. I chose to generate tiles approximately 50 km wide at once. That means all the rivers and lakes will be enclosed in that tile. You won't have a river running for 100 km with this approach. If you want that, the tile size would have to be bigger. In my case 50 km was a good starting point.

The next step is to add some water sources to it. You do not do this everywhere, just where the terrain is high enough. For a 50 km tile I add around 200 points.

Then you find the shortest path from each point to a lower plane, which you can consider to be the ocean level. You can use any pathfinding algorithm, like A*. This gives you rivers, which show in blue below. You can see ocean water in a darker blue:


The red points show the water sources. As you can see, the rivers follow the path of least effort over the terrain, always looking for the ocean.
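Here is a minimal sketch of that search, using a Dijkstra-style expansion over the heightmap (the cost function is a stand-in: downhill steps are nearly free, uphill steps are heavily penalized, so the path prefers to flow down):

```cpp
#include <queue>
#include <vector>

struct Node { float cost; int x, y; };
struct Worse { bool operator()(const Node& a, const Node& b) const { return a.cost > b.cost; } };

// Searches from a water source to any cell at or below sea level.
// 'parent' lets the caller trace the river course back from the ocean;
// the returned list holds every cell the search explored.
std::vector<int> tracePathToSea(const std::vector<float>& height, int W, int H,
                                int srcX, int srcY, float seaLevel,
                                std::vector<int>& parent)
{
    std::priority_queue<Node, std::vector<Node>, Worse> open;
    std::vector<float> best(W * H, 1e30f);
    std::vector<int> explored;
    parent.assign(W * H, -1);
    best[srcY * W + srcX] = 0.0f;
    open.push({0.0f, srcX, srcY});
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        int idx = n.y * W + n.x;
        if (n.cost > best[idx]) continue;     // stale queue entry
        explored.push_back(idx);
        if (height[idx] <= seaLevel) break;   // reached the ocean
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; i++) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            int nidx = ny * W + nx;
            float rise = height[nidx] - height[idx];
            float step = 1.0f + (rise > 0 ? rise * 100.0f : 0.0f);
            if (best[idx] + step < best[nidx]) {
                best[nidx] = best[idx] + step;
                parent[nidx] = idx;
                open.push({best[nidx], nx, ny});
            }
        }
    }
    return explored;
}
```

Note the sketch also returns the set of cells the search explored. That set turns out to matter in a moment.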

I also wanted to have lakes, but did not see a simple way to add them. That was until one day I was debugging the river pathfinding. To find that bug (I have already forgotten what it was about) I chose to highlight the exploration the river algorithm was doing before selecting a particular course. I noticed the pathfinding would spread its search whenever it encountered flat surfaces. It was behaving pretty much like water.

Once I added the areas where path-finding searched, I got lakes!


I liked how lakes could appear at different altitudes. Also, if you look at the heightmap values along the lake shore, you will see they do not diverge by much. This means the lake surface is mostly flat. In real life, lake surfaces are often not entirely flat. This is because the lake actually flows.

I will finish this topic in my next post, where you will get to see how to come up with the actual water surfaces, and also how to find out where waterfalls should go. You have got to have those waterfalls.




Tuesday, October 15, 2013

Cow business

The past few years have been an incredible ride for me. I got to see what started as a hobby grow into a business.

Setting up a company is tricky. Now you must worry about many other things; it is not just writing cool programs anymore. One of these things is getting a logo. Since I have developed all this in public, I thought I would ask how you guys feel about our potential new logo. Here it is:
Yes, it has a red cow. We are considering this slogan to go with it:

"Don't be square"

You can leave comments as usual, but if you want to help us the most, here is a form where you can vote and leave your feedback.

EDIT: I want to thank everyone for the feedback. We will do a new iteration on the logo and slogan.


Monday, September 30, 2013

Video Update for September 2013

Here is a video update for September 2013.



It shows the new instancing system, the elastic fill tool and other improvements I made to the landscape rendering.

Tuesday, September 24, 2013

GiantBomb interview

A voice interview I did for the gaming site GiantBomb.com just came out:

http://www.giantbomb.com/podcasts/the-future-is-voxels/1600-624/

There is a lot of my usual crazy talk there. If you want to take on any of those issues, please leave a comment on this post.

Friday, September 20, 2013

Voxel Farm is hiring!

In case you do not know already, in 2012 I started a company called Voxel Farm Inc. All the pajama work I had done since 2006 went into this company. We hired a few developers to work on the contracts we were getting at the time, like EverQuest Next.

At this point we are looking for one or two very talented artists for our mystery project. This is the ideal profile:

- Digital sculpting (z-brush, mudbox or 3D-coat)
- Creation of seamless photo-realistic textures

Living in Montreal is a bonus. We are setting up a nice sunny office on the south shore. Your involvement can be part-time or full-time; it does not matter as long as you are an ace at your work.

If interested please email your portfolio to: mceperog (at) gmail.

Spread the word!

Monday, September 16, 2013

Elastic Fill tool

I built a new tool that opens some interesting new possibilities. It allows you to quickly fill a gap. You need to point at two opposing faces. The start and end faces could be almost anywhere in your building, and they could be pointing at different angles. Even if the start and end faces are aligned with the grid, the connection between the two goes off-grid very often. Also note that ramps are one special case of this.

The new tool allowed me to build this cabin, which has a lot of odd angles:








When approaching a new tool design, my main concern is to make it simple and fun to use. Voxels are very flexible, and very often I feel tempted to add new tools just because I see what they could do. The real problem is finding a suitable UI. In this particular case it turned out quite simple to use, so this one is a keeper.

This is how it works: you point at the start place, hold a key down, and then point at the end place. You see some sort of elastic box showing you where the voxels will appear. Once you release the key, the program fills the space you have highlighted.

The elastic box changes color depending on how aligned the box is with the absolute grid. If you get perfect alignment, it shows white. If you get alignment over one plane, it shows green. If there is no alignment, it shows yellow. As a novice builder you can ignore these changes of color, but they help a lot once you are ready to take notice.
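Here is a sketch of how that color pick could work, assuming the box is defined by its corners in world space and the grid step is "d" (my names and classification, not the actual tool's):

```cpp
#include <cmath>

enum BoxColor { WHITE, GREEN, YELLOW };

// Counts on how many of the three axes a box corner lands on a grid line.
// All three aligned -> white; at least one plane aligned -> green;
// no alignment at all -> yellow.
BoxColor classifyAlignment(const float corner[3], float d, float eps = 1e-3f)
{
    int aligned = 0;
    for (int axis = 0; axis < 3; axis++) {
        float r = std::fmod(std::fabs(corner[axis]), d);
        if (r < eps || d - r < eps) aligned++;
    }
    if (aligned == 3) return WHITE;
    if (aligned >= 1) return GREEN;
    return YELLOW;
}
```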



Here is a time-lapse video showing how the entire structure was built. Only three different tools were used here: add/remove voxels, fill and smooth.


Don't ask me why the compression in the video is so bad. YouTube does not like me anymore. I have spent many hours trying different settings and codecs, everything short of voodoo.

Monday, September 2, 2013

Voxel Instancing

Instancing is a dirty trick widely used in computer graphics. We are always trying to push the limits in terms of what we can render on screen. Instancing allows you to repeat the same object many times in one scene. The object's model is stored only once in memory, and we only need to store additional information for each of its occurrences in the scene: their position and orientation. It makes a big difference. Without instancing, most of the richness you see in current game worlds would be gone. Boulders, patches of vegetation, even trees and sometimes entire buildings are heavily instanced.

Instancing is a dirty trick because there is no instancing in the real world. Each pebble in that river bed is unique. We use this shortcut in virtual worlds because somehow it still does the trick. Maybe our brains are not able to detect these are all copies of the same object, or maybe it is something we just forgive because nobody has been able to show us anything better.

Each occurrence of an object is called an instance. The source model shared by all these instances can be called a "class". These are pretty much the same concepts you find in object-oriented programming.

In most game engines, which store all models as polygons, instancing is done at the polygonal level. In my case I saw the same advantages would apply if you had them in voxel form. Their memory footprint is constant, and they are blazing fast to bring into the world.

Translated to voxels, the class stores the voxel values that define the object. This can be done either in a regular grid, in an adaptive grid like an octree, or in any other form that makes sense to you. In my case I store them compressed using some form of run-length encoding. These classes may take one or two megabytes each in compressed form. Each instance is a much smaller piece of information: it just needs to record where the instance is and which class it belongs to.
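In code, the class/instance split could look something like this (a sketch with my own names; the real storage format is surely more involved):

```cpp
#include <cstdint>
#include <vector>

// A run of identical voxel values: 'count' voxels of value 'value'.
// Scanning the grid in a fixed order makes runs long wherever the
// object is homogeneous, which is most of it.
struct Run { uint32_t count; uint8_t value; };

// The class: one compressed copy of the object's voxels, stored once.
struct VoxelClass {
    int sizeX, sizeY, sizeZ;   // dimensions of the voxel grid
    std::vector<Run> rle;      // run-length encoded voxel values
};

// An instance: just a reference to a class plus a placement.
struct VoxelInstance {
    uint32_t classId;          // which class this is a copy of
    float position[3];         // where it goes in the world
    float rotation;            // orientation, e.g. heading in radians
};

// Expand a class back into a flat grid when it must be stamped into
// the world.
std::vector<uint8_t> expand(const VoxelClass& c)
{
    std::vector<uint8_t> grid;
    grid.reserve((size_t)c.sizeX * c.sizeY * c.sizeZ);
    for (const Run& r : c.rle)
        grid.insert(grid.end(), r.count, r.value);
    return grid;
}
```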

You have seen a lot of this instancing system already in my videos and screenshots. It is how trees are made. But until now, only trees were using instancing. I had my tree generation tool produce the compressed classes directly.

This weekend I added a new feature to the Voxel Studio tool. It can now load a 3D mesh and create the instance data for it. Thanks to this I can have more interesting instances, like giant boulders, dilapidated houses, ruins, even pine trees!

Here are a few screenshots showing how a few instances can spice up the landscape. The last two show how you can carve and edit the instanced voxels. They are no different than voxels coming from any other source.










Saturday, August 3, 2013

EverQuest Next

EQNext has been revealed. What we got to see was so good it has set the whole industry on fire. Voxels and procedural generation are among the main pillars of this game. If you saw the reveal, and have been following this blog for a while, you will find many similarities between EQNext and what Voxel Farm does. This is no coincidence: EQNext is using the Voxel Farm engine.


The engine is just a tool. The EQNext team deserves all the credit for realizing this vision. Their art direction and engineering skills are unlike anything I have seen. I am blown away by what they have achieved with the engine, especially in so little time. I am very proud to be involved in this project.

You can see the entire keynote here. It has plenty of real gameplay footage:

http://www.twitch.tv/soe/c/2680835

And here are a series of videos that have appeared in different gaming sites:




Friday, August 2, 2013

Lean Trees

Trees with thin trunks are problematic for a voxel engine. The reason is aliasing. When the tree is far away you still see its crown, but the trunk may have disappeared entirely because the voxel resolution cannot hold such a thin feature anymore.

I was not ready to give up on skinny trees. They are abundant in colder climates, definitely a must-have for the engine. After some kicking and screaming, I managed to get it done:


Let's see how.

This problem is linked to the sampling theorem and Nyquist frequencies. In a nutshell, this means you can only reconstruct some information if your sampling frequency is at least twice the information's frequency. If that sounds weird to you, you are not alone. As it turns out, we live in a freaky reality. Things, regardless of whether they are real or virtual, have frequencies sort of baked into them and their arrangements. In this particular case the virtual tree trunks had frequencies that were higher than the voxel frequency used to represent them. These trunks would just disappear.

As long as you use a discrete method, like pixels or voxels, to represent a continuous reality, you are guaranteed to suffer from these aliasing issues. In the case of pixels, we solved this problem by just throwing lots of memory and processing power at it. With voxels, the hardware is still far from being there.

So this limitation will be there for a while. We had better learn it well. Once you think about it, you see the limit is not really how thin a feature can be, but how close two thin features can be. If your voxel size is 1 meter, you can still make a golf ball with it. What you cannot do is have another golf ball next to it, unless you place it at least 2 meters away.

Maybe this image will help explain it better:


This image is a 2D representation of voxels, which are 3D, so you will need to extrapolate a little bit. The eight squares are voxels. The two red dots are two balls. Each voxel measures "d", which for the sake of argument we will make equal to one meter. We can engage voxels 1, 2, 3 and 4 to represent the first ball. This is how we can achieve a feature inside these voxels that is much smaller than "d". In fact, it could be really small, near zero.

So even huge voxels can encode a tiny feature. The real limit is how close the next feature can appear. In the image you see we cannot use voxels 2, 5, 4 and 7 to add another ball there. This is because voxels 2 and 4 are already engaged in expressing the first ball. So the closest ball can be placed two voxels away, using voxels 5, 6, 7 and 8. The distance between the two balls cannot be less than two meters, that is, 2 times "d". This is the sampling theorem rearing its ugly head again.

But this was the key to my solution. Because of how forests are, I did not need two thin trees immediately next to each other after all. I just needed the thin trunks to align with the largest voxels that would still need to display the trunk. This involved shifting each tree by a clever amount, which was never enough to disrupt the distribution of trees in the forest.
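A sketch of that shift, assuming the coarsest level of detail that still has to show the trunk uses voxels of size dMax (the function and names are mine):

```cpp
#include <cmath>

// Snap a tree's position to the center of the coarsest voxels that will
// ever render its trunk. The shift is at most dMax / 2 on each axis,
// small enough not to disturb the distribution of the forest.
void snapTreeToGrid(float& x, float& z, float dMax)
{
    x = (std::floor(x / dMax) + 0.5f) * dMax;
    z = (std::floor(z / dMax) + 0.5f) * dMax;
}
```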

If you look at the forest screenshot again, you will see some fairly thin trunks in the distance. These trunks are an order of magnitude thinner than the voxels used to represent them.


Tuesday, July 30, 2013

Simplex toys are the best

I know I said no more posts about children's toys, but this one was a really good find. If you work with 3D entities, drawing on paper will get you only so far. At some point you really need to look at the thing from all sides, hold it in your hand.

Last time it was a voxel playset. This one is a "simplex" playset:


A simplex is the minimum geometric unit you can have. In 2D they are triangles, in 3D tetrahedrons, and in 4D, well, you do not really want to go there.

They matter because when you are looking for a solution to a problem, it is often best to target the simplest element possible. If your solution is based on them, it is likely to be the simplest solution as well.

It is no coincidence we use triangles extensively for rendering. In 3D, simplexes are equally useful. For instance, Perlin rewrote his famous noise function to work over simplexes instead of cubes. It resulted in a faster, better looking noise, which Perlin aptly named (you guessed it) "Simplex Noise". At this point in time, there is no reason why someone would use Perlin noise when Simplex noise is available. We also have Marching Tetrahedrons, which improves on Marching Cubes.

In my case I was looking at them because of their role in interpolation. Trilinear interpolation is often done over a cube. If you do it over simplexes you can shave off a few multiplications. When this is in a hot area of your code, simplexes can make a difference. And above all, you also have an excuse to play with these cool toys. Did I mention they are magnetic?
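For a rough idea of the savings, compare the two interpolations (a sketch; locating the containing tetrahedron and its barycentric coordinates is extra work, but it is cheap for the standard decomposition of a cube):

```cpp
// Trilinear interpolation inside a cube: a chain of seven lerps.
// c[] holds the eight corner values.
float trilerp(const float c[8], float x, float y, float z)
{
    float c00 = c[0] + (c[1] - c[0]) * x;
    float c10 = c[2] + (c[3] - c[2]) * x;
    float c01 = c[4] + (c[5] - c[4]) * x;
    float c11 = c[6] + (c[7] - c[6]) * x;
    float c0 = c00 + (c10 - c00) * y;
    float c1 = c01 + (c11 - c01) * y;
    return c0 + (c1 - c0) * z;
}

// Interpolation inside a tetrahedron: once you know which simplex the
// point falls in, the value is a single weighted sum of four vertex
// values. The weights are the barycentric coordinates and sum to 1.
float simplexLerp(const float v[4], const float bary[4])
{
    return v[0] * bary[0] + v[1] * bary[1] + v[2] * bary[2] + v[3] * bary[3];
}
```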

Wednesday, July 24, 2013

Video Update for July 2013

Here is the latest video update. If you are keeping count you will notice I skipped the one for June. I would have done it in time, but one of my twins snapped the microphone I use for such recordings. I did not have time to get a new one until last week. For that reason this update is a bit longer.


Tuesday, July 9, 2013

Emancipation from the skirt

I like skirts. I hope one day men are able to wear them without being judged by the square minds out there. Even miniskirts. I think the Wimbledon tournament should require male players to wear white miniskirts; it would bring us to a new level of tennis. It was equally great when women liberated themselves from the skirt and got to wear pants last century.

But we will be talking about a different type of skirt. Here is the story.

When generation algorithms run in parallel, you have to deal with multiple world chunks at the same time. Think of a chess board and imagine all black squares are generated at once. You could put anything you want in these squares and it would be alright; you would never get discontinuities along the edges because black squares never share edges.

Now comes the time when you need to generate the white squares. At this point you need to think about the edges and make sure anything you place in a white square connects properly with the adjacent black squares. You have two options here:
  1. You remember what was in the black squares.
  2. Your generation algorithm must be able to produce content "locally", that is, the value obtained for one point does not depend on the neighboring points.
In most cases we opt for (2). This is how noise functions like Perlin's and Worley's work. This is also how Wang tiles and derivative methods work. Once your generation function is "local", it does not really matter in which order you generate your chunks. They will always line up correctly along the edges. This choice of (2) may seem like a no-brainer at this point, but we will come back to this decision later.
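A toy example of what "local" means in code: the value at a point is a pure function of the point's coordinates and a seed, so any chunk can evaluate its own edge and get exactly what the neighbor got. (This hash is just an illustration, not the noise the engine uses.)

```cpp
#include <cstdint>

// Deterministic hash: the same (x, y, seed) always produces the same
// value, no matter which chunk asks or in what order.
float localValue(int32_t x, int32_t y, uint32_t seed)
{
    uint32_t h = seed;
    h ^= (uint32_t)x * 0x9E3779B9u;
    h ^= (uint32_t)y * 0x85EBCA6Bu;
    h ^= h >> 16; h *= 0x7FEB352Du;
    h ^= h >> 15; h *= 0x846CA68Bu;
    h ^= h >> 16;
    return (h & 0xFFFFFF) / float(0x1000000);  // map to [0, 1)
}
```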

Now, if instead of a checkerboard arrangement you have multiple levels of detail next to each other (a clipmap), you soon run into a problem. Running the same local function at different resolutions creates discontinuities. They will appear as holes in the resulting world mesh. The following screenshot shows some of them:


The clean, nice solution for this is to create a thin mesh that connects both levels of detail. This is usually called a "seam". This is not difficult for 2D clipmaps. For a full 3D clipmap it can get a bit messy. 

In general, your way out of this is to extend the same algorithm you use for meshing. For instance, if you are using marching cubes, you will need a modified marching cubes that runs at one resolution on one end and at a different resolution on the other. This is exactly what the guys behind the C4 Engine have done with their Transvoxel algorithm: http://www.terathon.com/voxels/

In my case I chose not to use seams at all in the beginning, but a different technique called skirts. This technique was often applied to 2D clipmaps as well. The idea is to create a thin mesh that is perpendicular to the edge where the discontinuity appears. While this does not connect to the neighboring cell, it hides the holes just like a seam would.

Just like seams, skirts in 3D clipmaps are kind of complicated. Imagine you are doing a thin vertical column. You need to make sure the skirts go at the right angle and never go too far. You don't want these skirts protruding out the other side of your mesh.

Skirts have a big problem. Since the vertices in the skirt mesh do not connect to the other end of the edge, you will have some polygons overlapping on screen. This can produce z-fighting at render time. This is not a big deal; you can always shift the skirts in the Z-buffer and make sure they will never fight with the main geometry in your clipmap cells. But this works only if the geometry is opaque. If you are rendering water or glass, skirts make rendering transparent meshes a lot more difficult.
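In OpenGL terms the depth shift can be as simple as rendering the skirts with polygon offset (a sketch; drawClipmapCells and drawSkirts stand in for the engine's own draw calls, and the offset values need tuning per scene):

```cpp
#include <GL/gl.h>

void drawClipmapCells();   // assumed engine draw call
void drawSkirts();         // assumed engine draw call

void renderOpaquePass()
{
    // Main geometry first, with the normal depth test.
    drawClipmapCells();

    // Skirts pushed back in the depth buffer so they can never win the
    // depth test against real geometry at the same screen position.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.0f, 1.0f);   // factor and units, both biasing away
    drawSkirts();
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```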

Still skirts have a massive advantage over seams. In order to produce seams you must evaluate the same function at two different resolutions for adjacent cells. If your function has a time penalty per cell, let's say you need to load some data, or access some generation cache, you will be paying this penalty twice for every cell that has a seam. You pay it once when you generate the contents of the cell, then again when you generate the seam.

A properly generated seam creates a serial link between two neighboring cells. For a system you want to be massively parallel, any serial element comes at a price. There is no way around this; you either pay the price in processing time or in memory (where you cache the results of earlier processing). Skirts, on the other hand, can be computed with no knowledge of neighboring cells. They are inherently parallel.

Back to the checkerboard example: even if you chose option (2), when you are doing seams you will be forced to look into the black squares while you are generating the white ones. Skirts have yet another advantage. Nothing really forces you to use the same function from one square to the next. Even if the function has discontinuities, the skirts will mask them. You may think this never happens, and that is true while you are using simple local functions like Perlin noises or tilesets. But at some point you may be generating something that your standard seaming cannot mend; it just takes the generation function producing slightly different results for different levels of detail.

Anyway in my case it was time to get properly connecting seams. They would be nice for water, ice crystals, glass and other transparent materials in the world.

I run the dual contouring mesh generation over the seam space. Like in the Transvoxel algorithm, one side of the voxels has double the resolution of the other side. Instead of going back and generating portions of the neighboring cells, I just store their boundaries. So there is a little bit of option (1) from the checkerboard example in there. It adds some memory overhead, but it is worth the savings in processing time.
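Conceptually the cached piece is small. Something in this spirit (invented names; only the samples on the shared faces are kept, at the cell's own resolution):

```cpp
#include <vector>

// For each cell, keep just the density samples on its boundary faces.
// When the lower-resolution neighbor builds the seam, it reads this
// cache instead of regenerating the high-resolution cell.
struct CellBoundary {
    int level;                    // level of detail of the cell
    int resolution;               // samples per edge on each face
    std::vector<float> density;   // resolution x resolution samples per face
};
```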

Here you can see some results in wireframe:


The seams appear in yellow.

I am actually not finished with this yet. I still need to bring the right materials and normals into the seams. But I would say the hard part is over.



Friday, June 21, 2013

TUG too

Here is another group who licensed the Voxel Farm engine. This actually was the very first license to go out, back in October 2012.

They are Nerd Kingdom and the game is called TUG, which stands for The Unknown Game. This may strike you as the most unimaginative name ever, but once you understand where this project comes from and what their goals are, it actually makes perfect sense.


I am not sure at this point how much of the original engine source code remains in this project. It is quickly getting into shape and looking good.

This project is coming from a different angle. This baby was spawned by behavioral scientists. By collecting data on how people play, they believe they can reshape the game as it unfolds.

One example they give: imagine they have an algorithm that detects when a player is griefing other players. Eventually they would know who the trolls in the community are, and maybe they could do something about it, like placing all the trolls together on an island and seeing what happens. In the future this may give other game designers ideas on how to deal with griefing and trolling in games.

A game that watches you play all the time may seem a bit big-brotherish. But they do come from a science background. Behavioral scientists have been running these mad experiments for a long time now, and I'm not sure this ever came at the expense of someone's privacy.

Those experiments in the past were small scale compared to what you can achieve in this era of big data. I hope NK will remain transparent about what data is collected about you and how it is linked to your real identity, or simply make it so you are out of the experiment by default and have to actually opt in.

You may invoke a nightmarish scenario where a government denies you boarding a plane due to your psychotic behavior in a game. If you worry about this kind of thing and still want to play TUG, you should probably take it up with Nerd Kingdom.




Tuesday, June 18, 2013

StarForge avec Voxel Farm

Here is some very exciting news, at least for me. A few months ago CodeHatch, the company behind StarForge, licensed the Voxel Farm engine. They are using it from Unity in this latest video showing their new game terrain:


These guys are crazy-talented. Moreover, they are the nicest, most positive people I have ever encountered and worked with. Maybe it is because they are Canadians, who knows.

My heart is with StarForge; I really hope great things come to this project. If you have not checked this game out, it is on Steam.

Tuesday, May 28, 2013

Video Update for May 2013

This update sums up a few nice additions done over the last month: the ability to use meshes as brushes for creation and how you can go off-grid.


Sunday, May 26, 2013

Euclideon Geoverse 2013

Euclideon is back. Here is their latest video:


It seems they have put their tech to good use. Their island demo was very nice, and now this is one of the best point cloud visualizations I have seen so far. Is it groundbreaking or revolutionary? No. There is plenty of this going around. But this is good stuff.

This announcement may be very disappointing to some people who took their past claims literally. Two years ago they said they had made computer game graphics 100,000 times better. They would show you screenshots from Crysis and explain how bad all this was compared to their new thing. Panties were thrown all over the internet.

The reality is that today this still ranks below the latest iterations of CryEngine, Unreal Engine and id's Tech 5. It even ranks below previous versions of those engines. Actually, I do not think you can make a game with this at all. If you want the texture density you see on screen in most games, you would need to store far more information than what they have in the demos shown here. It is a perfect match for the current state of laser scanning, where you do not have much detail to begin with. They are not showing grains of dirt anymore, that is for sure.

The real improvement in the last two years? The hype levels are a lot lower.




Monday, May 20, 2013

Going off-grid

Here is a teaser of a series of screenshots and videos to come very soon:


This is not Minecraft on LSD.

Once you have a voxel system that is capable of representing surfaces in any direction and curvature, the real challenge becomes UI mechanics. Creating content has to be simple and rewarding. It has to feel like a game. You cannot expect players to pick up a manual or become experts in full voxel editing systems like ZBrush or 3D-Coat.

But the potential is definitely there. Going off-grid allows for more interesting creations. If the system is intelligent enough to adapt to whatever is already there, you could be creating all sorts of angled and curved content without busting any veins in your forehead.

What is even more interesting: there is no reason why you would limit this to cubes. The elements you place could be anything: column disks, rocks, crystal shards, archways, statues. They could even be portions of stuff you or someone else has built before.

I'll leave it there for now. Hopefully you will be intrigued enough to come back later and check for more.

Tuesday, May 7, 2013

Covering the Sun with a finger

The oldest optimization in real-time graphics is to avoid rendering what you don't see. When you explain this to people who are not in the field, they usually shrug and say something along the lines of "Duh, Sherlock".

It is easier said than done. Well, actually a big part of it is quite easy. The first trick you see in all graphics books is to render only what is inside the field of view. While the scene entirely surrounds the camera, the camera only captures a narrower slice of it. Anything outside this slice, which is usually 90 degrees along the horizontal, does not need to be rendered. For a mostly horizontal scene, only 90 degrees out of 360 need to be rendered. This simple optimization cuts scene complexity to a quarter. Another way to put it: now you can have four times more detail without a performance drop. This technique is called Frustum Culling.
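The test itself is cheap. Here is a minimal sketch of the standard box-versus-frustum check (my helper; the six planes come from the camera matrices and point inward):

```cpp
struct Plane { float nx, ny, nz, d; };   // n . p + d >= 0 means inside

// Conservative AABB test: the box is culled only when it lies fully
// outside at least one of the six frustum planes.
bool boxInFrustum(const Plane planes[6],
                  const float bmin[3], const float bmax[3])
{
    for (int i = 0; i < 6; i++) {
        const Plane& p = planes[i];
        // Pick the box corner farthest along the plane normal.
        float x = p.nx >= 0 ? bmax[0] : bmin[0];
        float y = p.ny >= 0 ? bmax[1] : bmin[1];
        float z = p.nz >= 0 ? bmax[2] : bmin[2];
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0)
            return false;   // entirely outside this plane
    }
    return true;            // inside or intersecting the frustum
}
```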

Frustum Culling is a no-brainer for small objects that are scattered around the scene. As scene complexity rises, you must batch as many objects together as possible. The need for aggressive batching has apparently relaxed a bit recently, but there is no question that batching is still necessary. This goes against frustum culling. What if a batch is only partially in the field of view? You would still need to render it all. So the more you batch, the more you can lose from the frustum culling optimization... unless your batches are somehow compatible with the scene slices you need to render. More on that later.

Even if you were able to perfectly cull all the information outside the field of view, there are usually a lot of polygons rendered in a scene that never make it to the screen as pixels. This is because they end up hidden behind a closer polygon.

Imagine a huge mountain with a valley behind it. If the mountain was not there, you would see the valley. With the mountain in front of you, all the effort rendering this valley goes to waste. If we could somehow detect that we can skip this valley, we would save a lot of rendering. We could have a much nicer mountain.

This technique is called Occlusion Culling. It is in principle a difficult problem, as the final rendering is the ultimate test of what is really visible and what is not. Obviously some sort of approximation or model has to be used. A simpler model of the scene lets you estimate which portions of the final rendering will end up hidden, so it is safe to skip them.

And then again, if you had the occlusion problem perfectly solved, you would still have the issue with batching. It is not that different from frustum culling. Maybe just a small clip of a large batch is visible; still, that would require the entire batch to render... unless your batches are somehow compatible with the scene volumes being occluded.

I wondered whether there was a single approach that would help with all these issues at once. Yes, some sort of silver bullet. I set out to look for one, and did find something. Maybe it is not a silver bullet, but it is quite shiny.

It is the geometry clipmap. I have covered clipmaps many times in the past. The idea is somewhat simple: if your world can be represented as an octree, you can compute any scene from this world as a series of concentric square rings. Each ring is made of cubic cells. The size of these cells grows exponentially as the rings get farther from the viewer.


The image above shows a projection of a clipmap in 2D.

You can see right away how this helps with batching and frustum culling. Each cell is an individual batch, which can contain a few thousand polygons. It is quite simple to determine whether a cell is inside the field of view. Also, cells drop out of the field of view quite cleanly, as their size is constrained by their very definition.

The clipmap turned out to be very friendly for occlusion testing as well. Imagine you could identify some cells as occluders in one specific direction of the clipmap. It becomes fairly simple to test whether more distant cells are occluded or not.

The following image shows how this principle works:


Here four cells have been identified as occluders. They show as vertical red lines. Thanks to them, we can safely assume all the cells painted in dark red can be discarded. These batches are never sent to the graphics card.

In my case I am performing the tests using software rasterization. It is very fast because the actual cell geometry is not rendered, only cell-aligned planes. So far a depth buffer of 64x64 provides sufficient resolution.
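A sketch of what testing against such a small buffer can look like (my names; here cells are reduced to screen-space rectangles with one conservative depth each, while the real code rasterizes the cell-aligned planes):

```cpp
#include <algorithm>

const int SIZE = 64;
float depthBuf[SIZE * SIZE];   // reset to 1.0f (far plane) every frame

// Occluder: stamp its nearest depth over the rectangle it covers.
void rasterizeOccluder(int x0, int y0, int x1, int y1, float depth)
{
    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
            depthBuf[y * SIZE + x] = std::min(depthBuf[y * SIZE + x], depth);
}

// Occludee: the cell is hidden only if every texel it covers already
// holds a closer occluder. One surviving texel means the batch renders.
bool isOccluded(int x0, int y0, int x1, int y1, float depth)
{
    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
            if (depth < depthBuf[y * SIZE + x])
                return false;   // the cell shows through here
    return true;
}
```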

Not bad!

Sunday, April 28, 2013

Video Update for April 2013

An update about clouds and a surprise topic for the second part.


Making these recordings feels a bit weird. I love talking to real audiences, but this feels more like leaving a message on someone's answering machine.

Wednesday, April 24, 2013

The Unity plugin is looking good!

Here is an update on the Unity front. Let's see if a screenshot is worth a thousand words:


This is the same terrain generation you see in other screenshots and videos I have posted. This image in particular was not created by me, but by some very talented guys who took the Voxel Farm engine and are using it from Unity.

The sky and clouds in the screenshot are a box with a static image on it; they are not related to the streak of sky-and-clouds posts I had earlier. The thing is, once you are in Unity there are several plugins that will do real-time skies for you. Actually, there are plugins that will do real-time anything for you. That is the point. We are hoping to become another one of them.

Monday, April 15, 2013

Some Clouds

Clouds come in many forms. When it comes to generating them, it seems there is no silver-bullet method. Part of the problem is that what we call clouds is just one aspect of a more general process: water particles suspended in air. This could also be fog, or the misty breath coming out of trees and plants in a jungle. This is what the initiated in this occult science call "participating media".

I decided to tackle this problem with different layers working together. Which layer to do first? Even in the highest places on Earth, you are likely to find a layer of clouds over your head. I did some experiments over the weekend on how this particular layer could be rendered.

Here are a couple of early screenshots for your consideration:




It is a very simple and fast method that allows clouds to animate and evolve over time. You can go from a clear sky to a very cloudy one as well. It takes into account the sun's position and does some basic scattering and self-shadowing.

These clouds are rendered in the same skydome that performs the day-night cycle, so they do not add any new geometry. This is also the problem with this method: it is a flat layer. There is the impression of volume thanks to how the light is computed, and this trick holds as long as the clouds do not move too fast. If you make them sprint over your head, it becomes obvious it is a flat layer. You also cannot come too close to these clouds; that kills the illusion too.

For what it does, I think the method is quite neat, especially if you don't have many GPU cycles to spend on clouds. It does not use any textures or other resources. This is 100% GPU, so it would run nicely in demos or WebGL frames. I think it deserves a future technical post of its own, assuming of course you guys like how they look.

Let me know what you think by dropping a comment.



Wednesday, April 10, 2013

The sun rises in ProcWorld, again

Last week I posted some early screenshots of the night-day cycle. There was a lot going wrong in them, and you guys were very helpful in pointing out solutions.

I did another iteration on this. While not everything is as it should be, I think there was some improvement. This time I have captured a video; the transitions are better appreciated this way. Again, let me know what you think.



The main issues with the previous iteration were the brightness of the sky (or lack of it), and how the distant features failed to blend with the sky. This time I made sure there was enough atmosphere so more light was trapped between the horizon and the eye. The distance to the sky is also consistent with the terrain dimensions, so now the colors in the terrain and sky match better.

A few comments on the previous post suggested a different method called "precomputed atmospheric scattering". It certainly produces better results than the method I am using here, which is the one from O'Neil in GPU Gems 2.

I had a quick look at the method and saw that in its vanilla form it could run slower than what I have now. While the method uses precomputed tables stored as textures to accelerate rendering, all the work is done in the fragment shader. That means every pixel on screen would have to perform two or three additional texture fetches.

The method from O'Neil does all the heavy lifting in the vertex shader. Consider this scene:


The sky, even if it appears softly shaded, has only a few vertices:


I think in this case it makes a big difference.

The precomputed method could also run in the vertex shader, but then it would take some time to port the tables, which now are in pixel formats that cannot be read by the vertex shader.

Of course there is a chance I am reading this wrong. If you have worked in this area before and see what I am missing please let me know.

Wednesday, April 3, 2013

The sun rises in ProcWorld

So I finally added some proper light scattering to the sky atmosphere in the realtime demo.

I am using the classic approach devised by O'Neil, which produces great results but is also very sensitive to any change in the input parameters. A lot of tweaking is still required.

Here is a series of shots. Keep in mind this is a work in progress, but any early comments will surely help.




 



Saturday, March 30, 2013

Network Update

Here is a new video I recently captured. It shows the networking and storage components in action.



When it comes to networking this is the smallest test possible; you cannot really go below two connected clients. I have tested this same server code with nearly a hundred clients performing queries and changes at rates many times higher than what humans would produce. Network tests are good at showing why some stuff does not work. But when the results are good, it does not really mean anything. The real network is so complex you cannot replace it with any model. In this case the results are as good as any network test can be at this stage. There is very little overhead from the thread and connection management, which is what I was looking for.

While this is good news and by all means necessary, the real bottleneck comes from how any application using this engine chooses to store and process information. So again, what you are seeing here is just a brick. You could build many different houses with it.

You could do it like Minecraft servers do and have everything, including procedural generation, run on the server. You could do like this particular demo does, where user-created content is stored on a server and everything else remains client-side. And you could have solutions in between, for instance some custom server-side generation which is later merged with the rest of the client-side generation.

This is a fascinating subject to me, I will be covering some of these approaches in the future.

Wednesday, March 27, 2013

Storage Matters

Imagine you were creating a massive persistent world where everyone would be able to change anything at will. It is a simple, powerful idea that has occurred to everyone ever exposed to a game. Why aren't there many of these worlds out there? Well, this very simple idea is quite difficult and expensive to execute. Not only do you need to store the information, you have to be able to write it and read it in a timely fashion.

Then how about your own personal world, something you can run on your PC and invite some friends to play in? How much of your PC's performance are you willing to sacrifice, and how many people could you actually invite before the quality of your gameplay begins to suffer?

I began wondering whether all of the above could be manifestations of the same problem. What if you could have a storage solution lightweight enough for enthusiasts to run at home, yet one where, if you pieced enough of them together, you could scale it to run massive worlds the size of planet Earth?

As it turns out, it was possible. I now have a shiny new database system that does exactly that. The main trick is that it aligns with the same concepts as the rest of the voxel world. So this is mainly a voxel database. It won't do SQL queries, XPath evaluation or any other form of traditional DB interaction. It just stores and retrieves voxel data very fast.

How fast? Over a 10 minute period, a machine with a six-year-old Intel processor (T2500 at 2GHz) and an equally crappy HD was able to serve 10 gigabytes worth of individual queries while another 10 gigabytes worth of queries were being written. Each query ranged from 500 bytes to 100 KB worth of data.

That would translate into a lot of friends sharing your server. To give you a better idea, a volume of 40x40x40 player voxels compresses to 2 KB on average. Here is how you would work out how much world 10 GB of voxel data covers:

1 chunk = 40x40x40 voxels = 12x12x12 meters
1 chunk = 2 KB
10 GB = 5,242,880 chunks ≈ 2048x2048x2048 meters

How many people could create this amount of voxel content in 10 minutes? I have no idea, but I bet it would take an entire army. At this point the DB is the least of your concerns. The bottleneck is the network.

The twist comes now: while this rate was sustained for 10 minutes, it was not meant to push the system to the limit. The DB process CPU usage never went above 1%, and the memory usage for the process remained at 3 MB. The system was responsive and usable (well, as usable as a six-year-old PC can be), showing no big difference in behavior.

Here is some evidence:


For those of you who are more artistically or design inclined, this is certainly the most boring screenshot I have ever posted. But if you are into programming this kind of thing, this is process porn.

Of course the system is doing real work. The main clue is in a different column not displayed by Task Manager: virtual memory, which was hovering below 20 megs the whole time. Even then, the virtual memory was lower than what Google Chrome was using, which was a whopping 99 megs.

The voxel database is so fast because it uses the virtual memory management of the OS itself. Instead of writing to files on the HD directly, all the information is mapped through the OS paging system. Only the pages that need to be altered go into memory. The system also does a lazy write to the HD: even after the process is gone, the OS continues to save the changes to disk.
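On a POSIX system the core of such a store is a handful of calls (a sketch; error handling and the actual voxel layout are omitted):

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a fixed-size region of the database file into the address space.
// Writes land on memory pages; the OS pages them in and out on demand
// and flushes them to disk lazily, even after the process has exited.
void* mapRegion(const char* path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    ftruncate(fd, size);            // make sure the file is big enough
    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);                      // the mapping stays valid after close
    return base;                    // chunks are read and written here
}
```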

I feel this is the stepping stone for great things. It will be fairly easy and inexpensive for people to set up their own servers. They could host a lot of players and barely take a hit for it. This of course depends on how the networking is implemented, which leads into another favorite topic of mine: how to make a server that will not bring your PC to its knees. I will be covering that in the near future.

Friday, March 15, 2013

Melodive and Oveja

A couple of days ago I got this game from one of the readers of this blog. It is all procedural, including the progression of the music. The name is Melodive and it is available for iOS.


Even without the help of mind-altering substances, the game takes you to a different dimension. I did find the control scheme a bit frustrating, but it seems there is a whole class of games using this kind of tilt-and-rotate interface. To many of you out there the controls may seem standard.

Oveja

And there is this other, non-procedural game a Russian friend made from scratch, including programming, graphics and music. He named it Oveja, which in Spanish means "sheep". (Why a Russian guy is giving Spanish names to his games is beyond me.) The game is fun and equally surreal, but on a different level.


It runs on Android and is available free from Google Play:

https://play.google.com/store/apps/details?id=com.mastercluster.oveja

If you like air-traffic control games you should give it a try. It is like those games, but with sheep in them. I did not get why the black sheep needed to be segregated from the white sheep. This is 2013; those times should be over.

The next version is rumored to include sheep poop and other equally interesting gameplay mechanics.

Monday, February 25, 2013

This one is a talkie!

Here are two new videos showing some new features.

I finally decided to add an audio track to a video. It does help to describe what is new as it appears on screen:



Let me know if you like this format. I think I could do one of these every month.

The second video is some sort of fast steady-cam flyover:


Thursday, February 21, 2013

Undergrowth

Last weekend I took some time to add a new feature to the engine. It is a sort of mesh instancing system that brings additional detail on top of the geometry output. It can be used to add a new vegetation layer under trees, as the following images show:



I will be using it for rocks, pebbles, even man-made elements sticking out of the blocks you place.

Polygon counts are now escalating quickly because of this. My old 4770 still averages 40 FPS at 1080p, but it begins to struggle. It is still manageable; there are some polygons right now that I can cut.

This one was long overdue. I will be posting a video later so you guys can see how the LOD transitions are managed. I think this has improved a lot.

Monday, February 18, 2013

Unity makes strength

I like writing everything from scratch, but not everyone shares this form of dementia.

Many in the past have asked whether any of this would run on mainstream game engines. I could not see a reason why not. The Voxel Farm engine outputs traditional polygons, so in theory it could be plugged into any engine using polygons for rendering, physics, etc. That remained a nice theory until recently. Now we have some hard proof:


This screenshot shows the Voxel Farm realtime engine providing polygons for terrain in Unity.

Here is a video. It shows a simple physics test and a little bit of walking. The capture speed and resolution are not good, but hopefully you will get the idea.


I understand that if this tech is to be widely used, it will likely come in the form of plugins for mainstream engines. So I think this is very encouraging news.

Monday, February 4, 2013

Voxel Studio Videos

Here are three videos I captured from Voxel Studio to help with the AiGameDev.com interview:

Tuesday, January 29, 2013

AiGameDev.com Live Broadcast


Next Sunday February 3rd I will be doing a live interview at AiGameDev.com

You can find the right moment to tune in for your time zone on this page, where the broadcast will eventually appear:

http://aigamedev.com/broadcasts/procworld/

Make sure you catch it on time. The archived version may be playable only to members of the AiGameDev community.

Sunday, January 20, 2013

Don't be square!

Minecraft gets many things right, but at the top of my list is how easy it is to create. Nothing is simpler than laying out boxes. Since everything is square (even the cows), being limited to boxes does not feel bad at all.

If you are doing some sort of sandbox environment, going beyond just boxes is not trivial. One possible approach is to do like Blockscape, where in addition to the classic box you now have a large repertoire of predefined angular shapes. I do not like this approach, as it makes the interface very complex and frustrating. It makes your forehead veins pop.

So I chose a different approach, where you still lay boxes the same as in Minecraft, but then you can go back and alter them. I saw that a single operation was enough to produce both curved and straight angled surfaces. If you apply it gently you get curves. If you apply it more, things straighten out.
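I will leave the details of the operation for another post, but its behavior is in the spirit of iterated averaging over the voxel values, something like this sketch (my guess at the family of operations, not the actual tool): a few passes round off corners, many passes flatten surfaces toward planes.

```cpp
#include <vector>

// One smoothing pass: each voxel moves toward the average of its six
// neighbors. 'strength' is in (0, 1]; repeated passes straighten things.
void smoothPass(std::vector<float>& v, int W, int H, int D, float strength)
{
    std::vector<float> out(v);
    for (int z = 1; z < D - 1; z++)
        for (int y = 1; y < H - 1; y++)
            for (int x = 1; x < W - 1; x++) {
                size_t i = (size_t)(z * H + y) * W + x;
                float avg = (v[i - 1] + v[i + 1] +
                             v[i - W] + v[i + W] +
                             v[i - (size_t)W * H] + v[i + (size_t)W * H]) / 6.0f;
                out[i] = v[i] + (avg - v[i]) * strength;
            }
    v.swap(out);
}
```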

Here you can see a round hole and a needle, both initially created as boxes. It only takes a few clicks to shift their shapes:


I think this is the right direction. Still, it is not simple to implement. While I got this thing working, there are many issues to fix.

I leave you with a video, let me know what you think: