Tuesday, July 9, 2013

Emancipation from the skirt

I like skirts. I hope one day men are able to wear them without being judged by the square minds out there. Even miniskirts. I think the Wimbledon tournament should require male players to wear white miniskirts; it would bring us to a new level of tennis. It was equally great when women were liberated from the skirt and got to wear pants last century.

But we will be talking about a different type of skirt. Here is the story.

When generation algorithms run in parallel you have to deal with multiple world chunks at the same time. Think of a chessboard and imagine all the black squares are generated at once. You could put anything you want in these squares and it would be alright; you would never get discontinuities along the edges, because black squares never share edges.
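
As a toy illustration (not the engine's actual scheduler; generateChunk is just a hypothetical stand-in), the checkerboard scheduling could look like the sketch below: two passes, where every chunk within a pass can be generated concurrently because no two squares of the same color share an edge.

```cpp
// Illustrative two-pass "checkerboard" scheduler, not the engine's actual
// code. Chunks of one color never share an edge, so each pass is fully
// parallel; generateChunk() is a placeholder for the real generator.
#include <future>
#include <vector>

void generateChunk(int cx, int cz)
{
    // Placeholder: fill the contents of chunk (cx, cz) here.
}

void generateGrid(int gridSize)
{
    for (int parity = 0; parity < 2; ++parity)   // pass 0: black squares, pass 1: white squares
    {
        std::vector<std::future<void>> jobs;
        for (int cx = 0; cx < gridSize; ++cx)
            for (int cz = 0; cz < gridSize; ++cz)
                if ((cx + cz) % 2 == parity)
                    jobs.push_back(std::async(std::launch::async, generateChunk, cx, cz));
        for (auto& job : jobs)
            job.wait();                          // finish one color before starting the other
    }
}
```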

Now comes the time when you need to generate the white squares. At this point you need to think about the edges and make sure anything you place in a white square connects properly with the adjacent black squares. You have two options here:
  1. You remember what was in the black squares. 
  2. Your generation algorithm must be able to produce content "locally", that is, the value obtained for one point does not depend on the neighboring points.
In most cases we opt for (2). This is how noise functions like Perlin's and Worley's work. This is also how Wang tiles and derivative methods work. Once your generation function is "local", it does not really matter in which order you generate your chunks; they will always line up correctly along the edges. This choice of (2) may seem like a no-brainer at this point, but we will come back to this decision later.
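
To make "local" concrete, here is a minimal sketch assuming a simple hash-based value noise (illustrative only, not the noise actually used here): the value at a world coordinate is computed from that coordinate alone, so chunks agree along shared edges no matter in which order they are generated.

```cpp
// A "local" generation function: the value at a world coordinate depends only
// on that coordinate, never on neighboring chunks. Hash and noise below are
// illustrative, not the noise used in the engine.
#include <cstdint>
#include <cmath>

// Deterministic integer hash -> value in [0, 1)
static float hashToUnit(int32_t x, int32_t y, int32_t z)
{
    uint32_t h = (uint32_t)x * 73856093u ^ (uint32_t)y * 19349663u ^ (uint32_t)z * 83492791u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return (h & 0xFFFFFF) / 16777216.0f;
}

// Trilinearly interpolated value noise, evaluated purely from world position.
static float valueNoise(float x, float y, float z)
{
    int32_t xi = (int32_t)std::floor(x), yi = (int32_t)std::floor(y), zi = (int32_t)std::floor(z);
    float tx = x - xi, ty = y - yi, tz = z - zi;
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    float c00 = lerp(hashToUnit(xi, yi,     zi),     hashToUnit(xi + 1, yi,     zi),     tx);
    float c10 = lerp(hashToUnit(xi, yi + 1, zi),     hashToUnit(xi + 1, yi + 1, zi),     tx);
    float c01 = lerp(hashToUnit(xi, yi,     zi + 1), hashToUnit(xi + 1, yi,     zi + 1), tx);
    float c11 = lerp(hashToUnit(xi, yi + 1, zi + 1), hashToUnit(xi + 1, yi + 1, zi + 1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}

// Because valueNoise() depends only on (x, y, z), any chunk can be filled on
// any thread, in any order, and adjacent chunks will always agree along
// their shared edges.
```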

Now, if instead of a checkerboard arrangement you have multiple levels of detail next to each other (a clipmap), you soon run into a problem. Running the same local function at different resolutions creates discontinuities. They will appear as holes in the resulting world mesh. The following screenshot shows some of them:


The clean, nice solution for this is to create a thin mesh that connects both levels of detail. This is usually called a "seam". This is not difficult for 2D clipmaps. For a full 3D clipmap it can get a bit messy. 

In general, your way out of this is to extend the same algorithm you use for meshing. For instance, if you are using marching cubes, you will need a modified marching cubes that runs at one resolution on one end and at a different resolution on the other. This is exactly what the guys behind the C4 engine have done with their Transvoxel algorithm: http://www.terathon.com/voxels/

In my case I chose not to use seams at all in the beginning, but a different technique called skirts. This technique was often applied to 2D clipmaps as well. The idea is to create a thin mesh that is perpendicular to the edge where the discontinuity appears. While it does not connect to the neighboring cell, it hides the holes just like a seam would.
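
For the simpler 2D heightmap case, a skirt can be little more than a vertical curtain hanging from the tile's border. The sketch below assumes a Y-up heightfield tile and made-up types and names; it is only meant to show the shape of the idea, not this engine's implementation.

```cpp
// Minimal sketch of a 2D clipmap skirt: for each vertex on the boundary of a
// heightmap tile, emit a second vertex pushed straight down by skirtDepth and
// stitch the pair into quads. Names and layout are illustrative only.
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh
{
    std::vector<Vec3> vertices;
    std::vector<int>  indices; // triangle list
};

// boundary: the tile's border vertices, walked in order along one edge.
// skirtDepth: how far the skirt hangs below the surface; it only needs to be
// deep enough to cover the largest possible crack between LOD levels.
void appendSkirt(Mesh& mesh, const std::vector<Vec3>& boundary, float skirtDepth)
{
    int base = (int)mesh.vertices.size();
    for (const Vec3& v : boundary)
    {
        mesh.vertices.push_back(v);                              // top vertex, on the surface
        mesh.vertices.push_back({ v.x, v.y - skirtDepth, v.z }); // bottom vertex, dropped down
    }
    for (size_t i = 0; i + 1 < boundary.size(); ++i)
    {
        int t0 = base + (int)(2 * i),     b0 = t0 + 1;
        int t1 = base + (int)(2 * i + 2), b1 = t1 + 1;
        // Two triangles per quad of the vertical curtain.
        mesh.indices.insert(mesh.indices.end(), { t0, b0, t1 });
        mesh.indices.insert(mesh.indices.end(), { b0, b1, t1 });
    }
}
```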

Just like seams, skirts in 3D clipmaps are kind of complicated as well. Imagine you are meshing a thin vertical column. You need to make sure the skirts go at the right angle and never extend too far; you don't want them protruding out of the other side of your mesh.

Skirts have a big problem. Since the vertices in the skirt mesh do not connect to the other end of the edge, some polygons will overlap on screen, which can produce z-fighting at render time. This is not a big deal: you can always shift the skirts in the Z-buffer and make sure they never fight with the main geometry in your clipmap cells. But this works only if the geometry is opaque. If you are rendering water or glass, skirts make rendering transparent meshes a lot more difficult.
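
For example, in OpenGL that shift can be done with polygon offset while drawing the skirts. This is just one way to apply the depth bias (it could also be done in the projection or the vertex shader); the drawCellGeometry/drawSkirtGeometry calls below are hypothetical placeholders.

```cpp
// Push the skirt geometry slightly back in the depth buffer so it never
// z-fights with the main cell geometry. Illustrative sketch using OpenGL's
// polygon offset; not necessarily how this engine does it.
#include <GL/gl.h>

void drawCellWithSkirt()
{
    // 1. Draw the regular clipmap cell geometry as usual.
    // drawCellGeometry();

    // 2. Draw the skirt with a small positive depth offset, pushing it
    //    behind surfaces at the same depth so the main mesh always wins.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.0f, 1.0f);
    // drawSkirtGeometry();
    glPolygonOffset(0.0f, 0.0f);
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```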

Still, skirts have a massive advantage over seams. In order to produce a seam, you must evaluate the same function at two different resolutions for adjacent cells. If your function has a time penalty per cell, let's say you need to load some data or access some generation cache, you will pay this penalty twice for every cell that has a seam: once when you generate the contents of the cell, then again when you generate the seam.

A properly generated seam creates a serial link between two neighboring cells. For a system you want to be massively parallel, any serial element comes at a price. There is no way around this; you either pay the price in processing time or in memory (where you cache the results of earlier processing). Skirts, on the other hand, can be computed with no knowledge of neighboring cells. They are inherently parallel.

Back to the checkerboard example: even if you chose option (2), when you are doing seams you will be forced to look into the black squares while you are generating the white ones. Skirts have yet another advantage: nothing is really forcing you to use the same function from one square to the next. Even if the function has discontinuities, the skirts will mask them. You may think this never happens, and that is true while you are using simple local functions like Perlin noise or tilesets. But at some point you may be generating something your standard seaming cannot mend; it only takes the generation function producing slightly different results at different levels of detail.

Anyway, in my case it was time to get properly connecting seams. They would be nice for water, ice crystals, glass and other transparent materials in the world.

I run the dual contouring mesh generation over the seam space. As in the Transvoxel algorithm, one side of the voxels has double the resolution of the other. Instead of going back and generating portions of the neighboring cells, I just store their boundaries. So there is a little bit of option (1) from the checkerboard example in there. It adds some memory overhead, but it is worth the savings in processing time.
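
As an illustration of that memory-for-time trade, storing the boundaries could be as simple as caching the scalar samples that fall on each cell face, keyed by cell coordinate, LOD and face. The types and names below are my own guess at a minimal version, not the actual data structures used here.

```cpp
// Sketch of the "store the boundaries" idea: when a cell finishes generating,
// keep just the samples that lie on its faces in a small cache, so the seam
// pass can read them back without re-running the generator for the neighbor.
// All names here (BoundaryCache, FaceSamples, ...) are hypothetical.
#include <unordered_map>
#include <vector>
#include <cstdint>

struct FaceSamples
{
    int resolution = 0;            // samples per side on this face (varies with LOD)
    std::vector<float> density;    // resolution * resolution scalar samples
};

// Key a face by the cell's integer coordinate, its LOD, and which of the 6 faces it is.
struct FaceKey
{
    int32_t x, y, z;
    int8_t  lod;
    int8_t  face;                  // 0..5
    bool operator==(const FaceKey& o) const
    { return x == o.x && y == o.y && z == o.z && lod == o.lod && face == o.face; }
};

struct FaceKeyHash
{
    size_t operator()(const FaceKey& k) const
    {
        uint64_t h = (uint64_t)(uint32_t)k.x * 73856093u
                   ^ (uint64_t)(uint32_t)k.y * 19349663u
                   ^ (uint64_t)(uint32_t)k.z * 83492791u
                   ^ ((uint64_t)k.lod << 32) ^ ((uint64_t)k.face << 40);
        return (size_t)h;
    }
};

// The overhead is only the faces, roughly O(n^2) per cell instead of the
// O(n^3) it would take to keep (or regenerate) the whole neighboring cell.
using BoundaryCache = std::unordered_map<FaceKey, FaceSamples, FaceKeyHash>;
```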

Here you can see some results in wireframe:


The seams appear in yellow.

I am actually not finished with this yet. I still need to bring the right materials and normals into the seams. But I would say the hard part is over.



20 comments:

  1. Weaving skirts without seams sounds like a pretty challenging process anyway...

  2. How does GT6 get away with adaptive tessellation?

    https://www.youtube.com/watch?v=QTVU838725c&feature=player_detailpage#t=70s

    1. What they show in this video is closer to progressive meshes. There are different ways to do this. They do not use clipmaps for the car models.

      As for their terrain, it appears to be 2D. In this case you have the classic approach: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter02.html

    2. Is it possible to do something similar to adaptive tessellation with voxel data points?

      I am imagining something similar to what Euclideon does to quickly sample huge data sets. Rather than highlighting a per-pixel data point, could you create procedurally generated meshes based on a per-pixel screen sample?

      Your world would be stored as data points, and a mesh with polygons spanning each pixel (or group of pixels) would adapt and change to fit those data points. The mesh wouldn't have to be continuous, and gaps between data points larger than a threshold value would yield separations in said mesh. But since these mesh separations are perpendicular to your view, they would be unobservable.

    3. Not sure how you would handle different topologies. For instance something with a hole in the middle. Somehow you need to know how vertices connect to each other.

    4. I'm imagining a conical shape filling up one pixel of the screen; it reaches out until it either exceeds a max render distance or pings back the closest data point it encounters. Together these pixel points would create a 1920x1080 (or whatever) resolution depth map. This depth map would be split wherever depth values vary more than a specified amount (either defined by a finite distance, or a scaling cutoff point based on the distance of the closest data point).

      In your example, it would mesh the surface, then the abrupt change in distance caused by the hole would generate a split in the mesh. There would be a gap, but this gap would exist between pixels, so it would never be rendered.

  3. Hi Miguel, I was wondering, how do you store the material information of your terrain? Do you use a texture or is it per-vertex?

  4. I am curious, what would the hardware requirements for that game be?
    I am not sure if my PC would even be able to run it.

    1. Bare minimum you will need a 2 core CPU and a 4770 (or equivalent Nvidia)

      What is your hardware?

  5. My processor is an Intel Core i7,
    and my graphics card is an ATI Radeon HD 5570.

  6. Or you could integrate with C4 for visualizing your voxel fields :D

    1. I think C4 is marching cubes, no sharp feature reconstruction.

  7. Just a little bit off-topic: are you planning to support Oculus Rift in your engine?

    1. It is a distant feature. There are many things I would do first.

    2. Ok, thanks for the reply :) And good luck :)

  8. A good day Miguel, I just want to say that I absolutely love your work and I'm so very happy seeing your whole project come together. My favourite topics covered on your blog are tree generation and procedural cities. It's amazing how much a machine can build on its own with a few nudges in the right direction.
    I have also recently started working on a procedural game about survival. It's by far not as complex as what you are doing, though. If you could take a look and tell me what you think, I would appreciate it very much :) And good luck with those skirts ;)
    http://someguywhodoesthings.blogspot.ro/

  9. Hi,
    why didn't you use the adaptive dual contouring approach of Ju et al. (I think you cited their paper in a previous post)? I reckon you implemented the main part of it but chose another way to handle the cracks between chunks of different LODs...
    If you make sure 2 adjacent chunks have equal or consecutive LODs, you should be able to work with a reasonable neighbourhood. Maybe the skirts still require even less data around the cracks?

    Kudos for your great and inspiring work!

  10. Could you describe the actual algorithm that generates polygons out of the stored points?
