Tuesday, April 26, 2016

Geometry is Destiny

In the previous post, I introduced our new land mass generation system. Let's take a look at how it works.

For something as large as a continent, I knew we would need some kind of global generation method. Global methods involve more than just the point of space you are generating: the properties of a given point are influenced by points potentially very far away. Global methods, like simulations, may require you to perform multiple iterations over the entire dataset. I favor global methods for anything larger than a coffee stain in your procedural tablecloth. The reason is that they can produce new information, whereas local methods cannot: their information is limited to the seeds fed into the local functions.

The problem with using a global simulation is speed. Picking the right evaluation structure is paramount. I wanted to produce maps of approximately 2000x2000 pixels, where each pixel would cover around 2 km. I wanted this process to run in less than five seconds on a single CPU thread. Running the generation algorithm over pixels would not get me there.

The alternative to simulating on a discrete grid (pixels) is to use a graph of interconnected points. A good approach here is to scatter points over the map, compute the Voronoi cells for them, and use the cells and their dual triangulation as the scaffolding for the simulation.


I had tried this in the past with fairly good results, but there was something about it that did not sit well with me. To get pleasant results, the Voronoi cells must be relaxed until they become similarly shaped and the dual triangulation is made of regular triangles.

If the goal was to produce a fairly uneven but still regular triangle mesh, why not just start there and avoid the expensive Voronoi generation phase? We would still have implicit Voronoi cells, because they are the dual of the mesh.

We started from the most regular mesh possible: an evenly tessellated plane. While doing so we made sure the diagonal edges would not all go in the same direction by flipping their orientation randomly:
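To make this concrete, here is a minimal sketch of the triangulation step, assuming a simple Vertex/Triangle representation (these are placeholder types, not the actual Voxel Farm structures):

    #include <cstdlib>
    #include <vector>

    struct Vertex { float x, y; };
    struct Triangle { int a, b, c; };

    // Tessellate an n x n grid of cells into triangles, flipping the
    // diagonal of roughly half the cells at random so no single diagonal
    // direction dominates the mesh.
    void buildGrid(int n, std::vector<Vertex>& verts, std::vector<Triangle>& tris)
    {
        for (int j = 0; j <= n; ++j)
            for (int i = 0; i <= n; ++i)
                verts.push_back({ (float)i, (float)j });

        auto idx = [n](int i, int j) { return j * (n + 1) + i; };
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i)
            {
                int v00 = idx(i, j),     v10 = idx(i + 1, j);
                int v01 = idx(i, j + 1), v11 = idx(i + 1, j + 1);
                if (rand() % 2 == 0)
                {
                    tris.push_back({ v00, v10, v11 });  // diagonal v00-v11
                    tris.push_back({ v00, v11, v01 });
                }
                else
                {
                    tris.push_back({ v00, v10, v01 });  // diagonal v10-v01
                    tris.push_back({ v10, v11, v01 });
                }
            }
    }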



Getting the organic feel of the Voronoi-driven meshes from here was simple. Each triangle is assigned a weight, and all vertices are pulled toward or pushed away from triangles depending on these weights. After repeating the process a few times you get something that looks like this:
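A single pass of this jittering could look like the sketch below. The pull-toward-centroid rule and the weight range are my reading of the description above, not the exact formula used:

    #include <cstdlib>
    #include <vector>

    struct Vertex { float x, y; };
    struct Triangle { int a, b, c; };

    // One relaxation pass: every triangle gets a random weight and pulls
    // (positive weight) or pushes (negative weight) its three vertices
    // relative to its centroid.
    void relaxPass(std::vector<Vertex>& verts, const std::vector<Triangle>& tris)
    {
        for (const Triangle& t : tris)
        {
            float w = (rand() / (float)RAND_MAX) - 0.5f;  // weight in [-0.5, 0.5]
            float cx = (verts[t.a].x + verts[t.b].x + verts[t.c].x) / 3.0f;
            float cy = (verts[t.a].y + verts[t.b].y + verts[t.c].y) / 3.0f;
            int corners[3] = { t.a, t.b, t.c };
            for (int v : corners)
            {
                verts[v].x += w * (cx - verts[v].x);
                verts[v].y += w * (cy - verts[v].y);
            }
        }
    }

Repeating a pass like this a few times over the grid from the previous step yields the organic-looking mesh shown here.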


This is already very close to what you would get from the relaxed Voronoi phase. The rest of the generation process operates over the vertices in this mesh and transfers information from one point to another using the edges connecting vertices.

With the simulation scaffolding ready, the first actual step in creating the land mass is to define its boundaries. The system allows a user to input a shape, in case you were looking for that heart-shaped continent, but if no shape is provided a simple multiresolution fractal is used. This is a fairly simple stage, where vertices are classified as "in" or "out". The result is the continent shoreline:
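For the default case, a thresholded fractal is enough to classify vertices. The sketch below uses a cheap hash-based value noise and a radial falloff so the land mass stays away from the map edges; the noise, octave count and threshold are all placeholder choices, not the ones used in the system:

    #include <cmath>
    #include <cstdint>

    // Cheap hash-based value noise in [0, 1]; good enough for a sketch.
    float hash2(int x, int y)
    {
        uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h ^ (h >> 16)) / 4294967295.0f;
    }

    float valueNoise(float x, float y)
    {
        int xi = (int)std::floor(x), yi = (int)std::floor(y);
        float fx = x - xi, fy = y - yi;
        float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
        float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
        return (a * (1 - fx) + b * fx) * (1 - fy) +
               (c * (1 - fx) + d * fx) * fy;
    }

    // Multiresolution fractal: a few octaves of value noise.
    float fractal(float x, float y, int octaves)
    {
        float sum = 0.0f, amp = 0.5f, freq = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            sum += amp * valueNoise(x * freq, y * freq);
            amp *= 0.5f;
            freq *= 2.0f;
        }
        return sum;  // roughly in [0, 1)
    }

    // A vertex is "in" (land) if the fractal, attenuated by the distance
    // to the map center, stays above a threshold. Positions are in [0,1]x[0,1].
    bool isLand(float x, float y)
    {
        float dx = x - 0.5f, dy = y - 0.5f;
        float falloff = std::sqrt(dx * dx + dy * dy) * 2.0f;  // 0 at center, ~1 at edge
        return fractal(x * 8.0f, y * 8.0f, 4) - 0.5f * falloff > 0.25f;
    }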


Once we have this, we can compute a very important bit of information that will be used over and over later during the generation: the distance to the shoreline. This is fairly quick to compute thanks to the fact that we operate in mesh space. For those triangle edges that cross the shoreline we set the distance to zero, for edges connected to these the distance is +1, and so on. It is trivial to produce a signed distance if you add for edges on the mainland and subtract for edges in the ocean.
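In graph terms this is just a breadth-first traversal over the mesh. The sketch below stores the distance per vertex rather than per edge, which is equivalent for illustration purposes; the adjacency list and land flags are placeholder inputs:

    #include <climits>
    #include <queue>
    #include <vector>

    // Signed distance to the shoreline in edge hops: 0 at the shore,
    // positive on land, negative in the ocean. 'neighbors' is the vertex
    // adjacency list of the simulation mesh.
    std::vector<int> shoreDistance(const std::vector<std::vector<int>>& neighbors,
                                   const std::vector<bool>& isLand)
    {
        const int n = (int)neighbors.size();
        std::vector<int> hops(n, INT_MAX);
        std::queue<int> frontier;

        // Seed the traversal with every vertex that touches the other side.
        for (int v = 0; v < n; ++v)
            for (int u : neighbors[v])
                if (isLand[u] != isLand[v]) { hops[v] = 0; frontier.push(v); break; }

        // Breadth-first flood: each hop away from the shore adds one.
        while (!frontier.empty())
        {
            int v = frontier.front(); frontier.pop();
            for (int u : neighbors[v])
                if (hops[u] == INT_MAX) { hops[u] = hops[v] + 1; frontier.push(u); }
        }

        // Apply the sign: add on the mainland, subtract in the ocean.
        std::vector<int> dist(n);
        for (int v = 0; v < n; ++v)
            dist[v] = isLand[v] ? hops[v] : -hops[v];
        return dist;
    }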

It is time to add some mountain ranges. A naive approach would be to use the distance to shore to raise the ground, but this would be very unrealistic. If you look at some of the most spectacular mountain ranges on Earth, they occur pretty close to coastlines. What is going on there?

It is the interaction of tectonic plates that has produced most of the mountain ranges that have captured our imagination. This process is called orogeny, and there are basically two flavors of it accounting for most mountains on Earth. The first is when two plates collide and raise the ground. This is what gave us the Himalayas. The second is when the oceanic crust (which is a thinner, New-York-pizza-style crust) sinks below the thicker continental crust. This raises the continental crust, producing mountains like the Rockies and the Andes. Both processes are necessary if you want a desirable distribution of mountains in your synthetic world.

Since we already have the shape of the continental land, it is safe to assume this is part of a plate that originated some time before. Moreover, we can assume we are looking at more than one continental plate. This is what you see when you look at northern India: even though it is all a single land mass, three plates meet there: the Arabian, Indian and Eurasian plates.

Picking points fairly inland, we can create fault lines going from these points to the edge of the map. Again this works in mesh space, so it is fairly quick and the results have the rugged nature we initially imprinted into the mesh:

Contrary to what you may think, this is not a pathfinding algorithm. This is your good old midpoint displacement in action. We start with a single segment spanning from the fault source to the edge of the map. This segment, and each subsequent segment, is refined by adding a point in the middle. This point is shifted along a vector perpendicular to the segment by a random amount. It is fairly quick to find which triangles are crossed by the segments, so the fault can be incorporated into the simulation mesh.
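Here is a minimal sketch of that refinement; the number of levels, the amplitude and its decay are placeholder values:

    #include <cstdlib>
    #include <vector>

    struct Point { float x, y; };

    // Midpoint displacement: refine a polyline by splitting every segment at
    // its middle and shifting the new point along the segment's perpendicular.
    std::vector<Point> faultLine(Point source, Point mapEdge, int levels, float amplitude)
    {
        std::vector<Point> line = { source, mapEdge };
        for (int level = 0; level < levels; ++level)
        {
            std::vector<Point> refined;
            for (size_t i = 0; i + 1 < line.size(); ++i)
            {
                Point a = line[i], b = line[i + 1];
                Point mid = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f };
                // Shift the midpoint along the segment's perpendicular by a random amount.
                float offset = amplitude * ((rand() / (float)RAND_MAX) - 0.5f);
                mid.x += -(b.y - a.y) * offset;
                mid.y +=  (b.x - a.x) * offset;
                refined.push_back(a);
                refined.push_back(mid);
            }
            refined.push_back(line.back());
            line = refined;
            amplitude *= 0.5f;  // smaller displacements at finer levels
        }
        return line;
    }

Each segment of the resulting polyline is then tested against the mesh to find the triangles it crosses, which is how the fault gets imprinted into the simulation scaffolding.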

In this particular case the operation has created three main plates, but we are still missing the oceanic plates. These occur somewhat randomly, as not every shoreline corresponds to an oceanic plate. We simulated their occurrence by doing vertex flood fills from selected corners of the map. Here you can see the final set of plates for the continent:
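The flood fill itself is the classic graph traversal. A sketch, assuming the same kind of vertex adjacency list plus a per-vertex plate id (placeholder structures, not the actual Voxel Farm types):

    #include <queue>
    #include <vector>

    // Grow an oceanic plate from a seed vertex (e.g. a map corner), claiming
    // every connected ocean vertex that has not been assigned to a plate yet.
    void floodFillPlate(int seed, int plateId,
                        const std::vector<std::vector<int>>& neighbors,
                        const std::vector<bool>& isLand,
                        std::vector<int>& plate)          // -1 means unassigned
    {
        if (isLand[seed] || plate[seed] != -1) return;
        std::queue<int> frontier;
        plate[seed] = plateId;
        frontier.push(seed);
        while (!frontier.empty())
        {
            int v = frontier.front(); frontier.pop();
            for (int u : neighbors[v])
                if (!isLand[u] && plate[u] == -1)
                {
                    plate[u] = plateId;
                    frontier.push(u);
                }
        }
    }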


The mere existence of plates is not enough to create mountain ranges. They have to move and collide. To each plate we assign a movement vector. This encodes not only the direction, but also the speed at which the plate is moving:


Based on these vectors we can compute the pressure on each vertex and decide how much it should be raised or lowered, resulting in the base elevation map for the continent:
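One plausible way to turn the plate vectors into per-vertex pressure is to look at every mesh edge that crosses a plate boundary and project the relative plate motion onto the edge direction: plates closing in on each other add pressure, plates drifting apart remove it. The sketch below follows that idea and is not necessarily the exact rule used by the system:

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };
    struct Vertex { Vec2 pos; int plate; };   // plate id per vertex

    // Pressure per vertex: for every edge crossing a plate boundary,
    // project the relative motion of the neighboring plate onto the edge.
    // Plates moving toward each other add pressure (uplift); plates
    // moving apart subtract it (rifting).
    std::vector<float> computePressure(const std::vector<Vertex>& verts,
                                       const std::vector<std::vector<int>>& neighbors,
                                       const std::vector<Vec2>& plateMotion)
    {
        std::vector<float> pressure(verts.size(), 0.0f);
        for (size_t v = 0; v < verts.size(); ++v)
            for (int u : neighbors[v])
            {
                if (verts[u].plate == verts[v].plate) continue;
                // Unit direction from this vertex to its neighbor.
                float dx = verts[u].pos.x - verts[v].pos.x;
                float dy = verts[u].pos.y - verts[v].pos.y;
                float len = std::sqrt(dx * dx + dy * dy);
                if (len == 0.0f) continue;
                dx /= len; dy /= len;
                // Relative motion of the neighboring plate with respect to ours.
                Vec2 rel = { plateMotion[verts[u].plate].x - plateMotion[verts[v].plate].x,
                             plateMotion[verts[u].plate].y - plateMotion[verts[v].plate].y };
                pressure[v] += -(rel.x * dx + rel.y * dy);  // closing edges raise pressure
            }
        return pressure;
    }

The pressure can then be smoothed and scaled into the base elevation; how exactly that mapping is done is a design choice of its own.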


All the mountains happened to occur on the south side of the continent. You can see why: this was determined by the blue plate drifting away from the mainland; otherwise we would have had a very different outcome. This will be an interesting place anyway. While the gray-scale image does not show it, the ground where the blue plate begins sinks considerably, creating a massive continent-wide ravine.

Getting the continent shape and mountain ranges is only half the story. Next comes how temperature, humidity and finally biomes are computed. Stay tuned for part two!


17 comments:

  1. Amazing. Can you define your own plate boundaries and directions? What types of variables do you envision for mountain shape?

    1. Yes, you can feed custom plates with their vectors. For mountain shapes we are working on something very cool. It is a system where you submit pictures of the Himalayas and it will try to come up with something that looks like that. I will have a post on this very soon.

    2. This is coming along quite a bit faster than I expected. I dabbled a little in some of the terrain tools that people use for massive heightmaps, and I will admit they are daunting, difficult to use, and often must be used in tandem to create great results. It has also been slow to develop, at least from my vantage point. jeez, people are still using Wilbur for rivers, and it hasn't been supported for over 10 years!

    3. Yes, we are coming from the same experience. World Machine is a great example of defining terrain bottom-up. It is quite versatile but also low-level; it requires significant skill to produce something beyond the provided examples. Our approach is top-down, so you provide the system a portfolio defining the look you want and it will try to approximate it. You will be in the director's chair, asking the system "give me mountains like Montserrat in Spain".

      It is physically-based too, so you will get more water erosion where clouds unload their moisture, and if you want desert-type erosion in a particular zone the inputs you'll have at your disposal will be things like the base temperature and the wind direction. Just like with physically based rendering, it is harder to fake it. I'm not sure if this trade-off is a problem.

      I think this approach will be welcomed by creators who want believable and unique spaces but do not want to spend time cobbling messy circuits of procedural nodes.

    4. "cobbling messy circuits of procedural nodes."

      Yup! I have been on-off following ME-DEM, the project to rebuild a huge 40k? heightmap of Middle Earth. They have been through a huge handful of tools and have spent the better part of a decade trying to make one believable heightmap, and the contortions they have gone through proves there is still a need for a killer app.

    5. I did not know about ME-DEM, looks very good, but yes still not there. Thanks for the reference.

  2. Wow this reminds me a lot of this article:
    http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/

    1. Nah, it has similar intentions but different methods.

    2. Yes it should remind you of it. Amit's contributions to procedural generation permeate all we do :)

  3. "five seconds for a single CPU thread" Why you imposed such restriction? Sure more optimized is better, but I don't know what application would need such requirement. You intend to generate multiples ones, to make a selectable output? In some kind of interactive generation process? If not, it may be something to consider. User input in the middle of the process, to select what shape suits better, before the next step.

    1. Well children need boundaries :)

      This is part of an iterative, creative process. You may change parameters like wind direction to improve on your earlier results, and this should be an enjoyable experience. You do not want to wait minutes for the tool, because then the tool is in control of your workflow and you will avoid experimenting and exploring.

      Also, this can be part of real-time generation. As you explore the world, more of it can be generated. We do not want this generation to take over your CPU.

  4. This comment has been removed by the author.

  5. This comment has been removed by the author.

  6. Why do you compute the shoreline before generating the plates?
    Would it not produce a more natural result to deduce it from the plate movements?
    I would expect a sea with cliffs along the shore where a plate drifts away.

  7. So grateful for your work/blog, I've been a huge fan for a couple years now! I've been building this system myself in C# in Unity.

    I'm having problems right now estimating the sort of density that the (irregular) mesh grid should have. If I understand-- the jaggedness of the fault lines in the screenshots is from sampling the mesh's triangles. It seems very granular, almost like you're using the 2000 x 2000 size you mentioned earlier as being much too slow. So, while I love the idea of avoiding square pixels while evaluating the global methods, I'm very confused as to how this solves the problem of the simulation time, if you're using basically the same number of points.

    I guess I'm personally interested in this b/c I'm running into Unity-specific problems implementing this section myself-- I'm using Unity's built-in Mesh class and its vert limit is giving me problems for mesh grids of even 256x256 points... I don't mean to ask for the entire solution, but I'm hoping I'm just misunderstanding the size of the mesh. If not, I guess I'll need to create some other class to work with the 2D array of points and figure out their connectivity? Sounds like fun! lol

    1. The current version of Voxel Studio will use a 256x256 mesh grid. I believe this is the same value used to produce the images in this post. A mesh in this case is more like a graph of interconnected nodes; a full-fledged Unity mesh is probably overkill. If you have access to the Voxel Farm code, take a look at the FastQuadrics class. This is what we use to handle the meshes involved in continent generation.

    2. Thanks for the response! Oh, only 256x256? hmmm..interesting. I guess I need to write a custom mesh class either way.

      I actually started building my proc-gen system before you'd announced your licensing options, or else I would never have begun implementing it all myself LOL. But now I feel like it'd make things complicated legally to license VF and actually be able to look at your source code, be tempted to copy bits directly, etc.

      Not sure how that all works legally, I'm a very novice dev. We all owe you anyways just from reading this whole blog and all the linked white-papers-- it's the only reason I understand this stuff at all!
