
Wednesday, March 11, 2015

Population Explosion

I described in an earlier post my algorithm for placing the capital city in each of my countries. Expanding on that, I wanted to gather some more information about the make-up of each country so I can make more informed decisions about further feature creation and placement.

One key metric is the country's population, as knowing how many people live in each country is key to deciding not just how many settlements to place but also how big they should be. Until this is known much of the rest of the planetary infrastructure cannot be generated, as so much of it is dependent upon the needs of these conurbations and their residents.

World population density by country in 2012.
(Image from Wikipedia)
A completely naive solution would be to simply choose a random number for each country, but this would lead to future decisions that are likely to be completely inappropriate for both the constitution of the country's terrain and its geographical location. A step up from completely random would be to weight the random number by the country's area so the population was at least proportional to the size of the country, but again this ignores key factors such as the terrain within the country - a mountainous or desert country, for example, is likely to have a lower population density than lush lowlands.

To try to account for the constitution of the country's physical terrain rather than just use the area, I instead create a number of sample points within the country's border and test each of these against the terrain net. As mentioned in a previous post, intersecting against the triangulated terrain net produces points with weights for up to three types of terrain depending on the terrains present at each corner of the triangle hit. By summing the weights for each terrain type found at each sample point's intersection I end up with a total weight for each type of terrain, giving me a picture of the terrain make-up across the entire country. I can tell, for example, that a country is 35% desert, 45% mountains and 20% ocean.
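
To make the aggregation concrete, here's a minimal C++ sketch of that weight-summing step under some assumptions of my own: the TerrainType enum and TerrainSample struct are hypothetical stand-ins for whatever the real terrain net intersection returns.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical terrain types - the real project has its own set.
enum class TerrainType { Ocean, Lowland, Hilly, Mountain, Desert, Count };

// One sample point's intersection with the terrain net: up to three
// terrain types with weights that sum to one.
struct TerrainSample {
    std::array<TerrainType, 3> types;
    std::array<float, 3> weights;
};

// Sum the weights over every sample and normalise to get the fraction of
// the country covered by each terrain type.
std::array<float, static_cast<std::size_t>(TerrainType::Count)>
CompositionFromSamples(const std::vector<TerrainSample>& samples)
{
    std::array<float, static_cast<std::size_t>(TerrainType::Count)> totals{};
    float grandTotal = 0.0f;
    for (const TerrainSample& s : samples) {
        for (std::size_t i = 0; i < 3; ++i) {
            totals[static_cast<std::size_t>(s.types[i])] += s.weights[i];
            grandTotal += s.weights[i];
        }
    }
    if (grandTotal > 0.0f) {
        for (float& t : totals) t /= grandTotal;  // e.g. 0.35 desert, 0.45 mountain, 0.20 ocean
    }
    return totals;
}
```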

This is of course just an estimate due to the nature of sampling, but the quality of the estimate can easily be controlled by varying the distance between sample points at the expense of increased processing time. Given a set number of samples, however, the quality of the estimate can be maximised by ensuring the points chosen are as evenly distributed across the country as possible.

The number of samples chosen for the country is calculated by dividing its area by a fixed global area-per-sample value to ensure the sampling is as even across the planet's surface as possible - currently I'm using one sample per 2000 square kilometres. Once the number is known, the area of each triangle in the country's triangulation is used to see how many sample points should be placed within that triangle. Any remainder area left over after that many samples' worth of area has been subtracted is propagated to the area of the next triangle - this also applies if the triangle is too small to have even a single sample point allocated to it, to make sure its area is still accounted for.
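
A rough sketch of that allocation loop, again with hypothetical types (Triangle here is just an area holder); the key detail is the leftover area carried from one triangle to the next so small triangles are still accounted for.

```cpp
#include <cstddef>
#include <vector>

struct Triangle { double areaKm2; };  // hypothetical stand-in for the real triangle type

// Decide how many sample points each triangle receives, carrying any remainder
// area forward so nothing is lost to rounding or to triangles smaller than the
// area-per-sample threshold.
std::vector<int> AllocateSamples(const std::vector<Triangle>& tris,
                                 double areaPerSampleKm2 = 2000.0)
{
    std::vector<int> counts(tris.size(), 0);
    double carriedArea = 0.0;
    for (std::size_t i = 0; i < tris.size(); ++i) {
        const double available = tris[i].areaKm2 + carriedArea;
        counts[i] = static_cast<int>(available / areaPerSampleKm2);
        carriedArea = available - counts[i] * areaPerSampleKm2;  // remainder propagates
    }
    return counts;
}
```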

If a triangle is big enough to have sample points allocated within it, those points are randomly positioned using barycentric co-ordinates in a similar manner to how the capital cities were placed. There is nothing in this strategy to prevent samples from adjacent triangles from falling close to each other, but in general I am happy that it produces an acceptably even distribution with a quality/performance trade-off that can easily be controlled by varying the area-per-sample value.
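
For reference, uniformly picking a random point inside a triangle via barycentric co-ordinates can be done with the standard square-root trick; this isn't necessarily the exact code used for the capitals, just the usual approach.

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

static Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Uniformly distributed random point inside triangle (a, b, c).
Vec3 RandomPointInTriangle(const Vec3& a, const Vec3& b, const Vec3& c, std::mt19937& rng)
{
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    const double r1 = std::sqrt(dist(rng));  // sqrt keeps the distribution uniform over area
    const double r2 = dist(rng);
    const double u = 1.0 - r1;
    const double v = r1 * (1.0 - r2);
    const double w = r1 * r2;                // u + v + w == 1
    return u * a + v * b + w * c;
}
```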

So given the proportion of each type of terrain making up a country, how do I turn that into a population value? I assign a number of people per square kilometre to each type of global terrain, multiply that by the area of the country and then by the weight for that type of terrain to get the number of people living in that type of terrain in the country, then finally sum these values across the terrain types to get the total number of people in the entire country.
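
Putting it together, the population calculation is just a weighted sum over the composition; the density figures in this sketch are placeholders rather than the values from the table below, and the terrain types are the same hypothetical ones as in the earlier sketch.

```cpp
#include <array>
#include <cstddef>

// Same hypothetical terrain types as in the earlier composition sketch.
enum class TerrainType { Ocean, Lowland, Hilly, Mountain, Desert, Count };

// Placeholder people-per-km² figures purely for illustration - the real values
// are in the table that follows.
constexpr std::array<double, static_cast<std::size_t>(TerrainType::Count)> kDensityPerKm2 =
    { 0.0 /*Ocean*/, 60.0 /*Lowland*/, 40.0 /*Hilly*/, 10.0 /*Mountain*/, 5.0 /*Desert*/ };

// population = sum over terrain types of (density * country area * terrain weight)
double CountryPopulation(
    double countryAreaKm2,
    const std::array<float, static_cast<std::size_t>(TerrainType::Count)>& composition)
{
    double population = 0.0;
    for (std::size_t t = 0; t < composition.size(); ++t)
        population += kDensityPerKm2[t] * countryAreaKm2 * composition[t];
    return population;
}
```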

I've produced these people-per-km² values largely by using the real world figures for population density found on Wikipedia as a basis. Using figures a year or two old, the Earth has an approximate average population density of just 14 people per square kilometre when you count the total surface area of the planet, but this rises to around 50 people per square kilometre when you count just the land masses.

As shown on the infographic above however, the huge variety of factors influencing real world population density lead to some areas of the planet being very sparsely populated with only a couple of people per square kilometre while other areas have 1000+. There isn't enough information present in my little simulation to reflect this diversity yet, however, so I'm currently working with a far more restricted range based around the real world mean. The values I am currently using are:

Figures for how many "people" to create per square kilometre for each terrain type 
So given the fairly random distribution of terrain on my planet as it now stands, what sort of results drop out of this? Currently the terrain looks like this:

The visual planetary terrain composition, currently the ocean terrain type has a slightly higher weighting so there is more water surface area than any other single terrain type  
while making 100 countries produces the figures shown here

Terrain composition for a number of the countries along with their resultant population densities by total area and by landmass area
As can be seen, basing the terrain population densities on real world values has generated a planetary population not too different to our own, but as my planet is currently 24% water rather than the 70% or so found on Earth the actual densities are probably on the low side. The global terrain composition is currently made up like this:

Proportion of the planet covered by each terrain type and resultant contribution to the planet's population

What is interesting is that the sampling strategy ensures the population count properly reflects the proportion of the planet that is land mass - you can see that countries such as Sralmyras, which is 32% water, have a lower population density by area of just 13, while Kazilia, which is 7% water, has a density of 17 people per km².

With a population count in hand, my plan is now to use it in conjunction with the terrain composition profile to derive a settlement structure for each country, so I know how many villages, towns and cities to create. Watch this space.

Saturday, March 7, 2015

Derivative Mapping

I've been somewhat unhappy with the lighting on my terrain for a while now, especially the way much of the time it looks pretty flat and uninteresting. It's using a simple directional light for the sun at the moment, along with a secondary perpendicular fill light as a gross simulation of sky illumination plus a little bit of ambient to eliminate the pure blacks. Normals are generated and stored on the terrain vertices, but even where the landscape is pretty lumpy they vary too slowly to provide high frequency light and shade, and with no shadowing solution currently implemented there's not much visual relief evident, leading to the unsatisfying flatness of the shading.

While I plan to add shadowing eventually I don't want to open that can of worms quite yet, so I had a look around for a simpler technique that would add visual interest. I thought about adding normal mapping, but the extra storage required for tangent and binormal information on the vertices put me off as my vertices are already fatter than I would like. While looking into this however I came across a blog post by Rory Driscoll from 2012 discussing Derivative Mapping, itself a follow up to Morten Mikkelsen's original work on GPU bump mapping. This technique offers a similar effect to normal mapping but perturbs the surface normal using either screen space derivatives calculated from a single channel height map texture, or the pre-computed texture space derivatives of that height map stored as a two channel texture.
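
For reference, the core of the perturbation (as set out in the posts linked above) boils down to Mikkelsen's surface gradient construction. Here it is paraphrased as plain C++ rather than the actual HLSL, with the screen space derivatives passed in as arguments instead of coming from ddx/ddy - treat it as a sketch of the maths, not my shader.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 Normalize(const Vec3& v)
{
    const float len = std::sqrt(Dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Perturb a surface normal from height derivatives without any stored tangent basis.
// sigmaS/sigmaT are the screen space derivatives of the surface position (ddx/ddy in
// the shader) and dBs/dBt the matching screen space derivatives of the height value.
Vec3 PerturbNormal(const Vec3& n, const Vec3& sigmaS, const Vec3& sigmaT,
                   float dBs, float dBt)
{
    const Vec3 r1 = Cross(sigmaT, n);
    const Vec3 r2 = Cross(n, sigmaS);
    const float det = Dot(sigmaS, r1);
    const float sign = det >= 0.0f ? 1.0f : -1.0f;
    const Vec3 surfGrad = sign * (dBs * r1 + dBt * r2);  // the "surface gradient"
    return Normalize(std::fabs(det) * n - surfGrad);
}
```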

This was appealing to me not just because I hadn't used the technique before and was therefore interested to try it out, but also because from an implementation point of view I would not have to pre-compute or store anything on the vertices to define the tangent space required for normal mapping, saving a considerable amount of memory given the density of my geometry. It also solves the problem of defining said tangent space consistently across the surface of a sphere, a non-trivial task in itself.

Thanks to the quality of the posts mentioned above it was a relatively quick and easy task to drop this into my terrain shader. I started with the heightfield based version as, with the tools I had available, creating a heightfield was easier than a derivative map. While it worked, the blocky artifacts caused by the constant height derivatives where the heightmap texels were oversampled were very visible, especially as the viewpoint moved closer to the ground. I could have increased the frequency of the mapping to reduce this, but when working at planetary scales at some point they are always going to reappear. To get round this I moved on to the second technique described, where the height map derivatives are pre-calculated in texture space and given to the pixel shader as a two channel texture rather than a single channel heightmap. The principle here is that interpolating the derivatives directly produces a higher quality result than interpolating heights then computing the derivative afterwards. I had to write a little code to generate this derivative map as I didn't have a tool to hand that could do it, but it's pretty straightforward.
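
That generator isn't shown here, but a minimal version is just central differences over the tiling heightfield, along these lines (hypothetical image types of my own; single channel heights in, two channel derivatives out):

```cpp
#include <cstddef>
#include <vector>

// A tiling single-channel heightfield, values in [0, 1].
struct HeightMap {
    int width = 0, height = 0;
    std::vector<float> texels;          // row-major, width * height entries
    float At(int x, int y) const {      // wrap so the map stays tileable
        x = (x % width + width) % width;
        y = (y % height + height) % height;
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Two-channel output: X derivative in red, Y derivative in green.
struct DerivativeMap {
    int width = 0, height = 0;
    std::vector<float> texels;          // 2 floats per texel
};

DerivativeMap BuildDerivativeMap(const HeightMap& h, float bumpScale = 1.0f)
{
    DerivativeMap d;
    d.width = h.width;
    d.height = h.height;
    d.texels.resize(static_cast<std::size_t>(h.width) * h.height * 2);
    for (int y = 0; y < h.height; ++y) {
        for (int x = 0; x < h.width; ++x) {
            // Central differences, scaled so the derivatives are per unit of texture space.
            const float dhdx = 0.5f * (h.At(x + 1, y) - h.At(x - 1, y)) * h.width;
            const float dhdy = 0.5f * (h.At(x, y + 1) - h.At(x, y - 1)) * h.height;
            const std::size_t i = (static_cast<std::size_t>(y) * h.width + x) * 2;
            d.texels[i + 0] = dhdx * bumpScale;  // red channel
            d.texels[i + 1] = dhdy * bumpScale;  // green channel
        }
    }
    return d;
}
```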

Although this takes twice the storage and a little more work in the shader, the results were far superior in my context, with the blocky artifacts effectively removed and the effect under magnification far better.


A desert scene as it was before I started.  With the sun overhead there is very little relief visible on the surface
The same view with derivative mapping added.  The surface close to the viewpoint looks considerably more interesting and the higher frequency surface normal allows steeper slope textures to blend in
As you can see here the difference between the two versions is marked, with the derivative mapped normals showing far more variation as you would expect.  The maps I am using look like this:


The tiling bump map I am using as my source heightfield
The derivative map produced from the heightfield.  The X derivatives are stored in the red channel and the Y in the green.


Here is another example this time in a lowlands region:


A lowlands scene before derivative mapping is applied
The same scene with derivative mapping.
Particularly evident here is the additional benefit of higher frequency normals: it is the derivative-mapped normal, not the geometric one, that is being used to decide between the level ground texture (grass, sand, snow etc.) and the steep slope texture (grey/brown rock) on a per-pixel basis. This produces the more varied surface texturing visible in the derivative mapped version of the scene above.

Finally, here are a couple more examples, one a mountainous region the other a slightly more elevated perspective on a mixed terrain:


Mountain scene before derivative mapping
The same scene with the mapping applied, the mountain in the foreground shows a particularly visible effect 
A mixed terrain scene without mapping
The same scene with the derivative mapping applied.  The boundaries between the flat and steeply sloped surface textures in particular benefit here with the transitions being far smoother
To make it work around the planet a tri-planar mapping is used, with the normal for each plane being computed then blended together exactly as for the diffuse texture. For a relatively painless effect to implement I am very pleased with the result; the ease of use however is entirely down to the quality of the blog posts I referenced, so definite thanks go to Rory and Morten for that.

Monday, March 2, 2015

In Deep Water

I've been thinking about water, particularly how the oceans of a planet affect the evolution of the people living on it, their choices for habitation, agriculture and infrastructure.  Of course there are many other factors that influence these things but oceans seem like a good place to start, and with 71% of the Earth covered by them there is certainly plenty of reference material about.

I had a couple of thoughts about how to procedurally generate the water masses for my planet. The most straightforward, and probably most commonly used, is to simply decide upon an elevation to define as the sea level, with everything generated by the noise functions under that counting as water. You can then render the water at that elevation and the GPU's Z buffer will sort out the intersection with the land. While I still want the simplicity and benefits of an established sea level, I wanted to have a look at making water region generation more integral to the overall planet's procedural system rather than it being treated essentially as a post process.

Eventually it would be nice to have bodies of water at different elevations so mountain lakes, tarns and similar could be created, preferably with rivers and streams connecting them to each other, to waterfalls and to larger bodies of water, but that's all for the future.

The TL;DR version: this video shows the effect I'm going to describe here as it now stands:



To make water body creation part of the terrain system, the heightfield generation itself has to be aware of the presence of water, so there needs to be a way to define where the water should go. My first attempt at this used the country control net, as I thought the country shapes would make pretty decent seas and, by combining a couple of adjacent ones, some reasonable oceans. By creating the countries as normal then simply flagging some of them as water such regions can be established; the heightfield generator can then ray-cast the point under consideration against the country net and, if it finds it is within a water "country", use a noise function appropriate for sea beds that will return low altitude points below the global sea level.

When I tried this however a couple of points became apparent. Firstly, having to ray cast against the country net's triangulation slows down the heightfield generator, which is significant when it has to be run so frequently to generate the terrain; while optimisation might mitigate some of this cost, I felt a more significant problem was the regularity of the water zones created. With each being formed from a single country there were, for example, no islands generated, which is quite a big drawback and for me pretty much discounted this approach.

Instead of the country net then, how about using the terrain net? By adding a terrain type of Ocean - one that generates a plausible seabed heightfield lying below sea level - to the existing terrain types such as mountain, hilly and desert, deep water regions can be formed using the same controllable system employed for those other types of terrain. The turbulence function described previously that perturbs the terrain type determination will then also affect the water region borders, creating some interesting swirls and inlets.

The blending of mountainous or hilly terrain into the seabed generator around the transition areas also produces plenty of islands of various sizes.


Islands formed by perturbing the ray cast against the terrain net
There are some drawbacks to using the terrain net instead of the country one, however: there is nothing, for example, to prevent an entire country from ending up under water, or, possibly more problematically, the vast majority of a country could be under water, leaving an implausibly small section remaining that simply would not really exist in the real world. On balance however I thought this was the better of the two systems, so am going with it for now.


Rendering Water

Rendering realistic water is a long-standing challenge for real time graphics, and one that I've looked at myself from time to time in my various forays into terrain generation and rendering. For this project I thought a good place to start with generating the actual geometry for the water surface would be to use essentially the same system as I already use for the terrain, namely a subdivided icosahedron.

The existing triangular patch generator can easily be extended to determine whether any vertices in the patch are below sea level, and if so a second set of vertices for the ocean level surface patch is generated. These vertices represent the triangular area of the surface of a sea-level-radius sphere centred on the centre of the planet and encompassed by the patch in question. Although the plan is to have realistic waves on the water, these will be generated by the vertex shader displacing the vertices, so a smooth sphere is enough to start with.
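
A sketch of that patch extension under assumed types of my own: any patch with a vertex below the sea level radius gets a companion set of vertices projected onto the smooth sea level sphere.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double Length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3 Scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }

// Given the terrain vertices of a patch (planet-centred coordinates), produce the
// matching water surface vertices if any part of the patch lies below sea level.
// Returns an empty vector when the whole patch is dry land.
std::vector<Vec3> BuildWaterVertices(const std::vector<Vec3>& terrainVerts,
                                     double seaLevelRadius)
{
    bool anyBelowSeaLevel = false;
    for (const Vec3& v : terrainVerts) {
        if (Length(v) < seaLevelRadius) { anyBelowSeaLevel = true; break; }
    }
    if (!anyBelowSeaLevel) return {};

    std::vector<Vec3> waterVerts;
    waterVerts.reserve(terrainVerts.size());
    for (const Vec3& v : terrainVerts) {
        // Project every vertex of the patch onto the smooth sea level sphere;
        // the waves displace these later in the vertex shader.
        waterVerts.push_back(Scale(v, seaLevelRadius / Length(v)));
    }
    return waterVerts;
}
```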


Wireframe of the water surface before being perturbed in the vertex shader using the displacement of the simulated water surface
The level of detail system described previously can also be leveraged to decide which water geometry patches to render by simply running it in parallel on both the terrain patches and the water ones - a different distance scalar can also be used for the water patches enabling lower detail geometry to be used for the water potentially improving performance as long as the visual quality doesn't suffer too much.

Even though the identical geometric topology allows the water patches to use the same index buffer as the terrain, there is a more significant optimisation opportunity here that would save a lot of memory. With each water patch representing an essentially identically shaped section of the sphere's surface at that level of detail, they could actually all be drawn using a single vertex buffer with a suitable transformation matrix to put each one at the correct position and orientation. Setting this up is a little fiddly so I'm leaving it for a future task, but should memory become an issue it's likely to be one of the first things I'll revisit.

As for the actual water visuals, the movement and appearance of water is notoriously difficult to simulate, especially as the real thing behaves so radically differently in different situations - shallow water is totally different to deep, for example, and white water rapids are completely at odds with languid meandering rivers. To make such a difficult problem manageable I chose to focus on just one aspect, and when talking at planetary scales deep water seemed like the logical choice - the oceans created as described above will dominate any other water features I add in the future.

There are quite a few references and demos for deep water rendering, but the one I chose was the NVIDIA "Ocean" demo from their now superseded graphics SDK, which is in turn based on such classic work as Jerry Tessendorf's paper "Simulating Ocean Water". I liked this demo as it is well documented and, being GPU based, heavily targeted at real time graphics.

This demo uses a couple of compute shaders to perform the necessary FFT, followed by a couple of pixel shaders to produce two 2D texture maps: one storing the displacement for the vertices over the square patch of water, and the other storing the 2D gradients from which surface normals can be calculated plus a 'folding' value useful for identifying the wave crests.


High LOD wireframe showing how the smooth sphere vertices have been displaced to create the waves.  Note that normally a lower LOD version is used as the majority of the shading effect comes from the normals computed in the pixel shader rather than the wave shapes produced in the geometry.
The displacement map is fed into the water geometry's vertex shader to displace the vertices, creating the waves, while the gradient/folding texture is fed into the pixel shader to allow per-pixel normals to be created for shading and wave crest effects to be added.

Using these textures as the primary inputs, there are a number of shading effects taking place in the pixel shader to give the final ocean effect. Although the NVIDIA demo produces a nice output I decided to largely ignore its final pixel shader, as its use of a pre-generated cube map for the sky colour along with a simulated sun stripe was a bit too hard coded for my needs. Instead I fiddled around some and came up with my own shading system.

Firstly, the gradient texture is used to compute a surface normal, which is then used with the view vector to calculate the reflection vector (using the HLSL reflect function). I then create a ray from the surface point along that reflection vector and run it through the same atmospheric scattering code that is used for rendering the skybox; this gives me the colour of the sky reflected from that point and ensures that the prevailing sky conditions at that location at that time of day are represented. This is important for effects such as sunsets to be represented in the water, but even at less dramatic times of the day it gives the water some nice variegated shades of blue.


Bright early morning sun reflected in the water
The last traces of sunset bounce off the ocean surface, an effect that would be difficult to achieve with direct illumination
Capturing the sun's contribution in this unified manner is especially useful as its size and colour vary so much based upon the time of day; trying to represent that as a purely specular term on water can be a challenge, often ending up with a sun "stripe" that doesn't match the rendered sun - especially near the horizon.

The folding value from the gradient texture is used to add a foam effect at the top of the waves to make the water look a bit choppier. The foam texture itself is a single channel greyscale image, with the degree of folding controlling what proportion of the greyscale range is used. A low folding value, for example, would cause just the brightest shades of the foam texture to be used, while a high one would cause most or all of the greyscale to be present, producing a much stronger foam component in the final effect. A global "choppyness" value is also used to drive the water simulation, which affects how perturbed the surface normals are in addition to introducing more foam - this value can be changed dynamically to vary the ocean from a millpond to a roiling foamy mass:


A fairly low "Choppyness" value produces pleasing wave crests and moderate surface peturbation
A higher "Choppyness" produces more agitated wave movement, sharper surface relief and considerably more foam.
In addition to foam at the wave crests I also wanted foam in evidence where the water meets the land. To accomplish this a copy of the depth buffer is bound as a pixel shader input and the depth of the pixel being rendered is compared against it, producing a depth value representing the distance between the water surface and the terrain already rendered to that pixel. This depth delta is not just added to the foam amount to render but is also used to drive the alpha component, letting the water fade out in the shallows where it meets the land. This can be seen in both the images above where the water meets the land.
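
A minimal sketch of that shore fade, assuming linear view space depths and entirely made-up fade/foam distances:

```cpp
#include <algorithm>

// Shore fade and foam from the difference between the water surface depth and the
// terrain depth already in the depth buffer (both as linear view space distances here).
struct ShoreShading { float alpha; float extraFoam; };

ShoreShading ShoreFromDepthDelta(float terrainViewDepth, float waterViewDepth)
{
    constexpr float kFadeDistance = 5.0f;   // distance over which the water fades out
    constexpr float kFoamDistance = 10.0f;  // distance over which shoreline foam appears

    const float delta = std::max(terrainViewDepth - waterViewDepth, 0.0f);
    ShoreShading out;
    out.alpha     = std::clamp(delta / kFadeDistance, 0.0f, 1.0f);        // transparent in the shallows
    out.extraFoam = 1.0f - std::clamp(delta / kFoamDistance, 0.0f, 1.0f); // most foam right at the shore
    return out;
}
```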

The benefit of using the screen space depth delta rather than a pre-computed depth value stored on the vertices is that it reacts dynamically both to the movement of the vertices driven by the water surface displacement map and to anything else that penetrates the water surface. The latter can't be seen just yet other than where the water geometry meets the terrain, as I don't have any such features, but in the future should I have ships, jetties or gas/oil rigs the alpha/foam effect will simply work around where they intersect the water, helping them feel more grounded in situ.


Problems of scale

As mentioned above, one of the fundamental problems with water rendering is that it behaves and appears so radically differently depending on its situation, but another problem, especially with planetary scale viewpoints, is how it appears from different distances. The 512x512 surface simulation grid I'm using looks good close up, but simply tiling it produces unsightly repeating patterns when viewed from larger distances.



Not only does the limited simulation area become very apparent, but the higher frequency of the surface normal variation produces very visible aliasing in both the reflection vector used to compute the water colour and the wave crest effect, producing unsightly sparkling in the rendered image.

Rather than simply increase the simulation area, which would produce only a limited improvement and incur increased simulation cost, I instead vary the frequency at which I sample the simulated surface with distance. The pixel shader uses the HLSL partial derivative instructions to determine an approximate simulation-texture-to-screen-pixel ratio, then scales the texture co-ordinates to obtain an equally approximate 1:1 mapping. This effectively causes the simulation surface to cover increasingly large areas of the globe as the viewpoint moves further away.

This is in no way physically accurate, but it produces a more pleasing visual effect than the aliasing, and by blending between adjacent scales a smooth transition can be achieved to hide what would otherwise be a very visible transition line between scales - an effect very similar to a mip line when trilinear or anisotropic filtering is not being used. Take a look at the second half of the video above to see how this looks in practice as the viewpoint moves from near the surface all the way out into space.
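
The scale selection and blending might look something like this expressed on the CPU side - the real version lives in the pixel shader and takes its derivatives from ddx/ddy, and the names here are mine. Each of the two adjacent scales would be sampled and then lerped by the blend factor.

```cpp
#include <algorithm>
#include <cmath>

// Pick a power-of-two scale for the simulation texture co-ordinates so that one
// simulation texel maps to roughly one screen pixel, plus a blend factor towards
// the next scale up so the transition between scales can be smoothed.
// uvPerPixel is the magnitude of the screen space derivative of the simulation UVs;
// texSize is the simulation resolution (e.g. 512).
struct SimScale { float scale; float blendToNext; };

SimScale ChooseSimulationScale(float uvPerPixel, float texSize)
{
    // texelsPerPixel > 1 means the simulation texture is being minified and the
    // UVs should be scaled down so it covers a larger area of the globe.
    const float texelsPerPixel = std::max(uvPerPixel * texSize, 1.0f);
    const float level = std::log2(texelsPerPixel);  // continuous "mip"-style level
    const float lowerLevel = std::floor(level);
    SimScale out;
    out.scale = std::exp2(lowerLevel);              // divide the UVs by this
    out.blendToNext = level - lowerLevel;           // 0..1 blend towards the next scale
    return out;
}
```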

There is more to the effect than just scaling the simulated water surface to cover larger and larger areas though: while this eliminates most of the tiling artifacts it also inevitably makes all water features larger, which can look increasingly unrealistic. To alleviate this undesirable consequence, certain aspects of the effect are toned down with increasing distance from the viewpoint. The first is the per-pixel normal calculated from the simulation's gradient texture, where the gradient's effect is scaled down to produce less variation with distance, making the resultant variations in reflected sky colour more subtle; the second is the foam wave crest effect, which is also scaled down and ultimately removed with distance:


More distant view showing how the wave crests peter out and the water adopts a smoother aspect with distance

Combining these helps make distant water a bit more appropriately indistinct.  


Making it all 3D

The final challenge with the water was taking the two dimensional result of the simulated water surface and applying it to my three dimensional planet. There are a variety of ways to map 2D squares to the surface of a sphere, but each has major trade-offs involving distortion in some way or other. I decided to keep it simple and use the same triplanar projection system used for texturing the terrain - i.e. using the 3D world space position of the point being shaded to sample the texture in the XZ, XY and ZY planes, then using the surface normal to blend between them.
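
For completeness, the triplanar blend weights are just the components of the normal raised to a sharpening power and renormalised; a small sketch, with the sharpness exponent being an arbitrary tuning value rather than anything from my shader.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Triplanar blend weights from the (normalised) surface normal: the more squarely
// a projection plane faces the surface, the more its sample contributes.
Vec3 TriplanarWeights(const Vec3& n, float sharpness = 4.0f)
{
    Vec3 w = { std::pow(std::fabs(n.x), sharpness),
               std::pow(std::fabs(n.y), sharpness),
               std::pow(std::fabs(n.z), sharpness) };
    const float sum = w.x + w.y + w.z;
    return { w.x / sum, w.y / sum, w.z / sum };  // weights for the ZY, XZ and XY plane samples
}
```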

The only trick here is to make sure the directions of the normals from the gradient map are consistent so they are oriented correctly for the plane they are being applied to; get this wrong and the sun and sky will not reflect properly in the water.


Next Steps

I'm pretty happy with the water effect now. It still has some artifacts, but I feel I've spent enough time on it for now and the effect is good enough not to irritate me. Next I think is integrating its placement more into the geographic terrain generator, especially in the area of trying to make interesting coastlines, along with processing its impact on the countries themselves.