Friday, November 5, 2010

Continuing the theme of procedurally generated planets, I’ve started a new project called Osiris (named after the Egyptian god of the afterlife, the underworld and the dead), a new experiment to see how far I can get having code create a living, breathing world.

Although my previous projects Isis and Geo were both in this vein, I felt they each had such significant limitations in their underlying technology that it was better to start again with something fresh.  The biggest difference between Geo and Osiris is that where the former used a completely arbitrary voxel-based mesh structure for its terrain, Osiris uses a more conventional, essentially 2D tile-based structure.  I decided to do this because I was never able to achieve a satisfactory transition effect between the levels of detail in the voxel mesh, leaving ugly artifacts and, worse, visible cracks between mesh sections, both of which made the terrain look essentially broken.

After spending so much time on the underlying terrain mesh systems in Isis and Geo, I also wanted to implement something a little more straightforward so I could turn my attention more quickly to the procedural creation of planetary-scale infrastructure: cities, roads, railways and the like, along with more interesting landscape features such as rivers or icebergs.  This really interests me and is immediately more appealing ground for experimentation as it’s not something I have attempted previously.  Although a 2D tile mesh grid system is pretty basic in the terrain representation league table, there is still a degree of complexity to representing an entire planet using any technique, so even that familiar ground should remain interesting.


The first version shown here is the basic planetary sphere rendered using mesh tiles at various LOD levels.  I’ve chosen to represent the planet essentially as a distorted cube, with each face represented by a single 32x32 tile at the lowest LOD level.  While the image below on the left may be suitable as a base for Borg-world, I think the one on the right is the basis I want to pursue...



While mapping a cube onto a sphere produces noticeable distortion as you approach the corners of each face, by generating terrain texturing and height co-ordinates from the sphere’s surface rather than the cube’s I hope to minimise how visible this distortion is.  It also feels like having what is essentially a regular 2D grid to build upon will make many of the interesting challenges to come more manageable.  The generation and storage of data in particular becomes simpler when the surface of the planet can be broken up into square patches, each of which provides a natural container for the data required to simulate and render that area.
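
To make the mapping concrete, here’s a minimal sketch of the kind of cube-to-sphere projection I mean (illustrative C++ rather than the actual Osiris code; the face layout and function names are arbitrary choices of mine): a point on a cube face is simply normalised onto the unit sphere.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Normalise a cube-space position onto the unit sphere.
    static Vec3 CubeToSphere(const Vec3& p)
    {
        double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        return { p.x / len, p.y / len, p.z / len };
    }

    // Map (u, v) in [0, 1] on a given cube face to a point on the unit
    // sphere.  The face ordering here is purely illustrative.
    static Vec3 FacePointToSphere(int face, double u, double v)
    {
        // Remap to [-1, 1] cube space.
        double s = u * 2.0 - 1.0;
        double t = v * 2.0 - 1.0;

        Vec3 cubePos;
        switch (face)
        {
            case 0:  cubePos = {  s,    t,    1.0 }; break; // +Z
            case 1:  cubePos = {  s,    t,   -1.0 }; break; // -Z
            case 2:  cubePos = {  1.0,  s,    t   }; break; // +X
            case 3:  cubePos = { -1.0,  s,    t   }; break; // -X
            case 4:  cubePos = {  s,    1.0,  t   }; break; // +Y
            default: cubePos = {  s,   -1.0,  t   }; break; // -Y
        }
        return CubeToSphere(cubePos);
    }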

At this lowest level of detail (LOD), each face of the planetary cube is represented by a single 32x32-polygon patch.  At this resolution each patch covers about 10,000 km of an Earth-sized planet’s equator, with each polygon within it covering about 313 km.  While that’s acceptable when viewing the planet from a reasonable distance in space, as you get closer the polygon edges start to get pretty damn obvious, so of course the patches have to be subdivided into higher-detail representations.
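
Those numbers fall out of some simple arithmetic: an Earth-sized planet has an equatorial circumference of roughly 40,075 km, each cube face spans a quarter of that (about 10,019 km), and dividing by the 32 polygons across a patch gives roughly 313 km per polygon.  Each subdivision level halves that figure, so a little snippet like this (illustrative only) tabulates the polygon size per LOD level:

    #include <cstdio>

    int main()
    {
        const double equatorKm = 40075.0;          // Earth's equatorial circumference
        const double faceKm    = equatorKm / 4.0;  // each cube face spans a quarter of the equator
        const int    patchRes  = 32;               // polygons across one patch edge

        // Each LOD level splits a patch into four children, halving polygon size.
        for (int lod = 0; lod < 8; ++lod)
        {
            double polyKm = faceKm / (patchRes * (1 << lod));
            std::printf("LOD %d: ~%.1f km per polygon\n", lod, polyKm);
        }
        return 0;
    }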

I’ve chosen to do this in pretty much the simplest way possible to keep the code simple and to maintain a nice robust association between sections of the planet’s surface and the data required to render them.  As the view nears a patch it is recursively divided into four smaller patches, each of which is 32x32 polygons in its own right, effectively halving the size of each polygon in world space and quadrupling the polygonal density.
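
To illustrate the idea, here’s a rough quadtree-style sketch of the split (again illustrative C++ only; the structure, names and split criterion are my guesses rather than the actual Osiris code):

    #include <array>
    #include <memory>

    // A square patch of one cube face, covering [u0,u1] x [v0,v1] in face space.
    struct Patch
    {
        int    face;
        double u0, v0, u1, v1;
        int    lodLevel;
        std::array<std::unique_ptr<Patch>, 4> children;

        bool IsLeaf() const { return children[0] == nullptr; }

        // Split into four children, each 32x32 polygons in its own right,
        // halving the world-space size of each polygon.
        void Split()
        {
            double um = (u0 + u1) * 0.5;
            double vm = (v0 + v1) * 0.5;
            children[0] = std::make_unique<Patch>(Patch{ face, u0, v0, um, vm, lodLevel + 1 });
            children[1] = std::make_unique<Patch>(Patch{ face, um, v0, u1, vm, lodLevel + 1 });
            children[2] = std::make_unique<Patch>(Patch{ face, u0, vm, um, v1, lodLevel + 1 });
            children[3] = std::make_unique<Patch>(Patch{ face, um, vm, u1, v1, lodLevel + 1 });
        }

        void Merge() { for (auto& c : children) c.reset(); }
    };

    // Called each frame: split patches the view is close to, merge ones it has left.
    void UpdateLod(Patch& patch, double viewDistance, double splitThreshold)
    {
        if (patch.IsLeaf())
        {
            if (viewDistance < splitThreshold)
                patch.Split();
        }
        else if (viewDistance >= splitThreshold)
        {
            patch.Merge();
        }
        else
        {
            for (auto& child : patch.children)
                UpdateLod(*child, viewDistance /* per-child distance in practice */, splitThreshold * 0.5);
        }
    }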





Here you can see four stages of the subdivision illustrated.  Normally this would be happening as the view descended towards the planet, but I’ve kept the view artificially high here to illustrate the change in the geometry.  With such a basic system there is obviously a noticeable ‘pop’ when a tile is split into its four children.  This could be improved by geo-morphing the vertices on the child tile from their equivalent positions on the parent tile to their actual child positions, but as the texturing information is stored on the vertices there is going to be a pop as the higher-frequency texturing information appears anyway.  Another option might be to render both tiles during a transition and alpha-blend between them, a system I used in the Geo project with mixed results.
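
For what it’s worth, the geo-morphing option would boil down to something like this (a sketch only; the per-patch morph factor driven by view distance is an assumption on my part, not something Osiris currently does):

    #include <vector>

    struct Vertex { float x, y, z; };

    // Blend each child vertex from its interpolated position on the parent tile
    // towards its true high-detail position as 'morph' goes from 0 to 1.
    void GeomorphPatch(const std::vector<Vertex>& parentPositions, // child vertices sampled from the parent mesh
                       const std::vector<Vertex>& childPositions,  // true child vertex positions
                       float morph,                                // 0 = parent shape, 1 = child shape
                       std::vector<Vertex>& out)
    {
        out.resize(childPositions.size());
        for (size_t i = 0; i < childPositions.size(); ++i)
        {
            out[i].x = parentPositions[i].x + (childPositions[i].x - parentPositions[i].x) * morph;
            out[i].y = parentPositions[i].y + (childPositions[i].y - parentPositions[i].y) * morph;
            out[i].z = parentPositions[i].z + (childPositions[i].z - parentPositions[i].z) * morph;
        }
    }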

LOD transitions are a classic problem in landscape systems, but I don’t really want to get bogged down in that at the moment, so I’m prepared to live with this popping and look at other areas.  It’s a good solid start anyway, I think, and with some basic camera controls set up to let me fly down to and around my planet I reckon I’m pretty well set up for future developments.

2 comments:

  1. Do you just go around in spherical coordinates to get each point, or is there some kind of transformation you can apply to each flat 32x32 patch to stretch it onto its section of the sphere?

  2. The mapping from cube space to the sphere is simply a normalisation of the vector representing the point on the cube.

    So for example the point (x,y,z) on the surface of the cube can be mapped onto the sphere by dividing each of its three co-ordinates by sqrt(x*x + y*y + z*z).

    Moving from tile to tile is accomplished by using 2D (u,v) co-ordinates for the position on the 'current' side of the cube; a lookup table is then used to map (u,v) co-ordinates and direction of travel from one cube face to the next when crossing a side boundary.

