ProcPlanets
JavaScript web application for procedurally generating planets with dynamic LOD.
Erik Martin Vetemaa
Live app
Project Plan
Objective
The main objective of the project is to develop a dynamic level of detail (LOD) algorithm that is compatible with WebGL, and to procedurally generate interesting-looking planets.
Dynamic LOD is achieved by rendering more vertices on parts of the mesh closer to the viewer and vice versa.
Procedural generation is done by using layers of Perlin noise and mapping the values to a sphere.
Ideally the result would look something like this (left - zoomed out, right - zoomed in):
Technology
I chose WebGL because of the simple setup and the possibility to easily demonstrate ProcPlanets on the web.
Three.js to simplify WebGL and make development faster.
Initial timeline
Milestone 1
- Generating a sphere (standard sphere / normalized cube / spherified cube / icosahedron)
- Subdividing faces
- Caching faces and relationships between them (parent <-> child)
Milestone 2
- Dynamically adding faces based on distance between the face’s midpoint and the viewer (a tree structure has to be used to avoid calculating the distance of every individual face every frame)
Milestone 3
- Dynamically removing and re-adding faces based on distance from viewer
Milestone 4
- Mapping Perlin noise (no clue how long this will take)
- Experimentation with noise (layers)
Milestone 5
- Optimization (caching vs regenerating, …)
- Lighting
Milestone 6
- More experimentation with noise (rivers, biomes)
Extra
- Space panorama -> sky panorama
- Other celestial bodies (sun, moon(s))
- Foliage
Milestones
Milestone 1 (08.03)
Planned time 7h, actual time 10h
- Project and repository setup (1h, 1h)
- Generating a sphere (standard sphere / normalized cube / spherified cube / icosahedron) (3h, 3h)
- Subdividing faces (3h, 5h)
- Caching faces and relationships between them (parent <-> child) (2h)
Process
I chose an icosahedron as the base geometry for the planet, because all of its faces are equal in size, which will become useful when I start to generate terrain. The vertices are generated from the corners of three perpendicular golden-ratio rectangles, and the faces are added manually. Detail is increased by adding a vertex at the midpoint of each edge and normalizing the new vertices onto the sphere.
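The construction above can be sketched as follows. This is a minimal illustration, not the project's actual code: the 12 icosahedron vertices are the corners of three mutually perpendicular golden-ratio rectangles, and normalizing an edge midpoint pushes it out onto the unit sphere.

```javascript
const PHI = (1 + Math.sqrt(5)) / 2; // golden ratio

// The 12 vertices of an icosahedron: corners of three perpendicular
// golden-ratio rectangles, one in each coordinate plane.
function icosahedronVertices() {
  const v = [];
  for (const a of [-1, 1]) {
    for (const b of [-PHI, PHI]) {
      v.push([a, b, 0]); // xy-plane rectangle
      v.push([0, a, b]); // yz-plane rectangle
      v.push([b, 0, a]); // zx-plane rectangle
    }
  }
  return v;
}

// Normalizing projects a vertex onto the unit sphere; this is what turns
// a subdivided icosahedron into an icosphere.
function normalize([x, y, z]) {
  const len = Math.hypot(x, y, z);
  return [x / len, y / len, z / len];
}

// Subdivision adds a new vertex at the midpoint of each edge.
function midpoint([ax, ay, az], [bx, by, bz]) {
  return [(ax + bx) / 2, (ay + by) / 2, (az + bz) / 2];
}
```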
The reason subdivision took 2 hours more than expected was that, when subdividing a single face, normalization caused holes in the mesh. This was fixed by normalizing a midpoint vertex only when it is also used by another face, which meant that I needed to start caching vertices.
Face and vertex caches were created to make reusing them possible in the future.
faceCache:
vertexCache:
Result
Currently it is possible to generate an icosphere and subdivide it. The generated faces and vertices are held in corresponding caches.
Milestone 2 (22.03)
Planned time 6h, actual time 10h
- Dynamically increasing LOD when zooming in (6h, 10h)
- Subdividing a face in real-time based on distance from camera (3h, 3h)
- Subdividing faces in a tree structure to increase processing speed (3h, 6h)
- UI for testing and demonstration (was not planned, 1h)
Process
When starting this milestone, an unexpected problem occurred: WebGL does not allow allocating more VRAM for a program after rendering has started. This meant that I could not add more faces to a mesh in real time, which would make subdivision impossible. After searching online I found that I could create a large number of invisible faces at startup and make them visible when necessary. This is not a pretty solution, but it works.
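The preallocation idea can be sketched roughly like this (an illustrative stand-in, not the project's code; in three.js the equivalent would be a fixed-size BufferGeometry whose draw range is extended): the buffer's capacity is fixed at startup, and "adding" a face only fills in part of it and bumps a visible-face counter.

```javascript
// A fixed-capacity face pool: the vertex buffer is allocated once and never
// grows, mirroring the WebGL constraint described above.
class FacePool {
  constructor(maxFaces) {
    this.positions = new Float32Array(maxFaces * 9); // 3 vertices * xyz per face
    this.maxFaces = maxFaces;
    this.visibleFaces = 0; // plays the role of the draw range
  }

  // "Reveal" one preallocated face by writing its vertices into the buffer.
  addFace(a, b, c) {
    if (this.visibleFaces >= this.maxFaces) throw new Error("pool exhausted");
    this.positions.set([...a, ...b, ...c], this.visibleFaces * 9);
    return this.visibleFaces++;
  }
}
```

The downside, as noted, is that the maximum subdivision depth is bounded by whatever capacity is chosen at startup.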
To choose which faces to subdivide in real time, I iterated through every visible face in faceCache and calculated its edge length relative to the viewer; if that was longer than an arbitrarily chosen length (tesselationConstant), the face would be subdivided.
This worked quite well and produced a smooth transition from low to high tessellation.
To improve performance, a new subdivision technique was implemented. Instead of iterating through all visible faces, the algorithm traverses a tree structure, checking whether a face could have any child faces that need subdivision and recursing through the tree until it arrives at the faces that require it. Some changes had to be made to the original tree structure to make this easier (a more detailed description below). The result looks less smooth than the previous solution, and determining which faces need subdivision is less precise. It does seem to be somewhat more efficient, but this needs more rigorous testing. Overall this was more difficult than anticipated and took 3 hours more than planned.
Subdivided mesh after zooming into one point and then zooming out.
Subdivision through iteration - left, subdivision using a tree structure - right.
Tree structure
This is the structure used for subdivision. The first 20 base faces of the icosahedron calculate their edge length relative to the viewer from the face's midpoint distance to the camera and its edge length. Each face then estimates the possible visible edge length of its deepest child by multiplying its own visible edge length by pow(0.5, maxDepth), because every child face's edge length is 2 times smaller than its parent's. If the calculated length is larger than the specified tesselationConstant, the face has a child that needs to be subdivided. The same check is applied to the children recursively until maxDepth equals depth, and that face is then subdivided. After a face has been subdivided, its maxDepth increases by one, and all of its parents' maxDepths have to be updated to match.
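A minimal sketch of that traversal follows. The helper names (makeFace, subdivide, dist) and the simplified maxDepth bookkeeping are inventions for illustration; only tesselationConstant and maxDepth come from the description above, and the sketch reuses the parent's midpoint for children rather than computing real geometry.

```javascript
const dist = ([ax, ay, az], [bx, by, bz]) => Math.hypot(ax - bx, ay - by, az - bz);

function makeFace(midpoint, edgeLength, parent = null) {
  return { midpoint, edgeLength, parent, children: [], maxDepth: 0 };
}

function subdivide(face) {
  for (let i = 0; i < 4; i++) {
    // The real code computes each child's own midpoint and vertices;
    // this sketch reuses the parent's midpoint for brevity.
    face.children.push(makeFace(face.midpoint, face.edgeLength / 2, face));
  }
  face.maxDepth = Math.max(face.maxDepth, 1);
  // Propagate the new depth to every ancestor, as described above.
  for (let f = face; f.parent !== null; f = f.parent) {
    f.parent.maxDepth = Math.max(f.parent.maxDepth, f.maxDepth + 1);
  }
}

function refine(face, camera, tesselationConstant) {
  const visibleLen = face.edgeLength / dist(face.midpoint, camera);
  // The deepest descendant's edge is 2^maxDepth times shorter; if even its
  // projected length is within the threshold, this whole subtree is fine.
  if (visibleLen * Math.pow(0.5, face.maxDepth) <= tesselationConstant) return;
  if (face.children.length === 0) {
    subdivide(face);
    return;
  }
  for (const child of face.children) refine(child, camera, tesselationConstant);
}
```

Because a child's midpoint distance is not exactly its parent's, the estimate is only approximate, which matches the observation that the tree variant is less precise than plain iteration.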
GUI
Some buttons were added to the graphical user interface to make demoing easier, but they also proved to be unexpectedly useful for testing and debugging.
Result
Milestone 3 (05.04)
Planned time 8h, actual time 17h
- Dynamically decreasing LOD when zooming out (3h, 8h)
- Smoother camera control (5h, 9h)
- Exponentially slower zooming when closer to mesh
- Avoid visible clipping
Process
After the last milestone, decreasing LOD was quite straightforward: I used the same tree structure as for increasing LOD and added a minDepth variable to check whether a face has a rendered child whose edge length is too small. However, I ran into a bug that took about 3 hours to fix. It was a simple typo, but debugging JavaScript is really difficult, and the error message led me to believe it had something to do with asynchronous code, which I then spent about an hour learning about. I eventually stumbled upon the typo, but moving forward I will try to learn the best practices for debugging JavaScript and WebGL. The normalization of vertices also had to be reworked to allow denormalization, which I had not thought about and which added 2 hours to the milestone.
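The reverse direction can be sketched like this. It is a simplification under assumptions: the project tracks minDepth per face and hides preallocated faces rather than dropping them, while this sketch just collapses children in place; the helper names are invented.

```javascript
// Edge length of a face as seen by the viewer.
function visibleLen(face, camera) {
  const d = Math.hypot(face.midpoint[0] - camera[0],
                       face.midpoint[1] - camera[1],
                       face.midpoint[2] - camera[2]);
  return face.edgeLength / d;
}

// Walk the subdivision tree and collapse children whose on-screen edge
// length has shrunk below the threshold, re-showing the parent face.
function coarsen(face, camera, tesselationConstant) {
  if (face.children.length === 0) return;
  // Children's edges are half as long; if even they are smaller than
  // needed, the parent alone is detailed enough.
  if (visibleLen(face, camera) * 0.5 < tesselationConstant) {
    face.children = []; // the real code hides the child faces instead
    return;
  }
  for (const child of face.children) coarsen(child, camera, tesselationConstant);
}
```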
Dynamic LOD demo. Tessellation through iteration - left, tessellation using a tree structure - right.
Stats for the GIFs above. FPS - blue, milliseconds used to generate frames during the peak FPS - green.
Camera controls
A lot of time was spent on experimentation to find what controls felt the best to use. I tried many of the premade controls for three.js, but none of them worked well for this project. Orbit controls with alternating targets used on Google Earth seems the best for this type of scene, but implementation would take too much time. So I used the single target orbit controls that I had been using before, but made the zoom and rotation speed relative to the camera distance from the sphere. To make zoom and rotation feel smoother I used an easing function generated on timotheegroleau.com.
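The distance-relative zooming can be illustrated with a small sketch. The cubic ease-out below is a stand-in, since the project's actual easing function came from a generator; the function names are invented.

```javascript
// Scale the zoom step by the camera's height above the sphere, so motion
// slows down exponentially as the camera approaches the surface.
function zoomStep(cameraDist, sphereRadius, baseSpeed) {
  const height = Math.max(cameraDist - sphereRadius, 0);
  return baseSpeed * height; // closer to the surface means smaller steps
}

// A simple easing curve for smoothing zoom and rotation, t in [0, 1].
function easeOutCubic(t) {
  return 1 - Math.pow(1 - t, 3);
}
```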
Note
Currently subdivision stops when the camera gets very close to the sphere. This will probably need to be fixed in a future milestone.
Milestone 4 (12.04)
Planned time 7h, actual time 8h
- Adding perlin noise to the algorithm to generate terrain (6h, 3h)
- Experimentation with multiple noise layers (1h, 1h)
- Coloring the terrain (not planned, 4h)
Process
Instead of Perlin noise I chose simplex noise, because I haven't used it before and it is supposedly faster, though I have not tested those claims. The JavaScript implementation I chose is written by jwagner on GitHub. Using it to generate terrain was quite simple: after normalizing a vertex, I generated a noise value from the vertex's position and multiplied the position by that value. To get more realistic terrain, multiple layers of simplex noise were combined to produce the noise value.
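The layering can be sketched as standard fractal (octave) noise. Here `noise3D` is a stand-in for the simplex implementation; any function mapping (x, y, z) to roughly [-1, 1] works, and the parameter names are conventional rather than taken from the project.

```javascript
// Sum several octaves of noise: each layer is weaker (persistence) and
// finer-grained (lacunarity) than the last, then normalize back to [-1, 1].
function layeredNoise(noise3D, [x, y, z], octaves = 4, persistence = 0.5, lacunarity = 2) {
  let amplitude = 1, frequency = 1, sum = 0, norm = 0;
  for (let i = 0; i < octaves; i++) {
    sum += amplitude * noise3D(x * frequency, y * frequency, z * frequency);
    norm += amplitude;
    amplitude *= persistence;
    frequency *= lacunarity;
  }
  return sum / norm;
}

// Displace a unit-sphere vertex along its normal by the layered noise value,
// i.e. multiply the normalized position by (1 + strength * noise).
function displace(vertex, noise3D, strength = 0.1) {
  const h = 1 + strength * layeredNoise(noise3D, vertex);
  return vertex.map(c => c * h);
}
```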
Because the milestone took less time than anticipated, I started implementing colorization of the terrain. This took a long time, because the color value of a vertex depends on whether the vertex is normalized or not, and this had to be implemented.
Result
Milestone 5 (03.05)
Planned time 10h, actual time 10h
- Improve colors (2h, 1h)
- Zoom bugfix
- Add more colors and interpolation between them
- Moving camera to planet surface (not planned, 2h)
- Experiment with different ways to optimize processing speed (8h, 3h)
- Choosing between using iteration or the tree structure for tessellation
- Try to only render a section of the face array
- Only update elements that need to be updated
- Stop dividing faces that are facing away from the camera (+ not in frustum)
- Look for bottlenecks in the code
- Check if decreasing LOD actually helps
- Put faces that need to be subdivided into a stack and only subdivide until delta time gets too high
- Problems with vertex normalization (not planned, 4h)
Process
Using the built-in raycaster that three.js offers, I created a pointer so that the user can choose a location on the planet to inspect. Using that location and the interpolation engine tween.js, the camera moves smoothly towards the planet. This should be improved for the final product, but it works for testing purposes.
Moving the camera to the surface
To start optimizing performance, I implemented back-face and view-frustum culling. Back-face culling was done by comparing face normals with the camera direction, and view-frustum culling by using a frustum-containment check from three.js.
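The back-face half of that check can be sketched without three.js: a face is potentially visible when its normal points toward the camera, i.e. the dot product of the normal and the face-to-camera direction is positive. (The frustum half is handled by three.js in the project and is omitted here.)

```javascript
// Back-face test: true when the face's normal points toward the camera.
function isFrontFacing(faceMidpoint, faceNormal, cameraPos) {
  const toCamera = [
    cameraPos[0] - faceMidpoint[0],
    cameraPos[1] - faceMidpoint[1],
    cameraPos[2] - faceMidpoint[2],
  ];
  const dot = faceNormal[0] * toCamera[0]
            + faceNormal[1] * toCamera[1]
            + faceNormal[2] * toCamera[2];
  return dot > 0;
}
```

On a sphere roughly half of all faces fail this test, which is why skipping them during LOD traversal helps so much.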
Finding only visible faces using culling
I tried the culling in my iterative LOD method and performance increased dramatically. However, using culling in LOD meant that two adjacent faces could differ hugely in subdivision depth, which exposed a bug in vertex normalization.
Normalization bug
To improve performance further, I looked for slow methods using the Chrome developer tools. Under the Performance tab it is possible to see how much time different methods take to run, which made it easy to find badly written code.
Chrome developer tools
Milestone 6 (17.05)
Planned time 11h, actual time 18.5h
- Improve moving the camera to the surface (5h, 8h)
- Interpolate camera direction and FOV
- Implement a way to look around while on the surface
- Fix surface tearing when normalizing vertices (4h, 0.5h)
- Or remove culling to avoid it
- Try to make the planet more interesting (2h, 10h)
- Add more colors
Process
Camera interpolation is achieved with tween.js, interpolating the camera's FOV, position and rotation quaternion. However, quaternion rotation is still a bit buggy, so I am using Euler rotation for now. Once on the planet's surface, the controls switch from OrbitControls to PointerLockControls to enable looking around.
Surface tearing was fixed by making sure that neighbouring faces have the same tessellation depth when subdividing a face. It works well but makes everything much slower: neighbouring faces are found by scanning the cache for faces that share at least two vertices with the original face, and that iteration takes a lot of time.
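The neighbour search described above amounts to the following sketch (faces represented as arrays of vertex indices; an assumption for illustration, since the project stores richer face objects):

```javascript
// Two faces are neighbours when they share an edge, i.e. at least two vertices.
function sharedVertexCount(faceA, faceB) {
  return faceA.filter(v => faceB.includes(v)).length;
}

// Linear scan over the whole cache for every query, which is exactly why
// this step is slow; an edge-to-faces map would avoid the full scan.
function findNeighbours(face, faceCache) {
  return faceCache.filter(f => f !== face && sharedVertexCount(face, f) >= 2);
}
```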
To make ProcPlanets more interesting to play with, more controls were added. It is now possible to choose colors and noise values, and even add more of them.
Project Expo (24.05)
Planned time -, actual time 14h
- Improve performance
- Fix surface camera's initial rotation