Procedural and Dynamic Meshes
Ott Adermann, Jaagup Kirme, Ergo Nigola
Project Description and Goals
The aim of this project was to get a better understanding of how to create and change meshes of objects during runtime through code and to experiment with some of the different things this allows us to do.
We are using Unity and C#!
The various demos
Generate a world with some hills, valleys, lakes, rivers, and a bunch of flora, and look around in it.
Not satisfied with the landscape? Make a new one with custom parameters! Each generation is unique.
Don't be scared to move in real close with the camera and admire the flora. Every tree, bush, rock, and even every tuft of grass is different.
- WASD & LShift, LCtrl to move
- Esc for menu
Five rooms, each showing an example of a way to implement voxel-based sculpting (or just 3D dynamic voxel-based meshes in general).
- Each voxel as a separate cube (Extremely slow and inefficient)
- Isosurface based on faces between voxels (Duplicate vertices, very blocky)
- Isosurface based on faces between voxels (Shared vertices, slightly less blocky)
- Isosurface based on vertex grid (Marching cubes, duplicate vertices, still kind of blocky)
- Isosurface based on vertex grid (Marching cubes, shared vertices, almost smooth)
- WASD to move
- Left click to carve and enter rooms
- Right click to un-carve
- Scroll wheel to change brush size
Procedural objects – trees, rocks, bushes, grass
Made by Ergo Nigola
All of the objects are generated 100% from code. An important part of the tree design is that the trees are self-similar (though not infinitely). An n-iteration tree consists of a trunk part and one or multiple lower-iteration (n-1 or n-2) trees. The directions, angles, iterations, and counts of branches are random, within certain ranges of course. Also, when combining the branches and the trunk, the top of the trunk and the bottoms of the branches are connected to form a single continuous, nice-looking mesh. The dead tree and the tree with leaves use the same base code; the difference is in iteration count, trunk width, and the existence of leaves.
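The recursive structure described above can be sketched roughly as follows. The project itself is written in C# for Unity; this is a simplified 2D Python sketch with hypothetical parameters (branch counts of 2-4, a 0.7 length falloff, a ±0.6 rad spread) chosen for illustration, not taken from the actual code.

```python
import math
import random

def tree_branches(iteration, base, direction, length, segments=None):
    """Recursively collect (start, end) branch segments of a self-similar tree.

    An n-iteration tree is a trunk segment plus several (n-1)-iteration
    sub-trees attached at the trunk tip, with randomized angles and counts.
    """
    if segments is None:
        segments = []
    if iteration == 0:
        return segments
    # Trunk of this sub-tree: a straight segment along `direction`.
    tip = (base[0] + direction[0] * length,
           base[1] + direction[1] * length)
    segments.append((base, tip))
    # Attach 2-4 lower-iteration branches at randomized angles.
    for _ in range(random.randint(2, 4)):
        angle = math.atan2(direction[1], direction[0])
        angle += random.uniform(-0.6, 0.6)       # random spread in radians
        new_dir = (math.cos(angle), math.sin(angle))
        tree_branches(iteration - 1, tip, new_dir, length * 0.7, segments)
    return segments
```

In the real mesh-based version, each segment would be a cylinder-like trunk mesh, and the branch bottoms would be stitched to the trunk top as described.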
The bushes, rocks, and leaves for trees are generated using the same base concept. First an icosphere or icosahedron is created; these are basically spheres made out of (almost) equilateral triangles. The icosphere used in this program is made by subdividing the icosahedron's triangles into four smaller ones and then moving all of the new vertices onto the surface of the imaginary sphere. After that, a random translation is applied to all of the vertices. How large the random translation can be depends on the type of object the icosphere is needed for: the vertices of rocks deviate the most from their original locations, while for bushes and leaves the deviation is smaller. Finally, the mesh is scaled to look appropriate. The algorithm for generating icosahedrons and icospheres was taken from an online source.
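The icosahedron-subdivide-perturb pipeline is a standard construction; a minimal Python sketch of it (the project's version is in C#, and the perturbation amount here is an arbitrary placeholder):

```python
import math
import random

def normalize(v):
    """Project a point onto the unit sphere."""
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def icosahedron():
    """12 vertices / 20 faces of a unit icosahedron (golden-ratio construction)."""
    t = (1 + math.sqrt(5)) / 2
    verts = [normalize(v) for v in
             [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
              (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
              (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),
             (1,5,9),(5,11,4),(11,10,2),(10,7,6),(7,1,8),
             (3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),
             (4,9,5),(2,4,11),(6,2,10),(8,6,7),(9,8,1)]
    return verts, faces

def subdivide(verts, faces):
    """Split each triangle into four; push new midpoints out to the sphere."""
    verts = list(verts)
    midpoint_cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            a, b = verts[i], verts[j]
            verts.append(normalize(tuple((x + y) / 2 for x, y in zip(a, b))))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

def perturb(verts, amount):
    """Random per-vertex displacement; larger `amount` for rocks than bushes."""
    return [tuple(c + random.uniform(-amount, amount) for c in v) for v in verts]
```

One subdivision of the 12-vertex, 20-face icosahedron yields 42 vertices and 80 faces; caching midpoints keeps shared edge vertices shared.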
Grass tuft generation consists of two parts: generating the grass blades and combining them into a tuft. Grass blades are made of connected rectangles at different heights; the higher a rectangle is, the smaller it is, and each has some randomness in its location. The grass blade function is also given a general direction in which the blade should lean, so that when the blades are put together into a tuft, they can be oriented to lean outwards. A grass tuft is made out of 4-9 grass blades: one blade in the center and the others in varying directions around it, oriented to lean outwards.
As the last step, all of the objects described above are converted to resemble flat shading by creating separate vertices for each face. Some research was done into implementing actual flat shading in Unity, but understanding shading in Unity proved to be more effort than it was worth.
Probably the most complicated part was making the tree with leaves. What made it difficult was that a single mesh needed to have two different materials: one for the wood and another for the leaves. That meant having two separate arrays of triangles, which in turn meant changing a few already-written methods. For example, the method that combines two meshes into one had to be reworked to handle two triangle arrays and account for the needed offsets. Unity also has its own method for combining meshes, but it includes some unnecessary parts and might have changed the order of vertices, which would have ruined the algorithm. Another approach would have been to keep track of the branch tip vertices, but that would not have been easier.
River generation
Made by Jaagup Kirme
River generation is initialized by finding a tall enough vertex to start at. Then, while the river hasn't entered a cycle or run off the map, the algorithm uses the normals of the surrounding vertices to find the normal of the surface and determine where the river moves on the next step (in the rough direction of the xz-projection of the surface normal). The river also takes into account the surface normal of the previous step as a kind of simulated momentum, and as a precaution against the river stopping as soon as the underlying surface becomes completely flat (the xz-projection of the surface normal being a zero vector). Furthermore, on each step the river adds a small amount of turbulence, so that on even ground it does not run perfectly straight until it meets an obstacle.
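The stepping rule above can be sketched like this. This is a hedged Python sketch (the project is in C#): `normal_xz(p)` stands in for the real surface-normal lookup, and the step size, momentum weight, and turbulence amount are hypothetical values.

```python
import random

def trace_river(normal_xz, start, steps=200, momentum=0.5, turbulence=0.05):
    """Trace a river path over a heightfield.

    Each step moves in the xz-projection of the local surface normal
    (i.e. roughly downhill), blended with the previous step's direction
    as "momentum", plus small random turbulence so flat ground neither
    stalls the river nor lets it run perfectly straight.
    """
    path = [start]
    prev_dir = (0.0, 0.0)
    visited = set()
    p = start
    for _ in range(steps):
        cell = (round(p[0], 1), round(p[1], 1))
        if cell in visited:          # the river has entered a cycle
            break
        visited.add(cell)
        nx, nz = normal_xz(p)
        dx = nx + momentum * prev_dir[0] + random.uniform(-turbulence, turbulence)
        dz = nz + momentum * prev_dir[1] + random.uniform(-turbulence, turbulence)
        length = (dx * dx + dz * dz) ** 0.5
        if length < 1e-6:            # perfectly flat and no momentum: stop
            break
        dx, dz = dx / length, dz / length
        p = (p[0] + dx * 0.5, p[1] + dz * 0.5)   # hypothetical step size
        prev_dir = (dx, dz)
        path.append(p)
    return path
```

A "run off the map" bounds check would slot in next to the cycle check; it is omitted here for brevity.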
General terrain creation and height modification
Made by Jaagup Kirme and Ott Adermann
The terrain is initially a uniform grid of vertices. This allows for easy access of a vertex at any xz-position, because the index of a vertex is defined by said position. The terrain is assigned a texture where each pixel corresponds to a single vertex - the vertices are mapped to the UV coordinates of the middle of their corresponding pixel. Next, triangles are defined in a zig-zag fashion to help reduce the monotony of the grid. Vertex normals are calculated automatically by Unity.
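The index scheme and zig-zag triangulation can be shown with a short sketch. Again this is illustrative Python rather than the project's C#; the alternating-diagonal rule is one plausible reading of "zig-zag".

```python
def build_grid(width, depth):
    """Build a uniform vertex grid and a zig-zag triangulation.

    Vertex (x, z) lives at index z * width + x, so any xz-position maps
    straight to its vertex. Quads alternate their split diagonal to break
    up the visual monotony of a one-directional split.
    """
    verts = [(x, 0.0, z) for z in range(depth) for x in range(width)]
    tris = []
    for z in range(depth - 1):
        for x in range(width - 1):
            i = z * width + x                  # corner vertex of the quad
            a, b, c, d = i, i + 1, i + width, i + width + 1
            if (x + z) % 2 == 0:               # split along the a-d diagonal
                tris += [(a, d, b), (a, c, d)]
            else:                              # split along the b-c diagonal
                tris += [(a, c, b), (b, c, d)]
    return verts, tris
```

In Unity these arrays would be assigned to `Mesh.vertices` and `Mesh.triangles` (flattened), with `Mesh.RecalculateNormals` providing the normals.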
Terrain modification is done only by modifying the y-coordinate of vertices. This means that UV coordinates and triangles don't have to be reassigned. The generator uses two different methods for changing terrain height. Both methods are given a position and a radius in UV coordinates, which they use to find which vertices need to be raised or lowered. For each such vertex, its distance from the center position is used as the argument of a function, and the result is the amount by which it is raised. For mountains, the argument is mapped from 0..r to a 1..0 range, and the function used is x². For everything else, the argument is mapped from 0..r to a 0..π range, and the function used is cos(x) / 2 + 0.5. The result is then mapped to a 0..height range, and the vertex is moved by that amount. Finally, after all the modifications are done for the frame, the texture is updated based on the new height values of the vertices.
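The two falloff functions, written out directly (a Python transcription of the formulas above; function names are ours):

```python
import math

def mountain_offset(dist, radius, height):
    """Mountain falloff: map dist from 0..r to x in 1..0, then use x**2.

    Gives a sharp peak at the center and a gentle approach to zero at the rim.
    """
    x = 1.0 - dist / radius          # 0..r  ->  1..0
    return height * x * x

def smooth_offset(dist, radius, height):
    """Falloff for everything else: map dist from 0..r to x in 0..pi,
    then use cos(x) / 2 + 0.5 (1 at the center, 0 at the rim)."""
    x = math.pi * dist / radius      # 0..r  ->  0..pi
    return height * (math.cos(x) / 2 + 0.5)
```

The cosine curve is flat at both the center and the rim, which is why it suits rolling hills and lake beds, while x² keeps a pointed top for mountains.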
Mountains, hills, lakes, and flora scattering
Made by Ott Adermann
Mountain ranges are constructed by choosing two random points on two different sides of the map and drawing a line between them. Mountains are then placed at equal distances along the line and moved by a random amount, mostly perpendicular to that line. Mountains near the middle are higher. The desired heights and positions are saved, and the terrain is then interpolated from 0 to the desired value over time. All terrain is revealed this way, built up gradually rather than appearing instantly.
Hills and lakes are actually similar, except that one is raised, the other lowered. Hills get placed at random places on the map, while lakes appear at the ends of rivers. Both are created by sprinkling points around the original position, and raising/lowering all those points.
Flora is created by rolling a random value for each vertex. If the value is lower than the flora density, a flora object is instantiated on that vertex and scaled up over time. High mountaintops contain no flora; vertices below sea level or just below the height threshold have only rocks and grass. Otherwise, any one of the flora objects is created, with a much higher chance of regular trees near the ground and dead trees higher up.
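The placement rules can be sketched as a per-vertex decision function. This is a hedged Python illustration: the threshold names (`sea_level`, `rock_line`, `tree_line`), the object list, and the weighting are hypothetical stand-ins for the project's actual values.

```python
import random

def scatter_flora(vertices, density, sea_level, rock_line, tree_line):
    """Decide a flora object (or None) for each (x, y, z) vertex.

    A roll against `density` decides whether anything spawns at all;
    height bands then pick what, with regular trees favored low and
    dead trees favored high.
    """
    placed = []
    for x, y, z in vertices:
        if random.random() >= density:
            placed.append(None)                      # no flora here
        elif y > tree_line:
            placed.append(None)                      # bare mountaintop
        elif y < sea_level or y > rock_line:
            placed.append(random.choice(["rock", "grass"]))
        else:
            # Regular trees dominate near the ground, dead trees higher up.
            t = (y - sea_level) / (rock_line - sea_level)   # 0 low .. 1 high
            tree = "dead tree" if random.random() < t else "tree"
            placed.append(random.choices([tree, "bush", "grass"],
                                         weights=[3, 1, 1])[0])
    return placed
```

In the actual demo, each non-None result would correspond to instantiating a prefab-like generated object at that vertex and scaling it up over time.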
Voxel-based sculpting
Made by Ott Adermann
Each room in the sculpting demo showcases a slightly different approach to the same problem - how to display a 3D grid that is either filled or empty at all given points of the grid.
The first example uses a separate cube (24 vertices and a box collider) for every point. If a cube is clicked, it deletes itself. Similarly, a cube could detect which of its faces was clicked and spawn a new cube there. This solution doesn't scale well, as the vertex counts get very high very fast, and the physics engine struggles with all the colliders.
The rest of the examples use a single mesh and isosurface rendering techniques. The basic idea behind this is that the algorithm checks all pairs of adjacent grid points, and if the function value crosses a threshold between any two points, a vertex/edge/face is drawn there. For this implementation, the only possible function values are true and false.
The second example creates a square face between two grid points if the function's value changes there.
The fourth example is an implementation of Marching Cubes, which checks 8 grid points at a time. (The vertices of a cube, basically.) Based on the boolean values of each of those vertices, an 8-bit number is constructed, and the corresponding vertex and triangle configuration is fetched from a lookup table.
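Constructing that 8-bit case index is the easy half of Marching Cubes; a Python sketch of it (corner orderings vary between implementations, so this ordering is an assumption, and the lookup tables themselves are omitted):

```python
def cube_index(filled, x, y, z):
    """Build the 8-bit Marching Cubes case index for the cube at (x, y, z).

    `filled` is a 3D boolean grid; each of the cube's eight corners
    contributes one bit. The resulting 0..255 value is then used to fetch
    the vertex and triangle configuration from the standard lookup tables.
    """
    corners = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1),
               (0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]
    index = 0
    for bit, (dx, dy, dz) in enumerate(corners):
        if filled[x + dx][y + dy][z + dz]:
            index |= 1 << bit
    return index
```

An all-empty cube gives case 0 and an all-filled cube gives case 255; both map to "emit no triangles" in the tables.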
The third and fifth examples use the same techniques as the second and fourth, respectively, but the resulting mesh is made smooth by removing duplicate vertices at shared grid points. This is done by mapping the coordinates of each created vertex to the index of that vertex; existing vertex indices are then fetched instead of creating new vertices at the same locations.
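The coordinate-to-index mapping amounts to a small vertex pool; a Python sketch (the class name is ours, not from the project):

```python
class VertexPool:
    """Deduplicate vertices by mapping coordinates to indices.

    Instead of appending a new vertex for every triangle corner, look the
    coordinates up first; shared vertices let the normals average across
    neighboring faces, which is what smooths the surface.
    """
    def __init__(self):
        self.vertices = []       # the final vertex array for the mesh
        self.index_of = {}       # (x, y, z) -> index into `vertices`

    def add(self, position):
        if position not in self.index_of:
            self.index_of[position] = len(self.vertices)
            self.vertices.append(position)
        return self.index_of[position]
```

Triangle construction then becomes `tri = (pool.add(a), pool.add(b), pool.add(c))`, and any corner already seen reuses its old index.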
For examples 2-4, multiple meshes could also be used instead of a single one. This would reduce mesh reconstruction times every time a change is made, and improve various culling techniques. However, if the meshes get too small, the problem in example 1 starts to resurface.
The project deviated from its originally intended course, and as such, there are fewer demos to show than expected.