Recreating Obduction Loading Screens
Karl Suurkaev, Rasmus Peäske, Rainer Talvik
Links
- Azure DevOps Project Overview page
- Main repo link
- November 11th 2024 - Proof of Concept Build on Google Drive
- January 7th 2025 - Final Release Build on Google Drive
Setup and Controls
To try out the final build on your own system:
- Download the final release build from the link above or from this link here
- Extract the downloaded `.zip` archive to a destination of your own choosing
- Launch `ObductionRecreation.exe`
Controls:
- Mouse to look around and navigate the menus
- `W`, `A`, `S`, `D` to move around
- `E` to interact
- `P` to open up the Pause Menu
Project Description
In the video game Obduction there is a really cool visual effect for loading screens in which the surrounding scene dissolves into floating particles before reforming into a new scene. The effect can be seen in this video. The aim of this project was to replicate the same visual effect in a set of 3D environments. The project was built in Unreal Engine, which was used to create both the environments and the visual effects.
Final Release (07.01.2025)
Goals
The main goal for this project was to recreate the visual effects used for the loading screens in the game Obduction. A demonstration from within the game can be seen in this video.
To achieve this the aim was set to create two 3D environments between which the user could then teleport. The teleportation process would be triggered by interacting with a button found in the scene. The teleportation process would be accompanied by visual effects matching the ones found in the game Obduction.
Methods
The project was implemented using Unreal Engine 5.4 which is a tool that none of the developers had experience with prior to this course. Work was divided into three main fronts: creating the environment, creating the visual effects, and creating the necessary interactable components and scripts.
The Environment
The two scenes decided on were a lakeside in a forest and a coastal beach. The initial creation of the environment began by finding and selecting the necessary plugins and tools.
The first order of business was to determine how to depict water since both of the desired scenes were based around a body of water. For that the integrated Water plugin in UE5 was sufficient. It was easy to use while providing the required level of detail.
The second order of business was to find assets to be used in both scenes. Initially, we used Quixel Megascans assets, which appeared suitable. However, they were very performance-heavy, especially when many instances were present in the scene, exceeding the texture streaming budget. Instead, the asset packs from Project Nature were opted for. These assets were lighter yet still offered great detail, particularly from a distance.
The work on actually creating the scene could begin once the required assets and tools were found. The process for creating the lakeside forest scene went as follows:
- Create a rudimentary landscape using the Landscape Tool.
- Create the body of water and tweak its position and parameters to achieve a fairly plausible environment.
- Add trees and smaller foliage using the Foliage Tool. Some trees in the scenes were hand-placed to achieve greater detail whilst not sacrificing performance too much.
- Utilize Nanite on detailed meshes for minor performance improvements without compromising visual quality.
- Apply post-processing effects, such as overall volumetric fog, a modified sun position, and enhanced saturation and contrast.
Work first began on the lakeside forest scene. Once the general scene layout was determined and the foundations were set, work began on the coastal beach scene.
The first pass of the lakeside forest scene. Initial versions of the project used a third-person perspective.
The first results looked appealing but were very performance-heavy. Therefore, it was necessary to scale back on some of the visual aspects of the scene while still meeting our overall expectations.
The second pass of the lakeside forest scene. Initial versions of the project used a third-person perspective.
The process for creating the coastal beach scene was mostly similar to the lakeside forest scene creation:
- Look for suitable (free) assets which would not be too taxing on the system.
- Use the Landscape tool to mold the initial environment.
- Create and later edit a body of water into the landscape.
- Add the found props. Replace those for which better alternatives were found.
- If needed, repeat some or all steps.
However, there were additional considerations: unlike in the forest scene, the horizon was partially visible when the user stood on the beach. Different solutions were tried, such as replacing the initial lake (which was supposed to represent an ocean) with an actual ocean, but that seemed to sink the world and would apparently have required major environment remodeling. Additionally, the already constructed lake provided more granularity. The final solution was to still create an ocean, but to prevent it from affecting the landscape. This approach provided a good balance between visual quality, by creating the illusion of a large body of water, and functionality, by not requiring the entire environment to be reworked.
In contrast to the lakeside forest scene, there was also the wish to create more realistic wave structures in the beach "ocean". The solution was not immediately obvious, but once the asset affecting the wave movement was found, it was possible to build upon and tweak the Gerstner waves provided by the Water plugin. This process mostly entailed understanding what all the given parameters did and what they affected. Although there were quite a lot of them, the final result is a definite improvement over the default asset.
All the properties that the Water plugin exposes for implementing more custom wave behaviour.
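For context, the Water plugin's waves are built on the Gerstner model, so the parameters above are essentially knobs on the standard formulation (how the plugin names them internally is an assumption here). A single Gerstner wave with amplitude $A$, steepness $Q$, direction $\mathbf{D}$, wavenumber $k = 2\pi/\lambda$, and frequency $\omega$ displaces a surface point $\mathbf{x}_0$ both horizontally and vertically:

$$
\mathbf{P}(\mathbf{x}_0, t) =
\begin{pmatrix}
\mathbf{x}_0 + Q A\,\mathbf{D}\cos\!\big(k\,\mathbf{D}\cdot\mathbf{x}_0 - \omega t\big) \\
A \sin\!\big(k\,\mathbf{D}\cdot\mathbf{x}_0 - \omega t\big)
\end{pmatrix}
$$

Increasing the steepness $Q$ sharpens the crests (values past $1/(kA)$ make the surface loop over itself), which is why tuning these parameters changes the wave character so visibly.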
Finally, another important step in massively improving performance was optimizing some very heavy assets. The largest performance impact was caused by a single plugin's use of 4K textures for everything. Optimizing the textures one by one seemed outside the scope of this project, and they might have needed changing again later; editing individual textures (including normal maps, roughness maps, etc.) would have taken too much time. Instead, the solution was to use the Property Matrix, which allowed similar textures (e.g. branches, leaves) to be edited in bulk.
Besides these main processes, numerous additions, removals, and revisions were ongoing throughout the entire development. It was also important to keep track of the used disk space, since the environment assets make up the bulk of the project's size. Performing an immediate sanity check after pulling in an asset pack and removing any unused assets helped keep the project size down; attempting to determine which assets were left unused long after an asset pack had been used was very difficult.
The final versions of the Lakeside Forest scene and the Coastal Beach scene. The large emissive red cube was left in the forest scene to showcase the interaction with the particle effects.
The final list of asset (pack) authors whose work was used in the environment part of the project:
The Particle Effects
Initial work on the particle effects began in a very rudimentary testing area composed of a desert with three little bumps and a few boxes. The first focus was to determine how to get the output from a `Camera` object in the world and render that image onto an object.
After that, the next step was to create some particles. The particles themselves were quite straightforward, but sampling colours from a texture caused difficulties at first. A method for sampling colours from a texture was found through trial and error, but this first approach strangely did not allow the particles to move at all.
After further research into possible solutions for the color sampling problem a tutorial video was found which helped lay the foundations for the particle effects by properly showcasing how to sample colors from a texture.
On the left: Example of the first attempts at sampling colors from a texture.
On the right: Example of a surface created from particles with colors sampled from an image of Miles Edgeworth.
Feedback from the coach meeting provided a lot of valuable information about particles and connecting the various pipelines in Unreal Engine. One proposed idea was to use the `GBuffer` to get the base colour from the lowest level; however, the `GBuffer` seemingly* only contains what is visible in screen space (occlusion culling). This is not enough for a 3-dimensional capture, since the particles also need to be sampled from behind the camera. Culling and other settings could likely be adjusted through engine settings, but since the `GBuffer` method already required more time than expected, it was sidelined.
The next approach was to use a `CaptureComponentCube`: a 360-degree cube camera built right into Unreal Engine. This component, however, returns a cubemap, from which the six faces would have to be separated into individual images. Alternatively, the separation can be skipped by building the sampling logic around the cubemap itself. Implementing this would not have been too difficult, since cube captures are separated into mips (mipmaps), but this approach was also sidelined out of fear of unexpected delays caused by unknown factors.
The last approach was the rudimentary one: attach six cameras to the player actor, pass each render target (the "captured" image) to one of six particle systems as a variable, and combine the particle systems in a separate blueprint for ease of use.
*unconfirmed speculation
The camera setup as seen in the `BP_FirstPersonCharacter` asset.
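A minimal C++ sketch of an equivalent six-capture rig is shown below. The project itself builds this in Blueprints inside `BP_FirstPersonCharacter`; the function name, attachment details, and render-target size shown here are illustrative assumptions.

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "GameFramework/Character.h"

// Illustrative equivalent of the Blueprint camera rig; "SetupFaceCaptures"
// and the 100x100 target size are assumptions, not taken from the project.
void SetupFaceCaptures(ACharacter* Player, TArray<USceneCaptureComponent2D*>& FaceCaptures)
{
    // One capture per axis-aligned direction: forward/back, right/left, up/down.
    const FRotator FaceRotations[6] = {
        FRotator(0, 0, 0),  FRotator(0, 180, 0),
        FRotator(0, 90, 0), FRotator(0, -90, 0),
        FRotator(90, 0, 0), FRotator(-90, 0, 0)
    };

    for (int32 i = 0; i < 6; ++i)
    {
        USceneCaptureComponent2D* Capture = NewObject<USceneCaptureComponent2D>(Player);
        Capture->SetupAttachment(Player->GetRootComponent());
        Capture->SetRelativeRotation(FaceRotations[i]);
        Capture->FOVAngle = 90.f; // six 90-degree frusta cover the full surroundings

        // Each face gets its own render target, passed to one particle system.
        UTextureRenderTarget2D* Target = NewObject<UTextureRenderTarget2D>(Player);
        Target->InitAutoFormat(100, 100);
        Capture->TextureTarget = Target;

        Capture->RegisterComponent();
        FaceCaptures.Add(Capture);
    }
}
```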
Thanks to the video mentioned before, figuring out how to sample the colours onto particles was not too difficult when spawning them in a grid. But that grid lies on a flat plane, not a sphere, which would be much more appropriate for spawning particles around a character's head. This created one of the largest dilemmas in the particle section: whether to create a sphere or a cube of particles. A sphere would be better in every way, but given the issues encountered with cube maps, trying to map six flat images onto a sphere using blueprint logic without completely morphing the images seemed too daunting. Using a particle cube (or any sort of rectangular prism) creates slight visual inconsistencies between the particles and the player camera, but was much easier to implement, so this approach was used for the final version.
The cube, however, had its own problems. Even in the final submission, when facing ±45 degrees with respect to the world axes (`±XYZ`), the player looks straight at the intersection of two "particle faces", which is not ideal. The probable fix would be to spawn the particle cube with a rotation that depends on the player camera instead of the world, while keeping the cube's rotation separate from ("locked" against) the player camera afterwards - but all the fixes tried for this problem led to different bugs in the implementation. In the end, the cube was left as-is.
The image shows what the player sees approximately 0.2 seconds after interacting with the button stand while facing "southwest" in the level. It can be seen that the cube creates less than ideal faint lines where the vertices overlap.
All of the previous work solves the first part of the teleport process: sample colours onto particles and spawn the particle cube. The second part, however, required sampling colours once more. This required a custom `Do Once` module in the `Particle Update` section so that the particle colour would only change when it needed to. Since the teleport locations are configured to be static, it would also have been plausible to incorporate capture actors at those locations; this was sidelined, as the six-directional capture from the player character was sufficient. To ensure that the dark void created around the character (more on this under The Interactable Components and Scripts) did not interfere with the captures, the cameras had to ignore the `Void Sphere` attached to the player character. For that, the scene capture flags were modified, specifically to skip capturing translucent objects. This does not pose a problem in this project demo, since there are no other translucent objects in the scene, but the modified capture flags could cause issues if implemented in another project. A plausible way of solving this would be to add some sort of custom ignore list.
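In C++ terms, that flag change is a one-liner on the capture components, and the suggested "custom ignore list" maps naturally onto the `HiddenActors` list that scene captures already expose. A hedged sketch (the project sets the equivalent options on its Blueprint captures):

```cpp
#include "Components/SceneCaptureComponent2D.h"

// Assumed helper name; the project configures this on its Blueprint captures.
void ConfigureCaptureFlags(USceneCaptureComponent2D* Capture, AActor* VoidSphere)
{
    // Skip translucent objects entirely, so the fading void sphere never
    // appears in the sampled image (the approach used in the project).
    Capture->ShowFlags.SetTranslucency(false);

    // A less blunt alternative: keep translucency but hide specific actors.
    // This is one way to implement the "custom ignore list" mentioned above.
    Capture->HiddenActors.Add(VoidSphere);
}
```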
After the initial setup of the camera system was complete, it became apparent that all of the captures ate up memory and GPU power. The demands of the cameras were lowered by removing all post-processing effects and filtering the capture lists so that only the assets that really contributed to the particle colours were captured. Capture resolutions were reduced to `100x100`, since the particle emitters *only* spawn `6x100x100` particles around the player. Additional performance gains came from setting up manual camera captures: until then, the cameras had captured either every frame or on every frame in which movement happened. While constant capture gave an accurate and up-to-date image, it took a huge toll on performance. Manual capturing had the side effect of introducing a delay on the first capture when loading the project, but the performance benefits easily outweighed the slight visual hiccup that appeared once per launch.
One last roadblock with particles was variable access scopes in Unreal Engine. During the final steps of the project's implementation, sudden issues were encountered with particle colour sampling: instead of accurate colours, the colours seemed to be an approximate average of the sides' colours. This turned out to be a problem with the access scope of the `GridUVW` variable in the Niagara `Grid Location` module. To make sure that the issue would be solved for good, a custom Niagara module was created based on `Grid Location`, which outputs the value to a `Particles` namespace variable with the working name `UVWMaybe` (a name that was kept to the end of the project). This ensured that both the `Particle Spawn` and `Particle Update` stages could access the `GridUVW` values, which were the foundation for colour sampling in this project.
Highlighted in `NE_Side` is the custom-made `Custom Grid Location` module. It duplicates the `GridUVW` value to the `UVWMaybe` variable, which is used both for the initial colour sampling and for the sampling after the character has teleported.
The Interactable Components and Scripts
The project was originally based on the third-person template provided by the Unreal Engine, but shortly after starting the project it was changed to use the first-person template instead. The switch to first-person perspective was made so the resulting work would be more true to the original source material.
While work started on the environment and particle effects, the first interactive elements that were added on top of the first-person template were the UI elements. The main UI elements created for this project were the Main Menu, the Pause Menu, and the Settings Menu. Implementing these provided an overall better structure for the project and its presentation.
Main Menu of the project application.
The Settings Menu, which is available through both the Main Menu and the Pause Menu, offers a choice between four graphical presets. This allows the project to be presented even on lower-end hardware.
Settings Menu of the project application.
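One plausible way to implement such presets is to map them onto the engine's built-in scalability levels; this is a sketch under that assumption, not necessarily how the project's Blueprint widgets do it:

```cpp
#include "GameFramework/GameUserSettings.h"

// Assumed mapping: preset index 0..3 onto the engine's Low..Epic levels.
void ApplyGraphicsPreset(int32 PresetIndex)
{
    UGameUserSettings* Settings = UGameUserSettings::GetGameUserSettings();
    Settings->SetOverallScalabilityLevel(PresetIndex); // sets every scalability group at once
    Settings->ApplySettings(false);                    // false: skip command-line override checks
}
```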
The Pause Menu pauses the simulation of the game and provides a user-friendly way of exiting the project.
Pause Menu of the project application.
The next priority was setting up a button stand that would start the teleportation process, and granting the user the ability to interact with it. The button stand itself is a simple custom asset made from resources provided by Unreal Engine's starter content. Interaction with the button was implemented by casting a ray from the user's perspective and checking whether the object the ray collided with implements the required interaction interface.
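A minimal C++ sketch of that ray-cast interaction check follows; the project implements it in Blueprints, so `UInteractInterface`, the `OnInteract` event, and the trace distance are hypothetical stand-ins:

```cpp
#include "Camera/CameraComponent.h"
#include "Engine/World.h"

// UInteractInterface / OnInteract are assumed names for the project's
// Blueprint interaction interface.
void TryInteract(UCameraComponent* Camera, AActor* Player)
{
    const FVector Start = Camera->GetComponentLocation();
    const FVector End = Start + Camera->GetForwardVector() * 500.f; // assumed reach

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Player); // never hit the player itself

    FHitResult Hit;
    if (Player->GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        AActor* HitActor = Hit.GetActor();
        // Interact only if the hit object implements the interaction interface.
        if (HitActor && HitActor->GetClass()->ImplementsInterface(UInteractInterface::StaticClass()))
        {
            IInteractInterface::Execute_OnInteract(HitActor, Player);
        }
    }
}
```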
Interacting with the button triggers the teleportation logic, which is mainly stored in the button stand itself. The logic attached to the button stand handles the general flow of the teleportation process, calls the necessary events tied to the player character, and teleports the player character to the desired location. The general flow of the button stand's logic is as follows (a sketch of the sequence follows the list):
- Interaction event is triggered
- Input for movement is set as disabled
- Event is called for capturing the surroundings
- Event is called for spawning particle effects
- Event is called for fading in the spherical void
- Delay of fade time + half of the teleport time
- Player is teleported to the destination location
- Event is called for capturing the surroundings
- Delay of half of the teleport time
- Event is called for fading out the spherical void
- Delay of fade time
- Input for movement is set as enabled
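Expressed in C++, the same chain of events and delays might look like the sketch below. The project implements this as a Blueprint event chain, so the class names, event names, and the `FadeTime`/`TeleportTime`/`DestinationLocation` members are all assumptions:

```cpp
#include "TimerManager.h"
#include "Engine/World.h"

// AButtonStand, AMyCharacter, and the Player->... events are hypothetical
// stand-ins for the project's Blueprint assets and custom events.
void AButtonStand::OnInteract(AMyCharacter* Player)
{
    APlayerController* PC = Cast<APlayerController>(Player->GetController());
    Player->DisableInput(PC);          // steps 1-2: interaction triggered, movement disabled
    Player->CaptureSurroundings();     // step 3: fill the six render targets
    Player->SpawnTeleportParticles();  // step 4: spawn the particle cube
    Player->FadeInVoidSphere();        // step 5: fade the void sphere to opaque

    FTimerHandle Step1;
    GetWorld()->GetTimerManager().SetTimer(Step1, FTimerDelegate::CreateLambda(
        [this, Player, PC]()
        {
            Player->SetActorLocation(DestinationLocation); // step 7: the actual teleport
            Player->CaptureSurroundings();                 // step 8: re-capture at destination

            FTimerHandle Step2;
            GetWorld()->GetTimerManager().SetTimer(Step2, FTimerDelegate::CreateLambda(
                [this, Player, PC]()
                {
                    Player->FadeOutVoidSphere();           // step 10: fade the void back out

                    FTimerHandle Step3;
                    GetWorld()->GetTimerManager().SetTimer(Step3, FTimerDelegate::CreateLambda(
                        [Player, PC]() { Player->EnableInput(PC); }), // step 12: movement re-enabled
                        FadeTime, false);                  // step 11: fade time
                }),
                TeleportTime * 0.5f, false);               // step 9: half of the teleport time
        }),
        FadeTime + TeleportTime * 0.5f, false);            // step 6: fade time + half teleport time
}
```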
The first-person character blueprint provided by the first-person template (`BP_FirstPersonCharacter`) also needed many custom additions. For example, disabling the movement input for the teleportation did not take away the player's ability to jump, which required custom handling. Additional logic attached to the first-person character blueprint includes the interaction logic, handling inputs for the Pause Menu, handling the fade in/out of the void sphere and its timing, spawning the particle effects, and capturing the scene for colour sampling.
In order to create the illusion of the particles dissolving into nothingness, a spherical void is attached to the player character at all times. By default the sphere is fully transparent, but during the teleportation process it is made opaque. By disabling depth testing for both the particles and the sphere, both elements are rendered on top of the scene, which creates the illusion of the scene disappearing.
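One way to drive that fade is through a dynamic material instance, assuming the sphere's translucent material exposes a scalar `Opacity` parameter (the parameter name is an assumption; the depth-test behaviour itself comes from the material's "Disable Depth Test" option):

```cpp
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

// "Opacity" is an assumed parameter name on the void sphere's material.
void SetVoidSphereOpacity(UStaticMeshComponent* VoidSphere, float Alpha /*0 = invisible, 1 = opaque*/)
{
    // Create (or reuse) a dynamic instance of the material in slot 0.
    UMaterialInstanceDynamic* Mat = VoidSphere->CreateAndSetMaterialInstanceDynamic(0);
    Mat->SetScalarParameterValue(TEXT("Opacity"), Alpha);
}
```

Driving `Alpha` from a timeline over the fade time gives the gradual fade described in the teleport flow above.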
Additionally, a small bug related to UE 5.4 and its blueprint interfaces was encountered, which required adding an extra function, as the last function in the interface's list was for some reason always replaced by a custom event in the blueprint.
Results
The results of the project can be seen by either downloading the final build and setting it up on a local machine or in the demonstration video seen here.
Initial Progress (12.11.2024)
For starters we wanted to familiarise ourselves with the Unreal Engine as it is a tool that none of us had any prior experience with. Work started on two main fronts: creating the environment and creating the visual effects.
Particles:
We created a very rudimentary testing area in a desert with three little bumps and a few boxes. Then we learned how to get the output from a "camera" object in the world and output the image on an object.
After that, the next step was to create some particles. Particles themselves were quite straightforward, but sampling colours from a texture didn't work too well at first - although admittedly, this was very much done using trial and error at the beginning. One night, we accidentally figured it out, but that method strangely didn't allow the particles to move at all.
Then we stumbled on this video, which addressed that concern as well. Here's a cropped smartphone camera image of a particle'd Miles Edgeworth:
After some fine-tuning, the POC particle demo was ready.
All resources used for learning particles are listed on this page in the project wiki.
Environment:
The initial creation of the environment began by finding and selecting the necessary plugins and tools.
The integrated Water plugin in UE5 was sufficient. It was easy to use while providing the required level of detail.
Initially, we used Quixel Megascans assets, which appeared suitable. However, they were very performance-heavy, especially when many instances were present in the scene, exceeding the texture streaming budget. Instead, we opted for asset packs from Project Nature. These assets were lighter yet still offered great detail, particularly from a distance.
The process can be summarized as follows:
- We chose the Third Person template and later switched to the First Person template.
- Created a rudimentary landscape using the Landscape Tool to form a lake basin.
- Concurrently adjusted the lake's position and parameters to achieve a fairly plausible environment.
- Added trees and smaller foliage using the Foliage Tool. Some trees around the lake were hand-placed to achieve greater detail whilst not sacrificing performance too much.
- Utilized Nanite on the meshes for minor performance improvements without compromising visual quality.
- Applied initial post-process effects, such as overall volumetric fog, a modified sun position, and enhanced saturation and contrast.
As shown in the first environment screenshot, the result looked appealing but was very performance-heavy. Therefore, it was necessary to scale back for the first milestone while still meeting our overall expectations:
Proof of Concept Demo & Launch Guide video: