Brain Data Visualization
Fedor Stomakhin, Siim Parring, Hain Zuppur (supervisor: Ilya Kuzovkin)
link to project LIVE page (takes around 20 seconds to load)
Our goal was to create a web application that visualizes brain activity on a rotatable brain model. We use data from 100 humans who had electrodes implanted inside their brains. These people were then shown several images from 8 categories while their brain responses were recorded. We animated these responses so that the changing brain activity can be visualized.
Architecture && Technologies
The project runs as a Node web app built on ReactJS, with Three.js handling the graphical side. Getting the latter to run with React posed some challenges and required an understanding of various React hooks and component lifecycle methods. Ultimately, we set up the 3D environment in the componentDidMount lifecycle method and attached the renderer's canvas to the DOM through a React ref.
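A minimal sketch of this pattern (component and variable names are ours for illustration, not the actual project code):

```javascript
import React from 'react';
import * as THREE from 'three';

class BrainView extends React.Component {
  mount = React.createRef(); // DOM node the renderer's canvas attaches to

  componentDidMount() {
    // Set up the 3D environment once the component is in the DOM
    this.scene = new THREE.Scene();
    this.camera = new THREE.PerspectiveCamera(75, 1, 0.1, 1000);
    this.renderer = new THREE.WebGLRenderer();
    this.mount.current.appendChild(this.renderer.domElement);
    this.renderer.render(this.scene, this.camera);
  }

  componentWillUnmount() {
    // Release GPU resources when the component is removed
    this.renderer.dispose();
  }

  render() {
    return <div ref={this.mount} />;
  }
}
```

Doing the setup in componentDidMount guarantees the ref already points at a mounted DOM node, which is why the renderer can be attached there.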
Graphics && Rendering
Initially, we rendered every single electrode as a separate sphere. This, however, was very slow, so we decided to render all electrodes within the brain as a single Three.js Points object, which WebGL renders with gl.POINTS. Each point's size, and whether it is hidden, are passed to the shader as vertex attributes. Each point uses a transparent texture of a circle, and the fragment shader drops fragments with sufficiently low opacity, as well as points that are supposed to be hidden.
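The per-point attributes are packed into flat typed arrays, which is the layout Three.js BufferAttribute expects. A minimal sketch of that packing step (function and field names are ours, not the project's):

```javascript
// Pack per-electrode position, size and visibility into flat Float32Arrays,
// ready to hand to THREE.BufferAttribute / the vertex shader.
function buildPointAttributes(electrodes) {
  const n = electrodes.length;
  const positions = new Float32Array(n * 3); // x, y, z per electrode
  const sizes = new Float32Array(n);         // point size attribute
  const hidden = new Float32Array(n);        // 1.0 = hidden, 0.0 = visible
  electrodes.forEach((e, i) => {
    positions[i * 3 + 0] = e.x;
    positions[i * 3 + 1] = e.y;
    positions[i * 3 + 2] = e.z;
    sizes[i] = e.size;
    hidden[i] = e.hidden ? 1.0 : 0.0;
  });
  return { positions, sizes, hidden };
}
```

With Three.js, each array would then be attached via something like geometry.setAttribute('size', new THREE.BufferAttribute(sizes, 1)), making it available as a per-vertex attribute in the shader.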
Hiding certain electrodes caused an issue where the hidden electrodes started flickering: due to floating-point conversion in the shader, an attribute value of 1.0 did not always compare exactly equal to 1.0. We solved this by comparing within a small margin instead of testing for exact equality.
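The margin-based check looks roughly like this (shown in plain JavaScript; the actual GLSL version discards the fragment instead of returning a boolean, and the epsilon value here is an assumption):

```javascript
// Compare a float attribute against 1.0 with a margin, because values can
// arrive in the shader as e.g. 0.9999 after float conversion.
const EPSILON = 0.01; // assumed margin; the exact value is a tuning choice

function isHidden(hiddenAttr) {
  // Exact equality (hiddenAttr === 1.0) intermittently fails, causing flicker
  return Math.abs(hiddenAttr - 1.0) < EPSILON;
}
```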
We used the brain model from Here. The model is a 3D mesh of a real human brain, constructed from 12 volumes acquired using magnetic resonance imaging (MRI). In the model, the triangles did not share vertices, which meant we had to merge them together in Blender. To make the model load faster, we also merged vertices by distance; doing that brought the model from almost 600 000 triangles down to 60 000 triangles and the file size from 46.3 MB to 4.6 MB. To make the model look smoother, we apply smoothing in Three.js.
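The idea behind merging vertices by distance can be sketched as follows. This is our own simplified illustration, not Blender's implementation: it snaps each vertex to a grid of cell size `tolerance` and collapses vertices that land in the same cell, whereas Blender's Merge by Distance works on true pairwise distances.

```javascript
// Illustrative "merge by distance": vertices closer than ~tolerance collapse
// into one, and a remap table lets faces be rebuilt against the new indices.
function mergeByDistance(vertices, tolerance) {
  const seen = new Map();   // grid-cell key -> index in the merged list
  const merged = [];        // deduplicated vertex list
  const remap = [];         // old vertex index -> new vertex index
  vertices.forEach(([x, y, z]) => {
    const key = [x, y, z].map(c => Math.round(c / tolerance)).join(',');
    if (!seen.has(key)) {
      seen.set(key, merged.length);
      merged.push([x, y, z]);
    }
    remap.push(seen.get(key));
  });
  return { merged, remap };
}
```

Because the un-merged model duplicated every shared vertex for each adjacent triangle, this kind of deduplication is what produced the roughly tenfold reduction in vertex data.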
Input & Data
The data we used in our visualization was originally gathered for the article Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex, and was preprocessed and provided to us by our supervisor Ilya Kuzovkin. The experiment was conducted on epilepsy patients whose brains had been implanted with 100-150 electrodes each, originally for the purpose of monitoring their brain activity during seizures (see image below). This allowed the researchers to gather very accurate electrical activity information from inside the brain.
Our application allows a choice between two main types of data: baseline-normalized LFP (Local Field Potential) responses and baseline-normalized frequency power responses in the high gamma frequency range. Each electrode is represented by a circle whose size is determined by the strength of the signal. When the 'Color-code in accordance with the change in activity' option is selected, the circles are colored according to a red - white - blue gradient, where red means the LFP is positive and blue means it is negative. The darker the color, the stronger the activity.

When the DCNN option is chosen, only circles that can be mapped to the layers of a deep convolutional neural network are shown, and they are colored according to the corresponding DCNN layer: Layer 0 - #25219E, Layer 1 - #23479, Layer 2 - #2C5BA7, Layer 3 - #00B7EC, Layer 4 - #48C69B, Layer 5 - #A7D316, Layer 6 - #FFD100, Layer 7 - #FF5F17, Layer 8 - #E61A26. The higher the layer, the more complex the visual features that part of the brain processes. This mapping was determined in the aforementioned article and is significant because it shows that DCNNs have learned to process images in a way similar to the human brain.
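The red - white - blue gradient can be sketched as a simple linear interpolation. This is our own illustration of the scheme described above; the exact normalization and endpoints used in the app may differ:

```javascript
// Map a baseline-normalized LFP value to an [r, g, b] color:
// -1 -> blue, 0 -> white, +1 -> red, with darker (more saturated)
// colors for stronger activity.
function lfpToColor(value) {
  const v = Math.max(-1, Math.min(1, value)); // clamp to [-1, 1]
  if (v >= 0) {
    // white -> red: fade out the green and blue channels
    const t = 1 - v;
    return [255, Math.round(255 * t), Math.round(255 * t)];
  }
  // white -> blue: fade out the red and green channels
  const t = 1 + v;
  return [Math.round(255 * t), Math.round(255 * t), 255];
}
```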