[2.27.2025] I received the 2024 Graduate Student Best Paper Award from the School of Psychology, GT 😉
I have shared all of my experiment materials and data on the OSF platform:
Visual Sequence Memory Task: https://osf.io/frxdz (where you can find my self-composed music stimuli)
Memory and Emotional Music Task: https://osf.io/yr5em/
Email me with any questions or further requests.
Have you ever wanted to literally pick up your brain and examine it from all angles? No? Just me? Well, after seeing someone's 3D-printed brain paperweight on social media, I became obsessed with the idea. After all, what conversation starter could possibly top "Oh, that? That's just my brain," when guests notice the strange object on your desk?
So I embarked on this journey to transform the three pounds of neural tissue inside my skull into a tangible object I could hold, examine, and use to assert intellectual dominance during office meetings.
First things first: you need imaging data of your actual brain. For this, you'll need a high-resolution T1-weighted structural MRI scan. As a neuroscientist, I had the perfect opportunity to obtain my scan as a pilot subject for a colleague's study.
If you're not running your own neuroimaging lab, you still have several options:
Volunteer as a control subject for neuroscience studies (most cognitive neuroscience labs constantly need participants)
Collaborate with radiology departments as part of educational initiatives
Participate in large-scale brain mapping projects that provide participants with their data
Some imaging centers offer research-grade scans for participants in specific protocols
Important note for fellow scientists: Even when using research scans, always ensure you're following proper IRB protocols and data handling procedures. Many institutions have specific policies about using scan data for non-research purposes.
When you get your scan, make sure to request your data in DICOM format with full metadata preserved. You'll need the header information intact for proper spatial normalization and tissue segmentation in the next steps.
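Before handing anything to the pipeline, it can be worth verifying that what the imaging center gave you is actually DICOM Part 10 data: those files carry a 128-byte preamble followed by the magic bytes "DICM". A minimal, stdlib-only Python check (the folder name in the usage comment is hypothetical):

```python
def is_dicom(path):
    """Return True if the file has the DICOM Part 10 magic:
    a 128-byte preamble followed by the bytes b'DICM'."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"

# Hypothetical usage: flag anything in the export folder that isn't DICOM
# from pathlib import Path
# bad = [p for p in Path("my_scan").iterdir() if not is_dicom(p)]
```

This only checks the file magic, not the metadata itself, but it quickly catches exports that came back as JPEGs or screenshots instead of raw slices.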
Here's where we delve into the neuroimaging pipeline. The goal is to perform cortical surface reconstruction from the volumetric MRI data, creating accurate mesh representations of the pial surface (gray matter/CSF boundary) and the white matter surface.
I used FreeSurfer, the gold standard in structural neuroimaging for cortical parcellation and surface extraction.
Set up the FreeSurfer environment variables and source the configuration script (source $FREESURFER_HOME/SetUpFreeSurfer.sh)
Convert the DICOM series to NIfTI format with the header preserved (e.g., with dcm2niix)
Run the recon-all pipeline (the full morphometric analysis suite), for example: recon-all -i output.nii -s my_brain -all -qcache
Extract the pial surface meshes in STL format, in scanner coordinates so the two hemispheres line up:
mris_convert --to-scanner $SUBJECTS_DIR/my_brain/surf/lh.pial lh.stl
mris_convert --to-scanner $SUBJECTS_DIR/my_brain/surf/rh.pial rh.stl
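Before moving on, it doesn't hurt to confirm that each exported STL contains a plausible number of triangles; FreeSurfer pial surfaces typically run to a few hundred thousand faces per hemisphere, so a zero or tiny count usually means the export failed. A small stdlib-only sketch that handles both ASCII and binary STL layouts:

```python
import struct

def stl_facet_count(path):
    """Count triangles in an STL file, handling both ASCII and binary layouts."""
    with open(path, "rb") as f:
        data = f.read()
    # ASCII STL starts with 'solid' and lists 'facet normal ...' blocks
    if data[:5] == b"solid" and b"facet" in data:
        return data.count(b"facet normal")
    # Binary STL: 80-byte header, little-endian uint32 facet count,
    # then 50 bytes per facet
    (n,) = struct.unpack_from("<I", data, 80)
    assert len(data) == 84 + 50 * n, "file size inconsistent with facet count"
    return n
```

The function sniffs the "solid" keyword rather than assuming one layout, since converters differ in which STL flavor they write.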
The raw mesh from FreeSurfer's surface reconstruction isn't quite ready for printing. The pial surface representation requires several modifications to ensure printability while preserving neuroanatomical fidelity.
Combine the hemispheres: I used Meshlab to merge the left and right hemisphere meshes, ensuring proper interhemispheric alignment along the longitudinal fissure.
Topological cleanup and mesh optimization: Meshlab's cleaning filters handle this (removing duplicate vertices, repairing non-manifold edges, and so on).
Structural modifications for printability: use the simplification and remeshing filters to smooth the brain and bring the triangle count down to something your slicer can handle.
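Meshlab's GUI works fine for the hemisphere merge, but since --to-scanner already put both surfaces in the same scanner coordinate frame, the concatenation step itself can be sketched in pure stdlib Python for binary STL files. This does no cleanup or remeshing, and the filenames in the usage comment are hypothetical:

```python
import struct

def merge_binary_stl(paths, out_path):
    """Concatenate facets from several binary STL files into one binary STL.
    Assumes all inputs are binary STL and share a coordinate frame."""
    facets = []
    for path in paths:
        with open(path, "rb") as f:
            f.seek(80)                       # skip the 80-byte header
            (n,) = struct.unpack("<I", f.read(4))
            facets.append(f.read(50 * n))    # 50 bytes per facet
    total = sum(len(chunk) // 50 for chunk in facets)
    with open(out_path, "wb") as f:
        f.write(b"merged".ljust(80, b"\x00"))   # fresh 80-byte header
        f.write(struct.pack("<I", total))       # combined facet count
        for chunk in facets:
            f.write(chunk)
    return total

# Hypothetical usage:
# merge_binary_stl(["lh.stl", "rh.stl"], "brain.stl")
```

The merged file still needs the cleanup and simplification passes above before printing.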
Get it printed! I printed a regular version and a squishy version 😁
Now I have the most unique paperweight in my office, and it's amazing how many conversations it starts.
Plus, there's something deeply satisfying about pointing to a specific region and saying, "That's probably where I stored the memory of what I had for breakfast yesterday, but clearly, it's not working properly."
Disclaimer: I am not a medical professional, and this blog post should not be considered medical advice. Always consult with healthcare providers before pursuing medical imaging for non-medical purposes.