From simplified wireframe models to photorealistic extended reality visualizations with Unity/Unreal → WebGL/headsets
1) Deep learning brain segmentation for CT/MRI
2) Surface reconstruction from segmented brain
3) Surface visualization of regions of interest using game engines such as Unity or Unreal
These slides are compiled mostly for people in 3D visualization, computer graphics, and game development who have not necessarily seen many brain visualizations.
An overview of how brains are currently visualized and how that could be upgraded.
Alternative download link:
https://www.dropbox.com/s/69b176g8d8kuw1b/brain_viz.pdf?dl=0
2. GOAL
Visualize simple “Tuftean”, clinically relevant vectorized surfaces for clinicians, instead of “messier” voxel brain images.
3. Executive Summary
“Raster brain”: e.g. 1 mm³ MNI CT, with label masks for hematoma and beyond
→ “Vectorized brain”: e.g. mesh/NURBS, with label masks kept for the surface so you can easily e.g. “peel off” the brain around the hematoma in XR
Could start modular with 2 models, but this could eventually be an end-to-end model, of course.
“Perfect mesh” (well, surface + uncertainty): visualization development could be done separately from mesh extraction.
Start with simple wireframes, advance to more photorealistic renders, scalable to XR applications. Use industry-standard game engines; a minimal mask-to-mesh sketch follows below.
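To make the raster-to-vector step concrete, here is a minimal sketch that turns a binary label volume into a triangle surface with marching cubes, via nibabel and scikit-image. The file name and the ">0 means hematoma" labeling are illustrative assumptions, not part of any specific pipeline:

```python
# Minimal sketch: label volume ("raster brain") -> triangle surface
# ("vectorized brain"). File name and labeling convention are assumptions.
import nibabel as nib
import numpy as np
from skimage import measure

seg = nib.load("hematoma_labels.nii.gz")        # e.g. a 1 mm^3 label mask
mask = (seg.get_fdata() > 0).astype(np.uint8)   # binary: hematoma vs rest

# Marching cubes on the mask yields vertices/faces in physical units
verts, faces, normals, _ = measure.marching_cubes(
    mask, level=0.5, spacing=seg.header.get_zooms()[:3])
print(f"{len(verts)} vertices, {len(faces)} triangles")
```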
4. Idea illustrated: Real-time interactive 3D model
See the moodboard at the end of this slideshow for more visualizations.
Visualize the brain vasculature if that is the interest of the clinician (e.g. CT angiography).
Nowell et al. 2016, http://doi.org/10.3791/53450
5. Idea illustrated: What complexity level?
Nowell et al. 2016, http://doi.org/10.3791/53450
https://iiif.wellcomecollection.org/image/B0007786.jpg/full/2048%2C/0/default.jpg
https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation
Tradeoffs:
● Looking pretty vs. being understood by the end users (e.g. data scientists and clinicians). You need to do a usability study on the user experience (UX) at some point if you start developing new visualization tools.
● Looking pretty also requires heavier hardware? Is this always found in hospitals (no!), and could the same visualization pipeline have various complexity levels implemented? Or do you write a new “wireframe” library if you are not happy with the current off-the-shelf photorealistic renderers?
6. How low-poly can you go while still keeping the visualized brain interpretable?
“Engineered” planar surfaces: easy to simplify for visualizations with high fidelity.
Leaves, like the human brain, require a higher-poly representation.
It is easier to model the brains of species with no gyrification, if you are into modeling, for example, the mouse brain.
https://www.livescience.com/47421-human-brain-wrinkles.html
Low-poly art by @andreykrygov: https://www.instagram.com/p/CIqh_8JDV_Y/
https://www.blendswap.com/blend/6770
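One way to probe "how low-poly can you go" is quadric edge-collapse decimation on a reconstructed cortical surface. A minimal sketch with Open3D; the input/output file names and the 5,000-triangle budget are illustrative assumptions:

```python
# Sketch: decimate a dense brain surface to a low-poly version and
# inspect whether gyri/sulci remain interpretable.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("brain_pial.ply")   # assumed input mesh
print(f"input: {len(mesh.triangles)} triangles")

low = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
low.compute_vertex_normals()                          # normals for shaded preview
o3d.io.write_triangle_mesh("brain_lowpoly.ply", low)
```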
9. Mesh → Unreal/Unity → WebGL, etc., if you are into visualization
Helping brain surgeons practice with real-time simulation. August 30, 2019, by Sébastien Lozé.
https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation
In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi Seo and Takeo Igarashi state that “Human anatomy is so complex that just visualizing it in traditional ways is insufficient for easy understanding…” To address this problem, Seo has proposed a practical approach to brain surgery using real-time rendering with Unreal Engine.
Now Seo and his team have taken this concept a step further with their 2019 paper Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an application prototype for viewing and manipulating a CG representation of a patient’s brain in real time.
The software prototype, made possible with a grant (Grant Number JP18he1602001) from the Japan Agency for Medical Research and Development (AMED), helps surgeons visualize a patient’s unique brain structure before, during, and after an operation.
BrainBrowser is an open source, free 3D brain atlas built on WebGL technologies; it uses Three.js to provide 3D/layered brain visualization. Reviewed on medevel.com.
A Blender .blend file can be placed in the Assets folder of a Unity project:
https://forum.unity.com/threads/holes-in-mesh-on-import-from-blender.248126/
Interaction between a volume-rendered 3D texture and mesh objects:
https://forum.unity.com/threads/interaction-between-volume-rendered-3d-texture-and-mesh-objects.451345/
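For getting a reconstructed mesh into these engines and into WebGL viewers, one interchange option is binary glTF (.glb), which Three.js reads with its GLTFLoader, Unreal reads with its glTF importer, and Unity reads via a glTF import plugin. A minimal sketch with trimesh; file names are illustrative assumptions:

```python
# Sketch: package a mesh as binary glTF 2.0 for game engines / WebGL.
import trimesh

mesh = trimesh.load("brain_lowpoly.ply")  # assumed mesh from earlier steps
mesh.export("brain_lowpoly.glb")          # .glb = binary glTF container
```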
12. Making the brains physical with 3D printing
Making data matter: Voxel-printing for the digital fabrication of data across scales and domains. Christoph Bader et al., The Mediated Matter Group, Media Lab, Massachusetts Institute of Technology, Cambridge.
https://doi.org/10.1126/sciadv.aas8652 (30 May 2018)
We present a multimaterial voxel-printing method that
enables the physical visualization of data sets commonly
associated with scientific imaging. Leveraging voxel-based
control of multimaterial three-dimensional (3D) printing, our
method enables additive manufacturing of discontinuous data
types such as point cloud data, curve and graph data, image-
based data, and volumetric data. By converting data sets into
dithered material deposition descriptions, through
modifications to rasterization processes, we demonstrate that
data sets frequently visualized on screen can be converted into
physical, materially heterogeneous objects.
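As a toy illustration of the dithering idea in the abstract, here is classic Floyd-Steinberg error diffusion on one grayscale slice, turning continuous values into a binary deposit/no-deposit map. This is loosely in the spirit of the dithered material deposition described by Bader et al.; their actual pipeline is multi-material and voxel-level, so this sketch is only the 2D, two-material intuition:

```python
# Toy sketch: error-diffusion dithering of a [0, 1] grayscale slice
# into a binary deposition map.
import numpy as np

def floyd_steinberg(slice2d: np.ndarray) -> np.ndarray:
    img = slice2d.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = int(new)
            # push the quantization error onto unvisited neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```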
Representative 3D-printed models of image-based data. (A) In vitro reconstructed living human lung tissue on a microfluidic device, observed through confocal microscopy (29). The cilia, responsible for transporting airway secretions and mucus-trapped particles and pathogens, are colored orange. Goblet cells, responsible for mucus production, are colored cyan. (B) Biopsy from a mouse hippocampus, observed via confocal expansion microscopy (proExM) (30). The 3D print visualizes neuronal cell bodies, axons, and dendrites.
(H) White matter tractography data of the human brain, created with the 3D Slicer medical image processing platform (37), visualizing bundles of axons which connect different regions of the brain. The original data were acquired through diffusion-weighted (DWI) MRI.
14. Surface (mesh or NURBS) from volumetric data
FastSurfer - A fast and accurate deep learning based neuroimaging pipeline. Leonie Henschel et al., German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany.
https://arxiv.org/abs/1910.03866 (9 Oct 2019). Cited by 17.
To this end, we introduce an advanced deep learning architecture
capable of whole brain segmentation into 95 classes in
under 1 minute, mimicking FreeSurfer’s anatomical
segmentation and cortical parcellation. The network architecture
incorporates local and global competition via competitive dense
blocks and competitive skip pathways, as well as multi-slice
information aggregation that specifically tailor network
performance towards accurate segmentation of both
cortical and sub-cortical structures.
Further, we perform fast cortical surface reconstruction and
thickness analysis by introducing a spectral spherical
embedding and by directly mapping the cortical labels from the
image to the surface. This approach provides a full FreeSurfer
alternative for volumetric analysis (within 1 minute) and
surface-based thickness analysis (within only around
1h run time). For sustainability of this approach we perform
extensive validation: we assert high segmentation accuracy on
several unseen datasets, measure generalizability and
demonstrate increased test-retest reliability, and increased
sensitivity to disease effects relative to traditional FreeSurfer.
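To give a feel for the volumetric-analysis side, here is a minimal sketch of per-structure volumetry from a FreeSurfer/FastSurfer-style label volume. The output path below is an assumption, so check your pipeline's actual file names:

```python
# Sketch: per-label volumes (in ml) from a segmentation volume.
import nibabel as nib
import numpy as np

seg = nib.load("subject01/mri/aparc.DKTatlas+aseg.deep.mgz")  # assumed path
data = seg.get_fdata().astype(int)
voxel_ml = np.prod(seg.header.get_zooms()[:3]) / 1000.0       # mm^3 -> ml

labels, counts = np.unique(data[data > 0], return_counts=True)
for label, n in zip(labels, counts):
    print(f"label {label}: {n * voxel_ml:.2f} ml")
```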
15. Surface (mesh or NURBS) from volumetric data #2
Surface-Based Connectivity Integration. Martin Cole et al. (2020). https://doi.org/10.1101/2020.07.01.183038
DeepCSR: A 3D Deep Learning Approach for Cortical Surface Reconstruction. Rodrigo Santa Cruz et al. (2020). https://arxiv.org/abs/2010.11423
Moreover, DeepCSR is as accurate, more precise, and faster than the widely used FreeSurfer toolbox and its deep-learning-powered variant FastSurfer at reconstructing cortical surfaces from MRI, which should facilitate large-scale medical studies and new healthcare applications.
17. Surface (mesh or NURBS) from volumetric data #4
DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces. Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas. University of Toronto; Vector Institute; Tsinghua University; Stanford University; UC San Diego.
(Submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781 - Cited by 14
https://github.com/SteveJunGao/deepspline
See also orbingol/NURBS-Python
Reconstruction of geometry based on different input modes, such as images or point clouds, has been instrumental in the development of computer-aided design and computer graphics. Optimal implementations of these applications have traditionally involved the use of spline-based representations at their core. Most such methods attempt to solve optimization problems that minimize an output-target mismatch. However, these optimization techniques require an initialization that is close enough, as they are local methods by nature. We propose a deep learning architecture that adapts to perform spline fitting tasks accordingly, providing complementary results to the aforementioned traditional methods.
To tackle challenges with the 2D cases, such as multiple splines with intersections, we use a hierarchical Recurrent Neural Network (RNN) (Krause et al. 2017), trained with ground truth labels, to predict a variable number of spline curves, each with an undetermined number of control points.
In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection through an unsupervised learning approach that circumvents the requirement for ground truth labels. We use the Chamfer distance to measure the distance between the predicted point cloud and the target point cloud. This architecture is generalizable, since predicting other kinds of surfaces (like surfaces of sweeping, or NURBS) would require only a change of this individual layer, with the rest of the model remaining the same.
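For reference, the symmetric Chamfer distance the abstract mentions can be sketched in a few lines with KD-trees (deep learning frameworks use differentiable variants, so this is the evaluation-time form only):

```python
# Sketch: symmetric Chamfer distance between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 3) and (M, 3) point arrays."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbour in b for each point of a
    d_ba, _ = cKDTree(a).query(b)   # and vice versa
    return float(d_ab.mean() + d_ba.mean())
```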
Petteri: What would the open-source workflow with NURBS be in production? It seems harder to use Rhinoceros 3D in production (open question).
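One possible open-source answer is NURBS-Python (the orbingol/NURBS-Python project mentioned above, the geomdl package). A minimal sketch evaluating a cubic B-spline curve; the control points are made up for illustration:

```python
# Sketch: evaluate a cubic B-spline curve with geomdl (NURBS-Python).
from geomdl import BSpline, utilities

curve = BSpline.Curve()
curve.degree = 3
curve.ctrlpts = [[0, 0, 0], [1, 2, 0], [3, 2, 0], [4, 0, 0], [5, -1, 0]]
curve.knotvector = utilities.generate_knot_vector(curve.degree,
                                                  len(curve.ctrlpts))
curve.delta = 0.01          # evaluation step along the parameter
points = curve.evalpts      # sampled points, ready for meshing/plotting
```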
20. Mesh resampling? Already in the surface reconstruction net?
Control the complexity of the brain mesh for the platform used to view the render (see the LOD sketch below).
How suitable are triangular meshes actually? How easy are they to simplify, e.g. compared to NURBS? Do you have a NURBS representation that you just convert to meshes if the visualization library only supports meshes?
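A simple way to control mesh complexity per platform is a ladder of decimated levels of detail from one reconstruction. Triangle budgets and file names below are illustrative assumptions, not recommendations:

```python
# Sketch: one decimated LOD per target platform (desktop, WebGL, mobile XR).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("brain_pial.ply")   # assumed dense mesh
for name, budget in [("desktop", 200_000), ("webgl", 50_000),
                     ("mobile_xr", 10_000)]:
    lod = mesh.simplify_quadric_decimation(target_number_of_triangles=budget)
    lod.compute_vertex_normals()
    o3d.io.write_triangle_mesh(f"brain_{name}.ply", lod)
```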
25. Coarse-to-fine upsampling as well, with deep learning
…topological updates of Loop subdivision, but predicting vertex positions using a neural network conditioned on the local geometry of a patch. This approach enables us to learn complex non-linear subdivision schemes, beyond the simple linear averaging used in classical techniques. One of our key contributions is a novel self-supervised training setup that only requires a set of high-resolution meshes for learning network weights. For any training shape, we stochastically generate diverse low-resolution discretizations of coarse counterparts, while maintaining a bijective mapping that prescribes the exact target position of every new vertex during the subdivision process.
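For contrast, here is the classical (linear) Loop subdivision that the learned scheme builds on: the same 1-to-4 topological update, but with averaged rather than network-predicted vertex positions. The input file name is an illustrative assumption:

```python
# Sketch: classical Loop subdivision as the baseline for learned upsampling.
import open3d as o3d

coarse = o3d.io.read_triangle_mesh("brain_lowpoly.ply")
fine = coarse.subdivide_loop(number_of_iterations=2)
fine.compute_vertex_normals()
print(len(coarse.triangles), "->", len(fine.triangles))  # 16x the triangles
```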
27. Extended Reality (VR/AR/MR/XR) in Medicine
“VRx”: A Medgadget Book Interview with Author Dr. Brennan Spiegel. December 8th, 2020, Scott Jung.
https://www.medgadget.com/2020/12/vrx-a-medgadget-book-interview-with-author-dr-brennan-spiegel.html
https://doi.org/10.1007/s11936-019-0722-7
https://doi.org/10.1007/s11936-019-0722-7
The most readily available benefits of XR are in the
form of visualizations of 3D anatomy and real-time
display of anatomy and tooling. There are currently
no published prospective clinical trials using
AR or XR in human subjects. However, given the
rapid development in the field, we expect to see
human data in the near future. The XR hardware
landscape is changing rapidly. Future hardware
advances should improve visual realism and user
comfort. Incorporation of haptic feedback into
XR systems may be an important breakthrough for
interventional procedures.
Readers interested in more information about this
field, particularly XR display hardware considerations,
should consult our previous review [23]. The
book Mixed and Augmented Reality in Medicine [24] will be of interest to readers looking for an in-depth resource about XR in medicine.
28. Keep the Extended Reality (XR - VR/AR) option open #1
Multi-Threaded Integration of HTC Vive and MeVisLab. February 2018, doi: 10.13140/RG.2.2.18864.05121. Conference: SPIE Medical Imaging 2018. Project: Virtual Reality in Medicine. Simon Gunacker, Markus Gall, Dieter Schmalstieg, Jan Egger.
Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Gavin Wheeler, Shujie Deng, Nicolas Toussaint, Kuberan Pushparajah, Julia A. Schnabel, John M. Simpson, Alberto Gomez. Healthcare Technology Letters (Volume 5, Issue 5, 10 2018).
https://doi.org/10.1049/htl.2018.5064 - Cited by 17.
The surface rendering techniques used in recent VR and AR medical visualisation systems built using Unity require a patient-specific polygonal model of the anatomy of interest. Such surface models are typically derived from medical images through segmentation, using manual or semi-automatic methods. In most cases, this involves manual effort, and the time and skill to do this may be significant. Moreover, the segmentation process inherently loses information present in the original volume data.
Volume data often do not have precise boundaries, but volume rendering allows the user to interactively tune rendering parameters or apply filters, to achieve the desired appearance. By integrating volume rendering in VR, we remove the potentially erroneous segmentation steps and give the user more flexibility and control.
In this work, we aim to integrate VTK into Unity to bring the medical imaging visualisation features of VTK into interactive virtual environments developed using Unity. In particular, we describe a method to integrate VTK volume rendering of 3D medical images into a VR Unity scene, and combine the rendered volume with opaque geometry, e.g. sphere landmarks. We focus on creating core technology to enable this, and give developers and researchers the ease of use and flexibility of Unity combined with the volume rendering features of VTK.
High-level workflow diagram showing the communication and interaction between MeVisLab and the HTC Vive via OpenVR, encapsulated in its own thread.
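For a flavour of the VTK side of such an integration, here is a minimal standalone sketch of GPU ray-cast volume rendering in VTK's Python API. The input file name and transfer-function points are illustrative assumptions:

```python
# Sketch: GPU ray-cast volume rendering of a CT volume with VTK.
import vtk

reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("ct_head.nii.gz")          # assumed input volume

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Illustrative grayscale->color and opacity transfer functions
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500.0, 1.0, 0.9, 0.8)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0.0, 0.0)
opacity.AddPoint(500.0, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.SetInterpolationTypeToLinear()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```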
29. Keep the Extended Reality (XR - VR/AR) option open #2
NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization. González-Izard, S., Alonso Plaza, Ó., Sánchez Torres, R., Juanes-Méndez, J.A., García-Peñalvo, F.J. (2020).
http://repositorio.grial.eu/handle/grial/1803
The Grid Factory, a U.K.-based provider of NVIDIA GPU-accelerated services, is partnering with
telecommunications company Vodafone to showcase the potential of 5G technology with a network built at
Coventry University. Operating NVIDIA CloudXR on the private 5G network, student nurses and healthcare professionals can experience lessons and simulations in virtual reality environments.
https://blogs.nvidia.com/blog/2020/11/17/coventry-university-cloudxr/
30. Keep the Extended Reality (XR - VR/AR) option open #3
Validation of virtual reality orbitometry bridges digital and physical worlds. Peter M. Maloca, Balázs Faludi, Marek Zelechowski, Christoph Jud, Theo Vollmar, Sibylle Hug, Philipp L. Müller, Emanuel Ramos de Carvalho, Javier Zarranz-Ventura, Michael Reich, Clemens Lange, Catherine Egan, Adnan Tufail, Pascal W. Hasler, Hendrik P. N. Scholl & Philippe C. Cattin.
Scientific Reports volume 10, Article number: 11815 (July 2020). https://doi.org/10.1038/s41598-020-68867-6
In summary, the orbit, and possibly all other three-dimensional spaces, can be non-invasively and digitally visualized and measured in close-to-reality conditions and investigated with high precision using a VR image display method that measures what it purports to measure. An objective diameter measurement can be attained to quantify the dimensions of the orbit and improve spatial awareness, diagnosis, monitoring and pre-surgical planning.
31. Keep the Extended Reality (XR - VR/AR) option open #4
Virtual reality in advanced medical immersive imaging: a workflow for introducing virtual reality as a supporting tool in medical imaging. Markus M. Knodel, Babett Lemke, Michael Lampe, Michael Hoffer, Clarissa Gillmann, Michael Uder, Jens Hillengaß, Gabriel Wittum & Tobias Bäuerle.
Computing and Visualization in Science volume 18, pages 203–212 (2018). https://doi.org/10.1007/s00791-018-0292-3
Our approach is based on the use of the graphical surface package VRL [4] (Visual Reflection Library), which preserves various own packages and brings together third-party packages to enable, within one program, the extraction of 3D volume- and surface-rendered CT and MRI image stacks. Such an approach is based on a very intuitive and simple GUI (graphical user interface) to project them into the virtual reality space within the environment of a versatile and prominent 3D virtual reality project, namely COVISE / OpenCOVER (COllaborative VIsualization and Simulation Environment) [5] of the HLRS (Höchstleistungsrechenzentrum) Stuttgart, which is used widely within the automotive development processes of e.g. Mercedes and Porsche.
See Unity Forma for similar functionality for the automotive industry and beyond:
https://blogs.unity3d.com/2020/12/09/introducing-unity-forma-reimagine-marketing-with-real-time-3d/
33. Keep the Extended Reality (XR - VR/AR) option open #6
How I Used AR To Build 3D MRI Scans. Riya Mehta, Nov 19, 2019.
https://medium.com/@riyamehta9001/how-i-used-ar-to-build-3d-mri-scans-5e0df497c594
Autodesk 3D
It was awesome to play around with this and be able to implement the models I built and programmed into the augmented reality platform I used, called Sketchfab, which converts/programs 3D models into either VR or AR demos. Here’s a quick glance at what I was able to do.
Augmented reality will completely change the way we diagnose patients, view 3D models & observe the medical industry as a whole.
35. Keep the Extended Reality (XR - VR/AR) option open #8
https://www.cgtrader.com/low-poly-3d-models/brain
165 low-poly 3D brain models are available for download. These models contain a significantly smaller number of polygons and therefore require less computing power to render. Models with fewer polygons are best used in real-time applications that require fast processing, like virtual reality (VR), augmented reality (AR), mixed reality (MR), cross reality (XR) and games, especially mobile games. Choose from our collection of rigged and animated models to easily use them in your real-time applications. Get 3D assets for environments, pick props and objects, or buy complete 3D model collections, bundles and packs with everything your game might need. Save development time and costs, make prototype experiences or use 3D models as placeholders in your project. To find models that render predictably in various engines, use the PBR filter next to the search bar.
https://www.cgtrader.com/3d-models/science/medical/low-polygon-art-medical-brain-color
36. Game engines (e.g. Unity or Unreal): from simple wireframes to XR visualization
Start with lightweight WebGL / 3D PDF visualizations with clinicians who have little hardware and/or technical skills. The Three.js approach, e.g. as used in the X Toolkit, might be even simpler?
37. WebGL as your baseline? https://medevel.com/15-webgl-medical-visualization-projects/
AnatomyLearning is a free 3D anatomy atlas exported to work with WebGL using the Unity game engine & Unity Web Player. It provides two versions: one for modern web browsers, built on WebGL 2.0, and another for Android mobile systems.
This 3D heart anatomy VR (virtual reality) project is built with Babylon.js, a WebGL JavaScript framework. It is published as a VR project at VESTA, a WebVR social network that provides a sharing platform for VR artists, developers, & regular users.
The Open Anatomy Project is an open source 3D anatomy atlas that works directly from the web browser, as it uses pure HTML technologies and WebGL rendering. The Open Anatomy Project is carried out and developed by Brigham and Women's Hospital in Boston, aiming to deliver rich digital anatomy atlases to students, doctors, researchers, and the general public.
BrainBrowser is an open source, free 3D brain atlas built on WebGL technologies; it uses Three.js to provide 3D/layered brain visualization. We have reviewed BrainBrowser and listed all of its current features; you may read about it here:
BrainBrowser: Open Source Web-based Brain Visualization with Volume & Surface Viewers
38. WebGL as your baseline?
Modern Scientific Visualizations on the Web. Loraine Franke and Daniel Haehn. Informatics, September 2020, 7(4), 37; https://doi.org/10.3390/informatics7040037
Modern scientific visualization is web-based
and uses emerging technology such as
WebGL (Web Graphics Library) and
WebGPU for three-dimensional computer
graphics and WebXR for augmented and
virtual reality devices. These technologies,
paired with the accessibility of websites,
potentially offer a user experience beyond
traditional standalone visualization systems.
We review the state-of-the-art of web-
based scientific visualization and
present an overview of existing methods
categorized by application domain. As part
of this analysis, we introduce the Scientific
Visualization Future Readiness Score
(SciVis FRS) to rank visualizations for
a technology-driven disruptive
tomorrow. We then summarize
challenges, current state of the publication
trend, future directions, and opportunities for
this exciting research field.
39. WebVR
Virtual Reality Volume Rendering for real-time Visualization of Radiologic Anatomy.
https://tobias.rautenkranz.ch/blog/code/webvr-vr.html (2017)
A new Web technology (WebVR), in combination with powerful smartphone graphics processors and inexpensive virtual reality viewers (Google Cardboard), has made virtual reality cheap and easily obtainable. These new developments, and the previously mentioned possibility of a more natural visualization, have led me to implement an experimental VR web page for an MRI scan.
Implementation Details
The existing volume renderer for WebGL was adapted to support
the new WebVR standard to allow virtual reality rendering in the
browser.
As for most VR renderers, a forward renderer is used. The difference is that this is normally done to be able to use MSAA (see Optimizing the Unreal Engine 4 Renderer for VR, Pete Demoreuille), but this is of no use for volume rendering. Instead, the ability of the existing WebGL renderer to output an image of the volume and its segmentation simultaneously is not needed. On the contrary, they need to be combined in one image. Thus, the renderer was adapted to a single stage. As a side effect, color renderings are now possible without using the WEBGL_draw_buffers extension.
Since multiple layers are not supported by WebVR implementations, some additional overlay renderers are injected before or after the volume rendering.
Please note that, due to its experimental nature, the source code is sometimes confusing or even confused.
40. Example of Unity use
Interactive heart with Unity3D.
http://www.medicalgraphics.de/en/projects/making-ofs/interactive-heart-with-unity3d.html
To use WebGL, there are a number of tools and engines available. Some time ago we dealt with the possibilities of Sketchfab, a web service that allows a simple way to visualize 3D data in WebGL. Sketchfab is mainly a pure data viewer; interaction is somewhat limited. Also, the models are attached to a web service and cannot be used stand-alone or offline.
With the liberalization of their license conditions and the integration of WebGL/HTML5, Unity3D became a very interesting tool. Unity is actually a game engine, a development environment for games. Through its possibilities to write your own code and scripts and then export the application for different devices (PC, Mac, web, consoles, mobile), the possibilities are virtually unlimited. Also, Unity is not a web service, therefore you have full control over the created application.
As a simple test object for an interactive medical application, we chose a human heart, which should be freely explorable in the app. In addition, we built simple functionality to emphasize anatomical structures - in this particular case there is an option to remove the coronary vessels and to open a part of the heart to look into the ventricles. This is a simple example of the possibilities of interactive 3D models.
41. Example of Unity use #2
Holographic Reconstruction of Axonal Pathways in the Human Brain. Mikkel V. Petersen, Jeffrey Mlakar, Suzanne N. Haber, Martin Parent, Yoland Smith, Peter L. Strick, Mark A. Griswold, Cameron C. McIntyre. Published: November 07, 2019. DOI: https://doi.org/10.1016/j.neuron.2019.09.030
42. Example of Unity use #3
Color Rendering in Medical Extended-Reality Applications. Andrea Seung Kim, Wei-Chung Cheng, Ryan Beams & Aldo Badano. Journal of Digital Imaging (November 2020).
https://doi.org/10.1007/s10278-020-00392-4
RGB input and output for five digital material, digital lighting, and digital camera configurations within the Unity engine in the rendering of cross-platform applications for selected scenes: (a) a digital pathology image [8], (b) a digital chest radiograph, and (c) a full-field digital mammogram [9].
When building an XR application for different platforms, developers should consider the file size with associated memory size requirements, pixel dimensions, and resolution of the image textures for each target platform [25], as the type of texture compression is dependent on the intended platform. For instance, a standalone XR HMD, an Android mobile device, and a PC will each have their own unique compression formats that work with their specific hardware, as some graphics devices only use certain compressed formats. Developers have the ability to designate specific compression settings for each platform in the import settings of the inspector window.
43. Example of Unity use #4
Virtual linear measurement system for accurate quantification of medical images. Gavin Wheeler, Shujie Deng, Kuberan Pushparajah, Julia A. Schnabel, John M. Simpson, Alberto Gomez. School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
Healthcare Technology Letters (Volume 6, Issue 6, 12 2019). https://doi.org/10.1049/htl.2019.0074
Hierarchical structure of the measurement prefab, as implemented in Unity. The measurement object has five child objects, as illustrated. Objects marked with ‘I’ have physics interactors. Blue and purple arrows indicate the linking of the connector lines to the start point, end point and label. Green arrows indicate the Unity scripts governing the scale of the objects. The red arrow indicates the redirection of editing (translate, rotate) from the connector to the measurement parent. Shapes, colours and label text are a representative example.
We proposed a 3D VR system to carry out linear measurements on volumetric images, and
demonstrated it on echocardiographic images of a calibration phantom and of cardiac patients.
All measurements were carried out with Philips QLAB (our baseline), Tomtec (its 3D
measurement system only) and our proposed VR platform. Overall, this study showed that a
VR system can have measurement tools that are comparable to clinically used
commercial tools, while providing further insight and understanding into complex 3D
anatomy.
44. Example of Unity use #5
Applications of VR medical image visualization to chordal length measurements for cardiac procedures. Patrick Carnahan, John Moore, Daniel Bainbridge M.D., Gavin Wheeler, Shujie Deng, Kuberan Pushparajah, Elvis C. S. Chen, John M. Simpson, Terry M. Peters.
Proceedings Volume 11315, Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling; 1131528 (2020). https://doi.org/10.1117/12.2549597
45. Example of Unreal use #1
Helping brain surgeons practice with real-time simulation. August 30, 2019, by Sébastien Lozé.
https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation
In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi Seo and Takeo Igarashi state that “Human anatomy is so complex that just visualizing it in traditional ways is insufficient for easy understanding…” To address this problem, Seo has proposed a practical approach to brain surgery using real-time rendering with Unreal Engine.
Now Seo and his team have taken this concept a step further with their 2019 paper Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an application prototype for viewing and manipulating a CG representation of a patient’s brain in real time.
In developing the application, Seo’s team chose Unreal Engine as the underlying real-time technology because of its graphics and programming tools. “Unreal Engine has powerful mathematical C++ APIs such as FVector, FMath, and UKismetMathLibrary, so we find it to be a suitable platform for research on 3D CG geometry,” says Seo.
46. Example of Unreal use #2
Volume Rendering in Unreal Engine 4. 08-04-2016, 04:20 PM.
To be honest, I am not sure this should be here, but I felt the other topics were even less relevant, as I am talking about rendering, just not the standard methods in UE4. Feel free to move it if I placed it in the wrong area. To start, let me be transparent: I am working on a master's thesis using VR and scientific visualization. I saw potential in the merging of UE4 and scientific visualization for students, scientists, gamers and all graphical artists alike.
https://forums.unrealengine.com/development-discussion/rendering/91596-your-thoughts-on-and-comments-to-volume-rendering-in-unreal-engine-4
https://youtu.be/z34X_52O20U
47. Example of Unreal use #3
Volumetric Medical Data Visualization for Collaborative VR Environments. 27 October 2020. Roland Fischer, Kai-Ching Chang, René Weller, Gabriel Zachmann.
https://doi.org/10.1007/978-3-030-62655-6_11
We present an easy-to-use and expandable system for volumetric medical image visualization with support for multi-user VR interactions. The main idea is to combine a state-of-the-art open-source game engine, the Unreal Engine 4, with a new volume renderer for CT images.
The underlying game engine basis guarantees extensibility and allows for easy adaption of our system to new hardware and software developments. In our example application, remote users can meet in a shared virtual environment and view, manipulate and discuss the volume-rendered data in real time.
Our new volume renderer for the Unreal Engine is capable of real-time performance as well as high-quality visualization.
For the future we plan to expand the interaction possibilities with the volume visualization; specifically, we are looking at integrating a dynamic clipping plane for a better view of internal regions, and a volumetric drawing tool allowing for quick sketches and annotations inside the volume. Other improvements would be a direct integration and parallelization of the preprocessing part to speed up the workflow, and allowing for a dynamic adjustment of the transfer functions. To improve the visualization of complex structures and organs that involve multiple materials, support for multi-dimensional transfer functions could be added.
48. Example of Unreal use #4
3D Kinematics of Upper Limb Functional Assessment Using HTC Vive in Unreal Engine 4. Kai Liang Lew, Kok Swee Sim, Shing Chiang Tan, Fazly Salleh Abas. 19 November 2020.
https://doi.org/10.1007/978-3-030-63119-2_22
The purpose of the research in this paper is to quantify the accuracy and precision of the HTC Vive by making upper limb assessment measurements and performing functional tasks in Unreal Engine 4. Thirty healthy males performed daily-aim functional tasks, and arm length measurement and assessment were made. Each participant attended two testing sessions and one arm length measurement session. The upper limb length was measured using the HTC Vive after making three types of hand posture exercises. The arm assessment included the minimum and maximum angle of shoulder adduction, abduction, flexion and extension.
The experiment showed that all the upper limb measurements collected from the functional tasks, as well as the position and rotation of the upper limb, could be estimated correctly. The proposed system is potentially useful for assessing stroke rehabilitation in the hospital and rehabilitation center.
49. Example of Unreal use #5
The Uterine Games: Using a Game Engine to Develop a 3D Digital Female Reproductive Tract to Aid in Anatomy Education. Yuna K. Park, Danielle Royer (18 April 2020).
https://doi.org/10.1096/fasebj.2020.34.s1.04584
The aim of this project was to iteratively design and develop a
mobile application (app) depicting a 3D model of the
plastinated female reproductive tract. A 3D surface
model of the plastinate was digitally reconstructed using an
Artec Space Spider 3D Scanner. Artifacts were smoothed and
texture was refined in ZBrushCore 2018 and Autodesk Maya
2019. The model was packaged into a mobile app using a
game engine, Unreal Engine 4 (UE4).
Compared to other app development software, UE4 was chosen for its robust visualization of 3D models, cross-platform deployment, and zero upfront costs. With online tutorials, UE4's Blueprints visual scripting system is relatively simple to grasp, and the node-based interface is a powerful approach for non-programmers, allowing extreme flexibility without the need for coding. Utilizing this flexibility, the app was designed to promote self-paced independent learning of the female reproductive tract and associated pelvic anatomy.
https://youtu.be/EFXMW_UEDco
https://www.unrealengine.com/en-US/spotlights/vr-medical-simulation-from-precision-os-trains-surgeons-five-times-faster
57. Collection of low-poly brain models (click images for source)
In other words, why hand-model these, if you could automatically create low-poly brains from acquired CT/MRI images, with ROI overlays highlighting the structures of interest, in an interactive 3D model? And if nothing else, you can use these as inspiration for your startup branding.
64. https://brainder.org/research/brain-for-blender/
Image acquisition and reconstruction
The images were acquired at the Research Imaging Institute, University of Texas Health Science Center at San Antonio, in a Siemens Magnetom Trio 3T system, in two sessions, each consisting of 6 acquisitions of T1-weighted images, using an MPRAGE sequence, with a voxel size of 0.8×0.8×0.8 millimeters. The images were registered and averaged to improve the signal-to-noise ratio, as described here, and bias-corrected using the SPM8 software. The already realigned, averaged and bias-corrected volume, in NIfTI format, is available here.
The generation of the cortical meshes and subcortical
segmentations used FreeSurfer 5.2.0. The splitting of the cortical
meshes into independent objects was performed using a custom script that
soon will be released at Brainder.org (update: they are now available here). The
subcortical meshes were produced from the volumetric segmentations, as
described here.
Subcortical structures
In addition to the above cortical meshes, surfaces for subcortical structures are also available. These are not produced directly by the FreeSurfer pipeline. However, the segmented volumes that are part of the subcortical stream can be used to generate surfaces for visualisation purposes, as described here.
The meshes for the same brain, in different formats, can be downloaded here: srf mz3 obj ply.
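As a sketch of this aseg-to-surface route, here is marching cubes applied to one subcortical label of FreeSurfer's aseg volume. Label 17 is Left-Hippocampus in the FreeSurfer color lookup table; the file paths are illustrative assumptions:

```python
# Sketch: subcortical surface from FreeSurfer's volumetric segmentation.
import nibabel as nib
import numpy as np
from skimage import measure
import trimesh

aseg = nib.load("subject/mri/aseg.mgz")            # assumed FreeSurfer output
mask = (aseg.get_fdata() == 17).astype(np.uint8)   # 17 = Left-Hippocampus

verts, faces, _, _ = measure.marching_cubes(
    mask, level=0.5, spacing=aseg.header.get_zooms()[:3])
trimesh.Trimesh(vertices=verts, faces=faces).export("left_hippocampus.ply")
```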
92. NVIDIA Healthcare 2.0 – Developing and deploying AI in healthcare
https://tectales.com/ai/nvidia-healthcare-2-0-developing-deploying-ai-in-healthcare.html
ImFusion uses deep learning to turn 2D ultrasound data into 3D images.
NVIDIA is already working with various partners on adopting AI for their products. For example, Siemens Healthineers is using an NVIDIA GPU-based supercomputing infrastructure to develop AI software that generates organ segmentations to enable precision radiation therapy. Furthermore, Siemens' Sherlock AI supercomputer, which is used to run more than 500 AI experiments daily, is also powered by NVIDIA technology.
However, NVIDIA is not only working with industry, but also with academic and research institutions. They are collaborating with King's College London (Jorge Cardoso et al.) to bring AI in medical imaging to the point of care. In another project, they are applying ‘federated learning’ to algorithm development, allowing algorithms to be developed on site, using data from the local institutions, without the need for data to travel outside of its own domain. The work could lead to breakthroughs in classifying stroke and neurological impairments, determining the underlying causes of cancers, as well as recommending the best treatment for patients.