
Integrative Visualization of Whole Body Molecular Imaging Data


Based on Leonardo da Vinci’s Vitruvian Man (photography by Luc Viatour / www.Lucnix.be) and a partial depiction of the Aten (sun-disk).

Integrative Visualization of Whole Body Molecular Imaging Data

Dissertation

for the purpose of obtaining the degree of doctor at the Technische Universiteit Delft,

by the authority of the Rector Magnificus, prof. ir. K.C.A.M. Luyben, chairman of the Board for Doctorates,

to be defended in public on Friday 12 December 2014 at 10:00

by

Peter KOK

Master of Science in Computer Science, TU Delft, the Netherlands

Promotors: Prof. dr. ir. B.P.F. Lelieveldt, Prof. dr. ir. F.W. Jansen

Composition of the doctoral committee:

Rector Magnificus, chairman
Prof. dr. ir. B.P.F. Lelieveldt, Technische Universiteit Delft, promotor
Prof. dr. ir. F.W. Jansen, Technische Universiteit Delft, promotor
Prof. dr. E. Eisemann, Technische Universiteit Delft
Prof. Dr.-Ing. B. Preim, Universität Magdeburg, Germany
Prof. dr. J.B.T.M. Roerdink, Rijksuniversiteit Groningen
Dr. L. Van Der Weerd, Leiden Universitair Medisch Centrum
Dr. C.P. Botha, formerly Technische Universiteit Delft

Advanced School for Computing and Imaging

This work was carried out in the ASCI graduate school. ASCI dissertation series number 321.



Contents

1 Introduction 1

1.1 Motivation/Context . . . 1

1.2 Goals . . . 3

1.3 Structure of this thesis . . . 5

2 Cyttron Visualization Platform 7
2.1 Motivation . . . 7

2.2 Related Work . . . 8

2.2.1 Generic biomedical imaging . . . 8

2.2.2 Microscopy . . . 9

2.2.3 Linking images to metadata . . . 9

2.3 Defining the CVP . . . 10

2.3.1 Requirements for integrative visualization . . . 10

2.3.2 General requirements . . . 12

2.3.3 CVP contributions . . . 13

2.4 Realization and Framework Architecture . . . 14

2.4.1 Appearance and global functionality . . . 14

2.4.2 System layout . . . 14

2.4.3 Modules . . . 15

2.4.4 Data nodes and data flow . . . 17

2.4.5 State attributes . . . 19

2.4.6 Implementation details . . . 21

2.5 Applications in life science research . . . 21

2.5.1 Visualization for Bioluminescence Imaging . . . 22

2.5.2 Articulated Planar Reformation . . . 22


2.5.3 Combining histology slices with fluorescence microscopy . . . 23
2.6 Conclusions . . . 23

3 Integrated Visualization 29
Abstract . . . 29
3.1 Introduction . . . 30
3.2 Methods . . . 32

3.2.1 Geometric approximation of the 3D light source location . . . 32

3.2.2 Landmark-based registration of BLI and CT . . . 32

3.2.3 Combined visualization of BLI and CT . . . 34

3.3 Experimental setup . . . 35

3.4 Results . . . 36

3.5 Discussion . . . 36

3.6 Acknowledgements . . . 37

4 Articulated Planar Reformation 41
Abstract . . . 41

4.1 Introduction . . . 42

4.2 Related work . . . 44

4.3 Method . . . 46

4.3.1 Whole-body registration . . . 47

4.3.2 Articulated Planar Reformation . . . 47

4.3.3 Articulated Planar Reformation View . . . 49

4.3.4 Focus view . . . 49
4.3.5 Feature visualization . . . 53
4.3.6 Confidence visualization . . . 54
4.4 Implementation . . . 54
4.5 Evaluation . . . 55
4.5.1 Study conclusions . . . 58
4.5.2 Limitations . . . 58

4.6 Conclusions and future work . . . 59

5 Local Super-Resolution Reconstruction 61
Abstract . . . 62

5.1 Introduction . . . 62

5.2 Materials and methods . . . 63

5.2.1 Experimental mouse model and imaging . . . 63

5.2.2 Interactive local SRR reconstruction . . . 65

5.2.3 Case study A: MRI+CT+BLI . . . 65

5.2.4 Case study B: MRI+BLI . . . 67

5.2.5 Super-resolution reconstruction . . . 68

5.2.6 Software platform . . . 68


5.3 Results . . . 68

5.3.1 Case study A: MRI+CT+BLI (bone tumors) . . . 68

5.3.2 Case study B: MRI+BLI (kidney tumors) . . . 69

5.4 Discussion . . . 70

5.4.1 Relevance to tumor research and other biological applications . . . . 70

5.4.2 Post mortem to in vivo SRR-MRI . . . 74

5.4.3 Interactive local SRR . . . 74

5.4.4 Image quality vs imaging time . . . 75

5.4.5 Reconstruction times . . . 76

5.5 Conclusions . . . 76

5.6 Acknowledgments . . . 76

5.7 Addendum . . . 76

5.7.1 VOI-to-patient transform . . . 77

5.7.2 Extracting VOIs and completing transforms . . . 79

5.7.3 Output . . . 81

6 Summary and outlook 83
6.1 Summary . . . 83
6.2 Limitations . . . 85
6.3 Outlook . . . 86

Bibliography 89

List of Figures 99

List of Tables 105

Samenvatting 107
Limitaties . . . 109
Vooruitzicht . . . 110

Acknowledgements 113

Curriculum Vitae 115

CHAPTER 1

Introduction

1.1 Motivation/Context

In the past three decades, major advances have been made in biomedical imaging technology. The miniaturization of image acquisition hardware has enabled a detailed study of structure, anatomy and function in animal models. Molecular imaging methods such as bioluminescence imaging, fluorescence imaging, µPET and µSPECT imaging enable the study of specific chemical processes in cells. Structural imaging techniques such as µCT, µMRI and ultrasound provide a detailed depiction of anatomy, and can be used to monitor changes caused by disease, development or treatment. Histological sectioning and other microscopic imaging techniques enable zooming in on the cellular scale. Combining information from all these modalities poses challenges that have not yet been addressed from a data visualization1 point-of-view.

Figure 1.1 illustrates the difficulties encountered when combining data from two different modalities: bioluminescence imaging (BLI) and CT. BLI is based on a light-emitting chemical reaction that occurs in bioluminescent animals such as fireflies and some algae and jellyfish species. This process can be integrated into, for instance, cancer cells, enabling the monitoring of a tumor based on the amount of emitted light. However, BLI lacks anatomical detail, while structural modalities such as MR and CT have a low sensitivity to small masses, but high spatial resolution, enabling better lesion shape and size characterization.

1 In imaging, specifically molecular imaging, the word visualization refers to the “making visible of processes at the cellular and molecular level through imaging”. The complete imaging procedure, with the aim of detecting the labels and the corresponding processes or molecules, is called visualization. In the field of data visualization, the word visualization refers to the graphical representation of the acquired data to the user in order to provide an understanding of what it represents. This latter meaning is the one that is used in this thesis.


Figure 1.1: Top: a multi-angle bioluminescence imaging (BLI) dataset that features photographs in visible light on which the color-coded BLI signal is overlayed. Bottom: a three-dimensional CT image visualized with two orthogonal image planes and an isosurface.

The combination of optical and structural imaging would enable a much more sensitive and accurate source localization and quantification, amplifying the utility of the individual modalities. The BLI in this case has been acquired as a series of eight two-dimensional images from different angles. The color-coded signal is overlayed on photographs for context. The CT is a three-dimensional single image volume. The subject for both datasets is the same, but it is very difficult to directly see the spatial relation between the BLI signal and the anatomy in the CT image.

A second challenge stemming from the advances in imaging technology is that they allow subjects to be imaged in vivo. This enables longitudinal studies to be performed on single subjects. However, it is virtually impossible to position a subject exactly the same between scans. This is demonstrated in Figure 1.2. The postural variation makes it difficult to visually detect changes over time, as the observer has to mentally find the corresponding areas and mentally transform the associated structures or signal in the images before a comparison can be made.

A third problem originates from the fact that biological processes manifest themselves at multiple scale levels, and the different imaging modalities also operate at different information scales. This is illustrated in Figure 1.3. In the top part of the figure, a slice of an MRI scan and a histology section acquired with a microscope show a roughly similar overview of a mouse brain. However, the histology section has a much higher resolution, which becomes apparent at high zoom levels, at which the MRI image becomes less informative. It is possible that additional images are available, optionally at different scales, or targeting a different part of the subject spatially. A visualization that allows exploration of a composition of such images should automatically determine which images to show, hide or partly show to provide an optimal combination of focus and context at that particular zoom level.

Figure 1.2: Postural variation in a subject between follow-up scans. The images feature isosurfaces extracted from CT data.

A solution that tackles several of the above problems is to map all image data into a common reference frame. This was proposed in [Lelieveldt 11] and is illustrated in Figure 1.4. The reference frame in this case is a mouse atlas. The mapping into the reference frame requires the data to be registered to this atlas. Regarding the postural variation, a method to achieve CT registration based on an articulated atlas has been developed by Baiker et al. [Baiker 07, Baiker 10]. This method was extended for other modalities by Khmelinskii et al. [Khmelinskii 10, Khmelinskii 11a, Khmelinskii 12b]. In this thesis, we concentrate on the information integration and visualization, as demarcated in the top half of Figure 1.4.

1.2 Goals

As mentioned above, several of the registration challenges from raw data to an atlas template have been addressed. However, the challenge of visually representing the integrated data to the user remains. This thesis addresses a number of visualization challenges emerging from molecular imaging follow-up data. The specific goals of this work are:

• To develop visualization methods for heterogeneous molecular imaging data, for whole body follow-up imaging data and for multi-scale imaging data.

• To integrate these methods into a visualization platform that allows intuitive combination and exploration of this data.


Figure 1.3: A slice from a mouse brain (not the same subject) from an MRI image (left) and a histology section (right). The different zoom levels demonstrate the higher resolution of the histology data.

Figure 1.4: Integration of data from multiple modalities, over time. Registration to a common atlas space (bottom) removes heterogeneity in posture and image structure. Subsequently, dedicated methods are required for fused visualization of multiple modalities (top left) and follow-up visualization (top right).

• To demonstrate the added value of these visualization methods in a set of use cases in translational cancer research.

1.3 Structure of this thesis

This thesis is structured as follows.

In Chapter 2, a description is presented of the Cyttron Visualization Platform (CVP), which was created to enable the visualization of multi-scale, multi-modal, inter-subject and multi-timepoint datasets. Throughout the thesis, all of the developed visualization methods were implemented in this platform. Chapter 2 describes the construction of the platform from a software development point of view. Also, a number of applications of the CVP in life-sciences research that are not included in this thesis are briefly summarized.

In Chapter 3, the multi-modal aspect is explored by integrating CT and BLI. Two-dimensional BLI images are mapped into the CT space by landmark registration and 3D reconstruction. The fused data can be explored interactively, allowing the user to relate optically detected tumors in the BLI data to structural information in the CT data. This CVP extension has been extensively used in life-science research for qualitative analysis of metastatic lesion formation in breast cancer.

Chapter 4 focuses on the challenge of varying postures of a subject in follow-up studies. It introduces Articulated Planar Reformation (APR): a method to map multiple whole-body datasets into a common atlas space. After matching an articulated model of the mouse skeleton to the target CT data set, the data can be reformatted along the principal axes of each bone. Repeating this for data from multiple timepoints allows visualization of changes between timepoints and between subjects in an intuitive manner. The method has been evaluated using structured interviews with domain experts, in the follow-up analysis of metastatic breast cancer lesions.

In Chapter 5, a different application of the APR method from Chapter 4 is presented: APR is combined with an existing method for Super Resolution Reconstruction (SRR) of MRI data to enable faster reconstruction times for targeted, intuitively selected volumes of interest, regardless of the posture of the animal. To achieve this, a series of whole-body MRI images is acquired under varying scanning angles. The APR method is used to select a small volume of interest in the MR data, which is subsequently reconstructed using localized SRR. The approach is quantitatively validated using an MR phantom, and demonstrated in two different life-science research use cases.

Finally, Chapter 6 summarizes the main contributions of the thesis, and discusses possible directions for future research.

CHAPTER 2

The Cyttron Visualization Platform - a comprehensive visualization tool for biologists

2.1 Motivation

In the past three decades, major advances have been made in biomedical imaging technology. The miniaturization of image acquisition hardware has enabled a detailed study of structure, anatomy and function in animal models. Molecular imaging methods such as bioluminescence imaging, fluorescence imaging, µPET and µSPECT imaging enable the study of specific cellular processes. Structural imaging techniques such as µCT, µMRI and ultrasound provide a detailed depiction of anatomy, and can be used to monitor changes caused by disease, development or treatment. Histological sectioning and other microscopic imaging techniques enable zooming in on the cellular scale.

The combination of molecular and structural imaging modalities enables life-science researchers to study disease processes and treatment effects over time, from molecule to organism. These new integrated imaging possibilities have generated a new problem: the amount and complexity of imaging data make it very difficult to interpret and quantify the complex relationships between molecular processes and the functional and structural changes they cause. For a single time point, an imaging study may consist of heterogeneous imaging data at multiple scales: photographs, photon emission images, CT, MR or PET slices, functional MR imaging, MR spectroscopy, histological slices etc. Differences in imaging geometry, animal posture and information scale occur between modalities. In addition, large variability in animal posture may exist between time points in a follow-up study. As a result, there is currently a great demand for new visualization and data fusion methods to efficiently visualize the information in this heterogeneous imaging data.

Biomedical image visualization is an active field of research, and several visualization platforms have previously been developed for this purpose. They can be classified as general purpose and task specific. General-purpose visualization software is often feature-rich, but lacks specialized data structures and functionality for integrative visualization. On the other hand, there are software tools that focus on performing specific tasks, with limited extension flexibility. However, very few research efforts have been described that simultaneously address the multi-scale, multi-modal and multi-timepoint visualization challenges that emerge from small animal imaging data. This chapter describes the development of the Cyttron Visualization Platform (CVP)1, which was developed to facilitate the visual analysis of heterogeneous small animal imaging data in follow-up studies. First, a number of existing visualization platforms are described. Then, an overview is presented of the requirements for the CVP. Subsequently, the CVP architecture and design choices are described in detail. Finally, a number of application examples in life-sciences research are presented.

2.2 Related Work

Due to the large number of medical visualization platforms, it is not possible to discuss them exhaustively. Instead, we focus on a number of notable visualization packages meant for the biomedical field, a number of software tools specific for microscopy and a project that shares characteristics with the CVP in terms of database browsing.

2.2.1 Generic biomedical imaging

A number of generic visualization packages are available, which are characterized by their flexibility and can be used to visualize various types of data. For instance, MeVisLab [Mev] (available under different licenses) is a modular framework that allows for custom implementation of modules, which can be interactively connected into a network. It focuses on image processing, visualization and interaction and provides functionality for image segmentation, registration and more.

DeVIDE [Dev] is an open source visual programming environment for the rapid prototyping of algorithms for visualization and image processing and shares many similarities with MeVisLab. Its major differentiating characteristic is the possibility to interact with any part of the system, at run-time. This can happen through its graphical user interface, but also through interactive programming.

The Visualization Toolkit (VTK) [Schroeder 06] is a comprehensive open source library that supports a diversity of data structures and image processing and visualization algorithms that can be freely used by other applications. Both platforms described above work with VTK: MeVisLab integrates with it, while DeVIDE is built on it. The CVP also makes use of VTK extensively. As VTK is not a stand-alone application, direct use by our target audience is limited.

1 Cyttron is a consortium of companies and academia that aim to integrate bioimaging technologies. http://www.cyttron.org.

One package that is particularly well known in the field of microscopy is Amira [Stalling 05]. Amira is a commercially developed, feature-rich application that offers processing, visualization, quantification and presentation of a diverse range of imaging modalities, principally aimed at data from imaging in life sciences. Specific functionality is available for common data types and tasks in microscopy.

Although it may be technically possible in these packages to set up a combined visualization to perform integrative visualization over scale, time or modalities, they do not implement the specific functionality to facilitate this, such as methods for registering heterogeneous datasets. Our target user group, consisting of biologists, cannot be expected to set up such a visualization. The CVP specifically supports these types of use cases. In addition, these packages offer no specific functionality for preclinical imaging.

2.2.2 Microscopy

For a specific field such as microscopy, more specialized software is available. A number of tools have been developed at the National Center for Microscopy and Imaging Research (NCMIR); an overview is presented at [NCMIR]. In general, these tools are aimed at one specific task or type of data. In addition, a range of different software tools are available for molecular microscopy specifically. An extensive list is available online [STMM]. Here too, the tools themselves do not allow integrative visualization. There is no specific functionality for integration over scales, which is explicitly available in the CVP.

ImageJ is a popular public domain application with an active community in the field of microscopy [ImJ]. The system can be extended with plugins and its main functionality is image processing. As such, the options for integrative visualization are very limited.

2.2.3 Linking images to metadata

Concerning the connection between data and metadata, the number of software packages is limited. This applies even more if the metadata is used to interactively retrieve related datasets, resulting in a database browser.

The Allen Brain Atlas (ABA) [ABA, Lein 07] is an excellent example of a database browser. It represents a database of gene expressions in the brain (originally only the mouse brain) that provides advanced exploration options. There are various methods to search for gene expression, which can subsequently be visualized by highlights in a 3D brain volume. The original experiments and metadata can be retrieved, visualized and synchronized with an atlas to determine the region(s) at which expression occurs. This method of browsing the data shares similarities with the idea behind the CVP and its database with respect to searching and navigating through data. Instead of targeting gene expression data, the CVP targets microscopic data and small animal imaging in general. In addition, it aims at providing more advanced functionality for data integration.



Figure 2.1: The two main types of databases. One contains the images with their attributes, which include explicit links to related images. Alternatively, implicit relations with other images can be derived (not shown), for instance when they share the same study id. In addition, image tags link to one or more ontology terms, which are stored in a second database. Ontology terms can also be interrelated.

2.3 Defining the CVP

As discussed in the previous section, a number of software solutions are available for biological imaging data, but none of them tackle the integrative visualization challenges that are inherent to in vivo small animal imaging.

2.3.1 Requirements for integrative visualization

The main tasks of the CVP are to support three types of integrative visualization. These refer to multi-scale, multi-modal and multi-timepoint combination of data. In addition, it enables the annotation of data with descriptive metadata (which we will refer to as simply metadata) either spatially, in case of atlases, segmentations and labels, or non-spatially in case of references to author, study, related data, etc. To enable these types of integrative visualization, the following elements are required.

Registration

To enable visualization of multiple images at the same time, registration of the data into a common coordinate space is required. As various types of data should be supported, support for corresponding registration methods is required. In addition, this requires suitable data structures to manage the various datasets and the relations between them.

Multi-timepoint and inter-subject data support

Support for multi-timepoint data and time-dependent comparison is required to be able to track changes in follow-up studies. Changes in structure (MR / CT) and signal (PET / SPECT / optical) are of interest as they indicate changes in anatomy and physiology. The data is also compared to a control case, which requires comparative visualization. In addition, some specialized functionality is required to deal with varying postures of the subjects in multi-timepoint datasets.

Multi-modal data support

Data of additional modalities is typically acquired to provide information that complements the first modality. Registration between different modalities is typically more challenging, especially if one of the modalities lacks structural information. In addition to suitable registration functionality, visualization methods are required to fuse the available information in multi-modal data.

Multi-scale data support

For integrative visualization of multi-scale sets of images, several aspects must be taken into account. The scene should only show the image that is appropriate to the current zoom level. That is, an image should only be shown if no excessive scaling is necessary for visualization. Switching to another image when zooming in or out should be performed seamlessly. It is assumed that registration information between zoom levels is available or has been determined in a preprocessing step. A context view should support the user in determining the current location. For the edge case between two scale levels, it may be useful to show both images at the same time, for instance by overlaying them.
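A minimal sketch of such a zoom-dependent selection rule is given below. It assumes that each image object knows its native pixel size in millimetres; all names are illustrative and not part of the CVP API.

def select_visible_images(images, viewport_mm_per_pixel, overlap=1.5):
    """Pick the image(s) whose native resolution best matches the current zoom.
    images: objects with a mm_per_pixel attribute (native pixel size).
    viewport_mm_per_pixel: physical size covered by one screen pixel.
    overlap: tolerance factor; near a scale boundary two images are returned
    so that they can be shown together, e.g. as an overlay."""
    def scaling(img):
        # How much the image would have to be scaled for display;
        # 1.0 means it is shown at its native resolution.
        ratio = img.mm_per_pixel / viewport_mm_per_pixel
        return max(ratio, 1.0 / ratio)
    best = min(scaling(img) for img in images)
    return [img for img in images if scaling(img) <= best * overlap]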

Spatial metadata - atlases and segmentations

Spatial metadata can be available in different forms, such as atlases, segmentations and spatial labels. If they are not already in the same space as the original dataset, a transform is required that maps one dataset into the space of the other. An example of a dataset combined with a segmentation is depicted in Figure 2.2.

As is the case for most combinations of datasets, there is a risk of clutter and occlusion when visualizing data together with metadata. Suitable types of visualization are required to properly combine these meta-structures with the primary data. Options include coloring regions in an image dataset, for instance by means of overlay.

It must be clear what each meta-structure represents, for instance by providing a legend or labeling the structures with a caption. Depending on its availability, the user must be able to request additional information by interactively selecting a meta-structure. This is illustrated by Figure 2.2.

Non-spatial metadata

For non-spatial metadata, the visualization problem is different, as the meta-information is not to be combined with the spatial data. The metadata could be displayed in a separate list or in a caption such as depicted in Figure 2.2. It must be possible to load related datasets on demand.

Figure 2.2: An instance of a mouse brain dataset in an Alzheimer study. A CT dataset is shown (orthogonal slice visualizations) together with the corresponding segmentation (semi-transparent isosurfaces) and annotation for the selected structure.

The relation with other datasets may be explicit or deduced from the metadata, for instance because the data shares certain terms from the ontology or because it was uploaded by the same user. Ideally, it would be possible to search for related datasets based on any attribute.

2.3.2 General requirements

With regard to user interaction, it should be considered that the intended user group, biologists, typically does not have a background in visualization. Therefore, tuning numerous parameters is undesirable. In addition, it is important that users can perform their (routine) tasks with minimal effort. Sensible types of visualizations must be set up by the application by default, with appropriately chosen parameters. For routine multi-step tasks, a wizard-like interface is required.


Figure 2.3: A typical instance of the Cyttron Visualization Platform with multiple datasets loaded. A CT and multi-angle BLI dataset are combined with an atlas and a reconstructed light source, for which the relation with the 2D BLI images is shown by means of lines. On the lower left, the properties for the reconstructed tumor location isosurface are shown.

The system must be designed in a modular fashion. Specifically, it must be possible to extend the system with new modules, such as reader modules and custom visualization modules.

For presentation purposes, it should be possible to generate high-resolution screenshots and animations of the data.

2.3.3 CVP contributions

The CVP is designed to provide built-in support for integrative visualization of multi-timepoint, multi-modal and multi-scale data. In addition, as a modular platform it can be customized to perform a broad range of visualization tasks, while at the same time limiting the multitude of parameters by providing wizards for task-specific routines and setting up sensible default pipelines and values where possible. The extensibility of the CVP allows integration of existing software tools by wrapping them as CVP extensions or plugins.

2.4 Realization and Framework Architecture

2.4.1 Appearance and global functionality

Starting with a new instance of the CVP, the user has several options. The user can open existing projects, import data and (optional) default visualizations into the current project or start pre-defined tasks. At any stage, the user can edit the visualization pipeline by adding, editing or removing module instances. An instance of the application is demonstrated in Figure 2.3. It shows the main window, which is subdivided into a number of areas:

• A list displaying the instances of the modules that are loaded. Items for which it is semantically appropriate are shown as children of a parent node (for instance, viewer frames can be regarded as children of a viewer and data nodes can be composed of multiple sub nodes). New items can be added through the menu or through the context menu of existing items, in which case only relevant options are shown (e.g. visualization inputs that connect to a polydata input will only allow addition of visualization items that can handle polydata).

• Selecting any item in the above list will show its property editor, such as shown on the lower left in Figure 2.3 for the isosurface visualization item.

• One or more viewers that are divided into one or more viewer frames display the resulting visualization.

2.4.2 System layout

As is illustrated in Fig. 2.4, the CVP consists of a core infrastructure combined with a set of modules that contain certain types of functionality such as file reading or visualization. The core infrastructure provides the skeleton of the application, including the main window, and provides a number of services to the modules. The framework’s tasks can be enumerated as follows:

• Managing projects, which can be regarded as sets of module instances. This includes opening and saving projects, adding module instances and looking them up.

• Providing a GUI framework, which includes a place to put a widget for editing the module instance’s properties and a workspace where modules can add a viewer.

The core framework has limited functionality and thus limited dependencies. The main dependencies are on the GUI libraries. In contrast, it is not dependent at all on any visualization implementation, as the visualization functionality is completely contained in the modules. This implies that module developers are free to choose alternative visualization implementations.


Figure 2.4: The global layout of the CVP. The core infrastructure contains the functionality that manages the module types and module instances. The module instances and their properties can be managed and edited through the GUI. Additional sets of modules can be added as extensions.

2.4.3 Modules

As becomes evident from the system layout, the functionality in the CVP is highly modularized. Users can interactively build their own visualization pipeline by connecting the different modules. Taking into account that the system is aimed at users without a visualization background, it is required that each module represents a logical unit of functionality. Compared to VTK, the level of functionality is higher, meaning that each module often covers multiple VTK modules. For instance, a module that represents the visualization of a polydata dataset is a wrapper around a polydata normals filter, a mapper and an actor, which are set up without the user having to know about these.
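As an illustration of this granularity, a polydata visualization module could wrap the underlying VTK classes roughly as follows. This is a simplified sketch, not the CVP's actual module class; only the VTK calls themselves are standard.

import vtk

class PolyDataVisualization:
    """One logical unit exposed to the user; internally it chains a normals
    filter, a mapper and an actor, which the user never sees directly."""
    def __init__(self, polydata_source):
        self.normals = vtk.vtkPolyDataNormals()
        self.normals.SetInputConnection(polydata_source.GetOutputPort())
        self.mapper = vtk.vtkPolyDataMapper()
        self.mapper.SetInputConnection(self.normals.GetOutputPort())
        self.actor = vtk.vtkActor()
        self.actor.SetMapper(self.mapper)

    def add_to_viewer_frame(self, renderer):
        # A viewer frame would ultimately hand its vtkRenderer to the module.
        renderer.AddActor(self.actor)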

Several types of modules can be distinguished: readers, filters, data nodes, viewers, viewer frames, visualization inputs, visualizations, registration modules and animation modules. The relations between these module types are described in Section 2.4.4 and a number of them are depicted in Figure 2.5. The types that are not depicted (registration, animation and task modules) operate in a stand-alone fashion to some extent.

Medical imaging data comes in many different file formats, depending on the scanner or camera used, the type of data and any processing that has been performed. To accommodate various data types, a number of readers have been implemented in the CVP. The most appropriate reader is selected based on the files or folders that the user selects in the GUI.


The reader selection is performed by inquiring each reader about the probability that it can read the data. This value can be based on the file extension, the presence of special files or folders, or any other rule of thumb. Some readers have properties that can be edited, such as the dimensions for raw data or the number of slices in a multi-slice volume. Defaults are filled in for these values where possible.
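The selection mechanism can be pictured as follows; this is an illustrative sketch, and the real reader interface of the CVP may differ in names and details.

def select_reader(path, readers):
    """Ask every registered reader how confident it is that it can read
    the selected file or folder and pick the most confident one."""
    scored = [(reader.can_read(path), reader) for reader in readers]
    probability, best = max(scored, key=lambda pair: pair[0])
    return best if probability > 0.0 else None

class ExampleRawReader:
    def can_read(self, path):
        # A rule of thumb based on the file extension; other readers might
        # look for special companion files or folder layouts instead.
        return 0.8 if path.lower().endswith(('.raw', '.mhd')) else 0.0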

The filters in the CVP are wrappers around the filters in VTK. Any filter from VTK can be added this way.

Data nodes are modules that correspond to a dataset and describe how they are related to other data nodes in space and time. They are described in more detail in Section 2.4.4. Viewers and viewer frames are the parts of the window where the visualization is displayed. A project can contain multiple viewers that can be subdivided into multiple viewer frames, each containing their own scene. These frames can be resized and repositioned within the viewer. The viewer frames have their own set of properties, such as background color, type of projection, stereo mode, view angle, camera position, etc.

Visualization input modules are a type of connector switch between the data, a group of visualizations and the viewer frame, that allows the user to move the group of visualizations between frames or to apply the visualizations to a different part of the filter pipeline. The main two settings are the source of the data and the target viewer frame. Together, the visualization input lets the visualization modules know what their input data is and in which viewer frame to create the visualization. This is also how the timeline for a multi-timepoint dataset switches between different timepoints in a single-timepoint visualization.

Visualization modules are contained within the visualization inputs and visualize the data they provide in the appropriate viewer frame. Common visualization modules include image plane widgets, isosurfaces and volume renderings.

Registration modules are an important feature of the CVP. They allow the registration of datasets by setting the transform that they produce in the corresponding data node, at the beginning of the transform pipeline. There are some manual registration modules, but others require a reference to a second dataset. The methods include a manual method to edit the transform matrix, interactive editing through a widget, landmark registration (2D, 3D and 2D into 3D) and an iterative closest point method. In addition, a module was implemented that uses the Elastix software [Klein 10] to perform a registration.

Advanced animation functionality was implemented for presentation purposes. Key frame animations allow the user to quickly set up any animation. Interpolation between key frames is performed on the state attributes, which is discussed in detail in Section 2.4.5. In addition, animation scripting was implemented to generate pre-defined animations on the scene. The animation category is only functional and there are no specific other features.


Figure 2.5: The relation between different types of modules in the CVP. The data node hierarchy is depicted in the upper left, the data flow modules down the center and the viewers in the lower right. Each data node leaf corresponds to one data flow, which itself can consist of multiple datasets (for instance for atlases or filters that split the data), transforms and attributes. Visualization inputs select the current source data flow (e.g. based on the timepoint), send it to the visualization modules and redirect the result to the selected viewer frame.

Some specialist tasks will require a more complex combination of steps to obtain the desired visualization pipeline. In other cases, a simple step is to be repeated a number of times. For instance, a specific filter and image registration could be required to set up a certain type of visualization. For this purpose, task modules can be implemented by a developer and executed by the end user. These are modules that do not have a representation in the scene, but rather manage other module instances to assemble the desired pipeline.

The available tasks can be executed through the tasks menu in the main window. Tasks have complete freedom in how they present themselves. One option is to display a wizard-like procedure to the user consisting of a number of steps. A typical step consists of programmatic actions, calls to external programs or user interaction. Another option could be to execute a completely automated procedure based on a loaded dataset.

2.4.4 Data nodes and data flow

Data nodes

To accommodate the fusion and visualization of multi-modal, multi-timepoint and inter-subject datasets, a data structure is required for composition in space and time. The data node module provides this functionality by placing a dataset into space and time through an affine transform and a time offset relative to the parent data node. Because of the hierarchical construction of data nodes, any dataset can be related to any number of other datasets by concatenating the transforms and time offsets that are encountered when traversing the data node tree. These transforms are at the starting point of the data flow pipeline and can be concatenated with further transforms downstream. The timepoints are used by, among others, the viewers, which show a timeline to enable switching within multi-timepoint datasets. Each data node corresponds to a data flow, which is further explained below and illustrated in Figure 2.5.
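A minimal sketch of this composition is given below, assuming NumPy 4x4 matrices; the attribute names are illustrative rather than the CVP's actual ones.

import numpy as np

class DataNode:
    """Places a dataset in space and time relative to its parent data node."""
    def __init__(self, parent=None, transform=None, time_offset=0.0):
        self.parent = parent
        # 4x4 affine transform mapping this node's space into its parent's space.
        self.transform = np.eye(4) if transform is None else transform
        self.time_offset = time_offset

    def transform_to_root(self):
        # Concatenate transforms while walking up the data node tree.
        matrix, node = self.transform, self.parent
        while node is not None:
            matrix = node.transform @ matrix
            node = node.parent
        return matrix

    def time_to_root(self):
        return self.time_offset + (self.parent.time_to_root() if self.parent else 0.0)

The transform relating two arbitrary data nodes then follows from their root transforms, e.g. np.linalg.inv(b.transform_to_root()) @ a.transform_to_root() maps node a's space into node b's space.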

Data flow

Data flow modules represent the units in the processing and visualization pipeline; they have specific functionality for passing on data. The data flow is not always immediately apparent from the user interface (the list in Figure 2.3): inputs can be implicit (e.g. a visualization module getting the input from its parent) or explicit by specification of the source data flow module(s) in its properties (e.g. a registration module that requires multiple inputs). Data flow modules provide data outputs, transform outputs and attribute outputs (not to be confused with state attributes). This is illustrated in Figure 2.5. There can be any number of outputs for each type of output, thus accommodating datasets that consist of multiple parts, such as atlases or segmentations.

Data output

The data output actually does not directly contain data, but rather the VTK object from which to retrieve the data. In this manner, the modules that are getting the data output have complete freedom in the way they retrieve their input data.

Transform output

The transform output provides a concatenation of all transforms up to that point in the pipeline. This includes the registration transform in the data node module and all transforms that are applied by filters. At the end of the pipeline, the resulting transform is used by the visualization modules to position the data.

Attribute output

The two output types above can be viewed as a way to standardize the passing of datasets and transforms, but in fact the data flow module can provide any piece of data through the attribute output. A module can simply request an attribute by name and if the input module doesn’t contain it, the request is passed on to its own input, and so on. In the default implementation, the module checks for Python attributes in the module, which can also be custom functions of that module. By providing and calling the appropriate function, any type of data or information can be passed along the pipeline.
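In sketch form, the lookup could behave like this; the class and method names are illustrative only.

class DataFlowModule:
    def __init__(self, input_module=None):
        self.input_module = input_module

    def get_attribute(self, name):
        # Python attributes on the module itself (including callables) win first.
        if hasattr(self, name):
            return getattr(self, name)
        # Otherwise the request is passed on to this module's own input, and so on.
        if self.input_module is not None:
            return self.input_module.get_attribute(name)
        return None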


Figure 2.6: The default module property editor is automatically generated from the state attributes. Each elementary attribute has an attribute editor. Complex attributes are further specified in a sublist. The developer can set options and choose custom (attribute) editors in the specification of the state attributes.

2.4.5 State attributes

An important feature of all modules is that their attributes are described by state attributes. In most cases, these are attributes that can be edited by the user. For instance, an isosurface visualization module has state attributes for the isovalue, color and transparency. The state attributes are contained in the (static) description of the module and contain meta-information such as the name, type, default value, valid range (numeric attributes) or possible options (choice attributes), references to custom get- and set-functions, editor class, tooltip, description, flags for interpolation and animation, and a number of other settings.

State attributes are available for integers, floats, booleans, text, choice options, references to module instances (by ID), lists of state attribute values and any complex type that has state attributes itself (e.g. colors, transforms, file and folder locations, transfer functions, etc.).
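A hypothetical state attribute specification for an isosurface visualization module might carry this kind of metadata; the field names and values are illustrative, not the CVP's actual declarations.

ISOSURFACE_STATE_ATTRIBUTES = [
    {"name": "isovalue", "type": "float", "default": 500.0,
     "range": (0.0, 4000.0), "interpolate": True,
     "tooltip": "Scalar value at which the surface is extracted"},
    {"name": "color", "type": "color", "default": (1.0, 0.8, 0.6), "interpolate": True},
    {"name": "transparency", "type": "float", "default": 0.0,
     "range": (0.0, 1.0), "interpolate": True},
    {"name": "visible", "type": "bool", "default": True, "interpolate": False},
]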

State attributes are highly useful for a number of features, which are discussed below.

Editor construction and validity checking

Using the list of state attributes for a module, an editor can be constructed automatically for any module instance. An elementary editor is available for each attribute type and the most straightforward way to build a module editor out of these is to put the attributes in a list (Figure 2.6). On selection of an attribute, the appropriate attribute editor is instantiated. Attribute editors are available for all basic attribute types, but these can be overridden with custom ones. In addition, it is possible to override the entire module instance editor with a custom implementation. The attribute editors will also check whether the user input is valid. In case of numerical values for instance, the editors will make sure the input is in the allowed range. An attribute can be flagged as hidden from the user in case it is not suited for manual editing, such as image plane positions.

Figure 2.7: The animation editor consists of a timeline with markers corresponding to key frames, some global settings and some key frame specific settings. Each key frame corresponds to a state of the scene, which can be saved and restored by clicking the timeline.

Parsing and serialization

The state of each module instance can easily be serialized and parsed again by serializing/parsing each of its state attributes, as it is known how to do this for each attribute type. This functionality enables serialization and parsing of complete project descriptions, which can be stored to and loaded from files. Currently, the CVP uses the XML format for this purpose, but this could easily be adapted to other formats as the code for serialization and parsing of state attributes is highly localized.
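The mechanism can be sketched as follows, using Python's standard XML module; the element names and value encoding are assumptions and may differ from the CVP's actual format.

import xml.etree.ElementTree as ET

def serialize_instance(instance, attribute_specs):
    """Turn one module instance into an XML element by serializing each of
    its state attributes; parsing walks the same specification in reverse."""
    element = ET.Element("module", type=type(instance).__name__)
    for spec in attribute_specs:
        child = ET.SubElement(element, "attribute", name=spec["name"])
        child.text = repr(getattr(instance, spec["name"]))
    return element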

Animation

One of the well-developed features of the CVP is its animation functionality. A key frame animation of any visualization can be created in a quick and intuitive manner, without having to set many parameters. The animation is based on a timeline with a number of key frames that the user can edit (Figure 2.7). The user simply sets up the scene for every key frame and the CVP automatically interpolates between them. It does so by interpolating each of the state attributes that can be interpolated, which are the numerical values unless they have been flagged as attributes that cannot be interpolated. The other attributes are kept at the same value until the next key frame.

Interpolation is performed using Kochanek-Bartels splines [Kochanek 84] with tension, bias and continuity set to 0.0 (= Catmull-Rom spline), with the exception of the beginning and end of an attribute change, in which case the tension is set to 1.0 to make sure the value doesn’t overshoot. This results in a smooth interpolation from the moment an attribute starts changing until it stops. Anything from camera position to surface color and transparency can be interpolated automatically in this way.
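For a single scalar attribute, this interpolation is the standard cubic Hermite form with Kochanek-Bartels tangents. With tension, bias and continuity all 0.0 it reduces to the Catmull-Rom case used between key frames, and a tension of 1.0 zeroes the corresponding tangent, which is the overshoot prevention mentioned above. This is a sketch of the textbook formula, not the CVP's own code.

def kochanek_bartels(p0, p1, p2, p3, s, tension=0.0, bias=0.0, continuity=0.0):
    """Interpolate between key values p1 and p2 at fraction s in [0, 1];
    p0 and p3 are the neighbouring key values."""
    t, b, c = tension, bias, continuity
    # Outgoing tangent at p1 and incoming tangent at p2.
    d1 = ((1 - t) * (1 + b) * (1 + c) / 2) * (p1 - p0) + \
         ((1 - t) * (1 - b) * (1 - c) / 2) * (p2 - p1)
    d2 = ((1 - t) * (1 - b) * (1 + c) / 2) * (p2 - p1) + \
         ((1 - t) * (1 + b) * (1 - c) / 2) * (p3 - p2)
    # Cubic Hermite basis functions.
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p1 + h10 * d1 + h01 * p2 + h11 * d2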

Synchronization

Although this feature has currently only been implemented for viewer camera position and orientation and image plane positions and window-level settings, it is relatively straightforward to synchronize arbitrary attributes or complete module instances by creating a link between them. Each time an attribute is changed, either through the user editing it directly or through interaction in the scene, it will also update the linked attributes in other module instances. Also, attributes that are not directly related could be synchronized, such as the color map on an isosurface rendering and the corresponding color map on an image plane in a different view.

The main aim for this feature is to facilitate visualization of multiple datasets simultaneously. For instance, attributes such as slice plane position and camera position can be synchronized to enable side-by-side comparison of datasets in multiple views.
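A link between instances can be pictured as a small observer object; this is an illustrative sketch, since the CVP's actual linking mechanism is only described in outline here.

class AttributeLink:
    """Keeps one state attribute synchronized across several module instances."""
    def __init__(self, instances, attribute_name):
        self.instances = instances
        self.attribute_name = attribute_name

    def on_changed(self, source, new_value):
        # Called whenever `source` changes the attribute, through its editor
        # or through interaction in the scene; propagate to the other instances.
        for instance in self.instances:
            if instance is not source:
                setattr(instance, self.attribute_name, new_value)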

The added value of the state attributes as described above is that the developer can rapidly add or update modules by adding the appropriate state attribute specification. Functionality for editing, parsing, serialization and interpolation for every attribute is based on these state attributes. There is no need for the developer to implement these functions. Code remains cleaner, more readable and thus easier to maintain.

2.4.6 Implementation details

The CVP is largely written in Python [Rossum]. For the user interface, Qt wrapped in PySide [Qt] is used. For visualization functionality we use the Visualization Toolkit library (VTK) [Schroeder 06], although it is straightforward to implement modules that employ other visualization libraries. The motivation behind the choice for Python is that it is a straightforward and flexible language that enables fast implementation. A number of features that are more computationally intensive are programmed in C++. Some elements contain Matlab code to interface with external functionality that is available as Matlab scripts.

The communication and data transfer between the client (CVP) and the database is managed by an intermediate local module written in Java. It is executed either through the browser (using Java Webstart) when accessing the database through the web interface, or by the CVP itself through the system command line. The client module communicates to the database and transfers the data through the SOAP protocol using Apache Axis web services on the server side. After data transfer has been completed, it reports back to a local SOAP server in the CVP and initiates the loading of the data.

2.5 Applications in life science research

The CVP has been successfully used as a visualization tool in a number of use cases that demonstrate the versatility and usefulness of the CVP. Several of the applications in this section are described in the following chapters of this thesis. In addition to that, a brief description is given here of the purpose of the visualization and how it was created and implemented in the CVP.

2.5.1 Visualization for Bioluminescence Imaging

In Chapter 3 [Kok 07] a visualization pipeline for bioluminescence imaging is described. Originally implemented as stand-alone software, it was later converted into an extension package for the CVP. Images created with this package were featured in [Kaijzel 07, Kaijzel 09, Khmelinskii 11b, Snoeks 11, Lelieveldt 11].

The package consists of the following components:

• A separately developed plugin that extracts the multi-angle images from the format produced by the BLI camera and a corresponding Reader module that can process the data from this module. The BLI signal, visible photo, combined image and thresholded subject silhouette are stored separately.

• A filter module that performs a basic backprojection reconstruction of the multi-angle signal.

• A registration module that enables landmark registration between the BLI data (two points per landmark) and the 3D structural modality image (one point).

• A visualization module that puts the 2D BLI images in a 3D carousel layout.

• A visualization module that interactively shows connection lines between the 2D images and the corresponding location in the 3D volume.

• A task module that allows the user to walk through the necessary steps to create a combined visualization of the BLI data and the structural modality.

Visualization functionality that allows overlaying of the 3D BLI data on the structural modality is part of the basic functionality of the CVP.

2.5.2 Articulated Planar Reformation

In Chapter 4 [Kok 10] a method for articulated planar reformation is presented. Based on the articulated registration approach described in [Baiker 10], a whole body image is split into multiple separate parts. As each part has been registered to the same atlas, they now reside in a common atlas space, which enables direct comparison between multiple images of the same part. Images created with this package were featured in [Baiker 12].

The extension package that contains this functionality consists of the following components:

• A Matlab script to convert the articulated registration transforms into a mapping file in XML format.

• Datastructures for the mappings and the atlas parts.

• A filter module that takes the mappings and produces the multiple image volume parts by reslicing the input image. In addition, it transforms the position of each part. The layout can be performed manually or automatically.

• A visualization module that visualizes the atlas in an articulated layout. Specifically designed colormaps are available to visualize the registration error or the amount of bone change. In addition, a camera actor is available, representing the camera position and direction of the focus views.

APR with CT and SPECT

In [Khmelinskii 11a, Khmelinskii 12b], the APR is applied to CT and µSPECT datasets of one subject allowing side-by-side comparison and fusion of those modalities, in addition to comparisons between timepoints or subjects. If the SPECT signal sufficiently corresponds to skeletal tissue, these structures can be used to facilitate the registration with the atlas. The registration and reformatting process for SPECT is illustrated in Figure 2.8 and examples of the resulting fusion are depicted in Figure 2.9, which demonstrates different modes of comparison to compare multiple subjects when both CT and SPECT are available.

Application to multiple modalities

In [Lelieveldt 11], APR is applied to CT, MRI and SPECT, allowing for a side-by-side comparison such as in Figure 2.10. The only aspect that is different from the standard procedure for CT is that different registration methods are used to register the atlas to the MRI and SPECT data. In the case of Figure 2.10, different subjects were used for the different modalities.

2.5.3 Combining histology slices with fluorescence microscopy

In [Keereweer 11, Keereweer 12] visualizations are shown that use the CVP to overlay fluorescence imaging data over histology slices. The CVP provides a number of methods for registering 2D images, such as landmark registration. Transparency can subsequently be adjusted to create an overlay, as is demonstrated in Figure 2.11.

2.6 Conclusions

Exploring collections of datasets that cover multiple timepoints, modalities or scales requires integrative visualization methods for comparison, fusion and composition. As the target audience is not expected to have a background in computer graphics or visualization, any software should support the user where possible. Existing software packages do not provide the features that allow this kind of exploration.

For this purpose, we have developed the CVP. To enable integrative visualization, data structures were implemented that support positioning of datasets relative to each other in space and time. These data structures combine with a modular visualization pipeline consisting of readers, filters and visualization modules that can be interactively constructed. Several methods for data fusion and comparative visualization were developed. In addition, specific functionality for preclinical imaging was developed to visualize small animals in changing postures.


Figure 2.8: To apply APR to SPECT data, the data is thresholded to reveal the bone structures to which the atlas is registered. APR is then applied in the same manner as for CT.


Figure 2.9: The result of applying APR to multiple (inter-subject) CT and SPECT datasets. A number of options for comparison and fusion are demonstrated for arbitrarily chosen bones.


Figure 2.10: An impression of what a side-by-side view of multiple modalities looks like after applying APR to CT, MRI and SPECT datasets. In this case, different subjects were imaged for the different modalities.

Figure 2.11: A histological section of a cervical lymph node (a), combined with a fluorescence signal indicating tumor tissue (b) and the resulting fusion (c).


To aid the user in setting up a visualization, default pipelines can be created for common types of data and parameters are filled in where possible. For custom tasks, wizard-style modules can be added that guide the user step by step.

Furthermore, the CVP was designed such that it allows for extension with minimal effort by writing custom modules. Defining a module’s properties as state attributes results in instant support for parsing, serialization, editor generation, animation and synchronization with other modules.

Finally, we have presented a number of use cases that demonstrate the successful application of the CVP in practice. We can conclude that the CVP is a versatile visualization tool for integrating and exploring heterogeneous series of datasets. The system allows for integrative visualization of multi-scale, multi-modal and multi-timepoint datasets and annotation with spatial or non-spatial metadata. The applications described in the following chapters go into detail of these aspects and address specific problems in small animal imaging.

CHAPTER 3

Integrated Visualization of Multi-angle Bioluminescence Imaging and Micro-CT

This chapter was published verbatim as the publication listed below. Figures 3.1 and 3.3 have been updated for aesthetic purposes and one of the insets in Figure 3.7c now states the view direction.

P. Kok, J. Dijkstra, C.P. Botha, F.H. Post, E.L. Kaijzel, I. Que, C.W.G.M. Löwik, J.H.C. Reiber & B.P.F. Lelieveldt. Integrated Visualization of Multi-Angle Bioluminescence Imaging and Micro-CT. In Proc. SPIE Medical Imaging, volume 6509, pages 1–10, 2007

Abstract

This paper explores new methods to visualize and fuse multi-2D bioluminescence imaging (BLI) data with structural imaging modalities such as micro-CT and MR. A geometric, backprojection-based 3D reconstruction for superficial lesions from multi-2D BLI data is presented, enabling a coarse estimate of the 3D source envelopes from the multi-2D BLI data. Also, an intuitive 3D landmark selection is developed to enable fast BLI / CT registration. Three modes of fused BLI / CT visualization were developed: slice visualization, carousel visualization and 3D surface visualization. The added value of the fused visualization is demonstrated in three small-animal experiments, where the sensitivity of BLI to detect cell clusters is combined with anatomical detail from micro-CT imaging.


3.1 Introduction

Bioluminescence imaging (BLI) is a relatively novel imaging technique that has found widespread application in life-sciences research over the past decade [Ntziachristos 05]. BLI is based on a light-emitting chemical reaction that occurs in nature in bioluminescent animals such as fireflies and some algae and jellyfish species. For instance, in fireflies, the enzyme luciferase catalyzes the conversion of a substrate luciferin into oxiluciferin, which emits photons during the process. By integrating the luciferase-encoding gene into living cells, luciferase will be produced in the cell. After adding luciferin, this same bioluminescence reaction can be induced in this cell, causing it to emit light. The production of luciferase can be made switchable, such that luciferase is only produced when a particular gene is activated. This selective activation has enabled a range of possibilities to track cells and monitor the function of specific genes and processes in the cellular biochemistry with a high sensitivity, all in the living animal.

To visualize the BLI activity, the anaesthetized animal is placed in a dark acquisition chamber, and using a sensitive, cooled CCD camera, the bioluminescent signal can be detected. Data is typically acquired in conjunction with a photograph in visible light taken with the same camera (see Figure 3.1). Most commercially available systems enable 2D imaging. However, more recently, optical molecular imaging instrumentation is advancing towards 3D imaging. This is realized by acquiring emission and visible light images from several angles. This provides information about the 3D position of the light source in the animal (see Figure 3.2).

The aforementioned 2D and 3D BLI also have limitations: although they are highly sensitive in detecting very small cell clusters, long before any structural changes become apparent, they lack anatomical detail and are mainly semi-quantitative. Structural modalities such as MR and CT, on the other hand, have a low sensitivity to small masses, but high spatial resolution, enabling better lesion shape and size characterization. The combination of optical and structural imaging would enable a much more sensitive and accurate source localization and quantification, amplifying the utility of the individual modalities.

The goal of this work was to develop tools for an effective and intuitive exploration of fused multi-angle BLI / micro-CT data. In this paper, we introduce the following novel elements:

• An algorithm to make a rough geometric approximation of the 3D light source location based on a number of multi-angle BLI images. This source approximation serves to highlight potential structural abnormalities in the structural imaging data.

• An intuitive 3D landmark selection to enable fast multi-view BLI / CT registration.

• Three modes of interactive visualization to enable an intuitive exploration of the fused data.

The efficacy of the fused visualization is demonstrated in three small-animal case studies, where the sensitivity of BLI to detect small cell clusters is combined with anatomical detail from micro-CT.


Figure 3.1: Bioluminescence images are usually acquired in pairs: a visible light photograph (top left) and a photon emission image (bottom left), which is typically viewed as an overlay on the visible light image (right image).

Figure 3.2: Multi-view bioluminescence imaging, where images are acquired sequentially at 45-degree angles around the subject.


3.2 Methods

We developed a modular visualization platform called the CVP (Cyttron Visualization Platform) that allows fast implementation and testing of visualization algorithms for fusion of structural imaging (MR and CT) and optical molecular imaging data. It has been designed as a pipeline-type architecture, where three separate components can be distinguished. The first deals with the coarse geometry-based reconstruction of superficial 3D BLI hotspots from multi-angle 2D BLI images. The second focuses on registration of data sets between different modalities. The third part consists of fusion and visualization of the data from multiple modalities. Below, we describe three components that were implemented specifically for the problem of fusing multi-angle 2D BLI images with CT data.

3.2.1 Geometric approximation of the 3D light source location

Currently available multi-angle BLI scanners typically acquire a fixed number of views surrounding the subject. The transformations corresponding to the views are known and consist of a rotation about one axis and a projection onto the image plane. Based on the known geometry of the imaging system, we developed an elementary geometric reconstruction, where the BLI signal is back-projected into a reconstruction volume. For each voxel in the reconstruction volume, the corresponding intensity value is retrieved for each of the N 2D views. The current voxel is assigned the average intensity over all views:

$$V(p_{x,y,z}) = \frac{1}{N} \sum_{i=1}^{N} I_i\bigl(T_i(p_{x,y,z})\bigr)$$

where V(p) is the intensity at the specified point in the volume, I_i(p) is the intensity at the corresponding point in the i-th 2D image, T_i represents the projection transformation of view i and N is the number of projections.

It should be noted that this approach does not represent an exact tomographic signal strength reconstruction: we aim to approximate the 3D location of the light source in order to direct the attention of the user to possible areas of interest in the structural imaging modality. We do not correct for absorption, scattering or refraction; as a result, the application of our method is mainly limited to superficial lesions. Methods for accurate bioluminescence tomography of deeper BLI sources require non-linear photon models, which have been reported elsewhere [Wang 04, Troy 05, Kuo 05, Alexandrakis 06, Alexandrakis 05, Cong 06, Cong 05, Li 04]; as yet, heterogeneous tissue models have proven to be computationally very expensive, and only by using simplified assumptions about optical tissue properties can some of these methods be deployed for on-line, interactive application [Kuo 05].
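The averaging back-projection defined above can be expressed compactly with numpy, assuming each view is supplied as an image together with a transformation that maps 3D volume coordinates to 2D pixel coordinates. The sketch below is a minimal illustration under those assumptions (including a nearest-neighbour pixel lookup), not the original implementation.

# Minimal sketch of the averaging back-projection; the (image, transform)
# interface and the nearest-neighbour lookup are assumptions, not the original code.
import numpy as np


def backproject_bli(views, volume_shape):
    """Average back-projected 2D BLI intensities into a reconstruction volume."""
    # Voxel coordinates of the reconstruction volume, shape (num_voxels, 3).
    grid = np.indices(volume_shape).reshape(3, -1).T.astype(float)
    volume = np.zeros(grid.shape[0])

    for image, transform in views:
        # Project every voxel onto the view and read the nearest pixel.
        uv = np.rint(transform(grid)).astype(int)
        u = np.clip(uv[:, 0], 0, image.shape[1] - 1)
        v = np.clip(uv[:, 1], 0, image.shape[0] - 1)
        volume += image[v, u]

    return (volume / len(views)).reshape(volume_shape)

For a view rotated about a single axis, the transformation reduces to applying the corresponding rotation to the voxel coordinates and dropping the depth component, matching the acquisition geometry described above.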

3.2.2 Landmark-based registration of BLI and CT

Although BLI in itself does not contain any structural information that can be used for registration, each BLI acquisition consists of a visible-light photograph and a corresponding BLI emission image.


Figure 3.3: Overview of the registration process. The structural modality is registered with the BLI data using the visible light images. The resulting transformation is used to fuse the structural modality with the 3D reconstruction of the emission BLI data.


Figure 3.4: Interactive manual landmarking was developed to enable a landmark-based rigid registration. The user indicates 3 characteristic anatomical landmarks in two non-opposing BLI views, and the same landmarks in the CT data (colored balls in the left pane). Based on these corresponding landmarks, the micro-CT data is rigidly registered to the multi-view BLI data.

Because these photographs have been acquired with the same camera, these images are expressed in the same coordinate space, and can therefore be used to register the BLI images with structural modalities such as CT. Based on this, we developed a landmark-based registration method as outlined in Figure 3.3.

An interactive user interface was developed (see Figure 3.4) to identify corresponding anatomical landmark pairs in the BLI and CT data as follows. To obtain a 3D landmark from the 2D BLI photographs, the user is required to indicate two points in two separate photographs that are not opposite from each other, representing the same characteristic anatomical landmark. For this purpose, a line is back-projected into the volume for both points. The 3D location of the landmark is defined as the point between the projection lines where the distance to both lines is equal and minimal. The third point is directly selected in the CT volume on the corresponding location using orthogonal slice viewing. Thus, three points are required for one landmark pair. A minimum of three landmarks is required to register both data sets.
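The landmark construction described above amounts to finding the midpoint of the shortest segment connecting the two back-projected lines. A small self-contained sketch of this standard closest-point computation is given below; it assumes each line is specified by a point and a direction vector and is not the exact implementation used here.

# Standard closest-point construction between two 3D lines; a sketch, not the thesis code.
import numpy as np


def landmark_from_two_rays(p1, d1, p2, d2):
    """Return the point with minimal and equal distance to both lines."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # close to zero for (near-)parallel lines
    if abs(denom) < 1e-12:
        raise ValueError("the two views must not be (anti)parallel")
    s = (b * e - c * d) / denom           # closest-point parameter on line 1
    t = (a * e - b * d) / denom           # closest-point parameter on line 2
    closest_1 = p1 + s * d1
    closest_2 = p2 + t * d2
    return 0.5 * (closest_1 + closest_2)  # midpoint of the common perpendicular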

3.2.3 Combined visualization of BLI and CT

After registration, the BLI and CT data can be jointly visualized. We explored three visualization modes, which provide an intuitive exploration of the BLI source location, the relation between the 2D projections and the 3D reconstruction, and a fused visualization of the BLI and CT data:

Fused slice visualization consists of an orthogonal slice browser for the CT data, with color-enhanced BLI sources. The structural data is encoded as the intensity component of the HSI signal, while the BLI data is represented by the hue.


In addition, semi-transparent iso-surfaces around the light sources are generated using Marching Cubes [Lorensen 87]. The iso-threshold for these surfaces is manually chosen. This results in a visualization where the reconstructed light sources can be easily localized using the surfaces. Furthermore, the structural environment of the light source can be further inspected using the color-coded data representation.
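A hue-based slice fusion of this kind can be sketched with numpy and matplotlib as follows, with the CT slice driving the intensity channel and the normalized BLI reconstruction driving the hue. The colour ramp and threshold chosen below are illustrative assumptions rather than the exact mapping used in the CVP.

# Illustrative fusion sketch; the colour ramp and threshold are assumed values.
import numpy as np
from matplotlib.colors import hsv_to_rgb


def fuse_slices(ct_slice, bli_slice, bli_threshold=0.1):
    """Fuse a CT slice (intensity) and a BLI slice (hue) into one RGB image."""
    ct = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-12)
    bli = (bli_slice - bli_slice.min()) / (np.ptp(bli_slice) + 1e-12)

    hsv = np.zeros(ct.shape + (3,))
    hsv[..., 0] = 0.66 * (1.0 - bli)                       # hue: blue (weak) to red (strong)
    hsv[..., 1] = np.where(bli > bli_threshold, 1.0, 0.0)  # colour only where BLI is present
    hsv[..., 2] = ct                                       # intensity from the CT data
    return hsv_to_rgb(hsv)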

Carousel visualization: the multi-2D BLI data is visualized and linked in the 3D space in an intuitive manner by placing the 2D images around the 3D CT volume and the reconstructed light sources. This visualization is interactive: if a point on one of the 2D images is clicked, the corresponding point in the 3D data is connected to it with a line. If the 3D surface reconstruction is clicked on, the point is projected onto each 2D image and connected with a line. Physically placing the 2D images around the 3D reconstruction and explicitly visualizing the relationship between points in 3D space and their 2D views makes this technique useful, especially for users that are accustomed to conventional 2D image inspection.

Volume-surface visualization shows a surface reconstruction of the BLI sources together with a volume visualization of the CT data. Also here, the BLI source surfaces are derived using Marching Cubes [Lorensen 87] with a manually chosen iso-threshold. This provides a quick overview of the approximate location of the light sources within the subject.

3.3 Experimental setup

To test the developed platform, we performed a pilot evaluation study on small animal imaging data that was acquired within ongoing experimental protocols at our institution. A qualitative validation on three case studies was performed to evaluate two aspects:

1) Correctness of the 3D reconstructed source location. We selected data from two experiments in mice, where bioluminescent cells were injected at a known anatomical location, so as to visually verify the reconstructed BLI sources against the injection site as seen in the micro-CT data. In subject 1, 100000 RC21-luc cells, a luciferase-expressing human renal carcinoma cell line, were injected under the renal capsule. Four weeks after implantation, a luc-expressing renal cell carcinoma was established and the mouse was scanned. Subject 2 was injected with 100 µl of 100000 KS-HisLuc cells, a human luciferase-expressing stem cell line, into the left heart ventricle three weeks before the scan.

2) Complementarity of BLI (sensitivity) and micro-CT (structural detail). We explored this aspect by studying the interaction between breast cancer metastases and the skeleton, where BLI provides sensitive source detection and CT provides detailed monitoring of bone resorption in the skeleton. Breast cancer preferentially metastasizes to bone; at the location of a metastatic lesion, the bone is broken down and resorbed, causing structural damage to the skeleton, such as fractures or completely resorbed bones. To this end, we selected a study in which subject 3 was injected with luciferase-positive human MDA231 breast cancer cells into the cardiac left ventricle. The animal was scanned 40 days after cell injection to screen for possible metastases.

In all experiments, the following imaging protocol was applied: five minutes before the scan, luciferin was injected.


The mouse was then recorded for 30 seconds per image with a Xenogen VIVO Vision IVIS 3D scanner (Alameda, CA, USA) at a range of wavelengths between 585 and 700 nm. Micro-CT scans were acquired with a SkyScan 1178 micro-CT scanner (Aartselaar, Belgium) at a resolution of 80 × 80 × 80 µm³. The acquired scans were subsequently reconstructed, registered and visualized using the INTEGRIM platform, and qualitatively assessed by an expert observer.

3.4 Results

The resulting visualizations are presented in Figures 3.5, 3.6 and 3.7. For subject 1, a strong signal at the location of the kidney was expected, since the tumor cells were injected directly under the renal capsule. Figures 3.5(a) and 3.5(c) show that there is one light source present near the spine of the mouse. A closer inspection of the color-coded fusion in Fig. 3.5(a) shows in more detail that the light source is indeed located in the kidney. The added value of Fig. 3.5(b) is that it shows the orientation of the 2D projections around the reconstruction volume as well as the explicit relation between the 2D and 3D BLI data. For subject 2 (Figure 3.6), a strong signal in the cardiac area of the chest was expected at the site of the stem cell injection. As in subject 1, there is a clear correspondence between the expected location and the reconstructed source location, with one source visible in the cardiac area of the chest.

For subject 3 (Figure 3.7), it was expected that strong signals in the BLI data would be located near, in or on the skeletal bones, and that structural changes would take place in the bone structures due to bone destruction. Here, no signal was expected at the cell injection site (the cardiac left ventricle), but rather more downstream in the circulation and in the skeletal structures. Several lesions in or on the skeletal structures were detected. Also, the combined visualization clearly reveals the bone resorption in the CT at the location of the osteolytic metastases as identified in the multi-angle BLI.

3.5 Discussion

This paper presents a method to approximate the light source location from multi-angle 2D BLI images, a method to register BLI and structural data, and three new visualization modes for fusion of BLI and structural imaging data. We applied these techniques to three case studies and presented the resulting visualizations: the correctness of the approximated BLI source location was verified in two experiments where cells were injected at a known location. In both experiments, results correlated with our expectations about the natural behavior of the injected cells, and the reconstructed BLI hotspots corresponded to the cell injection sites. Second, the complementarity of BLI for detection sensitivity and CT for anatomical detail was demonstrated in one case study, where we found that 1) BLI hotspots were reconstructed only in close proximity to the skeletal structures, and 2) at these skeleton-based BLI hotspots, clear bone resorption and destruction was visible in the CT data. These experiments underscored the added value of the fused visualization compared to the traditional 2D BLI
