== Invasive imagination and its agential cuts ==
  
There is a conversation missing on the politics of computer tomography: on what is going on with the data captured by MRI, PET and CT scanners, rendered as 3D volumes and then managed, analyzed, visualized and navigated within complex software environments. By aligning medical evidence with computational power, biomedical imaging seems to operate at the forefront of technological advancement while remaining all too attached to modern gestures of cutting, dividing and slicing. Computer tomography actively naturalizes modern regimes such as Euclidean geometry, discretization, anatomy, ocularity and computational efficiency to create powerful political fictions: invasive imaginations and inventions that provoke the technocratic and scientific truth of so-called bodies.

This text is a call for trans*feminist software prototyping: a persistent affirmation of the possibility for radical experimentation, especially in the hypercomputational context of biomedical imaging.
=== 1. Slice ===
''In which we follow the emergence of a slice and its encounters with Euclidean geometry.''

The appearance of the slice in biomedical imaging coincides with the desire to optimize the use of optical microscopes in the 18th century. Specimens were cut into thin translucent sections and mounted between glass, to maximize their accessible surface area and to slide them more easily under the objective. Microtomography, after “tomos”, the Greek word for slice, seems at first sight conceptually coherent with contemporary volumetric scanning techniques or computer tomography. But where microtomography produces visual access by physically cutting into specimens, computer tomography stays on the outside. In order to affectively and effectively navigate matter, ocularity has been replaced by digital data-visualisation.

In computer tomography, “slice” stands for a data entity containing the total density values acquired from a cross-section of a volume. MRI, PET or CT scanners rotate around matter conglomerates such as human bodies, crime scenes or rocks to continuously probe their consistency with the help of radiation. The acquired data is digitally discrete but spatially and temporally ongoing. Only once turned into data can depths and densities be cut into slices and computationally flattened onto a succession of two-dimensional virtual surfaces that are backprojected to each resemble a contrasted black-and-white X-ray. Based on the digital cross-sections that are mathematically aligned into a stack, a third dimension can now be reverse-engineered. This volumetric operation blends data acquired at different micro-moments into a homogeneous volume. The computational process of translating matter density into numbers, re-constructing these as stacks of two-dimensional slices and then extrapolating additional planes to re-render three-dimensional volumes underlies most volumetric imaging today.
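A minimal sketch of that stacking and extrapolation step, assuming Python with numpy and scipy; the array shapes and random data are illustrative stand-ins, not output from any actual scanner:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

# Hypothetical stand-in for backprojected cross-sections: 120 slices of
# 256x256 density values, each acquired at a slightly different moment.
slices = [np.random.rand(256, 256) for _ in range(120)]

# Align the two-dimensional slices into a stack: a third dimension is
# reverse-engineered by treating axis 0 as depth.
volume = np.stack(slices, axis=0)            # shape: (120, 256, 256)

# 'Extrapolate' additional planes by interpolating along the depth axis,
# blending data from different micro-moments into one homogeneous volume.
volume_highres = ndimage.zoom(volume, (2, 1, 1), order=1)
print(volume_highres.shape)                  # (240, 256, 256)
</syntaxhighlight>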
Tomography emerged from a long-standing technoscientific exploration, fueled by the desire to make the invisible insides of bodies visible. It follows the tradition of anatomic experiments into a “new visual reality” produced by early X-ray imagery. The slice was a collective invention by many: technologists, tools, users, uses, designers and others knotted the increasing availability of computational capacity to the mathematical theorem of an Austrian mathematician and the standardization of radio-densities. Demonstrating the human and more-than-human entanglements of technoscientific streams, the slice invoked multiple pre-established paradigms to provoke an unusual sight on and inside the world. Forty years later, most hospitals located in the Global North have MRI and CT scanners operating around the clock. In the meantime, the slice became involved in the production of multiple truths, as tomography propagated along the industrial continuum: from human brain imaging to other influential fields of data-extraction such as mining, border-surveillance, mineralogy, large-scale fishing, entomology and archaeology.

The acceleration produced by the probable jump to the third dimension can hardly be overestimated. This jump is made even more useful because of the alleged “non-invasive” character of tomography: tomography promises visual access without the violence of dissection. Looking at the insides of a specimen, traditionally conditioned by its death or an-aesthesia, no longer requires physical intervention. But the persistence of the cross-cut, the hasty assumptions made about the non-temporality of slices, their supposed indexical relation to matter, the way math is involved in the re-generation of densities and the location of tissues: all of it makes us wonder about the not-non-invasiveness of the imagination at work in the bio(info)technological tale. Looking is somehow always already an operation.

Slices necessitate powerful software platforms to be visualized, analyzed, rendered and navigated. We call such platforms ‘powerful’ because of their extensive (and expensive) computational capacities, but also because of the ways they embody authority and truth-making. Software works hard to remove any trace of the presence of the scanning apparatus, and of the mattered bodies that were once present inside of it. For slices to behave as a single volume scanned at a single instant, they need to be normalized and aligned, to then neatly fit the three orthogonal planes of X, Y and Z. This automated process of ‘registration’ draws expertise from computer vision, 3D-visualisation and algorithmic data-processing to stack slices in probable ways.
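What such a registration step might look like in code can only be hedged as a sketch: the SimpleITK wrapper around ITK offers the kind of alignment described here, with the file names and optimizer parameters below as illustrative assumptions:

<syntaxhighlight lang="python">
import SimpleITK as sitk

# Two hypothetical volumes to be 'registered' onto the same orthogonal grid.
fixed = sitk.ReadImage("fixed_volume.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_volume.nii", sitk.sitkFloat32)

registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
registration.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
registration.SetInterpolator(sitk.sitkLinear)

# The optimizer settles on the most probable rigid alignment,
# which is then used to resample the moving volume onto the fixed grid.
transform = registration.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
</syntaxhighlight>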
From now on, the slices act in line with the rigidity of Euclidean geometry, a mathematical paradigm with its own system of truth, a straight truth. It relies on a set of axioms or postulates where the X, Y and Z axes are always parallel, and where all corpo-real volumes are located in the cubic reality of their square angles. For reasons of efficiency, hardware optimization, path dependency and compatibility, Euclidean geometry has become the un-questionable neutral spatial norm in any software used for volumetric rendering, whether for gaming, flight planning or geodata processing. But in the case of biomedical imaging, the X, Y and Z planes also conveniently fit the ‘sagittal’, ‘coronal’ and ‘axial’ planes that were established in anatomical science in the 19th century. The slices have been made to fit the fiction of medicine as seamlessly as they fit the fiction of computation.
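How literally this fit is implemented can be shown in a few lines; a sketch assuming a registered volume held in a numpy array, with the (Z, Y, X) axis order as an assumption, since conventions differ between file formats and platforms:

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical registered volume, indexed as (Z, Y, X).
volume = np.zeros((200, 256, 256))

axial    = volume[100, :, :]   # a fixed Z index yields the 'axial' plane
coronal  = volume[:, 128, :]   # a fixed Y index yields the 'coronal' plane
sagittal = volume[:, :, 128]   # a fixed X index yields the 'sagittal' plane
</syntaxhighlight>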
Extrapolated along probable axes and obediently registered to the Euclidean perspective, the slices are now ready to be rendered as high-resolution three-dimensional volumes. Two common practices from across the industrial continuum of volumetric imaging are combined for this operation: ray-tracing and image segmentation. Ray-tracing considers each pixel in each slice as the point of intersection with a ray of light, as if it were projected from a simulated eye and then encountered a virtual object. ‘Imaging’ enters the picture only at the moment of rendering, when the ray-tracing algorithm re-inserts the re-assuring presences of both ocularity and a virtual internal sun. Ray-tracing is a form of algorithmic drawing that makes objects appear on the scene by projecting lines that originate from a single vantage point. It means that every time a volume is rendered, ray-tracing performs Dürer’s Renaissance classic, ''Artist drawing a nude with perspective device''. Ray-tracing literally inverts the centralized god-like ‘vision’ of the Renaissance artist and turns it into an act of creation.
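A deliberately naive sketch of that single vantage point at work, assuming numpy: one ray per pixel is cast from an eye position through the volume, accumulating the densities it crosses. All sizes, positions and sampling choices are illustrative, not those of any production renderer:

<syntaxhighlight lang="python">
import numpy as np

def raycast(volume, eye, size=32, steps=200):
    """Render `volume` by casting one ray per pixel from a single eye point."""
    image = np.zeros((size, size))
    depth, height, width = volume.shape
    for i in range(size):
        for j in range(size):
            # Aim the ray at a pixel on a virtual image plane behind the volume.
            target = np.array([float(depth),
                               i * height / size,
                               j * width / size])
            direction = target - eye
            direction = direction / np.linalg.norm(direction)
            # March along the ray, accumulating the densities it encounters.
            accumulated = 0.0
            for t in np.linspace(0.0, 3.0 * depth, steps):
                z, y, x = (eye + t * direction).astype(int)
                if 0 <= z < depth and 0 <= y < height and 0 <= x < width:
                    accumulated += volume[z, y, x] / steps
            image[i, j] = accumulated
    return image

volume = np.random.rand(32, 32, 32)                  # hypothetical density data
image = raycast(volume, eye=np.array([-40.0, 16.0, 16.0]))
</syntaxhighlight>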
Image segmentation starts at the boundaries rendered on each slice. A continuous light area surrounded by a darker one suggests the presence of coherent materiality; difference signals a border between inside and outside. With the help of partially automatic edge detection algorithms, contrasted areas are demarcated and can subsequently be transformed into synthetic surfaces with the help of a computer graphics algorithm such as Marching Cubes. The resulting mesh or polygon models can be rendered as continuous three-dimensional volumes with unambiguous borders. What is important here is that the doings and happenings of tomography literally make invisible insides visible.
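As a sketch of these two moves, assuming scikit-image: an automatic threshold decides where ‘inside’ ends, and Marching Cubes converts that decision into a surface. The volume here is random stand-in data:

<syntaxhighlight lang="python">
import numpy as np
from skimage import filters, measure

volume = np.random.rand(64, 64, 64)          # hypothetical registered volume

# An automatic threshold decides where 'coherent materiality' begins:
# lighter regions above it count as inside, darker ones as outside.
level = filters.threshold_otsu(volume)

# Marching Cubes turns that boundary decision into a synthetic surface:
# a mesh of vertices and triangular faces with unambiguous borders.
vertices, faces, normals, values = measure.marching_cubes(volume, level=level)
</syntaxhighlight>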
From the very beginning of the tomographic process there has been an entanglement at work between computation and anatomy. For a computer scientist, segmentation is a set of standard techniques used in the field of Computer Vision to algorithmically discern useful bits and pieces of images. When anatomists use the same term, they refer to the process of cutting off one part of an organism from another. For radiologists, segmentation means visually discerning anatomical parts. In computer tomography, traditions of math, computation, perspective and anatomy join forces to perform exclusionary boundaries together, identifying tissue types at the level of single pixels. In the process, invisible insides have become readable and eventually writable for further processing. Cut along all-too-probable sets of gestures, dependent on assumptions of medical truth, indexicality and profit, slices have collaborated in the transformation of so-called bodies into stable, clearly demarcated volumes that can be operated upon. The making visible that tomography does is the result of a series of generative re-renderings that should be considered as operative themselves. Tomography re-presents matter-conglomerates as continuous, stable entities and contributes strongly to the establishment of coherent materiality and humanness-as-individual-oneness. These picturings create powerful political fictions; imaginations and inventions that provoke the technocratic and scientific truth of so-called bodies.
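Pixel-level tissue typing can be sketched as little more than a set of cut-offs over standardized radiodensities; the Hounsfield ranges below are rough, illustrative values rather than clinical ones:

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical CT volume expressed in standardized radiodensities
# (Hounsfield units); the data is a random stand-in.
hu = np.random.randint(-1000, 1500, size=(64, 64, 64))

# Exclusionary boundaries: every voxel receives exactly one tissue label.
labels = np.zeros(hu.shape, dtype=np.uint8)
labels[(hu >= -1000) & (hu < -300)] = 1      # air and lung (rough range)
labels[(hu >= -300) & (hu < 300)] = 2        # soft tissue (rough range)
labels[hu >= 300] = 3                        # bone (rough range)
</syntaxhighlight>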
The processual quantification of matter under such efficient regimes produces predictable outcomes, oriented by industrial concerns that are aligned with pre-established decisions on what counts as pathology or exploitation. What is at stake here is how probable sights of the no-longer-invisible are being framed. So, what implications would it have to let go of the probable, and to try some other ways of making invisible insides visible? What would be an intersectional operation that disobeys anthropo-euro-andro-capable projections? Or: how to otherwise reclaim the worlding of these possible insides?
=== 2. Slicer ===
''In which we meet Slicer, and its collision with trans*feminist urgencies.''

Feminist critical analysis of representation has been helpful in formulating a response to the kind of worlds that slices produce. But by persistently asking who sees, who is seen, and who is allowed to participate in the closed circuit of “seeing”, such modes of critique too easily take the side of the individual subject. Moreover, it is clear that in the context of biomedical informatics, the issue of hegemonic modes of doing is more widely distributed than the problem of the (expert) eye, as will become increasingly clear when we meet our protagonist, the software platform Slicer. This is why we are interested in working through trans*feminist concepts such as entanglement and intra-action as a way to engage with the complicated more-than-oneness that these kinds of techno-ecologies evidently put into practice.

Slicer, or 3D Slicer, is an Open Source software platform for the analysis and visualization of medical images in research environments. The platform is auto-framed by its name, an explicit choice to place the work of cutting or dividing in the center; an unapologetic celebration of the geometric norm of contemporary biomedical imaging. Naming a software package “Slicer” imports the cut as a naturalized gesture, justifying it as an obvious need to prepare data for scientific objectivity. Figuring the software as “Slicer” (like butcher, baker, or doctor) turns it into a performative device by which the violence of that cut is delegated to the software itself. By this delegation, the software puts itself at the service of fitting the already-cut slices to multiple paradigms of straightness, to relentlessly re-render them as visually accessible volumes. In such an environment, any oblique, deviating, unfinished or queer cuts become hard to imagine.

Slicer evolved in the fertile space between scientific research, biomedical imaging and the industry of scanning devices. It sits comfortably in the middle of a booming industry that attempts to seamlessly integrate hardware and software, flesh, bone, radiation, economy and data-processing with the management of it all. In the clinic, such software environments run on expensive patented radiology hardware, sold by global technology companies such as Philips, Siemens and General Electric. In the high-end commercial context of biomedical imaging, Slicer is one of the few platforms that runs independently of specific devices and can be installed on generic laptops. The software is released under an Open Source license which invites different types of users to study, use, distribute and co-develop the project and its related practices. The project is maintained by a community of medical image computing researchers who take care of technical development, documentation, versioning, testing and the publication of a continuous stream of open access papers.

At several locations in and around Slicer, users are warned that this software is not intended for clinical use. The reason Slicer positions itself so persistently outside the clinic might be a liability issue, but seems most of all a way to assert itself as a prototyping environment in-between diagnostic practice and innovative marketable products. The consortium managing Slicer has been drawing in millions of dollars' worth of US medical grants every year for more than a decade. Even so, Slicer’s interface comes across as alarmingly amateurish, bloating the screen with a myriad of options and layers that only vaguely recall the subdued sleekness of corresponding commercial packages. The all-over-the-place impression of Slicer’s interface coincides with its coherent mission to be a prototyping environment rather than an actual software platform. As a result, its architecture is skeletal and its substance consists almost entirely of extensions, each developed for very different types of biomedical research. Only some of this research concerns actual software development; most of it is aimed at developing algorithms for automating tasks such as anomaly detection or organ segmentation. The ideologies and hegemony embedded in the components of this (also) collectively-developed software are again confirmed by the recent adoption of a BSD license, which is considered to be the most “business-friendly” Open Source license around.
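That extension-based architecture is scriptable from the platform's embedded Python console; a minimal sketch, with the file path hypothetical and the two helper calls taken from Slicer's Python API:

<syntaxhighlight lang="python">
# Run inside 3D Slicer's embedded Python console,
# where the `slicer` module is pre-imported.
volume_node = slicer.util.loadVolume("scan.nrrd")   # load a volume into the scene
voxels = slicer.util.arrayFromVolume(volume_node)   # NumPy view of the voxel data
print(voxels.shape)                                 # (slices, rows, columns)
</syntaxhighlight>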
The development of Slicer is interwoven with two almost simultaneous genealogies of acceleration in biomedical informatics. The first is linked to the influential environment of the Artificial Intelligence labs at MIT. In the late nineties, Slicer emerged here as a tool to demonstrate the potential of intervention planning. From the start, the platform connected the arts and manners of Quantitative Imaging to early experiments in robot surgery. This origin story binds the non-clinical environment of Slicer tightly to the invasive gestures of the computer-assisted physician.

The second, even more spectacular genealogy is Slicer’s shared history with the Visible Human Project. In the mid-nineties, when the volume of tomographic data was growing, the US National Library of Medicine felt it necessary to publicly re-confirm the picturings with the visible insides of an actual human body, and to verify that the captured data corresponded to specifically mattered flesh. While the blurry black-and-white slices did seem to resemble anatomic structures, how to ensure that the results were actually correct?

A multi-million dollar project was launched to materially re-enact the computational gesture of tomography onto actual flesh-and-blood bodies. The project started with the acquisition of two ‘volunteers’: one convicted white middle-aged male murderer, allegedly seeking repentance through donating his body to science, and a white middle-aged female, donated by her husband. Their corpses were first vertically positioned and scanned, before being horizontally stabilized in clear blue liquid, then frozen, and sawn into four pieces. Each piece was mounted under a camera and photographed in a zenithal plane, before being scraped down by 3 millimeters to be photographed again. The resulting color photographs were digitized, color-corrected, registered and re-rendered volumetrically in the X, Y and Z planes. Both datasets (the MRI-data and the digitized photographs) were released semi-publicly. These two datasets, informally renamed “Adam” and “Eve”, still circulate as default reference material in biomedical imaging, among other places in current versions of Slicer. Names affect matter; or better said: naming is always already mattering.

The mediatized process of the Visible Human Project coincided with a big push for accessible imaging software platforms that would offer fly-through 3D anatomical atlases, re-inserting modern regimes at the intersection of computer science, biomedical science and general education. It produced the need for automatic registration and segmentation algorithms, and hence for toolkits such as the Insight Segmentation and Registration Toolkit (ITK) that lies at the basis of Slicer.
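A taste of what such a toolkit contributes, sketched through the SimpleITK wrapper; the file name and threshold values are illustrative assumptions, and BinaryThreshold stands in for the wider family of segmentation primitives ITK provides to platforms like Slicer:

<syntaxhighlight lang="python">
import SimpleITK as sitk

# 'stack.nii' is a hypothetical file holding a registered slice stack.
image = sitk.ReadImage("stack.nii")

# Demarcate one 'tissue' as everything between two radiodensity cut-offs.
mask = sitk.BinaryThreshold(image, lowerThreshold=100, upperThreshold=3000,
                            insideValue=1, outsideValue=0)
</syntaxhighlight>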
Slicer opens a small window onto the complex and hypercomputational world of biomedical imaging, and onto the way software creates the matter-cultural conditions of possibility that render so-called bodies volumetrically present. It tells stories of interlocking regimes of power which discipline the body, its modes and representations, in a top-to-bottom mode. It shows how these regimes operate through a distributed and naturalized assumption of efficiency which hegemonically reproduces bodies as singular entities that need to be clear and ready in order to be “healed”. But even when we are critical of the way Slicer orders both technological innovation and biovalue as an economy, its licensing and positioning also create the collective conditions for an affirmative cultural critique of software artifacts. We suspect that a FLOSS environment responsibilizes its community to make sure boundaries do not sit still. Without wanting to suggest that FLOSS itself produces the conditions for non-hegemonic imaginations, its persistent commitment to transformation is key for radical experiments, and for trans*feminist software prototyping.
=== 3. Slicing ===
