
The Fragility of Life

Simone C Niquille in conversation with Jara Rocha and Femke Snelting

<a id="ii-061-1" href="#Item_Index">Item 061</a>

This text was edited from a conversation recorded after the screening of process material for Niquille’s film The Fragility of Life at the Possible Bodies residency in Akademie Schloss Solitude, Stuttgart (May 2017).

Simone C Niquille, The Fragility of Life, 2017, film still

Jara Rocha: In the process of developing the Possible Bodies trajectory, one of the excursions we made was to the Royal Belgian Institute of Natural Sciences’ reproduction workshop in Brussels, where they were working on 3D reproductions of hominids. Another visitor asked: “How do you know how many hairs a monkey like this should have?” The person working on the 3D reproduction replied, “It is not a monkey.”[1] You could see that he had an empathetic connection to the on-screen model he was working on, being of the same species. I would like to ask you about norms and embedded norms in software. Talking about objective truth, parametric representation and the like, in this example you refer to there is a huge norm that worries me: that of species, of unquestioned humanness. When we talk about “bodies”, we can push certain limits because of the hegemony of the species. In a court of law, the norm is anthropocentric, but when it comes to representation…

Femke Snelting: This is the subject of “Kritios They”?

Simone C Niquille: Kritios They is a character in The Fragility of Life, a result of the research project The Contents. While The Contents is based on the assumption that we as humans possess and create content, living in our daily networked space of appearance that is used for or against us, I became interested in the corporeal fragility exposed and created through this data, or that the data itself possesses. In the film, the decimation scene questions this quite bluntly: when does a form stop being human, when do we lose empathy towards the representation? Merely reducing the 3D mesh’s resolution, decreasing its information density, can affect the viewer’s empathy. Suddenly the mesh might no longer be perceived as human, and is revealed as a simple geometric construct: a plain surface onto which any and all interpretation can be projected. The contemporary accelerating frenzy of collecting as much data as possible on one single individual to achieve maximum transparency and construct a “fleshed-out” profile is a fragile endeavor. More information does not necessarily lead to a more defined image. In the case of Kritios They, I was interested in character creation software and the parameters embedded in its interfaces. The parameters come with limitations: an arm can only be this long, skin color is represented within a specified spectrum, and so on. How were these decisions made and these parameters determined? Looking at design history and the field’s striving to create a standardized body to better cater to the human form, I found similarities of intent and problematics.
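
[A minimal sketch of the decimation gesture described here, using Blender’s Python API; the object name “KritiosThey” and the ratio values are assumptions for illustration, not the film’s actual setup:]

    import bpy

    # A human mesh to decimate; "KritiosThey" is an assumed object name.
    obj = bpy.data.objects["KritiosThey"]

    # Add a Decimate modifier and progressively lower the fraction of faces kept.
    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    for ratio in (1.0, 0.5, 0.1, 0.02):
        mod.ratio = ratio
        depsgraph = bpy.context.evaluated_depsgraph_get()
        eval_obj = obj.evaluated_get(depsgraph)
        mesh = eval_obj.to_mesh()  # evaluate the modifier to count remaining faces
        print(f"ratio {ratio:4.2f}: {len(mesh.polygons)} faces")
        eval_obj.to_mesh_clear()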

Alphonse Bertillon, Anthropometric data sheet and Identification Card, 1896

Anthropometric efforts range from da Vinci’s Vitruvian Man, to Le Corbusier’s Modulor, to Alphonse Bertillon’s Signaletic Instructions and invention of the mug shot, to Henry Dreyfuss’s Humanscale… What these projects share is an attempt to translate the human body into numbers, be it for the sake of comparison, efficiency, policing…

In a Washington Post article from 1999[2] on newly developed voice-mimicking technology, Daniel T. Kuehl, the chairman of the Information Operations department at the National Defense University in Washington (the military’s school for information warfare), is quoted as saying: “Once you can take any kind of information and reduce it into ones and zeroes, you can do some pretty interesting things.”

Humanscale 7b: Seated at Work Selector, Henry Dreyfuss Associates, MIT Press, 1981. Photo: Courtesy of Cooper Hewitt, Smithsonian Design Museum http://collection.cooperhewitt.org/objects/51689299

To create the “Kritios They” character I used a program called Fuse.[3] It was recently acquired by Adobe and is in the process of being integrated into their Creative Cloud services. It originated as assembly-based 3D modeling research carried out at Stanford University. The Fuse interface segments the body into Frankenstein-like parts to be assembled by the user. However, the seemingly restriction-free Lego-character-design interface is littered with limitations. Not all body parts mix as well as others; some create uncanny folds and seams when assembled. The torso has to be a certain length and the legs positioned in a certain way, and when I try to adapt these elements, the automatic rigging process doesn’t work because the mesh is no longer recognized as a body.
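
[None of this is Fuse’s actual code, but the way a character creator bakes limits into its sliders can be illustrated with a few hypothetical, clamped parameters; the names and ranges below are invented:]

    from dataclasses import dataclass

    @dataclass
    class SliderParam:
        """One slider of a hypothetical character creator, with hard limits."""
        name: str
        minimum: float
        maximum: float

        def set(self, value: float) -> float:
            # Whatever the user asks for is silently clamped to the allowed range.
            return max(self.minimum, min(self.maximum, value))

    arm_length = SliderParam("arm_length_cm", minimum=55.0, maximum=80.0)
    torso_length = SliderParam("torso_length_cm", minimum=40.0, maximum=60.0)

    print(arm_length.set(90.0))    # 80.0: the request is pulled back to the ceiling
    print(torso_length.set(35.0))  # 40.0: bodies outside the range cannot be made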

A lot of these processes and workflows demand content that is very specific to their definition of the human form in order to function. As a result, they don’t account for anything that diverges from that norm, establishing a parametric truth that is biased and discriminatory. This raises the question of what that norm is and how, by whom and for whom it has been defined.

FS: Could you say something about the notion of “parametric truth” that you use?

SN: Realizing the existence of a built-in anthropometric standard in such software, I started looking at use cases of motion capture and 3D scanning in areas other than entertainment — applications that demand objectivity. I was particularly interested in crime and accident reconstruction animations that are produced as visual evidence or as courtroom support material. Traditionally this support material would consist of photographs, diagrams and objects. More recently it sometimes includes forensic animations commissioned by either party. The animations are produced with various software and tools, sometimes including motion capture and/or 3D scanning technologies.

These animations are created after the fact, from a varying amalgam of witness testimonies, crime scene survey data, police and medical reports and so on, effectively creating a “version of” events rather than an objective illustration. One highly problematic instance was an animation intended as a piece of evidence in the trial of George Zimmerman on the charge of second-degree murder for the shooting of Trayvon Martin in 2012. Zimmerman’s defense commissioned an animation to present his actions as self-defense. Among the online documentation of the trial is a roughly two-hour-long video of Zimmerman’s attorney questioning the animator on his process. Within these two hours of questioning, the defense attorney attempts to demonstrate the animation’s objectivity by minutely scrutinizing the creation process. It is revealed that a motion capture suit was used to capture the characters’ animations, to digitally re-enact Zimmerman and Martin. The animator states that he was the one wearing the motion capture suit, portraying both Zimmerman and Martin. If this weren’t already enough to debunk any claim to objectivity, the attorney asks: “How does the computer know that it is recording a body?” Upon which the animator responds: “You place the sixteen sensors on the body and then on screen you see the body move in accordance.” But what is on screen is merely a representation of the data transmitted by sixteen sensors, not a body.

A misplaced or wrongly calibrated sensor would yield an entirely different animation. Furthermore, the anthropometric measurements of the two subjects were added in post-production, after the animation data had been recorded from the animator’s re-enactment. In this case the animation was thankfully not admitted as a piece of evidence, but it was nevertheless allowed to be screened during the trial. The difference with showing a video in court is that seeing something play out visually, in a medium we are used to consuming, takes root in a different part of your memory than a verbal account and renders one version more visible than others. Even with part of the animation based on data collected at the crime scene, a part of the reproduction will remain approximation and assumption.
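
[The point that the screen shows a reconstruction from sensor data plus assumed measurements, not a body, can be made with a deliberately simplified sketch; the sensor positions and forearm lengths below are invented, not taken from any real capture system:]

    import numpy as np

    def reconstructed_wrist(elbow_joint, elbow_sensor, wrist_sensor, forearm_length_cm):
        """Place the wrist along the sensed direction, at an *assumed* segment length."""
        direction = np.asarray(wrist_sensor, float) - np.asarray(elbow_sensor, float)
        direction /= np.linalg.norm(direction)
        return np.asarray(elbow_joint, float) + forearm_length_cm * direction

    elbow_joint = (0.0, 0.0, 120.0)                       # centimetres
    sensors = {"elbow": (1.0, 0.0, 121.0), "wrist": (26.0, 3.0, 110.0)}

    # The same sensor readings yield different bodies once different anthropometric
    # measurements are filled in after the fact.
    for length in (24.0, 29.0):
        print(length, reconstructed_wrist(elbow_joint, sensors["elbow"], sensors["wrist"], length))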

3D animation by Reuters-owned News Direct (“Transform your News with 3D Graphics”), “FBI investigates George Zimmerman for shooting of Florida teen, Trayvon Martin”, News Direct, 2012

This is visible in the visual choices of the animation, for example. Most parts are modeled with minimal detail (I assume to communicate objectivity): “There were no superfluous aesthetic choices made.” However, some elements receive very selective and intentional detailing. The crime scene’s grassy ground is depicted as a flat plane with an added photographic texture of grass rather than 3D grass produced with particle hair. On the other hand, Zimmerman’s and Martin’s skin color is clearly accentuated, as is the hoodie worn by Trayvon Martin, a crucial piece of the defense’s case. The hoodie was instrumentalized as evidence of violent intentions during the trial, where it was claimed that if Martin had not worn the hood up he would not have been perceived as a threat by Zimmerman. To model these elements at varying subjective resolution was a deliberate choice. It could have depicted raw armatures instead of textured figures, for example. The animation was designed to focus on specific elements; shifting that focus would produce differing versions.

FS: This is something that fascinates me, the different levels of detailing that occur in the high-octane world of 3D, where some elements receive an enormous amount of attention and other elements, such as the skeleton or the genitals, almost none.

SN: Yes, like the sixteen sensors representing a body…

FS: Where do you locate these different levels of resolution?

SN: Within the CGI [computer-generated imagery] community, modelers are obsessed with creating 3D renders in the highest possible resolution as a technical as well as artistic accomplishment, but also as a form of muscle flexing of computing power. Detail is not merely a question of render quality; equally important is the realism achieved: a tear on a cheek, a thin film of sweat on the skin. On forums you come across discussions of something called subsurface scattering,[4] which is used to simulate blood vessels under the skin to make it look more realistic, to add weight and life to the hollow 3D mesh. However, the discussions tend to focus on pristine young white skin, oblivious to diversity.
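
[A minimal sketch of such a subsurface scattering setup with Blender’s Python API and the Principled BSDF shader that note 4 points to, assuming Blender 2.93 socket names; the material name and the colour and radius values are illustrative assumptions:]

    import bpy

    mat = bpy.data.materials.new(name="SkinSketch")
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]

    bsdf.inputs["Base Color"].default_value = (0.8, 0.6, 0.5, 1.0)
    bsdf.inputs["Subsurface"].default_value = 0.1              # how much light enters the surface
    # Per-channel scattering distance: red travels furthest, simulating blood under the skin.
    bsdf.inputs["Subsurface Radius"].default_value = (1.0, 0.2, 0.1)
    bsdf.inputs["Subsurface Color"].default_value = (0.9, 0.3, 0.25, 1.0)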

JR: This raises the notion of the “epistemic object”. The matter you manipulated brings a question to a specific table, but it cannot be on every table: it cannot be on the “techies’” table and on the designers’ table. However, under certain conditions, with a specific language and political agenda and so on, The Contents raises certain issues and serves as a starting point for a conversation, or facilitates an argument for one. This is where I find your work extremely interesting. I consider the things you make to be objects around which to formulate a thought, for thinking about specific crossroads. As such they can be considered part of “disobedient action-research”, epistemic objects in the sense that they make me think, help me wonder about political urgencies, techno-ecological systems and the decisions that went into them.

SN: That’s specifically what two scenes in the film experiment with: the sleeping shadow and the decimating mug shot. They depend on the viewer’s expectations. The most beautiful reaction to the decimating mug shot scene has been: “Why does it suddenly look so scary?”

The viewer has an expectation of the image that is slowly taken away, quite literally, by lowering the resolution. It is similar with the sleeping scene: what appears to be a sleeping figure filmed through frosted glass unveils itself when the camera angle changes. The new perspective reveals another reality. What I am trying to figure out now is how the images operate in different spaces. Probably there isn’t one single application; they can be in The Fragility of Life as well as in a music video or an ergonomic simulation, for example, and travel through different media and contexts. I am interested in how these images exist in these different spaces.

FS: We see that these renderings, not only yours but in general, are very volatile in their ability to transgress applications, on the large scale of movements ranging from Hollywood to medicine, to gaming, to the military. But it seems, seeing your work, that this transgression can also function on different levels.

SN: These different industries share software and tools, which are after all developed at their crossroads. Creating images that attempt to transgress levels of application is a way for me to reverse the tangent and question the tools of production.

Is the image produced differently if the tool is the same but its application differs? If 3D modeling software created by the gaming industry were used to create forensic animations, possibly incarcerating people, what are the parameters under which that software operates? This is a vital question affecting real lives.

JR: Can you please introduce us to Mr. #0082a?

SN: In attempting to find answers to some of the questions on the Fuse character creator software’s parameters I came across a research project initiated by the U.S. Air Force Research Laboratory from the late 1990s and early 2000s called “CAESAR” [Civilian American and European Surface Anthropometry Resource].

#0082a is a whole-body scan mesh from the CAESAR database,[5] presumably the 82nd scanned subject in position a. The CAESAR project’s aim was to create a new anthropometric surface database of body measurements for the Air Force’s cockpit and uniform design. The new database was necessary to represent contemporary U.S. military staff. Previous measurements were outdated, as the U.S. population had grown more diverse since the last measurement standards had been registered. This large-scale project consisted of scanning about 2000 bodies in the United States, Italy and the Netherlands. A dedicated team travelled to various cities within these countries, outfitted with the first whole-body scanner developed specifically for this purpose by a company called Cyberware. This is how I initially found out about the CAESAR database: by trying to find information on the Cyberware scanner.

CAESAR database used as training set in the research towards a parametric three-dimensional body model for animation. Loper et al., “Method for providing a three-dimensional body model,” patent US 10,417,818 B2 filed by Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., 2019

I found a video somewhere deep within YouTube; it was this very strange and wonderful video of a 3D figure dancing on a NIST [U.S. National Institute of Standards and Technology] logo. The figure looked like an early 3D scan that had been crudely animated. I got in touch with the YouTube user and, through a Skype conversation, learned about his involvement in the CAESAR project through his work at NIST. Out of his own personal fascination with 3D animation, he had made the video I initially found by animating one of the CAESAR scans, #0082a, with an early version of Poser.

Cyberware[6] has its origins in the entertainment industry. They scanned Leonard Nimoy, who portrayed Spock in the Star Trek series, for the famous dream sequence in the 1986 movie Star Trek IV: The Voyage Home. Nimoy’s head scan is among the first 3D scans… The trajectory of the Cyberware company is part of a curious pattern: it originated in Hollywood as a head scanner, advanced to a whole-body scanner for the military, and completed the entertainment-military-industrial cycle by returning to the entertainment industry for whole-body scanning applications.

CAESAR is, as far as I know, one of the biggest databases of scanned body meshes and anthropometric data available to this day. I assume that is why it keeps on being used — recycled — for research in need of humanoid 3D meshes.

While looking into the history of the character creator software Fuse, I sifted through 3D mesh segmentation research, which later informed the assembly modeling research at Stanford that became Fuse. #0082 was among twenty CAESAR scans used in a database assembled specifically for this segmentation research and thus ultimately played a role in setting the parameters for Fuse. A very limited amount of training data that, in the case of Fuse, ended up shaping a widely distributed commercial software. At least at this point the training data should be reviewed… It felt like a whole ecology of past and future 3D anthropometric standards revealed itself through this one mesh.
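
[A hypothetical illustration of the worry raised here: when slider ranges are derived from a tiny training set, bodies outside that sample become impossible to model. The twenty “arm length” values below are randomly generated stand-ins, not CAESAR data:]

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for measurements taken from twenty scans; these values are invented.
    training_arm_lengths_cm = rng.normal(loc=65.0, scale=3.0, size=20)

    slider_min, slider_max = np.percentile(training_arm_lengths_cm, [1, 99])
    print(f"slider range derived from 20 samples: {slider_min:.1f} to {slider_max:.1f} cm")

    def representable(arm_length_cm: float) -> bool:
        return slider_min <= arm_length_cm <= slider_max

    print(representable(64.0))   # likely True: close to the sampled mean
    print(representable(78.0))   # likely False: a real arm, outside the sampled range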

Notes

  1. See also: “We hardly encounter anything that didn’t matter,” in this book.
  2. William M. Arkin, “When Seeing and Hearing Isn’t Believing,” Washington Post, February 1999, https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2fwp-srv%2fnational%2fdotmil%2farkin020199.htm.
  3. Jeanette Mathews, “An Update on Adobe Fuse as Adobe Moves to the Future of 3D & AR Development,” September 13, 2019, https://www.adobe.com/products/fuse.html.
  4. “Subsurface Scattering,” Blender 2.93 Reference Manual, accessed July 1, 2020, https://docs.blender.org/manual/en/latest/render/shader_nodes/shader/sss.html.
  5. Products based on this database are commercialized by SAE International, http://store.sae.org.
  6. “Cyberware,” Wikipedia, accessed July 1, 2020, https://en.wikipedia.org/wiki/Cyberware.