__NOTOC__
 
== We hardly encounter anything that didn’t really matter ==
'''Phil Langley in conversation with Possible Bodies'''
  
'''As an architect and computational designer, Phil Langley develops critical approaches to technology and software for architectural practice and spatial design. Our first conversation started from a shared inquiry into MakeHuman,'''<ref name="ftn162">See: Possible Bodies, “MakeHuman,” in this book.</ref>''' the Open Source software project for modeling 3-dimensional humanoid characters.'''
  
'''In the margins of the yearly Libre Graphics Meeting in Toronto, we spoke about the way that materiality gets encoded into software, about parametric versus generative approaches, and the symbiotic relationship between algorithms that run simulations and the structure of that algorithm itself. “I think there is a blindness in understanding that the nature of the algorithm affects the nature of the model… The model that you see on your screen is not the model that is actually analyzed.”'''<ref name="ftn163">See: “Phil Langley in conversation with Possible Bodies, Comprehensive Features,” [https://volumetricregimes.xyz/index.php?title=Comprehensive_Features https://volumetricregimes.xyz/index.php?title=Comprehensive_Features].</ref>

'''Six years later, we ask him about his work for the London-based architecture and engineering firm Bryden Wood, where he is now responsible for a team that might handle computational design in quite a different way.'''
  
 
=== A very small ecosystem ===
  
'''Phil Langley''': For the Creative Technologies team that I set up in my company, we hired twenty people doing computational design and they all come from very similar backgrounds: architectural engineering plus a postgraduate or a master’s degree in “computational design”. We all have similar skills and are from a narrow selection of academic institutions. It is a very small ecosystem.
  
I followed a course around 2007 that is similar to what people do now. Some of the technology moves on for sure, but you’re still learning the same kind of algorithms that were there in the 1950s or sixties or seventies. They were already old when I was doing them. You’re still learning some parametrics, some generative design, generative algorithms, genetic algorithms, neural networks and cellular automata; it is absolutely a classic curriculum. Same texts, same books, same references. A real echo chamber.
  
One of the things I hated when I studied was the lack of diversity of thought, of criticality around these topics. And also the fact that there’s only a very narrow cross-section of society involved in creating these kinds of techniques. If you ever mentioned the fact that some of these algorithmic approaches came from military research, the response was: “So what?” It wasn’t even that they said that they already knew that. They were just like “Nothing to say about that, how can that possibly be relevant?”
  
 
=== How can you say it actually works?  ===
  
'''PL''': When building the team, I was very conscious about not stepping straight into the use of generative design technologies, because we certainly haven’t matured enough to start the conversation about how careful you have to be when using those techniques. We are working with quite complex situations and so we can’t have a complex algorithm yet because we have too much to understand about the problem itself.
  
We started with a much more parametric and procedural design approach, that was much more... I wouldn’t say basic... but lots of people in the team got quite frustrated at the beginning because they said, we ''can'' use this technique, why don’t we just use this? It’s only this year that we started using any kind of generative design algorithms at all. It was forced on us actually, by some external pressures. Some clients demanded it because it has become very fashionable and they insisted that we did it. The challenges or the problems or the kind of slippage is how to try and build something that uses those techniques, but to do it consciously. And we are not always successful in achieving that, by the way.
  
 
The biggest thing we were able to achieve is the transparency of the process, because normally everything that you pile up to build one of those systems gets lost. Because it is always about the performance of it, that is what everybody wants to show. They don’t want to tell you how they built it up bit by bit. People just want to show a neural network doing something really cool, and they don’t really want to tell you how they encoded all of the logic and how they selected the data. There are just thousands of decisions to make all the way through about what you include, what you don’t include, how you privilege things and not privilege other things.
 
At some point, you carefully smooth all of the elements or you de-noise that process so much… You simplify the rules and you simplify the input context, you simplify everything to make it work, and then how can you say that it actually works? Just because it executes and doesn’t crash, is that really the definition of functionality, what sort of truth does it tell you? What answers does it give you?
  
You make people try to understand what it does and you make people talk about it, to be explicit about each of those choices they make, all those rules, inputs, logics, geometry or data, what they do to turn that into a system. Every one of those decisions you make defines the n-dimensional space of possibilities. And if you take some very complicated input and you can’t handle it in your process and you simplify so much, you’ve already given a shape to what it could possibly emerge as. So one of the things we ended up doing is spending a lot of time on that and we discuss each micro step. Why are we doing it like this? It wasn’t always easy for everyone because they didn’t want to think about documenting all the steps.
  
Yesterday we had a two-hour conversation about mesh interpolation and the start of the conversation was a data flow diagram, and one of the boxes just said something like: “We’re just going to press this button and then it turns into a mesh”. And I said: “Woah, wait a minute!” Some people thought, “What do you mean, it’s just a feature, it’s just an algorithm. It’s just in the software, we can just do it.” And I said, “No way.” That’s even before you get towards building something that acts on that model. I think that’s what we got out of it actually: not starting with the most, let’s say, sophisticated approach has allowed us to have more time to reflect on what fueled the process.
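
To make the kind of choices hidden behind that button concrete, here is a minimal sketch, assuming the Open3D library and a hypothetical survey file called “scan.ply”; it is an illustration of the decision points, not the team’s actual pipeline.

<syntaxhighlight lang="python">
import open3d as o3d

# A sketch of what sits behind "press this button and it turns into a mesh",
# assuming Open3D and a hypothetical scan file. Every numeric argument below is
# a decision that shapes what the resulting mesh can possibly be.

pcd = o3d.io.read_point_cloud("scan.ply")

# Decision 1: how much detail to throw away so the data becomes manageable.
pcd = pcd.voxel_down_sample(voxel_size=0.01)

# Decision 2: what counts as "noise" and gets removed.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Decision 3: how surface orientation is estimated from bare points.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Decision 4: which reconstruction algorithm, and at what resolution.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

print(mesh)  # the "button" output, already shaped by every choice above
</syntaxhighlight>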
  
 
=== Decisions have to be made  ===
 
 
'''Possible Bodies''': Do you think that transparency can produce a kind of control? Or that ‘understanding’ is somehow possible?
  
'''PL''': It depends what you mean by control, I would say.
  
It is not necessarily that you do this in order to increase the efficacy of the process or to ensure you get better results. You don’t do it in order to understand all of the interactions because you cannot do that, not really. You can have a simpler algorithmic process, you can have an idea of how it’s operating, there is some truth in that, in the transparency, but you lose that quite quickly as the complexity grows. It’s more to say that you re-balance the idea that you want to see an outcome you like, and therefore then claim that it works. I want to be able to be explicit about everything that I know all the way along. In the end that’s all you have. By making explicit that you have made all these steps, you make clear that decisions have to be made. That at every point you’re intervening in something, and it will have an effect. Almost every one of these things has an effect to a greater or lesser extent and we hardly encounter anything that didn’t really matter. Not even if it was a bug. If it wasn’t really affecting the system, it’s probably because it was a bug in the process rather than anything else.
  
I think that transparency is not about gaining control of a process in itself, it’s about being honest with the fact that you’re creating something with a generative adversarial network (GAN) or a neural network, whatever it is. That it doesn’t just come from TensorFlow,<ref name="ftn164">TensorFlow is “An end-to-end open source machine learning platform” used for both research and production at Google. [https://www.tensorflow.org/ https://www.tensorflow.org/].</ref> fully made and put into your hand and you just press play.
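
As a small illustration of how much is decided before anyone “presses play”, here is a hedged sketch using TensorFlow/Keras on an entirely made-up dataset; the layer sizes, loss, optimizer and training length are all arbitrary assumptions, not a recommended setup.

<syntaxhighlight lang="python">
import numpy as np
import tensorflow as tf

# Hypothetical training data: 1,000 design options described by 6 numbers each,
# with a single performance score. What goes in here is already a decision.
rng = np.random.default_rng(0)
features = rng.random((1000, 6)).astype("float32")
scores = features.sum(axis=1, keepdims=True).astype("float32")

# Nothing arrives "fully made": architecture, activation, loss and optimizer are
# all choices that shape what the system can possibly learn.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Even "how long to train" and "how much data to hold back" are decisions.
model.fit(features, scores, epochs=5, validation_split=0.2, verbose=0)
print(model.predict(features[:3], verbose=0))
</syntaxhighlight>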
  
 
=== Getting lost in nice little problems ===
 
 
'''PL''': The point I was trying to make to everyone on the team was, well, if you simplify the mesh so much in order that it’s smooth and so you can handle it in the next process, what kind of reliance can you have on the output?
  
 
=== I try to talk about language with everybody ===
  
'''PL''': I try to talk a little bit about language with everybody. I’m trying not to overburden everybody with all of my predilections. I can’t really impose anything on them. Language is a big thing, like explaining Genetic Algorithms with phrases like, “So this is the population that would kill everybody, that’s like unsuitable or invalid.” For example, if you use a multi-objective Genetic Algorithm, you might try to keep an entire set of all solutions or ''configurations'', as we would call them, that you create through all of the generations of the process. The scientific language for this is “population”. That’s how you have to talk. You might say, “I have a population of fifty generations of the algorithm. Five thousand individuals would be created throughout the whole process. And in each generation you’re only ‘breeding’ or combining a certain set and you discard the others.” You leave them behind, that’s quite common. And we had a long talk about whether or not we should keep all of the things that were created and the discussion was going on like, “But some of them were just like rubbish. They’re just stupid. We should just kill them, no one needs to see them again.” And I’m like, “well I don’t know, I quite like to keep everybody!”
 
 
You can have for example parallel objectives that you are trying to resolve rather than trying to create each ‘individual’ perfectly. Basically you end up with a more negotiable outcome. And it’s a very common way of doing it these days: the process is ‘solving’ for multiple performance goals that are often competing – like getting something that is incredibly light but also incredibly strong.
 
  
 
Of course all you’re really doing is optimizing, tending towards something that’s better, and you lose the possibility of chance and miss something. There’s a massive bit of randomness in it and you have a whole set of controls about how much randomness you allow through the generational processes. So I have this massive metaphor that comes with hugely problematic language around genetics and all that kind of stuff, encoded with even more problematic language, without any criticality, into the algorithmic process. And then someone is telling me “I’ve got a slider that says increase the randomness on that”. So it’s full of all those things, which I find very challenging.
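
As a concrete, hedged illustration of this vocabulary (populations, generations, breeding, discarding, and the “randomness slider”), here is a toy multi-objective genetic algorithm. It is not the team’s actual tooling; the objectives, sizes and mutation rate are all invented, and the one deliberate feature is that every individual of every generation is archived rather than thrown away.

<syntaxhighlight lang="python">
import random

GENES = 8             # each "configuration" is just a list of 8 numbers here
POPULATION_SIZE = 100
GENERATIONS = 50      # 50 generations of 100 individuals: 5,000 configurations in total
MUTATION_RATE = 0.1   # the "randomness slider", made explicit

def random_configuration():
    return [random.uniform(0.0, 1.0) for _ in range(GENES)]

def objectives(config):
    # Two competing goals, standing in for "incredibly light but also incredibly strong":
    # a weight to minimise and a (toy) strength proxy to maximise.
    weight = sum(config)
    strength = sum(g * g for g in config)
    return weight, strength

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on at least one."""
    wa, sa = objectives(a)
    wb, sb = objectives(b)
    return (wa <= wb and sa >= sb) and (wa < wb or sa > sb)

def breed(parent_a, parent_b):
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    # Mutation injects the controlled randomness discussed above.
    return [random.uniform(0.0, 1.0) if random.random() < MUTATION_RATE else gene
            for gene in child]

population = [random_configuration() for _ in range(POPULATION_SIZE)]
archive = []  # "I quite like to keep everybody": nothing gets thrown away

for generation in range(GENERATIONS):
    archive.extend(population)
    # Non-dominated individuals are kept for breeding; the rest would normally be discarded.
    front = [c for c in population
             if not any(dominates(other, c) for other in population if other is not c)]
    population = [breed(random.choice(front), random.choice(front))
                  for _ in range(POPULATION_SIZE)]

print(f"{len(archive)} configurations created across {GENERATIONS} generations")
</syntaxhighlight>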
 
=== The Hairy Hominid effect ===
  
'''PB''': We would like to bring up the Truthful Hairy Hominid here.<ref name="ftn165">“Item 086: The Truthful Hairy Hominid,” ''The Possible Bodies Inventory,'' 2014.</ref> The figure emerged when looking over the shoulder of a designer using a combination of modeling softwares to update the representation of the human species for the “Gallery of Humankind”. They were working on one concrete specimen and the designer was modeling their hair, which was then going to be placed on the skin. And someone in our group asked the designer, “How do you know when to stop? How many hairs do you put on that face, on that body?” And then the designer explained that there’s a scientific committee of the museum that handed him some books, that had some information that was scientifically verified, but that all the rest was basically an invention. So he said that it’s more or less this amount of hair or this color, this density of hair. And this is what we kept with us: when this representation is finished, when the model is done and brought from the basement to the gallery of the museum, that representation becomes the evidence of truth, of scientific truth.<ref name="ftn166">Another aspect of the ''Hairy Hominid effect'' appears in our conversation with Simone C. Niquille, “The Fragility of Life,” in this chapter.</ref>
  
 
'''PL''': It acts like a stabilization of all of those thoughts, scientific or not, and by making it in that way, it formalizes them and becomes unchallengeable.
  
'''PB''': The sudden arrival of an invented representation of hominids on the floor of a natural science museum, this functional invention, this efficacy, is turned into scientific truth. This is what we call “The Hairy Hominid effect”. Maybe you have some stories related to this effect, on the intervention of what counts or what is accountable? Or is the tunnel already one?
  
'''PL''': Well, the technology of the tunnel project maybe is, and how we’re using this stuff.
  
A point-cloud contains millions and millions of data-points from surveys, like in LiDAR scanning. It’s still really novel to use them in our industry, even though the technology has been around for years.<ref name="ftn167">LiDAR is an acronym of “light detection and ranging” or “laser imaging, detection, and ranging”.</ref> I would say the reason it still gets pushed as a thing is because it has become a massive market for proprietary software companies who say: “Hey, look, you have this really cool map, this really cool point cloud, wouldn’t it be cool if you could actually use it in a meaningful way?” And everybody goes, “yes!”, because these files are each four and a half gigabytes, you need a three thousand pound laptop to open it and it’s not even a mesh, you can’t really do anything with it, it just looks kind of cool. So the software companies go: “Don’t worry about it. We’ll sell you thousands of pounds worth of software, which will process this for you into a mesh”. But no one really is thinking about, well… how do you really process that?
  
 
A point-cloud is just a collection of random points. You can understand that it is a tunnel or a school or a church by looking at it, but when you try and get in there and measure, if you’re really trying to measure a point cloud... what point do you choose to measure? And whilst they say the precision is like plus or minus zero point five millimeters... well, if that was true, why have we got so much noise?
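
To make the measuring problem tangible, here is a small sketch with entirely synthetic numbers (a flat wall, a made-up scanner) rather than any real survey: even with the quoted half-millimeter precision, the answer you get depends on which points you decide to measure.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical scan: one million points on a wall that is exactly 2000.0 mm away,
# with Gaussian noise of 0.5 mm standard deviation standing in for sensor precision.
true_distance_mm = 2000.0
points = true_distance_mm + rng.normal(0.0, 0.5, size=1_000_000)

# "What point do you choose to measure?" Three plausible choices, three answers.
single_point = points[0]           # click on one point in the viewer
local_patch = points[:50].mean()   # average a small patch
whole_wall = points.mean()         # average the entire cloud

print(f"single point:   {single_point:.3f} mm")
print(f"50-point patch: {local_patch:.3f} mm")
print(f"whole wall:     {whole_wall:.3f} mm")
print(f"spread across the cloud: {points.max() - points.min():.3f} mm")
</syntaxhighlight>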
 
=== A potential for possibilities ===
  
'''PB''': When we spoke in Toronto six years ago, you defended generative procedures against parametric approaches, which disguise the ''probable'' as ''possible''. Did something change in your relation to the generative and its transformative potential?
  
'''PL''': I think it became more complex for me, when you actually have to do it in real life. I still think that there’s huge risks in both approaches and at the time I probably thought that the reward is not worth the risk in parametric approaches. If you can be good at the generative thing, that’s riskier, it’s much easier to be bad at it, but the potential for possibilities is much higher.
  
What is more clear now is that these are general processes that you have to encounter, because everybody else is doing it, the bad ones are doing it. And I think it’s a territory that I’m not prepared to give up, that I don’t want to encounter these topics on their terms. I don’t consider the manifestations, those that we don’t like in lots of different ways, to be the only way to use this technology. I don’t consider them to be the intellectual owners of it either. I am not prepared to walk away from these techniques. I want to challenge what they are in some way.
  
Over the last few years of building things for people, and working with clients, and having to build while we were also trying to build a group of people to work together, you realize that the parametric or procedural approaches give you an opportunity to focus on what is necessary, to clarify the decision-making in all these choices you make. It is more useful in that sense. I was probably quite surprised how little people really wanted to think about those things in generative processes. So we had to start a little bit more simple.
  
You have to really think first of all, what is it you’re going to make? Is it okay to make it? There’s a lower limit almost of what’s okay to make in a parametric tool because changes are really hard, because you lock in so many rules and relationships. The model can be just as complicated in a generative process, but you need to have a kind of fixed idea of what the relationships represent within your model, within your process. Whereas in generative processes, because of the very nature of the levels of abstraction, which cause problems, there are also opportunities. So without changing the code, you can just say, well, this thing actually is talking about something completely different. If you understand the math of it, you can assign a different name to that variable in your own mind, right? You don’t even need to change the code.
  
With a parametric approach you’re never going to get out of the fact that it is about a building of a certain type, you can never escape that. And we built parametric tools to design housing schemes or schools as well as some other infrastructure things, data centers even, and that is kind of okay, because the rules are not controversial when you think about schools for example. And you’re probably thinking, hang on a minute, Phil, these can be controversial, but in the context of our problem definition, they were unchallengeable by anybody, they came from the government.
  
 
=== Showing the real consequences ===
  
'''PL''': Parametric approaches make problems in the rules and processes visible. I think that’s a huge thing. Because of the kind of projects we are building, we are given a very hard set of rules that no one is allowed to challenge. So you try and encode them into a parametric system and it won’t work basically.
  
In the transport infrastructure projects we were doing, there are rule changes around safety, like the distance between certain things. And we could show what the real consequences would be, and that this was not going to achieve the kind of safety outcome that they were looking for. Sometimes you’re just making it very clear what it was that they thought that they were asking for: you told us to do it like this, this is what it gives you, I don’t think that’s what you intended.
  
 
We never allowed the computer to solve those problems in any of the things we’ve built. It just tells you, just so you know, that did not work, that option. And that’s very controversial, people often don’t really like that. They’re always asking us to constrain them. “Why do you let the system make a mistake?”
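
A minimal sketch of that stance, with an invented rule and invented numbers rather than any real project standard: the safety clearance is encoded explicitly, candidate options are checked against it, and failures are reported instead of being silently constrained away.

<syntaxhighlight lang="python">
from dataclasses import dataclass

MIN_CLEARANCE_M = 2.4  # hypothetical safety rule: minimum clearance, in meters

@dataclass
class Option:
    name: str
    walkway_width_m: float
    tunnel_diameter_m: float

    def clearance(self) -> float:
        # Invented relationship, standing in for the real geometry of an option.
        return (self.tunnel_diameter_m - self.walkway_width_m) / 2

options = [
    Option("A", walkway_width_m=1.2, tunnel_diameter_m=7.0),
    Option("B", walkway_width_m=2.4, tunnel_diameter_m=7.0),
    Option("C", walkway_width_m=1.8, tunnel_diameter_m=6.2),
]

# The system does not "solve" the problem; it just tells you which options did not work.
for opt in options:
    c = opt.clearance()
    if c >= MIN_CLEARANCE_M:
        print(f"option {opt.name}: ok (clearance {c:.2f} m)")
    else:
        print(f"option {opt.name}: did not work (clearance {c:.2f} m < {MIN_CLEARANCE_M} m)")
</syntaxhighlight>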
  
<div class="no-margin-top">
=== Sometimes it is not better than nothing ===
</div>
  
 
'''PB''': When we speak to people that work with volumetric systems, whether on the level of large scale databases for plants, or for making biomedical systems … when we push back on their assumption that this is reality, they will say, “Of course the point-cloud is not a reality. Of course the algorithm cannot represent population or desire.” But then when the system needs to work, it is apparently easy to let go of what that means. The need to make it work, erases the possibility for critique.
  
'''PL''': One of the common responses I see is something like, “Yeah, but it is better than nothing.” Or that is at least part of the story. They have a very Modernist idea that you run this linear trajectory towards complete know-how or knowledge or whatever and that these systems are incomplete rather than imperfect and that if you have a bit more time, you’ll get there. But where we are now, it’s still better than then. So why not use it?
  
In the construction sector you constantly encounter these unlucky wanna-be Silicon Valley tech billionaires, who will just say like, “But you just do it with a computer. Just do it with an algorithm!” They’ve fallen for that capitalist idea that technology will always work in the end. It ''must'' work. And whenever I present my work in conferences, I always talk about my team, what people are in the team, how we built it in some way. To the point that actually lots of people get bored of it. Other people, when they talk about these kinds of techniques, will say “We’ve got this bright kid, he’s got a PhD from wherever. He’s brilliant. He just sits in the corner. He’s just brilliant.” And of course, it’s always a guy as well. They instrumentalize these people, as the device to execute their dream, which is that the computer will do everything. There’s still this kind of a massively Modernist idea that it’s just a matter of time until we get to that.
  
Sometimes a point-cloud is ''not'' better than nothing because it gives you a whole other problem to deal with, another idea of reality to process. And by the time you get into something that’s usable, it has tricked you into thinking that it’s real. And that’s true about the algorithms as well. You’re wrestling with very complicated processes and by the time you think that you kind of control it, it just controlled you, it made you change your idea of the problem. You simplify your own problem in order that you can have a process act on it. And if you’re not conscious about how you’re simplifying your problem in order to allow these things to act on it, if you’re not transparent about that, if you don’t acknowledge it, then you have a very difficult relationship with your work.
  
 
=== Supposed scientific reality ===
  
'''PL''': We have used genetic algorithms on a couple of projects now and the client in one project was just not interested in what methods we were using. They did not want us to tell them, they did not care. They wanted us to show what it does and then talk about that, which is kind of okay. It’s not their job, anyway. The second client was absolutely not like that at all, they were looking for a full explanation of everything that we did. And our explanation did not satisfy them because it didn’t fit with their dream of what a genetic process does.
  
We were fighting this perception that as soon as you use this technique, it should just work out of the box. And then we’re building this thing over a matter of weeks and it was super impressive how far we got, but he still told us, “I don’t understand why this isn’t finished.” But it took the US military fifty years to make any of this. Give me a break!
  
 
'''PB''': The tale of genetics comes with its own promise, the promise of a closed circuit. I don’t know if you follow any of the critiques on genetics from microchimerism or epigenetics, basically anything that brings complexity. They ask: what are the material conditions in which that process actually takes place? It’s of course never going to work perfectly.
  
'''PL''': The myth-making comes with the weight of all other kinds of science and therefore implies that this thing should work. Neural networks have this as well, because of, again, this storytelling about the science of it, and I think the challenge for those generative processes is exactly in their link to supposed scientific realities and the sort of one-to-one mapping from incomplete science, or unsatisfactory science, into another incomplete, unsatisfactory discipline, without question. You can end up in a pretty spooky place with something like a Genetic Algorithm that is abstracted from biochemistry, arguing in a sort of eugenic way.
  
 
=== You can only build one building ===
  
'''PL''': I think inherent in all of this science is the idea that there is a right answer, a singular right answer. I think that’s what optimization means. For the sort of stuff we build, we never say, “This is the best way of doing it.” The last mile of the process has to be a human that either finishes it, fills in the gaps or chooses from the selection that is provided. We never ever give one answer.
  
I think someone in my world would say, “But Phil, we’re trying to build a building, so obviously we can only build one of them?” This is not quite what I mean. I think there’s an idea within all of the scientific constructs of the second half of the twentieth century, where computer vision and perception, computer intelligence, whatever you want to call it, and genetics are the two biggest things. Within both of those fields, there’s the idea that we will know, that we will at some point find out a truth about ourselves as humans and about ourselves plus machines. And we will make machines that look like us and then tell ourselves that because the machine performs like this, we are like those machines. I think it’s a tendency which is just super Modernist.
  
 
They want a laser line to get to the best answer, the right answer. But in order to get to that, the thing that troubles me probably most of all, and this is true in all of these systems whether parametric or genetic, is the way in which the system assumes a degree of homogeneity.
 
=== We’re not behaving like trained software developers ===
  
'''PL''': By now we have about twenty people on our team and they’re almost all architects.
  
When I do a presentation in a professional context, I have a slide that says, “We’re not software developers, but we do make software.” And then I try to talk about how the fact that we’re not trained as software developers, means that we think about things in different ways. We don’t behave like them. We don’t have these normative behaviors from software engineering in terms of either what we create or in the way in which we create things. And as we grow, we make more things that you could describe as software, rather than toolkits or workflows.
  
After one of these events, someone came up to me and said, “Thank you, that was a very interesting talk.” And then she asked, “So who does your software development? To whom do you outsource the development?” It was completely alien to this person that our industry could be responsible for the creation of software itself. We are merely the recipients of product satisfaction.
  
 
Architects are not learning enough about computational technology either practically or critically, because we’ve been kind of infantilized to be the recipient discipline.
 
=== Not everyone can take part ===
  
'''PB''': We noticed a redistribution of responsibilities and also a redistribution of subjectivity, which seems to be reduced to a binary choice between either developer or user.
  
'''PL''': I think that’s true. It feels like we’re back in the early nineties actually. When computational technology emerged into everyday life, it was largely unknown or unknowable at the time to the receiving audience, to the consumers. There was a separation, an us and them, and even talking about a terrible version of Windows or Word or something, people still understood it as something that came from this place over there. Over the last two or three decades, those two things have been brought together, and it feels much more horizontal; anybody can be a programmer. And now we’re back at the place where we realize that not everybody can be part of creating these things at all.
  
Governments have this idea that we’ll all be programmers at some point. But no, we won’t, that’s absolutely not true! Not everybody’s going to learn. So one of the things I try to hold on to specifically is that we need to bring computational technology to our industry, rather than have it created by somebody else and then imposed on us.
  
The goal is not for us all to learn how to be a software company or a tech company.
  
 
=== If something will work, why not use it? ===
'''PB''': We are troubled by the way 3D techniques and technologies travel from one discipline to another. It feels almost impossible to stop and ask “hey, what decisions are being made here?” So we wanted to ask you about your experience with the intense circulation of knowledge, techniques, devices and tools in volumetric practice.
  
'''PL''': It is something that I see every day, in our industry, and in our practice. We have quite a few arguments about the use of image recognition or facial recognition technologies for example.
 
 
'''PL''': It is something that I see every day, in our industry, and in our practice. We have quite a few arguments about the use of image recognition or facial recognition technologies for example.  
 
  
 
When technologies translate into another discipline, into another job almost, you don’t just lose the ability to critique it, but it actually enhances its status by that move. When you reuse some existing technology, people think you must be so clever to re-apply it in a new context. In the UK there are tons of examples of R&D government funding that would encourage you to use established existing techniques and technologies from other sectors and reapply them in design and construction. They don’t want you to reinvent something and they certainly don’t want you to challenge anything. You’re part of the process of validating it and you’re validated by it. And similarly, the people of the originating discipline get to say, “Look how widely used the thing we created is”, and then it becomes a reinforcement of those disciplines. I think that it’s a huge problem for anyone’s ability to build critical practices towards these technologies.
 
When technologies translate into another discipline, into another job almost, you don’t just lose the ability to critique it, but it actually enhances its status by that move. When you reuse some existing technology, people think you must be so clever to re-apply it in a new context. In the UK there are tons of examples of R&D government funding that would encourage you to use established existing techniques and technologies from other sectors and reapply them in design and construction. They don’t want you to reinvent something and they certainly don’t want you to challenge anything. You’re part of the process of validating it and you’re validated by it. And similarly, the people of the originating discipline get to say, “Look how widely used the thing we created is”, and then it becomes a reinforcement of those disciplines. I think that it’s a huge problem for anyone’s ability to build critical practices towards these technologies.
  
That moment of transition from one field to another creates the magic, right? A technology apparently appears out of nowhere, lands fully formed almost without friction and also without history. It lacks history in the academic sense, the scientific process and indeed, also lacks the labor of all of the bodies, the people that it took to make it, no one cares anymore at that point.  
+
That moment of transition from one field to another creates the magic, right? A technology apparently appears out of nowhere, lands fully formed almost without friction and without history. It lacks history in the academic sense, the scientific process and indeed, also lacks the labor of all of the bodies, the people that it took to make it, no one cares anymore at that point.
  
What I’ve seen in the last five years is that proprietary software companies are pushing things like face recognition and object classification into Graphical User Interfaces (GUI’s), into desktop software. Something like a GAN or whatever is not a button and not a product; it is a TensorFlow routine or a bunch of python scripts that you get off GitHub.
+
What I’ve seen in the last five years is that proprietary software companies are pushing things like face recognition and object classification into Graphical User Interfaces (GUIs), into desktop software. Something like a GAN or whatever is not a button and not a product; it is a TensorFlow routine or a bunch of Python scripts that you get off GitHub.
  
There’s a myth-making around this, that makes you feel like you’re still engaged in the kind of practice of creating the technique. But you’re not, you’re just consuming it. It’s ready-made there for you. Because it sits on GitHub, you feel like a real coder, right? I think the recipient context becomes infantilized because you’re not encouraged to actually create it yourself.  
+
There’s a myth-making around this, that makes you feel like you’re still engaged in the kind of practice of creating the technique. But you’re not, you’re just consuming it. It’s ready-made there for you. Because it sits on GitHub, you feel like a real coder, right? I think the recipient context becomes infantilized because you’re not encouraged to actually create it yourself.
  
 
You’re presented with something that will work, so why not use it? But this means you also consume all of their thinking all of their ways of looking at the world.
 
You’re presented with something that will work, so why not use it? But this means you also consume all of their thinking all of their ways of looking at the world.

Latest revision as of 11:05, 11 May 2022


I followed a course around 2007 that is similar to what people do now. Some of the technology moves on, for sure, but you’re still learning the same kind of algorithms that were there in the 1950s, sixties or seventies. They were already old when I was doing them. You’re still learning some parametrics, some generative design, generative algorithms, genetic algorithms, neural networks and cellular automata; it is absolutely a classic curriculum. Same texts, same books, same references. A real echo chamber.

One of the things I hated when I studied was the lack of diversity of thoughts, of criticality around these topics. And also the fact that there’s only a very narrow cross-section of society involved in creating these kinds of techniques. If you ever mentioned the fact that some of these algorithmic approaches came from military research, the response was: “So what?”. It wasn’t even that they said that they already knew that. They were just like “Nothing to say about that, how can that possibly be relevant?”

How can you say it actually works?

PL: When building the team, I was very conscious about not stepping straight into the use of generative design technologies, because we certainly haven’t matured enough to start the conversation about how careful you have to be when using those techniques. We are working with quite complex situations and so we can’t have a complex algorithm yet because we have too much to understand about the problem itself.

We started with a much more parametric and procedural design approach, which was much more... I wouldn’t say basic... but lots of people in the team got quite frustrated at the beginning because they said: we can use this technique, why don’t we just use it? It’s only this year that we started using any kind of generative design algorithms at all. It was forced on us actually, by some external pressures. Some clients demanded it because it has become very fashionable, and they insisted that we did it. The challenge, or the problem, or the kind of slippage, is how to try and build something that uses those techniques, but to do it consciously. And we are not always successful in achieving that, by the way.

The biggest thing we were able to achieve is the transparency of the process because normally everything that you pile up to build one of those systems, gets lost. Because it is always about the performance of it, that is what everybody wants to show. They don’t want to tell you how they built it up bit by bit. People just want to show a neural network doing something really cool, and they don’t really want to tell you how they encoded all of the logic and how they selected the data. There are just thousands of decisions to make all the way through about what you include, what you don’t include, how you privilege things and not privilege other things.

At some point, you carefully smooth all of the elements or you de-noise that process so much… You simplify the rules and you simplify the input context, you simplify everything to make it work, and then how can you say that it actually works? Just because it executes and doesn’t crash, is that really the definition of functionality, what sort of truth does it tell you? What answers does it give you?

You make people try to understand what it does and you make people talk about it, to be explicit about each of those choices they make, all those rules, inputs, logics, geometry or data, what they do to turn that into a system. Every one of those decisions you make defines the n-dimensional space of possibilities. And if you take some very complicated input and you can’t handle it in your process and you simplify so much, you’ve already given a shape to what it could possibly emerge as. So one of the things we ended up doing is spending a lot of time on that and we discuss each micro step. Why are we doing it like this? It wasn’t always easy for everyone because they didn’t want to think about documenting all the steps.

Yesterday we had a two-hour conversation about mesh interpolation. The start of the conversation was a data flow diagram, and one of the boxes just said something like: “We’re just going to press this button and then it turns into a mesh.” And I said: “Woah, wait a minute!” Some people thought: “What do you mean? It’s just a feature, it’s just an algorithm. It’s just in the software, we can just do it.” And I said, “No way.” That’s even before you get towards building something that acts on that model. I think that’s what we got out of it actually: by not starting with the most, let’s say, sophisticated approach, we have had more time to reflect on what fueled the process.

Decisions have to be made

Possible Bodies: Do you think that transparency can produce a kind of control? Or that ‘understanding’ is somehow possible?

PL: It depends what you mean by control, I would say.

It is not necessarily that you do this in order to increase the efficacy of the process or to ensure you get better results. You don’t do it in order to understand all of the interactions, because you cannot do that, not really. With a simpler algorithmic process you can have an idea of how it’s operating; there is some truth in that, in the transparency, but you lose it quite quickly as the complexity grows. It’s more to say that you re-balance the idea that you want to see an outcome you like, and therefore then claim that it works. I want to be able to be explicit about everything that I know, all the way along. In the end that’s all you have. By making explicit that you have made all these steps, you make clear that decisions have to be made. That at every point you’re intervening in something, and it will have an effect. Almost every one of these things has an effect to a greater or lesser extent, and we hardly encounter anything that didn’t really matter. Not even if it was a bug. If something wasn’t really affecting the system, it’s probably because it was a bug in the process rather than anything else.

I think that transparency is not about gaining control of a process in itself, it’s about being honest with the fact that you’re creating something with a generative adversarial network (GAN) or a neural network, whatever it is. That it doesn’t just come from TensorFlow,[3] fully made and put into your hand and you just press play.

Getting lost in nice little problems

PL: The point I was trying to make to everyone on the team was, well, if you simplify the mesh so much in order that it’s smooth and so you can handle it in the next process, what kind of reliance can you have on the output?

I’ll tell you about a project that’s sort of quite boring. We are developing an automated process for cable routing for signaling systems in tunnels. We basically take a point-cloud survey of a tunnel and we’re trying to route this cable between obstacles. The tunnel is very small, there is no space, and obviously there’s already a signaling system there. So there are cables everywhere and you can’t take them out while you install the new ones; you have to find a pathway. Normally this would be done manually. Overnight, people would go down in the tunnel and spray paint the wall, then photograph it, then come back to the office and try and draw it. So we’re trying to do this digitally and automate it in some way. There are some engineering rules for the cables, which have to be a certain diameter. You can’t just bend them in any direction... it was a really nice geometric problem. The tunnel is a double curvature, and you have these point-clouds... there were loads of quite nice little problems and you can get lost in it.

PB: It doesn’t sound like a boring project?

PL: No it’s absolutely not boring, it’s just funny. None of us have worked in rail before. No one has ever worked in these contexts. We just turned up and went: “Why’d you do it like that?”

Once you finally get your mesh of the tunnel, what you’re trying to do is subdivide that mesh into geometry again, another nice problem. A grid subdivision, or triangles, or hexagons, my personal favorite. And then you’re trying to work out: which of these grid cells already contains a signal box, a cable or another obstruction? What sort of degree of freedom do I have to navigate through this? You take a very detailed, sub-millimeter-accuracy point-cloud and reduce it into a subdivision of squares, simplifying it right down. And then you turn it into an evaluation. And then you have a path-finding algorithm that tries to join all the bits together within the engineering rules, within how you can bend the cable. And you can imagine that by the time you start the mesh subdivision, if you have processed that input to death, it’s going to be absolutely meaningless. It will work in the sense that it will run and it will look quite cool, but what do I do with it?
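As a rough sketch of the kind of pipeline this describes — flatten the surveyed surface into a grid of square cells, mark the cells that already contain an obstruction, and search for a route whose heading never changes too sharply — the following Python is illustrative only: the function names, cell size, point-count threshold and 45-degree turn limit are assumptions, not the team’s actual tooling.

    import heapq
    import itertools
    import numpy as np

    def occupancy_grid(points_xy, cell=0.05, min_hits=3):
        """A cell counts as blocked if at least `min_hits` survey points fall inside it."""
        origin = points_xy.min(axis=0)
        idx = np.floor((points_xy - origin) / cell).astype(int)
        counts = np.zeros(idx.max(axis=0) + 1, dtype=int)
        np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
        return counts >= min_hits

    def route_cable(blocked, start, goal, max_turn=1):
        """Dijkstra-style search over free cells; each step may change heading by at most
        `max_turn` x 45 degrees, a crude stand-in for the cable's minimum bend radius."""
        dirs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        tie = itertools.count()  # tie-breaker so the heap never has to compare paths
        frontier = [(0.0, next(tie), start, None, [start])]
        seen = set()
        while frontier:
            cost, _, here, heading, path = heapq.heappop(frontier)
            if (here, heading) in seen:
                continue
            seen.add((here, heading))
            if here == goal:
                return path
            for i, (dx, dy) in enumerate(dirs):
                if heading is not None and min(abs(i - heading), 8 - abs(i - heading)) > max_turn:
                    continue  # this step would bend the cable too sharply
                nxt = (here[0] + dx, here[1] + dy)
                if not (0 <= nxt[0] < blocked.shape[0] and 0 <= nxt[1] < blocked.shape[1]):
                    continue
                if blocked[nxt]:
                    continue  # the cell already holds a signal box, a cable or another obstruction
                step = 1.4142 if dx and dy else 1.0
                heapq.heappush(frontier, (cost + step, next(tie), nxt, i, path + [nxt]))
        return None  # no feasible route: report it rather than quietly relax the rules

The sketch makes the same point as the conversation: the cell size and the `min_hits` threshold have already decided what counts as an obstruction long before any path-finding runs.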

I try to talk about language with everybody

PL: I try to talk a little bit about language with everybody. I’m trying not to overburden everybody with all of my predilections. I can’t really impose anything on them. Language is a big thing, like explaining Genetic Algorithms with phrases like, “So this is the population that would kill everybody, that’s like unsuitable or invalid.” For example, if you use a multi-objective Genetic Algorithm, you might try to keep an entire set of all solutions or configurations, as we would call them, that you create through all of the generations of the process. The scientific language for this is “population”. That’s how you have to talk. You might say, “I have a population of fifty generations of the algorithm. Five thousand individuals would be created throughout the whole process. And in each generation you’re only ‘breeding’ or combining a certain set and you discard the others.” You leave them behind, that’s quite common. And we had a long talk about whether or not we should keep all of the things that were created, and the discussion was going on like, “But some of them were just rubbish. They’re just stupid. We should just kill them, no one needs to see them again.” And I’m like, “Well, I don’t know, I quite like to keep everybody!”

Of course, all you’re really doing is optimizing, tending towards something that’s better, and you lose the possibility of chance and miss something. There’s a massive bit of randomness in it, and you have a whole set of controls over how much randomness you allow through the generational process. So I have this massive metaphor that comes with hugely problematic language around genetics, and all of that is encoded, with even more problematic language and without any criticality, into the algorithmic process. And then someone is telling me, “I’ve got a slider that says increase the randomness on that.” So it’s full of all those things, which I find very challenging.
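For readers who have not met that vocabulary, a minimal genetic-algorithm sketch in Python — assumed bit-string genomes, a toy fitness and arbitrary numbers, not the project’s code — shows where the “population”, the “breeding”, the discarded individuals and the “randomness slider” (the mutation rate) sit:

    import random

    def evolve(fitness, genome_length=16, pop_size=50, generations=100, mutation_rate=0.05):
        """Evolve bit-string 'configurations'; return every individual ever created."""
        rng = random.Random(0)
        population = [[rng.randint(0, 1) for _ in range(genome_length)] for _ in range(pop_size)]
        archive = []  # every configuration from every generation, kept rather than discarded
        for gen in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            archive.extend((gen, individual) for individual in scored)
            parents = scored[: pop_size // 2]  # selection: the set that gets to "breed"
            children = []
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, genome_length)  # single-point crossover
                child = a[:cut] + b[cut:]
                # the mutation rate is the "randomness slider"
                child = [bit ^ 1 if rng.random() < mutation_rate else bit for bit in child]
                children.append(child)
            population = children
        return archive

    # Toy usage: maximize the number of ones in the genome.
    all_individuals = evolve(fitness=sum)

Returning the whole archive instead of only the fittest individual is the choice described above: nothing in the technique itself forces the process to throw the “rubbish” away.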

But if you could ever strip away the language and all of those kinds of problems, and look at it purely in terms of what it does, it’s still interesting as a process and it can be useful. The problem is not what the algorithm does; it’s what those algorithms have culturally come to represent in people’s imagination.

The Hairy Hominid effect

PB: We would like to bring up the Truthful Hairy Hominid here.[4] The figure emerged when looking over the shoulder of a designer using a combination of modeling software to update the representation of human species for the “Gallery of Humankind”. They were working on one concrete specimen and the designer was modeling their hair, which was then going to be placed on the skin. And someone in our group asked the designer, “How do you know when to stop? How many hairs do you put on that face, on that body?” The designer explained that the museum’s scientific committee had handed him some books that contained some scientifically verified information, but that all the rest was basically an invention. So he said that it’s more or less this amount of hair, or this color, this density of hair. And this is what we kept with us: when this representation is finished, when the model is done and brought from the basement to the gallery of the museum, that representation becomes the evidence of truth, of scientific truth.[5]

PL: It acts as a stabilization of all of those thoughts, scientific or not, and by making it in that way, it formalizes them and becomes unchallengeable.

PB: The sudden arrival of an invented representation of hominids on the floor of a natural science museum, this functional invention, this efficacy, is turned into scientific truth. This is what we call “The Hairy Hominid effect”. Maybe you have some stories related to this effect, on the intervention of what counts or what is accountable? Or is the tunnel already one?

PL: Well, the technology of the tunnel project maybe is, and how we’re using this stuff.

A point-cloud contains millions and millions of data-points from surveys, like in LiDAR scanning. It’s still really novel to use them in our industry, even though the technology has been around for years.[6] I would say the reason it still gets pushed as a thing is because it has become a massive market for proprietary software companies, who say: “Hey, look, you have this really cool map, this really cool point cloud, wouldn’t it be cool if you could actually use it in a meaningful way?” And everybody goes, “Yes!”, because these files are each four and a half gigabytes, you need a three-thousand-pound laptop to open them, and it’s not even a mesh, you can’t really do anything with it, it just looks kind of cool. So the software companies go: “Don’t worry about it. We’ll sell you thousands of pounds’ worth of software, which will process this for you into a mesh.” But no one really is thinking about, well… how do you really process that?

A point-cloud is just a collection of random points. You can understand that it is a tunnel or a school or a church by looking at it, but when you try and get in there and measure, if you’re really trying to measure a point cloud... what point do you choose to measure? And whilst they say the precision is like plus or minus zero point five millimeters... well, if that was true, why have we got so much noise?
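One way to make that question concrete — purely an illustration, with an assumed search radius and no claim about survey practice — is to gather the scan points near the spot you want to measure, fit a plane through them, and look at how far they scatter around it:

    import numpy as np

    def local_scatter(points, query, radius=0.03):
        """RMS distance of nearby points from a best-fit local plane, in the cloud's units.
        Assumes at least a handful of points fall within `radius` of `query`."""
        nearby = points[np.linalg.norm(points - query, axis=1) < radius]
        centred = nearby - nearby.mean(axis=0)
        # the plane normal is the direction of least variance (smallest singular vector)
        normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
        residuals = centred @ normal
        return len(nearby), float(np.sqrt((residuals ** 2).mean()))

If the scatter comes out at several millimeters, the quoted half-millimeter precision is not the number that matters to whoever has to route a cable past that wall.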

The only thing that’s real are the data points

PL: One of the things that everybody thinks is useful is to do object classification on a point-cloud: to find out what’s a pipe, what’s a light, what’s a desk, what’s a chair. To isolate only those points that you see, put them on a separate layer in the model, and sort all those things by category. The way that’s mostly done right now, even in expensive proprietary software, is manually. So somebody sits there and puts a digital lasso around a bunch of points. But then how many of the points? When did you stop? How did you choose how to stop? Imagine processing ten kilometers of tunnel manually...

PB: It’s nicer to go around with spray paint then.

PL: Definitely.
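Under the hood, that digital lasso is not much more than a point-in-polygon test on a projection of the cloud. A hypothetical sketch — the function names and the choice of projection plane are assumptions, not any vendor’s tool:

    import numpy as np
    from matplotlib.path import Path

    def lasso_select(points_xyz, polygon_xy):
        """Return a boolean mask of points whose X/Y projection lies inside the lasso polygon."""
        lasso = Path(np.asarray(polygon_xy))
        return lasso.contains_points(points_xyz[:, :2])

    # e.g. label the selected points as "signal box" on their own layer:
    # labels = np.where(lasso_select(cloud, drawn_polygon), "signal_box", "unclassified")

Where the hand stops drawing the polygon is where the classification stops: the category is whatever the operator decided to enclose.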

The most extensive object classification techniques come from autonomous vehicles now, that’s the biggest thing. These data-sets are very commonly shared and they do enough to say, “This is probably a car or this is a sign that probably says this” but everything is guesswork. Just because a computer can just about recognize some things, it is not vision. I always think that computer vision is the wrong term. They should have called it computer perception.

There is a conflation in the uses of computer perception for object classification: what even is an object, and anyway, who really cares whether it’s this type or that, what’s it all for? Conflating object classification with point-cloud technology as a supposedly perfect representation is actually useless, because you can’t identify the objects that you need, and anyway it has all these gaps, because it can’t see through things. And then there is a series of methods to turn that into truth: you de-noise by only sampling one in every three points… You do all of these things to turn it into something that is ‘true’. That’s really what it is like. It’s a conflation of what’s real, while the only thing that’s real are the data points, because, well, it did capture those points.
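That “only sampling one in every three points” is, concretely, nothing more than decimation — a one-function illustration, not a claim about any particular product:

    import numpy as np

    def decimate(points, keep_every=3):
        """Keep every `keep_every`-th point; two thirds of the capture is simply discarded,
        and nothing about the noise in the remaining points has changed."""
        return points[::keep_every]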

A potential for possibilities

PB: When we spoke in Toronto six years ago, you defended generative procedures against parametric approaches, which disguise the probable as possible. Did something change in your relation to the generative and its transformative potential?

PL: I think it became more complex for me when you actually have to do it in real life. I still think that there are huge risks in both approaches, and at the time I probably thought that the reward was not worth the risk in parametric approaches. If you can be good at the generative thing, that’s riskier, and it’s much easier to be bad at it, but the potential for possibilities is much higher.

What is more clear now is that these are general processes that you have to encounter, because everybody else is doing it, the bad ones are doing it. And I think it’s a territory that I’m not prepared to give up, that I don’t want to encounter these topics on their terms. I don’t consider the manifestations, those that we don’t like in lots of different ways, to be the only way to use this technology. I don’t consider them to be the intellectual owners of it either. I am not prepared to walk away from these techniques. I want to challenge what they are in some way.

Over the last few years of building things for people, and working with clients, and having to build while we were also trying to build a group of people to work together, you realize that the parametric or procedural approaches give you an opportunity to focus on what is necessary, to clarify the decision-making in all these choices you make. It is more useful in that sense. I was probably quite surprised how little people really wanted to think about those things in generative processes. So we had to start a little bit more simple.

You have to really think first of all, what is it you’re going to make? Is it okay to make it? There’s a lower limit almost of what’s okay to make in a parametric tool because changes are really hard, because you lock in so many rules and relationships. The model can be just as complicated in a generative process, but you need to have a kind of fixed idea of what the relationships represent within your model, within your process. Whereas in generative processes, because of the very nature of the levels of abstraction, which cause problems, there are also opportunities. So without changing the code, you can just say, well, this thing actually is talking about something completely different. If you understand the math of it, you can assign a different name to that variable in your own mind, right? You don’t even need to change the code.

With a parametric approach you’re never going to get out of the fact that it is about a building of a certain type, you can never escape that. And we built parametric tools to design housing schemes or schools as well as some other infrastructure things, data centers even, and that is kind of okay, because the rules are not controversial when you think about schools for example. And you’re probably thinking, hang on a minute, Phil, these can be controversial, but in the context of our problem definition, they were unchallengeable by anybody, they came from the government.

Showing the real consequences

PL: Parametric approaches make problems in the rules and processes visible. I think that’s a huge thing. Because of the kind of projects we are building, we are given a very hard set of rules that no one is allowed to challenge. So you try and encode them into a parametric system and it won’t work basically.

In the transport infrastructure projects we were doing, there are rule changes with safety, like distance between certain things. And we could show what the real consequences would be. And that this was not going to achieve the kind of safety outcome that they were looking for. Sometimes you’re just making it very clear what it was that they thought that they were asking for. You told us to do it like this, this is what it gives you, I don’t think that’s what you intended.

We never allowed the computer to solve those problems in any of the things we’ve built. It just tells you, just so you know, that did not work, that option. And that’s very controversial, people often don’t really like that. They’re always asking us to constrain them. “Why do you let the system make a mistake?”
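A minimal sketch of that stance — hypothetical rule and option names, not the firm’s code — is a checker that returns the violations and leaves the decision where it was:

    def check_option(option, rules):
        """Return a list of human-readable violations; an empty list means the rules pass."""
        violations = []
        for rule in rules:
            ok, message = rule(option)
            if not ok:
                violations.append(message)
        return violations

    # Example rule (assumed numbers): a minimum clearance between two elements, as handed to us.
    def minimum_clearance(option, min_gap=1.2):
        gap = option["walkway_gap"]
        return gap >= min_gap, f"clearance {gap:.2f} m is below the required {min_gap} m"

    report = check_option({"walkway_gap": 0.9}, [minimum_clearance])
    # -> ["clearance 0.90 m is below the required 1.2 m"]; what to do about it stays a human choice.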

Sometimes it is not better than nothing

PB: When we speak to people who work with volumetric systems, whether on the level of large-scale databases for plants, or for making biomedical systems… when we push back on their assumption that this is reality, they will say, “Of course the point-cloud is not a reality. Of course the algorithm cannot represent population or desire.” But then, when the system needs to work, it is apparently easy to let go of what that means. The need to make it work erases the possibility for critique.

PL: One of the common responses I see is something like, “Yeah, but it is better than nothing.” Or that is at least part of the story. They have a very Modernist idea that you run this linear trajectory towards complete know-how or knowledge or whatever and that these systems are incomplete rather than imperfect and that if you have a bit more time, you’ll get there. But where we are now, it’s still better than then. So why not use it?

In the construction sector you constantly encounter these unlucky wannabe Silicon Valley tech billionaires, who will just say, “But you just do it with a computer. Just do it with an algorithm!” They’ve fallen for that capitalist idea that technology will always work in the end. It must work. And whenever I present my work at conferences, I always talk about my team, who the people in the team are, how we built it in some way. To the point that actually lots of people get bored of it. Other people, when they talk about these kinds of techniques, will say: “We’ve got this bright kid, he’s got a PhD from wherever. He’s brilliant. He just sits in the corner. He’s just brilliant.” And of course, it’s always a guy as well. They instrumentalize these people as the device to execute their dream, which is that the computer will do everything. There’s still this kind of massively Modernist idea that it’s just a matter of time until we get to that.

Sometimes a point-cloud is not better than nothing because it gives you a whole other problem to deal with, another idea of reality to process. And by the time you get into something that’s usable, it has tricked you into thinking that it’s real. And that’s true about the algorithms as well. You’re wrestling with very complicated processes and by the time you think that you kind of control it, it just controlled you, it made you change your idea of the problem. You simplify your own problem in order that you can have a process act on it. And if you’re not conscious about how you’re simplifying your problem in order to allow these things to act on it, if you’re not transparent about that, if you don’t acknowledge it, then you have a very difficult relationship with your work.

Supposed scientific reality

PL: We have used genetic algorithms on a couple of projects now, and the client in one project was just not interested in what methods we were using. They did not want us to tell them, they did not care. They wanted us to show what it does and then talk about that, which is kind of okay. It’s not their job, anyway. The second client was absolutely not like that at all; they were looking for a full explanation of everything that we did. And our explanation did not satisfy them, because it didn’t fit with their dream of what a genetic process does.

We were fighting this perception that as soon as you use this technique, it should just work out of the box. And then we were building this thing over a matter of weeks, and it was super impressive how far we got, but he still told us: “I don’t understand why this isn’t finished.” But it took the US military fifty years to make any of this. Give me a break!

PB: The tale of genetics comes with its own promise, the promise of a closed circuit. I don’t know if you follow any of the critiques on genetics from microchimerism or epigenetics, basically anything that brings complexity. They ask: what are the material conditions in which that process actually takes place? It’s of course never going to work perfectly.

PL: The myth-making comes with the weight of all other kinds of science and therefore implies that this thing should work. Neural networks have this as well, because of, again, this storytelling about the science of it. I think the challenge for those generative processes is exactly in their link to supposed scientific realities, and the sort of one-to-one mapping of incomplete science, or unsatisfactory science, onto another incomplete, unsatisfactory discipline, without question. You can end up in a pretty spooky place with something like a Genetic Algorithm that is abstracted from biochemistry, arguing in a sort of eugenic way.

You can only build one building

PL: I think inherent in all of this science is the idea that there is a right answer, a singular right answer. I think that’s what optimization means. For the sort of stuff we build, we never say, “This is the best way of doing it.” The last mile of the process has to be a human that either finishes it, fills in the gaps or chooses from the selection that is provided. We never ever give one answer.

I think someone in my world would say, “But Phil, we’re trying to build a building, so obviously we can only build one of them?” This is not quite what I mean, I think there’s an idea within all of the scientific constructs of the second half of the twentieth century where computer vision and perception, computer intelligence, whatever you want to call it, and genetics, they’re the two biggest things. Within both of those fields, there’s the idea that we will know, that we will at some point find out a truth about ourselves as humans and about ourselves plus machines. And we will make machines that look like us and then tell ourselves that because the machine performs like this, we are like those machines. I think it’s a tendency which is just super Modernist.

They want a laser line to get to the best answer, the right answer. But in order to get to that, the thing that troubles me probably most of all, and this is true in all of these systems whether parametric or genetic, is the way in which the system assumes a degree of homogeneity.

It does not really matter that it is ultimately constrained

PL: I think with these generative algorithmic processes, people don’t accept constraint, either discursively or even scientifically. At most they would talk about the moment of constraint being beyond the horizon of usefulness. At some point, it doesn’t create every possible combination. Lots of people think that it can create every option that you could ever think of. Other people would say that it is not infinite, but that it goes beyond the boundary of what you would call ‘the useful extent of your solution space’, which is the kind of terminology they use. I think that there’s a myth that exists, that through a generative process you can have whatever you want. And I have been in meetings where we showed clients something that we’ve done and they say, “Oh, so you just generated all possible options.” But that’s not quite what we did last week!

There’s still that sort of myth-making around genetic algorithms; there’s an illusion there. And I think there’s a refusal to acknowledge that the boundary of that solution space is not really set by the process of generation. It’s set at the beginning, by the way in which you define the stuff that you act on through your algorithmic process. I think that’s true of parametrics as well, it’s just that it’s more obvious with parametrics: here’s a thing that affects this thing. And whether or not you complexify the relationships between those parameters doesn’t really matter, it’s still conceptually very easy to understand. No matter how complex you make the relations between those parameters, you can still get your head around it. Whereas the generative process is a black box to a certain extent, no one really knows, and the constraint is always going to be on the horizon of useful possibilities. So it doesn’t really matter that it is ultimately constrained.

We’re not behaving like trained software developers

PL: By now we have about twenty people on our team and they’re almost all architects.

When I do a presentation in a professional context, I have a slide that says, “We’re not software developers, but we do make software.” And then I try to talk about how the fact that we’re not trained as software developers, means that we think about things in different ways. We don’t behave like them. We don’t have these normative behaviors from software engineering in terms of either what we create or in the way in which we create things. And as we grow, we make more things that you could describe as software, rather than toolkits or workflows.

After one of these events, someone came up to me and said, “Thank you, that was a very interesting talk.” And then she asked, “So who does your software development? To whom do you outsource the development?” It was completely alien to this person that our industry could be responsible for the creation of software itself. We are merely the recipients of product satisfaction.

Architects are not learning enough about computational technology, either practically or critically, because we’ve been kind of infantilized into being the recipient discipline.

Not everyone can take part

PB: We noticed a redistribution of responsibilities and also a redistribution of subjectivity, which seems to be reduced to a binary choice between either developer or user.

PL: I think that’s true. It feels like we’re back in the early nineties actually. When computational technology emerged into everyday life, it was largely unknown or unknowable at the time to the receiving audience, to the consumers. There was a separation, an us and them, and even talking about a terrible version of Windows or Word or something, people still understood it as something that came from this place over there. Over the last two or three decades, those two things have been brought together, and it feels much more horizontal; anybody can be a programmer. And now we’re back at the place where we realize that not everybody can be part of creating these things at all.

Governments have this idea that we’ll all be programmers at some point. But no, we won’t, that’s absolutely not true! Not everybody’s going to learn. So one of the things I try to hold on to specifically is that we need to bring computational technology to our industry, rather than have it created by somebody else and then imposed on us.

The goal is not for us all to learn how to be a software company or a tech company.

If something will work, why not use it?

PB: We are troubled by the way 3D techniques and technologies travel from one discipline to another. It feels almost impossible to stop and ask “hey, what decisions are being made here?” So we wanted to ask you about your experience with the intense circulation of knowledge, techniques, devices and tools in volumetric practice.

PL: It is something that I see every day, in our industry, and in our practice. We have quite a few arguments about the use of image recognition or facial recognition technologies for example.

When technologies translate into another discipline, into another job almost, you don’t just lose the ability to critique it, but it actually enhances its status by that move. When you reuse some existing technology, people think you must be so clever to re-apply it in a new context. In the UK there are tons of examples of R&D government funding that would encourage you to use established existing techniques and technologies from other sectors and reapply them in design and construction. They don’t want you to reinvent something and they certainly don’t want you to challenge anything. You’re part of the process of validating it and you’re validated by it. And similarly, the people of the originating discipline get to say, “Look how widely used the thing we created is”, and then it becomes a reinforcement of those disciplines. I think that it’s a huge problem for anyone’s ability to build critical practices towards these technologies.

That moment of transition from one field to another creates the magic, right? A technology apparently appears out of nowhere, lands fully formed almost without friction and without history. It lacks history in the academic sense, the scientific process and indeed, also lacks the labor of all of the bodies, the people that it took to make it, no one cares anymore at that point.

What I’ve seen in the last five years is that proprietary software companies are pushing things like face recognition and object classification into Graphical User Interfaces (GUIs), into desktop software. Something like a GAN or whatever is not a button and not a product; it is a TensorFlow routine or a bunch of Python scripts that you get off GitHub.

There’s a myth-making around this, that makes you feel like you’re still engaged in the kind of practice of creating the technique. But you’re not, you’re just consuming it. It’s ready-made there for you. Because it sits on GitHub, you feel like a real coder, right? I think the recipient context becomes infantilized because you’re not encouraged to actually create it yourself.

You’re presented with something that will work, so why not use it? But this means you also consume all of their thinking, all of their ways of looking at the world.

Notes

  1. See: Possible Bodies, “MakeHuman,” in this book.
  2. See: “Phil Langley in conversation with Possible Bodies, Comprehensive Features,” https://volumetricregimes.xyz/index.php?title=Comprehensive_Features.
  3. TensorFlow is “An end-to-end open source machine learning platform” used for both research and production at Google. https://www.tensorflow.org/.
  4. “Item 086: The Truthful Hairy Hominid,” The Possible Bodies Inventory, 2014.
  5. Another aspect of the Hairy Hominid effect appears in our conversation with Simone C Niquille, “The Fragility of Life,” in this chapter.
  6. LiDAR is an acronym of “light detection and ranging” or “laser imaging, detection, and ranging”.


The first part of this conversation took place in 2015, in Toronto: Phil Langley in conversation with Possible Bodies, Comprehensive_Features