We hardly encounter anything that didn’t really matter

Comprehensive Features. Conversations with Phil Langley

[to be added]


The first part of the conversation, which took place on May 4, 2015 at Flight Cafe East, Toronto. Original recording: flightcafe.MP3

The environment of simulation

PB: There seem to be a few problems with how 3D operates. The first is how volumetric representations of physical objects in digital space are constrained by the mesh, a very particular way of dealing with inside and outside. The second is related to resolution: a disconnect between the hyper-real and a crude underlying structure. And the third is related to the parametric, and the limited 'space of possibilities' it creates.

PL: I think these issues all intersect as well. There is also something about the environment of simulation, the world in which the simulation is created in order to then simulate within, or to represent within or to modify within.

From my background as an architect you have the simulation of a building's performance; that is the way I am normally exposed to it: how much energy would it use, would it be structurally stable, these kinds of questions. In order to simulate that, you would have a digital model and you run a bunch of algorithms over it that would test solar gain (?), structural stability, things like this. But that digital model is very much a reduction of the most detailed model that might exist during the design process, partly, or in fact mostly, due to computing power. You have to simplify the geometry of that building down to boxes, effectively, and that reduces the relationship between two rooms that share a wall. The algorithm does not really care exactly how big that room is; a change in geometry, even in the order of a couple of meters in floor area, will not really affect the simulation outcome. The simulation I am talking about is of a certain type (?): the constructing of a world in which you test your 'proposal'. Very often that is merely talked about as the environment in which you construct, and in a structural simulation it is what kind of algorithms you are using, is it efficient, does it take this into consideration, does it take that into consideration. But very rarely do you see what you need to do to the geometry in order to expose or subject it to the algorithm.

I think there is almost a blindness to the fact that the nature of the algorithm affects the nature of the model. So in architectural discourse on digital modeling at the moment you have a lot of hyperbole about how this stuff gets way more capable of modeling huge amounts of detail; the model is almost quasi one-to-one, it might have every nut and bolt, the window ledge will be modeled with the exact profile... There is a fallacy that that geometry is then translated into the environmental simulation and will render a better result, but it doesn't. The model that you see on your screen is not the model that is actually analysed.
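A minimal sketch of the kind of reduction being described, with rooms collapsed to boxes and a thick wall to a single shared face. The class names, the U-value and the heat-flow formula are illustrative assumptions for this sketch, not taken from any particular simulation engine.

```python
# A deliberately crude building model of the kind an energy simulation
# actually sees: rooms reduced to boxes, a 300 mm wall to a single face.
# All names and coefficients are illustrative, not a real engine.
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    floor_area: float   # m^2; a couple of m^2 more or less barely registers
    height: float       # m

    @property
    def volume(self) -> float:
        return self.floor_area * self.height

@dataclass
class SharedFace:
    """A thick wall reduced to a single face between two rooms."""
    room_a: Room
    room_b: Room
    area: float          # m^2
    u_value: float       # W/(m^2.K), standing in for all material detail

def heat_flow(face: SharedFace, delta_t: float) -> float:
    """Steady-state conduction through the face, ignoring everything else."""
    return face.u_value * face.area * delta_t

office = Room("office", floor_area=20.0, height=2.7)
corridor = Room("corridor", floor_area=8.0, height=2.7)
party_wall = SharedFace(office, corridor, area=10.8, u_value=1.8)

print(heat_flow(party_wall, delta_t=4.0))  # watts across the shared face
```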

A symbiotic relationship

PB: When we try to critique the crudeness of 3D models, the response is often: “we need more data”, “if only we had more computing power”, “it is a question of efficiency” … Is that the kind of reduction you are talking about? Meaning, is it a problem that could be solved if you had more data points or rendering power, so that the reduction could be minimized?

PL: My understanding of this is that there is a symbiotic relationship between the algorithm that runs the simulation and the structure of the model it operates on. It is not that it could ever understand the profile of a window sill or an opening or an overhang. It has been designed with the computing power available in mind, to deal with the fact that you just can't … So, there is a link between the digital environment in which you are simulating, the processes by which you have to modify the model, and the physical materiality of computing itself. You can't just swap one in and one out. They have probably evolved symbiotically.

I use the word 'reduction' and it is often pejorative. I don't necessarily mind that; what I mind is the lack of visibility in that process.

The social condition of use

PL: I think the bigger issue is more how that materiality is encoded into the model, what you consider worthy of encoding and what is left out. So the materiality of a building might get reduced to a face, a single line or a plane in the model, so that a wall that should be 300 mm thick only needs to be described as a single face for the purposes of energy use. But there is no user in there. There are no people. There is no behavioural model; the materiality of us is left out of that simulation. There is the tick, cross ... tick, cross next to the things you are going to include in it, and across the spectrum of super curvy algorithmic architecture and super boxy algorithmic architecture, it is still the same question for me actually: what have you included and what have you left out in order to get to this outcome?

PB: Do you think that there is a way that 'thinking with computers' could be more interesting than ticking boxes?

PL: Well, in the nineties there was much excitement about the potential of all these new software techniques in architecture, also linked to post-modernism, as architects were, as always, late to the party. The potential of that kind of expressive geometry, which you could generate more easily using those kinds of algorithmic techniques, was to be a way of allowing for a redundancy of functional spaces. So rather than saying "this zone is for this, and this is for this, enclosed by walls, and whatever", you would actually have a place of encounter in which uses and behaviour that you would not have predicted could flourish somehow. You are not so prescriptive anymore about what kinds of spaces you make. It was a very exciting moment. When you look at the designs fifteen years later, and the glossy pictures by people like NOX for example, it looks a bit naive at best and suspicious at worst. But they come from this tradition of being less prescriptive and less deterministic about behaviour inside a building. That was a very exciting moment, but I feel they bankrupted themselves, almost literally actually, by trying to build these things. The problem for me is ... it is not just the materiality of the building that creates the social condition of its use.

Even if you are trying to encourage more different types of sociality in your building, you probably shouldn't be doing that only through the materiality of the building.

And that is where it hit its buffer, I would say. And that mode ended. They are still doing the same thing, regardless of what it looks like or the technique ... or what they hope the space provides. They still use the same technique as the Greeks were doing by building a big temple and going ...

PB: "This is where you go!"

PL: Exactly. So it wasn't really plugging into other kinds of ... What groups like Stealth did, although they left the digital behind a little bit now.

PB: When looking at them now, it is hard to imagine how these iterations could be more interesting, more inclusive somehow and talk back to multiple parameters, not just those provided by the software.

PL: This is how I got interested in coding at all. I was fascinated by these glossy images of almost impossible geometry and the 'coolness'. I wanted to be able to make something like that - not necessarily to build it, but to somehow produce it on the screen. But I never really did it in the end. By the time I learned how to code, my interest in that approach had ended, and although I wanted to make shapes like that, I didn't want buildings to be like that. In fact, it was more the potential for constant iteration and change that interested me, and the possibility of not just producing that thing we have seen a million times before, of almost 'de-professionalising' the process somehow, from an architectural point of view. In the professional world there are countless books about what you do with concrete, what you do with bricks, what you do with steel, what you do with timber, whereas these other techniques were saying 'actually, who knows what you do with this!', and that was quite exciting. There was this instability around what the profession was about, and somehow the existing processes of design were questioned by it.

PB: But then in that de-professionalisation, how does the user find a place?

PL: Well, in that model in the 1990s, not at all! It is conditioned by the building's materiality as its main approach, and for a long time I found it very difficult to think of a way that it could be more than this. And actually the trick is not to be obsessed only with the materiality, to broaden what that 'expressive' model is and what it could be based on, and to include more things inside the model. And I think it is very hard to do! But it comes down to the way in which you define the system you are operating in. The architectural paradigm is a Newtonian physics world of gravity. Actually that is a bit harsh, it is a bit more sophisticated than that, but it is still Newtonian in the user experience, even if the underlying physics is a bit more sophisticated. It is a world in which there are no people... let alone different types of people. There are only these conditions of performance, there is only this materiality, there is only this physical-sciences behaviour. But why couldn't you build a different world? And then you could test other things. Of course, now you have software that can test how people move in an airport, but that is not really ...

That's a really bad way of doing it, and people like Space Syntax, for example, are involved in that kind of thing, and it is something that I completely disagree with. It is the reduction of the user to generalised behaviour.

Limited materiality

PB: You were talking about your thesis, and that in the chapter about simulation you wanted to look at MakeHuman. So this software - well, I'm not sure it is about people! - but at least it is about body shapes. Can you explain why, as someone looking at, let's say, algorithmic architecture and the digital technologies around it, software for building humanoid figures would be considered interesting? How does it relate? Or not?

PL: I think it relates very, very closely, firstly because of this question of materiality. Somehow the software is deciding what is going to be used to define a human, and that materiality is defined through the mesh, but it doesn't include anything beyond the surface, apart from the topological skeleton which everything hangs off. But it is a very limited materiality that they have defined, and that is exactly what an architectural model would do. So it is fixing that but also giving you some feeling of flexibility, which, again, is what this kind of parametric architecture does. There is this sense that you are somehow in control of potential outputs or outcomes, and it is not an uncommon thing to see in a digital architectural design process. Certainly, it is at the very least implicit that you fix some things - of the design brief, let's say - and then you are able to flex, but only within those things. And who determines where those things sit, and who fixes them? This is a very common approach in digital and non-digital architectural design processes, and the MakeHuman software has exactly this kind of interface, in terms of the parametric sliders and in terms of the visualisation of change in the color or the materiality of the body itself. And also what it is leaving out - there is no nervous system, there is no circulatory system (insofar as you can describe those two things) and certainly no personality!

I guess the analogy between architecture and MakeHuman would be less strong if the features hadn't been so 'comprehensive'. Because it tries to do so much, it is really similar to architectural software, which is also really trying to give you everything. But it just can't, so why try?
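A minimal sketch of the bounded, slider-driven parameter space described above: which parameters exist and what ranges they move within are fixed in advance, and you can only flex inside them. The slider names and ranges here are assumptions made for the sake of illustration, not MakeHuman's actual parameters.

```python
# Someone else has already decided which sliders exist and what their
# bounds are; 'flexing' only ever happens inside those pre-fixed ranges.
SLIDERS = {
    "height": (0.0, 1.0),
    "weight": (0.0, 1.0),
    "age":    (0.0, 1.0),
}

def set_slider(figure: dict, name: str, value: float) -> dict:
    """Clamp the value into the pre-fixed range; anything outside is impossible."""
    lo, hi = SLIDERS[name]           # KeyError: parameters not on the list do not exist
    figure[name] = min(max(value, lo), hi)
    return figure

figure = {name: 0.5 for name in SLIDERS}   # the 'default' body
set_slider(figure, "age", 2.0)             # silently clamped back to 1.0
print(figure)
```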

Inappropriate features

PB: So in MakeHuman there is the discreteness of the sliders, in terms of how they are presented and how they are actually not separate. There is the fact that they are aligned as similar horizontalities and at the same scale, which is really nauseating. Then there are the different types of binary that are going on, no?

Man/Woman is quite a different horizon from Young/Old! And the most surprising one, then, is to put race in this. Have you found something in the documentation, or in the way the code is written?

PL: I have been trying to go through it to find when it appeared. One of the things I am interested in with this kind of software is how stable it becomes, in terms of its features and functionality. At what point can you no longer change the software, and can only talk about what it is good and bad for? In my attempts to modify the source code, my technical ability was nowhere near enough to deal with the maths behind the mesh management. It is way beyond my coding skills and maths. But I was able to understand the parametric links and to de-couple some of them, or to alter the proportionality in order to generate some other results, and I was able to change the color effect of the 'ethnicity' sliders to RGB. So there is so much stability in it now that to change it yourself you would need to be extremely skilled. It is so tightly wound, so efficient in what it does, that to unpick it was way beyond me. Instead, I tried to go back to find out when the features arrived, at what point. Having looked at a bunch of their repositories of previous versions, the 'ethnicity' sliders have been there for ages. It is not a thing that came late; it was in their minds, it seems, a long, long time ago. If you look on the message boards you can see a few people complain and then they get told to shut up!

PB: Complain about what?

PL: About the 'ethnicity' feature being inappropriate, but there didn't seem to be many people that supported that view.

Mesh integrity

PL: If you go into the folder structure and see how it is organised you can see the 'genitals' folder or whatever. But you can't just swap in an alternative model. There is an individual body part model - say the genitals or the eyes - but you can't just take that file and swap it for your own one. It has to fit a certain file type and be processed into a certain type of data structure for it to be readable. I never found an easy way of just swapping the parts. And again, I sort of understand the huge amount of technical knowledge that has gone into making this very efficient system, but it is super frustrating not to be able to drop things in and out, even if it made an imperfect mesh. And their whole thing is about making the perfect mesh actually, whereas I would rather be able to put a head on the end of a leg and see what it looks like! It is really un-playful in that way.

PB: From what we saw at GenderBlending I am confused, especially when we were making gender changes. When we looked at it with Xavier, who has an understanding of what it means to change the gender of a physical body, he insisted that it is not like sticking on genitals. On the one hand it feels like a fragmented body image, and you describe how the elements are in different places, and at the same time it is quite unwilling to let go of its illusion of 'wholeness'.

PL: Absolutely. I think that the fragmented nature I could deal with. If it was almost a collaging tool, that wouldn't bother me at all. It would not be a direct analogy to actual bodies and would not be misunderstood as that. But actually it is a kind of collaging tool, hidden behind a huge amount of complex maths and code, to produce the impression of biological integrity. That goes with the 'scientific' data from which the body parts are generated and the supposed precision of those sliders, so that it tries to create the impression that it is a useful representation of bodily process and change. Actually, it is not at all. It is a collage of things, which I would be much more comfortable with, because it is more obviously wrong and would not be misunderstood as anything other than that. And it would be far more fun!

PB: The first thing you want to do is make a 'collage body' when you think about what it would mean to make a humanoid in software. And then you realise it is the hardest thing. Students managed to glitch it, and then they could find ways to expose it. For example, if you make certain combinations of 'race' and 'gender', then the skin colour doesn't extend to the genitals and they start to stick out because they are coloured differently. So in that sense you can start to reveal the fragmentation that is in there but that you cannot work with. So I was trying to talk about the three interconnected problems that we see coming together: the collage character that is hidden behind the need for digital integrity, which in turn becomes confused with the image of a natural body.

PL: Yes, that kind of mesh integrity...

PB: This is where the mesh problem and the resolution problem start to meet each other.

PL: That mesh integrity is driven by the underlying topology of the 'skeleton', and the resolution of that skeleton is super scary because you have these two oppositional things: a complete reduction in the freedom of movement through the simplification of the skeleton, and a mesh that is supposed to include all details and wrinkles.

An acknowledged relationship

PL: OK. So, in a generative technique - genetic algorithm, neural network, cellular automata would be described as generative techniques - you establish a set of rules that are explored through the execution of the algorithm, the outcome of which is determined by a process whose level of complexity means that you could not have predicted the result. So there is a gap between the system you put in place and the thing it produced. Whereas in a parametric system you make a box and you play within the box. In a parametric system you are rarely surprised.

PB: Because you get what you want.

PL: Right, you have already set a boundary on that space of possibility. And in fact, one of the images I did for GenderBlending was to put all of the sliders of MakeHuman to the left and then all of the sliders to the right - I know this was a little facetious and it is not really how the software is structured - but that's it, it's going to be within those two bounds.

Whereas in generative approaches, the idea is that you don't quite know what it will produce and then in some way the algorithm has an agency, which of course you are part of. For me it is a far nicer way of working, but again, so long as it is an acknowledged relationship between the agencies.
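To make the contrast concrete, here is a toy generative example: a minimal genetic algorithm in which the rules (fitness, selection, mutation) are written down but the particular result is not. The fitness function and all the numbers are invented for illustration; this is the shape of the technique, not a design tool.

```python
# A toy genetic algorithm: you script the rules, not the outcome.
import random

TARGET = 42.0

def fitness(x: float) -> float:
    return -abs(TARGET - x)                    # closer to the target is better

def mutate(x: float) -> float:
    return x + random.gauss(0.0, 1.0)          # small random variation

population = [random.uniform(0.0, 100.0) for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                   # keep the fittest few
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(max(population, key=fitness))            # near 42, by a path you did not script
```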

PB: So, you say that 'generative' has a complexity that might surprise you. Is the moment that agency goes away, then, the moment I understand what is happening? Is the complexity pseudo-magic? Is it just because I don't understand it that I am surprised?

PL: No, I don't think it is that you don't understand. It is about 'predictability' rather than 'understanding'. So if you have a genetic algorithm, which is a nice example, you can 'evolve' a design solution. Your algorithm becomes a metaphor of evolutionary process, at least as it was understood 20 years ago, because the algorithm is always super-far behind! Which I don't really mind, as it is an analogy more than anything else (at least it should be taken as that). So, you evolve a design solution and you are having a symbiotic relationship with...er...I am trying to use the term 'digital companion' in my writing.

It is not pseudo magic

I don't want it to be misunderstood as that thing that happens at the moment where everyone goes SARCASTICALLY 'aren't smart-phones changing us' or 'isn't digital technology...' or 'oooo my phone is really clever'. That's not what I mean. It is more that you make and remake your relationship, and it evolves and you evolve. So, with a genetic algorithm, you made a 'thing' from yourself and you can be very open and honest about that. It is a way of expanding that space of possibilities for me. It is, of course, limited. It is not magic, it is not pseudo magic - it can't just CLICK create something. It is unpredictable, but it is not that it can make anything at all - there are still boundaries to what it is capable of producing, it is just that you can't follow in your mind each one of those steps.

I think that at least you can't see the boundaries; that is what I would be interested in, that they are outside the periphery of your vision. That would be the ideal of a really good generative algorithm, from my point of view. That is not how everybody else would necessarily see it; some people really want to, you know...erm...they want their genetic algorithm to solve a very specific problem. Whereas, for me, parametric modeling technology and techniques and interfaces and whatever put 'front and centre' that it is in this sand pit. Whatever you are going to get, you can see the boundaries within which it is going to come.

PB: You could also say that's 'visibility'?

PL: Yes, absolutely. But I think it depends on your understanding of the process. It could be very explicit - 'it's just this, this is a thing that does that'. In MakeHuman, they don't say 'this is just some of the things that exist in the world'. It is very much 'this is the range of your humanity'!

I think there are also some very practical things that could be done. I think some of it is about its aspiration to represent 'all' physical bodies - I think they could be a bit more explicit about how it doesn't.

I think that the interface itself could be much more...er...honest about the interaction between variables.

Everything is parametric

PL: I think that 'parametric' is a much abused term now. Everything is parametric. As soon as you start to codify somehow, everything has a parameter. In architecture there are lots of, I would say - and you can quote me on this - odious books by people like Patrik Schumacher about how parametric architecture is a new 'thing', which I just can't deal with. Everything is parametric. A door is parametric - it can be a bit wider, a bit taller; that's parametric. As soon as you have a variable it is parametric, so I don't see it in that way at all.

PL: I see it more about ... er ... the difference being that the use of parametric ... or the appropriation of parametric as a term ... is a way of making a toy to play with. The sliders make it reconfigurable, but there is somehow a limited amount of possible outcomes. So you could calculate every single position of every slider, every single possible variation of that.

PB: So there is a number, even if there are a lot of variations.

PL: Yes, but you could still calculate it - that's a number, right? With a genetic algorithm, yes, you can do it, but that number is much, much bigger.

When you have a parametric system, you can basically know everything it can make as soon as you have defined it, every single possibility already exists.
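As a sketch of that point: once slider positions are discretised, the whole 'space of possibility' of a parametric system can be counted, and even enumerated, before anything is moved. The slider names and the eleven steps per slider are assumptions made for the example, not how MakeHuman actually stores its values.

```python
# Counting the parametric space: every outcome already 'exists' as a
# tuple of slider positions, known the moment the system is defined.
from itertools import product

sliders = {"gender": 11, "age": 11, "muscle": 11, "weight": 11}  # 11 steps each

total = 1
for steps in sliders.values():
    total *= steps
print(total)                      # 14641: a big number, but a number

all_figures = product(*(range(steps) for steps in sliders.values()))
print(next(all_figures))          # (0, 0, 0, 0): "all sliders to the left"
```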

So there are things that you did not really think were necessarily possible. And this is a very directed example: you know you are trying to evolve a brandy glass, you know what that looks like, as a human you have a relationship to what it is producing. But when you change a variable in that system - the topological definition of the glass, the degrees of freedom you give, you know, like you define this by point, point, point, sweep ... - then it can start to do almost anything. So as soon as you change some of these definitions, you really change huge amounts of what it can do, which is never true in a parametric system.

The visibility question ... how do you convey that complex process to somebody ... an interface is always about hiding the complex process somehow, could you ever make an interface that allowed you to get what is going on?

Interfacing the possible

PB: It is hard to interface something like that. You talked about that earlier, the need for interfaces in parametric software, that it does not really exist without it. In these generative processes, the exploration itself, the probing of what is possible, is where the interface seems to be?

PL: One of the reasons you don't see very many very good graphical user interfaces on genetic algorithm software is because it is not very helpful. Because in the end, what you are changing is so much under the hood ... There is no benefit in a slick, or even a contained, graphical user interface. You change something so fundamental when you modify it.

That is what I like; it is one of the things I try to talk about when I talk to students about code: that the stability of the software becomes problematic. The interface marks a level of stability ...

Well, this is now a system

PB: We talked about disconnected skins and crude skeletons, and how this limits humanoid figures, but also interactions between 'people' seem to be missing.

PL: I think the nervous system in general is a thing that is often missing. One of the things I put in the notes for Topological Subjectivities at GenderBlending is systems in bodies, an interesting one for me, because it is all about the line, the boundary you put around a bunch of things to say, "well, this is now a system". With the circulatory system or the whatever system ... and the nervous system is one where that boundary was felt to be very fixed in Western science for a long time, and now it is much more fuzzy and people are a lot less clear about where it begins and where it ends.

Everyone used to think it was the brain, and now they go ... oh, the spinal cord is kind of interesting too, so it is probably a bit of that. And there is a much more radical working field (?) where your entire nervous system contains all your brain power. You can't have 'brain power' anymore, is what some radical thinkers say; it is not a way (another way?) to talk about our capacity, our capability ... and you can imagine the historic change from thinking the heart was the only thing that mattered. What I like about that is that it is not that our bodies have changed, or particularly evolved, during that period; the materiality is exactly the same. But the topological understanding is what is evolving. And that is, I think, again the stability issue. During GenderBlending at some point we were talking about species, categorization, all this kind of stuff. For me the problem is not necessarily categorization, but more the stability of categorization, the fixity of it.

Categorization for me is a normal thing to do; it is just horrible when you fix it: "this is the only way you can be called". Topology in math, like set theory, is all about an approach in which objects can be in multiple sets. An object can be more things to other objects. It is one property that puts it in one set, although 'property' is not a very good word, but that does not preclude it from appearing in another set.
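A two-line illustration of the set-theoretic point, with deliberately made-up categories: belonging to one set does not preclude belonging to another.

```python
# Membership in one set does not fix what else a thing can be.
mammals = {"whale", "bat", "human"}
swimmers = {"whale", "human", "salmon"}

print("whale" in mammals and "whale" in swimmers)  # True: one object, two sets at once
print(mammals & swimmers)                          # overlap, not exclusion
```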