We hardly encounter anything that didn’t really matter

The first part of this conversation took place on May 4, 2015, at Flight Cafe East, Toronto. Original recording: flightcafe.MP3 (http://snelting.domainepublic.net/files/flightcafe.MP3).

Comprehensive Features. Interview with Phil Langley

FS: Maybe we can start with the different problems we have been running into with MakeHuman. I see three areas of problems, when trying to think about humanoids in digital space. One is related to the way 3D representations of physical objects in digital space are linked to the way the mesh functions, a very particular way of dealing with inside and outside, and that creates problems I think.

PL: Absolutely.

FS: The second is related to resolution: the disconnect between the hyper-real and the crudeness of the underlying structure.

PL: Absolutely.

FS: And the third is related to the tool being parametric. The fact that we are dealing with sliders and so on, and the limited 'space of possibilities' it creates. To me, each of them brings up different problems. Maybe we can go over these three areas. I don't know, maybe you see another entrance somehow?

PL: Well, I think they all intersect as well, from my point of view at least.

FS: Yes.

PL: For me, there is also something about the environment of simulation, the world in which the simulation is created in order to then simulate within, or to represent within or to modify within. And each of these problems you mention in some way speaks about that.

FS: So if you say 'simulation', what do you mean by that, or how you use that term?

PL: From my background as an architect, you have the simulation of a building's performance; that is the way I am normally exposed to it. So: how much energy would it use, would it be structurally stable, these kinds of questions. In order to simulate that, you would have a digital model and you run a bunch of algorithms over it that would test solar gain, structural stability, things like this. But that digital model is very much a reduction of the most detailed model that might exist during the design process, partly due to computing power, or mostly due to it in fact. You have to simplify the geometry of that building down to boxes, effectively, and that reduces the relationship between two rooms that share a wall; the algorithm does not really care exactly how big that room is. Even a change in geometry in the order of a couple of meters of floor area will not really affect the simulation outcome. The simulation I am talking about is of a certain type: the constructing of a world in which you test your 'proposal'. Very often what gets talked about is merely the environment in which you construct, and in a structural simulation it is: what kind of algorithms are you using, is it efficient, does it take this into consideration, does it take that into consideration. But very rarely do you see what you need to do to the geometry in order to expose or subject it to the algorithm.

I think there is almost a blindness to the fact that the nature of the algorithm affects the nature of the model. So in architectural discourse on digital modeling at the moment you have a lot of hyperbole about how this stuff gets way more capable of modeling huge amounts of detail: the model is almost quasi one-to-one, it might have every nut and bolt, the window ledge will be modeled with the exact profile. And there is a fallacy that that geometry is then translated into the environmental simulation and will render a better result, but it doesn't. The model that you see on your screen is not the model that is actually analysed.
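A minimal sketch of the kind of reduction described here: two rooms that share a wall, collapsed into 'zones' and a single shared face with one heat-transfer coefficient. Everything below (names, numbers, the steady-state formula) is invented for illustration and does not come from any real simulation engine.

```python
# Illustrative only: the geometric reduction described above, where a detailed
# building model collapses into boxes (zones) and shared faces. All names and
# values are made up for this sketch.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    floor_area_m2: float   # the algorithm only ever sees a number, not the geometry
    volume_m3: float

@dataclass
class SharedWall:
    zone_a: Zone
    zone_b: Zone
    area_m2: float
    u_value: float         # W/m2K: one coefficient standing in for all material detail

def heat_flow_w(wall: SharedWall, temp_a_c: float, temp_b_c: float) -> float:
    """Steady-state conduction through the shared face: Q = U * A * dT."""
    return wall.u_value * wall.area_m2 * (temp_a_c - temp_b_c)

living = Zone("living", floor_area_m2=24.0, volume_m3=60.0)
bedroom = Zone("bedroom", floor_area_m2=12.0, volume_m3=30.0)
party_wall = SharedWall(living, bedroom, area_m2=10.0, u_value=1.8)

# A couple of square metres more or less in either room never enters this calculation.
print(heat_flow_w(party_wall, 21.0, 17.0))  # 72.0 W
```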

FS: You use the word 'reduction' and I would like to think a bit about that, because when we hear about models and try to critique their crudeness, the response is often: “we need more data”, “if only we had more computing power”, “it is a question of efficiency” … Is that the kind of reduction you are talking about? Meaning: is it a problem that could be solved if you had more data points or rendering power, so that the reduction could be minimized?

PL: My understanding of this is that there is a symbiotic relationship between the algorithm that runs the simulation and the structure of the model it runs on. It is not that it could ever understand the profile of a window sill or an opening or an overhang. It has been designed with the computing power available in mind, to deal with the fact that you just can't … So there is a link between the digital environment in which you are simulating, the processes by which you have to modify the model, and the physical materiality of computing itself. You can't just swap one in and one out. They have evolved symbiotically, probably.

FS: It is kind of interesting to think about what it means to design an algorithm with the computer in mind …

PL: Yeah!

FS: … which in itself does not have to mean a reduction.

PL: No. I use the word 'reduction' and it is often pejorative; I don't necessarily mind that, but what I do mind is the lack of visibility in that process.

FS: Yesterday I was talking to one of the developers at https://thegrid.io/, which supposedly offers artificially intelligent problem-solving technology for web layout. I was trying to talk about the difference between designing with an algorithm and outsourcing the design to the algorithm. You just used the word 'visibility', but he kept on using 'introspection'. I realised that I am not so interested in that part, in literally 'seeing what it does', but in finding a way to be (to use the word of the week) ... in conversation. Which is something else than visibility; it only starts with it, I think.

PL: Absolutely, yes! I guess by visibility I mean acknowledging that this is going on, I suppose. It is interesting you mention AI, because when I worked on neural networks in the past, and you start to understand the history of neural networks and algorithmic approaches, you start to see exactly the relationship between the algorithm and the hardware. So the initial phase, the postwar phase of AI in the US, funded by the military, when everyone is getting very excited by hard AI ...

FS: Hard AI?

PL: Meaning you are basically hard-coding its functionality; it is not really learning in the way that soft AI is seen as capable of learning.

FS: OK.

PL: You get this break in the sixties, before second-wave cybernetics, where everyone goes … “this is really not going to work”, and MIT goes: it's got to work, it's got to, because they have invested their entire careers in it. But it is just not going anywhere. And in the late sixties there's this big exhibition in London, Cybernetic Serendipity, and everyone goes: “Oh no!, we can't … it is just not going anywhere”, and after a lot of stasis and soul-searching, and changing of the guard, you get the second wave of cybernetics and so-called soft AI (Gregory Bateson and all this), and that gets much more powerful and much more doable. And that also plateaus a little bit, because everyone goes, “well, is it really Artificial Intelligence or is it just an algorithm", and what is its relationship to the biological and body-sciences from which it is taking its metaphors. More recently we see a return to hard AI, because we have all this computing power. Google, for example: they don't have any soft AI, they don't use any of the algorithmic efficiency of soft AI, which you could run on a basic laptop. But they don't need to … why be clever when you can be big, is their algorithmic approach. So there is very much that kind of symbiotic relationship there between the algorithm and the architecture of the computer, which is of course an understanding of the brain somehow, back in the day. The network of computers now is much bigger and more powerful than it used to be. So they don't need to look for efficiency. There is this huge history of algorithmic design around efficiency, plenty of developers wanting to make their code super-efficient. But why would you do that if you have enough computing power?

FS: If you design an algorithm with a computer in mind, and it is not about efficiency, but still has the computer in mind, how would that be different?

PL: It is not that I think it is different; it happens anyway. But again, the fact that it is not acknowledged: the computer is often described as a neutral executor and the algorithm as a separate cultural approach to something, which is not really true; it is much more a connected relationship. It has always happened, but it is not always talked about; there is this artificial separation.

FS: After Artificial Intelligence we have the Artificial Separation?

PL: It is in the Matthew Fuller book, the Software Studies book I think. It might be Andrew Goffey talking about algorithms, and he says something like "you can't separate it from the data-structure"; it can never exist on its own, it is always responding to some kind of context. He makes a specific point about data.

FS: There's also a chapter in the Evil Media book about that. If you think about these softwares operating with hardware, with computing traditions, imagining both computers but also buildings … and maybe the system of interconnected elements is actually much larger … what about the imagination of the physical? Well, you showed this image search for algorithmic architecture, showing the shocking similarity between all these buildings: differing in their details, but their overall expression is pretty similar.

PL: I think that is more about how the contemporary condition in architectural design, at least in the Western context, is defined by computing power and software. Whether or not you are deploying that for a super curvy, expressive kind of building or for the most mundane kind of speculative resi(dential) housing or whatever, it is still being produced by a huge amount of algorithms. There is a kind of glamour to the algorithm, that you can control it a bit more, or supposedly control it, and you can make these kinds of expressive shapes, but in the end there is not a huge amount of difference between the two ends of that spectrum, I would say.

I think the bigger issue is more how that materiality is encoded into the model: what you consider worthy of encoding and what is left out. So the materiality of a building might get reduced to a face, a single line or a plane in the model, so that a wall that should be 300 mm thick only needs to be described as a single face for the purposes of the energy use. But there is no user in there. There are no people. There is no behavioural model; the materiality of us is left out of that simulation. There is the tick, cross ... tick, cross next to the things you are going to include in it, and across the spectrum of super curvy algorithmic architecture and super boxy algorithmic architecture, it is still the same question for me actually: what have you included and what have you left out in order to get to this outcome?

FS: Do you think that there is a way that 'thinking with computers' could be more interesting than ticking boxes?

PL: Well, in the nineties there was much excitement about the potential of all these new software techniques in architecture, also linked to post-modernism, as architects were, as always, late to the party; and the potential of that kind of expressive geometry, which you could generate more easily using those kinds of algorithmic techniques, to be a way of allowing for a redundancy of functional spaces. So rather than saying: "this zone is for this, and this is for this, enclosed by walls, and whatever", you would actually have a place of encounter, in which uses and behaviour that you would not have predicted could flourish somehow. You are not so prescriptive anymore about what kinds of spaces you make. It was a very exciting moment. When you look at the designs fifteen years later, and the glossy pictures by people like NOX for example, it looks a bit naive at best and suspicious at worst. But they come from this tradition of being less prescriptive and less deterministic about behaviour inside a building. That was a very exciting moment, but I feel they bankrupted themselves, almost literally actually, by trying to build these things. The problem for me is ... it is not just the materiality of the building that creates the social condition of its use.

Even if you are trying to encourage more different types of sociality in your building, you probably shouldn't be doing that only through the materiality of the building.

FS: No!

PL: And that is where it hit its buffer, I would say. And that mode ended. They are still doing the same thing, regardless of what it looks like or the technique ... or what they hope the space provides. They still use the same technique as the Greeks were using, building a big temple and going ...

FS: "This is where you go!"

PL: Exactly. So it wasn't really plugging into other kinds of ... What groups like Stealth did, although they left the digital behind a little bit now.

FS: When looking at them now, it is hard to imagine how these iterations could be more interesting, more inclusive somehow and talk back to multiple parameters, not just those provided by the software.

PL: This is how I got interested in coding at all. I was fascinated by these glossy images of almost impossible geometry, and the 'coolness'. I wanted to be able to make something like that - not necessarily to build it, but to somehow produce it on the screen. But I never really did it in the end. By the time I learned how to code, my interest in that approach had ended, and although I wanted to make shapes like that, I didn't want buildings to be like that. In fact, it was more the potential for constant iteration and change that interested me, and the possibility of not just producing that thing we have seen a million times before, of almost 'de-professionalising' the process somehow, from an architectural point of view. In the professional world there are countless books about what you do with concrete, what you do with bricks, what you do with steel, what you do with timber, whereas these other techniques were saying 'actually, who knows what you do with this!', and that was quite exciting. There was this instability around what the profession was about, and somehow the existing processes of design were questioned by it.

FS: But then in that de-professionalisation, how does the user find a place?

PL: Well, in that model in the 1990s, not at all! It is conditioned by the building's materiality as its main approach, and for a long time I found it very difficult to think of a way that it could be more than this. And actually the trick is not to be obsessed only with the materiality, and to broaden what that 'expressive' model is, what it could be based on, including more things inside the model. And I think it is very hard to do! But it comes down to the way in which you are defining the system you are operating in. The architectural paradigm is a Newtonian physics world of gravity. Actually that is a bit harsh, it is a bit more sophisticated than that, but it still is Newtonian in the user experience, even if the underlying physics is a bit more sophisticated. It is a world in which there are no people... let alone different types of people. There are only these conditions of performance, there is only this materiality, there is only this physical-sciences behaviour. But why couldn't you build a different world? And then you could test other things. Of course, now you have software that can test how people move in an airport, but that is not really ...

FS: It is just back to the Frankfurt kitchen.

PL: Yes, that's a really bad way of doing it and people like Space Syntax, for example are involved in that kind of thing and it is something that I completely don't agree with. It is the reduction of the user to generalised behaviour.

FS: And only one type of behaviour gets looked at.

PL: Exactly.

FS: You were talking about your thesis and that in the chapter about simulation, you wanted to look at MakeHuman. So this software - well I'm not sure it is about people! - but at least it is about body shapes. Can you explain why, as someone looking at let's say algorithmic architecture and the digital technologies around it, a software for building humanoid figures would be considered interesting? How does it relate? Or not?

PL: I think it relates very, very closely, firstly because of this question of materiality. Somehow the software is deciding what is going to be used to define a human, and that materiality is defined through the mesh, but it doesn't include anything beyond the surface, apart from the topological skeleton which everything hangs off. But it is a very limited materiality that they have defined, and that is exactly what an architectural model would do. So it is fixing that, but also giving you some feeling of flexibility which, again, is what this kind of parametric architecture does. There is this sense that you are somehow in control of potential outputs or outcomes, and it is not an uncommon thing to see in a digital architectural design process. Certainly, it is at the very least implicit that you fix some things - of the design brief, let's say - and then you are able to flex, but only within those things; and who determines where those things sit, and who fixes them? This is a very common approach in digital and non-digital architectural design processes, and the MakeHuman software has exactly this kind of interface: in terms of the parametric sliders, in terms of the visualisation, the change in the color or the materiality of the body itself. And also what it is leaving out - there is no nervous system, there is no circulatory system (in so far as you can describe those two things) and certainly no personality!

FS: Yes.

PL: I guess the analogy between architecture and MakeHuman would be less strong if the features hadn't been so 'comprehensive'. Because it tries to do so much, it is really similar to architectural software, which is also really trying to give you everything. But it just can't, so why try?

FS: And in a way, when those sliders operate on an image of a human, your intuitions about what is wrong are much more visceral.

PL: For sure.

FS: I didn't realise how much it actually talks back to architecture and is related in that sense.

PL: I think it really does. And the sliders, for example, are a paradigm of computational design in architecture. Yes they are functional but also dangerous.

FS: Maybe we can try to talk about the problems we see and to find words for what is going on there, but also I would like to think about where it could be different, where we could think of another type of algorithmic software that could be more conversational somehow. So, in MakeHuman, we have a series of sliders which create a double problem. Let's just take 'gender', 'age' and 'race' to begin with, just make it simple!

PL: That's what they did!

FS: These three are put on the same level, so that is already a problem. They are literally expressed as if they are of the same nature, which is very strange. So that is one thing, and the second is that 'gender' goes from one extreme to another, which is something very different to age - you are born, you live and you die - a very different type of horizon.

PL: That is the linearity of the slider. You could talk about time in this way, but you can't talk about gender like that.

FS: It is even questionable whether you could talk about time in that way.

PL: Yes, but at least you try to make a case for it. I don't know how you could justify it for gender.

FS: This could be an interface issue. We have seen, by looking at the code, that what is expressed in the interface might not be what is being operated on.

PL: Yes, I think that is definitely true. Having tried to modify the code, once you start looking at it you realise there is a huge amount of inter-relationships between the parameters, which is normal. It is inherent in the act of making a parametric model: you shouldn't just have a bunch of discrete variables, they would always inter-operate. But then the sliders in the interface are always discrete on the screen, so you don't see these relationships. Sometimes when you move one, another one moves, so you can see that, but you don't know why they move.

FS: Wait, they move ...

PL: Some of them are linked, in quite a complex way actually, even though they are presented as discrete on the screen. And there are certain things you can't have. For example on the race sliders, they can't be full 'African', full 'European' and full 'Asian' (or whatever the three distinctions were). You can only ever be up to the value of one whole - you can be 0.3 of this, 0.3 of this and 0.4 of the other.

FS: You cannot be mixed between more than two?

PL: No. I can sort of see how they got to this. Once you have made the decision to put these three sliders in, why would you possibly need to create a mix of more than the value of one? That's a very simple example of the maths in the code; some of them are much more complex. Some of it is to do with resolving the changes to the geometry of the mesh, but the slider doesn't really alter the mesh points, it is actually swapping meshes around - this is my understanding of it - and this is something that is not visible in the interface.
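One plausible way to read the sum-to-one behaviour described here is a normalisation step over the three slider values. This is a guess at the logic for illustration only; the function and the names are hypothetical and not taken from MakeHuman's source code.

```python
# Hypothetical sketch of a sum-to-one coupling between three sliders.
# A reading of the behaviour described above, not actual MakeHuman code.

def normalise_weights(african: float, asian: float, european: float) -> dict:
    """Clamp each value to [0, 1], then rescale so the three weights sum to 1.0."""
    raw = {"african": african, "asian": asian, "european": european}
    clamped = {k: min(max(v, 0.0), 1.0) for k, v in raw.items()}
    total = sum(clamped.values())
    if total == 0.0:
        return {k: 1.0 / 3.0 for k in clamped}   # arbitrary fallback: an equal mix
    return {k: v / total for k, v in clamped.items()}

# Pushing every slider to its maximum still yields a mix of one whole:
print(normalise_weights(1.0, 1.0, 1.0))   # each becomes ~0.33
print(normalise_weights(0.6, 0.6, 0.8))   # 0.3, 0.3, 0.4
```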

FS: So there are two distinct ways of interacting with the model. One is to move the points, which makes a variation on a structure that is already there, and the other is to literally inject new points, or to take them out, or whatever.

PL: Yes, that is exactly what the 'age' and the 'gender' sliders do. It is not like a balloon that you squeeze to change the shape, the topography, while retaining the topology. They are adding new points, and there is some incredibly heavy-duty mesh work going on which I can't say I really understand fully.

FS: But have you seen, when working with the sliders and trying to understand what's going on, whether there is feedback in the interface? If you work with Blender, for example, it is quite a different thing to add points to a mesh: it is a different interface, and the experience of what happens is really different, so you are aware. When we talk about topology ... it is something else to squeeze the balloon or to make a hole.

PL: That is something that is definitely missing. There is a trick in the interface too - when you move the sliders around, the model disappears from the screen whilst the mesh is being regenerated, but it is presented in another way. When you change the 'ethnicity' slider, which is changing the mesh colour and not the geometry, the change happens almost immediately, but when you are changing the 'age' or the 'height' there is a huge amount of maths going on to regenerate a mesh to match those parameters, and then it just swaps the two meshes. So you are not playing with this balloon at all.
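A minimal sketch of the regenerate-and-swap behaviour described here: instead of nudging the points of the mesh on screen, a complete new vertex set is computed and substituted wholesale. Blending a base mesh with pre-authored target offsets is one common way of doing such regeneration; whether MakeHuman works exactly like this (and, as described above, it apparently goes further and changes the point set itself) is an assumption of the sketch, and the data is invented.

```python
# Illustrative sketch: regenerate a mesh from parameters, then swap it in,
# rather than editing the displayed mesh point by point. Toy data only.
import numpy as np

base_mesh = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])          # three vertices of a toy mesh

targets = {                                       # per-vertex offsets at full slider strength
    "age":    np.array([[0.0, -0.1, 0.0], [0.0, -0.1, 0.0], [0.0, -0.2, 0.0]]),
    "height": np.array([[0.0,  0.0, 0.0], [0.0,  0.0, 0.0], [0.0,  0.5, 0.0]]),
}

def regenerate(params: dict) -> np.ndarray:
    """Build a complete new vertex array from the base mesh and weighted targets."""
    mesh = base_mesh.copy()
    for name, weight in params.items():
        mesh += weight * targets[name]
    return mesh

displayed_mesh = base_mesh
# Moving a slider does not edit displayed_mesh in place; it computes a whole new
# mesh behind the scenes and then swaps the reference:
displayed_mesh = regenerate({"age": 0.7, "height": 0.3})
print(displayed_mesh)
```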

FS: So this is something we could try at some point, to make the diff visible. I think that would be interesting to see.

PL: I suspect you would find - this is a speculation of course - you wouldn't see much similarity, which is a bit tricky. On the one hand this is done for very 'practical' reasons. There is lots in it that feels super practical and I look at it and admire the technical ability needed to make something that is capable of this. But I would find it much more interesting to start with a blob of something and craft it into what I wanted. I know that there is sculpting software that allows you to do this, but I kind of like the idea that all of your skin was there when you were created and then the topology doesn't change so much.

FS: From talking to the developer of thegrid.io, there is this sense that the types of digital objects we want and the types of websites we want are way above our technical abilities. So I may be able to craft a blob in 3D, but I will never come to a humanoid figure that makes any sense, so then the software steps in to help me. And there is a lot of 'helping' going on here! But I think this is something to work on: to see if we could express, when a change is made, that points are also changed. To be able to see that should be possible. So there is the discreteness of the sliders, in terms of how they are presented and how they are actually not separate. There is the fact that they are aligned as similar horizontalities.

PL: And all at the same scale.

FS: And at the same scale, which is really nauseating. Then there are the different types of binary that are going on, no?

PL: Yes!

FS: Man/Woman is quite a different horizon ...

PL: LAUGHS

FS: ... than Young/Old. And the most surprising one then, is to put race in this. Have you found something in the documentation, or the way in which the code is written?

PL: I have been trying to go through it to find when it appeared. One of the things I am interested in with this kind of software is how stable it becomes, in terms of its features and functionality. At what point can you no longer change the software, so that you can only talk about what it is good and bad for? In my attempts to modify the source code, my technical ability was nowhere near enough to deal with the maths behind the mesh management; it is way beyond my coding skills and maths. But I was able to understand the parametric links and to de-couple some of them, or to alter the proportionality in order to generate some other results, and I was able to change the color effect of the 'ethnicity' sliders to RGB. So there is such a huge amount of stability in it now that to change it yourself you would need to be extremely skilled. It is so tightly wound, so efficient in what it does, that to unpick it was way beyond me. Instead, I tried to go back to find out when the features arrived, at what point. Having looked at a bunch of their repositories of previous versions, the 'ethnicity' has been there for ages. It is not a thing that came late; it has been in their minds, it seems, for a long, long time. If you look on the message boards you can see a few people complain, and then they get told to shut up!

FS: Complain about what?

PL: About the 'ethnicity' feature being inappropriate, but there didn't seem to be many people that supported that view.

FS: There is also the hint of, let's say, academic data being used for MakeHuman software - and this may be similar to what happens in architecture as well.

PL: I think it definitely has that, both in the behind the scenes code and the geometry that is being pulled in, and the way it is expressed in the interface in this incredibly 'scientific' way. But what I found is that the sliders aren't really pushing geometry around, they are actually swapping body bits and merging them.

FS: LAUGHS

PL: If you go into the folder structure and see how it is organised you can see the 'genitals' folder or whatever. But you can't just swap in an alternative model. There is an individual body part model - say the genitals or the eyes - but you can't just take that file and swap it for your own one. It has to fit a certain file type and be processed into a certain type of data structure for it to be readable. I never found an easy way of just swapping the parts. And again, I sort of understand the huge amount of technical knowledge that has gone into making this very efficient system, but it is super frustrating not to be able to drop things in and out, even if it made an imperfect mesh. And their whole thing is about making the perfect mesh actually, whereas I would rather be able to put a head on the end of a leg and see what it looks like! It is really un-playful in that way.

FS: From what we saw at GenderBlending I am confused, especially when we were making gender changes. Looking at it with Xavier, who has an understanding of what it means to change the gender of a physical body, he insisted that it is not like sticking on genitals. On the one hand it feels like a fragmented body image, and you describe that the elements are in different places, and at the same time it is quite unwilling to let go of its illusion of 'wholeness'.

PL: Absolutely. I think that the fragmented nature I could deal with. If it was a collaging tool, almost, that wouldn't bother me at all; it would not be a direct analogy to actual bodies and would not be misunderstood as that. But actually it is a kind of collaging tool, hidden behind a huge amount of complex maths and code, to produce the impression of biological integrity. That goes with the 'scientific' data from which the body parts are generated, and the supposed precision of those sliders, so that it is trying to create the impression that it is a useful representation of bodily process and change. Actually, it is not at all. It is a collage of things, which I would be much more comfortable with, because that is more obviously wrong and would not be misunderstood as anything other. And it would be far more fun!

FS: When you think about what it would mean to make a humanoid in software, the first thing you want to do is make a 'collage body'. And then you realise it is the hardest thing. Students managed to glitch it, and then they could find ways to expose it. For example, if you make certain combinations of 'race' and 'gender', then the skin colour doesn't extend to the genitals and they start to stick out, because they are coloured differently. So in that sense you can start to reveal the fragmentation that is in there but that you cannot work with. So I was trying to talk about the three interconnected problems that we see coming together: the collage character that is hidden behind the need for digital integrity, which becomes confused with the image of a natural body.

PL: Yes, that kind of mesh integrity...

FS: This is where the mesh problem and the resolution problem start to meet each other.

PL: That mesh integrity is driven by the underlying topology of the 'skeleton', and the resolution of that skeleton is super scary because you have these two oppositional things: a complete reduction in the freedom of movement through the simplification of the skeleton, and a mesh that is supposed to include all details and wrinkles.

FS: What I find hard to speak back to is the idea that we just need to fix the skeleton. So we throw more programming hours at it, we run it on more powerful computers, so we have a skeleton that is, let's say, 'aware of its limitations', but of course we can make the model more refined. Is that the solution to the problem?

PL: Not for me, no, not at all. I am always bound to ... er ... I have always had a problem with parametric approaches in general. I have always preferred generative design techniques, partly because of the way in which I learned programming, through a computational design master's course that focused on generative algorithms.

FS: Can you explain the difference?

PL: OK. So, in a generative technique - genetic algorithm, neural network, cellular automata would be described as generative techniques - you establish a set of rules that are explored through the execution of the algorithm, the outcome of which is determined by a process whose level of complexity means that you could not have predicted the result. So there is a gap between the system you put in place and the thing it produced. Whereas in a parametric system you make a box and you play within the box. In a parametric system you are rarely surprised.

FS: Because you get what you want.

PL: Right, you have already set a boundary on that space of possibility. And in fact, one of the images I did for GenderBlending was to put all of the sliders of MakeHuman to the left and then all of the sliders to the right - I know this was a little facetious and it is not really how the software is structured - but that's it, it's going to be within those two bounds.

FS: Yes.

PL: Whereas in generative approaches, the idea is that you don't quite know what it will produce and then in some way the algorithm has an agency, which of course you are part of. For me it is a far nicer way of working, but again, so long as it is an acknowledged relationship between the agencies...

FS: Yeah ...

PL: ... otherwise we are back to the 1950s AI project and this contemporary fear of robots taking over the world.

FS: So, you say that 'generative' has a complexity that might surprise you; is then the moment that agency goes away the moment I understand what is happening? Is the complexity pseudo-magic? Is it just because I don't understand that I am surprised?

PL: No, I don't think it is that you don't understand. It is about 'predictability' rather than 'understanding'. So if you have a genetic algorithm, which is a nice example, you can 'evolve' a design solution. Your algorithm becomes a metaphor of evolutionary process, at least as it was understood 20 years ago, because the algorithm is always super-far behind! Which I don't really mind, as it is an analogy more than anything else (at least it should be taken as that). So, you evolve a design solution and you are having a symbiotic relationship with ... er ... I am trying to use the term 'digital companion' in my writing...

FS: That's nice!

PL: ... well it is from Donna Haraway's 'Companion Species'

FS: Yeah, of course ... fantastic term

PL: And for Donna Haraway it's the co-evolution of dogs and humans and that actually...

FS: That makes a lot of sense, that's very good!

PL: ... I don't want it to be misunderstood as that thing that happens at the moment where everyone goes SARCASTICALLY 'aren't smart-phones changing us' or 'isn't digital technology...' or 'oooo my phone is really clever'. That's not what I mean. It is more that you make it and it remakes you: the relationship evolves, it evolves and you evolve. So, with a genetic algorithm, you made a 'thing' from yourself and you can be very open and honest about that. It is a way of expanding that space of possibilities for me. It is, of course, limited. It is not magic, it is not pseudo-magic - it can't just CLICK create something. It is unpredictable, but it is not that it can make anything at all - there are still boundaries to what it is capable of producing, it is just that you can't follow in your mind each one of those steps...

FS: And because you're interacting with it?

PL: Yep

FS: The system is more complex...no more complex does not help... it is...

PL: I think that at least you can't see the boundaries; that is what I would be interested in, that they are outside the periphery of your vision. That would be the ideal of a really good generative algorithm, from my point of view. That is not how everybody else would necessarily see it; some people really want to, you know ... erm ... they want their genetic algorithm to solve a very specific problem. Whereas, for me, parametric modeling technology and techniques and interfaces and whatever puts 'front and centre' that it is in this sand pit. Whatever you are going to get, you can see the boundaries within which it is going to come.

FS: You could also say that's 'visibility'?

PL: Yes, absolutely. But I think it depends on your understanding of the process. It could be very explicit - 'it's just this, this is a thing that does that'.

FS: Yes.

PL: But it's not really presented like this. Ever.

FS: No, but OK, that's something else.

PL: The line is never shown. In MakeHuman, they don't say 'this is just some of the things that exist in the world'. It is very much 'this is the range of your humanity'! LAUGHS

FS: And if you think about a bug report on this, it is very hard to crack, because it is a philosophical problem.

PL: Yes, but I think there are also some very practical things that could be done. I think some of it is about its aspiration to represent 'all' physical bodies - I think they could be a bit more explicit about how it doesn't.

FS: Yes.

PL: I think that the interface itself could be much more...er...honest about interaction between variables.

FS: Yes.

PL: And I think they could...er....they could also give you access to that coupling. For example, a very simple change would be to have an extra slider - if we stick with their interface paradigm for a second! - that allowed you to change the relationship between two other sliders.

FS: Yes. Or even not take into account, to de-couple.

PL: Yes, exactly. In my little, quick hack, that is what I was trying to do. It is kind of nice to be able to play with these links. The danger always is that you just explode the complexity, or the requirement for the user to understand the complexity of the software, by adding so many features or so much to it that people go SHRUGS 'I didn't really want it to be like this'. But actually you wouldn't really be adding that much, because you could stay within the same paradigm of the parametric sliders, and you also then start to demonstrate the mechanics of what's going on - and that's what parametrics really is, 'mechanics' (it is all a bit 'Newtonian' for me!)...
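A sketch of the kind of extra slider proposed here: staying inside the parametric-slider paradigm, one additional 'coupling' parameter that exposes, weakens or removes the relationship between two other sliders. The names and the linear blend are entirely hypothetical.

```python
# Hypothetical 'coupling' slider that makes the relationship between two other
# sliders visible and adjustable, as suggested above. Not MakeHuman code.

def apply_coupling(primary: float, secondary: float, coupling: float) -> float:
    """Return the effective value of the secondary slider.

    coupling = 1.0 -> the secondary value is fully driven by the primary slider
    coupling = 0.0 -> the two sliders are decoupled and the user's value stands
    """
    return coupling * primary + (1.0 - coupling) * secondary

user_age, user_muscle = 0.8, 0.2

print(apply_coupling(user_age, user_muscle, coupling=1.0))  # 0.8: muscle follows age
print(apply_coupling(user_age, user_muscle, coupling=0.5))  # 0.5: a blend of the two
print(apply_coupling(user_age, user_muscle, coupling=0.0))  # 0.2: fully decoupled
```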

FS: LAUGHS

PL: ... and the final thing, as you have said: the sliders themselves, you can modify them. You can imagine very easily ones that - although they would still be imperfect - would at least be a bit more generous. So, for example, the gender one could be a circle...

FS: Yes

PL: which would be a far more...

FS: ... interesting one. For me, the fact that they are all presented as equal is so shocking that it took me a while to even ... er ... understand ... to believe what I was seeing.

PL: I remember when you introduced the software to me and I thought 'OH MY GOD!'

FS: Let's say at first ... in the first split second, you think, 'OK, it is kind of intriguing that it is not binary'. I mean, it could also have a radio button for 'man/woman'. So in that sense, there is a moment where you think 'this could be interesting', because it accepts gender as a continuum. But then that is a lie, because it is actually not doing that. That is interesting to me: that, from your explorations, it is not doing that. So that is a really strange statement now. And when you explore further, you see that the fact that gender is presented as a continuum is a result of wanting everything to be a continuum, which has nothing to do with a more interesting understanding of how humans are 'different'.

PL: It's not about humans at all. They have tried to make a bit of software to make digital models that happen to look, sometimes, like humans you might see, but by no means all. If it was called 'MakeDigitalModels' ... if that's what it was called ... if it wasn't quite presented as this ... er ... as us ... re-presented to us as ourselves, then I would have a lot less of a problem with it. I think that the whole thing is constructed around the idea of making efficient digital meshes, except it is shown to us as a way of making ... us. And then it has these affectations - the slider that is really closer to a binary. And that is another condition of parametrics: you have to have this slider, otherwise it isn't parametric LAUGHS. You are forced into having it. If you changed the way it represents itself, if the ambition was more closely contained, if it was a more honest piece of software that didn't make quite so many claims, and if these interface issues went away, I wonder whether we would still have a conversation anyway - I don't know. Well, I know what I think: that I would still be very unsatisfied by it.

FS: What if it was a generative software, in the way you speak about it? How would that go, where would that go?

PL: Well ... I think that you could have - it would be a lot harder for a start! LAUGHS - a lot more interface possibilities. The example I would give - which will raise a lot more questions about visibility, for sure - is that you could have an interface in which the model is derived from questions, or behavioural parameters or attributes, rather than geometric ones (which is effectively what they are doing). Someone who lives a certain lifestyle ...

FS: But then the expressions of that 'lifestyle' - whatever, let's just assume, like someone who has lived ... er ...

PL: Right, who interprets that.

FS: Yes.

PL: Of course. But I think you can't talk about the front end without talking about the back end of it. So yes, you can't just put a different interface onto what they do with the meshing technology they are using. You would have to have a different ... an open way of editing ... and I don't just mean open so that you can download the code...

FS: Yes, I understand that. I find it difficult to sense the difference between the generative and the parametric. If I even see these evolutionary algorithms, as far as I can get a sense of what they are doing, they are also partially parametric ...

PL: No.

FS: ... in my understanding of them.

PL: I think that 'parametric' is a much-abused term now. Everything is parametric. As soon as you start to codify somehow, everything has a parameter. In architecture there are lots of, I would say - and you can quote me on this - odious books by people like Patrik Schumacher about how parametric architecture is a new 'thing', which I just can't deal with. Everything is parametric. A door is parametric - it can be a bit wider, a bit taller, that's parametric. As soon as you have a variable it is parametric, so I don't see it in that way at all.

FS: OK.

PL: I see it more as ... ER ... the difference being that the use of parametric ... or the appropriation of 'parametric' as a term ... is a way of making a toy to play with. The sliders make it reconfigurable, but there is somehow a limited number of possible outcomes. So you could calculate every single position of the sliders, every single possible variation of that.

FS: So there is a number, even if there are a lot of variations.

PL: Yes, but you could still calculate it - that's a number right? With a genetic algorithm, yes, you can do it, but that number is much much bigger. And actually when you are tweaking the variables in a genetic algorithm for example, you are actually changing something much more fundamental in the functionality of the model. As an example of a genetic algorithm you could take these two objects (shows cup with ear and lid with hole) you could say they are topologically the same.

FS: Yes.

PL: So they each have one hole, and you can say: "I have this object and this other object", and the genetic algorithm basically breeds - this metaphorical breeding - children, a selection of two more children ... or a population breeding ... You run that as many times as you like - could be many, could be thousands, could be ten, you can ask the computer to decide when to stop - and all the things you have control over are the topological definitions of each of the objects. You have control over the fitness function, so how do you decide which one is the 'best' object to move forward in the evolutionary process, and also over the amount of randomness that you want in your system. So it is a very metaphorical mapping of a quite simplistic evolutionary understanding. And any time you change one of those, you change something much more than just sliding a variable.

FS: Because these variables are, in a parametric sense ...

PL: I am editing the boundaries of that space actually, rather than just walking around it.

FS: OK, yes.

PL: When you have a parametric system, you can basically know everything it can make as soon as you have defined it, every single possibility already exists.
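A back-of-envelope illustration of that point: with a fixed set of sliders and a fixed step size, every reachable configuration can simply be counted before anyone ever moves a slider. The numbers are invented for the example.

```python
# Counting a parametric space: with discrete slider steps, every possible
# configuration already exists once the sliders are defined. Invented numbers.
import itertools

num_sliders = 4
steps_per_slider = 5          # e.g. the values 0.0, 0.25, 0.5, 0.75, 1.0

print(steps_per_slider ** num_sliders)   # 625 configurations, knowable up front

# You could literally list them all in advance:
values = [i / (steps_per_slider - 1) for i in range(steps_per_slider)]
all_configurations = list(itertools.product(values, repeat=num_sliders))
print(len(all_configurations))           # 625 again
```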

FS: Just to understand ... so looking at the cup and the lid, a parametric approach would mean you can 'only' generate everything in-between but in a generative system ...

PL: You would never change the topology. I think the fitness function is the key. So in this (shows cup again), if I want to evolve an espresso cup rather than a coffee mug ... an espresso cup is a bit smaller than this, there is maybe this much material in it, or it is lighter than this one. It still has to have a handle, and the hole needs to be big enough to put your finger through it, so those are the fitness functions of what is getting evolved. But the kind of espresso cup that it could generate could be this wide and this high (very high, not wide), because you did not constrain it, like "by the way, I don't want it to have a diameter of a pinhead"; you did not tell me not to do that. A parametric model would never get there.

FS: Yeah. I start to see it.

PL: The example my sadly departed tutor used to use was how to evolve a brandy glass - he was a bit of a drinker - from two other kinds of glasses. The fitness function of a brandy glass is that it has got to have a stem, to lift the glass off the table and to get a hand under it to warm the brandy. A whiskey glass has a big thick glass base, and if you take the topological definition of a whiskey glass, it can be a profile swept around a circle; and there would be another glass, and the computer just makes versions based on this fitness function of getting your hand under it, and then there is also something like the profile. And after however many iterations, it goes: "So this is what you meant?". And you go, "Yeah, it is", but you also look at what else it came up with, and "wow, that's kind of cool". So there are things that you did not really think were necessarily possible. And this is a very directed example: you know you are trying to evolve a brandy glass, you know what that looks like, as a human you have a relationship to what it is producing. But when you change a variable in that system - the topological definition of the glass, the degrees of freedom you give, you know, like you define this by point point point, sweep ... you can only do this ... pppppppp ... - then it can start to do almost anything. So as soon as you change some of these definitions, you really change huge amounts of what it can do, which is never true in a parametric system.
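A very small genetic-algorithm sketch in the spirit of the glass example: candidates are just two numbers (bowl radius and stem height), the fitness function encodes 'a stem tall enough to get a hand under, and a reasonable bowl', mutation supplies the randomness, and selection runs over generations. It illustrates the technique being described with invented parameters; it is not taken from any actual design tool.

```python
# Minimal genetic algorithm in the spirit of the brandy-glass example.
# Candidates are (bowl_radius, stem_height) pairs; all numbers are invented.
import random

random.seed(1)

def fitness(candidate):
    bowl_radius, stem_height = candidate
    stem_score = min(stem_height / 6.0, 1.0)   # reward roughly >= 6 cm of stem
    bowl_score = min(bowl_radius / 4.0, 1.0)   # reward roughly >= 4 cm of bowl
    return stem_score + bowl_score

def mutate(candidate, amount=0.5):
    """Random variation: the 'amount of randomness you want in your system'."""
    return tuple(max(0.1, g + random.uniform(-amount, amount)) for g in candidate)

def crossover(a, b):
    """Naive breeding: bowl from one parent, stem from the other."""
    return (a[0], b[1])

# Start from two existing 'glasses': a squat tumbler and a tall, thin flute.
population = [(4.5, 0.5), (1.5, 9.0)] + [mutate((3.0, 3.0), 2.0) for _ in range(8)]

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                   # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"bowl radius ~{best[0]:.1f} cm, stem height ~{best[1]:.1f} cm")
```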

The visibility question ... how do you convey that complex process to somebody ... an interface is always about hiding the complex process somehow, could you ever make an interface that allowed you to get what is going on?

FS: It is hard to interface something like that. You talked about that earlier, the need for interfaces in parametric software, that it does not really exist without it. In these generative processes, the exploration itself, the probing of what is possible, is where the interface seems to be?

PL: Yeah, yeah, I agree!

One of the reasons you don't see very many, very good graphical user interfaces on genetic algorithm software is that it is not very helpful. Because in the end, what you are changing is so much under the hood ...

FS: But also because it is so much linked to the transformation of the objects you are exploring.

PL: Exactly. There is no benefit in a slick, or even a contained graphical user interface. You change something so fundamental, when you modify it.

That is what I like, it is one of the things I try to talk about when I talk to students about code, is that stability of the software becomes problematic. The interface marks a level of stability ...

MakeHuman is a good example of that. The code is so tightly wound, so stable. The interface is so hard to modify.

FS: I am just thinking, just imagine MakeHuman with the possibility of transforming 5 human figures at the same time, not just one, that interact.

PL: LAUGHS

FS: There is something a bit painful about always operating on a single human.

PL: It is really what makes it feel like "design your ultimate human", rather than that it is part of a population.

FS: Ehm, with the use of the term 'breeding' ...

PL: Hum, yes, that term is a bit ...

FS: We talked about disconnected skins and crude skeletons, and how this limits humanoid figures, but also interactions between 'people' seems to be missing.

PL: I think the nervous system in general is a thing that is often missing. One of the things I put in the notes for Topological Subjectivities at GenderBlending is systems in bodies, an interesting one for me, because it is all about the line, the boundary you put around a bunch of things to say, "well, this is now a system". With the circulatory system or whatever system; and the nervous system is one where that boundary was felt to be very fixed in Western science for a long time, and now it is much more fuzzy and people are a lot less clear about where it begins and where it ends.

Everyone used to think it was the brain, and now they go ... oh, the spinal cord is kind of interesting too, so it is probably a bit of that. And there is a much more radical field of work where your entire nervous system holds all your brain power. You can't have 'brain power' anymore, some radical thinkers say; it is not a way (another way?) to talk about our capacity, our capability ... and you can imagine the historic change from thinking the heart was the only thing that mattered. What I like about that is that it is not that our bodies have changed, or have particularly evolved, during that period; the materiality is exactly the same. But the topological understanding is what is evolving. And that is, I think, again the stability issue. During GenderBlending at some point we were talking about species, categorization, all this kind of stuff. For me the problem is not necessarily categorization, but more the stability of categorization, the fixity of it.

Categorization, for me, is a normal thing to do; it is just horrible when you fix it: "this is the only way you can be called". Topology in math, like set theory, is all about an approach in which objects can be in multiple sets. An object can be more things to other objects. It is one property that puts it in one set, although 'property' is not a very good word, but that does not preclude it from appearing in another set.

(...)