The interview was conducted via Zoom.
Alexia: So as a start, it would be great if you could tell us a little bit about your main interests, the type of work you do in your studio, and maybe the size of your studio.
Simone: I am Simone Rebaudengo, co-founder of oio. We are a small studio, four or five people plus various collaborators. It’s a pretty tiny practice, and we started around two years ago officially. The core of what we do is exploring new use cases and product interactions that we can imagine with emerging technologies. A lot of the type of work that we do is really looking at complex technologies and at the near future, not a too-far-away future, imagining what implications a technology might have, whether on the experience of consumers, customers, and users, or on the practice of designing itself. We’re quite technology agnostic, but I would say a lot of the work that we do revolves around intelligence. Actually, the secret meaning of oio is “other intelligences operation”. We’re really trying to build examples, and also culture, around thinking of intelligences with different metaphors: not only looking at automation as such, or human-like assistance, but really thinking deeply about what else we can do with this technology that companies, engineers, and research centers are developing. And especially being based in Europe, with both my co-founder Matteo and me being Italian, one thing that we really try to do is look at different meanings of even what products and experiences can be, besides hyper-efficiency: looking at surprise, play, anything else. Especially when you look at artificial intelligence, a lot of the visions are about simplifying, streamlining, automating; those are all the verbs that you hear. We are interested in different types of things, like surprising, confusing, inspiring, and all the other things that you can do.
Alexia: Don’t you have an AI art director on your team?
Simone: I think we are the first studio that officially has a non-human employee. It’s kind of in between an employee and a founding partner as well. So we have Robbie, who we call the AI creative director, and he’s a bunch of things. I mean, it started as a bot on Discord, because we work a lot on Discord and we have a community there, and we kind of developed this intelligence to talk with us. He has a bunch of different actions he can do, but also, the way we talk about it, Robbie is an assemblage of algorithms that we train and use to create new products and new ideas using machine learning. He also has an Instagram account, so we kind of released him into the wild to share his own quotes, points of view, imagery. We are trying to use it as a way to talk about a different way of looking at machine learning tools, for instance as a collaborator in your studio rather than only as a tool or, let’s say, an automating tool.
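[Editor’s note: for readers curious about the plumbing, here is a minimal sketch of how a studio bot like Robbie could be wired to Discord, assuming the discord.py library. The generate_reply function is a placeholder for whatever models sit behind the bot; oio’s actual setup is not public.]

```python
# Hypothetical sketch of a studio bot on Discord, in the spirit of Robbie.
# Requires the discord.py library (pip install discord.py).
import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

def generate_reply(prompt: str) -> str:
    # Placeholder: a real setup would call the studio's trained models here.
    return f"Robbie is thinking about: {prompt}"

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return  # ignore the bot's own messages
    if client.user in message.mentions:
        await message.channel.send(generate_reply(message.content))

client.run("YOUR_BOT_TOKEN")  # token from the Discord developer portal
```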
Alexia: Did you train it on a specific dataset? Are you retraining it often, or does it learn and evolve through your interactions?
Simone: Robbie is a sum of different models, so we don’t have a centralized Robbie, even though we have a little house: Robbie actually lives in a Raspberry Pi inside a mini version of the studio, so he kind of has a physicality. Now we’re looking at how to make Robbie more of a thing. But it depends on the different projects. We’ve been doing experiments, for instance, training on various datasets of electronic products and product descriptions, both images and text, and furniture. We’ve been doing two projects with companies to try to create a custom dataset, and actually, in a month or so, we’re going to release our first product that is made fully with this intelligence. It’s an AI, let’s say neural-network-to-craftsman, end-to-end process: at different stages we use different machine learning tools, both to generate ideas and to make decisions, all the way from 2D to 3D, all the way to working with actual human craftsmen. We are interested in that dichotomy as well, of putting very close together new technology, very fancy generative tools, and, now that we’re working with metal, people that are fusing metal. Putting these tools together and seeing, okay, how do you combine these worlds? Three years ago, when we just started, we had a project that unfortunately has never been published.
But we were working with iittala, one of the oldest glass companies, and they have this historical archive of work. We did a project to turn their historical archive into a model, to then create new products. And I think that part is really interesting, especially when you look at it both as a design challenge, let’s say, and as a point of view from an old company that has a history and needs to make new products. How do you combine this, the fact that you could generate from your own history while you have different designers? How do you create this space?
Alexia: If you wouldn’t mind sharing more about your process: how do you tackle these projects? Do you use specific tools each time, or do you have a more custom-based approach?
Simone: When we started exploring this, there weren’t tools. Now, with things like Runway, it’s very, very easy to create a model, generate ideas, and then do something with it from there. When we started, I think I was using Spell to train: this sort of tool that would let you use cloud computing to basically train models and generate. That has been one of the biggest challenges from the start: if you don’t have a powerful machine, and if you’re not really good at coding, there was a very big barrier to understanding, first of all, how do I use this algorithm? How do you train it? Where do you train it? So we were using a lot of cloud tools, Spell, and now also Google Cloud, where I have a constant fight with notebooks, but anyway. So we used a bunch of ways of basically taking code that we could find on GitHub, hacking it, and then seeing whether we could generate things. There was also some interesting work done back then, for instance, using Processing or p5.js with these tools to generate all these sort of messy latent space walks. It was a bit more complicated, and I think what was interesting is that, because it was more complicated, you had to take more time to think things through. When I was preparing a dataset, because I knew it would take me a day to get the result, I was really thinking deeply, really curating the dataset, imagining what it could be. And the more you work on this, the more you get a sense of how you can manipulate the dataset entries to figure out what other results you can get. For instance, one thing we tried, especially with that project, was creating different datasets: one that was purely the products, and one that was somehow the inspiration, let’s say, that had different things in it. So basically we retrained the same model on, for instance, the inspiration: figuring out, okay, how do I take an existing model and retrain it on another dataset? Then I started to get this sort of blurring of glass with, at that time, cats, because somehow we were thinking around the idea that designers get inspired by the context they live in. For instance, Scandinavian design is a lot about nature, and the nature of the internet is mainly cat pictures; the most probable things you can find are memes and cat pictures. So again, playing with these tools and seeing what comes out. I think the dataset creation part is really a craft in and of itself. It’s not really scientific, it’s not really a process; you need to have a sort of sensitivity to know what you can get out of it.
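[Editor’s note: a small sketch of the kind of dataset blending described above: merging a “products” folder with an “inspiration” folder (glassware plus cat pictures, say) into one training set at a uniform resolution. The folder names and the mix ratio are illustrative assumptions, not oio’s actual pipeline.]

```python
# Hypothetical sketch: blend a "products" dataset with an "inspiration"
# dataset (e.g. glassware plus cat pictures) into one training folder,
# resized to a uniform resolution as most GAN trainers expect.
# Folder names and the 20% mix are illustrative assumptions.
import random
from pathlib import Path
from PIL import Image

SIZE = 512  # a typical StyleGAN training resolution

def collect(folder: str, fraction: float = 1.0) -> list:
    paths = sorted(Path(folder).glob("*.jpg"))
    return random.sample(paths, int(len(paths) * fraction))

out = Path("dataset_blended")
out.mkdir(exist_ok=True)

# Mostly products, with a deliberate dose of "inspiration" mixed in.
sources = collect("dataset_glassware") + collect("dataset_cats", fraction=0.2)

for i, path in enumerate(sources):
    img = Image.open(path).convert("RGB").resize((SIZE, SIZE))
    img.save(out / f"{i:05d}.jpg", quality=95)
```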
Alexia: Would you say you are able to kind of predict, or are you still surprised when you create your own dataset?
Simone: No, it’s still surprising. It really surprises me sometimes when I do things that are supposed to be really wrong. Actually, sometimes I deliberately don’t use a well-cleaned dataset, where the images are not all in the same orientation, because then you can start to get weird transitions between things. For instance, now we’re working on tableware, and we found a historical dataset from the MET where the pictures are all rotated 90 degrees. So I started turning them around, but I also just left in the ones that are vertical. And suddenly, when I started to generate stuff, you get these weird ghost parts from the wrongly rotated assets, or even because some have different backgrounds. It’s really about playing with the limits of the technology. Everyone tells you, okay, you must have a clean dataset, everything needs to have the same white background and so on; that’s the way to get perfect, or good, outputs. But what is interesting for me is not the good outputs; it’s exploring what else there is. So basically even the dataset becomes a medium to express what you might want to design.
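[Editor’s note: a sketch of that deliberate “wrongness”: correcting only half of a dataset’s rotations so the model trains on inconsistent orientations, which is what produces the ghost parts mentioned above. The paths and the 50/50 split are hypothetical.]

```python
# Hypothetical sketch: deliberately leave a dataset only half-corrected,
# so some images stay rotated 90 degrees. The inconsistent orientations
# are what produce the "ghost parts" in the generated output.
import random
from pathlib import Path
from PIL import Image

src = Path("met_tableware")      # e.g. scans that arrive rotated 90 degrees
dst = Path("dataset_half_fixed")
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    if random.random() < 0.5:               # fix only half of them, on purpose
        img = img.rotate(-90, expand=True)  # rotate back to upright
    img.save(dst / path.name)
```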
Alexia: In a way, you basically use machine learning throughout your entire process, because it really starts from the dataset. And then, do you train your own algorithms with what you want to do in mind, and no longer use tools such as Runway ML or Google Colab?
Simone: In the training itself there is a little bit of craft too, because how much you train it is, again, a choice that you make, and you need to learn what to do based on what you want to get. For us, what we want to get is mostly inspiration and interesting things, especially when you look at physical products compared to, let’s say, an image. If you were to generate, I don’t know, postcards, or like the students who generated islands, those are really good explorations. But if you’re trying to create a physical product, what you’re actually looking for in what the algorithm generates is possible new, let’s say unexpected, functions or unexpected aesthetics. And that is a mental jump that you make from something that looks basically like a Rorschach test: saying, oh, well, actually this might mean there is a new part that could be a new ergonomic piece for this spoon, for instance. That’s where you get a sense of what you put in, how you put it in, how much you train it.
And then the other bit, which I think is also interesting: I was following Mario Klingemann a lot in his first experiments on, let’s say, larger generative models, and he was always talking about this idea of sightseeing. Even his Twitter feed was this sort of narrative: oh, I ended up in this monkey-vase space. It was really an exploratory narrative. When Runway first came out, it was amazing to use it to just scroll, and spend a lot of computing money, navigating aimlessly in this space, and then putting down a flag and saying, okay, I have found this thing, which is a vector, and this is a nice spot, so I can save it. And then maybe from there I can interpolate to another area and see what’s in between. Even that process is really like a conversation: you’re basically scrolling through this space. Which is also weird, because on Runway it’s rendered as 2D, but in the case of StyleGAN it’s actually 512 dimensions, so there is not really a way to navigate it normally. And you basically get excited about what you find. To build on that, for the iittala project, because we were working with datasets from historical designers, one thing we really tried to do was use machine learning to create objects that are in between: one output that really resembles, let’s say, one of the Alvar Aalto vases, and one that looks like the glasses of Kara. By relating them, you find this weird space which is 40% of one, 60% Kara, and something else, and you can create this continuity between designers. And then you can start asking questions like: what does it mean if I create this object? Copyright issues, for instance, or who is the designer of it? It starts to create all these questions, which are also part of the interesting bit of this technology.
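[Editor’s note: the interpolation step can be sketched in a few lines of numpy: load two saved latent “flags”, walk the straight line between them, and hand each in-between vector back to the generator that produced them. The file names are illustrative; the 512-dimension latent size matches the StyleGAN case mentioned above.]

```python
# Hypothetical sketch: interpolate between two saved latent "flags" in a
# 512-dimensional StyleGAN-style latent space. Decoding the vectors back
# into images is left to whatever generator produced them (Runway, a local
# StyleGAN checkpoint, etc.); the file names are illustrative.
from pathlib import Path
import numpy as np

z_aalto = np.load("flags/aalto_vase.npy")  # a saved "nice spot", shape (512,)
z_kara = np.load("flags/kara_glass.npy")

out = Path("interp")
out.mkdir(exist_ok=True)

# Walk the straight line between the two designers' regions of the space.
for t in np.linspace(0.0, 1.0, 11):
    z = (1.0 - t) * z_aalto + t * z_kara    # t = 0.6 gives the 40/60 blend
    np.save(out / f"blend_{t:.1f}.npy", z)  # decode each z with the generator
```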
Alexia: It’s interesting that you talk about unexpected functions and aesthetics. Don’t you think we risk having a homogeneity of aesthetics? Same thing with the images that you generate with Runway: you feel this particular aesthetic. Do you think we are going in that direction?
Simone: Yeah, absolutely, I agree. There is a GAN aesthetic, and for me it’s a first step into something. We did this piece with Nick Foster about the parallel of how, for instance, amplifiers and distortion in music allowed the creation of rock music: by clipping, by bringing a glitch to the limit, you got these new sounds. Right now machine learning is similar for me. When you saw the first puppy-slugs, it was really about pushing the limits of what you can do with it, and you get this weird aesthetic. But it’s a step, I think, towards another direction. I hope we’re going to get beyond it, because I agree with you: right now, especially with, I don’t want to say NFTs, but with all this sort of graphic design explosion, there is a lot of generated work, with CLIP and the like; there is even one that basically does that on a phone, and a lot of things look the same. But what we are not interested in is the pure output. The raw output of machine learning is just a step; you need to do another step on top of it. When it comes to graphic design it’s different, because you can output things that could be finished; when you look at physical products, for sure not. So I agree with you that there is a uniformity, but you can also avoid that uniformity if you start from making your own dataset, from changing the way you generate.
Alexia: I’d be curious to know if there is, I don’t know, an application of machine learning that you’d be curious to try, or that you think is really underused today. It could be tools you’ve heard about, or applications in general.
Simone: Well, I’m really fascinated by large language models; there is going to be a huge push there, on the tech side and in conversation itself. Google, I think last year, presented LaMDA, which is basically the biggest new language model, and what’s interesting is that you can have a conversation with anything. They showcased how you can have a conversation with Pluto as a planet, and suddenly you have a bot impersonating a planet and having its point of view. That is really interesting, because we did a lot of work in that area, and just the ability to talk with something that is a planet, or a tree, is a really interesting experience in itself. In terms of output, I’m also interested in this sort of collaboration, whether on visuals, on text, or anything else.
So basically: how do you infuse your process of thinking with machine learning inputs? Right now, I’m trying to write a book by exploring ways of writing with a machine. There is an amazing example from years ago from Robin Sloan, the sci-fi writer, who had this early example of writing with a machine: he was using autocomplete trained on a dataset that he had created and curated himself. There is some stuff coming out with 3D that I think is going to be fun to try. But again, what I’m not interested in at all is things that produce too-perfect outputs, because in the end I don’t think that’s the interesting bit. If we use machine learning as a way to make final outputs for us, that’s not what I would like design to become.
Alexia: So for you there’s definitely a next step, as you were saying with the example of the physical object, where you work from the output that was created. Or do you mix it with something else? What is the next step for you, usually?
Simone: So right now we’ve actually created our own sort of pipeline to go from the 2D world to a 3D space, using tools like Houdini, and we’re working with some collaborators who have been creating GANs with some sort of generative variables built in. But even before that, what I find extremely interesting is: what is the output that you choose? That step is really interesting, because right now you say, okay, I try things, I let the machine generate, and then I choose something that I find interesting. But we experimented, for instance, with decision-making tools within it. Let’s say: let’s run through all these images and see what the machine sees as the object we want, like a vase or a spoon or a fork. Okay, are we interested in things that are a hundred percent recognized as such, or 80%? And what is the cutting line, 50%, below which it’s not that object anymore? And then, within that: can we train an algorithm based on labeling what I find interesting, or what Matteo finds interesting, or what another person finds interesting?
And then run all the outputs through that and see what comes out. Even there, I think it’s interesting: you can use other tools on top of it to experiment with, for instance, the way you make decisions, and then afterwards you can make the thing. The generative part is really only one bit; what I find interesting is everything around it, and how it changes the process.
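[Editor’s note: a minimal sketch of that kind of decision-making filter, using an off-the-shelf ImageNet classifier from torchvision, whose label set happens to include “vase”, in place of whatever oio actually trained: score each generated image and keep only those inside a chosen confidence band.]

```python
# Hypothetical sketch: filter generated images by how confidently a stock
# ImageNet classifier recognizes them as a "vase". The thresholds are the
# knobs discussed above: 100%? 80%? Where is the cutting line?
from pathlib import Path
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
vase_idx = weights.meta["categories"].index("vase")

LOW, HIGH = 0.5, 0.8  # keep the ambiguous middle band, not the perfect hits

for path in Path("generated").glob("*.png"):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob = torch.softmax(model(img), dim=1)[0, vase_idx].item()
    if LOW <= prob <= HIGH:
        print(f"{path.name}: {prob:.0%} vase -> keep")
```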
Alexia: There’s something you mentioned at the beginning that I thought would be really interesting to push a bit deeper on: you talked about new types of metaphor. Is there a metaphor specifically that really inspires you right now? Or what do you mean exactly by that?
Simone: Well, in a lot of the projects that I did in the past, there is a very one-to-one connection. When you’re looking at assistants, most of the time the assistant itself is the metaphor, and normally it’s a human assistant: it has a voice, you interact with it in a specific manner, its purpose is to help you and, in a way, serve you. That comes from the history of how we see automation: it’s always been robots that help you in the house, and then assistants that tell you things from the internet. But for me, what is interesting is when you open it up. Let’s say: can you make an intelligence that is, for instance, not as intelligent as a human? I did this project a long time ago with Google Creative Lab to create a bird-like home assistant. If you use that metaphor, it’s: okay, how do you interact with it? It doesn’t talk, so what does it do? It still learns from the environment, but the output is not voice-based information; it’s sound, maybe. Its memories are not that long, because it’s a bird. So then you can also talk and think about privacy in that sense.
It changes not so much the way you interact with it as what it is. There was a lot of talk about this back in maybe 2010: BERG had these amazing articles about “be as smart as a puppy”, all this discussion about the metaphors of intelligence. But in the last five years we kind of abandoned that conversation and just said, “oh yeah, okay, it’s a voice assistant, it’s machine learning now.” It’s really worth exploring new metaphors. For instance, the collaborator, for us, is a clear one; we also talk a lot about the centaur player, collaboration as the base. And I think it’s important that we talk about it, because that’s the basis of what new tools you’re going to create. Is it an assisting tool? Is it a new Photoshop, or is it a coworker? Is it an intern? Is it just a weird guy that has its own point of view? Because that way you can really shape what the role of, and interaction with, that intelligence is.
Alexia: What do you think will be needed for these machine learning tools to be more accessible to designers today? Or, in your mind, are they already accessible enough?
Simone: I would say that with Runway in particular there is a certain degree of learning, but you also have flexibility. What I’m scared of is when it becomes too simple. That’s what you see with Wombo AI, these sort of one-click tools where you don’t have control over the data.
Alexia: I don’t know that one. How do you say it, Wombo AI?
Simone: Yeah, it’s an app where you can choose different styles and you have a text input. It’s basically similar to CLIP, I think it uses CLIP: you go from text to image, you can choose different styles, and it’s an app, so you can just generate. That is where you’re going to get these very similar aesthetics; it almost becomes too simple. Which is fine if you’re making a screensaver or a poster. But if we want to create tools for designers, I think there is still a necessity of allowing you to put your hands in the machine a bit. You need to understand a bit how it works: you cannot make a wooden chair if you don’t know how wood behaves, so you cannot make an ML-based output if you don’t understand a bit what it is. So yeah, I hope, especially because these tools are open source, or they will be, and will stay open source, that we’re going to keep having new algorithms that can be explored. I think that’s the most interesting stuff: going from 2D to 3D, to who knows, animation, whatever. But it still has to be a tool, and not just one click, because if it’s one click, it’s not design anymore, in a way.
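[Editor’s note: a minimal sketch of the CLIP scoring that text-to-image apps like this build on, using OpenAI’s published clip package: embed a text prompt and candidate images into the same space and rank the images by similarity. Full text-to-image tools add a generator optimized against this score, which is omitted here; the paths and prompt are illustrative.]

```python
# Minimal sketch of CLIP scoring: rank candidate images by similarity to a
# text prompt. Uses OpenAI's `clip` package (github.com/openai/CLIP);
# the image paths and prompt are illustrative.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

text = clip.tokenize(["a glass vase in a Scandinavian style"]).to(device)
paths = ["candidate_1.png", "candidate_2.png", "candidate_3.png"]
images = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)

with torch.no_grad():
    image_feats = model.encode_image(images)
    text_feats = model.encode_text(text)
    # Cosine similarity between the prompt and each candidate image.
    image_feats /= image_feats.norm(dim=-1, keepdim=True)
    text_feats /= text_feats.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feats.T).squeeze(1)

for path, score in sorted(zip(paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")
```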
Alexia: Yeah. It’s pure automation. Well, thank you so much.