State of Play

Escha Vera got death threats for posting AI art. She kept posting anyway.

Perplexity's designer runs a record label, trained her own LoRAs, and built the Comet invitations that broke the internet — each one unique, generated at scale, but deeply intentional.

We talk about the hate, the ethics, and why prompting isn't a gimmick skill, but communication.

Get the UX Tools Newsletter (written by me)
Join 100,000+ designers for weekly insights on creative software and the people shaping it: [https://uxtools.co](https://uxtools.co/)

CHAPTERS:
00:00 - "I can't post anything without death threats"
01:48 - How I found Escha's work
02:46 - Myspace and Neopets taught her to code
04:39 - Losing self-expression in client work
06:54 - "I call myself a designer and don't elaborate"
08:05 - Perplexity's culture: high trust, high autonomy
09:13 - "There's no roadmap, just do it"
11:53 - How the Comet invitations actually got made
14:51 - Scaling unique outputs to 10K+ generations
17:44 - Evaluating AI tools as inputs vs outputs
20:16 - Pushing Midjourney to break terms of service
21:35 - "Being a good designer is about communication"
24:22 - Trial and error prompting with Comet
26:22 - Prompting as a second-class citizen to features
30:48 - "Can you be pro AI and pro self-expression?"
36:13 - The ethics question that kept her at Descript
38:35 - The hate and vitriol from sharing AI work
40:51 - "Ask how it was made before throwing hate"
43:31 - The blurred line: how much of it is AI?
45:31 - Should we disclose AI in our work?
48:40 - Daily driving tools at Perplexity
50:37 - The spinning planet she shipped in 5 minutes

ABOUT TOMMY GEOCO
I spent 15+ years in tech and design. Former military. Father of five. Now building Internet Enjoyers, a weird little media + product studio rediscovering soul in creative tech. This show is how I'm rediscovering my love for the game.

ABOUT STATE OF PLAY
Host Tommy Geoco discovers what fuels the internet's most interesting designers and builders.

LINKS:
UX Tools Newsletter: [https://uxtools.co](https://uxtools.co/)
Follow Escha: https://x.com/eschadiol
Perplexity: [https://perplexity.com](https://perplexity.com/)
Comet: https://www.perplexity.ai/comet

FOLLOW ME:
X / Twitter: https://x.com/designertom
Instagram: https://instagram.com/itsdesignertom
LinkedIn: https://linkedin.com/in/tommygeoco


00:00:00.040 — 00:00:07.720 · Speaker 1
I got so much hate, like so much vitriol. But it very quickly got to like, I can't post anything without death threats.

00:00:07.720 — 00:01:41.660 · Speaker 2
Some people saw Perplexity's browser invitations and thought to themselves, oh, that's a cool AI trick. What I saw was someone who understood something some designers are still kind of missing: that AI isn't the output, it's the throughput. It's one layer in a stack of decisions, taste, and craft that you can bring to the table.

Escha Vera is a designer at Perplexity, and she runs a record label. She's been using tools like Midjourney since the early days, and somewhere along the way she started training LoRAs, building prompts like instrument panels, and shipping work at a scale that most designers can only dream of, while still making every output feel incredibly intentional.

But what also drew me to her is that she's someone who has been on the receiving end of the internet's rage. And unfortunately, I'm talking about death threats and hate, the full polarization treatment that comes with being publicly curious about using AI. And she's still here, still making really great work, and still sharing it openly.

For the next hour, we get into how she thinks about AI as a creative tool, not a replacement but a collaborator; how Perplexity has built a culture where designers ship things without asking for permission; and what it actually takes to maintain your creative identity when the tools keep changing and the internet keeps yelling.

This is State of Play, where I try to figure out what fuels the internet's most interesting designers and builders. Let's get into it.

00:01:48.860 — 00:02:44.070 · Speaker 2
I found you because of that incredible work you did with the Comet invitations. And then as I discovered your work, I was like, why haven't I found her before? There's, like, so much great work in her past, everywhere you've worked. So the whole thing with this podcast is I really want to learn from people like you.

I'm kind of rediscovering my own creativity. I've been into this field for 15 years, and I feel like I've, you know, had to navigate speed and I've had to navigate all the things that we typically do. And I look at someone like you with your own record label and your inclination to play with so many different tools and mediums, and that's what I miss.

Where did your inclination to play start? Where did all of that come from? What was kind of your first experience creating something?

00:02:46.670 — 00:02:47.790 · Speaker 1
Myspace?

00:02:47.830 — 00:02:48.550 · Speaker 2
Yes.

00:02:49.110 — 00:02:50.550 · Speaker 1
Myspace and Neopets.

00:02:50.590 — 00:02:50.750 · Speaker 2
Yeah.

00:02:50.790 — 00:03:00.090 · Speaker 1
Neopets taught me how to code. I mean, back then, customizing Myspace was just doing an image map and overlaying it on top of the page.

00:03:00.490 — 00:03:41.010 · Speaker 2
Did you ever participate? Um, I'm 38 years old, so I don't know what part of the internet you grew up on. I grew up when there was an application called Palace, and you could write this Iptscrae language, and you had little avatars that you could create with your cracked version of Photoshop.

It was very angsty. And you could, like, shoot lasers at other people in the chat room if you knew how to code the language. And then when that died, for me, it was LiveJournal. And so I was a LiveJournal kid, like really out there. And I was using the internet in all of these different ways to express myself.

Was Myspace and Neopets, like, your first form of expression?

00:03:41.850 — 00:03:55.170 · Speaker 1
Yeah, honestly, there was another one called Asian Avenue. Obviously I'm not Asian, but I had a lot of Vietnamese friends and that's what they were all on. It was kind of like GeoCities. You could kind of do whatever you wanted, with lots of scrolling marquees.

00:03:55.180 — 00:04:39.860 · Speaker 2
Yes, I remember a lot of that. It's interesting as we kind of talk about this: expressing myself on the internet was really where I think it all started for me. Right. Like getting the right song on my Myspace profile, learning how to ship little things. Just everything felt like an expression of me.

And as I started to turn this into a career and take everybody else's ideas to turn into something, I think one of the things I lost was feeling like I was still expressing myself through work, and I've had a hard time reconciling that a little bit. Does that resonate with you, with your record label and with all of the great work you do?

How much of your own expression do you find in the work you do today?

00:04:39.900 — 00:05:49.800 · Speaker 1
I mean, yeah, there's like work-work, which I'm not happy with unless I'm solving problems or I feel like I'm on the bleeding edge of something. But I'm kind of a creative person at heart, and I need to be making something. Even if we're not working, I'm working on something. And so, yeah, that's where the label actually came out of: just me wanting to do more than just an app or a website.

I mean, half of my portfolio just doesn't even exist anymore. It's lost to time or just deleted at some point. So I wanted to do something with physical mediums, and kind of designed this whole packaging language. I just got in the habit of, like, accidentally being a creative director: having an idea for something, making a demo, showing it to like three different people who I think would kill it or who I would enjoy working with. And it's just a lot of commissions and pitching ideas and cobbling things together in a meaningful way, and putting it out there in a way that otherwise wouldn't exist.

00:05:50.080 — 00:06:54.230 · Speaker 2
Accidental creative direction. I love that, because I find myself accidentally in a similar role. In fact, this idea of labels and roles is something I'm fighting against a lot right now. I have a hard time with it because I'm supposed to be producing shows and media for... who? Who's my ICP? Who's my demographic? And that's a really complicated question right now, because design as a market is kind of going through its own evolution.

We're seeing it impacted by a lot of the tooling. We're seeing it impacted in the way people are working. And it doesn't feel sufficient to me to refer to myself as a designer anymore because there's like so many things I'm interested in and I'm working on, and the outputs are just so different than what I used to think a designer would output.

How do you work within boxes, and then when do you break out of them, and what does that look like?

00:06:54.270 — 00:07:42.030 · Speaker 1
I call myself a designer, and I usually don't elaborate, because there's just too many things, and anything more specific feels like I'm pigeonholing myself into something. I mean, Perplexity is an amazing place to work, to be able to flex all of those muscles, with the team being so stacked and so talented, and everyone having their niche thing that they're passionate about that has overlap with the work.

I mean, half the time I'm tasked with something or I'm working on something, but it's always about taking a step back, looking at it at a higher level, seeing how that affects everything else it touches, and just pitching ideas like, it would be cool if we did this thing. And most of the time the feedback is: sick, let's do it.

So just like,

00:07:43.470 — 00:08:05.410 · Speaker 1
I don't know, like forcing myself to work outside of the boxes that I'm being asked to work in, and pitching those ideas. And with a team like the Perplexity design team, across brand and product, everyone's got taste and an appreciation for that extra mile, the extra work, the extra effort.

00:08:05.930 — 00:08:21.770 · Speaker 2
What does it look like at Perplexity when you think about constraints and the freedom to explore creative ideas? How does the culture support that, and what is the mandate on the teams that you've worked on?

00:08:21.770 — 00:09:13.020 · Speaker 1
I mean, there are phrases that go around like high trust, high ownership, high autonomy, and that's so true, in a way that I've never experienced anywhere else in tech that I've worked. There's just so much trust put into the design team. Most of the designers code.

So, like, if there's something that's bugging us, we just ping each other and fix it and put it in there. I mean, I was working on the design system, working on colors, making things more accessible when I first came in, asking a bunch of questions, and half the time it was like, yeah, let's just fix it. And it's like, okay, well, where do we put that in the roadmap?

And it's like, there's no roadmap, just do it.

00:09:13.740 — 00:10:18.400 · Speaker 2
I love that that's the type of cultural support at Perplexity. It seems like that's the case, and everyone I've talked to who works there just has such a unique way of expressing their ideas. And I wasn't sure who to get in touch with, like who owned a particular thing, because everyone I come across is such a heavyweight in their area, and they're just doing stuff.

What do you think? I've got a team right now, and I'm trying to build out this culture of high agency as well. Like, hey, one person owns that. Do it how you want. You know who to ask for support. Collaborate how you want to do things. On the other end of that, how do we make sure everyone's moving at least approximately in the direction of the mission of the company?

Can you give me something tangible, like an example at Perplexity of how that high agency is treated, something you've done there, and how you also stay aligned with where the team is trying to go?

00:10:18.440 — 00:10:37.560 · Speaker 1
So yeah, everyone has their own surface area or specialty within the product, and that's their main focus. But there's just so much overlap with everything. So I might be tasked with a single thing, but I'm in it enough that I know that,

00:10:38.680 — 00:11:19.980 · Speaker 1
you know, fee or temporary or Henry or someone else has more historical knowledge of how we got to where we were, and it's kind of on us as designers to lean on those people and make collaborations happen, because they don't come from the top down. It's on you as the designer to reach out and find the time to jam on something.

I mean, I'm like that with shampoo all the time. He's so busy, but he finds the time to jam with me on stuff. A lot of that's just over Slack, getting thoughts on a design, or jumping on a call and jamming for 30 minutes.

00:11:20.780 — 00:11:53.030 · Speaker 2
The Comet invitations had a heck of a moment, because for one, they were beautiful, but they were also pretty enlightening in how to scale something beautiful. Right. And there's this idea that, oh, wow, there's tooling that can do this. And you created a great walkthrough on how you did that.

Can you give me some context on how that idea came about? Was it born of a high-agency idea? Was it a large collaboration? Maybe walk me through that a little bit?

00:11:53.070 — 00:14:20.500 · Speaker 1
Yeah, it was a collaboration with an engineer and our main growth designer. It was kind of just handed off as: we need invites, we need to be able to invite people while we're rolling out for public release, and they should feel special. And the first instinct was QR codes, let's make a QR code look nice.

But, you know, a couple revs in, it just wasn't really clicking. It wasn't special enough. And I'm kind of an advocate online for using generative AI in clever ways, looking at it as a throughput rather than the output. You know, it can be so much more than typing words into a box if you want it to be.

And if you're curious enough to figure out how things work, or what tools can offer what at different stages of the process. So my whole thing is intermingling all of those. And I've done some LoRA training, like low-level stuff, in the past, but it's come a long way since I last looked at it. It's become so much more accessible and cheaper to do.

And so I just wanted the invites to feel special and unique for everyone who got one. So I took some of the brand imagery that she had cooked up, played around with some of the Midjourney styles that the Perplexity ambassadors have come up with, mixed those two together, threw it into Photoshop, played with different compositions for the different elements that I wanted, and made a handful of highly curated assets.

I trained a couple of models from those, with different weights on what pieces I actually threw into the training data. I ended up coming out with three of them that I really liked, and found a way to use all three models with varying weights, with a prompt where I could swap out individual elements kind of randomly at runtime, so each card ends up being unique.

Some have black holes, some have live streams, some have a little bit more motion than others. But all of that was very intentional, and I ended up finding something that could give us 10K generations or more, and they all felt like they had some character to them.
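An aside for readers: the swap-elements-at-runtime idea she describes can be sketched roughly like this. The element pools, weight ranges, and template below are hypothetical illustrations, not her actual pipeline:

```python
import random

# Hypothetical pools of swappable prompt elements. The real invitation
# pipeline drew on curated brand assets and three trained LoRAs.
SUBJECTS = ["black hole", "light stream", "orb", "nebula"]
MOTIONS = ["slow drift", "rapid swirl", "gentle pulse"]
LORAS = ["lora_a", "lora_b", "lora_c"]

def build_generation(rng: random.Random) -> dict:
    """Assemble one generation request by swapping prompt elements
    and LoRA weights at runtime, so each card comes out unique."""
    subject = rng.choice(SUBJECTS)
    motion = rng.choice(MOTIONS)
    # Each LoRA gets a varying weight, so no two cards share a style mix.
    weights = {name: round(rng.uniform(0.2, 0.8), 2) for name in LORAS}
    prompt = f"an invitation card featuring a {subject} with {motion}"
    return {"prompt": prompt, "lora_weights": weights}

# At scale: 10K+ request payloads, each of which would then be sent
# to an image-generation API (she mentions using fal for this step).
requests = [build_generation(random.Random(i)) for i in range(10_000)]
```

The key design point is that the randomness lives in the prompt assembly and the model weights, while the curated training data keeps every output on-brand.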

00:14:20.540 — 00:14:51.440 · Speaker 2
Now I'm going to crack into the tutorial you shared, because I have not yet as of today. But the thing it immediately made me think of was: what tools could help me reproduce something really unique at scale, the way you did with this? And I'm looking at FloraFauna, I'm looking at Weavy. Maybe those are ways I can visualize it.

But you had mentioned the word scale. Tens of thousands of these at scale seems impossible, but you did this. What tooling really helped you push past this?

00:14:51.920 — 00:15:37.040 · Speaker 1
Yeah, the scale bit was an important one. I mean, I tried a couple of different things. Like, fi does amazing work, and he has this cool isometric thing that he's been playing with, and I really wanted to do something with that, but it just wasn't possible, at least not yet, to get that at scale and have it not be wonky.

There were too many elements that could get messed up, and it would have required hand curation for everything. So I just came up with something that could be kind of ambient and flexible enough that, like, you could have an orb that's not perfectly circular, but it looks fine because there's lots of motion going on in the image.

And so the distortion kind of

00:15:38.120 — 00:16:48.750 · Speaker 1
was a flaw, but in the context of that generation, in that style, it's just a piece of it. So we used Midjourney pretty heavily. I like Midjourney because, with enough style prompting or personal codes, however you end up curating those things, you can get really unique shading or a particular look on an image, and use those for training data.

I think it was Civitai or something. I'm totally butchering that; I can't remember which site I used to actually train the LoRAs. But then I used fal for the actual generation. And the parameters that you get with fal are good enough that you can actually control the generation speed, like how many times it will run through the models to actually come up with the generation.

And I played with that for a bit until I got something down to a couple of seconds or less that still had continuity, and that still felt unique.

00:16:48.950 — 00:17:44.210 · Speaker 2
Are there techniques that you use when you explore image generation that really help you? Because you've got to think, some companies have the luxury of a lot of R&D time, where it's like, I want to just explore some ideas and learn. With the scale issue you were talking about, I imagine maybe you looked at it right away and thought, yeah, I can't even explore with that, because I know it's going to fail right away.

Or maybe you're like, let me just do a few iterations to see how accurately this generates each time. Either way, there are probably moments of: I've got to see if this is even feasible. And I think a lot of designers are experiencing that right now. Let me go try the new tool, see what's possible with it.

Then try and see if it's feasible at scale. They have to go through a couple of motions. Do you have a process for going through those motions, for deciding if you should go deeper into a tool or method?

00:17:44.250 — 00:18:24.530 · Speaker 1
I mean, a lot of my process with image generation is a lot of inputs and outputs, a lot of back and forth: playing with something, getting the output, not looking at that as an output but as a throughput, pulling that into Photoshop, making edits, cutting it out, making it a collage, generating individual textures to use in a collage, then saving that and putting it back in as an input for the next generation.

So just a ton of back and forth. Anything I've ever shared has had many cycles of that, in and out of the tool. I, again, don't really look at any new tool I'm trying as

00:18:26.010 — 00:19:08.780 · Speaker 1
the beginning and the end. Like, I'm not looking at that tool necessarily to see how much effect I can get out of that one thing with one workflow. It's playing with it, maybe spending an hour or two just trying things, and seeing if there are ways I can use this particular tool in my greater process.

Is it a beginning thing? Is it an end thing? Can it be thrown somewhere in the middle? Is there potential to hook it up with something else, back and forth? So a lot of the time when I'm gauging new tools, I'm looking at them through the lens of: how can I use this with the other tools I'm familiar with?

00:19:09.300 — 00:19:38.200 · Speaker 2
Have there been times... You had mentioned with LoRAs, you know, you hadn't touched them in some time. So there was the onboarding of: okay, what's possible today? What's new? What has changed here that I can train with? Do you think of things in terms of playgrounds, in terms of let me just experiment with this idea?

Do you already have a general idea of what you're going to try to press it to do, just to see where it fails and works? How do you kind of move through that as a throughput?

00:19:38.240 — 00:20:16.800 · Speaker 1
I mean, part of it is just having a library from Midjourney. I've been using Midjourney since early beta, so my library is stacked with prompts and different stages of me figuring out how to write a prompt. You know, I've tried poetry, tried making things abstract, tried writing lyrics as a prompt, or a full paragraph, or a short story, or just comma-separated keywords.

I have a thing for, like, pushing tools in a way where I'm trying to break the terms of service.

00:20:17.920 — 00:20:21.360 · Speaker 1
Um, like Midjourney, they're,

00:20:22.520 — 00:20:23.280 · Speaker 1
they're

00:20:24.440 — 00:21:35.790 · Speaker 1
they've gotten way more strict over time. In the early days, a lot of my old stuff is like gore, stuff that Midjourney does not want me making. And if I tried to use that prompt from scratch today, it would get flagged and I wouldn't get anything. But because I have those artifacts, I can go back and regenerate old prompts from two years ago. For some reason, if I use that prompt verbatim, it still goes through and it still works.

But if I change a word, or I change anything about it, it gets flagged. So honestly, a lot of the time when I'm just playing around in Midjourney or another generation tool, I'm trying to be clever about how I can generate things the tool doesn't want me to generate. Like, thinking of ways to get blood without using the word blood.

Like, you could say red liquid, although that's probably going to get flagged now. So you've just got to find ways around describing something to get the visual effect of what you want without explicitly saying it. And that's taught me a lot about how important communication is. Like, a lot of what I think

00:21:37.430 — 00:22:04.790 · Speaker 1
about being a good designer today and in the future is going to be how well you communicate, whether that's communicating your ideas to your colleagues, trying to get buy-in from an engineer to do something that's hard, or trying to get a generation that you're looking for. Really being able to communicate has been something that's enlightened me on how to be a better designer.

00:22:04.870 — 00:22:45.090 · Speaker 2
It's done the same for me, too. Even on this media side: how do I tell these stories? How do I make sure there's something to be taken away while also entertaining? And how do we distill the wonderful things that you're talking about? It's challenging, and there are so many different ways to do it.

And I've heard in a lot of media generation, people are starting to use, when it works, JSON structures, and they're finding themselves getting more accurate outputs from that. Are you playing with how to communicate effectively to some of these AI tools, from prompt structure to, like, what's your mental model, to the actual formatting?

00:22:45.130 — 00:26:22.660 · Speaker 1
It's a lot of trial and error. I mean, an example: when I was testing Comet, I was trying to get it to make me a playlist on SoundCloud. And at first I was like, just make me a rally house playlist, and it totally failed. And then I was a little bit more descriptive about what I wanted, and I could see it trying. I could watch the agent steps, telling me what it's seeing and how it's trying to navigate and engage with the website.

And it would still kind of get tripped up and fail, because the add-to-playlist button was hidden behind hovering on an icon, so it just didn't even see the option. I eventually got it to work, but at the end of the day, my prompt for it was a paragraph long, explaining exactly what it needed to do, where it needed to look on the page, what it needed to engage with, how to add, how to search.

So it actually searched for songs that fit the genre, not just songs that had the genre in the name. And I think that's been true, at least to some level, for anything I've ever tried with prompting or AI. In terms of using those tools, it's a lot of trial and error: starting very minimal, seeing what happens, seeing where it fails, and getting more and more descriptive as I go.

I kind of see that personally as a pitfall of AI. AI and tooling are in the baby stages; we haven't fully figured it out yet. Prompting is the headless approach, the engineering approach. It's so flexible, but it leads to a blinking-cursor problem. Normal people, laypeople, just don't know what to do, and they feel like you've got to be a prompt engineer in order to get anything out of it.

So, like, my approach with AI... you could see a lot of this with Descript. I was at Descript from 2015 to like 2024. I was the only designer for a long time, and Descript has been an AI tool from the ground up, always has been. It just was never really marketed that way, because that's not what was valuable. What was valuable was the tool and the feature set.

So everything I ever worked on at Descript, while most of it was AI, was just presented as part of the tool. It was integrated in a way that you just used the feature; you didn't have to know that it was AI. And the further down we got, there were things like custom instructions that were going to be valuable.

That's all it was. It wasn't lead-with-the-prompt. It was: play with sliders and dropdowns to set what you want, and in the background that's generating a prompt, but what a prompt is isn't important to the user. And if you really want more control, just add custom instructions. And then prompting becomes, like, a second-class citizen to using the tool.

And I still feel that way. I feel pretty strongly about AI integrated into tools, especially creative tools. I think the further we get away from prompting as the default, the more adoption we'll see, and the easier it will be to convert anti-AI people who either don't get it or are scared of it.

You know, there are many reasons not to like AI, and a lot of them are valid. But I just think the approach to implementation, and how you actually integrate those things into your product,

00:26:23.740 — 00:26:29.070 · Speaker 1
will make a huge difference in just the impact it has and how it's perceived.
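An aside for readers: the sliders-and-dropdowns idea she describes, where UI controls quietly assemble a prompt behind the scenes, can be sketched like this. The control names and template are hypothetical, not Descript's actual implementation:

```python
# A minimal sketch of UI controls compiling into a prompt in the
# background, so the user never has to write one. All names here
# are hypothetical illustrations.
def compile_prompt(settings: dict, custom_instructions: str = "") -> str:
    tone = settings.get("tone", "neutral")                  # dropdown
    length = settings.get("length", 50)                     # slider, 0-100
    remove_fillers = settings.get("remove_fillers", False)  # toggle
    parts = [f"Rewrite the transcript in a {tone} tone."]
    parts.append(f"Target roughly {length}% of the original length.")
    if remove_fillers:
        parts.append("Remove filler words like 'um' and 'uh'.")
    if custom_instructions:
        # Power users can append raw instructions for finer control,
        # without prompting ever being the default interface.
        parts.append(custom_instructions)
    return " ".join(parts)

prompt = compile_prompt({"tone": "casual", "length": 70, "remove_fillers": True})
```

The design choice she's pointing at is that the prompt is an implementation detail: the feature leads, and raw prompting is an opt-in escape hatch.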

00:26:29.110 — 00:28:14.970 · Speaker 2
I think if more people have the opportunity to just tinker with it, right, it might not change opinions, or it might change a few opinions. It might at least let you understand: okay, I can see a couple of different paths this goes, and one of those isn't so bad. So instead of being anti-AI, I'm advocating for that path with AI, you know?

And I think a lot of that starts with: let me just try. And so it's really interesting. I look at one of Perplexity's creative ambassadors, Tatiana Segal, who shares an incredible catalog of all the prompts she's created and the visual styles. And she'll share a lot of those prompts and the parameters, and you're like, oh man, that's real.

It feels kind of scientific if I've got to learn all these different parameters to tweak to get this outcome. You know, I look at the prompts I use, everything from code generation to media generation, and they vary wildly. I go anywhere from spending almost an entire day on a new project building a library of context, right?

Pulling in the right imagery, pulling in the conversations or the PDFs or the things I need for it before I hit that first prompt. And then there are times where I'm just like, let me just write the most caveman sentence I can think of, hit enter, and start from there. And I don't know which one of those...

I can't tell you which one of those gives better quality. I think the context matters sometimes, like if it's for a coding project or media. Do you sometimes just do the kind of Neanderthal approach and type something in and see what comes? Or are you a very thorough prompter?

00:28:15.250 — 00:28:18.770 · Speaker 1
I definitely start with the former.

00:28:18.810 — 00:28:19.410 · Speaker 2
Okay.

00:28:20.050 — 00:28:56.820 · Speaker 1
Um, like, I mean, when I'm using Perplexity, if you look through my library, half the time I'm trying to find something, and it's just a stream of consciousness, what's in my head. Literally what I would say to my friend, and they have the context. So, like, it would only make sense to someone who knows.

And surprisingly, Perplexity gets it most of the time. Or, like, the beautiful thing about follow-ups is you don't have to be very verbose. You can be very archaic in your follow-ups. When it doesn't work, of course, I do have to sit down and think about what I'm writing.

00:28:56.820 — 00:28:57.580 · Speaker 2
But no, I have to.

00:28:57.780 — 00:29:44.560 · Speaker 1
I don't generally start there. And, I mean, one of the beautiful things about Comet is how much context can be inferred by owning the layer around the internet. Not just being a search engine that can see the internet, but actually being able to control the frame around it has a lot of value, both in terms of giving Perplexity the ability to do things it just couldn't do otherwise, and also the long end of context building.

The more you use it, the more you're going to get out of it. I think a browser is kind of the first step to an operating system.

00:29:45.280 — 00:30:48.330 · Speaker 2
Yes. I love hearing you say that. I won't ask you, because I know you guys can't say it yet. But I will say, the further up we go with AI, the more excited I get. You know, there are a lot of companies that try to go to the ground level and push things up. I feel like when we're bringing it to the browser level or the operating system level, it seems like we get richer context.

And that's really exciting to me. So, you know, what's interesting here is someone might look at you and say, hey, you're a designer in tech, specifically for AI companies. If they're anti-AI, they'll say, well, what are you doing? You're doing all the bad things for creativity.

You're this is this is counter self-expression. But I, I would say and I'm even looking at your wonderful ink right here like that. That could be there couldn't be anything further from the truth. How do you reconcile this idea that you can be both pro AI and pro self-expression?

00:30:48.370 — 00:31:18.470 · Speaker 1
I mean, a lot of that, for me personally, has to do with my workflow and my approach. Like, a lot of the back and forth gets to the point where I just know that you couldn't generate what I'm generating from scratch with just a prompt, without all of the inference and the work I put into creating the training data.

And it's cool, because I see a part of myself anytime I see somebody sharing their Comet invite,

00:31:19.590 — 00:32:54.209 · Speaker 1
in the way that I see the composition that came out of it, and I remember being in Photoshop, putting those elements together to make that core foundation of the composition. A lot of what I do with the label is more about wanting to elevate other people, wanting to capture and do something with someone else's talent. And I'll use AI for demos or concepts.

If I have an idea for something, I'll go into Suno and be like, maybe death metal can work with drum and bass. Let me just try and see if I can get something that sounds halfway decent. Then I'll play around, get something, and send that to a producer that I work with and be like, can this work?

And it'll work as, like, a low bar. They'll take that, take the stems that I export from Suno, or use bits and pieces, or just find inspiration in something I generated to make something unique on their own. And so I'm not going to release the thing that I generated.

Um, whether that's artwork or a song, I'm going to go to an artist and commission them with, like, here's a mood board, sprinkle in some of their work, and be like, let's do something in this vibe. I'll go to a producer and be like, does this, like, genre

00:32:55.770 — 00:33:18.300 · Speaker 1
blend sound like it could work? Does this sound interesting to you? Sometimes it's like, nah, but sometimes it's like, hey, I've never heard screaming in a drum and bass track before. So yeah, a lot of the label stuff is me ending up playing a creative director role, but I'm really just wanting to work with other talent.

00:33:19.180 — 00:34:41.360 · Speaker 2
That's interesting, because that's how I feel with this kind of media studio. We've got this team of wonderful, talented folks. And I really relate to what you said about communication. What a lot of these generative AI tools have helped me do is communicate a lot of ideas more quickly.

And people will say, oh, it just produces slop. I think a lot of those people are misunderstanding the value of it, which is that the slop is a better version, or at least a way of improving the conversation at the table. Now we can talk meaningfully, tangibly, right? Some people love to live in the abstract, and without something on the table to point at, a shared vocabulary, you and I can be saying the same words in a room

and be talking about things that we think are the same thing and are completely different. And to me, that's one of the most helpful parts of where generative AI is today: improving the conversation. And so you're telling me that you look at it as, as long as the output has been materially changed by the human before it reached its final stage,

um, that's kind of the big differentiator for you. Is that true?

00:34:42.320 — 00:35:38.370 · Speaker 1
Yeah. Whether it's at the end or part of the beginning. Like with the Comet invites, right? I'm not curating the final generations; those are the way they are. But I've done so much curation leading up to that that it works. Yeah. Like, I met Kenneth, who is the creative director for DOOM,

uh, and I think Halo Reach or something. I found him in the early days of the Midjourney Discord, found his work, followed him, started bugging him. Uh, and he was someone who actually changed my mind, or, like, solidified my bias on AI. Seeing someone like him use AI as a conceptual tool,

something to move forward with, something to get seeds of ideas from, to then,

00:35:39.490 — 00:35:52.330 · Speaker 1
you know, further facilitate a conversation. Seeing someone who I respect, and who has so much talent, using AI that way kind of

00:35:53.770 — 00:35:57.010 · Speaker 1
solidified it for me that it's okay, and I stopped feeling bad about it.

00:35:57.010 — 00:36:13.830 · Speaker 2
What's really interesting about what you just said is that there was a time you had a very different opinion about AI. Can you crack into that for me? Can you share, before the turning point, what you initially thought about AI and tooling?

00:36:14.230 — 00:36:34.750 · Speaker 1
It became, like, an ethics question for me. Uh, at Descript, for instance, it was cloning audio, and I wasn't going to feel comfortable with that unless I put in the work to make sure that the tool could only be used to clone your own voice.

00:36:35.950 — 00:37:34.770 · Speaker 1
If we had recorded this and I'm editing it in Descript, I wouldn't be able to change anything you said unless you gave me explicit permission, and you would go through a process of verifying that you are the person whose voice was recorded. And that's really the only thing that made me comfortable staying at Descript so long: I knew that we had done it right.

And then you see other companies come out, and you can do the same thing. ElevenLabs: amazing product, kind of lightweight on the ethical part. You can really do whatever you want, and the terms of service is a checkbox at the bottom that says you won't abuse it. So it was important to me to build in the infrastructure so that you can't abuse it.
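Descript's real system isn't public, so purely as a hedged sketch of the flow being described here, with every class and method name invented for illustration: identity verification gates who may edit a recorded voice, and only a verified owner can grant someone else permission.

```python
class VoiceCloneGate:
    """Toy model of a consent-gated voice-editing flow: a voice can only
    be edited once its real owner has verified themselves, and the owner
    must explicitly permit anyone else."""

    def __init__(self):
        self._verified_owners = {}  # voice_id -> verified owner
        self._permissions = {}      # voice_id -> set of permitted editors

    def verify_owner(self, voice_id, claimed_owner, authenticated_user):
        """Stand-in for a real identity check (e.g. reading a challenge
        phrase aloud). Succeeds only when the authenticated user is the
        person claiming to own the voice."""
        if claimed_owner == authenticated_user:
            self._verified_owners[voice_id] = claimed_owner
            return True
        return False

    def grant_permission(self, voice_id, owner, editor):
        """Only the verified owner can explicitly allow another editor."""
        if self._verified_owners.get(voice_id) == owner:
            self._permissions.setdefault(voice_id, set()).add(editor)
            return True
        return False

    def can_edit(self, voice_id, requester):
        """Editing is allowed only for the verified owner or an
        explicitly permitted editor."""
        return (self._verified_owners.get(voice_id) == requester
                or requester in self._permissions.get(voice_id, set()))
```

The point of the sketch is the default: with no verification and no grant, every edit request is refused, which is the opposite of a checkbox-style terms-of-service gate.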

When I first saw generative art, I was like, what is this? Why would I use this? And then I played with Midjourney for the first time, for like three hours, and it was an unlock for me. I was like, oh, I can do things now

00:37:36.010 — 00:38:06.250 · Speaker 1
that I couldn't necessarily do before, or, intermingled with my skills and my taste and what I know how to do, I can get something that's unique. Setting aside my ethical qualms with how it's used, it's the same thing as Photoshop 30 years ago. It was controversial to move a horse up a hill in an image for a book cover; that was on the news as being, like, that's photoshopped.

00:38:06.370 — 00:38:09.580 · Speaker 2
That became the vernacular, right? Yeah.

00:38:09.620 — 00:38:35.499 · Speaker 1
And obviously, being someone who is on the internet, on twitter.com, sharing these things with a fairly large following, I got so much hate, so much vitriol. At first, when no one really knew what it was, it was like, oh yeah, that's kind of interesting. But it very quickly got to, like, I can't post anything without death threats, or, you know,

00:38:36.900 — 00:38:47.659 · Speaker 1
just people not being okay with it. And I initially tried to engage in a lot of conversations there, both to try and understand, like, what is really the issue here, and

00:38:48.740 — 00:40:18.330 · Speaker 1
like, are there ways around that? Can I use AI ethically, and what does that mean? What does the process look like to make it ethical? I could use Photoshop unethically. I could use Photoshop to do anything I want. And so, over time, to me it became more about the person who's using it, how you're using it, what your standards are.

What's your approach? What's your end goal? Are you making stuff to get likes on the internet, or are you making stuff to express yourself? You know, a lot of the slop that I see is either people trying to jump on and ride the hype train, things that aren't really AI, or just people who don't have taste, don't have an eye.

They're not using the output for anything. They're just typing into a box to get something to post on the internet. I mean, nowadays you kind of need a trained eye in order to see what's AI and what's not. For a while, it was just so blatantly obvious.

Now it's gotten to the point where you just have to assume that everything is AI. That kind of sucks; it doesn't make me feel good. Um, but it's similar with movies, or TV, or whatever. Like reality TV: if it's on reality TV, you can't believe anything that you see.

Uh,

00:40:19.610 — 00:40:51.210 · Speaker 1
and that's something I don't really know how to navigate, other than to just assume everything's AI. And instead of hating something because it's AI, ask questions. Ask how it was made, what went into it, what the process was, why it was made, what purpose it serves. It doesn't really have to serve a purpose.

But those questions are ones I wish people asked me before throwing hate or whatever it is, because those are the things that I want to ask people whenever I see something.

00:40:52.290 — 00:41:45.990 · Speaker 2
That's actually a very incredible answer. I really appreciate that framework for thinking about it, because I too, as someone who just talks curiously, I say, look at the tools, look what I can do, look at what these people are using it for. Oh, you're an AI bro. I'm just talking about the work, you know?

But there are camps now. There are sides. The internet's very polarizing, and that's an easy thing to get riled up over. You see Kenneth working in Midjourney early, Midjourney starting to create some stuff. You had some initial knee-jerk reaction around these things, it sounds like.

And you've been very vocal about ethics in AI; you've talked about this before. That part hasn't changed. But when you saw Kenneth creating some of this work, what was the key change for you that made you flip from one thing to another?

00:41:46.230 — 00:41:46.670 · Speaker 1
Part of.

00:41:46.670 — 00:41:47.310 · Speaker 2
It?

00:41:48.070 — 00:43:09.060 · Speaker 1
I'm not sure. Part of it was seeing the way that Kenneth was using it. He was also pushing the boundaries of the policies of Midjourney. Like, he was in there advocating: you should allow the word blood; you're not going to be a realistic conceptual tool if I can't get the stuff that I make.

Um, so just seeing how he was creative about that, seeing the quality of his work, seeing how he used those generations to influence his sketches. Seeing him, in real time, iterating in Midjourney on an idea and making something unique from that, whether he was using it as an asset, as an input, or not. Just the fact that he was using the tool for his profession.

Same with one of the creative directors who does a lot of music videos. He did a lot of Nine Inch Nails music videos; I can't remember his name right now. But he was another person I saw using AI for music videos in a way that you don't know it's AI. And he doesn't really talk about it being AI, but

00:43:10.700 — 00:43:31.220 · Speaker 1
he's open about it; he just doesn't gloat about it. But it's the kind of thing where it became obvious to me that something can be 10% AI or it can be 100% AI. Suddenly, how much of it is AI matters.

00:43:32.300 — 00:43:35.380 · Speaker 1
Or, I wish it mattered more to other people.

00:43:36.740 — 00:44:03.760 · Speaker 1
Um, just being able to understand that there is a blurred line. You can use AI as a conceptual springboard. You can use AI as a throughput. You can use AI as an input. You can generate brushes, you can generate textures, you can make a composition with assets that were generated. And at the end of the day,

how much of that is AI?

00:44:03.880 — 00:44:43.919 · Speaker 2
I have a question for you. There's a question of, when we produce something on the internet, media, a piece of software, should we, and to what degree should we, disclose how AI was used in that production? Right? So for example, in our videos, rather than me saying, I'm going to use AI to do all this stuff, I've hired a great team of people who are talented, have great taste, and sometimes they use AI for little things.

But it's a very manual process. We clearly make the big investment in people, and

00:44:44.920 — 00:45:31.010 · Speaker 2
those tools help with percentage pieces of things. On the flip side, we still very much value analog things. Like, there are many stop-motion parts of our videos; even the transitions are paper rips, filmed shot after shot after shot of paper cutouts. And the reason we do that, which is not efficient at all,

the reason we do that, is kind of a callout that we still value some of the handcraftedness of this stuff, even if it's a small detail that most people would say you should automate. And the question becomes, you know, I give credit to the people on our team in all of our videos. How should we think about giving credit to AI in a production like that?

00:45:31.050 — 00:45:34.810 · Speaker 1
In my personal experience online,

00:45:36.410 — 00:46:03.389 · Speaker 1
it got to the point where... I mean, I've always been open and vocal about using AI, trying to express how I use it, to maybe educate someone, or inspire someone, or get someone who is curious enough to ask the questions. I've never hid from it, and I've never tried to deceive people, I think.

I think there is a difference between not clarifying

00:46:04.430 — 00:46:09.470 · Speaker 1
how something was done versus being intentionally deceitful.

00:46:09.470 — 00:46:13.150 · Speaker 2
Like, I want to make sure people think this was me and all me. Yeah.

00:46:13.750 — 00:46:28.910 · Speaker 1
Because it did get to a point where, under everything that I shared, I was clarifying with a disclaimer that it was AI. I didn't really elaborate all the time; I just said that it had AI, whether it was 10%, 50%, or 100%.

00:46:29.950 — 00:47:46.690 · Speaker 1
And it started to feel like a target. It actually brought more hate, because people are searching Twitter for that acronym. People see that acronym and they have a knee-jerk reaction, and it didn't feel like it was helping, at least in my case at the time. And so I kind of stopped doing that.

I would still be vocal about using AI, still be vocal about how I think AI could be used, but I kind of separated that from the things I was sharing. And it kind of just is: if you follow me and you read my tweets, you'll know. If something I share shows up in your feed, and you don't follow me and you've never seen anything of mine before,

you might not think that it's AI, and there's something to be said about that being deceitful, about leading people to believe something that is not true. And I haven't really shared anything recently. I kind of just got to the point where, like, look, I'm doing this for myself. I'm making art for me.

It's not about posting it on the internet. The reason I post it on the internet is because I like educating people, or showing people that there is a way.

00:47:47.570 — 00:48:39.780 · Speaker 2
Well, I personally wish you would post more, because I think your work is fantastic. And I do think we're in a time right now where it is a hot topic, and it's sometimes going to require unplugging for a little bit, not consuming so much of the polarization. But I'm very happy to have found you through it.

One of the last things I wanted to talk to you about was the tooling specifically. Now, there's two trains of thought here, right? Like, for example, that wonderful ink that you have. I'm also an ink collector myself. It's a pastime that I really enjoy for a number of reasons, and I use different tools to help me think through my concepts there.

I'd be curious to hear how you do that for your hobbies. But before we do that: at Perplexity, what are some of the daily-driver AI tools that you're using in your work over there, and how are you using them?

00:48:40.820 — 00:48:43.860 · Speaker 1
I mean, Perplexity? Um.

00:48:47.100 — 00:49:18.040 · Speaker 1
For me personally, it's Midjourney and Krea. I don't do a lot of AI tooling for my work. I have used Cursor, and I should and want to use Cursor more. One of the beautiful things about being at Perplexity is seeing how other people are using it, jumping on a call and seeing them use it and being like, damn, okay, we just spent 15 minutes talking about doing something while we were doing it.

00:49:18.960 — 00:49:28.080 · Speaker 2
Does that happen a lot in meetings, where you guys will talk through a thing and someone is right there, making it happen? Yeah, that's so cool.

00:49:28.240 — 00:49:32.600 · Speaker 1
It's pretty cool to see whether it's a designer doing it or an engineer doing it.

00:49:33.760 — 00:50:37.570 · Speaker 1
I mean, I've used hacky things. Like, for instance, the onboarding for Comet: the landing page for onboarding is just a spinning planet. Ideally that's code; ideally we have a 3D model. That would be more performant and just better, and you could interact with it. But I used Perplexity Labs to make that.

I was like, I just need something quick and dirty, and I used Perplexity Labs to make a mini web app that was a sphere that rotated, with a button to allow me to upload a texture and another button to export a video of a 360-degree rotation. I did that as a demo, just to prototype it, but that ended up being what shipped.

We'll fix it eventually. But right now, the onboarding is just a quick, hacky thing I did in five minutes using Perplexity Labs to get the idea across, and it was good enough that it ended up in prod.
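The heart of a 360-degree export like the one described reduces to sweeping a rotation through one full turn, one angle per frame. A minimal sketch, with the understanding that the actual prototype was a web app and these function names, durations, and frame rates are illustrative assumptions, not Perplexity's values:

```python
def rotation_frames(duration_s=6.0, fps=30):
    """Y-axis angle, in degrees, for each frame of one full 360-degree
    turn; the last frame stops just short of 360 so the exported clip
    loops seamlessly."""
    n = int(duration_s * fps)           # total frames in the clip
    return [360.0 * i / n for i in range(n)]
```

Each angle would set the sphere's orientation before that frame is captured; at 30 fps, a six-second clip is 180 frames spaced 2 degrees apart.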

00:50:38.130 — 00:51:06.030 · Speaker 2
I love that. That is fantastic. I heard about this resource library you guys lean on for your brand guidelines. I mean, a brand book, traditionally, has been: here are your colors, here's how certain patterns and motion and different things are used. And now brand books probably need to include more and more prompt parameters, so that you can prompt things, right?

Yeah, that would be a cool thing to get a look at.

00:51:06.070 — 00:51:13.270 · Speaker 1
Yeah, yeah. We have a library of either generation style codes or, like, prompts, pieces of prompts that work.
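Perplexity's actual library is internal, but the idea can be sketched as a hypothetical snippet store: named, reusable prompt fragments that compose into a full generation prompt. Every name and fragment below is invented for illustration.

```python
# Hypothetical shared prompt library: named "style" fragments
# that get composed into a full generation prompt.
STYLE_SNIPPETS = {
    "grain": "heavy film grain, analog texture",
    "palette": "deep teal and off-white palette",
    "negative_space": "quiet composition, lots of negative space",
}

def build_prompt(subject, *style_keys):
    """Join a subject with the named house-style fragments."""
    parts = [subject] + [STYLE_SNIPPETS[key] for key in style_keys]
    return ", ".join(parts)
```

For example, `build_prompt("a spinning planet", "grain", "negative_space")` yields one prompt string with the house fragments appended, so everyone on the team reuses the pieces that are known to work.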

00:51:13.310 — 00:51:16.710 · Speaker 2
Is that public? Or that's probably behind the scenes, right?

00:51:16.990 — 00:51:27.430 · Speaker 1
It's, like, the style codes that Tatiana shares. She'll put together a mood board and show that publicly. But the rest is just internal, right.

00:51:27.470 — 00:51:46.070 · Speaker 2
So then, when it comes to things like your hobbies, and maybe ink is one of those for you, maybe it's not, it certainly is one for me: are there tools that you take to that? Or is it also mostly Midjourney and things like that? Are there any hobbies where you dabble deeper into the AI tooling space?

00:51:46.070 — 00:51:49.070 · Speaker 1
Mostly out of curiosity to keep up?

00:51:49.110 — 00:51:50.250 · Speaker 2
Yeah, Yeah.

00:51:50.690 — 00:52:14.090 · Speaker 1
Like Suno, for instance. I didn't really touch Suno when it was new; I only started dabbling with it somewhat recently. And I used it again like a week ago, and it's already totally different. They've shipped like two new models; they have better parameters, a better editor.

00:52:15.130 — 00:52:21.530 · Speaker 1
Every time I use these tools, they add something new that then makes me think, like,

00:52:23.290 — 00:52:31.770 · Speaker 1
okay, how can I actually integrate this? How can I use this? And that's kind of what I ended up doing with the label. It's now at a point where I can

00:52:32.850 — 00:52:46.010 · Speaker 1
try an idea that's been in my head since I was in high school, that I've never heard before, and I don't know that it'll work, but I think it will, and get an output that's good enough that I can share it with someone who's

00:52:47.130 — 00:53:13.860 · Speaker 1
actually a music producer, who can do it right, and see them light up, see them be inspired by this janky output that I made. I don't really use it for tattoos, for me. My whole approach to tattoos is, I follow nothing but tattoo artists on Instagram, and have since 2012 or whatever, so I've always been sort of a tattoo snob.

00:53:15.020 — 00:53:19.540 · Speaker 2
Your style, is that considered blackwork? What would you consider that?

00:53:19.580 — 00:53:25.180 · Speaker 1
Yeah, it's blackwork. It's almost a blackout, but it's got some negative space.

00:53:25.220 — 00:53:27.780 · Speaker 2
Yeah, it's it's very cool. I like it a lot.

00:53:28.140 — 00:54:05.960 · Speaker 1
Uh, this artist is from South Korea; all of their work looks like this. Um, and they visited New York at the end of last year, so I couldn't miss that opportunity. Um, but I generally just find an artist I like and give them the trust. Like, I might not even have a fully baked idea of what I want.

I just know a couple of pieces that they've done, or I like their style and I just want them to do their best work. I want them to tattoo something on me that they would want themselves.

00:54:06.000 — 00:55:44.410 · Speaker 2
Escha, this has been a fantastic convo. I truly haven't had an opportunity to talk with somebody, both in your position and so candidly, about the evolution of thinking about AI. This has been a really, really helpful convo. So here's what I'm sitting with after that conversation. Escha made a point that I keep coming back to: that being a good designer, now and in the future, is going to be about how well you communicate, and not just with clients and engineers, but with the tools themselves.

And that reframes a lot of things, because suddenly prompting isn't a gimmick skill. It's the same muscle we've always been building: explaining our ideas clearly, understanding constraints, and iterating based on feedback. Only this time, with a different collaborator on the other end. And the second thing is her approach to AI, which is one I wish more people understood.

She's not pretending the output is hers in some pure, untouched way, but she's also not giving away credit for the 10,000 micro-decisions she made before the machine ever touched it: the training data she curated, the compositions she built in manual tools, the taste she applied at every step. There's a blurred line between using AI and creating with AI, and I think the people who figure out where that line sits for them, without apologizing and without overstating, are going to be the people who ship work that really matters.

The anti-slop. And Escha's been doing that for years, quietly, intentionally. Anyway, that's the episode. I hope you learned something today, and I'll see you next time.