Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.
Daniel Feldman: [00:00:00] Yeah, Markdown definitely is one of those technologies. It's sort of a meta technology that underlies a lot of different systems in a lot of different ways. Uh, and if I have so much respect for the people who come up with stuff like that, because, you know, coming up with something that that's used in a thousand different products is, uh, really impressive and really challenging.
Corey Quinn: Welcome to Screaming in the Cloud. I'm Corey Quinn. I tend to spend a lot of time running my mouth on Twitter, but I spend even more time reading other people running their mouths. Daniel Feldman has been recurringly on my Twitter feed with insightful things that I would not have expected to come out of someone who was not, you know, a Twitter celebrity of sorts.
No, Daniel just is consistently funny and generally spot on, particularly when it comes to tech.
Daniel. What's your story? From whence do you come?
Daniel Feldman: Sweet. Uh, thank you so much, Corey. I also spend far too much time on Twitter, running my mouth a little bit. Uh, and I've greatly enjoyed your antics over the last few [00:01:00] years.
My story is that I do actually work in technology. I've been working on security products for years, and I can talk about that. Quite a bit about that, probably, ad nauseum, but I don't post about that very much. Mostly I'm posting about interesting things I find in the world of, uh, AI, interesting tech stories related to security, that sort of thing.
A few jokes, a few jokes at AWS's expense, and I've loved your podcast for a long time, so really happy to be here.
Corey Quinn: Backblaze B2 Cloud Storage helps you scale applications and deliver services globally. Egress is free up to 3x of the amount of data you have stored and completely free between S3 compatible Backblaze and leading CDN and compute providers like Fastly, CloudFlare, Vulture, and CoreWeave.
Visit backblaze. com to learn more. Backblaze, cloud storage built better.
Part of the problem that I've been running into, and I imagine you do too, when you work in security, or in my case, when you work in cost, a lot of the things you encounter in the course of your day to day [00:02:00] work don't really make good fodder for social media because it turns out, among other things, clients have a certain expectation.
That you're not going to be airing their dirty laundry to the outside world. I have a standing policy that I don't talk about individual things I encounter in costing environments until I see at least three companies suffering from it, because then, okay, now this is a pattern, not one company finding itself in a weird position.
If you can de identify yourself through these things, that tends to be a bit of a problem. And I still periodically get people asking, I didn't realize that we were a customer of yours. Nope, you're not. This is a problem that everyone tends to have. The collective suffering, the esprit de corps of what we all tend to experience in tech.
But most people, for whatever reason, don't shoot their mouths off about it on the internet as much as I'd like.
Daniel Feldman: Absolutely. It's the same in security. Of course, every company doesn't want to air its security dirty laundry, just like they don't want to air any of their dirty laundry. But there's always new security stories every single day.
That's it. Someone's getting hacked almost all the time these days. So there's always something to talk about [00:03:00] that's out there in public, uh, that I found. And, uh, unlike cost, a lot of what we work on is open source. Um, there's a lot of new open source tooling all the time, a lot of Discussion about open source tooling.
So there's plenty to talk about on my end, uh, talking about cloud security.
Corey Quinn: From where I sit, one of the problems that, that I've been, uh, I guess, encountering recently lately is because I've been somewhat negative on GenAI, which is, you know, a thing you might have heard about if you've, you know, not been living under a rock.
And that's been interpreted as, oh, Corey hates AI. Oh, I think it's hilarious. I use it for a number of different things and I don't want to go back to having to write code without Copilot. I am not opposed to the technology. What I'm opposed about is the, the breathless enthusiasm that companies are ramming it down our throats on.
This will completely transform the way you run your business. It's all I do is run my mouth in the breeze and it hasn't grossly transformed the way that I operate it from that perspective. It's, it's a useful tool, but it's not. Math did not [00:04:00] suddenly radically change across the board when calculators were invented.
Daniel Feldman: I'm right there with you. AI is an incredibly powerful tool. The enthusiasm is a little bit over the top. So I've been working on AI side projects for quite a long time now. Maybe one of my first things, a few people might remember this, is in 2019 I did an AI, uh, D& D Twitter bot. So you could actually play D& D socially with your friends, uh, typing, typing in tweets, replying to tweets, and it would, uh, simulate a little bit of a mud D& D type game.
Uh, that, that was incredibly fun. I shut that down when it got a little weird and people started, uh, going a little dark on that. So I just, I just killed it. But, um, yeah, that is the problem. But since then, I've done a number of different things. Just learning as much as I can. I just suck up as much information as I can about this technology.
We started a little conference here in Minneapolis called the Applied AI Conference. It's twice a year, spring and fall. Uh, we have different people from around town just talking about their applications for generative AI in the many [00:05:00] Fortune 500 enterprise-y corporate environments around here. Uh, fantastic conference.
I really recommend it. Um, And, uh, we also have a meetup. I learn all, all kinds of different things at that meetup, um, monthly meetup on AI. Um, I was doing, uh, AI art. I did a couple presentations on AI art, uh, well before DALL-E came out, uh, just messing around with, uh, stuff like VQGAN. If you've heard of VQGAN, that was sort of the predecessor technology to diffusion models.
Uh, learning as much as I can about this technology, like I said, lately, one of my side projects has been a website called quackernews. com. You might know Hacker News.
Corey Quinn: Oh yes, the Horrible Orange website.
Daniel Feldman: Yes, I hate reading Hacker News. It's just, uh, full of people who are trying very hard, I would say. Which, uh, I'm a little bit more sarcastic.
I like a little bit more of a fun tone to things I read. So I made QuackerNews. com. Every day it downloads the latest Hacker News headlines. Downloads every website that's linked to on Hacker News, downloads, uh, the top 20 or [00:06:00] so comments on each article, and just makes fun of them. Just makes it a little bit more light hearted.
Corey Quinn: Like Engate used to do manually.
Daniel Feldman: Exactly. I was inspired by Engate. Uh, unfortunately, Engate died, uh, three or four years ago, uh, but that was done manually. But QuackerNews. com, it, it works every day. So I do that. Lately, I've, I've been messing around with AI generated podcasts. So I just uploaded an episode of the Quacker News podcast.
AI generated just talks about Hacker News stuff that has about two subscribers right now. It's not a very popular podcast, but uh, just learning as much as I can about the technology and, uh, having fun with it, trying to have fun with it, uh, learn more, figure out what the applications are.
Just in my own work, I found generative AI to be incredibly helpful with the tedious parts of coding.
So I can now rapidly, rapidly create, um, API endpoints. A lot of what I used to do is write API endpoints by hand. Took forever. Checking values that come into your system. Unit tests, database queries, uh, there, there's a wide variety of things that are incredibly time [00:07:00] consuming in software engineering that AI can automate reasonably well most of the time.
You still have to check over the results. I catch it making mistakes on a fairly, fairly regular basis. Um, when you get into more complicated things, is not as helpful, but it can still sometimes give you a decent outline. I've been using, lately, I've been using the latest, uh, Anthropic Claude 3. 5, which is the most powerful AI model.
It can do quite a bit of stuff. So I was actually working on a, it's a fun little side project. It's a Tetris game, just Tetris, that everyone's familiar with. You actually play by moving your arms. So it's designed to be a, you know, just a little bit of a, not a workout, but get your blood flowing a little bit.
Uh, so you move your left arm, the block goes left. You move your right arm, the block goes right.
Corey Quinn: You're sitting here flailing like a wacky, uh, wacky arm flailing inflatable tube man outside of tire stores.
Daniel Feldman: Exactly, exactly. And, uh, you know what? It works pretty well, actually. Um, and I was able to get the Tetris game.
Claude knows how to write a Tetris game in WebGL. That took [00:08:00] about an hour, maybe, to get a really nice, basic Tetris game that you play with a keyboard. And then I was able to get the pose detection. There's a pose detection library that Google released. Able to download the docs for that, paste them into, into Cloud, and it could process the pose endpoints and did a fantastic job.
I still had to connect them. That's the thing. And I'm still actually working on some of the details of that. Uh, just getting it to work really smoothly, but it's playable and that's in a couple of days. So just the, the technology really eliminates a lot of the tedium from software engineering, which is what I love because I hate tedium.
Who doesn't? I think the implications on the industry overall, I don't know. I'm not smart enough to predict that. If I could. I would be a lot richer than I am.
Corey Quinn: Yeah, you'd have the same arguments a generation ago about Stack Overflow. If people just show how to do all this stuff online, are people just going to copy and paste their way to success?
It's like, well, I've been doing it for 15 years, so shut up. Yeah, there's a, there's a, there's always the old guard that turns against new technologies and says, Eh, not so much, but there's a, there's a far cry between that and the [00:09:00] breathless enthusiasm. This will transform every aspect of what you do. That clearly every problem that a cloud customer has today is of course a generative AI problem.
The number one GenAI problem companies have for real is that their cloud provider won't shut up about GenAI long enough to fix the things that they actually need that are core and central to their businesses.
Daniel Feldman: Absolutely. Absolutely. And I, you know, I do understand where it's coming from because. You talk to executives at these companies, and they do it on a regular basis.
I have friends who are, uh, you know, VPs at various cloud providers, various SaaS companies, and you chat with them, and, you know, there are several levels removed from the coding, uh, the coding work, of course, there. Uh, maybe they got a start in computer science, maybe they got a start, uh, as a frontline software engineer, but it's been a long time.
And then they go to Cloud, and they type in, uh, you know, write a script that does x, y, and z, that write a, um, you know, an API for, for x, y, and z, and, and it does it in two seconds. And then they start to wonder, well, can my software engineers just do this? Maybe I need fewer software engineers. Uh, it's a very [00:10:00] natural question to ask.
Uh, in reality, the vast majority of my time that I've spent in the, in the working world at, at large companies isn't writing code. Uh, the writing code is, is important. The vast majority of my time has been developing new ideas, advocating for new ideas, uh, figuring out how to implement new ideas effectively and efficiently.
Figure out what the customers actually need. And that work is not automatable. That's, uh, that's very hands on, human centered work. Writing the code, if you automated all the writing code, okay, so I saved 10 hours a week. I mean, that's significant, but that's not the majority of it.
Corey Quinn: Part of the challenge that I keep smacking into When I'm talking to folks about this is that when you talk, the executive level is excited, like, Oh, wow, this will optimize and improve performance of a bunch of things.
But you start digging into how exactly it's, Oh, great, we're gonna be able to get rid of significant swaths of our customer support team. It's really, really, is that how you envision this going out? Honestly, the fact in some cases, you have the work you do for your customer support team implies you need to [00:11:00] fix your documentation.
But even a step beyond that, by the time I reach out to a customer support person, My answer is not going to be on one of the boxes that pops up of the four options I can give. Make me start interacting with it in per in through a chat methodology and okay, but even if you have that moDALL-Ety with a person who's on the other end of like one of those intercom boxes who's actively empowered to fix the problem that I have, I still hate doing it because it feels like it's already a failure of whatever service or product I'm trying to use.
Making me talk to a robot instead of a person is not likely to make the customer experience better. Yes, it saves you some money, but on some level, when you start driving customers away because they get frustrated and decide to go to one of your competitors, have you saved anything at all?
Daniel Feldman: Absolutely. And in customer support, uh, maybe the frontline customer support is a little bit formulaic, but when you get to the backline engineering. They are solving problems that involve multiple components that they've never seen before under a tight timeline, you know, figuring out bugs and that no one has [00:12:00] ever encountered before because the the combinatorial complexity of systems like AWS is just Enormous.
Uh, and then it's interacting with all kinds of third party services. I'll, there's a lot of degrees of freedom there. So I have enormous respect for the customer service people. I think that, uh, in some ways that's more challenging than the software engineering because software engineering, you just make something, but you generally know if it works or not.
Customer support. It's an incredible challenge and I have enormous respect for anyone who chooses to do that.
Corey Quinn: APIs are deterministic in a way that humans are not.
Daniel Feldman: And then the other thing is that I'm just seeing this enormous backlash against AI. When I talk about AI online, like half the people roll their eyes.
Actually, most of my most successful tweets have been Failures of AI, because people like making fun of AI. They like thinking about how it can fail, how it's not as good as a human. For example, last week, I tweeted this. I didn't create this, I found this on the internet. I tweeted this gymnastics competition that was made using a text to video service.
It was just hilarious. [00:13:00] People would disappear, people had four legs, people had eight legs. People would disappear into, into the gymnastics mat. Uh, probably my most popular tweet of all time, actually. It got picked up by a number of different, uh, blogs and, um, A lot of people enjoyed that. And I think the reason is that there's just this backlash.
People are sick of hearing about AI. They're a little bit afraid of AI. They, they really, there is a genuine fear there that it will take their job.
Corey Quinn: Not only is this amazing transformative technology, it's going to put you out of work too. Isn't that exciting? Why aren't people happier about it?
Daniel Feldman: Exactly.
It's a, um, it's a strange place to be in. Uh, I've certainly found that Like, if I tell people I've made this cool website, Quacker News, that makes fun of Hacker News, they love it. And then if I tell them it's made automatically every day using an LLM, uh, they get a little, uh, you know, I don't know.
Corey Quinn: This is also, I think, part of the honest answer is that There's this expectation that people will want to consume something you couldn't even be bothered to create.
And, like, it's going to write my marketing copy for me. [00:14:00] Okay, great. Good for you. But if a human couldn't be bothered enough to craft the message you're sending out, why are you presuming that people will be happy enough, will be thrilled to read and internalize that? It's not lost on me that my newsletter, uh, combined with this podcast, takes roughly a year of human time every time that gets sent out, and I have to be cognizant of that.
Just in terms of how people, how long people spend reading it and listening to it. Where there's this idea that I have to be worthy of that investiture of human time. And yeah, I don't let AI write the dumb things out. I did do an experiment for a while, letting it craft the placeholder text about things.
And every once in a while, it hits it out of the park, not with its insight around what it's talking about, but with a hilarious turn of phrase. That I will then capture and use possibly in a different arena. Like, it never would have occurred to me to say that something was as melodious sounding as a bag of hammers being thrown down a spiral staircase during an earthquake, but I'm definitely going to use that phrase now.
Daniel Feldman: Amazing, but yeah, a lot of the world is signaling. Uh, so You know, why do we do anything? Why do we present [00:15:00] at conferences? Why do we go to college? Why do we apply for jobs? So a lot of it is signaling. If I, if I put the effort in, that shows that I'm, I'm serious about that effort, and I'm serious about that idea, serious about that project, that I, it doesn't necessarily say anything about my capabilities.
I mean, I never use anything I learned in college or grad school. Uh, of course I don't. No one does. Uh, but what it does is it signals that I, I put in some effort at some point in my life where I was confident that I would be working in this field. And when you use AI, of course, it's, it's sending the opposite signal.
It's sending the signal that you're not putting any effort in. So that's, uh, definitely a challenge that I think every company in this space is going to have to really think hard about how to solve that, how to fit AI into society. Because if you use AI to take shortcuts, people, people won't trust you.
Corey Quinn: I found it's been great for generative work as far as building images to put in conference slides.
Otherwise, I'm stuck with bad Photoshop or paying for stock photography, and I am not opposed to paying artists for their work, don't get me wrong. But it also feels a little weird to be paying [00:16:00] for like a stock photo of a data center rack. Like it, like just a hallway in a data center, like that is Great, if I'd had the wherewithal to take a picture of the last time I was in a data center, that would have worked fine.
And especially when you want to put a giraffe in the middle. No stock photographer in the world is going to do a deal with a zookeeper for that photo shoot. But I could make a robot generate that super quickly. And even if I tell it to spit it out in 16 by 9 aspect ratio, now it's the perfect size to put on a slide.
And it works to accentuate the probably ludicrous point that I'm making in that moment of the talk. without me having to sit there and figure out the best way to express that sentiment. I mean, people don't seem to realize for a lot of conference talks, most of my slides are placeholders. So I have something to stand in front of, but it's a talk.
I'm not reading my slides to you. If you can figure out what my, what my point is from a bunch of pictures in my slide deck, great. If you can get to that level, I just narrated something for you and that's not as engaging.
Daniel Feldman: Absolutely. Uh, so Corey, do you know what PowerPoint karaoke is?
Corey Quinn: I love that. I've also [00:17:00] heard it called Ignite Karaoke and a few other terms as well.
I do well at it because I suck at preparing for things. Like, here, like, for those who are unaware, it's, you have a bunch of slides that show up and you're seeing them on screen at the same time the audience is, and your job is to tell a story about it. One of the tricks as a general rule is to have a theme that you're going to go with and then roll with it come hell or high water.
One of the ones I've always used to do is this is why you should come work with me over at Google, which is terrific. It almost doesn't matter what you're going to see. There's something that you can turn it into a joke on some level there. It's a blast, but a lot of people aren't great at that sort of thinking on their feet and they just stand there like, uh, and that's a picture of a cantaloupe.
Like, yes, it is. Good work. Thank you for narrating it.
Daniel Feldman: Anyway, I did a PowerPoint karaoke event, uh, last week, actually. And I have a script, you can find it on my GitHub, that generates the slides automatically, uh, randomly, just using chat GPT, Dolly.
Corey Quinn: This is germane to my interest.
Daniel Feldman: Yes. And, uh, you know, it takes, it [00:18:00] takes two minutes and that is an application of AI where no one expects you to put any work in.
The work is in delivering the slides, it's not generating the slides. And I gave a fantastic, uh, Five minute presentation on how lawn gnomes are taking over the world. Because that's what ChatGPT decided I should be talking about.
Corey Quinn: Are you running critical operations in the cloud? Of course you are. Do you have a disaster recovery strategy for your cloud configurations?
Probably not, though your binders will say otherwise. Your DevOps teams invested countless hours on those configs, so don't risk losing them. Firefly makes it easy. They continuously scan your cloud and then back it up using infrastructure as code and, most importantly, enable quick restoration. Because no one cares about backups, they care about restorations and recovery. Be DR ready with Firefly at firefly. com. AI.
I have a question for you. When you're using this stuff programmatically with the chat GPT chat interface on the website, you can wind up putting it, I can say, give me a picture of a data center with a [00:19:00] giraffe standing in the, on the hot aisle. It'll do that.
But then I ask it to describe the prompt that would generate that image. And it gives me three paragraphs of text. That is much more clear. Uh, when I query DALL-E directly, I need to be a lot more verbose and a lot more specific at what I'm looking for. Unless I want lunacy. Do you find that, is there a decent API endpoint for that these days, or are you doing the multi prompt approach, where step one, ask something like GPT 4 to say, great, turn this brief description into a good prompt that will generate a great image out of DALL-E, and then you submit that prompt.
Daniel Feldman: I don't have a good solution for that right now.
Corey Quinn: Damn it, that was not the right answer. I was looking for a better one that I could implement. Like, honestly, the best answer would be, Oh yeah, there's a script in my GitHub, at which point, clickity clickity, and I've saved myself some work here. You're a crappy AI assistant.
Daniel Feldman: I'll have to figure that out. But ChatGPT definitely develops the prompt considerably further. And then the mysterious thing is that it won't output the actual prompt it inputs into DALL-E. So if I say a picture of a giraffe in a data center, it [00:20:00] then generates some, you know, paragraph long description of a giraffe in a data center.
And then DALL-E will be using Those sentences and do some kind of vector embedding and generate an image. But you can never actually get that middle layer prompt, um, out of the system. And that's, that's by design.
Corey Quinn: When you have it generate a prompt, it would submit to DALL-E and then take that.
Daniel Feldman: Perhaps. I don't know.
I'll have to check.
Corey Quinn: This gets into the borderline, uh, prompt injection. It's kind of awesome.
Daniel Feldman: Yes. Uh, well, the prompt injection stuff, uh, well, that circles back around to security because there's a lot of possibilities for hacking systems in really interesting ways through prompt injection. Just any, any system where you're taking inputs from a user and feeding them into an AI, there's probably a prompt injection attack that's possible if you haven't found it yet.
Corey Quinn: There have been a few ChatGPT system prompts that have been dumped from various places, and what's amazing is that these are the people at the absolute forefront of AI. I mean, don't get me wrong, Anthropic's doing great work too, and Amazon hangs out with some people, but the OpenAI folks are great, and [00:21:00] they are, and even their system prompts are like, in all caps, under no circumstances, and I repeat, like, Okay, even these people at the forefront of this are just like the rest of us.
They're begging the computer to please do the thing and hoping for the best. It's like, wow, that really does humanize machine learning scientists.
Daniel Feldman: Absolutely. And you know what? They're still, uh, they're still using bold and underlining in Markdown in their system prompts deep inside their system that you don't even see, uh, just to try to emphasize things to the AI and make sure it behaves in certain ways.
Corey Quinn: It's funny you mention that. Just before we started this recording, I saw a news, a news article in Ars Technica that I put up there, uh, because this is my cynical approach, uh, I said breaking news, um, Google finally admits defeat on the Google Docs team and acquiesces to doing something customers actually want.
Google Docs now will natively fully support Markdown. It's like, that is amazing. Although I don't, I, I do have some beef with the Ars Technica author who wrote this because they talked about it as an archaic style of [00:22:00] writing. It's like, first, f you buddy. That's how I write. But they're also not wrong because that is how I used to express emotion in IRC.
Back in the 90s. And yeah, you put asterisks around things because you couldn't embolden things. And smileys, uh, as opposed to the modern day emoji, or as I refer to them to piss people off, Gen Z hieroglyphics.
Daniel Feldman: Yeah, Markdown definitely is one of those technologies. It's sort of a meta technology that underlies a lot of different systems in a lot of different ways.
Uh, and if, I have so much respect for the people who, who come up with stuff like that, because, you know, coming up with something that, that's used in a thousand different products is, Uh, really impressive and really challenging.
Corey Quinn: Yeah, the original author of Markdown was John Gruber, daring fireball guy.
And he's the, he's an Apple pundit, but that's one of the early things that he did. It's like, step one, create this amazing thing called Markdown, and step two, declare it complete and never touch it again. And then everyone else is taking the ball and run with it. It's the, it's like one of the base, one of the best transition stories.
Like, uh, another one was the, uh, band LMFAO, [00:23:00] uh, Where they wound up, uh, creating a, uh, an album, uh, the original one, like Party Rock. Then the second one was Sorry for Party Rocking, and then disappeared and never did anything else. It's like, oh, we did a thing, now we're apologizing, and we're done! Yes!
There's something to be said for the performative art element of that. That's a beautiful aspect.
Daniel Feldman: Yeah, uh, similar with tech, the, the typesetting system. I used to use that quite a bit when I was writing papers and that sort of thing. You know, Donald, Donald Knuth created that.
Corey Quinn: Not the most user friendly thing in the world, eh?
Daniel Feldman: No, certainly not. But he started with version 3, and then he had 3. 1, and then 3. 14, and then 3. 141, getting progressively closer to Pi. And at some point after about 10 versions, he just stopped and said it was done.
Corey Quinn: Ugh, don't get me started on the versioning aspect of things. Where it's, okay, great, like, between version 1.
2 to 1. 3, there's a whole bunch of syntactical breaking changes in, uh, in libraries and whatnot. It's, do you think that versions are just numbers that you pick based, based on vibes? And then you look into their history of the project, and yeah, that's exactly what it is. It's based on [00:24:00] vibes. And great, but semantic versioning is a thing.
And yes, I know it's imperfect, but so is everything in this space. And for God's sake, give me a heads up when you're about to upset my Apple cart.
Daniel Feldman: Absolutely. Those, uh, those version numbers never really make too much sense. We actually, just earlier this week, uh, released a, um, IETF working document with a small group I'm working with that was version 0.
0 of the document because we couldn't agree on 0. 1. So I'm right there with you.
Corey Quinn: Oh, I like that. You can wind up getting asymptotically close to things too, which is also weird. The thing that drives me nuts is when at one point there was, I was following some project early on. It was version 3. 9 of something.
Great, okay, so the next one is clearly going to be 4. 0, right? Nope, 3. 10, which is, okay, I get going from 9 to 10, truly I do, but that's not how numbers work. If you do a sort of something that might naively assume it's a number, great, you're going to wind up with, at that point, like two things being the same because 3.
1 and 3. 10 are clearly identical, and [00:25:00] then you have a 3. 11 that's going to go right between 3. 1 and 3. 2, and No, I'm old, I'm grumpy, I don't like it.
Daniel Feldman: Well, that's when you go to semver. org and read about how, uh, how certain people have very strong opinions on how versioning should work.
Corey Quinn: Yeah, this is, honestly, it's one of those hills I'm willing to let other people die on.
But I do like embracing jokes like that, and humor around that, and arguments around that, because unlike a lot of the other stuff that I do, it becomes broadly accessible to people. When I make a joke that AMI is pronounced with three syllables instead of two, the way that Amazonians do, due to, I don't know what, collective corporate traumatic brain injury or whatnot.
But there's this sense that everyone can participate in that joke. And you can have a strong opinion, and of course it is ultimately meaningless. But when you start talking about, arguing about the nuances of deep programming considerations, That's where you need, you must at least be three years into your tech career for the work before you even begin to have an opinion that's sensible on this topic.
That's where it's, I don't like the exclusionary humor nearly as much. I like the things that everyone can [00:26:00] come in and start sounding off about.
Daniel Feldman: Absolutely. And, uh, I'm right there with you about the naming thing. Uh, what is the first company I worked at that used AWS? Everyone there. Probably over a hundred engineers called it Oz, like Wizard of Oz.
As in Wizard of? Oz, yes. Uh, and then, uh, they actually paid for me to go to reInvent, and I was very confused and probably seemed like a fool that year.
Corey Quinn: Well, I would pay not to go to reInvent at this point.
Daniel Feldman: Yeah, uh, well, I think you don't actually have to pay to attend. You can just show up in Vegas, and all the people are there not attending reInvent, wearing reInvent badges.
Corey Quinn: I strongly consider doing that. The challenge is, is one, I love walking the expo hall, because there's a lot of stuff that happens there. And two, I. I find myself in a bunch of weird meetings with folks that is very useful to be able to traverse a hallway where they won't let you in without having the badge.
And I, I also don't want to necessarily, when I'm meeting with clients and whatnot, give the impression that, Oh, I'm just cheap. I'm just here hanging out with all the other cool kids around the periphery, which increasingly is something I think should be normalized. But I have [00:27:00] enough meetings where I just need to talk to people in areas that are tricky to get to.
Daniel Feldman: So I do have a question. How do you always have a suit without any wrinkles? That is just incredible.
Corey Quinn: This is going to amaze people, you know, right? Because I am a very white man in a very techno forward city, but I learned how to use an iron back when I was a kid. And you can use travel irons in hotels. I also know how to pack clothes, which helps.
Daniel Feldman: Okay, well, I like everyone else in tech. I don't think I've ever worn a suit except for a wedding or a funeral, so it's all news to me.
Corey Quinn: That's why I started doing it, to be honest, is I did this as an experiment back in the early noughts. I was working at a company in 2000..., well, not early, 2006, 2007, and I showed up for a week every day wearing a button down shirt that didn't really fit, the tie I barely knew how to tie, and everyone made fun of me for that week.
While simultaneously taking me more seriously in meetings, because there's something societally hardwired, where if someone is in a suit, you give them undue levels [00:28:00] of credence, and, okay, I'll go ahead and roll with that, if that's how people are socialized, why not? It also amuses me to look like a salesperson that got lost on, uh, when I'm getting, walking onto a conference stage, people start to tune out.
And then I wind up going off on some technical topic with my typical sense of humor, because sarcasm is my first language. And people start to realize, Oh, wait, this isn't a sales pitch. This guy is an actual lunatic and I'm here for it. Yes. Thank you. I am too.
Daniel Feldman: It's fun to be underestimated, isn't it? But also everyone should just have some kind of a trademark.
Corey Quinn: I have to ask you, your Twitter following has gotten sizable enough that you're right around the point, I imagine, where I was when I started to get consistently recognized at conferences and whatnot from my Twitter nonsense. Has that started to happen to you yet?
Daniel Feldman: That has happened twice. Not very frequently.
Corey Quinn: It doesn't get better from here. Spoiler!
Daniel Feldman: No, very, very rarely. Fortunately, probably because, you know, I like being a little bit anonymous. Mostly I attend these open source conferences. Everyone's a little bit counter [00:29:00] cultural. They don't really spend that much time on Twitter. So, haven't had too much of a problem there, but it is always kind of fun.
Once a guy actually recognized me at a coffee shop, like down the street from my house, which that was really strange because that was a total Non tech, uh, context.
Corey Quinn: That's where it gets weird and creepy. It's, um, it finally happened where one of the, like, I, so my, my kid, my oldest child is in elementary school and I, I'm just some normal schmoo of a dad at these events by design.
Because if you start giving a tech talk at a dad barbecue, that does not go well for you, I imagine. And I don't, I don't want to hear about work outside of work, please. But I had a couple of dads corner me last time, like, I looked you up recently, and it turns out you really have a big audience. You're well known in this space.
It's like, well, crap. There goes the solitude and peace of me being alone at a dad barbecue. But yeah, it's the It's weird when it starts breaching containment into other audiences you generally wouldn't, wouldn't wind up talking to. It's [00:30:00] fun getting stopped by people in airports, faraway places at conferences, but I also like the fact that I can go to get a carton of eggs at the store in the morning on the weekend and not get recognized when I'm schlumpy in my, uh, in my dad, uh, my dad shirts and my sweatpants.
No thanks.
Daniel Feldman: Yeah, that's another advantage of the suit look, I suppose.
Corey Quinn: Exactly. Oh, and if I don't have my mouth happy mouth, uh, happy smile, my mouth wide open, no one's like, you look familiar, but I can't place it. I do the smile, which is also on my driver's license. And suddenly, I know you. There we go. We take it.
We can get.
Daniel Feldman: Definitely good to have a trademark, you know, a suit, a pink beard, a blue beard, crazy glasses. Love it when people get to express themselves a little bit at work.
Corey Quinn: We do what we must.
I really want to thank you for taking the time to speak with me. If people want to follow along with your adventures, as they should, where's the best place for them to find you?
Daniel Feldman: Twitter, Blue Sky, Mastodon, I'm always posting. Posting way too much, honestly. Always have a bunch of different stuff going on, like I said. Crazy AI projects that [00:31:00] are probably never going anywhere, but are sort of pushing the limits of what I can get away with. So then in the security world, I work quite a bit on this open source project called Spiffy.
We do, um, we do pretty regular Zoom meetings chatting about the technology with people from around the industry. Uh, that's what I'm really passionate about as a, as a day job, I suppose. I did co author a book, which is at spiffy. org slash book. It's called Solving the Bottom Turtle, which is about solving, um, what we call the bottom turtle of security issues, which is, uh, the, the root of identity.
How do you, how do you find a root of identity for these large distributed systems?
Corey Quinn: And the turtle is standing on top of a bike shed.
Daniel Feldman: Absolutely. Yeah, the turtle's not terribly steady these days, but I wrote that with some amazing people from some of the biggest tech companies in the world who are working on that problem.
It's an awesome group. I even have paper copies if you ever run into me in person. Just a self published book. It's not a real book.
Corey Quinn: Oh, it's a real book. Publishers don't have a stranglehold on it anymore.
Daniel Feldman: Anyway, it's a, it's a fun little, um, [00:32:00] exploration of the kinds of stuff I work on. I should probably follow it up with some updates because we're working on a number of big updates to Spiffy that I could, uh, could talk about ad nauseum, but probably shouldn't.
Corey Quinn: And we will make it a point to put those links in the show notes. Thank you so much for taking the time to speak with me. I appreciate it.
Daniel Feldman: Thank you, Corey.
Corey Quinn: Daniel Feldman, amazing Twitter account follow. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five star review on your podcast platform of choice.
Whereas if you hated this podcast, please leave a five star review on your podcast platform of choice, along with an angry comment, probably about how we're calling it Twitter instead of X. And at that point, we're going to have a bunch of follow up questions about where exactly you were on January 6th.