Known as xssfox online, Michael is a developer from Australia who recently released Big Buck AWS, a tool that exploits a quirk in how AWS Lambda stores function code to let users host up to 75 gigs of data in AWS for free.
Join Corey and Michael as they discuss why Michael is a “code terrorist,” how to get 75 gigs of free storage in AWS with a tool called Big Buck AWS, how AWS might shut this loophole down or why they might decide to let it ride, what inspired Michael to tinker with this idea in the first place, how Michael looks at each new AWS service and tries to push its limits, why Michael hopes nobody is actually using Big Buck AWS in production, ideas for other tools like this, and more.
Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey Quinn: This episode is brought to you by DigitalOcean, the cloud provider that makes it easy for startups to deploy and scale modern web applications with, and this is important to me, no billing surprises. With simple, predictable pricing that’s flat across 12 global data center regions, and a UX developers around the world love, you can control your cloud infrastructure costs and have more time for your team to focus on growing your business. See what businesses are building on DigitalOcean and get started for free at do.co/screaming. That’s D-O-Dot-C-O-slash-screaming and my thanks to DigitalOcean for their continuing support of this ridiculous podcast.
Corey: This episode has been sponsored by CHAOSSEARCH. If you have a log analytics problem, consider CHAOSSEARCH. They do sensible things like separating out the compute from the storage in your log analysis environment. You store the data in S3, in your account. You know where it lives, you know what it costs, and then they compress it heavily while indexing it. And then they query that data using a separately scalable fleet of containers. Therefore, the amount of data you’re storing is no longer bound to how much compute you throw at it. Breaking that relationship leads to over 80% cost savings in most environments and makes for a sensible scaling strategy, while you can still access your data through the APIs you’ve come to know and tolerate. To learn more, visit CHAOSSEARCH.io.
Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. I’m joined this week by Michael, who is, similar to a previous guest, an Australian code terrorist. Michael, welcome to the show.
Michael: Hey, thanks for having me.
Corey: So, you wrote something a while back that really took some of the serverless world by storm. It’s a GitHub repo that you called Big Buck AWS. What does that do, exactly?
Michael: So, it’s like a tech demo, I guess, of how you can abuse some of the inner workings of AWS, specifically Lambda. It allows you to stream content, like an MP4 HLS stream, without really paying for much data. And it does that by abusing the fact that you can pull your code back out of Lambda. When you upload code into Lambda, they also let you download it again, to view it in the online editor, or so an internal tool of yours can use it. But the way it works internally in AWS is that when you upload it, it ends up in an S3 bucket. You can grab that data back out of the S3 bucket through a signed URL, but that bucket is Amazon’s bucket, not yours, so you don’t get charged for it. That’s a key part of it.
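For anyone following along at home, here is a minimal sketch of the download path Michael is describing, in Python with boto3: Lambda’s GetFunction API hands back a presigned Code.Location URL pointing at the AWS-owned bucket. The function name below is a made-up placeholder, not anything from Big Buck AWS.

```python
# Sketch: pull a deployment package back out of Lambda.
# GetFunction returns a short-lived, presigned URL ("Code"/"Location")
# into the S3 bucket that AWS owns, so the download isn't billed to you.
import boto3
import urllib.request

lambda_client = boto3.client("lambda")

# "video-chunk-0" is a hypothetical function name for illustration.
response = lambda_client.get_function(FunctionName="video-chunk-0")
code_url = response["Code"]["Location"]

package = urllib.request.urlopen(code_url).read()
print(f"Downloaded {len(package)} bytes of deployment package")
```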
Corey: And that always struck me as something very strange. I’m sure there’s a technical reason behind it, but you get 75 gigs of Lambda storage per account, it’s an Amazon bucket, there’s no requester pays or anything like that, and it just never occurred to me that you could pull data back out of it, mostly because, I guess, I lack the appropriate level of code-terrorism-style imagination, but also because it’s something that has always happened under the hood. Oh, that’s where the—the code gets uploaded there, then it magically runs as your Lambda function. I guess, starting at the beginning, what got you thinking down this path?
Michael: I guess the key thing is, I was waiting for a woefully large Lambda function to upload, and I was just looking at how big of a Lambda function I could actually upload, and what sort of limitations there are. So that got me looking at the limits. And that’s when I started thinking down the path of, why is there a 75 gig limit on Lambda functions? There’s probably a good reason for that, and from there, I thought, “Oh, it’s because it’s not hosted in your bucket.” Unlike CodeDeploy or OpsWorks, where you run the infrastructure, it’s actually in Amazon’s bucket, so that’s why they’ve put that cap on it. And then I thought about ways that could possibly be, I guess, abused to store your own stuff for free.
Corey: Oh, it could be abused awesomely. This is also not the first time that I’ve looked into that 75 gigabytes of free storage as an option for, I guess, effectively stealing resources they didn’t think anyone would actually use in order to build something horrifying. Ben Kehoe and I have talked a bit about building a PackratDB, which is: how much of a database can you actually shove into an AWS account without paying for anything? And we’re continuing to explore what that might look like, and this was absolutely the single largest pool of data storage you can get. Everything else is about free tags on resources that don’t cost anything, building an awesome key-value store, etcetera, etcetera. But this one blows away everything else we’ve found so far, as far as what can I get, massive storage-wise, without having to pay for any of it.
Michael: Yeah, certainly. And the other thing is thinking about transfer costs. There are a lot of places you can store data, but you still get charged for the requests to pull it back out. That’s what was a little bit weird with the Lambda functions. And, I guess, the only tricky part is making it usable for an end client without having a Lambda function that pulls it out, does some data mangling to get it into the right format, and then sends it back to the client. So, that’s the only tricky part with using Lambda functions like that.
Corey: Okay, so let’s continue through the demo. You figured out that you could have 75 gigs of data just hanging out there and you could pull it back at no charge. Where do you go from there?
Michael: Right. So, I remembered a blog post, a Medium post by Laurent. That post was about using Google Docs to store video, and they used the HLS format to skip through to the section of the file that actually contained the video content. In their example, they uploaded the video inside a PNG file, and then they could skip through to it. Their purpose was to hide it from Google: Google thought it was just a picture and didn’t try to do any copyright matching on it. From there, I thought, “Hey, I could probably use that inside Lambda to make use of the storage.” I guess the key problem with Lambda is you need to upload a zip file, so you need to somehow have the video content in the zip file but still accessible by the client. For that, what we do is take the zip file and compress it with zero compression. Depending on your zip utility, there are lots of ways of enabling that, but basically you say, zip it up, but don’t compress it. So, the entire file is there, uncompressed; you just need to jump to the right byte offset. And that’s where the HLS stream comes in. You can say, “Hey, just jump to this part of the file,” and you can skip all the zip header stuff.
Corey: Right. So, you have the header itself that’s there, but effectively, where most people are looking for a great compression algorithm, you were looking for the exact opposite of that.
Michael: Correct, yeah. We want something that’s not compressed, because the video client doesn’t know how to deal with the compression. So, yeah, if it’s not compressed, that works better in our favor.
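As a rough illustration of the zero-compression step, Python’s zipfile module can store entries with ZIP_STORED, the equivalent of `zip -0` on the command line. The file names below are hypothetical; this is a sketch, not the tool’s actual packaging step.

```python
# Sketch: build a package whose video segment is stored uncompressed
# (ZIP_STORED), so the raw bytes sit at a predictable offset inside the
# zip and a player can read them directly via a byte range.
import zipfile

with zipfile.ZipFile("chunk0.zip", "w", compression=zipfile.ZIP_STORED) as zf:
    zf.write("handler.py")   # a stub handler so Lambda accepts the package
    zf.write("segment0.ts")  # the raw HLS video segment, stored as-is

# Each entry's data follows its local file header, so its offset can be
# derived from header_offset plus the header's length.
with zipfile.ZipFile("chunk0.zip") as zf:
    info = zf.getinfo("segment0.ts")
    print("local file header starts at byte", info.header_offset)
```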
Corey: Gotcha. So, you wind up, in effect, with one giant object sitting there, but you can also use the byte offset to tell the video player where to get it from?
Michael: Yeah.
Corey: Is that using the byte-range stuff that is in S3’s GET API, or is it using something else?
Michael: Yeah. So, if you do a normal GET to S3, you can use the byte-range header as part of the request, and it will skip through to that.
Corey: Yeah, I think that’s one of those things that people aren’t generally aware exists, where you don’t need to pull the entire object down. You can just say, give me this very specific portion of it.
Michael: Yeah, it’s a very handy feature for more production-like workloads.
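Here’s a quick sketch of the byte-range behavior they’re describing, using the requests library against a placeholder URL; S3, and therefore a presigned URL into Amazon’s bucket, honors the standard HTTP Range header.

```python
# Sketch: fetch only a slice of an object with an HTTP Range header.
# S3 responds with 206 Partial Content and just the requested bytes.
import requests

url = "https://example-presigned-s3-url"  # placeholder, not a real URL
resp = requests.get(url, headers={"Range": "bytes=1024-2047"})

print(resp.status_code)                    # 206 if the range was honored
print(len(resp.content))                   # 1024 bytes
print(resp.headers.get("Content-Range"))   # e.g. "bytes 1024-2047/52428800"
```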
Corey: So, you wound up then putting this behind an API gateway, and then hooking that up to a Lambda function?
Michael: Yeah.
Corey: Have you looked at all into their new HTTP API option, which, I think, is now in beta? They talked about it a lot at re:Invent, but I haven’t had the chance to play with it myself yet.
Michael: Yeah, so, I actually tried to—because I thought this would be a brilliant demo for testing that out. I tried to set it up, and I followed all the steps, and I just could not get the API to return anything but a 403 or a 500 or something. So, I clearly—
Corey: That sounds like most of my early explorations with API Gateway from start to finish, until I started just using something like the Serverless Framework to wrap it for me. I feel like, for a long time, most of what I was getting from API Gateway was just a comedy of errors. It is not the most intuitive thing to learn. And I’m disheartened to hear that that’s potentially what we’re seeing from the new version as well.
Michael: Yeah, I’m not sure. I feel like I’ve probably done something wrong. There’s probably some key part of the documentation I missed, or possibly I just didn’t wait long enough for DNS to propagate or whatnot for the new API. But I quickly—
Corey: Like, who has the patience for that?
Michael: Yeah. Yeah.
Corey Quinn: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at ExtraHop.com/trial.
Corey: Okay, so once this video is up and running, and you give someone a URL that’s fronted by API Gateway, that’s awesome. But which, I guess, viewers or clients can actually understand that?
Michael: Yeah, so one of the problems you have with this is that because it’s hosted in S3, in Amazon’s bucket, they haven’t enabled CORS for us. And, you know, that’s not surprising. But you can play it in basically every modern HLS player: VLC, MPlayer, I even got it running in Windows Media Player. The problem you have is you can’t embed it into a webpage, because of CORS. So, you can link someone to it and they can open it in their player, but you can’t open it in a browser.
Corey: Forgive me if I’m remembering this backwards, but would it be possible to address that by modifying the response payload that winds up being returned, maybe through the Lambda function itself?
Michael: So, yes, but the problem is, the only thing we’re doing in the Lambda response is providing a 302, and it points to the S3 bucket where all the Lambda functions are stored. That’s where the CORS headers need to be, and we don’t [crosstalk 00:12:23]—
Corey: Gotcha. And if we want to continue to make AWS eat the bill for it, there’s not really a great series of answers around it.
Michael: Yeah. So, in theory—and this is where my knowledge falls apart—in theory, I believe there’s a same-origin setting, or an HTML attribute that you could potentially put on the video tag to get it to work, but it seems that none of the out-of-the-box HLS streaming JavaScript libraries that I looked at support doing that. I don’t know if it is possible, but the documentation seems to imply that it is.
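To make the plumbing concrete, the 302 redirect Michael mentions could be sketched as a handler behind API Gateway along these lines. This is a loose reconstruction, not the actual Big Buck AWS source; the route shape and function naming are invented for illustration.

```python
# Sketch (not the actual Big Buck AWS code): an API Gateway-backed
# handler that 302-redirects the player to the presigned URL of the
# Lambda function holding the requested video chunk. Because the
# redirect target is Amazon's bucket, we can't attach CORS headers
# to the final response, which is exactly the limitation discussed above.
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # e.g. a request for /chunk/3 maps to function "video-chunk-3"
    chunk = event["pathParameters"]["chunk"]
    code = lambda_client.get_function(FunctionName=f"video-chunk-{chunk}")
    return {
        "statusCode": 302,
        "headers": {"Location": code["Code"]["Location"]},
    }
```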
Corey: What other limitations do you see in something like this?
Michael: So, you are limited in your chunk size: you can only have a 50 meg file per function. That poses a problem if you’re doing 4K video, because you won’t fit a very long duration of video content in each chunk, and that can cause some issues with clients trying to buffer, because they only try to download the next couple of files, not an amount based on time, so you end up with stuttering. For 1080p video, it seemed to work okay. I haven’t tested it with 4K. I did this all by hand, manually writing the HLS stream file. It’s trivial to automate, but if I were to do a 4K one, I would have hundreds and hundreds of Lambda functions to upload.
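For a sense of what hand-writing the HLS stream file involves, here’s a rough sketch that generates a playlist whose entries use EXT-X-BYTERANGE to jump past each zip’s local file header. Every URL, offset, duration, and length below is an illustrative placeholder.

```python
# Sketch: emit an HLS playlist whose segments live inside zip files,
# using EXT-X-BYTERANGE ("<length>@<offset>") to skip the zip headers.
# All values here are made up for illustration.
segments = [
    # (redirecting URL, byte offset of video data in the zip, byte length)
    ("https://api.example.com/chunk/0", 97, 9_000_000),
    ("https://api.example.com/chunk/1", 97, 9_000_000),
]

lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:4",          # byte ranges require protocol version 4+
    "#EXT-X-TARGETDURATION:10",
]
for url, offset, length in segments:
    lines.append("#EXTINF:10.0,")
    lines.append(f"#EXT-X-BYTERANGE:{length}@{offset}")
    lines.append(url)
lines.append("#EXT-X-ENDLIST")

with open("stream.m3u8", "w") as f:
    f.write("\n".join(lines))
```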
Corey: I’m guessing this is probably not intended for anything remotely resembling production use?
Michael: Most certainly not. I imagine Amazon is going to do something to block this somehow. There are a few ways I’ve thought of that they could do that, but I don’t know what approach they’ll take, because they do have to maintain backwards compatibility with—
Corey: Right, they view APIs as promises, as they love telling us. So the question then becomes, what could they change to make this a non-workable solution, and of that list, which options are viable without having to break existing functionality?
Michael: Yeah, so the two that come to mind: they could put Requester Pays on the S3 bucket, and that would basically just mean you’d start paying the cost of downloading that data. I’m not sure if that’s something they would do. The other one that comes to mind is if they did the zipping themselves, so the package is actually compressed. I’m not sure how that would go internally, or whether it would break things. It would fix using this for HLS, but it could still be used for other data purposes.
Corey: Yeah, it’s one of those interesting areas where it would solve this particular use case, but there’s nothing preventing someone from building out something like this that just says, “Oh, run this one-liner, and you’ll get 75 gigs of storage per account.”
Michael: Yep.
Corey: Yeah, it’s interesting to see how this might wind up, I guess, influencing AWS. I mean, there is always the option where they just decide that this is an acceptable loss. They made something like, what, $32 billion in revenue last year. Yeah, if people want to go through these kinds of hoops, okay, they won, we’ll let them. Unless they start seeing widespread abuse of this, which, frankly, I kind of have a hard time envisioning, I don’t know that this is necessarily going to be at the top of their list of things to chase down.
Michael: Yeah, I’m not certain about that because, I guess, one of the use cases for this is video piracy. It could potentially be used to pirate movies and stuff as they come out. And it means that the person doing the pirating, or hosting the pirated content, apart from having their Amazon account shut down, doesn’t really risk spending a huge amount of money. But the other use case I kind of thought about—I mean, I just did this for fun, I had no use for it—but the other use case I thought of after I built this was those times when media outlets have a huge story they want to release to the world, but they know they’re going to take a pretty big hit in hosting costs for it. They could just quickly do this, and that would save them probably millions if everyone’s looking at the same video content.
Corey: Right, effectively a free CDN, to some extent. The counterpoint is I can’t really see any reputable media organization going down this path. It just seems like it would be a little bit too far toward the, “What do you folks think you’re doing?” model.
Michael: [laughing], Yes. Yeah, certainly. Yeah, I can’t imagine anyone doing it, but I don’t know, any way to cut costs, I guess. [laughing].
Corey: As soon as one person does it, at that point, I feel like they’re no longer able to ignore this as just some weird proof of concept someone on the internet threw up.
Michael: Yeah, certainly, yeah. I imagine this will last a while. It’s definitely not—I imagine they’ll monitor it, maybe run some analytics on it, and then at some point, once it gets to a tipping point where it’s worth changing, they’ll look into it.
Corey: Right. And there’s always the customer-unfriendly approach where they just solve this entirely in their terms of service, and after they find egregious users and start effectively turning off their AWS accounts, the message would probably get out. It feels like something that a company that isn’t Amazon would be likelier to do.
Michael: Yeah, yeah. I imagine, based on some of the previous examples of this sort of, I guess, code terrorism, they’re more likely to just eat the bill until it becomes a huge problem.
Corey: I feel like I need to highlight yet again, this is not something that people should use for production. For a while, I was giving a talk called “Terrible Ideas in Git,” and I had a Docker container published and ready to be used for it, just because resetting a whole bunch of Git repositories after you’ve mangled the hell out of them is obnoxious. Just run a Docker container every time you give the talk, and things are great. The container was called Terrible Ideas, and I’m sure someone was using it for something in production, because people do terribly stupid things without any rationale, similar to the time I started making jokes about using Route 53 as a database, and people started responding with, “Well, that’s not the worst idea in the world. What if we did it like this?” And it’s no, no, no, no, no. At some point, the joke takes on a life of its own, but you kind of want to at least keep the sharp edges away from people who may not understand exactly what it is they’re doing.
Michael: Yeah. And that’s why I tried to put up a fairly decent write-up on how it works, and also the limitations. Sort of a disclaimer to say, “Hey, this probably won’t work in the future.” But I am very scared about how many stars this has gotten on GitHub. And there are apparently three forks of it already. So, hopefully, no one’s actually using this in a production sense.
Corey: One would very much like to hope. The counterargument, though, is that people will always surprise you with what ridiculous things they’re doing. Back when I was doing open-source development work on SaltStack, I figured that, oh, the problem clearly is that everyone who used Puppet, or Chef, or anything like that just wasn’t very good at what they do, and the tooling was inadequate. We’ve built this thing; it’s going to be amazing. And that lasted right up until I saw my first customer use case, where, oh, it turns out that anything is a hammer if you hold it wrong. It’s difficult to get people to see the vision, and I feel like the things you build never survive encounters with other people’s horrifying use cases.
Michael: Yeah, a lot of the things I have built in the past have been, I guess, horrible, horrible things. Mostly just for fun, just seeing how far you can push a tool. I guess an example of that is, I once built a Lambda function that works as a custom resource in CloudFormation: it creates a Mechanical Turk question and provides access keys and secret keys. So you can, in free text in your CloudFormation template, just say, “Hey, can you build me an S3 bucket?” and it will fire off a Mechanical Turk request and ask someone on Mechanical Turk to build the S3 bucket for you.
Corey: On some level, you have to wonder at what point they just automate a lot of these common solutions into an AWS Solutions option, or a Quick Start, versus how much of it becomes something like AWS IQ, where you can effectively pay people a few bucks to do common or uncommon things, as the case may be. I would not put it past this being wrapped into an official AWS offering at some point.
Michael: Yeah, yeah, I can imagine.
Corey: So, I’m assuming this is not the kind of thing that springs fully formed from, “You know, I’m going to go online today, on my first day on the job, and go ahead and build something like this.” Where does it come from? Where did you, I guess, start down the path of thinking about creative uses of services like this?
Michael: I always think about limits and how I can make use of them, get to the boundary. That limit is set; how do I get right up to that edge and make the most of it? So, I guess, every time I look at something, I look for a limitation. I guess the prime example is S3 when they first released it: I’m pretty sure they only charged for data transfer. When they first released it, they didn’t have any billing for GET requests, or HEAD requests, or OPTIONS, and all of that stuff. All those requests weren’t billed. So, I heard a story, a long time ago, about someone who essentially used that as a database, because those requests were free and they weren’t really grabbing any data out of it. That’s when Amazon had to add that limitation, and I’m like, “At some point, I really want to be doing that. I want to be the reason why Amazon puts in that limit.” So, every time I look at a new service when it’s released, I look at the limits and try to work out how I can use it to its fullest potential, in ways Amazon never actually planned on it being used.
Corey: Right. I want to be the exception case, how do I make that possible?
Michael: Yep, exactly.
Corey: And you always think, well, no one would actually go to the trouble of doing that stuff. Well, have you met me? Your Lambda function is a whopping 36 lines, all in, in Python. This is not a massive amount of code. It’s not anything that is overly complex. I think it just requires looking at these things from a certain point of view that, very often, the people building them never considered.
Michael: Exactly. And I feel like there are probably a few other cases in Amazon where this approach, not this exact code but this approach, can be applied. I haven’t been able to find those; it’s just that I happen to use Lambda enough that I’ve worked out its inner workings a bit more. But I’m sure there are other places where you can upload data and get it back down for free that could be abused, maybe not for video streaming, but at least as a free database.
Corey: You almost start to wonder, okay, what is the upper bound of data you can attach to an AWS support ticket?
Michael: Yes.
Corey: Because it does have an API. One other thing I thought was kind of neat, too, right around the same time this came out: someone did a whole write-up about how anything outside of the handler function in a Lambda ran with the full 3 gigs of resources and 2 vCPUs, and wound up not being billed, or something on that order, where, until it entered the handler, it was either unbilled or billed at a small fraction of what would otherwise be charged. Do you remember what I’m talking about?
Michael: Yeah, I’ve seen that and read that. If I remember correctly, on that cold start, it has full power, full memory, to set everything up. It’s designed so big Java applications and whatnot can quickly get started, so that cold start time is really short, but you can abuse that by running your code outside the handler. A lot of the stuff I do professionally, we do a lot outside that handler, just setting everything up, but I never really thought about using that extra time. I have wondered, though, could you expand that to be more useful, as a clustered, distributed computing system? It’d be really cool to see that expanded on, because Amazon gave it the tick of approval to say that’s fine. So…
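The pattern they’re referring to looks roughly like this: work placed at module scope runs once during the init phase, which has historically been generously provisioned, while the handler body runs, and is billed, per invocation. The expensive_setup function here is a made-up stand-in.

```python
# Sketch: module-scope code runs during Lambda's init (cold start) phase,
# which historically ran with full CPU regardless of the configured
# memory size; the handler itself runs per invocation.

def expensive_setup():
    # hypothetical heavy one-time work (parsing, warming caches, etc.)
    return sum(i * i for i in range(10_000_000))

PRECOMPUTED = expensive_setup()  # runs once, outside the handler

def handler(event, context):
    # per-invocation work stays small
    return {"statusCode": 200, "body": str(PRECOMPUTED)}
```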
Corey: Yeah, they said, “Have fun.” Multiple folks who are in a position to be authoritative said on Twitter, “Yeah, go for it. See what you can build.” So, all right, challenge accepted. This is the danger of goading on people who have very little sense of, “Oh, I shouldn’t do that. They wouldn’t like it. Oh, no.” At some point, you’ve got to get portions of that AWS bill back.
Michael: Yes, yes, certainly. Amazon is definitely not losing out on this. If you’re using any of their services, they’re winning.
Corey: Absolutely. I’ve yet to see a single exploit like this that didn’t result in, “Yes, and it winds up causing a slight discount on my bill that is already a phone number.” It’s… this is very much a rounding error, even per user account, for most of these things.
Michael: Yes.
Corey: So, we’ll see. So, if people want to discover more of your various acts of code terrorism, where can they find you?
Michael: They can find me on Twitter. So, @xssfox. X-ray, Sierra, Sierra, Foxtrot, Oscar, X-ray.
Corey: Excellent. Thank you so much for taking the time to speak with me today, I appreciate it.
Michael: No worries. Thank you.
Corey: Michael, code terrorist at undisclosed location for obvious reasons. I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave an excellent review on Apple Podcasts. If you’ve hated this podcast, please leave an excellent review on Apple Podcasts.
Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com or wherever fine snark is sold.
This has been a HumblePod production. Stay humble.