The FWDstart Podcast is a weekly show at the intersection of venture capital, startups, and strategic industries shaping the MENA region. Each episode features candid conversations with founders, investors, and operators behind the region's most ambitious companies, from frontier AI and fintech infrastructure to climate tech, construction, energy, and space.
FORWARD START PODCAST – EPISODE 3
Arya Bolurfrushan, Founder & CEO, Applied AI
---
[00:00:00] ARYA BOLURFRUSHAN: This is gonna be the biggest change in human productivity since hydrocarbons. If all GDP is number of people multiplied by productivity, if this can at least 10x productivity, everything changes. There was one very famous investor that I won't name. We had a fundamental disagreement where he said that humans will have no work whatsoever – 0.0000. No blue collar work, no white collar work. Everything will be automated. In his words, humans are born crying and will die crying, and they have to find a way to entertain themselves. And I asked him, well, what about the geopolitical consequences? He says, yeah, one or two generations of war and they'll get used to the new norm. This guy manages more than a trillion dollars. And I was like, I disagree with you wholeheartedly, and I will fight to make sure you do not win.
[00:00:43] JAMIE LANE: I'm delighted to say that my guest today is Arya Bolurfrushan. He's the founder and CEO of Applied AI, the company behind Opus, a platform enabling enterprises to design AI-native workflows. The company raised a $42 million seed round back in November 2022, just days before ChatGPT launched. And then earlier this year raised a $55 million Series A led by G42 with Palantir and Bessemer also involved. We get into why he sees AI as the biggest productivity shift since hydrocarbons, his concerns over the transhumanist belief that humans could become economically obsolete, why it is that productivity gains must be shared with labor to avoid social unrest, and how it is that Opus keeps people in the loop as supervisors rather than passengers. I enormously enjoyed this conversation with Arya. I learned a ton as always, so I hope you find it as valuable as I did.
[AD BREAK – Sarwa & HubPay]
[00:03:36] JAMIE: Arya, very welcome to the podcast.
ARYA: Thank you, Jamie.
JAMIE: One of the things that struck me most during the course of the research for this was the timing of your seed round. I think it was November 2022. Another seismic event in AI took place a week after you announced the $42 million seed. Was raising before ChatGPT an advantage or a disadvantage in hindsight?
[00:04:04] ARYA: Very good question. We often say we were doing AI before it was cool. Because right now when you say you have an AI startup, people are gonna roll their eyes – just another AI startup. But we've been at it for a while, and it is hard to put yourself back into the mentality of our species back then. But I remember everybody was very excited about NFTs and crypto...
JAMIE: It was Web3 season.
ARYA: Yeah, it was. And we were going around from boardroom to boardroom with customers and investors, banging the table saying AI is gonna be the biggest change our species has ever seen. Our conviction was pretty solid that this is gonna be the biggest change in human productivity since hydrocarbons – maybe even fire. So we went with "Applied AI" as the company name. It was kind of anti-hype naming – hardcore applications. We had around 16 of our own proprietary models.
[00:05:03] We showed that no one cares how you do it, right? No one cares if it's an LLM or ChatGPT. No one buys AI – at least on the enterprise B2B side, you buy the results that AI generates. You're supposed to, at least. So we had the traction of showing what that means. People want things faster, cheaper, higher quality. And then the downside was we had to throw away a lot of our models, or at least take the fine-tuned ones and test them against bigger LLMs. So it was a pro in that we had really tested our product-market fit, and it was a con in that we had an existing tech stack that had to be revisited.
[00:05:43] JAMIE: What was the conviction born out of? What had you seen?
ARYA: It was COVID. I had just moved back and I was in quarantine for 18 days. Everything was taken away. You're just by yourself. And it was very early – March 2020. So it was when I had it and Tom Hanks had it and Idris Elba... people didn't know what it was. I was getting goodbye messages from friends. My parents were freaking out. People thought it was very serious. I was in the middle of a desert here, and I had a lot of time to think about where I was, where we are.
[00:06:23] I was like, okay, what's my highest conviction belief? I had some history in the oil and gas space, so I knew this one commodity changed our species in a very big way. I highly recommend reading The Prize by Yergin. I think it's a great book on looking at modern history through hydrocarbons. I had a technical degree from Carnegie Mellon, so I'm obviously not an expert on pretty advanced stuff today, but at least I know what's real and what's not real. And I started playing around with some models.
[00:07:02] And that moment – that GPT moment of "I did not say do this and it did it" – when you're just like... that was very distinct. And I remember thinking: what is the rarest commodity in the universe? Initially I thought it was love. Let's not get into that. But there's also some level of love in animals. Maybe it's intelligence, right? And if you can manufacture intelligence, then we're manufacturing the rarest thing in the universe.
[00:07:18] I'm a big history buff. If you look at human productivity for hundreds of thousands of years, it's pretty stable. Only so much you can do with your arms and your legs. Until we found this magical elixir in the ground, where everything we see around us – this microphone, this chair, this glass, the equipment videotaping us – is plastic. This is all manufactured in the past 150 years from one discovery. And human productivity went vertical.
[00:07:53] But it still was like almost artificial muscles, right? We automated the physical world, but it would break out of that anytime there was an exception. And I remember thinking about these power plants β you had Homer Simpson in the control room pushing buttons. And now we're automating the intelligent realm. That was our single domain of how we became dominant as a species.
[00:08:18] I remember thinking: if all GDP is number of people multiplied by productivity, if this can at least 10x productivity, everything changes. And the more I scratched the surface, the higher the conviction became.
[00:08:33] JAMIE: It's kind of the Robert Solow paradox β you can see the computer age everywhere but in productivity statistics. And I think I was definitely struck by that in your framing of the company as well. It's this anti-framing as it relates to AI hype. Does that resonate with you? Productivity is a word you mentioned a few times there.
[00:08:51] ARYA: Yeah. If you visit our office in Abu Dhabi – this was three years ago – we put up one big neon green wall light that says "Turbocharge Productivity."
[00:09:01] Back then, no one was really thinking about productivity. I think it was a Nobel Prize-winning economist – I forget his name – who said that in the short term, productivity is nothing; in the long term, it's everything. And there's almost a divine aspect to it, where output per input, output per human – the divine spark is not being squashed by the friction of admin.
[00:09:24] I often say, how many ideas died on the vine of friction? The friction of admin. How many Beethovens didn't have a piano? So imagine human creativity unleashed and what that productivity can actually be. This is something we've been on about for a while.
[00:09:38] JAMIE: You're suggesting that data entry in an Excel spreadsheet isn't the pinnacle of humanity?
ARYA: Yeah, soul-crushing. There's parts of your job that you hate and parts that you love. Like traffic – it's soul-crushing. And the Boring Company is trying to solve that.
[00:09:53] There's one thing I remember – looking back at the first visuals we were drawing. Using oil as an analogy: it's useless. It's like a sludge of black goo. If I give it to you right now – great, what are you gonna do with it? But it's the applications of hydrocarbons. Petrochemicals, plastics, glass. There's so much that can come from it. To make it useful, it needs to become something.
[00:10:30] The other analogy we had was electricity. It literally can kill you. It's so intangible. It took a company like General Electric to make really boring applications of electricity. And think about the word they invented – "appliance." A dishwasher and a washing machine. That had tremendous impact, because washing clothes was like Excel entry – completely soul-crushing. The emancipation of women followed shortly afterwards. Half of our population was all of a sudden liberated from a soul-crushing task, and then they got the right to vote. Think about the productivity gains from that.
[00:11:03] JAMIE: Talk to me about the seed round itself. $42 million. It's a lot of money for a seed round at that particular time. I think my favorite angel is Garry Kasparov, as a chess aficionado. What was the product like at the time? How did that come together? Were you pitching on traction or was it vision?
[00:11:20] ARYA: A few things. Number one, with Kasparov in particular, we went after knowledge work. And he often claims that he was the first knowledge worker whose job was at risk with AI.
JAMIE: Not wrong.
ARYA: But he's also always been a kind of AI optimist, and he was helping us as our internal head of logic for a while.
[00:11:38] Back then, when you mentioned the word AI, your sales cycle became like 5x as long because you became an education company. Boards would bring you in, management would ask what AI is, the head of legal would come to understand what that means. So we became a kind of secret AI company.
[00:11:54] For example, our first product was called Deep Doc. We had a very strict guideline of what use cases we would look for. The mental model we have: if you think of the x-axis as time and the y-axis as the price of knowledge work – before the internet, the price of knowledge work was higher than the cost of onshore labor. That was the world as we knew it. And the cost of onshore labor is not an arbitrary number – it's regulated. There's minimum wage laws. The area under the graph was the economic profit.
[00:12:37] After the internet, the world became flat and we could access the cost of offshore labor. Very quickly, the price of knowledge work dipped below the cost of onshore labor. I remember my first thesis in undergrad and first master's was: will BPO take over the world? Is this the end of knowledge work as we know it?
JAMIE: Globalized labor is going to...
ARYA: Yeah, and there was massive concern. So outsourced knowledge work is not new. It was like a mechanical Turk. We'd just send the prompt to a bunch of humans in another country, sleep, wake up, and it was done.
[00:13:15] That was the beachhead we found – people are already doing this. It's just that the price of that outsourced knowledge work, the time it took, and the quality of it are being collapsed. We found these very niche use cases and focused on high-cost-of-error, highly regulated spaces.
[00:13:34] For example, with Deep Doc, it was independent medical examinations, charged by page. Outcome-based pricing. It would be between 18 and 36 cents per page, take around 48 hours to come back, and was around 90% accurate. We came to market offering 5 to 15 cents a page, less than 24 hours turnaround, and 94% accuracy. We didn't even say it was AI. People just said, okay, it's faster, cheaper, and better – I'll buy the product.
[00:13:55] That was how we started. Our main metric was productivity – that was the North Star. Output per person per day, which is the inverse of gross margin. With that product, we proved we could go up to 5x. A human pre-us would do around 1,800 pages a day. That was the cognitive load a human could withstand without losing the will to live. We went to over 6,000 pages and eventually 8,000 pages a day per person.
[00:14:35] We proved that works from a unit economics perspective, which meant we had pricing power. Then we said, okay, are we a one-hit wonder or can we do it again? We did it with pharmacovigilance – again, an extremely boring use case. Our requirement was: if your eyes don't gloss over and you don't lose focus when talking about it, we're in the wrong industry. Deep, boring, almost commoditized knowledge work use cases. Once we had those two proof points, then we did our seed round.
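The Deep Doc figures above are easy to sanity-check. Here is a quick back-of-the-envelope sketch; the numbers are the ones quoted in the conversation, and the helper function and midpoint comparison are purely illustrative:

```python
# Back-of-the-envelope check of the Deep Doc figures quoted above.
# All numbers come from the conversation; the helper is purely illustrative.

def productivity_multiple(pages_before: float, pages_after: float) -> float:
    """Throughput gain per person per day."""
    return pages_after / pages_before

# Throughput: ~1,800 pages/day/person before, 6,000-8,000 after.
low = productivity_multiple(1800, 6000)
high = productivity_multiple(1800, 8000)

# Pricing: incumbents charged 18-36 cents/page; Deep Doc launched at 5-15.
incumbent_mid = (0.18 + 0.36) / 2  # $0.27/page midpoint
applied_mid = (0.05 + 0.15) / 2    # $0.10/page midpoint
discount = 1 - applied_mid / incumbent_mid

print(f"throughput multiple: {low:.1f}x to {high:.1f}x")
print(f"midpoint price reduction: {discount:.0%}")
```

The midpoints land at roughly a 3.3x-4.4x throughput gain and a ~63% price cut, which is consistent with the "up to 5x" and "faster, cheaper, better" framing.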
[00:15:11] JAMIE: For people who are unfamiliar with Applied AI and Opus the platform – it's on your t-shirt, on your chest at the moment.
ARYA: In my heart.
JAMIE: In your heart as well. It reminds me a small bit of Palantir in terms of being somewhat opaque – I don't mean that in a disparaging way, just externally. It's technical. It's not supposed to be a consumer product. Can you explain in plain language what it is, what it does?
[00:15:39] ARYA: I'm happy you raised that, because this is something we're trying to fix. What we were selling before was the machine. We're selling now the machine that builds the machine. We're automating ourselves. How do we build a new workflow quickly? How do we empower somebody who knows how that process works to build their own workflow? That's what Opus is.
[00:16:08] What we realized was every enterprise has to rewire with AI, which is a daunting task. Every new startup needs to wire with AI natively and build new processes. Usually the people who know the process the best β doing it every single day β aren't the ones building it. So there's this huge amount of frustration and signal loss between somebody who knows the process and has to go to centralized IT. They speak different languages. Explain the problem, what they want, how AI should potentially solve it.
IT is overworked with hundreds of different use cases. Find the time. Bid for resources. Run requirements gathering, non-functional requirements gathering, as-is process mapping. Then do an RFP for a POC where other vendors come in. Vendor selection. Then they go into production – and that takes at least nine months.
JAMIE: That's tough to listen to.
ARYA: And the person who owns the process is so frustrated. There's this love-hate business relationship because the business guy will change his mind, and the IT guy says, "Well, you said you wanted this." And he's like, "Yeah, but things changed." It's a very brittle process with snapshots of requirements. After nine months, when the business person gets the final product, they say either "This is not what I wanted" or "Even if it was, it's what I wanted nine months ago."
[00:17:25] So how do we collapse this design time and bring it to the edge? How do we empower the non-technical business owner to very simply build a compliant, reliable business process that's AI-native? Crazy tall task. Because this has to happen not once but a thousand times in an enterprise. Repeatable and durable. Centralized governance, decentralized innovation.
[00:17:47] The biggest trade-off is: how do you have design-time flexibility and runtime reliability? That's the holy grail. And that's what Opus strives to be. It should empower a non-technical business user to, in natural language, prompt and discover the process in their industry, in their geography, in their small niche – to be emergent in recommending best practices.
[00:18:10] That's called business process re-engineering. By the way, how this is done in the old world is this alphabet soup full of jargon: business process reengineering, business process management, business process services, business process outsourcing, business process mining, business process intelligence...
JAMIE: Tying yourself up in knots with that.
ARYA: It takes $3 to $4 million per process, which is what a lot of competitors charge. Opus can collapse all of that into just "what do you wanna do" – and then do it. Opus stands for Orchestration Platform for Universal Services. Opus is also "work" in Latin. We think it can stand on top of your tech stack. We call that liberation from the tyranny of point solutions, where it can help orchestrate your entire stack. Integrate with whatever you're using, read and write access with maximal governance. User permissioning at the data layer, at the read and write layer. But all of that is invisible behind the simplicity of the process owner just saying what they want, building it, and being able to iterate 30 times in one day.
[00:19:30] It won't be perfect. It'll take you 80% of the way there. And then in natural language you can iterate β integrate this, change that, do this.
[00:19:40] And finally, Opus is built for the human to stay in the loop. A lot of our friends are sacrificing our species at the altar of gross margin. We think, at least in highly regulated spaces, supervised automation is the solution where there needs to be some human that assumes the liability of that work.
[00:20:07] We don't think it's a last-mile solution where the AI does everything and then you check. Your productivity will go down that way because you spend more time looking for the compound error across 75 AI agents that it took to do something.
[00:20:22] Our solution is a mid-mile experience where the AI agent works, the human checks, the AI works, the human checks. It's this dance back and forth where productivity per human in the mid-mile increases by up to 20x, because you're only doing the approving.
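The mid-mile pattern he describes – agent works, human checks, agent works again – is essentially a control-flow choice. A minimal sketch under stated assumptions: the step functions and reviewer callback here are hypothetical placeholders, not Opus's actual API; the point is only the shape of supervised automation versus last-mile review.

```python
# Minimal sketch of "mid-mile" supervised automation: the agent completes one
# step at a time and a human approves (or corrects) before the next step runs,
# instead of reviewing only the final output of many chained agents.
# The step functions and the reviewer are hypothetical placeholders.
from typing import Callable, List

def run_supervised(steps: List[Callable[[str], str]],
                   review: Callable[[str], str],
                   payload: str) -> str:
    """Alternate agent work and human review instead of checking only at the end."""
    for step in steps:
        payload = step(payload)    # AI agent does one unit of work
        payload = review(payload)  # human checks/corrects before continuing
    return payload

# Toy usage: two "agent" steps, a reviewer that simply signs off.
steps = [lambda p: p + " | extracted", lambda p: p + " | summarized"]
approve = lambda p: p  # a real reviewer could edit or reject here
result = run_supervised(steps, approve, "claim-123")
print(result)  # claim-123 | extracted | summarized
```

Because the human sees each intermediate state, an error is caught at the step that produced it rather than hunted for in the compounded final output.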
[00:20:38] We launched Opus Self-Serve on October 1st in beta. Now there's no demo – you can go in, try it, and build. It has bugs, but it's a beta. It should be stable in a month. The hope for next year is that a completely non-technical user – unlike with Palantir or anybody else out there – can go in, engage with that robustness and power in simple terms, and radically reduce time to value.
[00:21:14] JAMIE: So we're vibe-workflowing, in many respects. It's a similar sort of mission. Obviously vibe coding is now rife with negative meaning to a degree, but it's about getting it to that point.
ARYA: And by the way, I really feel for centralized IT teams. It's like being the contractor on your house – everyone wants small tweaks, and they change their minds, and there are change orders. They're up against a very difficult task because they have to do 15,000 workflows. A bank we're working with right now has 2,000 point solutions.
[00:21:38] So we empower them to centralize governance – it's vibe-workflowing on the process side, but with extremely rigid controls and governance by IT that lock in requirements after the business is done vibe-coding. They can do all the iterations they want, all the intention changes and requirements changes. Once they're happy with the full workflow, they hit activate, and only then is it locked and sent to IT for approval. It solves both problems.
[00:22:17] JAMIE: Talent and hiring. The market is a bit nuts at the moment. You guys are incredibly proficient from a research perspective as well – I read a couple of the papers you've published. How do you assemble a research team that can be so competent on the research side but also shipping products simultaneously? You're not just a research lab – it needs to be applicable.
[00:22:42] ARYA: We really aim to be the home for the most talented folks, at least in the region. We've so far hired less than 1% of folks that applied. Real A-players thrive with other A-players, and you have this expectation of excellence across the whole organization.
[00:23:00] We're obviously a small startup, but the hope is that having Applied AI on your resume becomes something you're proud of because of the intensity of the work. Our work hours and work ethic really clear out people that have a life.
JAMIE: What is the culture like?
ARYA: I challenge anybody to go to our office at any time of night and not find engineers there. We have a beautifully hardworking team. Weekends. Midnight, almost every day. Extremely dedicated, very young team who works on first principles.
[00:23:39] We have managers who are at least 20 years younger than the people that report to them. We don't have this old-age hierarchy. It's a complete meritocracy. We hire from hackathons.
[00:23:50] We also learned a few lessons. Because this is such a revolutionary new technology, experience can oftentimes be a liability β where you start trying to build software the old way.
JAMIE: You have this inbuilt debt.
ARYA: Yeah. And it's hard. It's no one's fault. It's just that your disposition is your experience. You hedge your ambition based on the speed of software development at the time, on feature releases, on how long things used to take.
[00:24:23] Whereas the person who doesn't have that maturity overpromises but oftentimes overdelivers by a factor of five. Someone who's hedged from the beginning on the promises made...
[00:24:40] Low twenties – a lot of our engineering team, a lot of folks in their first job. They're completely AI-native. A lot of them live together. We hire regionally, completely on merit.
[00:24:59] Our research side is almost like applied research – not research for research's sake. Literally the things we're trying to do, we hope somebody else did the research, but no one did. So we had to do it.
JAMIE: Born out of necessity as opposed to for its own sake.
ARYA: Yeah. And it helps us on training and onboarding new engineers because we say: here's the way we look at what a workflow is, how do you quantify a good workflow, how do you capture intention.
[00:25:15] Because often people say, "I want to add an AI agent to my existing process." We think that's not the best question. The best question is: where do I add a human to an AI-native process? To do that, we have to really understand what's the job you want to get done – not your process. What is the intention you're trying to achieve? What are all the constraints – legal constraints, policy constraints, idiosyncrasies of your particular organization? And then we will generate an AI-native workflow irrespective of your old process.
[00:25:55] That is a crazy moment for a lot of people. So many false constraints or self-limiting beliefs they had. Our processes were built in the '90s and early 2000s. They were built for the limitations of the human mind. And the human mind is incremental by nature and linear.
[00:26:07] If we give a person with the best of intentions a blank canvas with a low-code, no-code tool to build an AI-native process, they'll end up recreating their old process again. It's so hard to go from a local maximum to a global maximum because you don't know what you don't know.
[00:26:27] That workflow generation piece is very difficult. How do you capture the signals of intention and then traverse our work knowledge graph of all the best practices amongst all the industries and find and optimize based on various options in the graph of where you could go? That's why our research helps answer that question. A lot of our engineers on the research side oscillate between building and research.
[00:26:55] JAMIE: You talk about capturing intention. How difficult is that? You're somewhat contingent on the ability of the individual to explain. What level of detail are we talking in a prompt? These are obviously some very complicated workflows. Is there an average length, a particular structure that works best?
[00:27:16] ARYA: One thing we've built and are releasing is this kind of assistant – almost like a business process consultant that can help you get there. Usually you start with one line: "Vendor onboarding for a chemical manufacturing plant." And then it'll ask you where – in the UK? – and it knows UK law, it knows your space. And then it'll ask: what do you use currently? What are you optimizing for?
[00:27:39] Over time, the discovery tool helps you build that prompt. Is there a perfect prompt? No. But it'll build it out as much as it can. And then when you see it, you iterate very quickly. In one day, in one sitting, in an eight-hour stretch, you can do probably three or four generations and 30 iterations.
[00:28:00] By the time the day's over, you have a solution in production. You can only iterate on it when you actually put an input and get an output. Then you say, "Wait, this wasn't what I wanted," and you start writing. The playground gets very real very quickly. Time to iteration is where the magic really happens. If you're too prescriptive, oftentimes it's not helpful.
JAMIE: It's the problem you described β you're inhibited by your own set of constraints related to what you've seen in the past.
[00:28:28] ARYA: I can make it very specific if you want, just to give you a flavor. Take independent medical examinations. Every time you have a health insurance claim that's in dispute – you claimed, it was denied – there's a lot happening in that space in the US. In most states, almost all states, it has to go and be adjudicated by an independent physician on the medical necessity of the claim. Workers' compensation – do you go back to work, do you get paid?
[00:28:44] So what happens is they subpoena all of your providers and you end up with a Frankenstein PDF of like 30,000 pages of your time series on your whole life. And a poor doctor needs to sift through that.
[00:29:07] How it works today, or how it did work: they forward that PDF by email to a BPO abroad. They first sort it, which is very hard – all these different types and duplicates. Every page has three or four different dates: the date of incident, date of report, date of test, date of birth. Figure out which date is the most instructive. Then it's sent to be summarized. Then it goes back.
[00:29:28] Sorting is probably 80% of the $5 billion spent in this space in the US. Our initial models were built around sorting, boundary detection, and duplication. And then when we started with generation, it just skipped the entire sorting step. We were like, "Wait, wait a minute, you missed it!" It's correct. And we realized: why are we sorting? We're only sorting so the human mind can summarize.
[00:30:09] But if in the summary, wherever you click, it'll go back to the original form β then you don't even need to sort.
JAMIE: You don't need to conceptualize from there.
ARYA: It's just inference. That's probably millions of man-hours that we're doing for an intermediary step that's not needed. So they went from a day to five minutes. It went from 5 cents to 2 cents. It just completely collapsed that whole industry. That's just an example of business process reimagination.
[00:30:41] JAMIE: One of the phrases you use quite frequently is this idea of the Large Work Model versus the Large Language Model. What's the difference?
[00:30:50] ARYA: Another insight we had was human work is not infinite. Which is sad to hear sometimes, but it's finite. The number of things we do is finite by definition. The number of business processes are also finite.
[00:31:05] Our in-house estimate is that around 2.7 million business processes exist. That's it. That represents around 80% of jobs. Obviously the last 20% of every process is unique, but there's only so many ways you can pay an invoice. There's only so much you can do with accounts receivable. Vendor onboarding only has so many different manifestations.
[00:31:21] So we said, how can we start mapping this in some work knowledge graph where each node is a workflow? We couldn't find any central repository out there. We spoke to many of the leading firms in the world – we're willing to buy this, surely somebody has mapped this before. Either they didn't want to share it with us or they didn't have it. But they were interested in buying it from us, which makes me believe...
[00:31:42] So we started mapping and indexing through the Opus Data Project. We've mapped probably around 1.4 to 1.5 million of them already.
JAMIE: Well, you're getting there.
ARYA: Yeah. And it's fascinating to see. We can show you at some point – it's a 3D visualization. Every node is a process. You can pick industries and see this fingerprint of an industry: this is banking, this is insurance, this is healthcare. Or you can say "vendor onboarding" and see the ones in every industry. There's so many best practices around that. You just start seeing human work in fascinating ways. And our Large Work Model is trained on that – to be able to help you generate a workflow off that.
JAMIE: Is it built exclusively on your own foundational models or do you have model flexibility?
[00:32:43] ARYA: We're agnostic. We think that part of it may over time become more and more of a commodity. So we sit on top of it and actually pick the best LLM for every task or agent.
[00:32:55] JAMIE: Given your own background in oil and gas, it seems like compute is quietly becoming the new unit of power. Gigawatts are nearly replacing dollars. How do you see that shift yourself?
[00:33:11] ARYA: I think it's fascinating. It's like energy in, intelligence out. And I think we're in a very advantageous spot to have access to energy.
[00:33:22] At scale, the limiting factor will become energy access. And this is not new. Energy has always been fought for as the unit of progress β whether it was the Industrial Revolution, World War I, World War II.
[00:33:44] One thing we're very passionate about is sharing the productivity gains with labor. Because we didn't before. It accrued to capital owners. And we had a book called Das Kapital that had a problem with that. If you read it again, it's actually quite a good warning tale – the alienation of labor, the accruing of excess profit, excess gross margin, to capital versus the people doing the work.
[00:34:07] A lesson we learned: it took us two world wars to process the Industrial Revolution as humanity. In that revolution, labor was still getting minimum wage. In this one, it could be zero.
[00:34:27] You only grew as fast as your access to energy. OPEC – the great game was access to energy. I think that's only going to intensify. And I think Opus is not the solution but a solution that can share the productivity gains with labor, where the human in the loop can charge.
[00:34:45] If our productivity increases by 10x, and the Solow model says GDP is productivity times number of people, then the amount we produce as a species... The fundamental basis of economics is just pricing of scarcity β demand and supply. But if we have a lot of supply, then in a world of abundance, what do we price? What is scarce? What is rare? Where is value?
[00:34:58] From a first-principles perspective, what's scarce is time. And for some reason, we unitized labor by time. I think that will change. Labor will go to more outcome-based pricing. It happened with Uber with transport. Energy in, unit of labor out. The conversion of energy to salary will be the big algorithm.
[00:35:35] JAMIE: Can we talk about pricing with respect to that? You referenced on the site that you're 36 times cheaper than onshore BPOs. What are the most convincing ROI metrics for clients when the product isn't time?
[00:35:50] ARYA: We're trying to be as first-principles as possible. And pricing is hard.
JAMIE: Harder than ever, arguably.
ARYA: If you're a clone company and your business model is proven, you just replicate pricing models. But if you're actually innovating, you actually have to ask the question of how to price.
[00:36:08] We were inspired by the automobile revolution, where the first cars had all this – cylinders and RPM and 6,000, 12,000. We're seeing that now with "7 billion parameter model" and "20 billion" – and no one knows what that means.
[00:36:26] So they unitized the engine. "This is a hundred horses." I know what horses cost, I know what that maintenance costs. It's not clear what kind of horse, male or female, how old they are. But we still use that term today because it captures a unit of value.
[00:36:51] We're using man-hours in a very similar vein. This particular process historically cost you 10 man-hours to do. Now your only question is: how do I allocate those man-hours? Do I buy them from humans or do I buy them from AI agents? How much will a human charge me for a human man-hour? $36, or more, depending on where you are and how hard the task is. We charge $1 to $1.50 for an AI man-hour.
[00:37:12] So now you can choose your split, the man-hour mix: 85% AI agents and 15% for human supervision and review, where every human becomes a manager.
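The man-hour pricing Arya describes can be sketched as a simple blended-cost calculation. This is an illustrative sketch, not Opus's actual pricing logic: the $1.25 AI rate is a midpoint of the $1 to $1.50 range he quotes, and the function and variable names are my own.

```python
# Cost of a process priced in man-hours, split between AI agents
# and human supervisors, using the figures from the conversation:
# ~$36 per human man-hour, ~$1.25 per AI man-hour.

def blended_cost(man_hours, ai_share, ai_rate=1.25, human_rate=36.0):
    """Total cost when ai_share of the man-hours go to AI agents
    and the remainder to human supervision and review."""
    human_share = 1.0 - ai_share
    return man_hours * (ai_share * ai_rate + human_share * human_rate)

# The 10 man-hour process from the example, human-only vs. the 85/15 mix.
human_only = blended_cost(10, ai_share=0.0)    # 360.0
hybrid = blended_cost(10, ai_share=0.85)       # ~64.6

print(f"human-only: ${human_only:.2f}, 85/15 hybrid: ${hybrid:.2f}")
```

Under these assumed rates the 85/15 split cuts the cost of the process by roughly a factor of five, while keeping a paid human reviewer on every task.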
[00:37:33] If you think about enterprises: it's people, process, technology. And people and process would try to fit into technology. Whereas now we're gonna have technology and process fit into people.
[00:37:43] JAMIE: Do you think we're increasingly going to see budgets reallocated? Something like Opus coming out of the labor budget as opposed to the tech budget?
[00:37:52] ARYA: We want it. We think so. Tech spend is like 5% of budgets. Everyone's fighting over that. But labor is the biggest part of the budget.
[00:38:12] If you think about what you're buying as an enterprise β let's say you have a thousand employees, you're buying 2 million man-hours. What you're producing as a company needs 2 million man-hours. Your hybrid workforce is: how many of those man-hours am I buying from humans? How many am I buying from AI agents? We think over time that mix will be a kind of AI adoption metric. By the way, we'll do that math for you. It's coming up soon.
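The hybrid-workforce math above implies about 2,000 man-hours per employee per year (1,000 employees buying 2 million man-hours), and the "AI adoption metric" is just the share of those hours bought from agents. A minimal sketch, with the 1.7 million AI man-hours as a hypothetical figure:

```python
# An enterprise of 1,000 employees buys ~2,000,000 man-hours a year
# (~2,000 hours per person). The adoption metric Arya sketches is the
# share of those hours bought from AI agents rather than humans.

employees = 1_000
hours_per_employee = 2_000              # implied by 1,000 people -> 2M man-hours
total_man_hours = employees * hours_per_employee

ai_man_hours = 1_700_000                # hypothetical: 85% of the work on agents
adoption = ai_man_hours / total_man_hours

print(f"total: {total_man_hours:,} man-hours, AI mix: {adoption:.0%}")
```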
[00:38:32] JAMIE: What do you think is a question that every AI company should be asking itself right now?
[00:38:37] ARYA: I think the biggest question is: are you being pennywise and pound foolish?
[00:38:45] If we don't think about... at the end of the day, governments are here to represent people. It's not a free market. There's no AI representation in government. Also, we are here to improve the lives of people. So are you on the right side of history? I'm an AI maximalist, fully. But in the service of humanity.
[00:39:03] There are some in the transhumanist camp where, even if you're acting purely in self-interest, fast forward five or ten years: if there are mass protests, it doesn't matter what your valuation is. You won't see any of it.
[00:39:28] JAMIE: Are you concerned that that's a possibility?
[00:39:31] ARYA: Yeah. I think we have some deep reckoning to do. I understand all the arguments that every time something new comes up, we have the same questions and then we find new jobs are created. When neighborhood bookstores went out of business, there's the Amazon warehouse where you can work. All that stuff.
[00:39:51] But I think in this case... there was one very famous investor that I won't name that I had a major argument with when I pitched Opus. We had a fundamental disagreement. It's the first time I think he got a bit heated in the room. He said humans will have no work whatsoever. 0.0000. No blue-collar work. No white-collar work.
[00:40:15] Everything will be automated. And in his words, humans are born crying and will die crying, and they have to find a way to entertain themselves while they're alive. And I asked him, well, what about the geopolitical consequences? He says, yeah, one or two generations of war and they'll get used to the new norm.
[00:40:34] This guy manages more than a trillion dollars. And I was like, I disagree with you wholeheartedly, and I will fight to make sure you do not win.
[00:40:52] And I think this is not a joke. I'm under no illusions: Opus is not the solution, not the grand answer. But at least it's a solution. At least there's a home. When there are 1 billion workers in the world earning $12 trillion in salary, when a hundred million of them go out of a job, at least Opus can be a home for a few of them. At least we're building some respite while maintaining the imperative for margin expansion and productivity increase. Not at the expense of capitalism, but in service of it.
[00:41:15] By the way, I am a capitalist. So I believe this is actually the best product. Because if you do want to regulate: if your daughter's health insurance claim gets rejected by an AI and your daughter dies because some algorithm, some coder in San Francisco, made a mistake, who are you gonna go after?
[00:41:42] Who do you regulate? You're in the UAE: are you gonna sue someone in San Francisco? Or if you're in Bahrain, or if you're in Laos, whose fault is it?
There is a definition of consciousness, which is the ability to suffer. And to be able to regulate, you must be able to inflict suffering for non-compliance.
[00:42:03] I think regulation is gonna be at the very edge of application. It's not gonna go all the way up. So someone in that insurance claim example needs to reject the claim and sign.
JAMIE: There needs to be human culpability in the loop. It can't just be discounted.
[00:42:19] ARYA: So there is a role for us. And I think it's a beautiful role. We're the supervisors. We're the managers.
[00:42:20] If you think about the two visions: one is dystopic, where we're replaced and have lost meaning in our lives because we have no work. We feel as if we're a burden and have no way to contribute to this world. We protest and are angry. In the Western world, our vote is the only ammunition we have to inflict vengeance.
JAMIE: Which increasingly feels like the case.
ARYA: Yeah. And then the other vision is where you're empowered. Your productivity rises, you contribute so much more. All your creativity is unleashed and you're getting rewarded and paid for it. That's another vision. I really hope we build for that one.
[00:43:16] JAMIE: It feels like we're woefully ill-equipped for that future at the moment though.
ARYA: And that's what a place like the UAE, I think, will do very well. I think democracies are gonna have a little bit of a tough time.
[00:43:27] JAMIE: What time horizon do you think we're talking about before we see that level of mass layoff? We've seen companies jump the gun.
ARYA: Last week Amazon laid off 30,000 people. It's a lot. I think things happen much quicker than people realize. Exponentially. At least hiring is gonna stop.
[00:43:50] The first-order churn's gonna happen. A lot of the entry jobs are gonna be gone. And if you have no entry jobs, how do you get the senior people in?
[00:44:00] But I think if we change labor to outcome-based productivity and variable-based comp: like, why should I pay you more if it takes you two hours? That model will die. I'll pay you more if it takes you two seconds.
[00:44:13] So if we empower labor to compete in a post-AI world by giving them the best tools and paying them by outcome, then even if you're paid 95% less per job (which means the consumer benefits all the way down), doing 40x more jobs means you end up getting paid more.
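The arithmetic behind that claim checks out: 95% less per job at 40x the volume nets out to roughly double the income. A minimal sketch, where the $200-per-job price and the job count are made-up illustrative figures:

```python
# If outcome-based pay drops 95% per job but better tooling lets a
# worker complete 40x as many jobs, total income roughly doubles.

old_rate_per_job = 200.0   # hypothetical pre-AI price for one job
old_jobs = 10              # hypothetical jobs completed in a period

new_rate_per_job = old_rate_per_job * 0.05   # paid 95% less per job
new_jobs = old_jobs * 40                     # 40x the throughput

old_income = old_rate_per_job * old_jobs     # 2,000.0
new_income = new_rate_per_job * new_jobs     # ~4,000.0: roughly double

print(f"old income: ${old_income:,.0f}, new income: ${new_income:,.0f}")
```

The ratio is what matters here: 0.05 × 40 = 2, so any starting price and volume give the same doubling, as long as the consumer-facing price per job really falls 95% while throughput scales 40x.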
[00:44:34] So I think that's the vision to build for. And I just hope we share the productivity gains with labor.
[00:44:45] JAMIE: Yeah, there's gonna be a lot of inequality, amplified even further. Why are you doing this?
[00:44:57] ARYA: It is very hard, Jamie. It's like being faced with demons every day. But I have a 19-month-old son now, and I just keep wondering what world he's gonna inherit. He's gonna ask me one day, "What did you do in those days?"
[00:45:16] His best friend's probably gonna be an AI. He will never be in a room where a human is the smartest. He'll never experience that reality.
JAMIE: It's ridiculous. Increasingly. Yeah.
[00:45:33] ARYA: And I think, the same way four or five years ago I had the intuition that AI would change everything, I don't think we're taking this as seriously as we should.
[00:45:49] In the extreme, take the USS Enterprise: we won't have Scotty. There'll be no quality control. But what we do have, and I think maybe this is our purpose as a species, is morality control as opposed to quality control. The captain of a ship is making moral decisions. And all the ancient teachings, the ancient wisdom, are about morality. So maybe we go back to the basics.
[00:46:12] There are two schools of thought. One is the Western school of thought, which is more rationalist: to understand the meaning of the universe, you have to understand external things. Very science-driven. You have to understand the physics and science of things to understand the universe.
[00:46:33] And there's the Eastern school of thought, which is: to understand the universe, understand yourself. That's a very internal morality, right and wrong.
[00:46:50] I think if AI is better than us at science and the world of mathematics and external things, there's gonna be a serious moment of introspection. What are we here for? And I do think it'll merge towards more of a meaning and morality point of view.
[00:47:05] JAMIE: Well, my existential crisis has not necessarily been alleviated by this conversation. But we're on it.
ARYA: But I think the call to order is: guys, we can build the future the way we want it.
JAMIE: Oh yeah.
ARYA: Yeah. Thank you very much. This was awesome.
JAMIE: Thank you. Cheers.
---
END OF EPISODE