Born & Kepler

Orbit is Born & Kepler’s new AI commentary format — a short, reflective series unpacking how artificial intelligence is reshaping the way we work, build, and think. New editions appear periodically as a written piece and a narrated essay — concise, clear, and grounded in real shifts rather than hype.

The first episode, “AI Is a Step Change, Not a Hype Wave,” looks at why this moment in AI feels fundamentally different from past tech cycles like dot-com or mobile. It argues that AI isn’t just another trend — it’s a structural change in how value is created. From agents embedded in real business systems to Europe’s emerging regulations, the piece explores what separates pilot projects from real transformation — and what leaders should be doing now to move from chatbots to impact.

What is Born & Kepler?

Born & Kepler is named after the physicist Max Born and the astronomer Johannes Kepler. This bilingual podcast, offered in both German and English, dives into the expansive world of Artificial Intelligence (AI), exploring its foundations, evolving technology trends, academic research, and its impact on businesses and society.

Born & Kepler will feature a diverse lineup of experts from academia, venture capital, private equity, journalism, and entrepreneurship, as well as CTOs and policymakers. Each guest offers unique insights into how AI is reshaping their sector and what we might expect in the future.

Our goal is to provide a deep understanding of the core principles and breakthroughs in AI, helping you stay current with the latest advancements in AI technologies and how they are transforming industries. During our episodes, we will explore how AI is influencing business strategies, optimizing operations, and driving innovation. We will also examine the ethical, social, and regulatory aspects of AI in everyday life.

Hi, I’m Orbit. Sit down. Have a coffee. Let’s talk about AI.

Every week, I share one long-form essay — no noise, no hype.

Just a calm, clear look at the technologies reshaping work, life, and power.

You won’t get answers. But you’ll get perspective.

This is one of those essays.

AI is not a hype wave.

It is a step change.

The kind that looks like a toy before it becomes a utility.

The kind that gets dismissed by experts before it disrupts their expertise.

The kind that seems overblown—until it’s everywhere.

We’ve seen this play before.

In 1440, the printing press was mocked. A mechanical tool for monks? Novel, not necessary. Within decades, it restructured Europe. Literacy jumped. Church power cracked. Ideas became infectious.

In the 1800s, steam power was terrifying. Railroads were noisy, dangerous, unstable. But they collapsed time. They collapsed space. By 1870, you could cross America in a week. By 1900, the world was suddenly… connected.

Electricity arrived and didn’t move the needle—at first. Factories still followed old layouts designed for steam. It wasn’t until they rewired everything—literally and organizationally—that productivity exploded.

Then came the computer. And later, the internet. Each dismissed. Each transformative.

AI is next. But it’s different.

Because it doesn’t just amplify effort. It amplifies thought.

Because it doesn’t just automate labor. It automates judgment.

Because it doesn’t wait for you to understand it before it starts changing things.

And it's already happening.

Let’s talk evidence. Not vibes. Data.

One large study of customer service teams found that AI copilots boosted resolution speed by 14%, raised customer satisfaction scores, and—this is key—reduced agent burnout.

Another study: business professionals writing strategy memos with GPT assistance scored 18% higher on quality and finished 40% faster. The kicker? The lower-skilled workers saw the biggest gains.

In software development, GitHub Copilot users complete tasks 55% faster on average. And not just faster—better. Fewer bugs. Cleaner syntax. Less context-switching.

These are not flukes. They are signals.

And the signal is this:

AI is not replacing everyone.

It’s making the average person 2–3x more effective.

That’s not a marginal upgrade. That’s a foundational shift.

But it doesn’t feel like that, does it?

That’s the J-curve.

Every major transformation follows it.

The internet wasn’t an overnight success. It took decades to become infrastructure. E-commerce didn’t eclipse brick-and-mortar in 1999. It got mocked in 2001. Then it quietly ate retail.

Why? Because adoption lags hype.

Because integration takes time.

Because old workflows don’t die easily.

The J-curve explains the lag.

First comes the spike: early promise, exaggerated claims.

Then the dip: disillusionment, skepticism.

And finally, the rise: the moment when systems adapt, habits shift, and productivity compounds.

AI is somewhere between dip and rise.

Executives pilot tools without strategy.

Teachers are told to “use ChatGPT” without guidance.

Developers bolt on LLMs like APIs and expect transformation.

But the curve will rise.

Because beneath the chaos, foundations are forming.

Data pipelines. Workflows. Guardrails. Interfaces.

Rewrites are coming. And when they do, the real curve begins.

Sector by sector, you can already feel the tremor.

Start with healthcare.

A doctor in Lagos uses a smartphone app powered by computer vision to identify early signs of cervical cancer. She no longer waits weeks for lab results. Diagnosis is immediate. Follow-up is faster.

In a U.S. emergency room, an LLM listens as physicians dictate notes. It summarizes in seconds. No more scribbling. No more copying from one system to another. Just care.

Radiology? AI flags anomalies that human eyes miss. Not as a replacement—but as a second set of eyes. Trained on millions of images. Tireless. Precise.

Biotech startups use generative models to design protein structures. What once took labs years now takes hours. Drug discovery isn’t just faster—it’s smarter.

And mental health? Chatbots offer 24/7 support. Not perfect. Not a substitute. But a bridge—for millions with no therapist at all.

The question isn’t “Will AI enter healthcare?”

It’s “What part of healthcare will it not touch?”

Then finance.

Fraud detection used to mean red flags. Now it means pattern recognition on a scale no human could comprehend. AI models monitor thousands of variables in real time, spotting subtle anomalies across global networks.

Risk models update continuously. Lenders don’t rely on credit scores alone—they use AI to factor in alternative data: transaction histories, regional shocks, behavioral signals.

In Southeast Asia, microloan programs run entirely on AI underwriting. No banker. Just data. A fruit vendor gets approved in 3 minutes—not 3 weeks.

Wealth management? A single advisor can serve 10x more clients, powered by personalized recommendation engines, real-time insights, and natural language reporting.

Even regulation is being transformed. AI systems help auditors spot accounting fraud. They don’t just read spreadsheets. They interpret context. They sniff out patterns.

Finance was always about information asymmetry.

AI levels that asymmetry—in real time.

But maybe the biggest shift isn't in tools.

It’s in talent.

AI compresses the career ladder.

A junior product manager using GPT-4 for market research, brainstorming, and strategy outlines can produce work that looks like a mid-level operator’s.

A first-year developer building with Copilot writes code faster, with fewer bugs and built-in documentation. Tasks that once took six hours now take two.

A new hire in a sales team uses AI to draft emails, tailor pitches, and manage leads. She closes more deals—not because she’s better, but because her assistant never sleeps.

This doesn’t eliminate jobs.

But it does distort expectations.

If the novice now performs like the intermediate, what happens to the middle?

If everyone has a sidekick, what’s the new baseline?

And if you’re not using AI… are you even in the game?

…And the curve keeps rising.

Education is next.

Not the school system — the learning system.

The old model: One teacher. Thirty students. One pace. One size fits none.

The new model: Adaptive, personalized, generative.

A history teacher in Ohio uses an AI tool to generate differentiated reading materials. One version for English learners. Another for advanced students. One for the kid who’s two grade levels behind. And it’s all built in minutes.

Homework isn’t just graded. It’s explained.

Essays don’t just get a red pen — they get a rewrite suggestion, a counterpoint, a Socratic nudge.

A tutor for every child. Always on. Patient. Fluent in any language.

It’s not magic. It’s mechanical scale applied to intellectual care.

And teachers? They don’t disappear.

They ascend.

They become designers of learning experiences.

Coaches, not content distributors.

And when the system works, they get their time back — to do what AI still can’t: motivate, inspire, model curiosity.

But this isn’t universal.

Some schools can afford the tools.

Others can’t.

The divide isn’t just digital now.

It’s cognitive. It’s exponential.

And it’s happening in real time.

So the question for education isn’t “Will AI help?”

It’s: “Who gets helped first?”

Then there’s media.

The speed of creation has collapsed.

A one-person newsroom can publish daily. Podcasts are generated with synthetic voices. Clips are cut, subtitled, posted — all by an AI that works around the clock.

And the scale?

News outlets now use LLMs to analyze leaked documents, cross-reference sources, flag inconsistencies. What once took a team now takes a script.

AI is the new intern. The new editor. The new engine.

But it’s also the new liar.

Synthetic media floods timelines. A fake quote here. A generated image there. A video that never happened.

Truth competes with speed.

Credibility competes with virality.

And human judgment becomes the last firewall.

Journalism can scale. But so can deception.

The arms race is here.

Who wins depends on who you trust.

And whether that trust is earned — or engineered.

Government is slower. But it’s shifting.

Regulators in Brussels draft legislation with LLM assistance. Clearer. Simpler. Easier to translate. Policies are modeled, outcomes simulated, stakeholder feedback summarized instantly.

In São Paulo, traffic planners use AI to reroute congestion dynamically.

In Bangladesh, tax documents are scanned, categorized, and flagged — without a single officer lifting a pen.

But automation cuts both ways.

Bias in training data becomes bias in law.

Opaque systems lead to unjust outcomes.

And the citizen doesn’t know who made the decision — or how.

Algorithmic governance needs algorithmic transparency.

That’s not a technical feature. That’s democratic infrastructure.

If we don't design with the public in mind, we’ll end up governed by the invisible.

And AI isn’t just a governance tool.

It’s a geopolitical lever.

Because this story isn’t playing out the same way everywhere.

In the U.S., foundation models dominate — GPT, Claude, Gemini.

Venture capital fuels the pace. Open-source models trail behind but gain ground.

Innovation is messy. Laws lag. Ethics are crowdsourced.

In China, deployment is the game.

AI is embedded in logistics, surveillance, education, e-commerce.

Speed over transparency. National support over public debate.

The models are censored — but the rollout is relentless.

In Europe, the axis tilts toward control.

The AI Act defines risk levels. Fines are real. Compliance is strict.

Here, trust isn’t just nice — it’s enforced.

India, meanwhile, is building for the many.

Voice bots in rural dialects. WhatsApp interfaces for government services.

Education apps that speak to children who’ve never touched a laptop.

Africa is leapfrogging, too.

From micro-finance chatbots in Nairobi to diagnostic tools in Lagos.

The constraint isn’t imagination. It’s infrastructure.

AI looks different on every continent.

But the pattern is clear:

Where there is data, there is lift.

Where there is scale, there is pressure.

Where there is need, there is invention.

No one has a monopoly on the future.

But some are moving faster than others.

And then we come back to work.

Because AI is not just a technology shift. It’s a labor shockwave.

But not the kind we’re used to.

Not mass layoffs. Not sci-fi robots.

Something subtler. Stranger.

A compression.

A junior marketer, using generative tools, produces work that rivals a senior.

A solo entrepreneur automates logistics, customer service, and analytics — and looks like a ten-person team.

A recent grad builds a mobile app over the weekend using a code assistant and AI-generated design assets.

This doesn’t eliminate the ladder.

It flattens it.

The result?

The expert isn’t out of work — but the path to becoming one is murkier.

The mentor has less time — and the mentee has less need.

Middle managers are squeezed. Entry-level roles are unstable.

In a world where performance is augmented, potential is harder to measure.

So we have to rebuild the pipeline.

Not just train.

But re-train.

Re-sequence.

Apprenticeship models. In-house academies. Pairing humans with AI — not to replace, but to accelerate.

And here's the paradox:

AI makes mediocre work easier.

But it makes exceptional work more valuable.

Because when average is automated, excellence stands out.

Now stretch this forward.

What happens when AI doesn’t just assist — it acts?

Agentic systems. Fully autonomous workflows.

An AI that doesn’t wait for your prompt. It sets goals. Tracks progress. Executes steps.

It books your flight, checks your visa, reschedules your meeting, and updates your CRM — without asking.

It’s not coming. It’s here.

In alpha. In beta. Quiet, but real.

And it’s about to hit enterprise like a freight train.

Companies will run leaner.

Startups will launch faster.

Organizations will build products with agents that talk to each other, debug themselves, generate tests, write documentation, and deploy.

We’ll see billion-dollar companies with teams of ten.

But power without judgment is risk.

And scale without accountability is danger.

AI can be biased.

It can be wrong, confidently.

It can flood the internet with noise.

It can be co-opted — by bad actors, by nation-states, by the powerful few.

So we need guardrails.

Not to slow the future.

But to make sure it’s one worth building.

Audit trails. Open models. Data transparency. User consent.

Values, not just features.

Rights, not just settings.

This isn’t just a job for developers.

It’s for leaders. Policymakers. Workers. Students.

Because AI isn’t neutral.

It reflects what we feed it.

And right now, we’re feeding it everything.

So ask yourself:

What will you automate?

What decisions will you delegate?

What data will you clean?

What ladders will you build?

Because we are all designers now.

Of systems. Of incentives. Of outcomes.

This isn’t theoretical anymore.

This is operational.

This is strategic.

This is moral.

AI is not a wave.

It is a step.

A higher curve.

The ground has shifted.

The only question is whether you climb.