DevOps and Docker Talk: Cloud Native Interviews and Tooling

Alex Chalkias of the Canonical MicroK8s project joins Bret and they dive into this easy and powerful Kubernetes distribution that had some major updates in 2020, including high availability.

Show Notes

Alex Chalkias of the Canonical MicroK8s project joins Bret and they dive into this easy and powerful Kubernetes distribution that had some major updates in 2020, including high availability. Since the interview, the open-source community around MicroK8s has been releasing versions regularly, including keeping the Kubernetes versions up to date and continuing to add new add-ons to their one-line install list. It's getting quite impressive at this point, including Kata Containers, OpenEBS, and KEDA (Kubernetes Event-driven Autoscaling). Streamed live on YouTube on November 12, 2020, Ep 101.

Unedited live recording on YouTube Ep 101

Microk8s website
Canonical Ubuntu Kubernetes web page
Multipass install page
Ubuntu YouTube
Charmhub website

You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news!

Grab the best coupons for my Docker and Kubernetes courses.
Join my cloud native DevOps community on Discord.
Grab some merch at Bret's Loot Box
Homepage bretfisher.com

Creators & Guests

Host
Bret Fisher
Cloud native DevOps Dude. Course creator, YouTuber, Podcaster. Docker Captain and CNCF Ambassador. People person who spends too much time in front of a computer.
Producer
Beth Fisher
Producer of DevOps and Docker Talk podcast since 2019. Assistant producer on Bret Fisher Live show on YouTube. Business and proposal writer by trade.
Editor
Cristi Cotovan
Video editor and educational content producer. Descript and Camtasia coach.

What is DevOps and Docker Talk: Cloud Native Interviews and Tooling?

Interviews from Bret Fisher's live show. Topics cover container and cloud topics like Docker, Kubernetes, Swarm, Cloud Native development, DevOps, SRE, GitOps, DevSecOps, platform engineering, and the full software lifecycle. Full show notes and more info available at https://podcast.bretfisher.com

You're listening to DevOps and Docker Talk, and I'm your host, Bret Fisher.

These are edited, audio-only versions of my YouTube Live show that you can join every Thursday at bret.live.

This podcast is sponsored by my Patreon members.

I'd like to thank all the paid supporters that make this show possible.

You can get more info and follow my updates on all the content
and open source I'm creating at patreon.com/bretfisher.

And as a reminder, all the links for this show, the topics we discuss, as well as the links I've already mentioned, are available on the podcast website at podcast.bretfisher.com.

MicroK8s is one of my favorite Kubernetes distributions.

It's easy to install.

It's lightweight, and it has a great command line interface for configuration, managing nodes, and installing add-ons.

The Microk8s project is managed by Canonical.

You know, those Ubuntu people.

But it works on any Linux distro that supports snap packages, which is a lot of them.

In November 2020, I had product manager Alex Chalkias on the show to talk about the newly released multi-node high availability feature in MicroK8s.

Which means you can now start considering Microk8s as an option for fully redundant cluster setups.

Since that interview, the open-source community around Microk8s has been releasing
minor versions about every four months, including keeping the Kubernetes versions
up to date and continuing to add new add-ons to their one-line install list.

That's getting quite impressive at this point, including Kata Containers, OpenEBS,

and KEDA, or K-E-D-A, the event-driven autoscaler.

I pulled this year old interview out of my 2020 live show archive because it continues
to be one of my go-to k8s distributions because of its easy setup and tear down.

And I hope this show convinces you to give it a shot.

Now on with the show.

welcome to the show.

Thanks for having me.

Yeah.

So I'm excited for you to be here.

You actually reached out to me on Twitter six or eight months ago, something like that.

I think like six months ago or something.

Yeah.

And we started talking about MicroK8s and how much I'm using it in my courses, and I enjoy using it.

It's a really easy Kubernetes distribution.

We started throwing around the idea of, like, how do we get you on the show; you asked me what help I needed, and so it's been really great.

I've enjoyed having the support of Canonical to make sure our students are getting what they need, because, you know, we have hundreds of students a day actually starting the courses, and I think we're now up to like 15,000 students a month in the courses.

So that means that there's a lot of people having to
set up MicroK8s and learning Kubernetes at some point.

So tell me a little bit about your background, how you got into Kubernetes and this MicroK8s thing.

I have a software engineering background.

I finished my studies at the University of Patras in Greece, and then I started working for Nokia in Athens, still as a software engineer. The telco space is a really particular industry, to say the least; a lot of learning telecommunication protocols.

I did sort of maintenance and operations as a first job.

And then I moved a little bit more into software architecture and 5G and SDN projects, cutting edge.

And then, yeah, I felt that I'm more into organizing technical teams rather than doing the tech work myself, so I slowly evolved into a product manager, and I joined Canonical 11 months ago.

I had some theoretical knowledge around Kubernetes, but I really got my hands dirty with MicroK8s.

It's been actually really useful for me to use the product itself to learn Kubernetes.

I think it's one of the easiest ways for people to onboard to their Kubernetes journey.

And now we even have like enterprise-grade features and we're considered to be production-ready.

So it's like really the end to end story we want to tell.

Yeah, because when I first learned about MicroK8s a couple of years ago and tried it out, it was quite a different tool than it is today.

And I think back then it was single node only, you know, it wasn't
necessarily bragging about being like a production-ready tool at the time.

And it was kind of early days, but it's changed a lot, and I keep seeing updates from you (thank you) on major new features.

So I'm excited to get into some of those details in the show today.

I have been saying Micro K 8s for a long time, mostly because K eight
was the original way I was saying the mini version of Kubernetes.

And now we have all these different variations on not just how to say Kubernetes, but every distribution of Kubernetes, and, you know, the thousand ways there are to say kubectl.

So for those of you out there wondering if we call it different things,
that's just because it's the Internet and, you know, there's the official
way to say it maybe, but the rest of us all have different ways of saying it.

So you're okay with your way of saying it and I'm okay too.

So over the course of 2020, there have been multiple updates to MicroK8s.

So for those that have taken my courses, they have learned
some of the basics about installing it on one machine.

They basically just do the simple install command, which we can get to, but what else is beyond that now?

And I know that they could enable DNS and a couple of other built-in features like ingress and whatnot, but what's happened this year?

So, a lot of things have happened.

I think the first major story was the native installers for Windows and macOS.

So in the beginning, as many people who have used K8s on Windows and Mac know, you needed a virtual machine to host a Linux OS and then deploy Kubernetes on top of Linux in the VM.

What we have done for Windows and macOS is effectively provide a native installer, which means that if a Windows user was to click on this little Windows icon here, they can get the installer for Windows, download it on their Windows machine, and then run the GUI wizard exactly the same way as one would do for any other Windows application.

And then, after the setup is complete, there is a lightweight VM with Multipass, which I guess we'll discuss in more detail along the way.

And we host MicroK8s inside that little VM.

What it means is that all the complexity of setting
up a VM environment is removed from the end user.

You can click on the downloads and you get all the install
instructions for your Windows set up and likewise for Mac.

So our goal with MicroK8s is to remove all the complexity of setting up Kubernetes, because we feel that it's really going towards commoditization.

Everybody needs a Kubernetes nowadays, and we want to help users just get on
with it and just focus on where the value lies, which is container development.

So they don't need to care about the Kubernetes itself, and we think we've got it right in taking care of all the intricacies that come with deploying the platform.

Yeah.

So that's about the Windows and Mac installers.

So, if you click on the Mac icon, you would get all the guidelines on how to install on Mac.

And then we have had the general availability of the HA feature.

So, highly-available Kubernetes is something really necessary for production.

We have been supporting multi-node instances for quite some time now, I think since late 2019.

But now, as you install the latest version of MicroK8s, it comes with HA as a default.

And yeah, we're super excited about having this really enterprise-grade story to enable our customers and businesses to drive their container use cases on top.

So, what I used to tell people was, you know, a year ago I was saying,
first you need a Linux machine and you can do that with Multipass.

So you're telling me that on Mac and Windows, because if I'm on a Ubuntu machine or any Linux
machine, I can use the snap, which is like an easier way to install a package right on Linux.

I can use that to just have Kubernetes natively on my machine.

That's actually one of the distinctions I try to make, because students
have a question about the difference between Minikube and MicroK8s.

And I talk about Minikube being sort of a virtual machine manager
where MicroK8s doesn't require a virtual machine on a Linux box.

You basically provide it any Linux machine and it installs natively on it.

Okay.

But now on Windows and Mac, they don't have to go get Multipass separately,
they can just download a GUI, kind of like the Docker desktop experience, right?

Where they can just they install this thing and then it works out of the box.

Since we've talked about Multipass several times, let's just talk about that,
because that was another one of my favorite tools and I use it all the time.

So what is this thing, Multipass?

I sometimes have a hard time with the elevator pitch of trying to tell someone
how this is different from like VirtualBox or, you know, Hyper-V or whatever.

So they get confused about what this tool, the problem that this solves.

So, it's pretty much a similar experience that Multipass provides.

For us, the elevator pitch of Multipass would be like the easiest way to get an Ubuntu VM.

It doesn't need to be an Ubuntu VM, but that's really where we come from.

You can, again, install it pretty much anywhere. With MicroK8s, you don't need to care about the separate installation anymore, as of the native installers.

So it's two-for-one, right?

Is that what you're saying?

If you're doing this for MicroK8s purposes, just go ahead and download the MicroK8s bundle.

Exactly.

Yes.

We have appliances in Multipass, so you can sort of have your specific images that you want, with the OS and the applications enabled within Multipass.

So you can launch your VM, that would include like
a specific set of configurations and applications.

It's really super easy.

I think CLI-wise it's probably one of the easiest virtual machine managers that I've seen around.

I've used many of these, especially on Windows, but for Linux and the CLI
I think Multipass is really, probably one of the easiest solutions there.

Yeah.

And that's one of the reasons I recommend it nowadays.

For those of you that have taken one of my courses, you've heard of the Docker Machine application. It was a CLI tool that is no longer supported; actually, it still works, but it's not being maintained, so it's time for us to find a new tool. But it solved the problem of: how do I spin up multiple VMs to create a quick Docker cluster that I can run Swarm on?

Or I just needed a couple of different Docker machines. What it was really doing in the background was downloading and auto-provisioning a tiny little Linux VM, and then pre-installing Docker on it, essentially.

So it saved you a couple of steps, and it's certainly way faster than the old school days of, you know, downloading an Ubuntu ISO, right?

And then make a custom VM, then mount that, go through the whole installation next,
next, you know, picking my keyboard, all those different things I have to choose.

And then maybe, 20 minutes later, I have a VM that runs Ubuntu.

Whereas Docker Machine solved that, but it's going away, and it was only for Docker.

It didn't help you with Kubernetes.

It didn't really work with Kubernetes, didn't have any way to install Kubernetes.

So about a year ago, I started to use this as my default way for basically just provisioning any Linux VM.

Anything I needed locally on my machine, whether I was on Mac, Windows, or Linux, it would just figure out the right way to do that for my OS, without me having to pick the proper virtualization technology or whatever driver I had to deal with; it would just choose all that for me. And then within that, it would download, I think, around an 800 MB ISO file.

It basically downloads the pre-configured latest Ubuntu and you can specify different versions.

I think they might even show that example here, but I started using it.

So if I went to the command line now and did a multipass list, I would have eight different VMs: one for each version of Kubernetes over the last five versions, one that's just Linux without Kubernetes, and one where I might be running an old version of Ubuntu, to basically simulate a production server I have that's on, you know, a 14 release or a 16 LTS, something like that.

So for those of you out there, this show isn't meant to be about Multipass.

If you need a way to spin up Linux VMs quickly, I love this tool.

So definitely check it out.
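If you want to try the Multipass workflow described here, the day-to-day commands look roughly like this. The VM name and sizing flags are just examples, and flag spellings can differ slightly between Multipass versions.

    # Launch an Ubuntu 20.04 VM with a name and some resources
    multipass launch 20.04 --name k8s-sandbox --cpus 2 --mem 4G --disk 20G

    # See every VM Multipass is managing
    multipass list

    # Drop straight into a shell inside the VM, no SSH setup needed
    multipass shell k8s-sandbox

    # Tear it down when you're done
    multipass delete k8s-sandbox
    multipass purge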

I just learned before the start of the show from Alex, that it's now bundled into MicroK8s.

So if you're just trying to get some local Kubernetes, then
you maybe don't even need to go download Multipass manually.

I'm curious though: if I've installed MicroK8s from here, do I get the Multipass command line?

Is it like a full Multipass install?

Yeah, it is a full Multipass install.

The install happens only on Windows and Mac because on
Linux, you don't really need a Linux VM on top, of course.

But yeah, you get the full Multipass; you can see the little icon running in your services, and you get the Multipass CLI. Effectively, you can use either the host's native CLI to run your Linux commands, or the Multipass CLI.

So it's really up to the user.

And then in chat, or let's see, let's look at some of the questions here.

can you run Docker Desktop and MicroK8s on the same machine?

Absolutely.

There's no magic with Multipass or MicroK8s.

Multipass is a virtual machine manager, right?

So it just happens to provide the OS for that VM, and it layers on top of whatever VM solution you have.

So it's not replacing Hyper-V or VMware or any of these other things that you would use.

It's just picking one, or providing you one out of the box.

It's a little bit of the ease of use factor, but you're
still gonna need something in the background, right?

Still running a VM.

It's just creating that for you.

So in the case of MicroK8s, it may not need a VM if you're already on a Linux box, or if you already have a Linux VM in the cloud or on your local machine; if it's not using Multipass, the VM is just provided by something else.

Like for example, I have a full Mint OS Linux variant that I use, I play around with it, just
to kind of play with Mint, which runs basically on top of an Ubuntu install, and it provides
a different GUI than Ubuntu, so on that machine, it's already in a Parallels VM on my Mac.

So in order to get MicroK8s on it, I don't need Multipass because that's to create other VMs.

And sometimes I think some people get confused a little bit.

It's hard for me to explain it.

Sometimes people are like, well, wait, what is this doing? Is this providing both tools?

So just realize that MicroK8s is a Kubernetes distribution.

You can run it anywhere and Multipass is a local VM manager.

It sort of automates VM setup, tear down, swapping between them.

It gets you a quick shell. Like, if you've ever had to use VirtualBox, to get a shell in there you either have to figure out how to SSH in, which is always a pain, or you have to, you know, actually use their GUI for the actual terminal, and that sucks because you don't have your local tooling.

So with Multipass, it's easy: you can just shell right in, you can hop back and forth, copy files, and it's great.

I see two questions asking pretty much the same thing about MicroK8s bare metal support.

The answer is yes: you can run MicroK8s directly on bare metal. It should be production-ready, given the fact that we also support HA, so if you want to cluster multiple bare-metal machines, you can do so.

nice.

Yeah, because I know that specifically on your website,
you've mentioned IoT, which is usually going to be bare metal.

How many worker nodes can we join and how many master nodes can we initiate?

So this gets a little bit into the HA.

So for the worker nodes, there's no limit that I know of on the number of worker nodes that you might want to join; it really depends on the resources of your system.

And then, the thing with HA is, I think I need to share my slide on how this works, because I think it would be easier for people to really understand the intricacies.

Let me go into present mode; I need my slide here, because I always find that it's way easier for people to understand when I show them the picture, instead of just saying a bunch of words that don't really explain it nicely.

So, we took kind of a step-by-step approach in enabling high availability for K8s.

So in the beginning we just were supporting clustering.

So we went from supporting a single node, where you would install MicroK8s on your host, to being able to link multiple hosts into a single multi-node cluster.

So here we have a node, and then if I type microk8s join on a second node, they will automatically see each other.

And then a third node, again with a single command, microk8s join, forms a three-node cluster, and so on.

So you need the add-node command on the first node, effectively what would be your master node, and then you join the rest of the nodes.

And then you will have the API services available from all of the nodes, but your master node would only be the first node.

Then the rest of the nodes would be the worker nodes.
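As a rough sketch of the clustering flow Alex is describing, the commands look like this. The IP and token below are placeholders; the real values are printed by the add-node command.

    # On the first node: generate a join token
    microk8s add-node
    # ...which prints a command along the lines of:
    #   microk8s join 10.0.0.10:25000/<token>

    # On each additional node: paste the join command that add-node printed
    microk8s join 10.0.0.10:25000/<token>

    # Back on the first node: confirm the cluster members
    microk8s kubectl get nodes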

So that was right before we announced HA.

So with HA, it's a little bit different: in order to support HA, you need to have the Kubernetes data store replicated across multiple nodes, effectively.

So the add-node command still stands for the first node, but what we did is we embedded the data store into the Kubernetes API server.

So the join will effectively connect the second node to the cluster.

And then the third one, and the magic that happens is that as soon as we have three nodes linked into a cluster, the database is automatically replicated.

So we have a leader node in the database quorum, and two voter nodes, to ensure that all the Kubernetes state is properly replicated across all nodes.

And then if I add a fourth node, the fourth node would be a standby node for the database.

And then the fifth node would be a spare.

So this is sort of what we call zero-ops HA clustering, because the only thing the user needs to do in order to have HA is to join the nodes to the cluster. Everything else, meaning database maintenance, is taken care of automatically without needing any user intervention.

And just to give you an example of how this works, the self-healing side of high availability, we have the same setup.

What would happen if my first node dies?

So I might need to do it a couple of more times.

So you had the database leader that just died and you
will see the cluster taking care of the transition.

So the second node that was part of the quorum was automatically elected a new leader.

The standby node became a voter node, and then the spare node became my standby node, and all this with no user intervention necessary.

That's how self-healing HA works.

So, okay. So I'm hearing every node out of the box is a potential manager, or a leader rather.

And what's the database technology in the backend that's actually storing all this data?

So this is probably the most interesting change that we introduced with HA: we swapped etcd for Dqlite, which is a distributed SQLite database that is actually quite popular. And it works really nicely with embedded use cases.

And it uses the Raft algorithm to handle everything with regards to the database quorum.

And this is what enables the automatic high availability with a self-healing capability.

Nice.

So, is there support for installing MicroK8s via MacPorts?

Not that I know of.

Yeah.

So I use brew. MacPorts is actually an older technology that's still around.

We support brew out of the box.

Okay.

I think actually we don't know, but it's not listed on the website or in the docs that I've seen.

So,

Well, would there be a strong use case for supporting MacPorts?

I don't know.

Like, I think everybody... well, not everybody, but the most popular install tool would be brew.

Right.

Yeah.

So the other option is, yeah.

So you've got GUI and brew.

Is there a GUI install for Windows, or, I'm sorry, for Mac, or is it only brew on Mac? I see the installer for Windows, but I don't see one necessarily for Mac.

It's GUI on Windows with a wizard, and brew on Mac.
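For the Mac side, the brew route looks roughly like this. This is a sketch based on the MicroK8s docs of that era; the tap name and the install flags may have changed since.

    # Install the MicroK8s launcher from Canonical's Homebrew tap
    brew install ubuntu/microk8s/microk8s

    # Create the Multipass-backed VM and start MicroK8s inside it
    microk8s install

    # From here the commands are the same as on Linux
    microk8s status --wait-ready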

And this is a subtle little thing, because I'm looking at the website. A subtle little thing here: there used to be a dot for the MicroK8s commands.

Is that changing on everything or is that just on Mac?

No, no, no.

So both still work.

All changes are backwards compatible. In the beginning it was microk8s.foo, now it's microk8s foo with a space, but both still work. We're trying to keep the UX coherent with other products as well.

So the space is the most prominent option.

Nice.

And then for those of you getting started, I would definitely recommend you do things like enabling DNS.

Like it says here: microk8s enable dashboard dns.

So dashboard is the official Kubernetes dashboard GUI, and then DNS; those are my two most popular choices if you're just getting started locally.

We talk about those in the Kubernetes mastery course.
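A minimal sketch of that getting-started step is below. The dashboard add-on deploys the standard Kubernetes Dashboard; how you reach it varies by version, so the port-forward shown here is just one common approach and the service name is assumed from the standard dashboard manifests.

    # Turn on DNS and the Kubernetes Dashboard
    microk8s enable dns dashboard

    # Watch the add-on pods come up
    microk8s kubectl get pods --all-namespaces

    # One way to reach the dashboard locally
    microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443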

And then let's see, what else... any reason for all nodes showing up as workers on HA?

Yeah, that's a great question.

So with HA, as I think I said before, all nodes are API endpoints.

All nodes are worker nodes as well.

The data store is embedded into the API server, and we have all these different roles for the data store nodes.

So you have a leader, you have voters, you have standby and you have spares.

The reason is we wanted to simplify the operations. Effectively, with HA, as soon as you have three or more nodes, you can use the kubectl commands everywhere, on all nodes, which is really cool.

Okay.

So, because I know that I've had students ask me about seeing the indication when they get nodes, right, when they do a kubectl get nodes.

Is that still the right way to get status of like, where is my leader?

I mean, I know that technically we're not supposed to care, but are there ways to see what's going on? I guess maybe the microk8s status command?

So the status command will give you the IPs of the quorum, so the data store replication nodes, and then, yeah, kubectl get nodes.

And this is also showing you what nodes are part of your cluster.

So let me just make sure I'm clear: the kubectl get nodes command isn't going to provide the status of who's leader and who's not.

Okay.

Because this is Kubernetes data store specific; that has nothing to do with the nodes themselves.

Okay.
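For context, on an HA-enabled cluster the status output looks roughly like the illustrative example below (exact formatting differs between releases), while kubectl get nodes just lists members without any leader or voter roles.

    $ microk8s status
    microk8s is running
    high-availability: yes
      datastore master nodes: 10.0.0.10:19001 10.0.0.11:19001 10.0.0.12:19001
      datastore standby nodes: 10.0.0.13:19001
    ...

    $ microk8s kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    node-1   Ready    <none>   12d   v1.19.3
    node-2   Ready    <none>   12d   v1.19.3
    node-3   Ready    <none>   12d   v1.19.3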

So I hope that helps. Maximums and minimums of MicroK8s, number of pods, master nodes: we already kind of answered that. Current K8s version available, can we upgrade?

Ooh, so that's a good question, upgrading.

Yeah, that's a really good question.

So in terms of support, we always commit to support the latest three upstream versions.

So right now we're on the latest one, plus the previous two.

So 1.19 is the latest stable version for MicroK8s as well.

What you can do is subscribe to different snap tracks, to sort of say, I want to be on edge, I want to be on beta, and then really decide how you want to take it.
I want to be on Bita and then really decide how you want to take it.

So either you're on the latest and greatest, or you're on the latest stable. And in the end, the snap mechanism also provides you a way to automatically update from version to version.

But you can always control the updates, whether you want to have them or not.

Right.
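A quick sketch of how those snap tracks work in practice. The channel names here are only examples; run snap info to see what's actually published for your release.

    # List the tracks and channels MicroK8s publishes (stable, candidate, beta, edge per version)
    snap info microk8s

    # Install pinned to a specific Kubernetes track
    sudo snap install microk8s --classic --channel=1.19/stable

    # Later, move to a newer track when you're ready
    sudo snap refresh microk8s --channel=1.20/stable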

And so yeah, for those of you out there that haven't installed this, make sure you're
thinking about snap as a technology, you might want to learn a little bit about that.

It is not the same as apt or yum, so you definitely want to look into that.

In the end, snap is a more modern way to package software. Like, it started with tars and debs, went into apt, and then we've been working on snaps for quite a long time.

And we feel that it provides a better packaging and isolation mechanism, and there are a lot of security benefits from it, like automatic security patching, and the updates can also be automated.

So that, yeah, that's kind of, the cool things that come with the snap packaging.

Right.

One of the common questions I get, too, is from someone who's learning Kubernetes, and they're learning about all these different services that have to run, whether it's on a node or a master node, one of the control plane nodes.

One of the things that MicroK8s does out of the box is, I believe, it runs containerd as its runtime.

Yes.

And it installs some of the standard services that you would expect, you know, the API server, the controller manager, things like this. I think it installs them as systemd services on the host

OS. So yeah, we don't have to necessarily get into it, but for those of you getting started, if you were to do something like a get pods across all namespaces, you wouldn't see the same thing as you might see in, like, a Docker Desktop install, or

maybe a Rancher install or something like that, because that's one of the flexibilities of Kubernetes: the way you install all of the necessary services can vary.

They may be in containers and they may not. Now with this snap, unless it's changed recently this year, let me know.

But most of those things out of the box that are absolutely required are installed as systemd services, which means if you do a get of the different namespaces, or just of all namespaces, you won't see those there, because they're running in systemd like normal host processes do. And then as you enable optional features like DNS or dashboard, or, I'm assuming, the other ones like Istio or the registry (I haven't

actually checked all of them), those things will install as containers, because I'm guessing it's easier for it to download and manage those as containers from that perspective.

Is that correct?

Everything's right, down to the detail, really.

Okay.

Okay.

Yeah.
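A quick way to see the split being described on a Linux MicroK8s node is below. The daemon names are from releases of that era and have changed in newer versions, so treat them as examples.

    # The core control-plane pieces run as host services, not pods
    systemctl list-units --type=service 'snap.microk8s.*'
    journalctl -u snap.microk8s.daemon-apiserver --since "10 min ago"

    # Only the optional add-ons (dns, dashboard, and so on) show up as pods
    microk8s kubectl get pods --all-namespaces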

So, I mean, it just shows the flexibility of Kubernetes. A lot of times students will ask me about why I'm having a problem with name resolution, or I'm having a problem with the service port, or how do I get things running on port 80, you know, all these different questions that are related to the specific way that they installed Kubernetes.

So it actually gets a little challenging in the Kubernetes space, because we can't all be talking about the same design of the architecture and the infrastructure.

It's all different depending on your distribution.

So you first have to answer that question. By the way, pro tip: if you're asking questions, like in this case about something specific like that, you always have to frame the question with, I am using MicroK8s, or I am using Docker Desktop, or I am using, you know, an AWS Kubernetes VM, or something like that.

You have to give us the way you installed Kubernetes, because that will completely change how networking works, how ingress works, ports, proxies, all that stuff.

So that's a pro tip for those of you out there; almost on a daily basis, we're having to tell students, hey, you didn't give us any information.

We have no idea how you're running Kubernetes.

So please, please help us out and give us some info. Someone in chat is giving us some tips here: snap info microk8s is a command where you can see the channels.

Talking about what Alex was saying earlier.

What about the remaining master components, like API server, controller manager, kube-scheduler? Do they have the leader, voter, standby, and spare? If something goes wrong with a master node?

I think what they're asking is, yeah, how do the individual components relate to this leader-standby relationship thing?

So I don't think they're tied into the leader, voter, standby, spare roles, because these refer to the roles within the database quorum.

So in theory, the most difficult thing to get right when you talk about HA in Kubernetes has to do with how you protect your data store from failure, because the state of your cluster is stored in the data store.

So if you manage to make your data store highly available, all the rest can be like, okay, your server can fail, but you can still manage to recover it from the data you have stored in your highly available data store.

So I think we focused a lot on getting the data store HA right, in order to ensure that if all things go south, users can still recover their Kubernetes from the data store.

Right.

So, that...

Hey, we have Kos in the room. A shout-out to my fellow Greek, the senior engineer behind MicroK8s.

Yeah.

We've got somebody in the audience that has the answer: all nodes run the API server. We run three instances of the scheduler and the controller manager all the time.

Oh, interesting.

Okay.

So, that's pretty interesting.

I'm trying to think of another sort of minimal-install kind of tool that does that, where it runs the API server everywhere.

Yeah, because the fine print in a lot of cloud distributions, even, is that, you know, they might only be running one database node or one copy of the API at a time.

And granted, the problem here is kind of a cart-before-the-horse thing; it's the chicken-or-the-egg scenario.

If you're running all these things in Kubernetes, then when the infrastructure goes down, Kubernetes isn't there to actually fix all these services.

So again, I think these things are all running in systemd, I guess.

Some sort of manager, some sort of agent that's not kubelet, I guess, is running on these systems, and systemd is what's doing the job of running all of these things.

I don't know if you know the answer to that, or if we're gonna wait on the internet delay for your answer.

I don't, but I think Kos can chime in and save me in the chat.

Yeah, so it's about a 20-second delay.

So we'll wait for the answer on that, but we'll skip to another question.

That reminds me, I think we should direct our users to our Discourse page.

So if you go on to microk8s.io and you have any questions, however technical, there's a community tab that you can open, and feel free to drop in any question or feedback that you might have. We're really interested in hearing back from the community, and the team is super on it to address all possible questions.

And one question I got was, do you prefer using MicroK8s over K3s from Rancher? Which approach do you think is more agnostic?

I would say that I would probably... so, okay. I got into MicroK8s before K3s was easy to install, and there are some tools out there that help with that with K3s, or K-threes, whichever way you want to say it. And so I'd probably lean towards MicroK8s just because I know it better.

I'm not saying it's better; they both run Kubernetes, and now they both do HA Kubernetes, and they both have installers.

So it's really up to you, which ecosystem you want to be in.

They both are going to run, you know, support all the standard
Kubernetes API stuff, but which approach do you think is more agnostic?

I would probably lean toward MicroK8s because, not that you need it to be agnostic, but things are modular in MicroK8s, which is a normal approach for your Kubernetes installs, where you have all these different services.

And your things are sort of spread out a little bit.

Whereas K3s is designed specifically around a single binary, removing a lot of the things that aren't necessary.

So it's not a bad thing.

But I would say that if you're interested in comparing, just try both out.

And see if you like the K3s command line, or if you like the way that Ubuntu provides a snap install, right? Because that's not something that I'm aware of that K3s has.

So it's really about what tool you prefer deploying and managing.

Some people are already running Ubuntu, and they're
used to snaps, and they're very comfortable with that.

So that would make sense.

Some people are totally in the Rancher world and they're using all the other Rancher tools, you know, the Rancher Kubernetes and the Rancher install tools and stuff like that.

So K3s would, you know, make sense for them. Also, both of these, if I'm understanding correctly, can... that's not a question that's been asked, but this is the enterprise money question: can I get enterprise support for MicroK8s through Canonical?

Yeah, yeah.

So if you're someone who's already getting enterprise support through
someone like Canonical, then that makes sense to align your vendors.

A lot of times when I'm talking to customers about which Kubernetes they need to choose, because there's a hundred distributions, it really becomes a question of, well, what vendor relationships do you already have?

And you know, what are you already paying for?

Because you may already have it, right.

Depending: if you're on Red Hat, you're gonna be looking at Red Hat.

If you're on VMware, you're gonna be looking at VMware.

So that's not necessarily the decision they always make.

Because I just ended up having a client the other week that chose a
different Kubernetes vendor than their main infrastructure vendor.

But you know, sometimes that's the way you've gotta go.

Or if this is just a personal project, like go with something easy.

I also like MicroK8s for learning; I think it's one of the easiest ways to learn Kubernetes, because it now comes with an installer for every platform.

So when you take my courses, it's one of the three main ways I recommend that you try out Kubernetes.

One of them is Docker desktop.

One of them is Minikube, and then now MicroK8s is in there as well, because it tends to be my go-to more so nowadays than, you know, kind or Minikube or something like that.

So, yeah.

All right.

I've provided enough filler for us to get the answer: all the Kubernetes services are systemd services.

All right.

I think you got it super right with regards to the differences between the two distributions.

I get that question a lot.

I think you provided a really good answer.

It really boils down to what the user's looking for.

I feel that we are mostly focusing on, you know, removing the friction
and going towards the zero ops user experience where you don't really
need to go into the weeds of Kubernetes, including the add-ons.

We try to provide the most popular Kubernetes tools or add-ons as a single command.

So you can do microk8s enable foo, and you get probably the most comprehensive set of Kubernetes plugins that you would want to have around your distribution, like Istio or Prometheus or Grafana or DNS, and so on and so forth.

Whereas in K3s you would need to still do like the manual setup of all these.

I'm not saying it's super hard.

It's just like one less step to take.
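As a sketch of that one-line add-on experience: which add-ons exist depends on your MicroK8s version, and microk8s status lists what's available on yours.

    # See which add-ons are available and which are enabled
    microk8s status

    # Enable several at once with a single command
    microk8s enable dns istio prometheus

    # Turn one off again just as easily
    microk8s disable istio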

And the same experience applies to pretty much everything we do in MicroK8s; I sort of tried to show you how HA works.

It's a single command to add nodes into your cluster, and as soon as you have a few more nodes, poof, you've got HA magic.

I know in the K3s world, it's probably like a couple of more commands.

It's really not that big of a deal.

But yeah, I think it's really good to have the differentiation between the distributions, and it definitely makes sense to have both around.

And I should actually also say that we are also using Kine, sort of middleware that Rancher provided in K3s, to do the work to enable Dqlite instead of etcd.

And that's the Kubernetes community working towards similar goals; having different distributions benefit from other people's work is really nice to see.

Right, right.

Yeah, there were some questions around non-systemd systems: can you run MicroK8s on them?

And the answer is no.

I'm sure there are Linux distributions that don't use systemd, but I haven't seen one in a while; not that they don't exist.

It's just not near as common, in my experience at least.

So it is a hard dependency on systemd to run that stuff in the background.

And obviously you have to have snap, like you have to have snap
installed in that Linux distribution to be able to install it.

So, Let's see.

Yeah, it's a hard requirement set by snap.

Oh, you cannot have snaps without systemd; it's a hard requirement set by snapd. All right, there you go.

There was a question on how to run containers on two different cloud providers behind a load balancer or some kind of routing mechanism.

That's a pretty advanced setup.

I don't have a single customer that's ever done that.

So usually, if it's just a website, you're going to have an external load balancer, and you just have to route them between zones; not availability zones, but regions rather. MicroK8s has add-ons management, and K3s can be installed everywhere without snap.

Oh, that's an interesting distinction.

Yeah.

The other big difference is, as we said before, that K3s is a single binary; they effectively recompile all the Kubernetes services into a single binary.

Whereas we provide all the Kubernetes binaries, everything coming from upstream, in a single package, which is the snap.

right.

Compatible container platforms for MicroK8s... so I'm not sure what that question means.

Yeah.

What do you consider a container platform, though?

Yeah, if you're talking about runtimes, it uses containerd out of the box, which is the same thing that Docker uses.

So, in fact, you can have Docker and MicroK8s on the same server, same system, and they won't interfere with each other. They're separate.

In other words, Docker is not a dependency of MicroK8s, because it doesn't need to be. Most people that are running Kubernetes, especially if they're running a distribution (and we have a number of them),

are using containerd. Though there's a slow shift toward Kubernetes distributions automating the install of containerd rather than you having to go and manually install Docker.

Yeah.

Because the reality is, once you have Kubernetes installed, unless there's a hard requirement on a feature of Docker, you're not usually using the Docker command line anymore.

So, you know, there are lots of reasons you might need to run Docker, but you don't necessarily have to have it running on your Kubernetes nodes.

All right.

I think we've wrapped up all of the questions, and it's kind of perfect timing.

Let's talk about how people get started.

So a lot of people say they have not used MicroK8s before.

So it sounds like, what are you gonna recommend?

Go to the website?

Go to the homepage, microk8s.io, select the platform that you want to install MicroK8s on, and just follow the instructions.

Super easy: a single command on Linux, and just a single installer on Windows.

And I guess a couple of commands on macOS.

And then everything's pretty comprehensible.

You can use the microk8s status command to see the health of your instance, and then check out all the different add-ons that are available.

I've listed some of them before but the list is a
little bit more extensive than what I listed before.

And yeah, you can also follow the tutorials that are on the website to get you started.

We have a Raspberry Pi tutorial.

We have the clustering tutorial.

And we'll keep adding to the list of tutorials that we provide to our users.

And of course we're an open-source community as well.

Everybody can contribute. If you have any cool use case that you're playing with, feel free to contribute your story to the website; we're happy to discuss it with you further.

And then, yeah, in terms of production grade and our shift towards the enterprise, I think we're getting better and better at addressing needs, especially with edge and IoT.

And hopefully we'll have some case studies out there soon, because people are also asking us, okay, I have this sort of super-important production story, and I'm interested in using MicroK8s because it's so simple.

But can I have some specific examples?

So in the past we weren't really allowed to be public with our customers, but hopefully we'll have something there soon.

Very nice.

By the way, is there a distribution list or some way that I can get notified of new updates?

How do I stay aware of the changes going on in the MicroK8s community?

So, if you go on the docs, there are release notes with all the changes that occur, and Kos is always sending an email and an update on the Kubernetes main website. These are the two most prominent ways to get informed about the latest changes.

And we also have a Slack channel.

So in the Kubernetes Slack, you can find the #microk8s channel to discuss with the team.

And I think I've listed everything.

We also have a YouTube channel with some cool demos and a few intro videos.

So, I can share that with you: it's called Celebrate Ubuntu, and you'll find the MicroK8s videos under our channel.

By the way, while you were talking, I was actually realizing: so for those of you getting started, this trips up a lot of my students as well. Don't go and install kubectl.

Okay.

So when you install MicroK8s, it's not going to give you... you can't just type kubectl.

And that doesn't mean that you should go and install kubectl.

What you should be doing is this: you should be looking at the fact that MicroK8s provides you kubectl out of the box.

But it doesn't want to conflict with maybe another kubectl install that you already have running, you know, that binary directly.

So they have a shell alias that they recommend.

I do the same thing for Multipass.

I actually just alias mp to multipass, and that way you're not stepping on the toes of kubectl.

And here's the important reason why: you might have multiple clusters, right?

So if you're on your local command line and you're maybe managing remote clusters with different versions, the kubectl command line can only be so many versions, I think it's two versions, out of date from that cluster.

So the nice thing is, if you follow the directions in the getting started guide, that will make sure that your MicroK8s kubectl command line also stays versioned with MicroK8s itself.

I messed that up when I first started using it and didn't quite understand the nuances.

Also, it automatically manages the certificate for you getting into your cluster.

So you don't have to worry about that either where you
would maybe have to do a little bit of manual finagling.

If you were separating out your mainline kubectl install from MicroK8s.

So the nice thing is, kind of like how Docker Desktop does this, MicroK8s manages the CLI for you as well as the server components.

So I highly recommend you do that method instead of manually installing kubectl
and then trying to make it work because that's more work for really no benefit.
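A minimal sketch of the alias approach mentioned here: the kubectl alias is the kind of thing the MicroK8s getting-started docs recommend, and the mp alias is just the personal shortcut described above.

    # Use MicroK8s' bundled, version-matched kubectl everywhere
    alias kubectl='microk8s kubectl'

    # Or expose it system-wide via snap's alias mechanism
    sudo snap alias microk8s.kubectl kubectl

    # Shortcut for Multipass
    alias mp='multipass'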

I see a couple of questions that we've already answered.

So I would say to those people: especially the MicroK8s versus K3s question, that's been covered in detail. And again, MicroK8s is production-ready.

We just launched general availability for HA.

So, we're quite optimistic that it can take heavy hits and still be up and running.

That's the whole purpose of this launch.

And congratulations on HA; that's a big, huge win.

And it's something that I think is going to help a lot of people that were already learning MicroK8s and now can actually use it in a more production, businessy type of scenario, instead of having to learn a whole different tool that does things differently.

And the next thing for everybody to do is to go check that out, and Multipass, for managing your VMs.

So don't forget about Multipass if you haven't checked that out yet.

We mentioned that earlier as well, and that's going to be it.

Thank you so much, Alex.

This has been great.

Yeah.

Is there a Twitter feed? I actually was confused about this.

Is there a Twitter feed that people can follow?

On MicroK8s specifically?

No.

But you can always, yeah.

You can always follow the Ubuntu feed, @ubuntu, yeah.

I'm not that much of a Twitter guy, myself.

I'm more of a Facebook and LinkedIn guy, but yeah, I'm always there for work purposes.

either way.

Right.

Right.

All right, everyone.

Well, thank you so much for showing up live this week.

And again, if you want to get updates about what's going on and what's being released in terms of my content and my stuff, jump over to Patreon. All right.

Thanks again, Alex.

We'll see you soon.

Thanks.

Thanks so much for listening and I'll see you in the next episode.