DevOps and Docker Talk

My guest is Ben Arent, Developer Relations Engineer at Teleport. Teleport allows engineers and security professionals to unify access for SSH servers, Kubernetes clusters, web applications, and databases across all environments.

Show Notes

In this episode, we talk about why it exists, the problems it solves, and how it's implemented. Streamed on YouTube Sept 29, 2021, Ep 139.

★ Support this podcast on Patreon ★

What is DevOps and Docker Talk?

Interviews and Q&A from my weekly YouTube Live show. Topics cover Docker and container tools like Kubernetes, Swarm, Cloud Native development, Cloud tech, DevOps, GitOps, DevSecOps, and the full software lifecycle supply chain. Full YouTube shows and more info available at

You're listening to DevOps and Docker Talk, and I'm your host, Bret Fisher.

These are edited, audio-only versions of my YouTube
Live show that you can join every Thursday.

This podcast is sponsored by my Patreon members.

I'd like to thank all the paid supporters that make this show possible.

You can get more info and follow my updates on all the content
and open source I'm creating at

And as a reminder, all the links for this show, the topics we discuss,
as well as the links I've already mentioned, are available
on the podcast website at

In September of 2021, I had Ben Arent, a Developer Relations Engineer at Teleport, on the show.

Now, if you haven't heard of Teleport, we're going to get into it, but it's essentially
a fancy remote access technology, mostly open source, that allows you to access
endpoints as well as systems like Kubernetes remotely, without a traditional VPN.

I think it's an interesting way to provide your teams granular access
and really lock down which remote endpoints people can access.

And it also uses some great security underneath that we get into as well.

So please enjoy this episode with Ben from Teleport.


Yeah, you can tell.

I have the...

Sorry, I distracted you.

I know, I have the East Bay accent. I'm originally from the UK, but I've been here a decade,


and I'm a DevOps engineer at Teleport, and I've worked
in a range of developer tools probably for a decade now.

I was just talking to Bret about some of my adventures in various companies that you may know about
Redis To Go, Airbrake, Rackspace, OpenStack, all sorts of fun projects that have come and gone.

So one thing that's always been standard is you always need to get some kind of access.

Yeah, that is universal.

So that's going to be our focus today, if y'all are just tuning in.

We're gonna be focusing on specifically Cloud Native modern remote access.

So, we're going to go through some of the problems of the past and the ways we did it before.

And I had personally heard about the Gravity Project previously, and then
Teleport. When you all announced, was it last year you announced the change?


It was last year.



Or '19? Last year was a blur.

Yeah, that's true, that's true.

I think last year was the year that I was actually becoming more aware
of the projects and what you all were working on and stuff over there.

So all of you out there, you may have heard of Gravity, which was a project by Teleport.

I might be getting this wrong, but now we all know you as Teleport.



I can give some history. Gravity goes back to the founding of the company. I actually worked
with Sasha and Taylor at Rackspace when they were working on Mailgun.

And I think they saw the similar problem of trying to run compute anywhere.

And Gravity was this idea of packaging up your applications and being able to run them, with
what we would call zero DevOps, and Teleport was a method for accessing those clusters.

And under the hood, Gravity packaged your applications together with, I would
say, Kubernetes; Gravity runs Kubernetes.

And so that let people run and package Kubernetes clusters into a whole bunch of different places.

So you could run Kubernetes on premises without having to get external resources, or we had
other people, SaaS providers, who might want to sell their SaaS product in someone else's data center.

And by using Gravity, they could package everything up, they
wouldn't need external resources, and they could run it in their DC.

And one of the benefits of Teleport, which we pulled from Gravity, was that Teleport let
you access, maintain, and apply a whole bunch of other controls over those systems.

It sounds like you were scratching your own itch, solving your own problem there.

Yeah, so it started off with Kubernetes and server access.

And over the last couple of years, we've added application and database access as well.

Very nice.

Let's back up for a second because we were talking about what to talk about on the show
and it was an interesting idea to talk about the origin of remote access, SSH and all the
things and where that starts to struggle in our modern multi-platform multi-cloud world.

Yeah, it depends how far back you go in the history of access.

And I think that's often, you know, 2021 people
will say, oh, you don't need to access machines.

It's cattle vs. pets, immutable infrastructure; you shouldn't access a node.

I actually worked with a DevOps lead, and if you accessed a node,
they would terminate the instance after five minutes. It was seen as, like, toxic.

Who knows what you did?

Who knows what you did? Yeah, remove it and fire something else up in there.

But the reality of modern DevOps is that someone always
needs to get access to the infrastructure for a range of things.

So even if you have a fully immutable infrastructure, you may need a team to pull
logs off a system prior to rotating it, which could be like your security team.

And then, what becomes interesting when you go into the
world of Kubernetes is that everything is talking to a REST API.

How do you get a full audit log of who's doing what,
and a history of which commands are being run?

And I think if you think about cloud providers in general, someone still has access to your
machines. I think Amazon even added a serial console recently.

So you have the serial console, you have SSH, you have these methods
in which you do need to get some sort of access to machines.

And we see a plethora of people from all sorts of interesting use cases.

So we have some people deploying Raspberry Pis into farmers' fields, and they need
to get some kind of remote access, but there's no central command plane.
You can run Teleport in this mode, where Teleport dials back and does NAT
traversal, so you don't necessarily have to worry about your networking either.

And so you can think of Teleport as this unified access plane, where you don't have to worry
about protocols or even the network, and everything is done specific to that protocol.

So for our server access, we just use OpenSSH certificates under the hood, and
then we have a whole bunch of stuff that makes that much easier for you to use.

I go to the website, and there's a list of products and are they all related?

Because they all just seem to be access focused.

Yeah, I guess they're all related, but they're also very deep in the protocol.

And so if we start with server access, I think this is probably what people are most familiar
with. When you have a cloud provider, you'll often provide your public/private key pair.

You generate a private key on your host, you upload
your public key and that's sort of how you authenticate.

That kind of works well for your smaller projects.

But if you're working on a team, let's say you have five people, do you have to upload all
five public keys to the server, and then have a script that removes them when people leave?

It very quickly becomes an unsustainable way of adding new people,
and there's also a lack of visibility once people leave.

And OpenSSH has had certificate support for a while.

And this lets you, instead of providing a long-lived public/private
key, use short-lived certificates for access.

And that's what all of these different platforms use: short-lived access.
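The certificate flow Ben describes can be sketched with plain OpenSSH tooling. This is a minimal illustration of the idea, not how Teleport itself issues certificates (Teleport automates the CA management); the identity, principal, and file names are made up:

```shell
# Create a certificate authority key pair (in practice, guarded carefully).
ssh-keygen -q -t ed25519 -f ssh_ca -N '' -C 'demo CA'

# Create a user's key pair.
ssh-keygen -q -t ed25519 -f alice_key -N '' -C 'alice'

# Sign the user's public key with the CA: valid for 10 hours,
# allowed to log in only as the principal "ubuntu".
ssh-keygen -q -s ssh_ca -I alice@example.com -n ubuntu -V +10h alice_key.pub

# Inspect the resulting short-lived certificate (alice_key-cert.pub).
ssh-keygen -L -f alice_key-cert.pub
```

Servers then trust the CA's public key (for example via `TrustedUserCAKeys` in `sshd_config`) instead of accumulating per-user `authorized_keys` entries.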

So you can also use the same thing for kubeconfigs, instead of having
long-lived kubeconfigs you only get a kubeconfig for a 10 hour period.


And when that access expires and you use kubectl again, you need to get a new kubeconfig, based upon how you've set it up.
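In tsh terms, that refresh is just another login. A hedged sketch of the flow, using commands from Teleport's tsh client; the proxy address and cluster name here are made up:

```shell
# Authenticate to the Teleport proxy (may open SSO/2FA in the browser).
tsh login --proxy=teleport.example.com

# List the Kubernetes clusters the proxy knows about, then fetch a
# short-lived kubeconfig entry for one of them.
tsh kube ls
tsh kube login my-cluster

# kubectl works until the certificate expires; after that,
# running tsh login again refreshes access.
kubectl get pods
```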


I think a couple of years ago I was reading at least one great article
about SSH using certificates rather than keys and the benefits of all that.

And to me, it always seemed like the challenge was implementation and maintenance of that.

There's a lot of,...

Yeah, because you have to manage a certificate authority, and then you have to worry about
rotating certificates. That's all abstracted away; Teleport makes it very easy for you.


Yeah, and I think on every project I work on, especially if you're DevOps,
or especially ops, when things go awry, you've got to get on servers usually.

At some point you've got to get on those servers, so the methodology for how you get there and how
you do it securely matters. And I find that the way you access it is related to the maturity of the
team: making sure that keys are taken off, and that people who have left have had all their keys removed.

That's not stuff that a young team has, right?

Like, a young team is, as you're saying, throwing their
own SSH keys onto servers randomly when they need them.

They might have a cloud init script that automatically
installs them at startup time, and there's a list.

Or maybe there's one key, and then it's given to all the people that need it.

And then the problem is how do you replace that key
and how do you know who accessed it and all that stuff.

And there's just, I feel like it's not a solved universal problem.

And I'm glad to see more ideas in this space because,
it's funny, you don't see a lot of this discussed at conferences.

You go to cloud native stuff and, like you said at the very beginning, we talk
about this utopian world where we never need SSH, we never need remote access
to a physical machine, everything's wonderful, and if there's a
problem, you just turn it off and replace it and that magically fixes it.

And that's just really not true at any scale that I'm aware of, unless
you're Netflix or Google, which you probably then have tooling to
automatically pull off snapshots so that you can debug after the fact.

That's a really advanced workflow that's beyond the scope of what we're
trying to talk about today, but that's what I ended up seeing out there.

So I'm glad that you're seeing the same thing, and it sounds like these solutions
are trying to address those problems. And some of this is open source, right?


Yeah, the majority.

So we're sort of an open-core company, which means probably
80 to 90% of our code is in our open source repo.

And you get everything that you'd really need, even for a small team.

You get all of the access of different platforms.

The only thing that we sort of gate would be the more enterprise
single sign-on providers, but we provide GitHub, and you can use local auth.

We recently added role-based access control to the open source edition, which was highly requested.

But everything else works: Kubernetes access works, databases work.

And then there's another new feature for teams, which we call access
requests; access requests let you request access from other teammates.

If you're running in a small team, it's probably less of a concern, you can have
wider access, but for people who want to really gate things and have extra
compliance, it's one feature that we provide.


So that's almost like a PR review on my server access.

What's cool is, you can set it up, similar to your point about the
Navy, I actually did a webinar on the nuclear launch codes.

So you can set multiple people, say three
people, who must approve before you can get access.
It means that they know you did it, because not everybody looks at logs.

Like, we might log everything, but there's something to the analogy of:
if a tree falls in the woods and no one's around, does it make a sound?

If someone SSHes into a server and it's logged,
but no one's reading the logs, did it really happen?

Does anyone know?

Does it matter?

Because now you have that one-off server that's slightly changed.

But no one else knows about it.

That's one of the benefits of Teleport.

It provides centralized logging without having to use auditd or some other
kind of configuration, which can be tricky, or you just forget about it.

And if something happens, you're like, oh, where are our logs?

So this is covering Kubernetes databases?

I meant Teleport as a whole.

Databases, as for databases, we have MySQL and Postgres.

We support Kubernetes, so you can think of this as getting your sort of short-lived kubeconfigs.

We have applications, so securing internal web apps and then servers.


The server can be anything from a Raspberry Pi to a...

A thing that's running a kernel that you connect to in some way.

Can it be self hosted?


Actually, only recently did we have a cloud version.

For a long time, we were self hosted and our open source edition is also self hosted.

I have Teleport running on the public internet and we
generally assume in the world of the sort of access tools.

You have the idea of bastion hosts and jump hosts.


One is inside of your network and one is outside.

I think a jump host is outside, and a bastion's inside.

The proxy is fine to be on the public internet.

And there are also methods in which you can run Teleport in a very
secure way, in which the proxy service runs separately from the auth
service. We go very deep on that; we take all security very seriously.

And so your teammates are welcomed with this sort of sign-in. You can sign
in with a local user, and we always enforce a strong second factor, or
the preferred method is using an identity provider that you already have.

I have a GitHub group.

Even when I work with teams that have, say, Kubernetes, there might be a jump host,
but it's usually just one jump host, and we all have to know the name or the IP to get through.

And utility machines like that often don't get a lot of the love, right?

Cause it's usually the production infrastructure, for the customer, the internal
customer, the developer customer you're dealing with, that gets all the attention.

And it's usually this other, ancillary infrastructure
that tends to suffer from a lack of automation and stuff.

So it's good to see concepts like that, where it doesn't really
matter what machine I'm getting into, I just need to get into a machine.


You can also put in your AWS tags as well.

So, it's pretty common to have, like, heavily tagged machines,
and you can use that same sort of tag flow in Teleport.
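For example, a node's own teleport.yaml can advertise static and dynamic labels. This is a sketch with made-up values; field names follow Teleport's configuration reference:

```yaml
# teleport.yaml on the node
teleport:
  auth_token: "<join-token>"
  auth_servers:
    - teleport.example.com:3025
ssh_service:
  enabled: yes
  labels:               # static labels
    env: prod
    team: payments
  commands:             # dynamic labels from a periodic command
    - name: hostname
      command: [hostname]
      period: 1m
```

You can then filter, for instance, with `tsh ls env=prod`.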

And so right now, when you're running these commands, you're
running them against basically a Teleport instance, right?

That you've got running on a machine.


That's what I logged in to.

We do support multiple clusters as well; depending upon how
you configure it, you can configure multiple trusted clusters.

And this is another powerful feature that we have, even for some customers who are MSPs,
managed service providers: if they want to get access to someone else's infrastructure,
they can just share their trusted cluster for a short period of time and then cut off the access.

And it deals with that sort of jumping between hosts seamlessly.
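A trusted-cluster relationship is itself a resource that can be created and deleted to grant and revoke that access. A hedged sketch of what the leaf cluster might create (addresses, token, and role names are made up; fields follow Teleport's trusted_cluster resource):

```yaml
kind: trusted_cluster
version: v2
metadata:
  name: customer-cluster
spec:
  enabled: true                    # set to false to cut off access
  token: "<join-token-from-the-root-cluster>"
  tunnel_addr: teleport.example.com:3024
  web_proxy_addr: teleport.example.com:3080
  role_map:
    - remote: admin                # role in the root cluster
      local: [visitor]             # mapped to this role locally
```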

So the first question is, if we're talking about Kubernetes, which we actually haven't
gotten to yet, but we've got some questions coming in, so I'm prefacing this:
Teleport needs to be in each Kubernetes cluster, so when I see
tsh clusters, is that Teleport clusters or Kubernetes clusters? Are they the same?

It could be the same thing, depending upon how you've deployed it.

In my case, I just run my sort of root Teleport cluster on a dedicated AWS host, but I could
have also just deployed Teleport in a Kubernetes cluster, and that would be the same thing.

And I'm sure you can run it all sorts of ways?

You can run it in Docker, you can run it natively on the host, you can run it on Kubernetes.

You can just kind of run it how you prefer?

How is this different or better than the zero trust network access concept, also
marketed as the VPN killer, that is available more and more on firewalls?

I guess the world of zero trust is definitely an abused term.

It can mean lots of different things, and I think Teleport can be part of a zero trust policy you deploy.

So we do have some customers who will deploy a VPN and use Teleport.

So there are still uses for VPNs, but it's not required.

And how we're different is that we go very deep on the protocol, and also deep on the
individual action and identity, which can be different from some zero trust solutions.

I think the answer is always kind of complex. I mean, if you talk to a person in a
conference booth, they'll sell you whatever they need to, but often you
need multiple solutions to obtain this sort of zero trust...

And for myself as well as those that maybe don't know that, in this context, what are we assuming?

What do we consider zero trust in this context?

So let's step back a bit.

I think in the old days, I would have just given you access to the VPN, and then
you'd have access to everything. How things have evolved is that, when you logged in,
you had to authenticate through GitHub to prove your identity, and
then Teleport also enforces these short-lived certificates, and everything's audited.

And now it goes deep on the protocol.

And it's to a specific resource, instead of, like most VPNs that I use, having
access to everything the VPN has access to, carte blanche, right?

It's a universal policy, a very broad set
of systems and resources, and you may only need one server.

I don't know if that's in the scope of zero trust ideas, but something that's
always been attractive to me is giving people, in that moment, just
the thing they need, and not over-provisioning, which is what VPNs are classically
known for: over-provisioning, complete access. It's the whole "one guard at the
gate" thing: once you get past the gate, there's no more security.

So you have access to all the ports and all the servers and all
the networks, as long as you get through the VPN connection.

So yeah, this seems much more granular and flexible.


And this is not a great example, since these are two wildcard
stars, but it does let you use the labeling that we already saw.

And then you can create roles to provide fine-grained access, whether it's
labels on Kubernetes clusters or groups. Another bad example: system:masters
is not a great example of zero trust in Kubernetes, because you get access to everything.

It's up to you to assign your Kubernetes groups and then give them to your teammates appropriately.
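As a sketch, a role can map teammates into a specific Kubernetes group and limit which clusters it applies to via labels. All names and label values here are illustrative, and the exact fields depend on the Teleport version:

```yaml
kind: role
version: v4
metadata:
  name: k8s-dev
spec:
  allow:
    kube_groups: ['developers']      # a scoped group, not system:masters
    kubernetes_labels:
      env: ['staging', 'dev']        # only non-production clusters
```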

Yeah, so you've got a Mac there.

So on a Mac, if I need to SSH to a server, is there something running in the background
as a service that's relaying my SSH, or how does that connection actually work?


We have this small tsh binary that you download and install, which does
everything behind the scenes and populates these clusters.

And so you can see here the X.509 certificates for this cluster.

So this is my one for SSH. It's slightly different for kubeconfig, which
populates your kubeconfigs locally. But under the hood, going back to OpenSSH
public/private keys and certificates, it's just OpenSSH certificates, in a
much easier way than managing and orchestrating it yourself.


So in this case, there has to be a machine on the internet that is running SSH for me to get to?

Teleport is not providing a separate port tunnel into
someplace that has an SSH Daemon running somewhere.

I think so. Yeah, that's correct.

So you can just run Teleport, but independent of the mode in which you run
it, you don't necessarily get access to SSH into that node.

You then need to add nodes, running the SSH service, that connect back to your sort of root cluster.

And you actually have two options.

You can go over the local network; say you could configure Teleport
in a VPC. Or, in my case, I just have Teleport on the public internet and
I'm tunneling through, but you actually don't even need to do that.

You can change your sort of network setup depending upon the risks in your organization.

Sly has a similar question, asking: can you explain better how it works?

Do you have a server running and an agent on every machine?

Did that network diagram give a basic description of some of the pieces?

That's probably a better one. That one, "how it works," might be a good one.

So we have this Teleport basic concepts diagram, which is probably a perfect one.

So we have the users, which go through the proxy, which is Teleport.

And we have our auth server; in our case, we have nodes registering.

I need to access a node, I go through the proxy, and the proxy dials back.

So on this SSH node, we're running the Teleport service in node mode, as an agent.


And those are the four different sets, the four different ways, the
four different types of resources I can access through the proxy.

And then for Kubernetes, I've deployed a Helm chart, which is
the same sort of thing, but it runs Teleport in a Kubernetes mode.

I'm assuming it runs a pod with a similar executable to the one you ran natively on the host there.



And the same for web apps.

So I actually have another cool example, in which I'm running Grafana and Teleport
locally in Docker to provide local access to a sort of web app that I'm running.

And is that an agent on the web server?

It can be, yeah, it can be on the web server host itself.

So a very popular example of this would be a Grafana
dashboard, or, I can see if you can do this, the Kubernetes dashboard.

People will make it publicly exposed, but instead you can give it a loopback
address so that only the Teleport agent on that host can access it.

And then it creates the reverse tunnel back to Teleport, to
make sure that there's no other remote access to it.

So this prevents you from needing to put those apps on the public internet.


the http part of it.
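On the agent side, a sketch of that application-access configuration might look like this (the app name, port, and label are made up; fields follow Teleport's app_service configuration):

```yaml
# teleport.yaml on the host next to the internal app
app_service:
  enabled: yes
  apps:
    - name: grafana
      uri: http://127.0.0.1:3000   # loopback only; no public listener
      labels:
        env: internal
```

Users then reach the app through the proxy, and the agent's reverse tunnel carries the traffic.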


All right.

Alexandra has a question of what encryption is used
between the inside node or agent and the outside machine.

Is there any mechanism for posture check?

So we have a few things based upon how you run it.

You can have a CA pin, which is a hash of the Teleport certificate authority,
which you can use to verify that the auth server is the right auth server.

If you're using this edge mode, we just do it through the mutual TLS certificate.

There's some encoding on making sure that you're joining the correct host.
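A sketch of how a joining node might use that pin in its configuration (the token and hash are placeholders; the `ca_pin` field is from Teleport's config reference, and the pin value itself can be read off the auth server, e.g. via `tctl status`):

```yaml
# teleport.yaml on a joining node
teleport:
  auth_token: "<node-join-token>"
  ca_pin: "sha256:<hash-of-the-cluster-CA>"   # verifies the auth server's identity
  auth_servers:
    - teleport.example.com:3025
ssh_service:
  enabled: yes
```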

So for me, as an admin, as a Kubernetes person, I just have to make sure
Teleport's on my machine, that the Teleport command line tool is on my machine.

And then I just run those tsh logins; that's all I have to do.

I just run the login, and now my kubectl is able to talk to that server.


So the first connection does take a little bit of time
to do the initial handshake, but we're connected within a minute.

This is just an empty cluster that I've been running for a week or so.

And is my kubectl, the kubectl command line, actually talking to the proxy server directly?

Is that how the connection's happening?

All right.

And so in that case, when you did the login, earlier you were talking
about this new thing of requiring others to approve your login.

Is that where all that process would take place, during the login phase?


It'd take place prior to it. So you'd ask for access
to the cluster, and then, once approved, you'd log in.


And okay.

And then, based on your policies, which I'm assuming you configure in
Teleport, the login certificates are time-limited based on the policy.

Because I didn't see you ask for an amount of time or anything like that on the command line.

And I'd say by default, I have a 30-hour session.

Oh, okay.



This is quite a large, generous role, but I have access to all clusters.

So it is for demos. You always get God access for demos.


Very cool.

Does this take care of RBAC management of a Kubernetes cluster for me?

Or do I need to already have all those set up and then apply these policies here?

Like how does that work?

Yeah, you'd have to set it up on your cluster of choice based upon how you want to define the roles.

So we have some people I know who create users in Kubernetes based
upon their user, but we have some more advanced these, like internal
DB users, but you can just put in external identity provider options.

So if you have an SSO provider for your Kubernetes
cluster, you can map that same thing into Teleport as well.


Yeah, I was wondering if I could have roles in Kubernetes and then specifically have users in
Teleport, and alleviate needing specific users, because Teleport's logging all the things, right?

So it's showing the connection, it's showing who did it, so now maybe I'm not so
much looking at my kube logs, and I'm paying more attention to Teleport logs.

If that's the only way you can get into my Kubernetes server.



So in that case, it depends upon the risks in your team and how you want to provide access.

Maybe you have, like, system:masters for the ops team, but you have a dev role
which is fine-grained, and all developers share the Kubernetes dev group,
and then you create custom users and use Teleport to give them access.


So I see that there's potential for error in a lot of the configuration, in
Teleport itself and in the proxy or server or whatever we're calling it.

I keep forgetting the names.


Can I put that stuff in Git and not have it stored on the server?

Can I control Teleport through GitOps or some sort of...


I forget the resource name for the RBAC connectors, but you can get these and
set them, and we have an API as well, so you can configure it that way.

We do have some customers who have, I think, 10,000 different roles.

Oh, wow.

So you can really customize it.

But if you actually have that many, you've probably configured this programmatically, and
there are some other more advanced things, like regexes, that you can use to really narrow down your roles.



Is this, what we've been seeing so far, all the open source stuff?



And then if I use this, is it a SaaS solution? Is that the best way to describe it?



So what does that alleviate? What am I getting, what can I pay for, I guess?

We have Teleport Enterprise, which I think includes Cloud now.

And that just means you don't have to run this root cluster.

So we run and maintain Teleport. If you're used to being very SaaS-centric, it
just makes your administration a bit easier; it's one less thing to worry about.

But often people like Teleport because they can run it themselves within
the data center and really limit and sort of fine tune and control it.

Okay, so you have the SaaS offering, essentially.


The four, I think we talked about at the beginning, there are different types
of Teleport, or maybe I'm thinking of the different types of resources I can connect to.

So the database stuff was really interesting to me, and you may or may not
have a demo for that, but I didn't quite understand how a protocol-specific connection works.

If I bring up a SQL GUI for MySQL, is it actually talking the MySQL protocol to the proxy?

Is that kind of what's happening?


So sometimes there's a little bit of yak shaving for the intricacies of the different platforms.

But once you configured it, you do it once and then you can access it.

Talking about GUIs, lots of GUIs do support certificates, especially for MySQL; they still
call them SSL certificates, even if they haven't been updated to say TLS, but they do
support it, so you can use short-lived certificates for access for Postgres and MySQL as well.
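On the Teleport side, a sketch of registering a database with the database service (the database name and URI are made up; fields follow Teleport's db_service configuration):

```yaml
# teleport.yaml for the database service
db_service:
  enabled: yes
  databases:
    - name: prod-postgres
      protocol: postgres
      uri: example.rds.amazonaws.com:5432   # the backing database endpoint
```

A user would then run something like `tsh db login prod-postgres` to obtain the short-lived certificates their client or GUI uses to connect.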


I was going to say, is this, I'm trying to figure out how that connection works
because obviously, this is another problem of, when we're troubleshooting, right?

There's a database, let's say it's RDS in AWS, and I got a Postgres server in there and
it's the production database, and we're seeing weird errors and we're worried that it may be
something wrong with the SQL data, and we just need to get someone connected directly to the
database to do some selects and figure out if the data needs to be, somehow it got screwed up.

And that process, inevitably it's like, now I'm creating a database user and I'm handing
that to a particular person and now they always have it and the passwords never expire.

And that would be, is that kind of a scenario where this just replaces that whole workflow?



And in a similar vein, databases have the most sensitive information.

You might have a range of people, from data engineering to just an engineer who wants
to run a query; for any kind of human interaction, you should use Teleport.

Just because you get so much visibility into what's happening with these database connections.

And that's a good distinction to make real quick, it sounds like Teleport is focused
on humans connecting to systems or resources, not resources connecting to resources.


You can configure it; you can use Teleport with Jenkins, for example.

And it also depends upon your threat model.

You can't necessarily give Jenkins a 10-hour certificate
for access, because you'd need a new one in 10 hours.

And so in that case, we have people who use our API and re-issue
Jenkins a new certificate every 10 hours, or on each run.

And so what that means is, if your CI system was ever compromised and
someone got the certificates of the service, they've only got access
for that short period of time, and then everything's refreshed again.


If you start thinking about short-lived certificates,
you get a much better hygiene policy in place.


All year long, I've been talking about GitHub actions.

Do you have anything in the works with GitHub Actions, so that
we can run an action against, for example, I've got some functional tests that I'm
running in GitHub Actions on public runners, and they need a remote
database, maybe, because they've got to actually test RDS and some S3 stuff inside of a VPC.

So is there anything with that?

Not out of the box, but I think it's something fun to explore.


I'm a big fan this year of getting all of my tooling into their own actions so that I can,
just basically plug and play a workflow together and not have to write a bunch of custom bash.

And, I'm trying to downplay all the bash scripts that everyone's putting in their CI and
say, let's get back to, declarative approaches and try to take our CI to the next level.

So we've talked a lot about GitHub actions, so I just thought I'd ask there.

Definitely something to be thinking of.

So Sly is asking about database GUIs, like SSMS, or is this really just command-line tooling?

So it sounds like the GUIs have to support certificates.


If you come to the docs, I think there's actually a page here with guides for database GUI clients.

And these are ones that we've tried.

like pgAdmin. And it's a bit weird with these sorts of GUIs.

So just read our instructions, and you can also reach out to us; we're happy to help.

And so what you do is you load in the key file, which kind of
stays the same, and you just do tsh login, which will refresh them.
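For the command-line side of that flow, here's a hedged sketch of logging in and finding the certificate files a GUI client can be pointed at. The proxy address, database name, and database user are all made up for illustration:

```shell
# Hypothetical sketch: fetch short-lived database certificates with tsh,
# then print the connection details a GUI client like pgAdmin needs.
# The proxy, database, and user names here are assumptions.

tsh login --proxy=teleport.example.com         # authenticate to the cluster
tsh db login --db-user=alice example-postgres  # issue certs for this database
tsh db config example-postgres                 # show host/port and key/cert file paths
```

The last command is the useful one for GUIs: it prints where the key and certificate files live on disk, so you can load them into the client once and then just re-run `tsh login` when they expire.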


Yeah, because I have to keep remembering it.

This isn't some system based VPN that allows anything to run through that tunnel.

This is protocol-specific, and it doesn't wrap my client tools, it sounds like.

It's dependent upon the client tool's functionality, and
this is all using PKI; it's all certificate-based.


So we've talked about Kubernetes, we've talked about SSH.

We've talked about database connections.

Since we've got a few more minutes, do you want to cover
a little bit more of how the web-based access works?


I actually just have this Docker Compose file,
which has Grafana and Teleport running.

So what we have is we just have a Grafana service and a Teleport service running.

We have a small network, like a bridge network, between these two.

I have a range of applications.

For the Grafana dashboard, the connection's going through Teleport.

You can even access this.

And this is an example of using Teleport for application access.

You might want to secure your own Grafana dashboard, or if you had some
staging or local dev environment you wanted to share with the rest of your
teammates, you could use Teleport application access to share it and get early feedback.
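As a sketch of the kind of Compose setup being described (this is not the actual demo file; the image tag, join token, addresses, and names are all assumptions for illustration):

```shell
# Hypothetical sketch of the Grafana + Teleport sidecar setup described
# above: two services on a small bridge network, with the Teleport app
# service dialing out to the cluster. All names/values here are made up.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  grafana:
    image: grafana/grafana
    networks: [demo]
  teleport-sidecar:
    image: public.ecr.aws/gravitational/teleport:8
    command: >
      teleport start --roles=app
        --token=my-join-token
        --auth-server=teleport.example.com:443
        --app-name=grafana
        --app-uri=http://grafana:3000
    networks: [demo]
networks:
  demo:
    driver: bridge
EOF
docker compose up -d
```

Note that in this sketch Grafana publishes no ports at all; the sidecar reaches it over the private bridge network and opens a reverse tunnel out to the Teleport cluster, which is what makes the "share a local dev environment" use case work.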

I'm trying to think about how that works.

So you've got that running on your local system.

You're like, you're creating some custom Grafana dashboard.

How do I get access to it?

How does that connection actually work?

How it works is, you can think of it like an SSH reverse tunnel.

And so the initial connection goes through Teleport, and then it proxies your
connection down to my machine, where I have Teleport running as a sort of
sidecar, and that puts the connection back into the Teleport root cluster.

And you access it through the root cluster.

It sounds a little bit like inlets, if you've ever heard of Alex Ellis' inlets proxy.

We've talked about it on the show before; we've had him on the show.

So basically your machine is reaching out to your Teleport server, making
that persistent connection that all these protocols tunnel through.

And then I am typing in the URL of, essentially, the proxy server, right?


So to me, it looks like you just have an internet-facing TLS proxy with a
friendly name that happens to then redirect to your machine, okay.

Yeah, like Grafana, for me, I can probably access it on this address.

I think I can access it on 0.0.0.0.

This is how I could access it through Docker networking

Directly on your machine without Teleport.



Yeah, I like that sidecar analogy. So then would you need one of these Teleport
sidecars for each web app that you wanted to have distinctly in that list?

Not necessarily, you can add multiple ones, but it's probably a
good security model to have one sidecar each; they're very small.


Then you just have the local loopback, so you don't
have to expose the application too widely on the network.


Otherwise, I would imagine you can't granularly control each individual
one; it's all or nothing if you're putting a bunch in there.

So this almost becomes like an application list of things I can access that I
may or may not have direct connectivity to. It's a totally different technology, but this

kind of reminds me of, if anyone's ever had to run a Citrix server: you would
get a webpage with all these app buttons, and they would be running all over the place.

You have no idea where those apps are actually running, in what data center, whatever,
but you, the user, just use a webpage, you click it, the thing opens, it's magic.

But it didn't matter what system I was on or where I was on the network.

I could just get to those things.

So that's a pretty interesting workflow there.

So when it's listing applications, does that include Kubernetes
and kubectl, or is that really just web applications?

These are web dashboards.

I have added the standard Kubernetes dashboard before, since it's HTTP.

Yeah, very neat.

And again, this is all open source right now?

It's always been open source.

Question from the audience: Muhammad asks, for tunneling,
does it use something like WireGuard under the hood?

We don't use WireGuard currently.

I actually have a really good blog post on using WireGuard for Kubernetes.

Is WireGuard protocol-specific or is it more of a universal tunnel?

I always understood it was universal.

It's universal. So if you're interested in WireGuard and using
WireGuard for Kubernetes, Kevin wrote this blog post, which we used
for Gravity, but I think it's an open source project you can use.

So if you ever want to go deep on WireGuard for
Kubernetes, I highly recommend checking out his post.

And it covers sort of everything and goes pretty deep.





All right.

Anything else you want to show off before we wrap this up?


I think we've covered a lot today.

It's been a lot of fun.



So how do they get it?


Just go to Get Started.

It's a free download.

You can download it here.

I'd recommend checking out this quick-start guide.

I have a short five minute video on setting it up.

If there are 77 people on the line right now,
you can get us to 10,000 stars.

All right.

I'll put my star in.

So it's on GitHub at gravitational/teleport, right?


In fact, on the website, it actually just shows 9.9K.

So I will point everyone to that GitHub repo; we'll see if we can't get you a little bit closer.

It's a vanity metric.

Yeah, but it's fun.

We all love round numbers.

I've been watching my Twitter feed for the longest time waiting for it to hit 10,000 and it's
been really slow going, but I'm excited that it might happen one day, maybe this year, who knows?

But yeah, so we can go to the website, download it, walk through the examples, or you
can read all about it in the table of contents on GitHub, if that's your preference.

I'll tell you what, I spent so much of my life on
GitHub, now I might as well just have a GitHub computer.

All my tabs are GitHub, and so I almost
always prefer the GitHub format over website formats.


Actually, if you want to get started, this is a super concise readme.

You don't have to go to our website; everything you need is here.

And then also if you're interested in hacking in Go, there's like a super clean Go project as well.

We're also hiring.

There you go.

If you want to write some open source Go, come join us.


About every month I hear about it; I work with a lot of projects.

Obviously, I have a lot of students, and all the time I see people
switching to Golang, so much so now that I feel like
even if I don't develop in it every day, I just need to know it.

It's become one of those things, like Python or Bash or JavaScript.

It's almost like at some point, you're going to be expected to know it.

If you're in the Cloud Native space, you probably need at least to know how to read Golang.

It's awesome that you guys have so much open source.

Thank you for the open source community version.

And I think that's going to wrap it up.

Thanks so much for being on the show.

We've been planning this now for about a month, I think.

I have been very curious about this product and wanting to use it on
my own stuff, especially not realizing how much of it is open source.

It seems very interesting to me to be able to have universal access, because I have all the same needs.

Just on a personal level, I have Kubernetes clusters that I use, I have nodes
that I want to get into, and I have websites running in places, like the Kubernetes
dashboard, that I don't necessarily want to have completely open public access.

And the only thing that's blocking me is the Kubernetes certificate that's
only on my machine, because I haven't put it anywhere else or backed it up.

This might be a good thing to check out.


Thanks, Ben.

By the way, you can get ahold of him on Twitter.

I'm just going to volunteer him, if you have any further questions, get him on Twitter.


We have a community channel too, if you ever want to join us.

Oh, nice.


So if you have more questions, I'm sure there are people in there who can help.

All right.

Thank you so much, Ben, for being on the show.


Thank you.

Thanks so much for listening and I'll see you in the next episode.