Swift Package Indexing

This week, we discuss importing ALL the packages, Swift 5.9, the Swift mentorship programme and package scores. Plus, of course, package recommendations!

News
Packages

Creators & Guests

Host
Dave Verwer
Independent iOS developer, technical writer, author of @iOSDevWeekly, and creator of @SwiftPackages. He/him.
Host
Sven A. Schmidt
Physicist & techie. CERN alumnus. Co-creator @SwiftPackages. Hummingbird app: https://t.co/2S9Y4ln53I

What is Swift Package Indexing?

Join Dave and Sven, the creators of the Swift Package Index open-source project, as they talk about progress on the project and discuss a new set of community package recommendations every episode.

Remember last week we talked about dependencies, and in the course of the conversation I sort of, apologetically, is that a word? I felt embarrassed when I mentioned that

SPI server, the package that drives the Swift Package Index website, is the one with the most dependencies in the index, at 56. And when I was listening back to the episode, I thought, I've been approaching this all wrong. This is the Swift Package Index server package. The server, the package that's hosting the package index, should be using as many packages as possible.

We need to embrace the index, right? Embrace the index. That's probably a tagline we should have

on the site. So if anything, that number is too low. We should be aiming to use at least 1% of the packages in the index. I have a mission. Yeah, I have a mission. I'm going to get all the

dependencies. Maybe we should just include every dependency just in case we need it.

Well, at least a few more we could do, right? Maybe we should remember when Steve Jobs announced the iPhone and said, you know, he wanted 1% market share; we should maybe aim for 1% package share in a year. Yeah. And maybe then in the future, we could aim for two or 3% long-term, but you know, 1%, we're close. We're at 56, and 63 would be 1%. So we're right there. I mean, it doesn't take much.

Who else is going to test whether all these packages work together? I think we are the only

answer to that question. I mean, I'm sure there's a couple of easy ones we can pull in

without bending over backwards too much. I'm sure something can be done.

Anyway, so that was that. And then the other interesting bit of feedback we got was from

Joe Heck, so he sent us a follow-up email. We've talked about JavaScript dependency trees,

how they tend to be big and self-referencing in ways we found puzzling. And he suspects

the reason is that dependency subtrees in JavaScript are resolved independently. My understanding is that Swift Package Manager, when you do package resolution, looks at the whole thing and resolves one version and one version only per package that is used for your artifact.

And sometimes that doesn't work, right? There is no package resolution because one package has a

certain range for a dependency and another package has another incompatible range and then package

resolution fails entirely. My understanding is in JavaScript, that would still work because

you have two independent subtrees that both use that dependency, just at different versions.
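
As a rough illustration of the behaviour described here, consider a hypothetical manifest (every package name and URL is made up) where the top-level package and one of its dependencies need incompatible versions of a shared package. SwiftPM resolves exactly one version of each package for the whole graph, so this fails, whereas npm would happily keep both versions in independent subtrees.

```swift
// swift-tools-version:5.9
// Hypothetical manifest; all names and URLs here are invented for illustration.
import PackageDescription

let package = Package(
    name: "AppKitten",
    dependencies: [
        // We require the (made-up) SharedUtils package at 1.x ...
        .package(url: "https://example.com/SharedUtils.git", from: "1.0.0"),
        // ... but LibraryB's own manifest declares SharedUtils at 2.x.
        .package(url: "https://example.com/LibraryB.git", from: "3.0.0"),
    ],
    targets: [
        .executableTarget(
            name: "AppKitten",
            dependencies: [
                .product(name: "SharedUtils", package: "SharedUtils"),
                .product(name: "LibraryB", package: "LibraryB"),
            ]
        )
    ]
)

// `swift package resolve` has to pick a single SharedUtils version that
// satisfies both 1.x and 2.x. There isn't one, so resolution fails with a
// conflict; npm would instead install 1.x and 2.x in separate subtrees.
```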

And I think the analogy in Swift is as if every dependency was built like a binary artifact,

and then you pull those binary artifacts in and link them into your binary at the end,

such that they're sort of opaque. Yeah. Yeah, pretty much, but it's not binary, of course,

it's just source code. So the slightly annoying thing is, I actually knew this already and it had just completely slipped my mind when we were talking about it last week. And it is an

interesting approach, but it brings with it as many problems as it solves. Yeah, it's interesting

how these different approaches potentially lead to different outcomes. I mean, we don't have the

numbers. There were some similarities between JavaScript's ecosystem and Rust. And Rust,

I believe, doesn't have that same approach. So the fact that they also have circular

references might indicate that that would happen in Swift as well. Again, we'll do this at some

point and then we'll find out. I still can't explain why some packages reference

themselves. Wasn't that a thing? Yeah, not only that, I think even directly, which I found super baffling. Yeah, so the two-package dependency cycle is now explained more readily, but the one-package dependency cycle is still a mystery to me. Yeah, that pretty much is a mystery. I wonder if something was wrong with them. But you know, his analysis looked very

thorough and considered so it doesn't look like this was a data mess-up

or anything like it. We'll see. And if you're wondering what we're talking

about, you should go back and listen to the previous episode where we

talk about this blog post about JavaScript dependencies and

you'll hear all the background to what we're following up here.

Yeah, exactly. Yeah, right. That was the follow-up that I have in my

notes. One thing we can talk about this week is we had a pull request merged as

a result of the Swift mentorship program. So we've been participating in the mentorship program for three years now. This is our third year of participation, and I've been working with Cindy Chin on some of our package scoring

code. And so there's actually been a couple of pull requests merged by Cindy

already. The project that we're working on is all around package score and how

we determine it and how we surface what the package score is and so package

authors can see their package score and things like that. And we got started with

one pull request almost at the beginning of the program and then this

week we merged another pull request which is part of the scoring mechanism.

So we're adding an extra dimension to scoring. So we have this internal package

score which is what we use. It's not the entire sort order for search results, but it contributes to the sort order for search results, and we want good quality packages to naturally increase that package score. So for example, if a package has been maintained recently, it

had an issue closed or a pull request closed or a commit or something like

that, then we count that as a recently maintained package and that

adds to the score. If the package repository is archived, we don't give any

points for that, but if it's not archived, it gets some points for that. And there are several more criteria. You can have a look at the source. In fact, we'll give a link to the source code in the show notes because the algorithm is transparent and open.

And we've added to it, or rather Cindy has added to it this week, by adding a score increase if your package has a test target. Now, there are pros and cons to this score increase for test targets. It is just about the simplest implementation of "are there tests in this package?": it looks through the targets in the package manifest, and if you have a test target then it assumes you probably have some tests. Now, depending on how you made your package, you might have got a test target for free and might have never added any actual tests to it. So there's an element of this which we would like to improve again in the future, but this is progress towards a better system, right? That's how software development works: we put one foot in front of the other.
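
As a rough sketch of the kind of check described here (this is not the Swift Package Index's actual Score code; the types, names and point values are invented for illustration), the test-target criterion is essentially a scan of the parsed manifest:

```swift
// Illustrative sketch only; the real scoring code lives in the
// Swift Package Index repository linked in the show notes.
struct CandidatePackage {
    struct Target {
        enum Kind { case regular, executable, test }
        var name: String
        var kind: Kind
    }

    var isArchived: Bool
    var hasRecentActivity: Bool   // e.g. a recent commit, closed issue or PR
    var targets: [Target]
}

func score(for package: CandidatePackage) -> Int {
    var score = 0
    if !package.isArchived { score += 20 }        // not archived
    if package.hasRecentActivity { score += 15 }  // recently maintained
    // The simplest possible "has tests" check: any test target in the manifest.
    if package.targets.contains(where: { $0.kind == .test }) {
        score += 5
    }
    return score
}
```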

Yeah, I think it would be good as a second step to at least reject the common case where someone has just created a template package and sort of got the points for free, while someone who actually cleaned up the template and removed stuff they weren't using would be sort of punished for doing the tidying up. That's probably something we can do at some point. It doesn't sound like it's... I could imagine there are ways to do that that aren't overly complicated. Just briefly thinking about it, the devil might be in the details, but it's the sort of thing that is probably worth doing. Test coverage, one way or another, whether it's by counting targets or number of tests, or ideally actual coverage, is a really interesting metric for someone looking for a package, but obviously it's also hard to get,

right? Test coverage is really hard to obtain unless you run the tests and running the tests

isn't something we'll probably ever do or be able to do, because a CI system is notoriously difficult to set up even if you own the project, and probably impossible to do if you don't own it. So yeah, CI is hard to set up even if you're being paid to set it up, let alone us setting it up for every project that

exists. I really like messing with tests and CI but the prospect of

managing hundreds of CI systems of packages I don't own isn't one I cherish.

Yeah, yeah it's interesting. It's the kind of thing that I also don't

think we'll ever go to that level with the package index compatibility system.

But we could certainly do more to start actually looking for

number of tests. I probably wouldn't want to even report, even if we could, I mean

obviously we'd need to run the tests, but even if we could find a way to do that, I

wouldn't even particularly want to report code coverage. But I think a

simple number of tests is a great metric that we'll potentially work

towards going forward. Yeah, and I think that might be feasible with SwiftSyntax if you really want to look at the source code, or maybe even some simple heuristics that might be wrong in some edge cases but generally give a good result; that might work too. I think there are ways to arrive there that aren't overly

complicated. And then the next stage of Cindy's work on the mentorship program is to explain that

score. So as I said a few minutes ago, we'll put a link to the source code in the show notes. Linking to the source

code is one way to tell people how their package is scored, but it's not a great

way. It is, however, the way that we've been relying on so far. But the

next piece of work that Cindy is currently working on - in fact I had a

meeting with her about an hour ago where we looked at a prototype of this

working on her branch - is this: we have a page for package maintainers

where we give people the markdown for the compatibility badges that they can

put in their readme and links to how to configure the documentation and things

like that. And at the bottom of that page we're going to have a section where it

tells people what the package score is and then has a table underneath that

score to also explain how the points were tallied and why the points were

tallied, so that if you have a package in the index you can take a look at it and see, okay, well, this is why I got this score, and if I would like to improve my score, this is what I need to do. And if all of this works, then those things that you do to improve your score will also make your package better, so hopefully there'll be no gaming of the system. I was just gonna say we're

going to publish the recipe to game the system. Well I think there's obviously

gonna be a little bit of that, and we can't stop that, nor would I particularly be interested in stopping it. But I think we should be transparent with what the score is. I mean, we are being transparent by definition because it's an open source project. Well, we could hide it, we could put it in a private package, but I definitely don't want to do that. So it is already open. This is just explaining it.

- Yeah, and you know, if we get people to add documentation

to their package, even if it's bare bones,

that's still a win.

If that's the level of gaming we arrive at, that's fine.

That's good, because then it's easier next time around

to just do a little more,

that other little bit of documentation,

because you've already set it up, why not use it?

- It's like a broccoli eating competition.

You know, you can cheat and eat a load of broccoli,

and at the end of it, you're just more healthy.

(laughing)


That doesn't work at all.

- It sounds like you don't like broccoli.

I actually love broccoli.

That didn't sound like--

- I actually, I love broccoli.

I think I would even go as far as to say

broccoli is my favorite vegetable.

Ah, that's funny.

I don't know whether that'll make the edit.

(laughing)

- Leave it in.

- The last thing to mention here is that Cindy was very keen

with her mentorship to do a feature end to end.

And so we started off with some planning of it.

We talked about possibilities.

We talked about, should we even display the score?

Where should we display it?

We toyed with having it on the package page.

So we did a little kind of feature design at the beginning.

We also wanted at least one more kind of indicator

into that score, which is this test target,

pull request that got merged.

But I think the other thing which is really nice to see

is that she wanted to take this feature

right through to launching it.

And the last pull request on this project for her

will hopefully be a blog post announcing the thing.

So it's literally from what are we even going to do

through to several pull requests to add some functionality

through to launching it and publicizing it

and getting the word out about what we're doing.

- Very nice.

- Which I think is just, it's such a great attitude.

It's been, I mean, it's not over yet,

but it's been a pleasure working with Cindy so far.

- That sounds great.

All right, in other news,

we have upgraded our build system

to the release version of Swift 5.9,

which is probably not a big change for most packages.

We were still running on an early version

of the Swift 5.9 beta,

but I don't think there were many packages

that had any build differences between the two.

The biggest change is probably that we're also now generating documentation by default

with 5.9, because the documentation generation always happens on the latest release version

in our system.

And as long as we're on a pre-release version, we don't switch documentation generation over

yet.

So if you haven't specified a Swift version for your doc generation, from now on your docs will be generated with 5.9. That's also probably not a big change between 5.8 and 5.9. Do you actually know what the DocC differences are between the two versions? I don't think there were any.

The quick navigation was 5.8, right?

Yeah, that was a very user-facing feature that came with 5.8, but 5.9, I don't think there's

anything in the UI. I believe there was a default change when it comes to generating documentation

for extensions but I'm always a bit hazy on what goes in when and what default is enabled when.

But it might be a good idea if you have documentation and you release a new version

or just the main branch, have a look at your docs for that version to make sure that everything

looks as you expect it to. And if you see any issues, make sure you generate your documentation

locally with Swift 5.9 to see if you get the same result. And if there's any discrepancy,

let us know. If there are any errors, let us know and we'll take a closer look.

Yeah, that's just the heads up around that change.

Absolutely.
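
For reference, the Swift version used for documentation generation can be pinned in a package's .spi.yml file. This is only a sketch: the target name is a placeholder and the exact schema may have changed, so check the Swift Package Index documentation before copying it.

```yaml
# .spi.yml - sketch of pinning doc generation to a Swift version.
# "MyLibrary" is a placeholder target name; verify the current schema
# against the Swift Package Index documentation.
version: 1
builder:
  configs:
    - documentation_targets: [MyLibrary]
      swift_version: '5.9'
```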

Shall we do some packages?

Yeah, let's do some packages. And I will start us off this week with a package by Miguel de Icaza

called SwiftGodot, or Godot, which is something that I've been paying a little bit of attention

to for a little while now. So I don't know whether you heard in the last couple of weeks,

Unity put their foot squarely in their mouth with a change of business plan that really

was not very well thought through and was to say the least badly received by the Unity community.

One could say there was discord in the Unity community.

One could if one wanted to, yes. But let's hope one doesn't want to.

Yeah, it was a terrible PR disaster. And I'm not a Unity user. I've downloaded it and had a very,

very cursory play with it, but I definitely wouldn't call myself a user of it. But one

thing that I had been kind of keeping my eye on is this open source game framework called Godot, "Guddo" or "G'doh", I'm not quite sure what the pronunciation is. And I think they had a

huge boost in publicity when this Unity announcement went bad, because everyone suddenly started looking at what else is available. Yeah. And yeah, and

Miguel has put together some bindings for the framework using Swift. So there

is apparently a new extension system in the framework that you can hook into

and then interact with the game object. So the default language for this

framework is actually a language called GDScript, which is a little like Python in that it uses indentation for structure, but also it's a little... there are certainly a couple of shades of Swift in there with, you know, var statements and func statements and things like that. So GDScript itself is one option for interacting with this framework, but now you can

actually also use Swift. And I just want to call out what a fantastic job - I don't

know how I haven't used this package - but what a fantastic job Miguel has

done with documenting this package, even down to hosting a tutorial, a 50 minute

long tutorial using the DocC tutorial framework to get you started with

building a game with Swift using this framework.

Nice. Yeah, this was actually also on my list, actually the first one.

Oh, I sniped you!

Sniped, got in first, yeah, and I picked it for the same reason, with the Unity stuff going on.

It's really interesting, yeah, I saw, I didn't actually see that there was this

long tutorial in it, and I think tutorials can be quite fiddly to generate, so creating a tutorial

that long is quite the task. That's amazing. So from the readme file, if you click through to the

API documentation, you'll end up on a DocC-generated documentation set, which is unfortunately not hosted with the package index, but we'll forgive Miguel that just this one time. And the very first link on that documentation is to the tutorial, it's called "Meet SwiftGodot", and it is estimated at 50 minutes of

development. Nice. The other thing perhaps worth mentioning is there is a binary

package that also comes with SwiftGodot, or is associated with SwiftGodot, and the advantage here is that he has packaged the underlying library up in a binary artifact, so you don't need to actually build the whole framework to use it, and that should probably help with the user experience of using that package. I haven't tried this, I don't know how this works. I did have a quick look into how to set this up, but I quickly gave up when I saw what

all is needed to do this, and game dev is a bit like cooking for me. I've

sort of chosen to specialize in eating, so I'm much better at playing games than

doing anything around creating them, so I pretty much waved the flag

immediately. I have had a play with the framework itself way before the Unity

stuff. I investigated it maybe six months ago, but I didn't do anything apart from

write a little GDScript, which is just the default. So that's, yeah, that's

my first package recommendation. I think we are going with Godot, right?

I think so. That's how I had it in my head. Probably needs some sort of

phonetic spelling out as a metadata field on all these packages.

So that's SwiftGodot by Miguel de Icaza. Right, my first pick is called Swift SDK

Generator by that little company called Apple and this is of particular

interest to me. It's a package about or for cross compilation. It's a cross

compilation toolkit and was I think released just last week or the week

before. So there have been some Swift proposals around this, proposals

that went through a while back, but this is the first part of the whole cross

compilation story that is actually usable and does work but in limited

ways. So currently what it supports is cross compilation from macOS to Linux. And the way this works is you use this SDK generator to create a Swift SDK.

So it's sort of a bundle.

You can bundle up, it's 2.5 gigabytes of stuff, essentially Linux stuff that you bundle together

and it's by Linux version.

So there are slight differences between different Linux distributions.

So you need to do this typically per Linux distribution.

I've tried this for Ubuntu Jammy, for instance, and obviously also by platform.

And then you package this together and then you can import this into your Swift compiler toolchain, effectively.

And after you've done that, you can run swift build with what's currently an experimental flag, so you do swift build --experimental-swift-sdk, and then you reference the SDK you've just created, and effectively give it a Swift version number, a platform version label and an architecture. So like 5.9-RELEASE, Ubuntu Jammy, aarch64 for the ARM version of it.

You specify that as the identifier for that SDK, and then you can cross compile

on the Mac to a Linux binary, and you can copy that over to a Linux Docker image, for instance, a Docker container, and then run it.

And after that, it works just fine.
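
For illustration, the flow described here looks roughly like this on the command line. The SDK identifier and bundle path are examples, and the install subcommand is from memory of the swift-sdk-generator README, so double-check against that README before relying on it.

```bash
# Install the generated Swift SDK bundle into the toolchain (path is a placeholder):
swift experimental-sdk install ./Bundles/5.9-RELEASE_ubuntu_jammy_aarch64.artifactbundle

# Build with the experimental cross-compilation flag, referencing the SDK identifier:
swift build --experimental-swift-sdk 5.9-RELEASE_ubuntu_jammy_aarch64

# The resulting Linux binary can then be copied into a Docker container
# (an Ubuntu Jammy image, for instance) and run there.
```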

Really nice.

And, um, I mean, obviously this is probably not the most interesting path yet, right?

Compiling from macOS to Linux, because if you are looking to perhaps save some money

on your CI, macOS as the platform to run stuff on is probably not the cheapest option you have.

I don't know what the plans are, if it's ever going to be possible to do it the other way around, like on Linux to cross compile to macOS.

I'm not sure how that would work with licensing and all that.

But that would be for us, for instance, that would be really interesting.

Because currently Linux is a very easy platform for us to manage and maintain with respect to Swift versions. Whereas macOS, interestingly, is the most complicated one.

We might have a little story to tell in the future, how that might be possible

to make it a bit easier, but currently it's a bit of a fiddly system with respect to macOS

versions and Xcode versions to manage. I really hope that at some point these SDK setups become

really universal and that they support lots of different combinations. I mean, ideally all of

them, but that might never be possible. But the more combinations are covered by this, the better,

really.

- And I think that's the key thing here is that this may be a package which will become more useful

and or more used in the future, but it's good to see that there's thought and planning going into

this now. - Yeah, I mean, just the effort that this exists and works is amazing. I think this

is a really great and strong indicator of where we're heading here because I mean, none of the

Apple packages strike you as efforts that happen now and then don't go anywhere. I think

these are all long-term strategic things that are just being started now and are intended to be used

and to grow. So this is really exciting and the fact that this already works and is usable is

compelling. I'm not sure it's something that would help us immediately. I mean, I could see how it

could make some of our build system easier. So if we were to use this, it would actually make parts

of our setup easier. It would make other parts of our setup harder. So it currently would be a

trade-off, but just the fact that this exists would give us an option in dealing with certain

complexities in our system. And I can, you know, envision that in the future this might make lots

of things just easier in general, where right now we have to do a lot of special casing. So I'm

really looking forward to how this package develops in the future. Yeah, that's great. It's great to

see. My next package is Zip Pinch by Alexey Bukhtin. And this package kind of blew my mind

when I found it. It's actually not a new package at all. It's been in development

for eight years so this is a very well established

open source package. It's not a, I don't think it's a

particularly terribly complicated package because

in those eight years it's only had 48 commits but

that's not to put it down at all. It's just

you know, some jobs are big, some jobs are small. But what blew my mind is what it does: it's an extension for URLSession to work with zip files remotely, so you can open, peek inside, read and write to a zip file without downloading the file locally, which absolutely blew my mind. Now, it's been a very long time since I looked at the details of a PKZIP file, but I do remember from when I did briefly look at it when I was a kid,

basically, that it does have this kind of table of contents at the beginning,

and you can reference into the file and you can get just the bits you need and

all the rest of it. But to do that remotely over a URLSession kind of blew

my mind a little bit. Now you might ask, well, how useful is that? And it became

more useful in my mind when I read that it was also compatible with watchOS.

So I can absolutely see a situation where on the watch you're trying not to

download an entire zip file, unzip it, try and poke through what the

files are inside there, but to be able to access a remote zip file and access

files within that, I can see this being useful.

So is this PKZIP or ZIP or are they the same? I don't know. I thought PKZIP was different.

PKZIP was the original file format. I mean I know that file format has been incredibly robust.

I can't say there have been no changes to the zip file format since then, but I don't think there

have been many changes to it. Right, so this is just a general zip file, like the one you get if you right-click and click Compress, that's the same format. It's a zip file,

yeah, absolutely. So how does that work? Does it just use HTTP offsets, like HEAD requests and stuff like that? I haven't been digging through the source

code, but that's the only way that I can think that it can work. So it must do a

request to get the beginning of the file, read a little bit, read the table of

contents and then do a separate request at an offset to read the file and then

unzip the content locally. So that's Zip Pinch by Alexey Bukhtin. Very nice.
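
As a side note, the ZIP "table of contents" (the central directory) actually sits at the end of the archive, which is what makes this kind of ranged access practical. Here is a rough sketch of the mechanism being speculated about, not ZipPinch's actual API: fetching just a byte range of a remote file with a standard HTTP Range header via URLSession.

```swift
import Foundation

// Sketch of the mechanism, not ZipPinch's actual API: ask the server for a
// slice of the remote file using an HTTP Range header. A ZIP's central
// directory lives at the end of the file, so a client can fetch that region
// first and then request individual entries at the offsets it lists.
func fetchByteRange(of url: URL, bytes range: ClosedRange<Int>) async throws -> Data {
    var request = URLRequest(url: url)
    request.setValue("bytes=\(range.lowerBound)-\(range.upperBound)", forHTTPHeaderField: "Range")
    let (data, response) = try await URLSession.shared.data(for: request)
    // A server that honours range requests answers with 206 Partial Content.
    guard (response as? HTTPURLResponse)?.statusCode == 206 else {
        throw URLError(.badServerResponse)
    }
    return data
}

// Hypothetical usage: grab the last 64 KB of a remote archive, where the
// end-of-central-directory record would be, and parse onwards from there.
// let tail = try await fetchByteRange(of: zipURL, bytes: (fileSize - 65_536)...(fileSize - 1))
```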

Right, my second pick is another package by Apple and it's called Swift Testing.

People have probably heard about this because I believe you also referenced or

mentioned this in Friday's newsletter, didn't you? Yes, in fact I wrote my

opening comment on it. Ah right, yes, yes. You actually had a link to the

Vision document about the package in your opening and it's actually, that

document is hard to find. I wanted to check it out again and the best way to

find it was to go back and find that issue because it's not actually linked

from... Well, the best way to find anything is to go to iOS Dev Weekly. Here we go, our sponsors this week. I really love this package. When I saw this announced on the

Swift forums, I was so excited, because I've long felt that XCTAssert and the whole XCTest framework... it's great, I mean, I've used it for literally decades now. I mean, this is ancient, right? And it hasn't really changed. The word I used to describe it was "fine". It's fine.

It does exactly what it needs to do. Yeah. And you know, on the surface, it's the same old. I learned

unit testing with Java in, I still know the year because it was at a conference in 2001

where I first heard about unit testing and I was immediately taken by it. And the concept is

exactly the same since then. You have a runner and you have asserts, effectively a comparison function or macro, however it's implemented, that checks whether something equals something else, and something that prints out the actual difference if they differ, to various degrees nicely, or perhaps, in the case of XCTAssert, less nicely. And what this does is it uses macros to revisit how all of this machinery is done. So for instance you have a #expect macro that unifies all of the XCTAssert variants, so you no longer have different ones that you spell out, like XCTAssertEqual, and I think the other one is XCTAssertEqual with accuracy if you want to compare two doubles with a certain accuracy, XCTAssertTrue/False if you have Booleans that you compare. All these different variants you have to use. #expect is a single macro that you use; it can handle all these different variants and it gives you nicer printouts in case of

error logging. I do hope that they sort of steal from the power assert library that we talked about

in the past. I'll add a link to the show notes. I'm sorry, I don't have the author's name to hand right now. It's called power assert. It's a macro library that gives amazing printouts of error messages when an expression is wrong. It actually details which part of the expression is what and uses ASCII art to point to the different elements.

Yeah, it's super cool. So you can immediately see what's what. It's an amazing library, and I hope they, you know, steal, or in some shape or form absorb, the awesomeness of this package into the Swift Testing

package, so we have it available without having to import anything extra.

It also adds an @Test annotation for your test functions. So it's no longer required to adhere to certain naming conventions to make tests discoverable. So there's no longer a test-prefix naming of functions. They can be named anything, and you have the @Test annotation, and you can even give your tests speaking names, like proper descriptions of what they do. By default, if you don't do anything, it'll just use the function name as the test name, but

you can override that and even use like normal strings with spaces and all that to spell

out what the test does.

Really nice.

It's just a modern test suite really.

It has parameterized testing so you can run many tests that just differ in input parameters.
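
To illustrate the features described here, a small sketch; the function under test and the values are made up, and swift-testing's API was still evolving at the time, so details may differ from the released package.

```swift
import Foundation
import Testing

// Hypothetical function under test.
func slug(from title: String) -> String {
    title.lowercased().replacingOccurrences(of: " ", with: "-")
}

// A descriptive display name instead of a test-prefixed method name,
// and a single #expect macro instead of the various XCTAssert variants.
@Test("Slugs are lowercased and hyphenated")
func slugBasics() {
    #expect(slug(from: "Swift Package Indexing") == "swift-package-indexing")
}

// Parameterized testing: the same body runs once per input value.
@Test(arguments: ["One Two", "one two", "ONE TWO"])
func slugIsStable(title: String) {
    #expect(slug(from: title) == "one-two")
}
```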

And the documentation is already great.

There's a great guide on how to convert tests over.

I've actually tried this out with one of our packages to see how this works and if it's

already usable.

And it is.

I think the only thing I noticed is that there is no class setup function at the moment.

So a class setUp, if you have an XCTestCase with a class setUp, it runs once per

suite, whereas an instance setup runs before each test. So right now you'd have to sort of fake that

by keeping a little Boolean around where you mark if it's run already, if you have need for that.
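
A minimal sketch of that workaround, purely illustrative (the type and the flag are made up, and parallel test execution would need more care than this):

```swift
import Testing

// Faking a once-per-suite setUp with a flag, as described above.
struct DatabaseTests {
    static var didSetUpSuite = false

    init() {
        // init() runs before every test, the per-instance equivalent of setUp,
        // so guard the expensive one-time work with a flag.
        if !Self.didSetUpSuite {
            // ... one-time preparation would go here ...
            Self.didSetUpSuite = true
        }
    }

    @Test func somethingUsesTheSharedSetup() {
        #expect(Self.didSetUpSuite)
    }
}
```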

But other than that, everything I needed to convert over that test was there, and the output is nice.

You need a nightly toolchain right now to use it, but it's all documented how to use it, and it looks really great. I'm excited about this. I can't wait to have this around. I

suspect it'll be Swift 5.10 when this will come. I don't think this will be any of the

dot releases of 5.9. So I think we're looking at Spring. But yeah, I mean, Spring can't

come soon enough to be honest.

Yeah, yeah, it does look really, really great. And I think the fact that this is being developed

as an open source package rather than something...

'Cause obviously testing is a big part

of what iOS applications and macOS applications do.

And it's a big part of what Xcode does.

And there's obviously Xcode integration with XCTest

in the current releases of Xcode.

And I think that the fact that this is being released

as a open source Swift package is a really good sign

that that interface between test results

and Xcode showing you test results

might become a little more flexible.

And that's mainly what I wrote about

in the newsletter on Friday.

My hope that that would be the case

to enable things like different testing strategies

and different testing libraries.

And wouldn't it be wonderful if we could,

if Swift could support lots of different testing approaches,

not just one blessed testing approach,

even if that blessed testing approach is great.

- Yeah, I mean, I'm pretty sure it will be

because I mean, functionally it's a drop in replacement

for the existing system, and there are lots of other packages, Quick and Nimble, that are built alongside, at least, if not on top of, XCTest. Point-Free also have a number of test extension packages

which make use of XCTest

and all this should still just work

and be possible to extend and build on top of.

- Of course, yeah.

But the problem with those things,

especially Quick and Nimble,

is that the way that Xcode reports those tests

is because everything has to effectively

be done within one XCTest test.

And so your test suite passes or fails,

and they do things to make that a little easier.

But I'm hoping that this ushers in a new era

of decoupling the tight integration between that.

- Yeah, I mean, just the fact that this is

an open source package gives all these developers

of these test extensions an opportunity to chime in.

And they already have.

I've seen Stephen Celis, too, listing their packages and highlighting in which ways they build on top of XCTest

and sort of hinting at these would be potentially useful things to have inside this package.

So you don't need to actually onboard different new test libraries. And he didn't say so,

but I'm pretty sure that Stephen and Brandon would be happy to see functionality moved out of their

packages into a core testing library such that they wouldn't have to maintain it anymore and

could use it. So I'm really excited about this for all these reasons. It's built out in the open

and gives all of these other test extension packages and authors a chance to influence

the direction. And I think that looks really, really great and promising.

Yeah, it is. It's great. Okay, I'm going to start my last package recommendation this week

with a question for you, Sven.

Howdy, is it quiz time again? Kind of, it's just a one question quiz.

How many times have you written a regular expression to detect an email address?

Not a lot to be honest. I don't know. Not a lot, but probably more than zero, right?

Perhaps, yeah. I mean, I know I'd probably look for a... I know this is a hotly debated topic,

you know, like using Regex stuff to parse certain things. And I know, especially the email spec is

not as easy as you think it is, right? Even the domain name spec isn't as easy as you think it is.

So. And you have to incorporate all the domain name spec plus the email spec, right? Yeah.

I might even go so far as just to look for a single @ sign and make that the, you know,

depending on how rigorous the check has to be. But sure. And that's the temptation. It feels like

such an easy problem to solve. You just go, well we'll just split on the @ and

then if it's got a dot on the right hand side and something of this kind of

alphanumeric character on the left hand side, we're fine, right? It feels so easy

but it's never that easy. So my final recommendation for this week is by Dave Poirier, and it's called SwiftEmailValidator, and one thing that I didn't even realize till I read this readme is, you mentioned the email spec. Well, actually there are five email specs. Of course there are. There's RFC 822, RFC 2047, RFC 5321, RFC 5322 for the stuff they forgot to put in 5321, and RFC 6531.

So yeah, email addresses are notoriously difficult to validate and

most of the time you might just offload that functionality to your backend

server. You might just say take your iOS app or your Mac app, pass whatever the

input is back to your server. But if you do want to do some smart validation on

the client, I think the comprehensive nature of this package by Dave Poirier

is worth taking a look at. It certainly seems to be extremely

comprehensive. You can have email addresses that have IP addresses in them.

You can have email addresses that have IPv6 addresses in them. And suddenly the can is open and the worms are

everywhere. Yes, so do not try and write a regular expression to validate email. You will get it wrong, and I mean that doesn't even start to think about all the internationalization character sets and all sorts of stuff. They're all valid in email, so yeah, it's a difficult thing to do, so maybe try this

package by Dave instead. That's SwiftEmailValidator by Dave Poirier. Nice.

Well at the very least you'll have someone to yell at if it doesn't work.
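
For what it's worth, the "just split on the @" check discussed a moment ago looks something like this, which is exactly why it's so tempting. It is purely illustrative and deliberately not a recommendation: it accepts plenty of invalid addresses and rejects valid ones, which is the point of reaching for a dedicated validator instead.

```swift
// The tempting naive check: split on "@" and glance at both halves.
// Purely illustrative, and NOT a recommendation; it rejects valid addresses
// (IP-literal hosts, quoted local parts) and accepts invalid ones.
func looksVaguelyLikeAnEmail(_ input: String) -> Bool {
    let parts = input.split(separator: "@")
    guard parts.count == 2 else { return false }
    let local = parts[0], domain = parts[1]
    return !local.isEmpty && domain.contains(".")
}

// looksVaguelyLikeAnEmail("dave@example.com")        // true
// looksVaguelyLikeAnEmail("user@[IPv6:2001:db8::1]") // false, yet it's valid
```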

Great. Well I don't have a third, well I did have a third package but you sniped

that, so I'm out. But I sniped it, yeah. So we will call that an episode, and we'll see you back here in a couple of weeks.

- See you in two weeks, bye bye.

- See you then, bye bye.