Swift Package Indexing

Dave and Sven talk about the work that Cyndi Chin shipped as part of this year’s Swift Mentorship Program, and then dive into the details of some of the metrics, and answer a couple of listener questions about the feature. Plus six package recommendations, as always!


Creators & Guests

Dave Verwer
Independent iOS developer, technical writer, author of @iOSDevWeekly, and creator of @SwiftPackages. He/him.
Sven A. Schmidt
Physicist & techie. CERN alumnus. Co-creator @SwiftPackages. Hummingbird app: https://t.co/2S9Y4ln53I

What is Swift Package Indexing?

Join Dave and Sven, the creators of the Swift Package Index open-source project, as they talk about progress on the project and discuss a new set of community package recommendations every episode.

We have shipped a new feature since we last spoke, right?

Indeed we have.

Well, kind of. I say we.

It's neither of us that shipped the feature.

So what we're talking about is the feature that went live on the site, was it last week,

I think it was, which is to explain our internal package score on the page that we currently have

for package maintainers that shows you various little bits of metadata about your package.

And it now includes a breakdown of how we are representing packages with our internal score

that goes towards the search result placement. So our search results are not entirely ordered

by package score, but package score goes into the mix as well as relevance of the query,

of course. And the most significant part about this feature that's gone live is that it was

developed by Cyndi Chin, who did it as part of this year's Swift Mentorship Program.

I think we've talked about the fact that Cyndi was working on this on the podcast in previous

episodes, but it's nice to mention that it's now shipped. The Mentorship Program is finished for

this year. It's been an absolute pleasure working with Cyndi, just from start to finish, amazing

attitude, incredible coding skills, just a great experience start to finish. And the nice thing

about it is that this feature has now shipped on the site and is live for everyone to look at.

And one thing, just to talk about the Mentorship Program for a second before we dig into the

feature, which we do want to talk about:

it was really nice this year because in our initial call that we had to kind of define

the scope of what Cyndi was going to tackle for this project, she was very keen to tackle one

feature end to end. And so that meant, like, obviously we picked something together that

was either, well, it actually ended up being from the current list of issues, because this has been

on our minds for a little while. But we hadn't really designed the feature. And so

the Mentorship Program this year started with, well, let's have a think about what this package

score, how do we actually represent it? Where do we put this information? How do we make people,

because package score has been a little bit of a sensitive subject in the past. There have been a

few conversations around whether, you know, you even should be ranking packages by

some kind of internal score. And so when approaching the feature of exposing this data,

we definitely talked quite a lot about how to make it obvious what was happening, but also

being aware that some people might actually disagree with the whole idea about package

scores. So we did some design, first of all, and thought about how the feature would work.

And then Cyndi kind of went ahead and made several modifications. So not only exposing

the package score, but also adding two new factors to the package score. So two new metrics

that packages will be scored on, or are currently now scored on. So adding two of those, and then

visualizing that package score on the maintainers page. Finally, obviously, including testing and

all the rest of it, finally designing the front end side of it. So she wanted to do some HTML and

CSS work. So that was representing the score on the page. And then finally, and I really do mean

end to end when I say it, she wanted to write the blog post that launched the feature. So that blog

post went live last week, written by Cyndi, and the feature went live as well. So it's been an

absolute pleasure working with her this year. And I think we've actually got a great feature out of

the end of it. We absolutely did. And I'd add, finally, finally, she also got featured in one

of the most popular, if not the most popular, iOS newsletters last week, didn't she? It was really,

how on earth did that happen? Yeah, that's amazing.

Well deserved, because I think that's a great feature. And I think I love the whole approach of

wanting to do it end to end and doing it in that time frame. And not as your main thing, she has

other work to do, right? So this is a side project. So it's quite amazing. Really nice, really well done.

Yeah, she works at Mozilla, working on Firefox. I mean, she has a full time job. And so yeah,

this is 12 weeks of, you know, a couple of hours a week to get this done, which is remarkable work.

Really nice. I loved seeing that come together. And really just across the whole spectrum with

everything. Really, really nice. And it's been well received. I actually got some questions

yesterday afternoon, I believe it was, about the feature, which shows how interested people are in

the score and wanting to know, you know, how it's computed now that they can see it. The next thing

that's happening is, well, why does this package have this score? Right. And so this is what we

expected. Yeah. And I think these are fair questions. Yeah. We mentioned this in the

blog post, actually, but this score has technically, and I'm going to lean on the

word technically very heavily here, it's technically always been open because the score has actually

been isolated in one class in our code since the beginning. We launched day one, this score

was there in some form. And so technically, that score has always been transparent.

It's just that it's unreasonable for us to expect people to go and look at the source code.

And that's what this really does. But we knew that this issue would come up as soon as we ship it.

It's kind of like in Hitchhiker's Guide to the Galaxy, where the demolition of Earth

was published ahead of time on Alpha Centauri or somewhere.

People could have protested, but, you know, they just didn't get around to it.

Yeah, I mean, and it's fair to ask these questions. And it's great to have them,

just keep them coming. If there's anything you want to ask us about, let us know. I love that

I got this question ahead of time and that we can address it now, rather than me typing some

back and forth answers. This is much nicer to be sort of broadcasting it to more people,

which probably, you know, some of you might have the same question. So one question was obviously,

and this is an example of one metric, what's the rationale to give zero points for eight

dependencies? So this package has eight dependencies and in the listing, it then has

no points for the number of dependencies because, you know, I think five or fewer get some

points and two or fewer get some more points. And, you know, that's obviously,

if you have eight dependencies, you know, you get no points. If you had fewer, you'd get some

points. And the question is, well, I can't really shed the dependencies. How did we arrive at those

points? I should rather say and direct the questions towards you, because I think, I know

you came up with these figures. What was your rationale around those?

Yes, all of the scoring has been, well, you can lay all the blame at my door here.

For praise you can include me; the blame goes to Dave.

Right. Okay, sure. So yes. So before I answer this question, I would be much more comfortable

with this metric if we were able to exclude test only dependencies, but we cannot. And therefore,

we do the best with what we can, what we have, sorry. I think it's valid. So generally,

the way that I think about score is that it should be all positive things. There shouldn't be really,

there should be no penalties included in the score. And in fact, the way that it works is

that everything is just an addition to the score. It's just that the dependencies

calculation happens to be where it's less than something instead of greater than something. So it's kind

of inverting the positivity, really. But generally, it's always adding on something

for doing something that makes your package better. And a very important part of

this is what I always hope will happen with the score. We live in an imperfect world, and so

we can't be guaranteed, you know, to be absolutely correct with this. But what I hope is

that if anyone tries to game their score and increase it artificially by looking at the

rules and doing things to kind of increase their score, that actually that just makes a better

package because that's ultimately what we're trying to do here is put higher quality and

better maintained packages higher in the search results. That is the intention of this score.

So things like having many releases, that's one of the factors that we score packages on.

If a package has been around for a long time, it's more likely to have more releases.

And therefore, we give more points, not based entirely on how many releases,

but there are some thresholds. So I forget what the numbers are. But if you go over

two releases, you get a couple of points. And if you go over five releases, you get some more,

and you go over 50, you get some more or something like that.

So in terms of those thresholds, specifically for the dependencies, like five and two, did you look at the distribution?

Or did you just sort of eyeball the numbers? Did you just come up with a number? How did you come

up with the numbers? It's a mix, right? Yeah. So I certainly do look at the data when I'm coming up

with these numbers. And don't forget, a lot of these numbers were decided on three years ago.

So I forget exactly how I came up with some of them. But certainly, one approach that I definitely

use is that, like, I don't even know whether those numbers I gave were correct, but I would

imagine they would be something like that. Because what I tend to do is, you'll get a few points for

just kind of doing anything towards that metric, a few more when you go over, you know, a medium

amount. And then the more releases, like if you have 1000 releases, you're not going to get any

more points than if you had 50 releases or something like that. Generally, it's diminishing

returns as we go up. So certainly, if you look at the metric that we give points for,

the number of stars: you get zero points if you have fewer than 25 stars on a

repository. From 25 to 100 stars, you get 10 points, which is actually quite a large amount of

points. 100 to 500, you get 20. 500 to 5000, you get 30. 5000 to 10,000, you get 35. And then if

you have more than 10,000, you get 37. So you're getting two points for potentially 20,000 stars

at the end there. So certainly, it definitely takes into account diminishing returns. Generally,

I like to also factor in how important the metric is in terms of what the maximum score for it is.

So it's not like every metric has the same maximum score. They do have different maximum scores.
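As a rough illustration, the star tiers Dave just listed could be sketched like this in Swift. These are the numbers as quoted from memory in the episode, so treat them as illustrative; the authoritative values live in the Swift Package Index source.

```swift
// Sketch of a tiered, diminishing-returns metric using the star
// thresholds quoted in the conversation (illustrative values only).
func starScore(stars: Int) -> Int {
    switch stars {
    case ..<25:          return 0   // below 25 stars: no points
    case 25..<100:       return 10
    case 100..<500:      return 20
    case 500..<5_000:    return 30
    case 5_000..<10_000: return 35
    default:             return 37  // beyond 10,000 stars: only 2 more
    }
}
```

Note the diminishing returns: going from 10,000 to 20,000 stars adds nothing beyond the 37-point cap.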

And in the case of dependencies, which is what the original question was around, we can finally

get back to the original question. The total number of points you can get for having a low

number of dependencies is five compared to 10 for having 25 stars, which 25 stars is a relatively

low number of stars. So just to put that into context, and this was deliberate, because this

metric is, first of all, not ideal because it includes test-only dependencies. And secondly,

I certainly do think there's a case to be made that a package with a zero or very low number

of dependencies is a mark of a package that I would want to consider using. So I think having

that metric in there is valid. But the fact that that metric isn't quite perfect in how we are able

to calculate it right now is why it has a lesser score. And when we're able to do test only

dependencies or exclude test only dependencies, we may also increase the total, the kind of the

weighting of that score. So there's a follow up question. But that's an interesting question.

Would you agree? Well, actually, before we move on to the follow up, would you agree

with my defense of that? Yeah, yeah, yeah, definitely. I mean, plus, I think people

sort of focus on the score quite a bit, I would say perhaps a bit too much, because

what you need to bear in mind, this is a tiebreaker in search results. If you search for

something, you have a name match or, you know, the term, your search terms are good. Right.

Your readme has the terms, you'll show up in the list. And this just, you know, gives you a bit

of a higher ranking in that list. It's, you know, we don't have recommendations or discovery or that

sort of stuff where that would otherwise appear and have an impact. So the best thing to actually

do is have a package that has proper keywords and stuff and, you know, a readme that explains what

it is and then make sure that terms that people would search for to find it are referenced there

in these places. And then you show up, and we show 20 results on the first page. And,

you know, with given the number of metrics we have in this score, maybe in one of them,

it won't be ideal. But, you know, if it's a good package and has a lot of the other metrics right,

you know, having a few points fewer there isn't going to destroy your discovery on that page,

I think. So I think that's something to bear in mind there.

But Cyndi and I had an interesting conversation around one of the new metrics that Cyndi added, which was: does the repository have a readme

file? Because that certainly is a mark of a, you know, a package which I would want to look at. If

a package doesn't have a readme file, I think it should certainly have less points than one that

does. And of course, most packages do. But the scoring on that check of does the package have

a readme, not looking at the content of the readme, not trying to evaluate how long the readme is or

how much information there is in there, which we could and may also do in the future. But just

does it have a readme? We scored that at 15 points, which is the exact same score as does the

package have documentation. So initially, my gut feeling when we were talking about having it at 15

points was well, documentation should be more important than a readme because clearly someone's

put some time and effort into documenting their package. But actually, I think I feel really

comfortable in the end with what we decided on to have them at the same score. Because again,

the documentation, we're not giving any kind of measurement of documentation quality with this.

This is just a Boolean. Does it have documentation? Does it have a readme? And actually,

I think those, the reason we actually decided on 15 points for both is that some people use

their readme as documentation. And so they are almost fulfilling the same thing. Now, if you have

both a readme and documentation, then you're going to get double points, which I think is also fair

because it's more work that you've done towards creating a better package. But that's why

initially my feeling was to say the readme should be worth fewer points than documentation.

But the more we thought about it, the less that made sense.
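As a sketch, the two boolean checks discussed here, each worth 15 points as described, amount to something like this (a simplification of the real scoring code):

```swift
// Sketch of the two boolean metrics discussed: a README and
// documentation are each worth 15 points, and a package with
// both gets both awards.
func docScore(hasReadme: Bool, hasDocs: Bool) -> Int {
    (hasReadme ? 15 : 0) + (hasDocs ? 15 : 0)
}
```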

Yeah. Yeah. I think also the readme often

fulfills the role of sort of a getting started page in the documentation. Sometimes it's even,

you know, it's a duplicate of that, which is fine. I think it's great to have it in both places. But

the readme sort of serves that purpose more of explaining what it is about and what the entry

points are into the package. Whereas documentation often is at first, especially at the first pass of

the documentation is just API reference documentation, which is useful, but not

great if you want to understand how to use a package, how to get started with it.

You said we had a follow up question.

We do. Yes. And that's quite specific to that package. But I think there's a couple in that

area. And the question is, it's tough for a Swift on server package to achieve this,

because NIO, a very common foundational server-side Swift library, already has four dependencies.

So by using NIO, which you almost certainly will be when you're writing or publishing a

server-side Swift package, you start off already past the two that gives you

maximum points. You're very close to the five that gives you the next step. And you're almost

out of the scoring range with that. And those are real dependencies, not test only dependencies.

So they would not be fixed if we, at some point, detect test-only dependencies. So this is a real

problem. Yeah. Well, is it though? Because, well, I think

that actually comes back to the same thing that I said earlier, which is the total number of points

that you are gaining or losing in this case here is only five points. In the ranking,

you know, it's a third of a readme file. Yeah. But I think there's also another reason why

this isn't that important because these scores are relative. We're using them in ranking search

results. And if you're searching for a server-side Swift package, all these packages

will have NIO as a dependency, pretty much, because it's such a foundational library. So I'm

pretty sure if you did a survey across the Swift Package Index of server-side libraries,

none of them will have points for dependencies because they're all up in the

five-to-ten range, because, you know, if you use Vapor as a dependency,

you're done. You're way past any of the limits there. But that's true for all your,

in quotes, competitors in the package ranking space. They all have,

you know, NIO or Vapor or, you know, a package like that. So you're still being compared on

equal terms and, you know, compared to an iOS package, yes, you'll have a lower score,

but that doesn't really matter because someone shopping for a server side library won't be

looking at iOS libraries. They're not going to go, well, actually the score of this is less, so I'm

going to use SwiftUI instead of server-side Swift. Yeah, yeah, exactly. I mean, you're

looking for something different. Yeah. And this goes back to, you know, the most important thing

is going to be your search terms. Have something that is sure to be referenced in your readme

keywords and stuff. You know, if it's an important thing that describes your package well,

has very strong association with your package, put that in somewhere and you'll land at the

top or near the top, certainly on the first page. And that's the most important thing, really.

Were there any more questions? No, that's the two that I got. We'll see if we have some follow-ups.

So the last thing that we should mention here is, because I don't think we actually said how

to find this feature. So the first thing you should do actually is read Cyndi's blog post

on the Swift Package Index blog, which you can get to just by hitting the blog link on the home

page of the package index. And it does explain in that post how to find the feature, but I'll also

just quickly say it here. So if you go to any package page on the right-hand side, underneath

the versions, the current versions of the package, there's a little bit of small text that says,

are you this package's maintainer or something like that. And there's a link there, which goes

through to a specific page for package maintainers, which has information like how you can add those

badges to your readme to display your platform and Swift version compatibility, how to add

documentation if you have documentation in your package. And then at the bottom of that page is

now this new package score page. So that's where it is. And the other thing that I want to mention

is that we are absolutely not saying that this is a complete and total representation of package

score. In fact, I would describe this as the bare minimum of a package score. For example, we're

not doing any analysis of how documented or how good a readme file is or anything more than the

total number of dependencies or anything like that. All of these, there's a metric for does

the package include tests and the trigger for that metric is does the package include a test target,

which is a fairly low bar for passing that metric. So what we're saying here is that this is what we

have currently. And we are actively listening to ideas for amendments and additions to this. And

in fact, at the bottom of that page, just underneath the package score, there is a link to

an always-open discussion thread on our repository where this is already being discussed. And so

just a declaration that we don't believe this is a completed package score feature.

This is always going to be a work in progress and that we are listening if you have ideas and

opinions on new metrics, how we could make it better. Exactly. A living score. And it always

will be. Yeah. Yeah. Do we have anything else or should we do some packages? I think that's it. In

terms of news, we should do some package recommendations. I can start us off this week

with a package called DirectJSON by Mertol Kasanan. And this is a package that makes use

of the Swift function dynamic member lookup. And what it does is it allows you to access on

extend the string. And if that string includes JSON content, like if the contents of the string

is a JSON object, it allows you just to basically dot into the properties and navigate through

the JSON in that way. So, for example, the example from the readme here is a string

called the cars of 2023, which is a potential list of cars or something like that. It doesn't

actually have the data in the readme. And so it says theCarsOf2023.json, which checks

whether the string is valid JSON, and then .ev.popular[0].brand.

So you're effectively saying no Codable, no JSON parsing, just access properties inside the JSON

as if they were already parsed. And then the dynamic member lookup will turn those member

accesses into lookups inside the JSON and, there we go, return the values. And I think this is an

interesting package, but it's probably not one that I would suggest using in actual production.
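To illustrate the general technique being described (a hypothetical sketch, not DirectJSON's actual implementation or API), @dynamicMemberLookup plus JSONSerialization is enough to dot into untyped JSON:

```swift
import Foundation

// Hypothetical sketch of the dynamic-member-lookup approach: dot into
// JSON keys as if they were properties, no Codable types required.
@dynamicMemberLookup
struct JSONValue {
    private let value: Any?

    init(parsing string: String) {
        value = string.data(using: .utf8).flatMap {
            try? JSONSerialization.jsonObject(with: $0)
        }
    }

    private init(_ value: Any?) { self.value = value }

    // Dot access drills into dictionary keys.
    subscript(dynamicMember key: String) -> JSONValue {
        JSONValue((value as? [String: Any])?[key])
    }

    // Integer subscript drills into arrays.
    subscript(index: Int) -> JSONValue {
        guard let array = value as? [Any], array.indices.contains(index) else {
            return JSONValue(nil)
        }
        return JSONValue(array[index])
    }

    var string: String? { value as? String }
}

// Invented sample data, loosely echoing the readme's "cars" example.
let json = JSONValue(parsing: #"{"ev": {"popular": [{"brand": "ExampleBrand"}]}}"#)
let brand = json.ev.popular[0].brand.string
```

A missing key or out-of-range index just yields a wrapper around nil rather than crashing, which suits the quick-exploration use case discussed next.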

Here we go again.

Yeah, this is my thing, right? Here's a

package. Don't use it. Well, let's see if you agree, right? Let me say what I'm going to say

about it. And then again, we'll see if you agree. But I think the reason I wanted to highlight this

package is not because it would go into a production application. But actually, if you

want to write some Swift code, and just very quickly look at the contents of some JSON,

you actually end up having to do quite a bit of work with codable. And yes, you could decode it

into a dictionary and do it that way. And that's fine. But if you want it in a typed way, then

you're going to be doing quite a lot of typing to get that. If you just want to very quickly

just see what's there, experiment with something before it goes in properly.

And so I think that's where this package potentially lives is for experimentation.

I think that the downside of this is potentially that if you did ship something with this,

and your JSON ever changes, then that's going to be harder to work with than it would be with

something like codable. It reminds me of Ruby, actually, because the first time I came across

this kind of approach was with Ruby, which has a method called method missing, which is the same

as dynamic member, whatever it's called, dynamic member lookup, where it turns the method name

into a parameter on method missing. And that's the first time I came across this approach.

So would you agree? Would you agree with my assessment of this package?

Mostly, yes. Although the first thing I thought of was also Ruby, which is a good point,

because I thought this is going to be nice for scripting, right? And I have been using Swift

more and more for scripting. And I do love codable there, because it's very easy even to drill into

structs, because you don't need to spell them out completely, right? If you drill into a nested

JSON, all you need is the container types. You don't need to spell out all the properties you're

not interested in, right? So I wouldn't necessarily agree that you have to do that much typing

to unpack a JSON into codable. However, if you don't know what the structure is,

or the structure is perhaps dynamic, then you're out of luck with codable. You have to use JSON

serialization. And I guess that's what this is using under the hood. And then this certainly

is a much nicer API to unpack something that is dynamic. And it certainly is. It doesn't need any

typing, right? You don't need any struct or declaration to decode. You can just drill in.

I think that's really nice for that kind of purpose.
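What Sven describes, declaring only the fields you care about when drilling into nested JSON, looks something like this (the JSON and type names here are invented for illustration):

```swift
import Foundation

// Codable ignores any JSON keys you don't declare, so the container
// structs can stay minimal when drilling into nested data.
struct Response: Decodable {
    struct User: Decodable {
        let name: String  // "followers" and other keys are simply skipped
    }
    let user: User
}

let data = Data(#"{"user": {"name": "Sven", "followers": 42}}"#.utf8)
let response = try! JSONDecoder().decode(Response.self, from: data)
```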

I actually came across a situation, it was with codable on a YAML file rather than a JSON file.

But I had a situation this week where the same key in the YAML file could be either a string

or an array of strings. So I had to write a custom decoder for that, which first attempted

to decode an array. And then if not, then it attempted to decode a string.
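That custom decoder, sketched here with JSON rather than YAML to keep the example self-contained (the key and type names are made up), tries the array first and falls back to a single string:

```swift
import Foundation

// A value that decodes from either a single string or an array of strings.
struct StringOrList: Decodable {
    let values: [String]

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        if let array = try? container.decode([String].self) {
            values = array                          // it was an array
        } else {
            values = [try container.decode(String.self)]  // fall back to a string
        }
    }
}

struct Doc: Decodable { let key: StringOrList }

let one = try! JSONDecoder().decode(Doc.self, from: Data(#"{"key": "solo"}"#.utf8))
let many = try! JSONDecoder().decode(Doc.self, from: Data(#"{"key": ["a", "b"]}"#.utf8))
```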

Yeah. On that note, one of the features I love most about recent Xcodes, I think it might be 15,

is that you can generate a codable implementation. Because writing your own codable

implementation always, I have like snippets lying around to look it up. Or I used to, because now

it's so much easier because you can just generate it and then modify it to do what you need it to do.

That's such a great feature.

That would have been useful. I didn't know about that. Where is that feature? Is it in refactor?

Yeah, it's in the refactoring stuff. You can generate a codable implementation.

You learn something new every day. Every day's a school day.

So there you go.

My first pick is a really interesting package that I came across a couple of weeks ago,

and it's called Swift Summarize by Stef Kors. Did you know about

the Core Services framework's Search Kit, and in particular in there, the SKSummary type?

Only because I also read about this.

I had no idea this existed. So what this does, effectively, it gives you like a local version of

one of the aspects of ChatGPT. You can, with this package, give it a string,

and then offline and on device have it summarize that string you put in.
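For reference, a minimal wrapper over that Search Kit API looks roughly like this. This is a sketch rather than Swift Summarize's actual code, and it's macOS only, since Search Kit lives in Core Services:

```swift
import CoreServices  // Search Kit's SKSummary API; macOS only

// Hand Search Kit some text, get back an N-sentence summary,
// entirely offline and on device.
func summarize(_ text: String, sentences: Int) -> String? {
    guard let summary = SKSummaryCreateWithString(text as CFString)?
        .takeRetainedValue() else { return nil }
    guard let result = SKSummaryCopySentenceSummaryString(summary, sentences)?
        .takeRetainedValue() else { return nil }
    return result as String
}
```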

I tried this with a couple of paragraphs from our blog post about the Apple announcement

when we had our sponsorship. And the result I got was, so these are like two, four, six

very short paragraphs. I don't know how many characters that is, but it's the meat of our

blog posts. And the summary is, Apple support and the community support we already enjoy via our

GitHub sponsorships have set us on a path where the Swift package index can be a project that

fully supports our work financially. And I think we can just have much shorter blog posts in the

future because this is the meat of it. I did not realize that this was a framework that you can use

and that it's on device, it runs offline. So that's a really interesting framework that I

discovered via this package. And it's great to have this as a little Swift package. I could just

stick it in a playground with our Try in a Playground feature. And that was a great way

to play around with this and see how it does. Yeah. So I also tried this actually, because

it was going to be one of my package recommendations today, but it didn't make

it into the final cut. I did try this and you're right that having it not need a network connection,

not need API calls that cost money is a huge advantage to it. But in the examples that I

tried, because we've been doing some work with summarization of package information,

using ChatGPT. And so this is a subject that I've spent a bit of time looking at. And

in the testing I did, it's impressive what it does, but it's not a patch on what you get out

of GPT for the same input text. So it is good. And it's certainly the fact that it is an on device

calculation, an on device summarization tool is a huge difference. And so they shouldn't really

be compared, but we do live in a world where ChatGPT exists. Another thing that I really liked,

I'm not sure if I mentioned that already, is that the results seem to be stable. So I've run this,

you know, multiple times, right. I left some time in between in case there's some caching going on,

but you do, apparently you are getting the same result back, which is a nice feature,

actually, because the way we've been using it...

Which is a disadvantage of GPT.

Yeah, the way we've been using that is sort of, you know, you run it again,

you get something different, you sort of have to pick a result at some point.

Yeah. Although, I don't know whether you listened to or watched the recent keynote from

OpenAI. But they're talking about now in the current latest version of the API, you can also

specify a consistent random seed to get the same output from the same input again.

Okay, that's nice.

I don't know whether that's shipped yet, but that was something they talked about.

Right. So that was Swift Summarize by Stef Kors.

So my next package is MemberwiseInit by Galen O'Hanlon. And this is a Swift 5.9 macro package

for automatic memberwise init statements. So if you create a struct, you want to generate a

memberwise init for it, rather than having to type out that or use any kind of automation to

create it. I know there are lots of ways to automate the creation of these things. This

takes it and puts it in a macro. So, for example, if you're using it on a struct,

above your struct you add the attribute MemberwiseInit. And then in this case,

the example is .public, and it will create you a public init that takes every property and gives

you an initializer that assigns them all. And so it's a nice little time saver for something that you end up

doing quite a lot in Swift. But again, I must stop recommending packages that I then recommend not

using. It is definitely becoming a habit of mine. I certainly, this package made me think more about

macros than I have done in the past. And it's certainly a nice little time saver to not have

to generate and not have to write a memberwise init for your struct. But at the same time,

and this was always the problem with C macros: it hides what's happening. And so for some things,

where there's a lot of code that you might want to generate or something complex that you want to do,

maybe that, although, I mean, do you want to hide complexity? I don't know. I think there's,

I think there's a genuine question about macros that this made me think about, which is,

are they actually a good idea? And I'm not saying they're not a good idea, before anyone

thinks that's what I'm saying. But it certainly makes me think about, like, how much

would I import a package that has a macro to generate a memberwise init when the amount of

typing in a memberwise init is actually not that much really. Yes, it's repetitive, and it's not

something we really want to have to do. But at least once you've typed it, it's there and you

can see it. And I know that Xcode can expand the macro and show you the code that it generates. And

there are definite pros and cons to this whole approach, but it did make me wonder where

that line lies in terms of would I add a macro to do this job or that job or some other job? And

I think this is an interesting package. I think I'm sure it will save some people some time,

but I'm not sure that I would bring it in. Would you? I think I would. And here's why,

because we actually have this problem in a couple of places. In Vapor models, most of the properties, or many of them, are optional just due to the way it's often set up. And you do need to specify initializers for all of these, and they often default to nil. And we had a couple of cases where this bit us. You can very easily generate the

initial initializer, right? Again, under the refactoring tools, you can right-click

on your name. That one I do know about. Yeah. And then it generates that and it does that,

but it only does that the first time, right? The next time you add a new member or remove one,

you have to remember to update your initializers. And that is something you can easily forget: if you add an optional property, then you don't need to initialize it or spell it

out in that list. And there can be drift in what members you have and which ones you actually

initialize. And I guess the big advantage of something like this is that it would track

and always have a fully specified initializer. There's no missing of members and making sure

they're all assigned. And the way this bit us was, if I recall correctly: we ended up, you know, saving stuff to the database, except it wasn't actually saved, because we never updated the initializer where that field was passed through and actually written to the database. And that's not something you discover, like, weeks after; it's something you discover at an inopportune time, like an hour later. It's not a huge problem, but it's a bit of a nuisance. And

it's a lot of silly updating of initializers that isn't really interesting work, right? I love,

yeah, taking stuff out of the picture that is just busy work. And you're right,

hiding complexity is a problem, but I don't think this is complexity that is bad. Like this is just

noise, really, because everyone will understand what this initializer does and why it's there.

You know, sometimes you need it, but often you don't even need it, right? If you are internal to a package, you don't need to write the initializer. And it's still there, right? Swift

still generates that internal one that you never even see, but you can initialize your type fully,

you know, with all the properties. Why is that? Well, it's because it's all internal and it's not

exposed. You don't see it, but it's still there. How is this different? All it does, it gives you

a way of having that same automated way of generating it and you can make it public and

then it works across module boundaries and so on. So in that sense, I think it just, you know,

elevates that to a different access level. And I think in that respect, it's absolutely fine as a

complexity. In fact, I think the only complexity is that you need to import that as a package.
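To make the trade-off concrete, here's the kind of hand-written boilerplate the macro's attribute replaces; the struct and property names are purely illustrative, not from the package's documentation:

```swift
// A hand-rolled public memberwise init: exactly the busy work the
// macro generates for you. Every property must be listed and
// assigned, and the list must be kept in sync as properties change.
public struct Package {
    public var name: String
    public var summary: String?  // optionals typically default to nil

    public init(name: String, summary: String? = nil) {
        self.name = name
        self.summary = summary
    }
}

// Optional properties can be omitted at the call site.
let package = Package(name: "Example")
```

With the macro, the whole init disappears and an attribute along the lines of @MemberwiseInit(.public) above the struct generates an equivalent one (the exact attribute spelling here is an assumption).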

And I know that there's been a pitch to have something like that as a language extension. And

I don't recall where that discussion ended, but I'm pretty sure it came up in the discussion that it should just be a macro. And I'm very sure that there will be a set of macros that eventually end up either in the standard library or in, you know, an Apple Foundation package or something, where these things will become commonplace and in use just like any other annotation that we use right now, you know, like Observable and whatnot. And it'll be one of those

where it offers that and we will use it and forget about it. All right. Yeah. Yeah. I mean, all very

good points. And I should just say as well, that there are lots of options with this macro for you

to customize various bits that you talked about there. So you can specify that a property should

be escaping or public, or give it a different label, and, you know, combine those things.

You could have a customized label and the fact that it's public and things like that. So I should

say that there's good support for creating the memberwise initializer that you would like to

create, rather than just the same one for every type. So, yeah, and that's the reason I didn't hesitate to talk about it here: we call these recommendations, and they are, of sorts, but actually, as with any real-world situation where you're considering adding a dependency, it's a decision that comes down to trade-offs in the real world. And so when we talk about them, we should talk about those trade-offs

too. Yeah. I mean, mine are recommendations. Yours are mentions. That's not how I want them

to be. I just think, well, you got to try harder, Dave. You got to try harder. Yeah, I do. You're

right. I do have to try harder. That's right. Because you wait till you hear my last one.

Oh God, here we go. All right. I'm bracing myself. All right. Let me, let me squeeze in a

recommendation before we get to that one then. And this next one is called Typhoon by Nikita

Vasilev. And that's a really nice retry library. And I came across this because I actually had

need of something like this recently. I have my own version of it, which is a little module that

I'm using. And if I had discovered this before, I might've actually jumped on it. It looks really

nice. So what this does, it gives you an API where you can pass in an async throws closure

and it'll retry it, as the description says. I'm not sure how the name Typhoon ties to retrying, but that's what it's called. The nice thing is

you can set retry policies. So there's a couple of things that are listed. I'm not sure how

extensive it is, but obviously you can specify the number of retries and whether they are just, you know, at a constant interval. And it also supports exponential backoff, where subsequent retries will be at a longer period: it'll do one second, two seconds, four seconds, eight seconds. And you typically do that to avoid hammering a service. You know, for instance,

if you're running on a network, you don't want to constantly hammer the service at the same time.

I'm not sure if it has random jitter, but it would be a nice extension if it doesn't.
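As a sketch of the general technique, not Typhoon's actual API (all names here are illustrative), a retry helper with constant and exponential policies might look like this:

```swift
import Foundation

// Illustrative retry policy: constant delay or exponential backoff.
enum RetryPolicy {
    case constant(seconds: Double)
    case exponentialBackoff(initial: Double)

    // Delay in seconds before the given 1-based retry attempt.
    func delay(forAttempt attempt: Int) -> Double {
        switch self {
        case .constant(let seconds):
            return seconds
        case .exponentialBackoff(let initial):
            // 1s, 2s, 4s, 8s… when `initial` is 1.0
            return initial * pow(2.0, Double(attempt - 1))
        }
    }
}

// Run an async throwing operation, retrying on failure and sleeping
// between attempts according to the policy.
func retry<T>(
    attempts: Int,
    policy: RetryPolicy,
    operation: () async throws -> T
) async throws -> T {
    precondition(attempts >= 1)
    var lastError: Error?
    for attempt in 1...attempts {
        do {
            return try await operation()
        } catch {
            lastError = error
            if attempt < attempts {
                let seconds = policy.delay(forAttempt: attempt)
                try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            }
        }
    }
    throw lastError!
}
```

Random jitter would just add a small random offset inside `delay(forAttempt:)` before sleeping.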

Looked really nice. Also something you can test in a playground to get a feel for it.

One not-so-nice thing I found is it didn't actually work in our Try in a Playground feature, because it was referencing visionOS in the platforms section, and Arena, the underlying tool, didn't manage to parse the manifest. But by the time you listen to this, or actually already, it's fixed. So we have shipped an update. I saw this pull request go through. Yeah. I'm so keen

to try these packages that whenever they don't work, I get really annoyed and I need to fix it

straight away if I can. So that's Typhoon, a retry package by Nikita Vasilev. So my third package is,

I love this package and I love what it says about the Swift package ecosystem, which is that we have everything from something that will generate you a memberwise initializer for a struct, which would be valid in any Swift program that you could write, pretty much, to packages like this one, which is called SwiftZPL by, well, I don't think it's a name, I think it's an abbreviation, but the abbreviation is S-C-C-H-N. This package is a Zebra

programming language enhancement for Swift. So it allows you to write this Zebra programming

language faster, easier, and safer is the description of it. Now, do you know what ZPL

or Zebra programming language is? I have no idea. I saw the package and I just briefly looked at it,

but it didn't explain it in the readme, so I didn't have the time to drill in. I just saw some barcode stuff further down. Well, I drilled in. And so Zebra Programming Language is a command

language used by a lot of printers, and it is a way to tell printers what to do, right?

And so this is a way that you can write code to control a Zebra, a ZPL compatible printer.

So you can say, change the default font to this and have it use a height of 50. And then you can

have a field and put some data in that field and you can define what to tell these printers to do.
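Illustratively, and not SwiftZPL's actual API, a Swift DSL over these raw ZPL commands could be sketched with a result builder like this; the commands shown (^XA/^XZ, ^CF, ^FO/^FD/^FS) are standard ZPL:

```swift
// Illustrative only: a tiny result builder that assembles raw ZPL
// commands into a label, in the spirit of what SwiftZPL provides.
@resultBuilder
struct ZPLBuilder {
    static func buildBlock(_ commands: String...) -> String {
        commands.joined(separator: "\n")
    }
}

// ^XA and ^XZ begin and end a ZPL label format.
func zplLabel(@ZPLBuilder _ content: () -> String) -> String {
    "^XA\n\(content())\n^XZ"
}

let label = zplLabel {
    "^CFA,50"               // set default font A with height 50
    "^FO50,50^FDHello^FS"   // field at (50,50) with data "Hello"
}
```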

And I think, I mean, obviously this is an extremely niche package. This is not going to be

included in many people's projects because it does a very specific thing. But actually,

that's what I love about this package is that this is going to make somebody's day, right?

Somebody is going to have this task. They're going to go, I wonder if there's a package for it. They're going to search the Swift Package Index, and it's going to make somebody incredibly happy.

It's only going to make five people happy in its entire life, but they're going to be so happy.

And that's why I want to talk about it because I think a thriving and kind of

comprehensive package ecosystem includes stuff like this that can

take a ZPL compatible printer and generate a barcode in three lines of code.

Yeah. I mean, you can see this being super useful if you have that sort of printer and need to

output a barcode or a QR code. I mean, this looks great. There's even a Swift logo further down

that they printed. Nice. This is a recommendation. It's just a recommendation for a small number of

people. Yeah. Nice. And the Swift code is bang up to date. The ZPL, I would imagine, has been

around for a very long time, but the Swift code uses result builders to build up the syntax for the ZPL programming language. So you just open up some braces and you start putting commands in there, just like you would with SwiftUI. Wow. That's amazing. So 37 episodes in,

and we have a recommendation. That is fantastic.

Yes. I need to work harder. You're right.

My third pick is called ObfuscateMacro, and it's by our returning guest, p-x9. I'm just double-checking. No, there's still no further name. We had a package of theirs recommended before, or mentioned before, I'm not sure which. Yes, I recommended one. I'm sure it was a recommendation.

There we go. So this is obviously, as the name says, it's a macro package and it's a package to

obfuscate strings in your binary. So you might know if you embed something in your code,

like a static string, like a configuration variable that you don't fetch from a server and you embed it in the library because it, for instance, never changes. But it's still something you don't want to leak. For instance, say you have, I don't know, some key for symmetric signing, or a passcode, or something that can be found and printed. I think strings is the command you would typically use to see the text segments. I'm throwing around words here, I hope that makes sense. I've never tried it, but I know you can get at strings in a binary.

Strings definitely does that, yeah. And so what this does, it scrambles these and you can

just annotate your variable definition with an obfuscate macro, or rather, not annotate it: you use a macro expression to assign it to a variable, and then you can just let it do its thing. It picks an algorithm, I think, at random. You can also specify a specific algorithm that

you want to use, and there's quite a handful of options, like bit shifting, Base64, AES, that sort of thing. And then it just generates some data and obviously also embeds a way to

reverse it back into the string when you run your binary. And the result of that is there's no plain

rendering of your string in the binary. If someone looks, they won't see it that way. They'd have to

do more to find out what's going on. And my understanding is this isn't a foolproof way to

actually make that operation safe. As the name says, it obfuscates the string in your binary

and makes it harder for someone to reverse engineer it. It's still possible, but it probably

puts up a high enough barrier to deter most folks from poking around and doing stuff like pulling

out an API key or something for a service that you don't want them to be messing with.
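Conceptually, the macro's expansion works something like this plain XOR round trip (ObfuscateMacro's real algorithms and generated code differ; this is just to illustrate the idea):

```swift
// The "build-time" step: XOR each byte of the literal with a key,
// so the plain text never appears in the compiled binary and the
// `strings` tool won't find it.
let key: UInt8 = 0x2A
let obfuscated: [UInt8] = "my-api-key".utf8.map { $0 ^ key }

// The "run-time" step the macro also embeds: reverse the XOR to
// recover the original string only when it's actually needed.
func deobfuscate(_ bytes: [UInt8], key: UInt8) -> String {
    String(decoding: bytes.map { $0 ^ key }, as: UTF8.self)
}

let secret = deobfuscate(obfuscated, key: key)
```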

So yeah, there you go. ObfuscateMacro by p-x9. Fantastic. And so I think that brings us to

a close again. We're actually going to be back in three weeks this time and the next podcast will be

the last one for 2023. So one more before the end of the year. But yeah, we'll be back in three

weeks' time and we will speak to you all then. All right. See you in three weeks. Bye bye. Bye bye.