Cup o' Go

Creators & Guests

Host
Jonathan Hall
Freelance Gopher, Continuous Delivery consultant, and host of the Boldly Go YouTube channel.
Host
Shay Nehmad
Engineering Enablement Architect @ Orca
Editor
Filippo Valvassori Bolgè
Sound Designer / Audio Editor based in Milan

What is Cup o' Go?

Stay up to date with the Go community in about 15 minutes per week

Shay Nehmad:

This show is supported by our beautiful members on Patreon, which might include you. Stick around to the ad break to hear more and learn how to join. This is Cup o' Go for May 31, 2024. I'm back. Hi, Jonathan.

Shay Nehmad:

Keep up to date with the important happenings in the Go community in about 15 minutes per week. I'm Shay Nehmad

Jonathan Hall:

And I'm Jonathan Hall.

Shay Nehmad:

Miki, thanks a lot for covering for me last week. I appreciate it. It was a really good episode.

Jonathan Hall:

Alright. Let's pound through a couple of quick news items before we talk about something a little more interesting. Pre-release announcement, you know how this goes: 1.22.4 and 1.21.11 will be released on Tuesday. We'll tell you more next week. Secret

Shay Nehmad:

security fixes?

Jonathan Hall:

It is.

Shay Nehmad:

Nice.

Jonathan Hall:

Couple meetups to mention. Both are actually kind of... the two we have on the list here, unless you have others, Shay, are kind of related to me. So I'll be speaking at the Atlanta Go meetup on June 12, 7 PM in Atlanta. If you're in that area, come by. It's the same meetup that Bill Kennedy spoke at last month.

Jonathan Hall:

So I guess they're doing an international tour or something. I don't know. Bill Kennedy is, I guess, not international, but he's from a different state. The other one was just announced: there's a new Go meetup here in the Netherlands that's happening on June 20 in Tilburg. I would be so tempted to go.

Jonathan Hall:

It's an hour and 15 minute drive for me, but I'll be meeting my co-host, Shay, in person that night, I think. Or is it the night before?

Shay Nehmad:

So it's the night before.

Riccardo Pinosio:

It's the

Jonathan Hall:

night before. Maybe I can go. Maybe I can go. So many meetups. So, yeah, those are the 2 meetups.

Jonathan Hall:

No more details there, because that only affects a small percentage of our listeners.

Shay Nehmad:

Even though it affects a small percentage of listeners, I do wanna remind everyone that on June 19th, we're planning a Cup o' Go meetup. It's basically just a chance for Jonathan and me to meet, but we did wanna open it to other folks if they wanna join. Currently planning just beers and perhaps a short live recording, even if it's just for fun, going through the news items inside a bar instead of the comfort of our homes. Yeah. So June 19th in Amsterdam.

Shay Nehmad:

If you can make it, it would be really, really cool. Also, if you have an office to lend to us. So we have an update on a past proposal which we discussed. I think there's not a lot of drama about it. Right?

Shay Nehmad:

It's very... everybody agrees it's good. It's exactly what we added generics for, which is generic map functions in the standard library: inserting, reading, iterating over them, and whatever. And unlike our usual links here, which are links to issues, this is a link to Gerrit, which can only mean one thing.

Jonathan Hall:

It's been not only accepted, but it's actually been implemented.

Shay Nehmad:

Yeah. It it it that happens as well. Yeah. So I

Jonathan Hall:

have to say, I'm honestly looking forward to this new iterator. As much as I dislike the Seq and Seq2 naming, and I think it's kind of a complicated implementation, I'm looking forward to using it, which will help, I think, reduce the number of things that you shouldn't use Go for,

Shay Nehmad:

which

Jonathan Hall:

I think might be an interesting, topic. Maybe somebody should bring it up on Reddit.

Shay Nehmad:

Yeah. So, following a similar thread in Rust, people are like, oh, I really like Go, but what isn't it a good fit for? I think this is actually super important knowledge, especially if you're more junior-ish, like if you're in the beginning of your journey and you fell in love with Go. It's really important to know where not to use it so you can seriously bring it up as a trade-off at work, or wherever, you know, for yourself, when you need to write a piece of software. I've really come to the realization that there isn't a one-language-beats-all. It's a very childish discussion to be like, what's the best language?

Shay Nehmad:

It does matter personally what languages you like using. Like, I like using Go, and I've also invested a lot of time into it. So I have, like, sunk cost into it, which at this point isn't a fallacy. I can write Go a lot faster than other languages, but it's important to know where it's not a good fit. Right?

Shay Nehmad:

If you can do something in a super short two-line shell script and you're never gonna change it, maybe it's fine to just do it in a shell script, not like, alright, let's open Cobra, Viper, os, whatever. And there's a thread on Reddit here by AccuratePeak4856, great names as usual on Reddit, with some, you know, genuine answers that I like. The first one is highlighting that since Go is garbage collected, hard real-time processing is not a great fit. Mhmm. I tried to find the best application that wouldn't be a good fit, if that makes sense, like the best worst application.

Shay Nehmad:

Some people mentioned audio. Yeah. Like, you wouldn't write effects for, you know, virtual guitar amplification software like Guitar Rig in Go. But when I hear hard real-time processes, I also imagine things like flight computers and things like that. Right?

Shay Nehmad:

Trains.

Jonathan Hall:

I suppose. Yeah. I don't know. I don't work in that space. I mean, my my feeling is that audio in computer terms is pretty slow.

Jonathan Hall:

I mean, 44 kilohertz or whatever isn't that fast. We have embedded chips that run at like 2 megahertz that can handle that sort of stuff. So it doesn't strike me as an obvious problem space, but maybe it is. I don't know. I don't work in that space.

Shay Nehmad:

So I tried implementing guitar FX, because... this is sad, but I had the free version of Guitar Rig, which only came with a few effects, when I was 18 or whatever. And I tried to develop my own effect and add it as a VST plugin into the Guitar Rig software, because I couldn't buy the effects, because I didn't have a credit card.

Jonathan Hall:

So, like, I

Shay Nehmad:

could afford it. I just didn't know how to go through the process of buying stuff on the Internet. And it's very difficult: even if the actual signal rate is very low, you have to be pretty good to write stuff that doesn't immediately glitch. Yeah. At least that's my experience.

Jonathan Hall:

So one of the takes that I could resonate with just says, simply, Excel macros should not be written in Go. And I agree with that, because Excel macros just shouldn't be written, period.

Shay Nehmad:

Spreadsheets are running the world, man. Like it or hate it. One thing I saw that was, you know, brought up here on the show with good friend Andy Williams: maybe GUIs. A small tool. Maybe this is someone... it's a potential convert to Fyne.

Shay Nehmad:

I call them a pre-Fyne developer.

Jonathan Hall:

Mhmm.

Shay Nehmad:

They say maybe GUIs, a small tool maybe, but a serious project? I don't think I've seen a good-looking GUI app in Go. So, yeah, I tend to agree and disagree. I think it's possible: if you really care about having your application written in Go, you could make it beautiful with Fyne. But if you just wanna get a GUI out the door, unfortunately, the best way to do it today is just write a web app and ship Electron.

Shay Nehmad:

Which sucks for the users. Right? Because they burn, like, half a gig of memory for, you know, you showing your to-do app or whatever. Mhmm. But it's just the most cross-platform whatever way, and you can rewrite it in Go once you have a million users.

Shay Nehmad:

Right? But I don't know. Maybe I'm thinking about this backwards, and actually starting with Go makes more sense, because it'll keep the UI very simple. I don't know.

Shay Nehmad:

Yeah. I don't know.

Jonathan Hall:

I think one last comment I wanna call out here, to kinda counter these arguments against using Go for everything. One person says he even wrote his wedding vows in Go.

Shay Nehmad:

So, yeah. In great Reddit etiquette, someone wrote, "I write everything in Go," and then someone responded, "That comment is not valid Go syntax." So unlike the usual Reddit hate that I spew here, I really like this thread. I think it's interesting. There are interesting discussions here, and I'm gonna foreshadow our interview.

Shay Nehmad:

There's one comment here that says probably AI related stuff which is better in Python.

Riccardo Pinosio:

Mhmm.

Shay Nehmad:

So stick around to our interview to find out if that's actually true. Oh. So yeah. Cool Reddit thread. The link... it's pretty fresh.

Shay Nehmad:

Right? It's from, like, a few days ago. So the link is in the show notes if you wanna go nerd out about what Go is and isn't good for. The flame wars are that way. Good luck.

Jonathan Hall:

So here's one that was brought up on our Slack channel. Thanks, Arno, for shouting this out. The title of the blog post is "Blazingly Fast Shadow Stacks for Go." I don't know what a shadow stack is, but it sounds creepy.

Shay Nehmad:

Isn't "blazingly fast", you know, specifically reserved for Rust blog posts? I thought we weren't allowed to use those in garbage-collected languages. So what's this blog post about? The graph at the top seems very interesting.

Jonathan Hall:

Yeah. So it shows a line chart with two lines: a blue line that goes up to the right, as you would expect on a line chart, and a red line that's almost horizontal, close to the bottom, I should say. The TLDR, I'll just read it, because it's short and I think it does a good summary: "Software shadow stacks could deliver up to 8x faster stack trace capturing in the Go runtime when compared to the frame pointer unwinding that landed in Go 1.21. This doesn't mean that this idea should escape from the laboratory right away, but it offers a fun glimpse into a potential future of hardware-accelerated stack trace capturing via shadow stacks."

Shay Nehmad:

So just to give some context: the author of this article, which via the TLDR sounds like it's saying what they did in Go 1.21 is just stupid, there's an 8x performance difference, is the person who wrote the Go 1.21 implementation.

Jonathan Hall:

So it's all good. He's crapping on his own backyard or whatever.

Shay Nehmad:

So it sounds very interesting, but what are shadow stacks? Other than sounding super, super cool... I don't know, sounds like some ninja futuristic thing. It could be, you know, the next Cyberpunk game. Right? Cyberpunk 2078: Shadow Stacks.

Shay Nehmad:

Other than a a cool name for a futuristic, video game, what does it actually mean?

Jonathan Hall:

I'm trying to learn that right now.

Shay Nehmad:

I tried to give you some time.

Jonathan Hall:

I know. I got halfway there while you were segueing there. So maybe before we talk about what shadow stacks are, we talk about the problem. If you wanna generate a stack trace in Go, it's kind of an expensive operation, which is, by the way, why they're not included in errors by default. Way back in Go 1.13, when they added error wrapping and unwrapping.

Jonathan Hall:

I believe they even implemented including stack traces in all Go errors, and it slowed things down so much they had to take it out. I don't know if it was actually merged, but they definitely tried it, and it was too slow, so they took it out. So my hope is that maybe this will allow us to put that back in. But building that stack trace is a little expensive, because you have to go through every stack frame and figure out which line of code relates to it, and so on and so forth. TLDR: a shadow stack kind of maintains that on the fly, as frames are pushed and popped while you run through your code.

Jonathan Hall:

So as you call a new function, it keeps the shadow stack up to date with what's already happening, in a much less expensive way, so that it doesn't have to be done in a slow way. That's a really redundant thing to say.

Shay Nehmad:

The reason this is, like, a new thing... it's not a new algorithm. Right? It's the access to modern hardware. Right. So it's still kind of new, and, you know, you'll probably never interface with it from user space just writing code.

Shay Nehmad:

Right? But if the language could utilize these, and, you know, you have these new CPUs running on your web servers or wherever you're running your program, you could decide... I imagine a future, you know, a year from now, where you have GODEBUG=shadowstacktracing=1. If you care about the error stacks and the performance isn't critical for you, you could accept that trade-off. Right? Every error having a full stack trace, if you care more about reliability and, I guess, observability than pure performance.

Riccardo Pinosio:

Mhmm.

Shay Nehmad:

I actually imagine it being like: okay, for 10% of my users, or 10% of my servers, I ship, you know, with the stack traces. For another 10%, I ship with profiling turned on, right, for profile-guided optimization. And then the rest, the 80%, are the normal ones, like release mode, with all the optimizations built in and none of the extra observability or debugging features inside. And then all the premier users, you know, we promise they're always going to the good servers.

Shay Nehmad:

It could really help you strike a good balance between all these developer-facing features, better error handling and profile-guided optimization and whatever, and just trying to eke out the best performance you can from the language.

Jonathan Hall:

At the end of the article, he also talks about doing the shadow stack in software. That has some drawbacks. It hurts performance for certain cases while improving others. So it's not a clear win. But maybe by adding support for the hardware shadow stack, we can get some performance improvements.

Shay Nehmad:

Yeah. I think this is a really fascinating article. I actually tried to read it a few times, and it's a very technical article, not easy.

Jonathan Hall:

It's not that long, but it yeah. It kinda jumps in the deep end quickly.

Shay Nehmad:

So we just gave you, you know, the CliffsNotes of this. But if you really wanna understand it further, there are a few links inside, and it's really well written. By the way, another, like, very personal blog that I like. This is not, like, on Medium or whatever. This is just, you know, Felix's personal blog.

Shay Nehmad:

I'm not gonna say the last name. No way. I'm gonna put the link to the blog post, but also just the blog itself, in the show notes, because the blog itself is really cool, has a few blog posts, and I'm really hoping to see more in the future. No pressure. Talking about cool blog posts, there's one last blog post I really wanna talk about, just because it's cool.

Shay Nehmad:

I wanna nerd out about it. It's less about the Go language and more about Go's infrastructure. Mhmm. And it's named "Abusing Go's Infrastructure," which is already kind of interesting. So Go has a checksum database.

Shay Nehmad:

Do you know what that is? Vaguely.

Jonathan Hall:

I don't know the details of how it works, but it keeps track of all of the revisions of public modules. Right?

Shay Nehmad:

Public Go modules. Right? Yeah. Right. So no.

Shay Nehmad:

This assumption you just had is the assumption of everybody who uses this database. And the security researcher in this blog post shares how you can upload arbitrary data into this server, not just Go modules. It doesn't filter on it having any Go code whatsoever, or having a go.mod file. So as an attacker, you could say: okay, if I wanna take the Go checksum database down, maybe I can just start uploading infinite amounts of, you know, random data as packages. Mhmm.

Shay Nehmad:

So that's not true. The Go authors did have, DDoSing in mind, or just DoSing. Sorry. So so

Jonathan Hall:

to be clear, it does store version information for Go modules, but it also stores arbitrary other stuff. Is that what you're saying?

Shay Nehmad:

By accident, you can upload other packages there as well. Okay. The intention, the happy-path intention, is to store Go-related metadata, so, you know, when you have go.mod and go.sum, it has the hashes. Right?

Riccardo Pinosio:

That's how

Shay Nehmad:

it works, but you don't need to specify anything specific in the repo you're uploading to put it in the Go checksum database. The Go authors realized that this could lead to DoS. There's a limit. Right? There's a limit on the package size.

Shay Nehmad:

There's a limit on the go.mod file size: like, half a gig and 16 megabytes, or things like that. But you can bypass download restrictions on developer machines. You could, if you're malware, store payloads and retrieve them from that server, and, you know, it's not like you're storing your malware on im-a-hacker-this-is-a-bad-website.xyz. When you look at web firewalls and things like that, you know, around your organization, you're gonna see network traffic to the Go checksum database. It's gonna look totally legit, at least from the domain.

Shay Nehmad:

Right? So it's very interesting. You know, the DoS is gonna be very challenging to execute, but if you're writing malware, it's really good free hosting that looks really nice. This is not that problematic in my opinion; operating a command-and-control server from legit servers is something that almost any development, file storage, shared-whatever software has an issue with. So Go is not unique here,

Riccardo Pinosio:

but

Shay Nehmad:

I was surprised to see it in Go as well. You can basically do a request for the module path, parse the result, extract the version, make another request to download the zip file, and you have the zip file. This is a very, very simple way to implement commands from this command-and-control server hosted by the Go team, right, by accident. So a really cool security, you know, sort of red team, white hat approach to Go's infrastructure.
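The two-request flow described above maps onto the standard GOPROXY protocol endpoints (`/@latest`, then `/@v/<version>.zip`). One wrinkle worth showing: module paths are case-encoded before hitting the proxy, with each uppercase letter becoming `!` plus the lowercase letter. The sketch below is illustrative; the module and version used are just examples, and in real code you'd use `golang.org/x/mod/module.EscapePath` rather than hand-rolling it.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath applies the GOPROXY protocol's case encoding: uppercase
// letters in a module path become '!' followed by the lowercase
// letter, so proxies can store modules on case-insensitive systems.
func escapePath(path string) string {
	var b strings.Builder
	for _, r := range path {
		if r >= 'A' && r <= 'Z' {
			b.WriteByte('!')
			b.WriteRune(r - 'A' + 'a')
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// Hypothetical module used as a stand-in for an attacker's payload repo.
	mod := "github.com/Masterminds/semver"
	esc := escapePath(mod)
	// The fetch sequence: resolve the latest version, then grab the zip.
	fmt.Printf("GET https://proxy.golang.org/%s/@latest\n", esc)
	fmt.Printf("GET https://proxy.golang.org/%s/@v/<version>.zip\n", esc)
}
```

Both requests go to `proxy.golang.org`, which is exactly why this traffic looks legitimate to a web firewall watching domains.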

Shay Nehmad:

Yeah. Also written in a way I really like. It's like, okay, I went to sleep. I woke up. I still need answers.

Shay Nehmad:

This is really an approach that I like. So really good blog post. Again, by the way, another good personal blog from a reverse engineer. So two personal-blog community things posted. We're gonna put, again, both links in the show notes.

Shay Nehmad:

I don't think there's any reason for alarm in terms of, oh my god, this is a very serious vulnerability that puts anyone in danger. Not really. It's interesting, but I don't think it's major.

Jonathan Hall:

Hello. So my son has just walked into the room to end the recording. You wanna say hi, Harvey?

Riccardo Pinosio:

Hi.

Shay Nehmad:

Hi. I guess this is a good time to go to our ad break.

Jonathan Hall:

I... yeah. I think it means it's time for me to be a papa again. So we'll end the recording here, and we'll see you on the other side of the ad break with an interview with Riccardo.

Shay Nehmad:

Yeah. We have a very interesting interview coming up.

Riccardo Pinosio:

We all have you.

Shay Nehmad:

Yes. An interview. Are we recording? We are. We're live.

Shay Nehmad:

Welcome to our ad break. We're not live. This is a podcast.

Jonathan Hall:

We're live right now as we record. Previously live as as every recording I ever saw.

Shay Nehmad:

Live at the time. Welcome to our ad break. I don't know if Filippo kept all that in. If you wanna sponsor the show and help us with this fun but expensive hobby, you can join as a member on Patreon. We wanna thank all our existing patrons and our new member, Simon Law, for helping support the show.

Shay Nehmad:

This money just goes towards editing fees, hosting fees, stuff like that. It might pay for the beer in Amsterdam, but the flight is paid for by my company. So don't worry about it. I'm not flying on airplanes on your dime.

Jonathan Hall:

Right. If you

Shay Nehmad:

wanna reach us, the best way to do it is cupogo.dev. You can find links to all important show-related things there. I don't know about important. This show is not important. It's just, like, cozy and fun.

Shay Nehmad:

Yeah. You can find a store there where you can, hook yourself up with some cool swag.

Jonathan Hall:

That that is important, I have to say. The swag is important. We do it for

Shay Nehmad:

the swag. The same reason everybody was at RSA last week: just for the swag. You can find links to our Slack channel, where there's been a ton of interesting discussion recently. Hashtag cup-o-go on the Gophers Slack. And you can email us at news@cupogo.dev.

Shay Nehmad:

That is news@cupogo.dev. If you wanna support the show in other ways, the best way to do it is through word of mouth. We don't pay to advertise or promote the show at all. All the great audience participation, engagement, and just the numbers we're seeing are thanks to y'all sharing the show and talking about it. And on that note, we really wanna say thanks to Pavel.

Shay Nehmad:

I got notified on LinkedIn, which doesn't happen a lot. You're a LinkedIn beast. Right? You you troll a lot. I totally troll.

Shay Nehmad:

I don't engage with that platform too much, but I got pinged by Pavel. And I don't know, how did you feel when you, you know, read this post?

Jonathan Hall:

Good. I mean, it's a fairly long post. We won't read the whole thing. I like the tagline: "Unpopular opinion: the first book you read when learning a new language should be about writing tests."

Shay Nehmad:

A 100%.

Jonathan Hall:

And I think I agree with that. And then he talks about being introduced to... did he say that we introduced him to Go? Or he just started his

Shay Nehmad:

journey in Golang with the Cup o' Go podcast.

Jonathan Hall:

Yeah. So that's good to hear. Yeah. And he enjoyed the episode where we interviewed Adelina Simion about her book, Test-Driven Development in Go. So, yeah.

Jonathan Hall:

It's nice to see that the show is having an impact on people. Can I say this? I'm not gonna share names, but you mentioned before we recorded today that one of our previous guests got some VC funding as a result of coming on our show. Yep. So that feels good too, to just be having an impact and helping people with their careers, learning Go, and everything.

Shay Nehmad:

Yeah. We set up this show to just learn Go ourselves and make sure we're on top of everything new in Go. So it was a very selfish endeavor, as you would imagine recording yourself on a weekly basis is. But it turned out, you know, the community and the talking to people about it turned out to be a much bigger part for me than the personal angle, I would say. So thanks a lot, Pavel, for your unpopular opinion.

Shay Nehmad:

If your other unpopular opinions can also include liking us, that would be good. And it would also track with what I normally know about myself, that it's unpopular to like me. So if you wanna help support the show, you know how to do it. If you wanna get more Cup o' Go in your life, physical or in your ears, cupogo.dev is where you do it. And one last thing before we send you off to our very interesting interview: we plan to do a physical episode.

Shay Nehmad:

We mentioned this on the show as well: June 19th in Amsterdam. If you can make it, please tell us. And if not, totally fine. Jonathan and I will just slam a couple of beers.

Jonathan Hall:

It could just be the two of us. It'll probably be a lower quality episode if it's just the two of us than if somebody joined us.

Riccardo Pinosio:

But what

Jonathan Hall:

may happen?

Shay Nehmad:

Stick around for an interview with Riccardo about Hugo. It's gonna be interesting.

Jonathan Hall:

Hugo? Isn't that a static site generator?

Shay Nehmad:

Let's find out. Hey, Jonathan.

Jonathan Hall:

Hi, Shay.

Shay Nehmad:

Alright. So I really wanna set us up here. I really like, blogging. Static blogging. So today, we're talking about Hugo.

Shay Nehmad:

It's a project in Go, called Hugo. And that's, what I know. Right?

Jonathan Hall:

Yeah. Yeah. I think you got I think you spelled that wrong.

Shay Nehmad:

What do you mean?

Jonathan Hall:

Hugot has a "t" at the end. Doesn't sound like it. Well, maybe we could find someone to explain why it's spelled that way. Hey, Riccardo. Could you help us with that?

Riccardo Pinosio:

Yes. I can. Hey. Yeah. It's, you know, there's the famous dictum that says there are only two hard things in computer science: cache invalidation and naming things.

Riccardo Pinosio:

And I think this might be an unfortunate result of that, because Hugot is not the Hugo static website generator, the stuff that we know and love; it is something completely different. It is a library that we've been building for machine learning in Go. Actually, transformers in Go.

Jonathan Hall:

That sounds even more exciting than static websites.

Riccardo Pinosio:

Well, if if it depends what you like, but, it is very exciting, definitely.

Jonathan Hall:

Alright. Well, let's let's talk about that. But first, Ricardo, would you tell us a little bit about who you are and and then we'll talk about the project.

Riccardo Pinosio:

Yeah. So, I'm Riccardo Pinosio. I'm a machine learning engineer, and I currently work at Knights Analytics, which is a company that builds, essentially, solutions for master data management. And before that, I was working at various companies, mostly as a machine learning engineer in the AI and finance space. And before that, also, I did my PhD here in Amsterdam in mathematical logic.

Riccardo Pinosio:

So I'm actually a mathematical logician by training. And I've been using Go... I'm actually quite a recent convert. Okay. To Go, because it's been slightly more than two years now. Before that, of course, coming from machine learning and AI, it was essentially, well, first R, and then Python, lots of Python, everywhere, obviously.

Riccardo Pinosio:

But then when I started working at Knights, we built our product called Alkimia. The whole stack was just built in Go. Right? And one of the goals was to integrate more and more machine learning capabilities and transformer capabilities into the product. And that's actually where Hugot comes from.

Riccardo Pinosio:

And that's essentially the origin, right, of Hugot. That's what I've been doing in the past years. We need transformers and machine learning capabilities in our Alkimia product, because we also work a lot with unstructured data, and we tend to be mastering that type of unstructured and semi-structured data.

Jonathan Hall:

Okay.

Riccardo Pinosio:

And therefore, we use a lot of the latest techniques for that. But it wasn't that easy to do it in Go, obviously, because I think it's still a bit behind with respect to, let's say, Python in terms of machine learning, but my hope is that it will get better. And there's a lot of opportunity to make it better. Right? So that's also fun.

Jonathan Hall:

Hugot calls itself "Hugging Face transformer pipelines in Golang"; that's the description of it. For those of us in the audience, including myself, who don't really know machine learning, what is Hugging Face? And why would you want a Golang pipeline for

Riccardo Pinosio:

it? Yeah. I can explain that. So, essentially, Hugging Face is, let's say, the most important repository of machine learning models, open-source machine learning models, that is out there. So what they do is they try to, let's say, market themselves as the GitHub of machine learning.

Riccardo Pinosio:

So the goal is, as a researcher, or as a company that trains a machine learning model, you can publish your models on Hugging Face. And they have a variety of models there. Right? From, let's say, image classification. Mhmm.

Riccardo Pinosio:

You know, the classic example is: is this image a picture of a cat or a dog? To natural language processing, which might be tasks like: if I have a sentence, can you identify whether this sentence has positive or negative sentiment?

Shay Nehmad:

Yeah. I really like the example in the README where you have two sentences which sound, you know, obviously positive and negative when you read them out loud. The positive one is "This movie is disgustingly good." Yes. The negative one is "The director tried too much."

Shay Nehmad:

But when you actually think about what these sentences say, like, the words in them: the first one has the word "disgusting" in it, so maybe it's negative. And the director "tried" a lot, so maybe it's good. So to humans, it's obvious, you know, that the labels are correct. And then the model, obviously, you know, labels them correctly as well, with a really high confidence score.

Shay Nehmad:

I think it's a really good README example. Slightly more

Riccardo Pinosio:

Yeah.

Shay Nehmad:

Interesting than, the dog cat example.

Riccardo Pinosio:

Yeah. Yeah. Exactly. Because there's there's a bit of subtlety there. Right?

Riccardo Pinosio:

So the ability of a model to pick up on the things that you pointed out is not trivial. And so Hugging Face is essentially a repository for all types of models. I mean, they have many, right, from simple to very advanced, many applications: computer vision, natural language processing, multimodal, LLMs, you know, all sorts of things. But it's not just that. It's also a collection of open-source libraries.

Riccardo Pinosio:

The most famous one is called Transformers. And it's written in... well, it's a Python library. And what it offers is essentially a nice interface, a nice library, to train transformer models, and also perform inference with them. So use them to predict. So you might use the Transformers library from Hugging Face to, let's say, take a model that has already been trained, like the example that Shay was giving.

Riccardo Pinosio:

Right? So I already have a model to classify whether a sentence is positive or negative, and I want to do inference with it on a set of a million sentences that I have. And you might use that library to be able to do that. Or you might have a set of sentences that you already classified as positive or negative yourself. Right?

Riccardo Pinosio:

With old-school manual labor: looking at the sentences and saying, yeah, this is positive or this is negative. And then you might use the Hugging Face Transformers library to train your models. That's called, let's say, fine-tuning. That's how we call it.

Riccardo Pinosio:

In technical terms, fine-tuning basically means you already have a model that has been trained, but you now have additional training data. You want that model to look at the training data and get better at the task. Right? And so Hugging Face Transformers offers all of these capabilities. The issue, of course, that we encountered in our case is that that's all Python-based.

Riccardo Pinosio:

They do have some Rust, actually, but basically the main way of doing this is through Python. So we rapidly encountered the technical issue of, well, if I train my model with the transformers library to classify sentences, how do I then run it in my Golang application at scale, and still take advantage of the nice things that Go gives us? Right? In terms of concurrency and so on and so forth, and speed. That's where Hugo sort of came from, because we didn't find any solution that totally fit our purpose there.

Jonathan Hall:

So clearly, Python is is in the lead, so to speak, as far as, like, a number of libraries and just general community support and so on for machine learning and and all sorts of data transformations and stuff like that.

Shay Nehmad:

So so I'm gonna stop you right there. Yeah. I just had this argument at work. Okay. It seems like Python is in the lead.

Shay Nehmad:

But in reality, C is in the lead. Because every single important Python library that you use for machine learning is just a wrapper around C right now. Right? It's not like you're using PyTorch and it's

Jonathan Hall:

But nobody's writing, well, not nobody, but few people are writing C to do this work. They're writing Python. And you can say the same about any interpreted language. You could say JavaScript isn't in the lead in the browser because it's all written in C.

Jonathan Hall:

I mean

Shay Nehmad:

Yeah. But there are many libraries that are pure JavaScript that you could get into. But if you wanna get into, you know, the model or anything like that, it's not like you have a better vantage point, for example, trying to compare two models from Python than from any other language. It's just that the libraries right now are written in Python.

Shay Nehmad:

So I agree, but because it's so dominant, it's important to remember that it's not like there's something inherent in Python that makes it good for machine learning. It's just easy to get started and write, like, a simple script.

Jonathan Hall:

Right. I don't think that changes my point though. Because, I mean, if I'm not mistaken, even Hugo uses a lot of C bindings.

Riccardo Pinosio:

Yes, that's very much true. So, essentially, I think you're both right in a way, in the sense that it's true that most of the Python machine learning libraries are essentially calling into C, for obvious reasons, in the sense that machine learning inference is a very expensive operation, and training is even worse. So every bit of performance that you can get there is worthwhile. And, obviously, Python is not known as the most performant language of all.

Riccardo Pinosio:

And so that is how it is. However, Python does have a strength in that sense, because it's extremely easy to write bindings to C code and even to Rust code from Python. Right? So it's kind of like a language that is very good at writing wrappers around C code or Rust code and so on and so forth. And in that sense, it's not very different from bash.

Riccardo Pinosio:

Right? I mean, there is a similarity, a parallel there. Right? I mean, bash is also very good at calling into C, into compiled, you know, routines, and so on and so forth.

Riccardo Pinosio:

And that's essentially how Python is used there. So PyTorch is C and C++ in the background, and so on and so forth. And that's also what we do in Go. And Go is a bit more challenging because of cgo, but that's also the route we went down, because we thought, well, essentially you have two options. Right?

Riccardo Pinosio:

Either you build something in native Go, or you essentially rely on bindings to what is already there in C and write around it all sorts of functionality and make it easy to use. Yeah? And there have been attempts. Right? If you look at Go packages, there are Go packages that try to do these things in native Go at various levels of abstraction, from writing your own neural network to being able to run pretrained models.

Riccardo Pinosio:

But that presents a lot of challenges, because it's very hard. These are very big projects. Right? They're very hard to write. They need a lot of effort to maintain, a large community effort.

Riccardo Pinosio:

And you always run the risk of essentially not being able to maintain such a large project. For us, it was an easier path, and I think also a more fruitful path, to align ourselves with ONNX Runtime. For the listeners that don't know what that is, it's a Microsoft project. ONNX stands for Open Neural Network Exchange. It's a format, actually.

Riccardo Pinosio:

It's basically a file format to store this type of model, machine learning models. But they also provide C APIs to be able to run these models at scale. It's very optimized, very fast. And so what Hugo does is essentially use the bindings, right, to the ONNX C API to be able to read these models, run inference with them, and also post process the output so that it's usable and can be easily used by a Go developer to replicate essentially exactly what the Python version does. So the goal for us was really: if I test something with Python transformers, you know, Hugging Face, which is sort of the industry standard for open source transformers.

Riccardo Pinosio:

I run something there. I see the results. How can I now take that model and transfer it to Go and run it in pure Go, in our Go application, but without having to call into Python and do any other hacking?

Riccardo Pinosio:

So that's essentially the goal of the Hugo library. Yeah.

Jonathan Hall:

So I have a couple questions. I wanna get back to the question I was gonna ask regarding the preamble: Python seems to be sort of in the lead, at least in terms of code written to do machine learning and data analytics. Most of the data scientists and data engineers are writing Python code. Yeah. What are the other languages?

Jonathan Hall:

I mean, I know it's one of your goals to sort of help Go rise to that level. Maybe even, you know, if all our wildest dreams come true, supplant Python, you know, become the leading language in that space. But what are the other languages? I know R is popular in there. I don't know if it really is in the same space as Python.

Jonathan Hall:

You talked about Rust. What other languages, if we frame this as a competition, what other languages is Go up against in this race?

Riccardo Pinosio:

So I would say, definitely, Python is currently at the top. Right? It's in the lead, without question. R is very strong in the traditional statistical tooling. Right?

Riccardo Pinosio:

Statistical models. But for the latest things, so, well, not all AI, I should say, but a lot of AI, a lot of attention now is on generative AI and transformers and the GPT stuff and all that kind of stuff. For that, R is very, very much behind compared to Python, because for a programming language, these are community efforts in the end. So, of course, the community influences very much what the language is used for.

Riccardo Pinosio:

Right? So R is very much the prominent language for statistics. And so the functionalities implemented there are all statistical functionality and classical machine learning. But for transformers and AI, it's very much behind. Python has the strength of being the glue.

Riccardo Pinosio:

Right? So, obviously, it's just on top. I do think we'll see a resurgence of C and C++ themselves, as in even skipping the Python layer and just directly using C and C++ to build your ML applications. Then there's obviously Rust, which has some advantages. I think it's more advanced than Go currently in that space.

Riccardo Pinosio:

For example, Hugging Face tokenizers, which is a fundamental component of running these transformers, is written in Rust. And then you have Python bindings to it. Right? Python makes it really easy to bind to Rust. Right?

Riccardo Pinosio:

And so that's basically how that works. So I think these are definitely the main contenders. There's Julia. I haven't used Julia very much. To be honest, I'm not sure how it's doing on the generative AI and transformers battlefield.

Shay Nehmad:

The last time I heard about Julia is when there was a theoretical discussion about programming languages, similar to this one, where we're, like, oh, we're wondering what languages might be relevant for X. I have yet to meet anyone who gets paid money to use it and talks to me about it. And I tend to talk to a lot of programmers. So I wouldn't be too worried about Julia.

Shay Nehmad:

If you're listening and you're like, oh my god, I have to learn Julia as well. You you can put it at the bottom of the backlog, I think.

Riccardo Pinosio:

Yeah. The only thing I know about Julia that, of course, makes me very happy as a nerd is that you can actually write Greek letters in your program. Right? So you can use full Unicode. Yeah.

Riccardo Pinosio:

So your variables could be alpha and gamma and so on.

Shay Nehmad:

Killer feature.

Riccardo Pinosio:

And, of course, you would never want to do that. Right? I mean, it's a terrible idea in practice, I think, but, hey, it looks really nice. It's, I'm writing a math algorithm, and I can put alpha in as it was. It also makes you feel very smart.

Jonathan Hall:

So these

Riccardo Pinosio:

are always a good idea.

Jonathan Hall:

Yeah. Well, you could you could do the same in Go if you

Shay Nehmad:

really want to, by the way.

Jonathan Hall:

Oh, yeah. It supports full Unicode variable names too. Yeah. Yeah.
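For the curious, this really does compile: a toy snippet (not from the episode; all names are illustrative) showing Go accepting Greek-letter identifiers.

```go
package main

import "fmt"

// λ and Δt are ordinary identifiers; Go allows any Unicode letter in names.
func decay(λ, Δt float64) float64 {
	return λ * Δt
}

func main() {
	// α compiles just like any ASCII variable name would.
	α := decay(0.5, 2.0)
	fmt.Println(α) // prints 1
}
```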

Riccardo Pinosio:

Okay. So if

Jonathan Hall:

you wanna do that terrible thing

Riccardo Pinosio:

I'm gonna go back and rewrite our whole code base with alpha, beta, gamma, and then I'm just gonna push it without code review, and then I'm gonna see what happens.

Jonathan Hall:

There that's awesome. You're gonna

Shay Nehmad:

see the stargazers. You know how in GitHub, in the video, you have the stargazers chart, always going up. You're gonna start

Riccardo Pinosio:

to see... You're gonna see the crash.

Shay Nehmad:

So with Hugo, the main thing you need to learn, or the main thing you need to know to use it, other than Golang, of course, is pipelines. Right? The concept of pipelines in Hugging Face. Yeah. The concept of pipelines is not specific to Hugo. It's just something you need to learn if you're using Hugging Face, which is basically: pick the model you wanna work with

Riccardo Pinosio:

Yeah.

Shay Nehmad:

For whatever task, and then start to run it. How much did you wanna stick to the Python pipeline API, which seems very open, and you can, like, do whatever you want? There are five different ways to open a pipeline. They're all valid. If I know and already use the Python Hugging Face SDK.

Shay Nehmad:

Like, will my migration be super easy, where I just copy paste the pipeline code and replace the equals with, you know, colon equals and everything's gonna work? Or is it a very different API with, like, different intentions in mind?

Riccardo Pinosio:

No. So the goal of what we have done there now is essentially to be a one to one rendition of what is on the Python side. So first of all, just to explain what a pipeline is: there's a bit of a difference between a model and a pipeline, in the sense that the model is just a neural network. Right? And the output of the neural network is essentially an array.

Riccardo Pinosio:

Right? Or a slice, an array. Right? So it's all floats.

Shay Nehmad:

I heard, I don't remember where, but I just heard it this morning, that linear algebra had the biggest marketing scheme ever, rebranding as AI.

Riccardo Pinosio:

Oh, yes. Yes. Very much so. Yes. Indeed.

Riccardo Pinosio:

If it's linear algebra, we think of it as the most boring thing in the world. But now if it's AI, then, no. That's the strategy. So, yeah.

Shay Nehmad:

I don't know if Elon would raise 6 billion for x linear algebra, but xAI, for sure. Just take my money.

Riccardo Pinosio:

That's x matrix multiplication. That doesn't sound good enough. Yeah. So it's all matrix multiplication, right, in the end. So the output of the model is just a big matrix.

Riccardo Pinosio:

Yeah. But, of course, if you have a specific use case, like, for example, a thing we do a lot is: I have a sentence, and I need to extract what we call named entities from the sentence. So, for example, if I have the sentence, I don't know, Sam Altman is the CEO of OpenAI: Sam Altman is an entity of type individual, CEO is an entity of type role, and OpenAI is an entity of type organization.

Riccardo Pinosio:

Yeah? The model, if I run it on the sentence, will just give me the matrix. But then I need to reshape it. I need to look at the labels. I need to do a whole bunch of post processing to get, as a result, the JSON that tells me, hey.

Riccardo Pinosio:

These are the entities that are in there. Right? So that's what's happening, by the way, also on the transformers side. Right? So the transformers pipelines from Hugging Face are running on PyTorch.

Riccardo Pinosio:

Right? The model is a PyTorch thing. And then they have this sort of pipeline that wraps around it and does all of these nice things for you, so that you don't have to worry about implementing all of it yourself. You can just pass in a sentence and get JSON out. That's the goal.
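To make that "matrix in, label out" post-processing step concrete, here is a minimal, self-contained Go sketch with made-up logits and labels. Real pipelines (in Hugo or transformers) also do tokenization and read the label map from the model config, which this toy skips; `classify` and `softmax` are illustrative names, not the library's API.

```go
package main

import (
	"fmt"
	"math"
)

// softmax turns raw model logits into probabilities.
func softmax(logits []float64) []float64 {
	maxLogit := math.Inf(-1)
	for _, l := range logits {
		if l > maxLogit {
			maxLogit = l
		}
	}
	var sum float64
	probs := make([]float64, len(logits))
	for i, l := range logits {
		probs[i] = math.Exp(l - maxLogit) // subtract max for numerical stability
		sum += probs[i]
	}
	for i := range probs {
		probs[i] /= sum
	}
	return probs
}

// classify maps the highest-probability index to a human-readable label.
func classify(logits []float64, labels []string) (string, float64) {
	probs := softmax(logits)
	best := 0
	for i, p := range probs {
		if p > probs[best] {
			best = i
		}
	}
	return labels[best], probs[best]
}

func main() {
	// Pretend the model returned these logits for one sentence.
	logits := []float64{-1.2, 2.7}
	label, score := classify(logits, []string{"negative", "positive"})
	fmt.Printf("%s (%.2f)\n", label, score)
}
```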

Riccardo Pinosio:

That's the idea. And the idea of Hugo, currently, is that we replicate that functionality. We don't have all pipelines, actually, and this is a good call: if you want to contribute, there's still a lot of work to do. We don't have all the pipelines that Hugging Face transformers implements.

Riccardo Pinosio:

We have, like, a set of core ones for NLP, because that's also what we use at Knights Analytics. But if you want to contribute, we are very much open for people to come over and help us integrate further pipelines in the library. But essentially, it needs to match that one to one. Yes. The only thing I will mention there, a caveat, is that we only support, and for the foreseeable future will only support, ONNX Runtime exports of the equivalent Hugging Face model.

Riccardo Pinosio:

Right? So the typical flow for you as a developer, if you wanna use it, is: you can try the pipeline in Python, see if it gives the results that you want. Then you can use Hugging Face to export the model for ONNX Runtime. You basically export it into a folder with a couple of files in the ONNX binary format.

Riccardo Pinosio:

And then you go to Hugo, and you essentially point it to that folder, and then you can recreate that same pipeline in Go and run the model.

Shay Nehmad:

Cool. So a pretty easy migration from, like, if I'm working in Python today.

Riccardo Pinosio:

Yes.

Shay Nehmad:

What benefits have you seen at Knights Analytics from, you know, doing this project in Go? Obviously, all three of us here are biased. We probably wouldn't be hosting the podcast, and you probably wouldn't be a guest, if, you know, we didn't like the language. But, you know, trying to look at it objectively: it sounds like you start experimenting with Python, then you migrate to Go, but you migrated to Go and then what?

Shay Nehmad:

Like, what's the benefit you've seen specifically, you know, where you work?

Riccardo Pinosio:

So for us, it was essentially a matter of, so a lot of the things that we do, for example, are deployments. Our whole stack is Go. Right? And Go offers us benefits in terms of concurrency, being able to very easily design concurrent systems, and we wanted to be able to, for example, take a million, or ten million, or whatever, a hundred million news articles and just pipe them through this model, right, to be able to extract information from them. Yeah?

Riccardo Pinosio:

And Go, of course, makes that extremely effective compared to Python, to design that type of concurrency model. And the additional problem we had was that we could not do API calls, because we deploy in environments where you're not allowed to send any data outside. Right? For instance.

Riccardo Pinosio:

So we needed to do this at scale and locally. Right? Without doing REST calls to whatever. And so that was sort of the setup. Right? The benefit for us of being able to do it in Go is the ability to, you know, concurrently pipe input to these models at scale.
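The fan-out pattern described here can be sketched with a plain Go worker pool. This is a toy, not Knights Analytics' code: `classify` is a stand-in keyword check where the real system would call into a local ONNX Runtime model, and all names are illustrative.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// classify stands in for a call into a locally loaded model; in the real
// setup this would be an ONNX Runtime inference call, not a keyword check.
func classify(article string) string {
	if strings.Contains(article, "great") {
		return "positive"
	}
	return "negative"
}

// classifyAll fans articles out to a fixed pool of workers and collects
// the results into a map keyed by article text.
func classifyAll(articles []string, workers int) map[string]string {
	jobs := make(chan string)
	results := make(map[string]string, len(articles))
	var mu sync.Mutex
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for a := range jobs {
				label := classify(a)
				mu.Lock()
				results[a] = label
				mu.Unlock()
			}
		}()
	}
	for _, a := range articles {
		jobs <- a
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	articles := []string{"a great launch", "a slow quarter"}
	for a, label := range classifyAll(articles, 4) {
		fmt.Println(a, "->", label)
	}
}
```

With a real model in place of `classify`, the same structure lets you pipe millions of articles through a fixed number of concurrent inference workers.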

Riccardo Pinosio:

And with Hugo, you can do it. There are no API calls, so you can do it locally, right, with a local model. And that's essentially the setting, right, that we're trying to tackle. We are actually in the process of measuring this. I don't have data for that yet, but one thing that I have in mind, actually, this is also a good issue for a first timer who wants to contribute, is to compare the performance to the Python version.

Riccardo Pinosio:

So if I take a Python pipeline and, like, the Hugo version in Go, what is the performance gain or the performance difference between the two implementations? So that's one thing that we are looking at. But for us, actually, pure performance was less important in this case than the stability and the ability to have a tight integration with our larger Go application.

Shay Nehmad:

So, like, maintainability within your current ecosystem was a big thing.

Riccardo Pinosio:

Yeah. Because imagine, I mean, let's say, and we have this use case, right, so there are people that are using Go for this scenario: you have a Go application, and you want to receive, let's say, an API call that sends, with a POST, some news article, and you want to be able to calculate an embedding of the news article and find similar articles.

Riccardo Pinosio:

That's a typical use case. We call it semantic search. It basically means: take some text, create a vector from that text, and then use that vector to find similar articles, for example, similar news articles, from a database of articles that you have already previously embedded. Yeah? And if you want to do that in Go, let's say without Hugo, what you'd have to do is, you would have to say, okay.

Riccardo Pinosio:

Well, I have my back end in Go, but now I need to be able to call transformer models to get these embedding vectors out. How do I do it? Well, you could have a Python microservice that you would deploy separately, and then you could do an API call to that Python microservice, which would send back the response with the embedding, and then you could, you know, do whatever search you wanna do. But then this complicates things. Right?

Riccardo Pinosio:

So now suddenly, if your application is Go, you need the Python microservice, which will need to have its own container deployment, for example. So it introduces a whole host of complexity, versus being able to do it with Hugo, which is: hey, I have this folder on disk. It has the model. Use the Hugo library to load it in.

Riccardo Pinosio:

Now it's a struct in memory, with struct methods to do inference, and you just pass input in. Right? So it becomes a much easier story to maintain, build, and manage if you have a Go application, right, and you want to use these machine learning capabilities.
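Once the embeddings exist in your Go process, the search step itself is simple. A minimal sketch: toy three-dimensional vectors stand in for real transformer embeddings, and `mostSimilar` is an illustrative name, not Hugo's API.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine computes the cosine similarity of two embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

type doc struct {
	title string
	vec   []float64
}

// mostSimilar ranks previously embedded documents against a query vector,
// most similar first.
func mostSimilar(query []float64, docs []doc) []string {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(query, docs[i].vec) > cosine(query, docs[j].vec)
	})
	titles := make([]string, len(docs))
	for i, d := range docs {
		titles[i] = d.title
	}
	return titles
}

func main() {
	// Toy 3-dimensional embeddings; real ones come from a transformer model.
	docs := []doc{
		{"sports recap", []float64{0.9, 0.1, 0.0}},
		{"election news", []float64{0.1, 0.9, 0.2}},
	}
	query := []float64{0.2, 0.8, 0.1} // embedding of an incoming article
	fmt.Println(mostSimilar(query, docs))
}
```

In production you would typically hand the vectors to a vector database rather than sorting in memory, but the similarity computation is the same.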

Shay Nehmad:

I can really imagine someone on, you know, on their way to work right now just being, like, angry, because they just yesterday finished deploying their new Python back end. Yeah. Because they thought, oh, it's machine learning, I have to do it in Python. And then they're just listening and they're like, Shai, how do you know my life?

Riccardo Pinosio:

Yeah. Yeah. Yeah. Coming into work and, like... And we have people, right, you know, that we talk to who tell us, like, oh, now I have all my Python microservices. I'm gonna start migrating them all back.

Riccardo Pinosio:

Yeah. I'm removing them all, and then I'll use Hugo, so I can do it in one whole application.

Jonathan Hall:

One last question. I hope it's a quick one. We need to wrap up soon. Suppose somebody listening is interested in machine learning. They've never done it before, but they know Go.

Jonathan Hall:

What's the learning curve like to just jump right in with Hugo instead of learning it with Python first? Is that tenable, or is that gonna be a really steep learning curve?

Riccardo Pinosio:

I think it's doable to just go straight to Hugo, because, again, if you don't want to understand the underlying details, the concept is not super difficult. You just need to understand what your use case is. Right? If I know what my use case is, hey, I want to be able to do sentiment analysis or entity extraction, you can go look at the Hugo documentation. You can see, well, this is the pipeline that can help me here, that has already been implemented. And then the only thing that you would have to do, and for that, you still need Python.

Riccardo Pinosio:

I'm hoping to eventually be able to do it without it, is you would need to find a model on Hugging Face that you wanna use, right, that is already pretrained, for instance. And then you would have to export it to the ONNX format that we use. But that's literally, like, five lines with Optimum, which is the Hugging Face library for this: you import it, you load the model in, and then you export it to ONNX. And what that does is it generates the folder with all the assets that you need, and then you can use it with Hugo.

Riccardo Pinosio:

So I think that the learning curve to be able to do inference with models that are already trained is very low. If you want to train your model, meaning you want to fine tune it, then at the moment, you can't do that with Hugo. You still need to go via Python. So we only do inference. However, I'm hoping to also fix that, because I'm currently working on the bindings to ONNX Runtime training.

Riccardo Pinosio:

That's the C library for ONNX Runtime that allows you to train models. So if I can get those bindings going, then eventually you'll be able to use Hugo also to train your model. So provide the model with examples and then fine tune it. And then you have a better model, so that you don't have to go to Python at all. That would be a dream.

Riccardo Pinosio:

Right?

Shay Nehmad:

Yeah. Right. Right.

Riccardo Pinosio:

So that's that. But for the moment, if you wanna do inference, and you have a use case for which you want to use a model, you can start straight with Hugo. Yeah.

Jonathan Hall:

Great. Well, let's, let's tell people where they can find you and Hugo. Hugo's on on GitHub. Is there a website other than that, or is GitHub the best place to go?

Riccardo Pinosio:

Yeah. That's the best place to go for now. Okay, users can go to GitHub. We have a readme that I think is quite comprehensive.

Riccardo Pinosio:

And I'm working on, a docs page and actually a tutorial.

Shay Nehmad:

Please tell me the docs page for Hugo is gonna be in Hugo.

Riccardo Pinosio:

That's right. And then

Shay Nehmad:

and then my joke from

Riccardo Pinosio:

the beginning will be complete. Now, I didn't think about it, but actually, that's a great idea. Hugo on Hugo. And then it's like, Hugo inception. Yeah. Yes.

Riccardo Pinosio:

Indeed. So that will be there. And I'm working also on a tutorial for semantic search with Hugo and Qdrant, which is a vector database. Mhmm.

Shay Nehmad:

I'm

Riccardo Pinosio:

sure you guys have heard of it. It's a very good vector database. And so, for the moment, if people want to learn more and contribute, the Hugo GitHub page is perfect. The readme is quite extensive, I think. And yeah.

Riccardo Pinosio:

And we look forward to people using it, contributing to it.

Shay Nehmad:

And if people wanna reach out to, you specifically, where can you be found online?

Riccardo Pinosio:

So, I'm not really that much of a social media person or that kind of stuff. I'm not...

Shay Nehmad:

I'm happy to hear that.

Riccardo Pinosio:

It's a

Shay Nehmad:

lot of waste of time.

Riccardo Pinosio:

I think, actually, for Hugo, you can just use the issues page on GitHub. Otherwise, you can send me an email, or you can also go to the Knights Analytics webpage and contact us through there. So if you Google

Shay Nehmad:

If you're listening, the links, both for the GitHub and for knightanalytics.com, are in the show notes. Yep. Yeah. So you can jump on there.

Riccardo Pinosio:

Yeah. Exactly. If you go to the Knights Analytics page. Yeah.

Jonathan Hall:

We have one last question that we'd like to ask all of our guests. This might be easier for you since you said you just started using Go about a couple years ago. But the question is, thinking all the way back to when you started learning Go, what surprised you the most, or what was the biggest challenge for you?

Riccardo Pinosio:

Okay. Yeah. That's a good question that I always like answering, because I really like programming languages, and I like to talk about their differences. And I like to, you know, bitch about them, and say what I don't like. Also say what I like.

Riccardo Pinosio:

It's like the gossip. It's like the nerd version of gossip. Right? So,

Shay Nehmad:

I

Riccardo Pinosio:

would say, to do some gossip: the thing that I found most challenging, I think, was, as a machine learning engineer, I mostly work with extremely expressive languages. Right? Like Python, R. Well, R is actually very close to Lisp, so it's extremely expressive.

Riccardo Pinosio:

Python, you can do all your classes, inheritance, all sorts of different programming paradigms are supported. Go, when I started, the type system is quite bare. Right? It's quite bare bones. And so I did have quite a lot of struggle to switch the mindset from, you know, before, when I used to think, okay.

Riccardo Pinosio:

Let's think about what types we have. Right? Or what classes we have. And let's structure the code base thinking about the types and the classes and the methods that they have and what they are able to do and how they interact with each other, and so on and so forth. And then, moving to Go, I missed that type of abstraction, because I really do think Go works best when you just write code, you know, almost in an imperative fashion.

Riccardo Pinosio:

And then you introduce abstractions as you go, when you need them and where you need them. So that's the thing I found most challenging: the type system, you know, the lack of enums, the lack of union types, discriminated union types. Those were sort of challenging. But now that I use it a bit more, I don't miss them that much, because you really need to get into that mindset of: let's do abstractions later on as they arise, versus let's try to plan all my abstractions up front.

Riccardo Pinosio:

I also found that, look, for me, it's a very good language in a way for my productivity, because I do have a tendency, I think, to sometimes over engineer things, just because I like the abstraction. Right? It's just that I go, I have this class, oh, I can do something.

Shay Nehmad:

We're all we're all guilty of it one time.

Riccardo Pinosio:

I can do this smart reflection thing here, and then it's all... yeah, you know what I mean? And then Go tells me, like, no. You're not gonna abstract. You're gonna have a for loop.

Riccardo Pinosio:

And if you want something, you can have an interface, and that's that. And I noticed, actually, that in the end, I'm actually much more productive in Go.

Shay Nehmad:

Sort of reins in your creativity to the important parts.

Riccardo Pinosio:

Yes. Yeah. Exactly. Exactly. I think it's like a metaphor: Go is like, you know, sort of like a Greek temple.

Riccardo Pinosio:

Right? So it's clean, and the lines are all straight lines, versus, okay, super complicated pipes and so on and so forth. So that's what I like about Go. And in a way, it feels very similar to C. Right?

Riccardo Pinosio:

So right now, well, I'm writing bindings to C, and I get kind of the same feeling, right, from C, which is like

Shay Nehmad:

Mhmm.

Riccardo Pinosio:

Yeah. Let's not get bogged down in too much abstraction, because then I look at it, like, in two months, and I don't know what I was doing there, because I thought I was clever at the time. And, you know, now it's like, hey, what the...? You see what I'm saying?

Riccardo Pinosio:

I think everybody has that experience, and I think that's what's nice about Go. And the other one, which I think is also the one that everybody mentions, because it was in the Stack Overflow survey: the error handling, you know. If err is not equal to nil, do the other thing.
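For listeners new to Go, the idiom being discussed looks like this; `parsePort` is just an illustrative example of the explicit check-and-return style, not code from the episode.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort wraps strconv.Atoi with the standard explicit error check
// that Go uses instead of Rust-style Result/Option types.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	if port, err := parsePort("8080"); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("port:", port)
	}
}
```

Every fallible call returns an explicit `error` value that the caller must check, which is exactly the repetition the Stack Overflow survey responses complain about.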

Jonathan Hall:

Yeah.

Riccardo Pinosio:

And so that is actually the single thing where I think it would be really nice to have, like, result types and option types like Rust does. I think the way Rust does it there is strictly better. But it's a small thing, I mean.

Shay Nehmad:

We we saw recently a few articles. I don't remember the name of, like, a language that compiles to go. Do you remember what I'm talking about? You think

Riccardo Pinosio:

it's Borgo. Borgo. Borgo. Yeah. Yeah.

Shay Nehmad:

Yeah. Yeah. Borgo. Yeah. Obviously, it's very experimental and early, but it sounds like exactly your kind of, your cup of tea, where it's Go, but you have result types, and it's in the middle between complexity and type safety.

Riccardo Pinosio:

Yes. Exactly.

Shay Nehmad:

Trying to have its cake and eat it too. You know, it's a result type. So the value is the cake, and the error is, oops, I ate it. It returns both.

Riccardo Pinosio:

Yes. It's funny it was mentioned there, because, you know, I was thinking the other day, oh, it would be really nice if, you know, Rust and Go had a child that was, you know, a bit less smart than Rust sometimes and a bit smarter than Go sometimes. And Borgo is kind of that. That's the idea. Right? Because with Borgo, roughly, you can write code that has option and result types, but then it compiles to

Shay Nehmad:

Yes.

Riccardo Pinosio:

To Go. So it's almost like... It

Shay Nehmad:

transpiles to Go. But yeah. Yeah.

Riccardo Pinosio:

Indeed. Totally.

Shay Nehmad:

Well, thanks, Riccardo, for jumping on our machine learning adventure, which is not normal for us. Like, people on this show usually are like, okay, so this is the five hundredth web server I've written, this is what I think about it. Yeah. But yeah, there are other aspects to Go as well that are expanding.

Shay Nehmad:

Again, the call to action from this: if you listened to this podcast and you were like, oh my god, this sounds super interesting, the link for the GitHub repo is in the show notes. And there's a very obvious path to contributing. There are many missing pipelines.

Riccardo Pinosio:

Yes, exactly.

Shay Nehmad:

So if you have a pipeline that you currently use and Hugo supports it, great. You can use Go and machine learning. Awesome. And if the pipeline isn't supported yet, just from a quick browse of the repo, it seems like very easy, you know, open source swag you can earn. It's not very difficult to add another transform in the current

Shay Nehmad:

Add another pipeline, sorry, in the current architecture.

Riccardo Pinosio:

No. And we're there, and we're very supportive of anybody who wants to contribute. I mean, the goal is to make machine learning in Go a thing, so that we can use Go not only to write the five hundredth web server or to do cloud deployment, yeah, or all of that back end stuff, but also to integrate tightly with machine learning capabilities. So any contribution is welcome.

Shay Nehmad:

Cool. So thanks a lot.

Jonathan Hall:

Thanks again.

Riccardo Pinosio:

Thank you.