Hi. I'm David Keyes, and I run R for the Rest of Us. You may think of R as a tool for complex statistical analysis, but it's much more than that. From data visualization to efficient reporting to improving your workflow, R can do it all. On this podcast, I talk with people about how they use R in unique and creative ways.
David Keyes:In this conversation, Simon and I chat about his work at Posit, the state of AI, and the packages he's developing to work with AI in R. It's a great conversation that touches on some high level topics like what's the future of AI in coding and touches on some really specific things like how to get started using AI in your coding today. Let's dive in. Well, I'm delighted to be joined today by Simon Couch. Simon is a software engineer at Posit where he works on open source statistical software.
David Keyes:And with an academic background in statistics and sociology, Simon believes that principled tooling has a profound impact on our ability to think rigorously about data. He authors and maintains a number of packages, some of which we'll talk about today, and blogs about the process at simonpcouch.com. Simon, thanks for joining me today.
Simon Couch:Yeah. Thanks for having me. I'm excited to be here.
David Keyes:Well, I'm excited because we're gonna talk a bunch about the packages that you've been working on that are specific to AI. But before we get into that, I wonder if we could start by just having you tell me a bit about your job at Posit. How do you divide your time? What does it look like in your kind of daily work there?
Simon Couch:Yeah. So I work on open source R packages broadly. Properly, like, in the org chart, I'm situated in the tidymodels team under Max Kuhn. And that's where, for the first couple years when I was full time at Posit, I was working forty hours a week. Around a year or so ago, I started working halftime on odbc, so, like, an interface to different database back ends.
Simon Couch:Around nine months ago, I started a book, so that's been another sort of chunk of my time. And then around three or four months ago, I did an internal hackathon centered on, like, AI and LLMs. And I guess that's the subject of the podcast today. That's probably, you know, the AI work is something like 30 or 40% of my time. On tidymodels specifically, and maintenance and such there, that's maybe 25%. The book is maybe 25%.
Simon Couch:For a while, ODBC and the database work was around half of my time, but that's closer to maybe 10% or so now.
David Keyes:Okay. And I'm curious where your own interest in AI, and kind of making packages for the R world that interface with large language models, where does that come from?
Simon Couch:Yeah. So it started off in this hackathon. The way that this worked was that some folks at Posit put together groups of, like, 10 or so folks from around the company. And they gave us an introduction to, like, interfacing with large language models via APIs, rather than, like, a chat interface that you would navigate to on the web. And they gave us API keys, and they let us loose for a couple days.
Simon Couch:And at that time, I was heading into spring cleaning, which is a week that the Tidyverse team takes twice a year, one for each hemisphere. And we do all sorts of, like, code updating, technical debt sort of tasks. One of them was specifically about updating code that raises errors. So if you call stop, or from rlang, the analog is abort. We try our best to make errors as informative as possible for users.
Simon Couch:And part of that has been transitioning the tooling that we use to raise errors to this new package and this new syntax, and that's called cli. And so for context, like, within the tidymodels and the tidyverse and the r-lib organizations on GitHub, and all the packages that we release and work on, there's, like, thousands, if not tens of thousands, of these custom pieces of code that raise errors to the user. And our goal was to convert almost all of them to use this new interface, but it's really a sort of fuzzy, a little bit irritating task to transition from all these different ad hoc ways to raise these errors we've put together over the years into this standard interface. And if one were to try to implement that, like, some sort of package that converts the errors automatically, that'd be a really difficult software problem. And so I kinda just wondered at that time, like, I wonder how good these models are at automating this task of converting this erroring code for us.
Simon Couch:And so that's what I ended up working on. And then in the week after, the team actually got to use this package, which was sort of a predecessor to Pal, and probably, you know, called this same key command a thousand times. It, like, takes the code that you have already to raise an error, sends it off to a model with a bunch of information about how to convert erroring code to this new interface. And, ideally, it comes back in a couple seconds, and you're 90% of the way there.
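The kind of conversion Simon is describing looks roughly like this. The specific message and function are invented for illustration, not taken from any real tidyverse package:

```r
n <- -1

# Before: an ad hoc error message assembled with paste0()
old_check <- function(n) {
  if (n <= 0) stop(paste0("`n` must be a positive integer, not ", n, "."))
}

# After: the cli equivalent, with inline markup for the argument name
# and automatic interpolation of `n` into the message
new_check <- function(n) {
  if (n <= 0) cli::cli_abort("{.arg n} must be a positive integer, not {n}.")
}
```

Multiplied across thousands of call sites, each with its own ad hoc string construction, this is exactly the fuzzy, repetitive rewrite that is hard to automate with deterministic tooling but well suited to a model.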
David Keyes:So that's a good example of, like, using AI to handle something that would be incredibly tedious to do on your own. I'm curious if you see that as an optimal use of AI, and if there are other things that you see as, like, good uses of AI at this point when working in R.
Simon Couch:I think that, like, these sort of tedious, code updating kinds of tasks, LLMs are particularly promising for. But those are probably also a subset of, like, I often think about it as, like, forty five second tasks that could be turned into five second ones. With Pal, there's, like, a fixed set of these that I've defined, and then also tried to provide some infrastructure for people to add in their own use cases. And then with Gander, it's sort of more one off, like, whenever you run into one and have this spidey sense that maybe it's something that could be done for you that you're not particularly interested in doing. Those are the sorts of things where I'm really leaning on models right now, to automate those smaller chunks of work that I'm not really interested in carrying out.
David Keyes:Yeah. Are there other ways that you think are particularly promising for AI? Or is that kind of where you've been maintaining your focus at this point?
Simon Couch:That's where I've been maintaining my focus for now. I'm hesitant to, like, make any predictions that are too bold about what's coming next, just because the LLM space has been moving so fast, and I've really only been engaging earnestly with it for, like, a few months now. But, yeah, those sorts of smaller tasks are mostly where I'm focused right now.
David Keyes:So, well, maybe this is getting ahead of where you are. Like you said, you've been doing this for a few months, but I'm curious for your thoughts. Because you hear a lot of talk about, oh, AI is gonna make, you know, writing code completely unnecessary. Everyone's just gonna, you know, chat with a model, and it'll, you know, spit out everything you need. Having worked on these packages, Pal and Gander and others, what are your thoughts on whether AI can go to that level?
Simon Couch:There are certainly people who earnestly believe that's the case, I think. And I might even go so far as to say, like, many of those people know more about these models than I do. And at the same time, like, in my personal experience actually working with these models, in my, like, sort of cursory understanding of how they're working and where they're heading, which, again, I'm very hesitant to make any predictions about where this might be headed. Ultimately, I don't see a lot of evidence for this. In my own personal experience, I get a lot of help from these models writing code and updating code in, like, repeatable ways, where I can really explicitly specify how I'd like the code to be updated.
Simon Couch:In the big picture, the proportion of my work, like, developing R packages, or the work of a data scientist, the process of writing code is probably a relatively small portion of your work compared to, like, debugging and maintaining code. And both in terms of, like, formal evals about how good these models are at doing that, and also my personal experience, I don't see that these models are particularly capable at doing those tasks, which, again, right now are, like, a huge part of my work. And so at least in the short term, I don't fully believe the argument that AI will make writing code unnecessary.
David Keyes:Yeah. I mean, I'd say I could broadly agree with that. I think, at least right now, AI is most useful when you can give it context and you kind of know the right questions to ask. I mean, I know from teaching people, a lot of times we'll see people use AI, and the answers they get back end up confusing them more than anything. For example, we teach with the tidyverse.
Simon Couch:Mhmm.

David Keyes:They'll ask a question to AI, and it'll give them a base R answer, which, I'm not saying there's anything wrong with that. But if you're learning in the tidyverse style and you get a base R answer, you're gonna be incredibly confused. Whereas for me, or someone who's been doing it for a while, you know, you can know how to ask questions in a way that you're likely to get answers that will be meaningful to you. But you can't kind of just spew something at ChatGPT or whatever model and expect it to give you back exactly what you need.
Simon Couch:Right. There's absolutely that skill of, like, learning how to ask the question. And I think, unfortunately, I haven't really tried to, like, put this in words before. But there's also a little bit of a spidey sense for, like, the kinds of problems where the model can turn the forty five second task into the five second one, or when it's gonna give you an answer that's gonna sink five minutes of your time because you're trying to debug it, or it sent you down a rabbit hole that's probably not a productive one in the long run.
David Keyes:Yeah. That makes sense. Well, let's talk about the packages that you've been working on. So we're gonna talk about two. There's one called Pal, and then there's one called Gander.
David Keyes:Can you talk about what each of them do?
Simon Couch:Yeah. So Pal is sort of a generalization of this initial idea I had about updating this erroring code. And what the package does is it supplies an extensible library of model assistants for working on R code. So the package comes with a preset number of assistants, where you can press the key command and highlight some code. And in the back end, what the package is doing is supplying all sorts of context to sort of teach the model real quick how to do the task that you're asking it to do.
Simon Couch:And then, as long as all goes well, code begins streaming directly into the document that you're working in. Under the hood, these are using ellmer, which is a new package coming from Hadley and Joe that provides an interface to any, or many, of the major large language model providers out there. And because Pal uses ellmer under the hood, you can use any model that you would use with ellmer with Pal. Gander is a newer package that I've been poking at recently. And I think the best way to phrase it is sort of like a Copilot alternative that knows about your R environment.
Simon Couch:So I think, at this point, many folks have experimented with Copilot auto complete, whereas you're working in a document, the model will receive all the code context from within that document. So it can see the lines of of text that are surrounding where your cursor is at. But the thing that it can't see and the thing that I ultimately wished it could see was my R environment. Like, if I have some data frame inside of my source file that I'm working on, my R file, the thing that I really want the model to know about is the columns inside of the data frame and how many rows it has and things like that. And so Gander is a Copilot alternative that knows about your R environment and how to describe it well to models.
David Keyes:Well, that actually seems useful. I mean, having just put together some lessons on using AI with R. I assume that works both in RStudio as well as in the newer editor, Positron. So, yeah, it seems like that's another benefit, that it can be used across editors.
Simon Couch:Right. So both Pal and Gander in the back end are implemented using the same two tools. So one of them, I talked about, was ellmer, and the other one is the RStudio API. And when I say, like, it uses the RStudio API in the back end, it sounds like that probably wouldn't work in Positron. But one of the nice things that the folks working on Positron have done is created what they call shims, which allow all the same commands that you could use to interface with RStudio to interface with Positron.
Simon Couch:And so both Pal and Gander work in both RStudio and Positron, which is a really nice bonus. At the same time, using those tools specifically makes for some limitations in terms of the UI as well.
David Keyes:How so? What do you mean by that?
Simon Couch:So Pal and Gander both work by writing directly to the source files that you're working in. And this kinda feels like magic when they get it right. In that, like, you ask the model something, and then it's there. And there's no sort of, like, clicking or copying and pasting or moving around that you need to do.

David Keyes:Right.

Simon Couch:But the bummer is that if you accidentally get a model rambling, or if the model does something that you don't want it to do, you're backspacing or undoing to get back to where you started.

David Keyes:I see.

Simon Couch:The other way, than the way that I've been doing it, which is everything comes from R, is to do it from the angle of the IDE. And so, like, in VS Code, a lot of these extensions have some sort of interface that's like, here's a diff. Like, these are the lines that I would delete, and here are the lines that I would add if you gave me a chance to. Is that okay? And then you say, okay. Sounds good. Which prevents that problem of, like, a bunch of stuff being dumped into your document.
David Keyes:Got it. That makes sense. Well, let's have you give a little demo. So we talked about having you walk through the Gander package. You wanna put your screen up and show us what that looks like?
David Keyes:Hey, David here. Just wanted to let you know that at this point in the conversation, we switched to a screencast. Now, obviously, showing code doesn't work very well in an audio podcast. So if you wanna see the rest of this conversation, check out the video version of this podcast on YouTube. You can find a link to that in the show notes.
David Keyes:A piece that you sent to me, that you said data science differs a bit from software engineering in that the state of your R environment is just as important, or more so, than the contents of your files. So that's what you're getting at here. Right? Like, unlike in, you know, other types of software engineering, for example, the state of your R environment, or whatever the analogous thing to your R environment would be, is less relevant.
David Keyes:Whereas here, because, you know, what we care about is, like, the data, you need to be able to pass that to a model in order to get good responses. Am I summarizing what you're saying accurately?
Simon Couch:Yeah. I think that's very much so what I was trying to get at in that quote, is that the existing tools are kind of assuming that all the information you need is contained in, like, lines of code. In the context of data science, there's this other piece that's really important for any model to know about in order to complete code properly.
David Keyes:That's cool. And is there any other tool that does the same kind of thing, where it, you know, passes the state of your environment, your data objects, alongside prompts, that you know of?
Simon Couch:I don't know of any. I would be surprised if there were not, though. Like, I don't know that I was inspired by anything specifically in writing this, but I would imagine that somebody probably did this before me, maybe even in R.
David Keyes:So what model is it using? Like, you haven't done anything to say, like, use ChatGPT or use Claude or anything. So how do you, or can you, choose a model? And if so, how?
Simon Couch:Yeah. So I'm hesitant to show any specific code because the interface to make this happen is actually gonna change soon.
David Keyes:Yes. Yeah. That's fine.
Simon Couch:But, yes, the user can use any model that is supported by the ellmer package. And the way that setup works is you just tell Gander, like, by the way, this is what an ellmer chat that I want to use looks like. So when I'm developing this package, in my day to day use, I tend to lean on Claude. But in using Gander, you can use Claude. You can use, like, ChatGPT from OpenAI.
Simon Couch:You can use locally hosted models, which are effectively free. You know, like, you're not paying for a service, and you're not sending your data off somewhere else. At the same time, the kinds of models that our laptops can host on their own tend to not be as powerful as something like Claude or ChatGPT.

David Keyes:Sure.
David Keyes:So is it then, like, I know within ellmer, you can correct me if I'm wrong, but I think you, like, set an environment variable with your ChatGPT or Claude API key or something like that. And then does Gander just recognize that? Just at a high level, like, how does it know which model to use?
Simon Couch:Right. So there's sort of a config associated with Gander, and it's just an option. And that option says, call the, you know, chat_claude() function from ellmer.

David Keyes:I see.

Simon Couch:Gander doesn't know anything about, like, that API key that you might have set up in order to use ellmer. ellmer is still the package that's taking care of all that authentication and stuff. But Gander knows how to talk to ellmer chats, and that's what's happening under the hood.
David Keyes:So Gander is the package that's formatting the prompt, essentially. Then Gander sends it to ellmer, which takes care of all the authentication, all the stuff that you need to do to interface with Claude or ChatGPT or whatever model you're using. The response comes back to ellmer. ellmer passes that back to Gander, and then Gander pastes it into your editor.
David Keyes:Is that what it looks like?
Simon Couch:Spot on.
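At a high level, that round trip can be configured in a couple of lines of R. This is a sketch, not authoritative setup instructions: it assumes an Anthropic API key is already set as an environment variable, and, since Simon notes above that the interface is about to change, the function and option names here reflect the development versions (ellmer's chat_claude(), gander's .gander_chat option) and may differ in released packages:

```r
library(ellmer)

# ellmer handles authentication and the provider API; you just send prompts
chat <- chat_claude()
chat$chat("Write an R function that counts missing values per column.")

# Gander is then pointed at an ellmer chat object via an option
options(.gander_chat = ellmer::chat_claude())
```

Because Gander only talks to the chat object, swapping providers is a one-line change to that option.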
David Keyes:Yep. Cool. Why Claude? I've heard several people say that recently. They really like Claude best for editing.
David Keyes:I haven't used different models enough to have seen a difference. So I'm curious. Sounds like you have. Why do you think Claude is the best?
Simon Couch:I would hesitate to speak too definitively on this because, again, I'm relatively new to this, and this is mostly just, like, a vibes evaluation. And I started out, like, the API key that they handed me when we started experimenting in this hackathon happened to be a Claude API key. Claude tops, or, you know, is a peer with, many of the most performant LLMs on many benchmarks, which are just, like, various ways of evaluating how good a given model is. And then there's also the component of, like, that sort of taste of the model or something. That's kind of harder to capture in a benchmark or an evaluation, but I've just tended to see that the model takes instruction better than others I've seen, where if you tell it, this is what I want the response to look and feel like, it tends to give responses that look and feel like that more commonly.
David Keyes:It's more compliant, you're saying. Not as sassy as ChatGPT.
Simon Couch:At the same time, so, like, right now, in the development version of these packages, it's just been a default model under the hood. So, like, if you happen to have an API key for Claude set up already, there's, like, no config that you need to do ultimately. That said, I think that will probably change before I send these off to CRAN. And that's because, like, long term, I kind of hope that I can host a model on the laptop that I'm working on right now that is good enough at listening to what I ask it to do, and knows enough based on its compression of the Internet, to, like, complete just as well as Claude would. And so I don't wanna bake in a default in these packages that folks should be relying on these major corporations.

David Keyes:Well, I'll ask since it's been in the news. It's January 30 as we're recording this. Have you used DeepSeek?
Simon Couch:I've used DeepSeek a couple times just poking around.
David Keyes:Yeah. But not with Gander. I assume it's probably not set up to be able to use that yet.
Simon Couch:So, actually, it is possible to use DeepSeek in the same way that you would use any other model with Gander. In ellmer, and this is just in the development version of the package as we're recording this, so you would need to install ellmer from GitHub rather than from CRAN. But the package supports DeepSeek at the moment. And so you would just configure that DeepSeek chat with Gander, and ellmer would take care of everything.
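Concretely, the swap Simon describes might look like the following. This is a hedged sketch: the chat_deepseek() constructor and the .gander_chat option name are taken from development-version documentation and may differ, and a DeepSeek API key would need to be set:

```r
# Install the development version of ellmer, which supported DeepSeek
# at recording time (it was not yet on CRAN)
# pak::pak("tidyverse/ellmer")
library(ellmer)

# Point Gander at a DeepSeek chat instead of Claude; ellmer still
# handles authentication and the provider API underneath
options(.gander_chat = chat_deepseek(model = "deepseek-chat"))
```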
Simon Couch:That said, there's been so much hype around DeepSeek, especially in the last week, that the APIs have been down probably more than they're not.

David Keyes:Oh, really?

Simon Couch:So I haven't lucked into, like, logging in at the right time to give that a try.
David Keyes:But isn't it, I mean, I know it's open source and it's relatively small. I guess, I mean, it's new to me too, so I haven't really looked into this. But, you know, like, Llama or models like that. I guess I don't know if it's small enough that you can run it on your own laptop, but I wonder if in the future it might be.
Simon Couch:So the model, I think it was called V3, that they released, like, a month or so ago, made a little bit of a splash, but not nearly as much as the model that they released a week ago. That model is something like 500 billion parameters or something, and you can kind of approximately think about the number of parameters as the number of gigabytes of RAM you would need to run the thing locally.
David Keyes:Mhmm.
Simon Couch:You know, unless you have a pretty amazing computer, you're probably not running the 500 billion parameter model. So, yeah, the reasoning model that they put out a week ago, you know, that's probably analogous to something like o1, which, quote, unquote, thinks for a bit before it does anything. I've found those models to be kind of painful to use with Pal and with Gander and Ensure, because during that process of thinking, there's no user interface happening at that point. And when you use something like Claude or a local model, you immediately start getting text streamed into your document.
Simon Couch:So you know that something is happening. Whereas, with the thinking model, you press the command, you type in your input, you start waiting, and you're like, did anything happen?

David Keyes:Yeah.

Simon Couch:Which, I mean, surely, like, OpenAI and DeepSeek have found ways to put together interfaces such that people know that something is happening, and I just haven't put the time in to figure out what that might look like with these packages.
David Keyes:Okay. I was listening to the Hard Fork podcast, from the New York Times. I don't know if that's a podcast you listen to. So they did, I don't know if you listened to it, this, like, emergency podcast a few days ago about DeepSeek.
David Keyes:And I may be misinterpreting what they said, but I thought they said that DeepSeek, or whatever their reasoning model is called, would actually give you, like, basically, almost as if it was articulating its thought process. I know I'm anthropomorphizing a large language model. But they were saying, you know, that's something that the o1 model from OpenAI doesn't do.
David Keyes:And they thought, oh, this is the kind of thing that might be incorporated, because it's actually really helpful to be able to have a little bit more visibility in terms of what's going on, rather than just, you know, like you said, sitting there and, like, waiting for a couple minutes until you may or may not get a response back.

Simon Couch:Right.
Simon Couch:Yeah. The CEO, I think that's the title of that person, from Anthropic put out an essay yesterday where one of the things he said was, like, oh, it seems like it's a really nice user interface thing to print out all that thinking, which is something that, like, o1 from OpenAI hasn't done up to this point. In terms of, like, what that looks like in Pal and Gander and stuff, again, these things are writing directly to your files. So when they're thinking out loud, it's actually just inside of a little XML think tag, slash think to close it.
Simon Couch:And different applications have ways of just waiting until it's done thinking. And then once that XML tag completes, the model's like, okay. I'm ready to, like, respond. But the way this looks in Gander and Pal right now is it just, like, dumps stuff into your R source file.

David Keyes:Okay. Interesting. Yeah. That would probably not be the best for users. But that actually gets me to a question I was wondering about. You talked about, you know, there being pros and cons of it just pasting the response directly in. I'm curious if you can talk about those, and why you chose to have it, at least it seems like by default, replace the text versus, like, putting it below.

Simon Couch:Yeah. Well, I still think the nice thing about it replacing directly is that it allows for really quick iteration. In that, like, if it drops some code in, it's selected already, and if you're just triggering the add-in again, then it's gonna replace the thing that it put in, and there's no, like, deleting or copying and pasting. And then on the drawback side, it's really unpleasant when a model begins writing things out to the source file, and you immediately know that it's probably not what you want to be there. And you kinda just have to wait for it to finish.

David Keyes:Yeah.
Simon Couch:I would say that that's mostly just an artifact of, like, the limitations of the way this tool is built. Like, this happens via the RStudio API. And the tool that I have access to, to, like, interact with your RStudio session, is, like, modifying your files, and not necessarily, like, giving you a nice diff so you can, like, think for a second before you let the thing drop text in or whatever.

David Keyes:I see.

Simon Couch:So I kind of feel like, as promising as these tools are, and even with how much time they save me day to day, I probably see this longer term looking like a UI that allows for that sort of thing. Like, you can stop the thing at any point. You can choose to reject the changes that it wants to make. And, hopefully, like, authenticate easier too. There's this, like, config process that you have to work through when you first install the package, which, in general, with the Tidyverse, we try to avoid. Like, you should be able to install the thing and get going.
Simon Couch:And hopefully, this sort of functionality can live in an interface like that at some point.
David Keyes:Got it. So what are your plans moving forward for Pal, Gander, and any other AI packages you may be working on?
Simon Couch:Yeah. So at the time that we're recording this, the next few weeks for me is going to look like coming back and revisiting each of these packages and kind of buttoning up the loose ends, or whatever the phrase is. Each of these three were me kind of learning in public, experimenting with what these things might feel like. And so before I send these things off to CRAN, I wanna squash a good few bugs. And at least in the case of Gander, get these things working a little bit better outside of R files.
Simon Couch:So right now, if you're working in, for example, a Quarto, like, markdown file, that problem of, like, writing stuff directly into the file is kind of difficult. And I found that most of the time, I prefer not to use Gander to write code when I'm inside of a Quarto file. Beyond that, I think trying to figure out how to solve this interface problem of, like, writing directly to the source file is what's next. And that will probably involve coming at this from the angle of, like, an IDE extension, rather than from an R package that does everything that it needs to do using only R code.
David Keyes:So would that be, when you say an extension, I know Positron obviously has that model of extensions. Would that just live in Positron? Or, I don't know that RStudio has that ability. I mean, you tell me if I'm wrong.
Simon Couch:Yeah. No. I'm not sure what that would look like yet.
David Keyes:Okay. Yeah. Because, I mean, I know for the AI course that I've been putting together, I showed the Codeium extension, which, now that we've talked, I'm like, oh, that is deficient because it doesn't have access to the objects that I've created. So I'm excited about your package, but also because I need to go back now and record a new lesson.

Simon Couch:Well, hopefully, we'll be keeping you busy, unfortunately.

David Keyes:I mean, fortunately or unfortunately, that's the nature of it. And, you know, I presented the course as, like, this is what exists right now, and I'm gonna continue to update it. So I look forward to adding to it. I definitely need to add a lesson on Gander, having seen you walk through it.
Simon Couch:Sure. Yeah. I'm happy to hear that.
David Keyes:Yeah. Cool. Well, if people wanna learn more about the packages, about you, what are the best places to connect and and do that?
Simon Couch:Yeah. At the time that we're recording this, yesterday, so that's January 29, a post went up on the Tidyverse blog about Pal and Gander and Ensure, and sort of where the ideas for those packages came from and how they're related and how to get started with them. And so that's a good place to start if you're interested in learning about those packages specifically. In general, I write quite a bit about the things that I'm working on on my personal blog, which is simonpcouch.com. And there's an RSS feed there if you're interested in following.
Simon Couch:And the same goes for the Tidyverse blog. There's maybe 10 or 15 people that contribute to that, but every once in a while, I'm dropping in there. And, otherwise, I'm relatively active on social media. So if you drop on to my website, there's links to all the different social media that I'm using relatively frequently.
David Keyes:Okay. Great. Well, we'll make sure we add a link to that Tidyverse blog post as well as to your website so people can find you there. Great. Well, Simon, thank you so much for chatting with me about all this package development that you're doing.
David Keyes:It's really fascinating to see.
Simon Couch:Yeah. Thanks for having me. This has been fun.
David Keyes:That's it for today's episode. I hope you learned something new about how you can use R. Do you know anyone else who might be interested in this episode? Please share it with them. If you're interested in learning R, check out R for the Rest of Us.
David Keyes:We've got courses to help you no matter whether you're just starting out with R or you've got years of experience. Do you work for an organization that needs help communicating effectively with data? Check out our consulting services at rfortherestofus.com/consulting. We work with clients to make high quality data visualization, beautiful reports made entirely with R, interactive maps, and much, much more. And before we go, one last request.
David Keyes:Do you know anyone who's using R in a unique and creative way? We're always looking for new guests for the R for the Rest of Us podcast. If you know someone who would be a good guest, please email me at david@rfortherestofus.com. Thanks for listening, and we'll see you next time.