Featuring interviews, analysis, and discussions covering leading issues of the day related to electromagnetic spectrum operations (EMSO). Topics include current events and news worldwide, US Congress and the annual defense budget, and military news from the US and allied countries. We also bring you closer to Association of Old Crows events and provide a forum to dive deeper into policy issues impacting our community.
speaker-0 (00:02.604)
...that I think everybody is fully aware of. Every negative thing has a positive side effect. The positive out of Ukraine is that people started innovating more quickly, and I hope we don't have to have our butts kicked before we start doing that more quickly as well.
speaker-1 (00:28.046)
Welcome to From the Crow's Nest. I'm your host, Ken Miller from the Association of Old Crows. As always, it's great to be here with you, and thanks for listening. Before I get to today's topic and guest on cognitive electromagnetic warfare with Dr. Karen Haigh, I just wanted to say a few quick words on current events. We're obviously about two weeks into the conflict with Iran. In our last episode, I had the pleasure of sitting down with retired US Air Force Colonel Jeff Fisher to discuss the first few days of the operation, and he had some interesting insight. So I would encourage you to go back to our last episode, where we talk a little bit about what we're seeing in those first few days. While we've had many successful strikes since then, we've also had our share of EMSO failures. I wouldn't say they're piling up, but they are occurring with a frequency that we are noticing.
We've lost a few aircraft, some to friendly fire, and there have possibly been some questionable effects on our targeting and GPS systems. It'll be a while until we get a clear picture of what's happening, but we can see early on that these are issues that are basically avoidable. We've been talking about them for decades. If we make sure that all of our warfighters have a common air picture and a common understanding of how to operate in the spectrum, a lot of these errors can be avoided in future operations. That's one of the reasons why I was on Capitol Hill last week talking to the US Senate about our new legislative proposal to establish an EMSO combat support agency. While it is a big swing, we have been talking about these same persistent gaps for too long, and they continue to rear their heads in combat. It is certainly a heavy lift to get this all accomplished in one year, but we do feel it is time to do this now, before situations deteriorate further or there's another conflict against a peer competitor with more advanced technology. If we wait longer, it's only going to cost a lot more money and, unfortunately, cost lives too. So we're hoping that military planners hear some of this podcast, understand what we're trying to accomplish with the combat support agency, and take steps this year to close some of these gaps.
speaker-1 (02:41.748)
Also, I was intrigued by a couple of reports coming out of Ukraine over the last couple of weeks. The first showcased the use of directed energy, possibly HPM, high-powered microwave, to shoot down Russian drones that target fiber-optic cables. The second described Ukrainian drones hijacking Russian jamming systems and using the Russian jamming signals as targeting references to home in on a Russian command center. Now, for those who listened to the last episode with Jeff Fisher, when we were talking about Iran we raised this as a possibility: how advanced is Iran's technology? Could they be piggybacking on our signals and causing some confusion in that regard? We don't know for sure, but we're starting to see this in the Russia-Ukraine war. Clearly, if we're seeing it on one front, we'll probably see it on another, if not now then in the near future. So I think both of those developments in the Russia-Ukraine war bear watching, and we should try to learn lessons from them, because we will be seeing them elsewhere. Technology and tactics are going to keep evolving at a dynamic pace, and we just can't keep up without proper organization, leadership, and resources. You're going to hear this message time and again on this podcast, as well as in everything coming out of AOC. So we do encourage everyone to take a look at the combat support agency proposal. You can learn more at crows.org/CSA. All right, on to our topic for today
on cognitive electromagnetic warfare. I am pleased to be here with the esteemed Dr. Karen Haigh. Dr. Haigh is an expert and consultant in cognitive EW and embedded artificial intelligence. Her focus is on physical systems with limited communications and limited computation resources that must perform under fast, hard, real-time requirements. She co-authored the book Cognitive Electronic Warfare: An Artificial Intelligence Approach back in 2021, and she received her PhD in computer science from Carnegie Mellon University with a focus on AI and robotics. I've had the pleasure of getting to know Dr. Haigh over the past number of years, and she's been a guest on the show before. Given everything that's happening in the world today, I wanted to have her back on, because this field is one of the fastest-growing and most dynamic out there. So Dr. Haigh, thanks for joining me here on From the Crow's Nest. It's great to have you back on the show.
speaker-0 (05:03.628)
Yeah, no, it's great to be back. And you're absolutely right. This is a very cool, very dynamic piece of the EMS problem.
speaker-1 (05:10.466)
So I think the last time I had you on the show was probably at the AOC Europe show, a couple of years ago maybe. I started off that interview the same way I want to start this one: by helping our listeners understand exactly what cognitive electromagnetic warfare is. When we say these terms, not everybody is up on the lexicon of our community. So just to set the stage, I want to talk a little bit about the terms and the differences among them, because you'll hear "AI-generated," you'll hear just "artificial intelligence," "unmanned cognitive systems," "machine learning," et cetera. Could you give us a quick overview of this field and what terms mean what in terms of the technology?
speaker-0 (05:55.15)
Sure. So AI is a field of science, a field of study like engineering or physics or math. And every field has subfields: if you think about math, there's calculus and linear algebra and geometry and so forth. In AI there's a bunch of subfields, of which decision-making is one, knowledge management is another, and machine learning is another. Certainly the media would like you to believe that machine learning equals AI, and in fact there are lots of people who use the terms AI and ChatGPT synonymously. To me, ChatGPT is a tool, but it's not AI. AI is a field of study, and machine learning is an area within it. If you talk about an AI-enabled system, it's a system that has AI techniques inside. For example, if you go to the airport and they scan you for weapons, that's a model that was built off real passengers and real weapons, but the model itself is static. It doesn't change without full certification every time the security agencies want to look at it. A cognitive agent takes that next step of being able to not only make decisions and take actions, in the sense of agentic AI, but also learn from its own actions. So a cognitive agent perceives the environment, reasons about what it's perceived, reasons about its goals, then acts to accomplish those goals, and then learns from its experience. And when we talk about that in the EW space, a cognitive EW system understands and predicts the electromagnetic spectrum, makes goal-directed decisions to improve the performance of that EW system, and learns from its own actions. And all of that has to happen at mission-relevant timescales with minimal human supervision.
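The perceive, reason, act, learn loop Dr. Haigh describes can be sketched in a few lines. This is an illustrative toy, not any fielded system: the class and action names are invented, and the scalar "effectiveness" feedback stands in for real spectrum sensing.

```python
import random

class CognitiveEWAgent:
    """Toy perceive -> decide -> act -> learn loop (illustrative only)."""

    def __init__(self, actions, explore=0.1):
        self.actions = actions
        self.explore = explore
        self.value = {a: 0.0 for a in actions}   # learned effectiveness estimates
        self.counts = {a: 0 for a in actions}

    def perceive(self, environment):
        # Sense the spectrum (here: just read a field of a dict).
        return environment["observed_signal"]

    def decide(self, observation):
        # Goal-directed choice: usually exploit what has worked so far,
        # occasionally explore something else. A real system would also
        # condition this choice on the observation.
        if random.random() < self.explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, effectiveness):
        # Update a running average from the agent's own feedback.
        self.counts[action] += 1
        self.value[action] += (effectiveness - self.value[action]) / self.counts[action]
```

Run in a loop against an environment, the agent's value estimates drift toward whichever action actually works, which is exactly the "learns from its own actions" step that separates a cognitive agent from a static model.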
speaker-1 (07:43.03)
So when we talk about EMSO today, we always talk about how AI is driving EMSO, which is inherently joint and inherently multi-domain. And when we look at how we're evolving in this field, we're obviously being pulled in the direction that AI is taking us. So I'd like to ask you about spectrum dominance: projecting, achieving, and sustaining superiority in the spectrum across all domains. How does cognitive EW integrate with other capabilities, such as cyberspace operations, information operations, and traditional electromagnetic warfare?
speaker-0 (08:27.48)
Right. So when you think about a cognitive EW system, it's got to do those three things, the sensing, deciding, and acting, with the learning involved. And when you think about multi-domain operations, it's a coordinated action that covers all of the electromagnetic spectrum plus the adjacent fields of acoustics and cyber. And maybe you've got an unmanned platform where you're also controlling the physical movement of the device. So it's multi-domain: land, sea, space, underwater. Using AI terminology, we would call that a multi-node, multi-task coordination problem. And when you think about just how complex and rich the EW problem is, AI is really the only way you're going to be able to manage it, to throw all the balls up in the air and have them land roughly where you expect them to and where you want them to, because it's got to manage all of the complex trade-offs and interactions you're going to have in that extremely complex multi-domain space.
speaker-1 (09:29.974)
You were just using the example of airport security and how certain models don't update, whereas with AI the idea is to keep updating those models in a cognitive system. With MDO, with multi-domain operations, we already know what happens when an operation starts; we're currently seeing it take place in real time in Iran, and we saw it last month in Venezuela, to a much lesser extent in terms of the substance of the operation. We always say that no plan survives first contact with the enemy. You're constantly having to analyze and update your models, your understanding of the threat, and how you have to maneuver around that threat. How has that trend evolved in recent years? We talk a lot about training, setting up exercises, and going against realistic threats. But when the fighting actually starts, things have to be updated, because too often we get into a situation right out of the gate, through no fault of our own, where the situation is changing more rapidly than we have trained for. Can you share a little insight into how AI and cognitive systems either lead or pull us in training, or how we need to do a better job of integrating cognitive systems into our training? Because being ready to fight has to also mean being ready to change your mind and change your tactics immediately.
speaker-0 (11:06.73)
In real time, on board the platform, you've got that tight feedback loop: whatever is happening that you're detecting in the spectrum, then the decisions that you're making at fast, hard real time, and then the feedback that you get from the environment. And you want to assess your own performance. Did my jamming work? Did my beamforming work? All that kind of stuff. So that's the understanding side, and it's the only way to keep track of the fact that everything is changing constantly, much, much faster than a human can track.

On the decision-making side, two things need to happen. The first is a replanning step, which goes to your point that no plan survives contact with the enemy. You would design a plan that allows for flexibility. For example, you send out a hundred platforms, drones, and you want them all to hit their targets within a few seconds of each other. Well, you know you're going to lose 10% of your drones; you just don't know which ones. That's when the replanning comes in and you reallocate tasks among them. And that is an expected change. You also have the unknown, unexpected changes, where you may have to actually take a step back, a deep breath if you will, and replan from scratch, at least from wherever you currently are in the mission.
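The expected-loss replanning described here, reallocating the tasks of drones that drop out, can be illustrated with a simple greedy rule. A real planner would weigh geometry, fuel, and timing; this sketch only shows the reallocation idea, and all names are hypothetical.

```python
def replan(assignments, lost_drones):
    """Reallocate targets of lost drones to surviving ones.

    assignments: dict mapping drone id -> list of targets
    lost_drones: set of drone ids that dropped out

    Greedy rule (an illustrative stand-in for a real planner):
    hand each orphaned target to the survivor with the lightest load.
    """
    orphaned = []
    survivors = {}
    for drone, targets in assignments.items():
        if drone in lost_drones:
            orphaned.extend(targets)      # these targets need a new owner
        else:
            survivors[drone] = list(targets)
    for target in orphaned:
        lightest = min(survivors, key=lambda d: len(survivors[d]))
        survivors[lightest].append(target)
    return survivors
```

The point is that every target stays covered after the loss, without any drone needing to know in advance which of its peers would fail.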
And then the last step is monitoring and tracking the changes from what you expected. Maybe you expected a certain effect, but it didn't accomplish what you intended. That might be for any number of reasons: you may have a faulty device, or you may be reading the settings incorrectly. But being able to update your models on the fly, based on the immediate feedback from what's happening in the environment, allows you to make a better decision next time. The definition of idiocy is doing the same thing over and over again and expecting a different answer. Well, I don't want my EW system to be an idiot. I want it to say, gee, that didn't work, do something else next time, please.
speaker-1 (13:14.542)
You mentioned just now having to update tasks in real time, especially with your example of drones: you know you're going to lose some, but you don't know which ones, so you have to retask everything once the results come in. Obviously the use of drones and unmanned systems, and therefore counter-UAS, is the flavor of the day in a lot of conversations about modernization, and we think of AI as making this easier and faster than a human can do it. However, there's also a scale issue. Fifty years ago, if you were going to do an operation, you were talking about a strike package or multiple strike packages, but that was a fairly static number of assets across the different domains: a strike package in the air, maybe a naval surface or underwater package delivering effects. If something happened, you were updating that strike package, a fairly limited mission. Now you're talking about potential operations with 10,000 drones, flooding the domains with unmanned systems. So while AI is faster, it's also having to update a lot more tasks. Is the evolution of AI keeping up with the scale necessary for its use in the battlespace?
speaker-0 (14:33.89)
So the short answer is absolutely, and there are many strategies for doing that. The obvious one, which is the same thing we do as humans, is to scope the problem down to something you can accomplish easily. You mentioned air or sea; well, we can do exactly the same thing in the unmanned setting. That might be by role, by geography, or by platform capabilities. The point is that you can down-scope to what is needed. Another idea is to decompose the task in a way that makes sense, so that even if, for example, you lose 100% of your communications and have absolutely no way to communicate, you can still accomplish the task through a decentralized form of coordination. If you have a team of robots and you want them to reconfigure a classroom from rows of student tables into project tables, you can tell the robots that's the goal, and they don't actually have to communicate, because they can watch each other when they go into the room. We can do the same kind of thing with unmanned systems: if I see a drone hit a target and it looks pretty good to me, I don't need to go there; I can do something else.
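The no-communication coordination idea, drones watching the shared world instead of messaging each other, might look like this in miniature. The rule and names are illustrative assumptions, not a real swarm algorithm.

```python
def choose_target(my_position, targets, observed_hits):
    """Decentralized rule: each drone independently picks the nearest
    target it has NOT seen destroyed. No messages are exchanged; the
    observable world state is the only coordination channel.

    my_position, targets: (x, y) tuples; observed_hits: set of targets
    this drone has watched get hit.
    """
    remaining = [t for t in targets if t not in observed_hits]
    if not remaining:
        return None  # everything this drone can see is already handled
    # Nearest by Manhattan distance, purely for simplicity.
    return min(remaining,
               key=lambda t: abs(t[0] - my_position[0]) + abs(t[1] - my_position[1]))
```

Each drone running the same rule against its own observations naturally spreads the team across the surviving targets, which is the "I saw it hit, so I'll do something else" behavior described above.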
speaker-1 (15:52.514)
So what does that mean for spectrum maneuver? That's the phrase used to describe EMSO operations, really coming out of the 2020 strategy, where they said the spectrum is not a domain, it's a maneuver space. And I don't think we've all really agreed within our community, across DoD, on what we mean by maneuver in this age of AI. Could you talk a little bit about what spectrum maneuver looks like in the future as it pertains to cognitive EW?
speaker-0 (16:27.138)
Yeah. So maneuver, in the generic sense of the English word, is just any ability to change what you're doing, to take actions. To the extent that you have, say, a single device and all it can do is jam, your maneuverability is pretty limited, because you've only got one action. You may be able to tweak it a bit one way or another, but you've only got one thing you can do. As we become more and more multi-domain, with more and more capabilities, you suddenly have more axes of flexibility, where you can accomplish things by doing them a different way. Think about electronic protect in a communications setting: you can apply a notch filter to clean up a signal. You could instead route around a jammer so that you're not actually getting hit by it. You can send redundant packets. You can use spread spectrum or a directional antenna. Each of these gives you options. And as we increase our space of options, we start getting into things like telling the device to change position so the antenna can find more signals. Think about what an electronic warfare officer, the backseater on a plane, might do: he says to the pilot, break left. That's a physical maneuver to break a radar lock. Well, maybe the AI can do exactly the same thing, take a physical action that has an impact on the electromagnetic spectrum.
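The notch-filter option in that list is easy to make concrete. Below is a minimal textbook IIR notch biquad written out by hand, assuming the interferer frequency is already known: zeros on the unit circle at the interferer frequency, poles just inside it to keep the notch narrow. A fielded electronic-protect system would adapt these parameters on the fly rather than fix them.

```python
import numpy as np

def notch_filter(x, fs, f0, r=0.98):
    """Suppress a narrowband interferer at f0 Hz in signal x (sampled
    at fs Hz). r close to 1 narrows the notch; passband gain is ~1."""
    w0 = 2.0 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])          # zeros at e^{+/- j w0}
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])    # poles at r * e^{+/- j w0}
    y = np.zeros_like(x)
    for n in range(len(x)):   # direct-form difference equation
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y
```

Feeding in a wanted tone plus a jamming tone, the jammer's frequency bin is gutted while the wanted signal passes nearly untouched, which is the "clean up a signal" option as opposed to rerouting or beamforming.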
speaker-1 (17:59.01)
So obviously in a conflict there's always a lot of confusion, especially in the EMSO fight, because results and impact aren't necessarily readily available; it's hard to understand what happened. I think about the Iran mission, where we know we lost some F-15s early on and we're still trying to figure out what happened. How susceptible are military systems to spoofing and adversarial signals when it comes to manipulating the spectrum and thereby achieving an advantage against your adversary?
speaker-0 (18:36.59)
So the answer, of course, is that this is a cat-and-mouse game. I can design something to be perfect against you, and then you'll turn around and break me, right? In general, there are a lot of ways of detecting spoofing, detecting jamming, and compensating for them. The more you can detect that your information is being manipulated, the more you can compensate for it. That may mean detecting which sensors it came from; there are lots of different ways to look at the provenance and the credibility of the information you're receiving. In terms of impact and being able to measure your effect, some things are directly measurable, and for other things you have to start inferring and making educated guesses. Think about radar track quality: that isn't something you actually measure as you go along. Until the missile falls into the sea, you don't know that it actually succeeded. You're making an educated guess based on algorithms that we've had for 50 or 100 years, and we have a good ability to infer that it worked. Did your range-gate pull-off work? We can take those kinds of ideas and extend them here. I think a lot of our problem with assessing impact, even in a physical setting, is that we don't make those assessments in real time. If you think about a one-on-one boxing match, the boxers know exactly whether something succeeded, because it's instant and they've got full feedback from watching their opponent. We're not doing enough of that to understand the whole picture.
speaker-1 (20:18.35)
In talking about resilience, I oftentimes get into conversations about systems that need to be hardened against electronic attack. When you talk about hardening systems against things like electromagnetic energy and radiation, there are a number of tools you can use. And correct me if I'm wrong, but I know there are ways for certain things, viruses, malware, and so forth, to sort of change an algorithm without even being recognized by the specialists. So I'm trying to craft this question as a non-expert talking to one of the smartest people on this topic. Is there a way to harden the algorithm, so that we can have assurance that the algorithm we're basing decisions on is not spoofed? Because we might not otherwise be able to see a change in it if it is being impacted. Am I completely going off the reservation on this topic, or what are your thoughts?
speaker-0 (21:21.198)
No, you're definitely not off the reservation. It's a great question; everybody's asking it. The short answer is that no, we can never be sure. It's an adversarial setting; they can always out-manipulate us. But there are definitely ways to mitigate those kinds of problems. The biggest and most common, really the easiest one, is data diversity: the more diverse your dataset, the better. In 2017, researchers from UC Berkeley, the University of Michigan, and other institutions put out a paper on data manipulation involving stop signs. The idea was that you train a model to recognize stop signs, and then the manipulation was putting stickers, like Post-it notes, on those stop signs, and the model couldn't recognize them anymore. If, on the other hand, they had created a much more diverse dataset that included multiple languages, in Quebec you see Arrêt, in China you see Ting, or signs that aren't red and white, in parts of the rural United States they're pale orange and dull yellow, or signs like the ones on the mountains in Hawaii with what look like bullet holes because they punch holes for the wind, or the ones in Los Angeles that already have Post-it notes on them, then the model isn't going to be confused by simple manipulations. You have to do a lot to get around a model trained on a highly diverse dataset. That's certainly one thing. Another common technique requires someone who understands the mathematics quite well. Apparently there's a big argument over whether we should be classifying models that were trained on classified data, and the answer is absolutely yes, because you can reverse-engineer the models: if I gave you my model, you could figure out what data it was trained on. So there are techniques for handling that. You could classify the model, or you could do homomorphic encryption, so that the model takes in the raw...
speaker-1 (23:31.606)
You used a term I've never heard before. What was it? What type of encryption?
speaker-0 (23:36.408)
Homomorphic encryption. It's a mathematical property of the encryption algorithm. What it allows you to do is this: the input data is encrypted, all the math on that data as you're manipulating it toward the answer stays encrypted, and then the answer comes out encrypted with the same encryption key it started with. So I can give you my model, and unless you can break my encryption key, it means nothing to you.
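A toy way to see the "math stays encrypted" property is textbook RSA, which happens to be homomorphic under multiplication: multiply two ciphertexts, decrypt the result, and you get the product of the plaintexts. This is only an illustration, with insecure key sizes and a single supported operation; it is not the fully homomorphic schemes being referred to here, which support richer computation.

```python
def toy_rsa_keys():
    """Tiny textbook-RSA keypair. Deliberately insecure; illustration only."""
    p, q = 61, 53
    n = p * q                 # modulus (3233)
    phi = (p - 1) * (q - 1)   # 3120
    e = 17                    # public exponent
    d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)
```

Because enc(a) * enc(b) mod n equals enc(a * b), someone holding only ciphertexts can still do useful arithmetic on your behalf without ever learning the plaintexts.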
speaker-1 (24:05.4)
So what is the metric to determine how diverse a dataset has to be in order for it to be reliable? And how do we attain that level of diversity to be assured? When we talk about training and fielding a new system and testing it up through the technology readiness levels, how do we make sure that we have the right dataset to at least mitigate the potential manipulation?
speaker-0 (24:40.056)
So I'm an engineer, so there's only one valid answer here, and that is: it depends. It depends on your mission. What is diverse enough for your mission? What I recommend to developers is that they start by doing the usual: train the model, cross your fingers, evaluate it, and say that it's 99.3% accurate. Okay, cool. Then you have to do fairly exhaustive holdout testing. That's one way of doing it: ablation tests, where you train on all subsets of the known data, so that you can then turn around and ask how quickly the model can learn from a new example and how resilient it is to surprise. And have a red team come in and say, can I take you down? If you've got a really creative team, what I call my mad scientist team, come in and make up things you believe your system may not be able to handle, that's what gives you the confidence to say it's good enough. Is it going to be perfect? No, we can never guarantee perfection. We can't guarantee it from a cognitive human, and we can't guarantee it from a cognitive silicon system either, or a non-cognitive one, for that matter.
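The exhaustive holdout and ablation testing recommended here can be sketched as a hold-one-condition-out loop. The nearest-centroid "model" below is a deliberately trivial stand-in; the point is the evaluation pattern (train without a condition, score on it, see where generalization breaks), not the classifier.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Trivial classifier: one centroid per class."""
    classes = sorted(set(y))
    y = np.array(y)
    return {c: X[y == c].mean(axis=0) for c in classes}

def nearest_centroid_predict(model, X):
    classes = list(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return [classes[i] for i in dists.argmin(axis=0)]

def ablation_scores(groups):
    """groups: dict condition -> (X, y), e.g. stop signs per country.

    Hold out one condition at a time, train on the rest, and score on
    the held-out slice. Low scores flag conditions the model cannot
    generalize to, i.e. where the dataset is not diverse enough.
    """
    scores = {}
    for held in groups:
        X_train = np.vstack([groups[g][0] for g in groups if g != held])
        y_train = sum((groups[g][1] for g in groups if g != held), [])
        X_test, y_test = groups[held]
        model = nearest_centroid_fit(X_train, y_train)
        preds = nearest_centroid_predict(model, X_test)
        scores[held] = float(np.mean([p == t for p, t in zip(preds, y_test)]))
    return scores
```

A score that collapses for one held-out condition is the quantitative version of "it couldn't recognize the stop signs with Post-it notes anymore."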
speaker-1 (25:54.156)
I want to talk a little bit about another topic, and I wish I had the speaking skills to create a nice transition, but I'm really not sure how to do that. So: adaptive waveform generation and reprogramming. Machine learning models can autonomously design, select, and adapt waveforms in basically real time, without humans being involved. So how do reinforcement learning and, I guess, generative models propose new waveform parameters when the environment changes? How does that whole process work?
speaker-0 (26:38.83)
So waveform selection and reprogramming are all part of that loop of understand, decide, and learn. Understand tells you what's out there: you need to be looking at the environment, characterizing it, figuring out what's happening, and making predictions about the future. That gives you the context from which you can start making decisions. The other key piece of the puzzle is that you have to know what your objectives are: what is the thing you're trying to accomplish? In reinforcement learning terminology, that is your reward. The decide box then figures out what tools are available to it and matches them to the environment it sees and predicts. Can it choose the right actions? In this case, that would be a waveform selection, optimization, or generation step to accomplish those mission objectives. So the decide box is the one that connects the environment to the goals. You can go quite simple on waveform selection: maybe I've got ten waveforms in my system and I just pick and choose among them. Or you can go all the way to the other end, where you're almost writing the bits or the FPGA gates to create a brand-new waveform on the fly, and that's where you might start getting into the generative approaches. Essentially, as long as you have a rich enough embedding space, where you're describing your environment richly enough, and you have enough tools in your bucket, whether high-level or low-level, to combine in interesting ways, then you end up with the ability to create new things, or use known things in new ways, under surprising, changing conditions.
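The simple end of this spectrum, picking and choosing among a fixed library of ten waveforms, is a classic multi-armed bandit problem, one standard reinforcement-learning framing of the "decide box." A UCB1-style sketch, with invented waveform names and a scalar effectiveness reward standing in for real mission feedback:

```python
import math

def ucb_select_waveform(values, counts, t):
    """UCB1 rule for choosing from a fixed waveform library.

    values: dict waveform -> running average of measured effectiveness
    counts: dict waveform -> number of times each has been tried
    t:      current decision round (1-based)

    Balances waveforms that have worked (exploitation) against
    waveforms we are still uncertain about (exploration).
    """
    for w, n in counts.items():
        if n == 0:
            return w  # try every waveform at least once
    return max(values,
               key=lambda w: values[w] + math.sqrt(2.0 * math.log(t) / counts[w]))
```

After each transmission, the caller measures effectiveness (did the jamming or the link hold up?) and folds it back into `values` and `counts`, closing the understand, decide, learn loop described above.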
speaker-1 (28:21.056)
It seems that as this technology evolves, when you're talking about an AI system, there are so many different tools that can be used to bring in the data the model needs to make a decision. We talked at the beginning of the show about how EMSO is inherently joint: it goes across the services and touches every domain. Speaking just from the US perspective, when you look at all the tools necessary to understand, say, that we have to generate a different waveform for a changed environment: if it's an Air Force program, is it getting information from the Navy? Our services and branches are everywhere, but are they talking enough? Whether a system belongs to the Air Force, the Navy, or the Army, are the services sharing enough to make sure that the datasets they're generating, and the waveforms, are accurate based on everybody's experience rather than just one service's experience?
speaker-0 (29:30.446)
I would say that for the most part we tend to be siloed by platform, not even just across the entire Air Force or the entire Navy. There's definitely a push toward doing more of that kind of sharing, both in terms of the descriptions of what we're seeing and in terms of the actions we can take. NATO actually has a very nice basis for sharing real-time observation data called CESMO, Cooperative Electronic Support Measure Operations. It's a standard format across NATO that allows those kinds of contact reports to be shared in real time, so that you can share across an entire multinational battle fleet. We haven't adopted it here, in large part because we have such a history of being siloed. There are definitely people working on it, and I am hopeful that one of these days we'll get to a point where there is enough push from the government and pull from the users to make it happen.
speaker-1 (30:31.69)
So you're saying we haven't implemented it yet in the US, or we're working toward it? Or is it just kind of languishing in purgatory until enough people advance it?
speaker-0 (30:42.542)
I would say that in terms of the NATO standard, we really haven't adopted it. Most people I talk to in the US are not even aware of it.
speaker-1 (30:49.614)
I hadn't heard about it either. That's why I'm asking where it stands in terms of advocacy, because that always touches a part of my heart.
speaker-0 (30:57.56)
Well, the Navy has a standard that they've been working on, a common data format that covers most of the radar needs. Is it at the level of depth that a cognitive EW system would need? Probably not. It's more at the level of setting up a pipeline to share information among humans. But frankly, in our current environment, it's often the pipeline that's the hardest thing: the permissions to share, more than the technology to share.
speaker-1 (31:23.288)
So to wrap up our time, and there's so much more to this topic, we can only scratch the surface, but looking forward at advancements on the horizon, as well as questions around verification and security: what are some of the technical challenges at the forefront of your mind, the ones keeping you awake, that we have to address? Speaking of the US force, or even a coalition force, what are some of the technical challenges we need to focus on to really propel this field forward?
speaker-0 (31:59.246)
Well, I think the biggest challenges have nothing to do with the technology: the acquisitions and contracting and so on and so forth, all of those headaches that I think everybody is fully aware of. Every negative thing has a positive side effect. The positive out of Ukraine is that people started innovating more quickly, and I hope we don't have to have our butts kicked before we start doing that more quickly as well. So that's the non-technical side; those are the biggest barriers.

On the technical side, I think there are a number of, I don't want to call them preconceived notions, but assumptions we make about technical capabilities. For example, everybody assumes you must have a GPU in order to run AI on a system, and that's absolutely not true. You can do it on the smallest, oldest devices there are. You can right-size the AI model for the device you have. There's a lot of work being done to put large language models, well, I guess at that point they're small language models, on microcontrollers. So it can be done, and identifying and then mitigating those assumptions is probably the next level of technical problem we need to address.

And in terms of the algorithms, I think we need to push the envelope more. We're doing quite a bit when it comes to understanding, characterizing what we're seeing in the environment. We're doing much, much less on the decision-making side.
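One concrete example of right-sizing a model for a small device is post-training quantization: shrinking 32-bit floating-point weights to 8-bit integers. A minimal symmetric-quantization sketch follows; it is illustrative only, since production toolchains add per-channel scaling, calibration, and more.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into
    [-127, 127] with a single scale factor, cutting memory 4x."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for (or during) inference."""
    return q.astype(np.float32) * scale
```

The round-trip error is bounded by half the scale step, which is why small models on modest hardware can stay surprisingly accurate without any GPU in sight.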
speaker-1 (33:24.524)
Well, Dr. Haigh, that is all the time we have for today. I really appreciate you taking time out of your schedule. I know you're teaching a lot of classes, and I've always enjoyed tracking the work you do. I have no doubt that a number of our listeners have follow-up questions. Is there a way to get hold of you? Because I can guarantee our listeners that if they have a follow-up question for me, I will not be able to answer it. So I wanted to give you a chance to provide a closing comment and any contact information you can share with our listeners.
speaker-0 (33:58.35)
Okay, well, for contact information, you can find my website by Googling KZHAIGH. I also have three longer-term courses coming up: one in the UK in May, one in the US in June, and one in Singapore in July. So, you know, covering the vast majority of the planet at this point; I just need to figure out how to get to Tunisia or somewhere next. So, lots of options.
speaker-1 (34:27.694)
Great. Well, thank you so much for joining me here on From the Crow's Nest. It's great to have you back on the show, and hopefully I'll see you very soon at one of our upcoming events. I know we have AOC Europe coming up, and I usually see you out there. So I appreciate you taking the time to join me here on the show.
speaker-0 (34:43.034)
Nice to talk to you again.
speaker-1 (34:44.856)
Thank you. All right, that will conclude our time here today. As always, I want to thank our guest for joining us for this conversation. Please take a moment to review, share, and subscribe to this podcast. We always enjoy hearing from our listeners, so please take a moment to let us know how we're doing. That's it for today. Thanks for listening.