🎙 Welcome to Rarified Air: Stories of Inspired Service, a podcast that takes you on a journey into the DNA of InterSystems. I will be your guide as we explore how our unparalleled commitment to customer service fuels limitless human potential.
🤝 Join us as we dive into the culture of InterSystems and share the stories of the people who make it all possible - our customers, partners, and employees. From helping healthcare providers improve patient outcomes to powering the world’s most important institutions, we’ll show you how our dedication to customer service excellence is in rarified air.
Tobias Zwingmann [00:00:00]:
One important thing is to define a success criterion or measure for success, which turns out to be really hard in AI, because in the end, everyone wants to have this 20% task completion improvement or 40% higher quality. But how do you measure that? You have to find scenarios where you can put people into different groups and just experiment: okay, how is this group performing versus how is that group performing? Because it's very hard to figure it out otherwise, just looking backward and saying, okay, what was the increase now in leads or overall quality? Maybe something else happened which was not due to AI. And so, yeah, maybe just keeping the same level with AI was actually a win, because there were some other external factors that came in. So I think this is important, just thinking about how to keep these projects on track. But of course, once you have one of these candidates, one of these use cases that actually works, you need to be able to iterate and improve. That's the final step: to execute.
Intro/Outro [00:00:51]:
Welcome to Rarified Air: Stories of Inspired Service. Our host, John Paladino, head of client services at InterSystems, will use his 40 years of experience to show you how to build a successful customer service program and highlight stories of innovation with customers. Join us as we explore the past, present and future of service, from AI's promise to the enduring power of the human touch.
John Paladino [00:01:20]:
With me today is Tobias Zwingmann. He's the managing director of Rapid AI, and I think you're going to be very interested in what he has to say about AI. And also with me today is Asaf Sinai from InterSystems. Welcome.
Asaf Sinai [00:01:34]:
Thank you.
Tobias Zwingmann [00:01:35]:
Thanks for having me.
John Paladino [00:01:36]:
So, Tobias and Asaf, I'm so glad you're here, but can you take a minute and just share your backgrounds? Tobias, start with you.
Tobias Zwingmann [00:01:43]:
Yeah, sure. So my background, if I go really far back: originally I studied business administration. That's what I did back in 2006. But at heart, I've always been kind of a techie. Back in school, I created my first websites, sold them and so on. But then after school I figured, okay, maybe I should do some business stuff. So I did that. But quickly after that, I transitioned back into IT.
Tobias Zwingmann [00:02:03]:
I didn't know how that would turn out. I did a master's degree in IT management, and my thesis at the end was about exploratory data analysis. And this somehow got me onto this whole data science train, because I learned how to program in R. Then I was hired for a role that needed R programming and data analysis skills. This is how I ended up coming into this data analysis and data science track. I worked in a data science role for a couple of years, also in digital product development, until I finally transitioned into founding my own company, which is now Rapid AI, helping companies adopt AI. It's been a great journey.
John Paladino [00:02:35]:
A lot of overlap between business and understanding data and the technology to get that data.
Tobias Zwingmann [00:02:41]:
Yeah, absolutely. And so far it's been really helpful for me.
John Paladino [00:02:44]:
Well, you've done well. So, Asaf, I know you've been working with us for 21 years. You're in an engineering support group, which we're going to hear more about a little bit later, but tell us a little bit more about yourself.
Asaf Sinai [00:02:57]:
I love data, and I've been focusing on analytics technology in recent years, so the progression to AI is quite natural for me, in terms of interest and in terms of helping customers utilize the goldmine of data they have. So I've been enjoying learning more and more about those technologies, doing some hands-on experiments, breaking it a little bit, fixing it, so when customers start using it, we're in a better spot to help them.
John Paladino [00:03:28]:
Is it fair to say you're passionate about helping people and you're curious by nature?
Asaf Sinai [00:03:34]:
Absolutely.
John Paladino [00:03:36]:
All right, let's dive right into AI. Everybody wants to know, and Tobias, you have unbelievable guidance for companies on how they should approach and execute anything related to AI, especially generative AI, which is all the buzz right now, and everybody is listening in and trying to learn from each other. So let's get right into it. So if a company is considering using generative AI, how would you contrast a good approach and a bad approach?
Tobias Zwingmann [00:04:07]:
All right. Okay. I think there are multiple dimensions to that, but probably the worst thing you can do at the beginning is to treat AI purely as an IT topic. And that's what a lot of companies are still doing. They think that AI is technical, it's complicated, so we just hand that off to IT and then let them figure it out. But what happens is, in the end, you will just give birth to another IT project. And I think, in the end, AI really touches more aspects of the business. And I firmly believe that the whole company, and especially leadership, needs to understand what AI as a technology is really capable of, because it's a horizontal technology.
Tobias Zwingmann [00:04:38]:
So I would always approach it more from a business perspective and see where you can integrate that. And I have a process to walk companies through that. It's extremely important where you start this project. If you start it as an IT project, it'll be really hard to get it out there. But if you start it from a more business perspective, I think you have much better chances of bringing it to success.
John Paladino [00:04:56]:
And when you say horizontal, you mean like images versus text versus unstructured data, is that what you mean?
Tobias Zwingmann [00:05:03]:
Yeah, that's one aspect of it, but also in terms of use cases. So take generative AI, for example, especially large language models, where you essentially do next-word prediction. That's a very horizontal and large capability, and text and language are everywhere in businesses. So you have to be aware of what this technology can do, what it is actually capable of doing. To give you a simple example, ChatGPT, or any large language model, can generate any piece of content, but it can't really do any math or any calculations for you. A lot of people don't understand that. And if you approach it from the wrong use case, it will be really hard to fix these errors later on. And because this touches every different business unit and every different business department, you have to get that right from the start and educate people about what this technology can do and what it can't do.
Tobias Zwingmann [00:05:46]:
And I think this is where my approach differs a little bit from what other people are doing, because I say start with understanding the technology. I'm not saying every person in the company needs to become an AI expert, but you should be on a level where, similar to how you don't need to be an electrician to know that you must not put your finger into a socket when you deal with electricity, with AI you need to learn some basic fundamentals: what can the technology do, what can't it do, and what's behind this term. It's not only generative AI; there's more to it than that.
John Paladino [00:06:13]:
So it's like a chainsaw, for example.
Tobias Zwingmann [00:06:15]:
Exactly.
John Paladino [00:06:16]:
Before you use it, you should understand how to use it properly.
Tobias Zwingmann [00:06:19]:
Yeah, yeah. And I also disagree when people think that AI is just another tool. I really treat it more as a technology. ChatGPT is a tool, the software, but AI and the whole paradigms behind it, that's more of a technology, and it requires a different way of thinking to approach problem solving in different business aspects.
John Paladino [00:06:35]:
So understanding AI is the first step. How would you approach it after that? Do you hire consultants and bring in consultants? Would you send people off to school to study it more deeply? Would you experiment, kind of dive in and see what happens, what works and what doesn't?
Tobias Zwingmann [00:06:52]:
Yeah, I'm always a fan of bringing consultants in, because I'm doing consultancy myself. But honestly, I think it would be too early to bring consultants in at this stage. So, learning AI, just to wrap that up: it's not about sending people off to some kind of academy for half a year. If you just get hands-on, experiment, or learn maybe one hour per week or so, I think you're really well ahead of a lot of your competitors. And then once you really want to start off with something, I always recommend looking at the stuff that you're currently doing. Lots of people approach AI from an innovation perspective, which is fine, but it turns out that innovating new things and new customer experiences is much harder than trying to improve what you're doing right now in your business.
Tobias Zwingmann [00:07:32]:
So I always recommend starting by looking at different business processes inside different business units, for example sales or marketing, and identifying what I call pain points and bottlenecks. Pain points are things that are currently painful for you. For example, if you have low-quality leads in your sales pipeline, that's a problem right now. But you also have bottlenecks, things that are limiting future growth. In the sales example, that could be saying, we have a positioning problem in our company, we are too interchangeable with everyone else in the market, or other examples that are more targeted at future capabilities that you need to build as a company. But once you have pinpointed those, I think only then are you in a good position to figure out, in the next step, where you can actually map AI capabilities, the things that you learned in the first stage, to these different pain points and bottlenecks and identify use cases.
John Paladino [00:08:18]:
I'm going to jump ahead a little bit. So when we're facing priorities and problems, look for the most impactful problem we can solve with the least amount of effort. Is that what you would do at this point, or does that come later?
Tobias Zwingmann [00:08:31]:
Yeah, I think it sounds great in theory, but the problem is that those really high-impact problems are very rarely solved with very easy means. Even with AI, it turns out to be really difficult. So the way I like to approach it is to look for departments or parts of your business where you currently have the biggest sum of smaller problems. And once you have that, you can identify different smaller problems that kind of accumulate or that have a shared root cause, and try to work on those smaller problems, because you will not only learn a lot about how AI works, but also be able to transfer learnings from one project that maybe did not work out as expected to another project in your use case pipeline from the same business vertical. And I think this is where a lot of companies get it wrong: they treat AI as a horizontal technology and also as a horizontal business effort, and have all these AI prototypes and projects going on across the whole company, but they can't really learn from their failures. They can't really learn as an organization and make progress in growing AI literacy. So I would rather say: look not at big problems, but at business areas or business units that have accumulated big problems or have a high impact on your business, and then break it up into lots of smaller problems, because it turns out those are much easier to solve with AI.
John Paladino [00:09:41]:
Yeah, interesting. If you don't follow this approach, you can end up wasting a lot of time and energy and have nothing to really show for it. So if I could summarize so far: learn a little bit about AI, enough to understand it; pick some priorities that impact the business; then, third, map AI against those problems to get them aligned.
Tobias Zwingmann [00:10:03]:
Absolutely.
John Paladino [00:10:04]:
Then what's the next step?
Tobias Zwingmann [00:10:05]:
So the next step is then to break it down into actual use cases that you can work on. Use cases don't have to be overcomplicated things, but you have to think about them a little bit. The way I like to think about them is in four dimensions: What's the problem we are solving? What value is created once we solve that problem? What's the solution we are intending for that? And what kind of data do we need, or do we need any data for that? Because it turns out that for generative AI, there are also use cases where you don't need that much data. That was really different before, in non-generative AI times, when you had to train your own models and so on. Now you can also do a lot of things when you don't have much data available. And once you have these four dimensions sorted out, I also like to consider a fifth dimension, which I call the prototypability of a use case.
Tobias Zwingmann [00:10:46]:
So I like to follow the 20/20 rule for that, which is...
John Paladino [00:10:49]:
What's the 20/20 rule?
Tobias Zwingmann [00:10:50]:
Yeah, the 20/20 rule says that if you try to build a use case, slice it up into increments that you can deliver in either 20 days of work or $20k of budget. And that's talking about US and European dimensions; in other parts of the world that might be a different number. But this gives you a sense of the scope of the initial increment you should ship, because if the initial increment, the first version of the prototype, is much larger than that, then one of two things has happened. Either the project's scope is just too big, or you're looking at highly automated and highly integrated use cases that need a lot of development effort left and right, effort that has nothing to do with AI, in order to get them off the ground. And in that case, I would like to shift them over to a more augmented scenario that is not so highly automated and integrated.
Asaf Sinai [00:11:36]:
John, I actually have a story about you, about me and that use case, and why it's very important to...
John Paladino [00:11:43]:
I have 20/20 vision.
Asaf Sinai [00:11:46]:
So we survey our customers for every problem that is solved and closed, and I wanted to create a dashboard for you to be able to filter those surveys. We applied some natural language processing, beautified the dashboard and everything, and worked on that for less than 20 days, something around that number, and I was very excited to show it to you. And I showed you how to filter, how to see the visualizations and everything. And then you looked at me and you said, I read them all. I don't need to filter; I read all the surveys.
John Paladino [00:12:21]:
I'm kind of weird like that. I like to plow through all the raw data, but that's not what most people want to do.
Asaf Sinai [00:12:28]:
I learned to check with the end user about the use case and kind of start small and collaborate to make sure you're delivering the right solution.
John Paladino [00:12:37]:
So let's roll back for a second. Asaf, how did you learn about AI? Did you take the approach that Tobias is recommending?
Asaf Sinai [00:12:45]:
Actually, I was inspired by Tobias. I went through a class about AI and analytics and BI, based on a book you wrote; O'Reilly had kind of an online class. And I read the newsletter. It's a weekly newsletter, and what I loved about it is how practical it was, helping you better understand and do some hands-on experiments. And I started doing just that. Obviously, ChatGPT made it easy for all of us to get our hands on it and try it out, and I slowly started looking for use cases at InterSystems and in technical support, to learn more about it.
John Paladino [00:13:26]:
Yeah. For our listeners, Asaf is a well respected leader in the support organization and he did this in his free time.
Asaf Sinai [00:13:34]:
That's right, that's right.
John Paladino [00:13:36]:
So you followed Tobias's advice perfectly.
Tobias Zwingmann [00:13:39]:
Awesome.
John Paladino [00:13:40]:
And then you looked for use cases, and then the third step was to apply AI, just to map it to see what fits. Did I get that right, Tobias?
Tobias Zwingmann [00:13:50]:
Yeah, yeah. And also, plan for failure, I would say. Because make no mistake, AI projects are super hard. At their core, they are software projects plus data projects on top, and each of those two is already difficult to nail. So AI can be really tough to figure out. I always recommend having a plan, but being ready if the plan does not work out as expected. There's this saying: everyone has a plan until they get punched in the face.
Tobias Zwingmann [00:14:11]:
So that's what happens with AI.
John Paladino [00:14:13]:
Famous Mike Tyson.
Tobias Zwingmann [00:14:14]:
Yeah, the famous Mike Tyson quote. So it's not just about being ready for that yourself, but also informing your stakeholders about it. Because the worst thing you can do is overpromise what you're going to do with AI and then underdeliver. And especially in generative AI, it's very easy to fall for that fallacy. I call it the 80% fallacy, because with generative AI and ChatGPT, for a lot of use cases you can get to 80% good enough within 20 days, for example. But getting to 90 or 95% sometimes takes just forever, or it's not even possible to get there. And figuring that out, especially at the beginning, is very hard, and there's also no learning blueprint you can follow. It's just experimentation, and you need to get into that experimentation mode.
Tobias Zwingmann [00:14:54]:
I rather prefer to keep expectations low at the beginning, while at the same time selling a larger vision, because we just don't want to have all these little AI use cases everywhere. We need to have a roadmap where all these small wins accumulate and together have a much bigger effect than they do individually. But also be realistic about what you can achieve in one month and what the roadmap looks like.
John Paladino [00:15:15]:
Asaf, what were some other examples of what you've attempted to do with AI, and were they successful?
Asaf Sinai [00:15:19]:
So at InterSystems support, we're the experts, and we've been doing it for many, many years, supplying the tools to our developer support engineers to help customers be successful. So I would like AI to basically give them superpowers, but not replace them. Hence the whole concept of augmentation and augmented AI, because somebody needs to verify that the answer is right, and not just help the customer get to the middle of the road; we want them to get to the other side safely. So we have the responsibility to keep up to date with the technology, understand the customer's need, and use tools, AI being just another one of them, to help customers get safely to the other side of the road.
John Paladino [00:16:11]:
That's great. So you mentioned augmentation. Tobias, do you have a comment about augmentation versus automation?
Tobias Zwingmann [00:16:18]:
Yeah. When people think about AI, they think about automation; I hear AI and automation mentioned in one breath so often. Just to give you a little story about how these two concepts differ: I had two companies that were trying to solve a very similar use case. The use case was data quality in CRM. Both had Salesforce as a platform. The first company tried to improve their data quality by buying a piece of software, a plugin for Salesforce, and then trying to fill in the missing fields of customer records, contacts and accounts in Salesforce with some external data connection and, quote unquote, AI.
Tobias Zwingmann [00:16:55]:
That was like a six-figure-budget project, and it took them almost a year to pull off. And in the end, it didn't meet expectations. It worked somehow, but it wasn't really a huge success. The other company, approaching a very similar problem, took a different approach. They said, we know the data in the CRM is more or less not ideal, but let's work on the process of new data coming in. So they also bought a piece of software, but this piece of software did something else. Every time a sales rep ended a conversation on the phone, they would automatically get not only a summary of that conversation, but also pre-filled values for the CRM system. For example, it could automatically pre-fill, okay, what's the next action item here, what should go in the call notes. The salesperson could review that and just accept it, and it would automatically be sent to Salesforce.
Tobias Zwingmann [00:17:39]:
And we did a test between two groups, and it turned out that those sales reps who used that tool had a 40% higher completion rate of data in the CRM. And this was something they could immediately apply. And it was way cheaper than six figures.
John Paladino [00:17:52]:
And it's in the workflow too. It's embedded in the rep's existing workflow.
Tobias Zwingmann [00:17:55]:
Yeah. And this was the augmented workflow, because the sales rep did the same thing as before, but now there was this smart suggestion saying, hey, do you want to fill that into the CRM? And they could either correct it or just accept it, and then it was in the CRM. And I think this is a good example of approaching the same problem from two different angles, augmentation versus automation, because lots of companies just think, okay, let's buy some software and automate everything, and then expect it to turn out great, but often it doesn't.
John Paladino [00:18:19]:
So this is great advice so far. So let's go back. You're doing the proof of concept. Some work out, some don't. Would you like to talk about that in terms of success versus, as you said, sometimes the scope is too big and you have to go back and rethink it?
Tobias Zwingmann [00:18:35]:
Yeah, I think one important thing is to define a success criterion or measure for success, which turns out to be really hard, and not only in AI, actually in any business improvement. Because in the end, everyone wants to have this 20% task completion improvement or 40% higher quality. But how do you measure that? I think you have to find scenarios where, similar to the one I just explained, you can put people into different groups and just experiment: okay, how is this group performing versus how is that group performing? Because it's very hard to figure it out otherwise, just looking backward and saying, okay, what was the increase now in leads or overall lead quality? Maybe something else happened which was not due to AI. And so maybe just keeping the same level with AI was actually a win, because there were some other external factors that came in. So I think this is important, just thinking about how to keep these projects on track. But of course, once you have one of these candidates, one of these use cases that actually works, you need to be able to iterate and improve. That's the final step: to execute. You need to be able to execute on that roadmap, figure out what's working and what's not, improve it, and then ship the next iteration.
Tobias Zwingmann [00:19:36]:
And in the end, that's agile development and agile release cycles. It sounds like such an old topic, but in my experience, lots of companies are still struggling to get into this agile working mode.
John Paladino [00:19:47]:
So you don't just end up at a finish line, you have to keep going.
Tobias Zwingmann [00:19:50]:
Exactly. You have to keep going. Yeah. And also prepare your organization for that.
Asaf Sinai [00:19:53]:
The deployment is just the beginning of the journey.
John Paladino [00:19:56]:
Yeah, that's a good way to say it, Asaf. Thank you.
Tobias Zwingmann [00:19:58]:
Absolutely. And you need to have people that are ready for that, because sometimes these projects just get handed over to IT or whoever to take them on. But especially in these early phases, you need to have someone on the team who has more of a product mindset and keeps trying to improve the product, because there are still lots of things that you need to figure out left and right.
John Paladino [00:20:17]:
Top of mind for everybody when we talk about AI is that it could be very useful, but it could also be very dangerous. So AI ethics is an important aspect of using AI in a proper way. What are your thoughts on that, Tobias?
Tobias Zwingmann [00:20:33]:
Well, we have a whole podcast episode just on that. It's a super huge topic. But no, to be honest, I think...
John Paladino [00:20:38]:
You have three words.
Tobias Zwingmann [00:20:39]:
Yeah, I'll give you two words. The first word is compliance, and the second word is humans. Talking about compliance: I think organizations need to have on their radar that there's growing regulation for AI that they need to comply with, especially in industries like healthcare, but also in HR, for example. The European AI Act has just been passed, so there's a lot of stuff that you need to watch out for. It's not directly related to ethics, but if you want to comply with regulation, that's also ethics. So that's one big aspect of it. The second big aspect, I think, is humans and people and how to treat them, because there's this idea of implementing AI in order to replace or diminish human work. The way I see it is to make people more powerful and enhance their capabilities, or let them do the work they already do, just with more fun and more creativity. And I think once you approach it from that angle, it's much easier to figure out everything else that comes after, if you have the right mindset from the start, which is: okay, we want to empower humans, not necessarily replace them.
Tobias Zwingmann [00:21:36]:
And also we want to comply with regulation that is in place for that technology.
John Paladino [00:21:39]:
Asaf, do you have any thoughts?
Asaf Sinai [00:21:41]:
Part of our training program for new developer support engineers covers information resources, how to do your research, and what you communicate to the customer. So AI is definitely another information resource, and we need to test it. We need to make sure it will solve the problem the customer is having. So we treat it just like any other information we can find.
John Paladino [00:22:09]:
That's a good way to put it. I hadn't thought about it that way, but that's right. It's just more information, right? You can look things up anywhere and give somebody incorrect information, hallucinated information, because you misunderstood it. Yeah, it's just another source of data. That's great. So let's bring this back to service, because that's what this podcast is really all about.
John Paladino [00:22:32]:
We've covered a lot of great, interesting topics related to AI. How do you see AI affecting customer service? Asaf, I'm going to start with you.
Asaf Sinai [00:22:42]:
I think what we value is communicating with the customer and understanding why they called us, why they're working with us, what type of problems they have, and how we can help them. It's not like the customer is always right, because sometimes what they're asking for they shouldn't be doing. So just answering the question and giving them advice on how to do it will not make them successful. And we're in the business of being experts, and it's our job internally to create those experts. AI as it is today helps experts, but if you're not an expert, it will not improve customer service by itself. So replacing people with it, especially for the tough problems, will not get you there. But giving the experts those tools is going to make them happier and more efficient, help them learn, and even give them superpowers. That's where we want to get people.
John Paladino [00:23:40]:
Thanks, Asaf. Yeah, I know we spend a lot of time analyzing log files, looking for anomalies. So you're right, we could use AI to help us get there faster, hopefully have us provide a faster and more correct answer more consistently. But it's another tool, and that's how I view it too. I think it can help us be more responsive ultimately. But I think we need to figure it out, as Tobias was saying, and go through that same process: What are the business problems we're trying to solve? Align the right AI to help us solve that problem, and then apply the 20/20 rule and prototype. Interesting.
John Paladino [00:24:19]:
Thank you, Asaf. Tobias, what do you think?
Tobias Zwingmann [00:24:21]:
Yeah, I think with AI and customer service, in the end, it's about delivering a better customer experience. That's the bottom line. And the way I see it, there are two leverage points that you have. The first leverage point is to let customers help themselves, so they don't even get in touch with the company. It could be a chatbot. That's one project that I do for a lot of customers, which is not necessarily automating customer support, but more an improved FAQ, so to speak, helping customers find the right contact person or the right document, especially in these complex B2B processes that I work a lot in. The second leverage point is to improve the actual conversation that customers have with customer support representatives, if it comes to that. For example, especially in B2B or more complex settings, it very often happens that the person on the phone or in the email does not know or does not have all the information, just because the information is really complex or the situation of the customer is really difficult.
Tobias Zwingmann [00:25:15]:
And that typically ends up in one of two scenarios: either the answer is not complete, or the answer just comes way too late because they take all that time to scan the information. I was recently working with a call center company that also does B2B call support. We helped them implement an AI system that helps the agents, while they are on the call, retrieve information much faster, pointing them to the right source instead of needing to look through all the documents beforehand or memorize all of them. That's really a big pain point, because there's a huge long tail of problems, especially in call center operations. You very often have high employee turnover, and you need to onboard new employees. It's not like you have people working there for 30 years who know everything. Someone is new to that process, and they need to be trained really fast in order to deliver a good customer experience.
Tobias Zwingmann [00:26:01]:
And I think these are the areas where AI can really help and make the whole onboarding process, or the whole customer interaction process, much, much better, for the customer in the end, and also for the customer support representative.
John Paladino [00:26:10]:
Yeah. Whether it's call-center, level-one type support or engineering support like Asaf's, it's all about getting the customer what they need as fast as possible and correctly, and doing that with heart and kindness. That's the icing on the cake.
Tobias Zwingmann [00:26:26]:
Absolutely. Yeah.
John Paladino [00:26:27]:
Customers want the cake. So this has been fascinating. As I mentioned, we have two really inspirational people here. I've learned a lot from both of you, and I really appreciate you being here. Tobias, thank you. Asaf, you're amazing.
John Paladino [00:26:42]:
I really appreciate you being here. I hope our listeners got as much out of this as I did. Please give us feedback. Whether it's something we can improve, something you like, let us know. We want to make these podcasts better for you as we go.
Intro/Outro [00:26:55]:
Thanks for listening today. If you have any questions or want to hear from a specific guest, email us anytime at inspireservice@intersystems.com. And when you're ready to unlock the potential of your data and experience the transformative power of support done differently, go to intersystems.com.