People & Music Industry

Antelope Audio founder Igor Levin chats to Sam Inglis about their distinctive approach to audio interface design. We find out how Antelope’s innovative use of FPGA technology has enabled them to deliver impressive signal processing power, providing a platform for an ambitious catalogue of vintage studio hardware emulations and virtual microphones.

Chapters
00:00 - Introduction
00:22 - History
08:42 - Field-Programmable Gate Array (FPGA)
11:29 - AFX Technology
13:14 - Creating Emulations Using FPGA
17:40 - Recruiting Programmers
18:43 - Synergy Core
20:30 - The Product Range
23:06 - Using FPGA in Your DAW
26:56 - Different Applications
31:29 - Ethernet Audio
33:41 - AFX2DAW
34:56 - Modelling A Vintage Processor
42:05 - The Most Difficult Item To Model
45:24 - Modelling Vintage Microphones
50:46 - Future Of Audio Interfaces

Antelope Audio Biog
Antelope Audio is an award-winning company founded by Igor Levin that delivers high-resolution sound to acclaimed recording studios worldwide. The company’s rise to fame can be traced back to their innovative master clocks and multi-channel audio interfaces. With real hardware-modelled effects running on the advanced Synergy Core processing platform and the most renowned clocking technology in the audio industry, the company is pushing the limits of detailed digital audio and signal processing.

Sam Inglis Biog
Editor In Chief Sam Inglis has been with Sound On Sound for more than 20 years. He is a recording engineer, producer, songwriter and folk musician who studies the traditional songs of England and Scotland, and the author of Neil Young's Harvest (Bloomsbury, 2003) and Teach Yourself Songwriting (Hodder, 2006).
https://www.soundonsound.com

Catch more shows on our other podcast channels: https://www.soundonsound.com/sos-podcasts

What is People & Music Industry?

Welcome to the Sound On Sound People & Music Industry podcast channel. Listen to experts in the field, company founders, equipment designers, engineers, producers and educators.

More information and content can be found at https://www.soundonsound.com/podcasts | Facebook, Twitter and Instagram - @soundonsoundmag | YouTube - https://www.youtube.com/user/soundonsoundvideo

Hello and welcome to the Sound On Sound People & Music Industry podcast channel. I'm Sam Inglis, Editor In Chief of Sound On Sound magazine, and I'm very pleased to be joined today by Igor Levin, the founder of Antelope Audio. Welcome, Igor.

Igor Levin
Thank you for inviting me.

Sam Inglis
You're welcome. I think the name Antelope Audio will be familiar to the vast majority of our listeners, but your own story is perhaps a bit less well known. So I wonder if we could kick off by asking you to share a little about your background and your history.

Basically, I was born in the Soviet Union, and when I was 14 years old I emigrated to the United States, where I then went to university. I was accepted at the University of Michigan in Ann Arbor, where I studied electrical engineering and computer science, so I have a sort of dual degree in electrical engineering and computer science.

And this was in the early days. Nowadays maybe it would mean a lot: it's actually quite complicated now, and things have been divided into so many other types of engineering. But there I was. Growing up as a young person in the Soviet Union, I had an interest in electronics, and I started building radios.

And these were the days when you had to design a lot of things using discrete components. In Soviet Russia at that time, integrated circuits were very few and difficult to find. So anything you wanted to build, and I wanted to build radios and tape machines and audio amplifiers, you had to do with discrete components: transistors, even some vacuum tubes in some instances, although I haven't done that much with vacuum tubes.

This was very interesting because, in retrospect, that background prepared me for all the modeling that we've been doing, as many of the things we model are designed with discrete components, with transistors, resistors and so on and so forth. For me it's a very fun trip down memory lane, going back to my childhood when I was soldering these things together on a wooden board. It's really lovely to even look at these devices and see how they're constructed, because of the way the old printed circuit boards were made; often they weren't really printed circuits at all.

Some of them were just discrete point-to-point wiring. So that's the background: my childhood and my interests. And then of course at the university I learned lots of math, lots of physics, lots of signal processing. It's actually quite a tough university, and that sort of opened me up for the digital era.

That's when things moved on from the analog circuits I was very fond of. Right after university, working as a consultant, a lot of people were still hiring me to do analog circuits for biomedical applications, for lasers, for whatever, because analog engineers were still very much in high demand. But by that time I had already started programming in C and C++, and later down the line, signal processing algorithms and so on and so forth.

And so in that town I started another company, called Aardvark, in the early days of audio, and I built a lot of things, again starting out with clocks. I think Antelope and also Aardvark are pretty much known for their clocking devices and clocking innovation. Early on I had a few inventions that proved to be quite fundamental and important in terms of achieving a certain sound, and we're still using them in our devices.

The Acoustically Focused Clocking technology that we talk about goes back to those early days of Aardvark. Then down the line we moved on from clocks: we made some S/PDIF stuff and some interfaces, and then we started building multi-channel interfaces. In fact we had some interesting industry firsts. For example, we actually shipped the first eight-channel line-level in and out interface in the world, I think, predating by maybe one or two weeks another company that was working in the same domain.

Then later down the line I built the world's first audio interface with a digitally controlled mic preamplifier built into it. This was around the year 2000, approximately, and it also had some DSP processing in it as well.

It had a single DSP processor. At that time it was a Motorola 56301, I think it was called, a 24-bit processor from the 56300 family running at about, I think, an 80 or 90 megahertz clock rate, which is a fraction of what one of the DSP processors in our current products can do. Quite interesting. And we were still able to do quite a bit with this one processor: we had EQs and compressors on the channels, and we had a little reverb going.

But putting the mic pre into it was one of the biggest innovations, I think, as well as maybe putting everything together like that. In fact, at the trade show at the time, people were quite shocked. They couldn't understand what this box with the mic jacks was, because they were not used to that.

An audio interface was line in and line out, and when they saw this box with the microphone jacks, the typical conversation would go like this: "OK, what is this? Is it a mixer?" And we'd say, "No, but it has a mixer." "Well, is it a mic pre?" "No, it's not a mic pre."

"So it's not a mic pre. Why does it have these mic jacks?" "Well, it's an interface." "No, it can't be an interface, because an interface has to have line jacks." So it took a while for people to move up to this thought, which of course is pretty normal now. Almost everybody offers interfaces with direct microphone connectivity, but it was quite innovative, and this was about 21 years ago, if you think about it.

Then as things developed over time, Aardvark stopped existing, and my next company was Antelope Audio.

In Antelope it was pretty much the same thing I had been working on, but now extended to a higher level as the technology progressed and more components became available. One of the early innovations fundamental to Antelope was a 32-channel interface working over USB, and this is really what launched us into the interface world.

And even that was quite a shocker at the time, because USB overall had a bad rep then; people were avoiding it, using FireWire or whatever they could, but not USB. So just to have something working over USB, with 32 channels at 192 kilohertz in and out, people were quite shocked, or not quite convinced.

So it took a little bit of time for them to actually say, wow, this is really working, and it's working on Windows and it's working on Mac. That was good, but to make this thing work we had to develop FPGA solutions. This is how and why; maybe you'll want to talk about FPGAs, which may be an interesting topic because it's so unique to our company. We turned to FPGAs in order to maintain reliable multi-channel signal flow over USB.

That's how we accomplished the 32 in and out, and that required a unique design using these programmable devices, field-programmable gate arrays. So that's maybe a little bit of background and where things stand.

Thank you very much, Igor. And yes, absolutely, I think we'd all love to know more about the FPGA solutions that you've come up with over the years.

For those who aren't aware, FPGA stands for Field Programmable Gate Array, and it's a somewhat different technology from both native processing, where audio is processed by the CPU in our computers, and DSP processing, where the audio is processed using dedicated hardware chips. FPGA is something a little bit different from both.

I wonder if you could explain what those differences are.

Well, an FPGA is basically just a chip that, by itself, does absolutely nothing. It has the building blocks: it's a chip that consists of hundreds of thousands of basic digital circuit building blocks, and you can write an instruction for how to wire all these things together, individually, so that they can be useful in some way.

You write that using, typically, Verilog or VHDL, special languages created to instruct the chip how to make these connections. But essentially you're not executing a program. A DSP or a computer runs a program step by step: step one, take this number; step two, add it to this number; step three, put it into this memory location; and so on. An FPGA doesn't work like that. It has a clock, and on each clock beat the signal flows through all these hundreds of thousands of building blocks, each performing something for you. That's the essential difference: a DSP processor works sequentially, executing things one step at a time, whereas an FPGA is hundreds of thousands of things working in parallel, wired together, kind of like your brain, where all these things are connected.

A circuit works much the same way. If you have an electronic circuit, like in your compressor, all these transistors and resistors don't work sequentially; the electrical signal just flows through all of them at the same time, and out comes the result.

So this parallelism is the main difference, and one of the main advantages is the hugely larger signal processing power, because having hundreds of thousands of things that can work together is a little bit different from having one DSP chip that can do things one step at a time.
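To make Igor's sequential-versus-parallel distinction concrete, here is a rough Python sketch (nothing to do with Antelope's actual designs, and the stage functions are invented placeholders). It contrasts a DSP-style loop, where one sample runs through every stage before the next is touched, with a pipelined FPGA-style structure where every stage fires on every clock tick, each holding a different sample:

```python
# Toy processing stages, standing in for real signal processing blocks.
def stage_gain(x):   return x * 2        # e.g. an amplifier block
def stage_offset(x): return x + 1        # e.g. a bias block
def stage_clip(x):   return min(x, 10)   # e.g. a limiter block

STAGES = [stage_gain, stage_offset, stage_clip]

def dsp_sequential(samples):
    """DSP style: each sample passes through every stage, one
    instruction at a time, before the next sample is touched."""
    out = []
    for s in samples:
        for stage in STAGES:
            s = stage(s)
        out.append(s)
    return out

def fpga_pipeline(samples):
    """FPGA style: registers sit between the stages, and on every
    clock tick ALL stages compute at once, each on a different
    sample. Once the pipeline fills, one finished sample emerges
    per clock."""
    regs = [None] * len(STAGES)              # pipeline registers
    out = []
    # feed the samples, then empty ticks to flush the pipeline
    for s in list(samples) + [None] * len(STAGES):
        new_regs = [STAGES[0](s) if s is not None else None]
        for i in range(1, len(STAGES)):
            prev = regs[i - 1]
            new_regs.append(STAGES[i](prev) if prev is not None else None)
        if new_regs[-1] is not None:
            out.append(new_regs[-1])         # a sample leaves the pipe
        regs = new_regs
    return out
```

Both functions produce the same output; the pipelined version simply trades a few clocks of latency for a throughput of one finished sample per clock, which is where the FPGA's processing-power advantage comes from.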

Thank you, Igor. That's a very clear explanation. And in a lot of Antelope products, you've used this AFX technology to provide processes that appear to the user like conventional plugins. Is it quite a big step to get from this FPGA processing to a conventional user interface that people understand?

Well, yeah. Many of these things are just problems, and sometimes daunting problems, because the problems of user interfaces, of presenting a paradigm to a person in an intuitive way, are often much harder than purely mathematical or electrical engineering problems.

Because you're dealing with human beings, so you're dealing with psychology, with people's backgrounds and their learning. It's interesting how even software from different countries differs: I think the DAWs that are made in Germany have a distinctly different way of presenting the UI versus, say, North American DAWs.

I think there's something about the cultural element of the people who design these things, aside from their aesthetics: whether they prefer darker backgrounds or lighter ones, certain kinds of icons, a British styling or a European styling. There is this thing.

So the presentation element is not to be underestimated; it's sometimes what makes or breaks a company or a product. It's not a simple thing.

You've used the FPGA technology to create emulations of classic studio hardware like compressors and equalizers. That's something we've seen other manufacturers do using conventional DSP hardware.

Do you have to take a different approach to do that in your FPGA-based technology?

Yeah, it's completely different. It's the difference between parallel processing and sequential processing. There are computer languages and algorithms designed for doing things simultaneously.

But for people they're much harder to comprehend; it's just the way our brains work. When you're creating an FPGA design, you're not actually programming it like an algorithm. You're not telling it "do this, then do this, then do this"; you're telling it "connect this with this". You're creating a topology on the chip. You're not telling it how to do something, or in which order; you are actually connecting the building blocks.

So it's maybe more similar to those audio software products where you have icons that you drag together, where you connect two wires to a third icon and it represents a mixer or maybe an oscillator. Synthesizers are sometimes programmed with icons like that, and that type of programming is closer to FPGA programming, where you are actually creating a structure.

These are structural languages; they define topology. They're not sequential languages. Computer languages are divided into declarative versus procedural: declarative languages declare relationships, and procedural languages are the ones most people are familiar with, like "do this, then do this, then do this".
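The declarative/procedural split Igor describes can be sketched in a few lines of Python (an invented illustration, not Verilog and not Antelope code): the same tiny signal chain written once as an ordered recipe, and once as a wiring topology where only relationships are stated and no ordering exists.

```python
# Procedural: an explicit sequence of steps, like C code on a DSP.
def procedural(x):
    y = x * 0.5    # step 1: attenuate
    y = y + 1.0    # step 2: add an offset
    return y

# Declarative: only connections are declared; there is no "then".
# This mimics the spirit of an HDL netlist linking blocks together.
NETLIST = {
    "atten":  ("mul", "in", 0.5),     # atten  = in * 0.5
    "offset": ("add", "atten", 1.0),  # offset = atten + 1.0
}

def evaluate(netlist, node, x):
    """Resolve a node's value by following the declared connections."""
    if node == "in":
        return x
    op, source, arg = netlist[node]
    value = evaluate(netlist, source, x)
    return value * arg if op == "mul" else value + arg
```

Both express the same computation; the difference is that the netlist describes a structure that could be evaluated in any order (or all at once, as on an FPGA), while the procedural version fixes the order of execution.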

So those are like two different worlds, as well as trying to do things where multiple things happen in parallel. Imagine watching five movies at the same time, each with a different plot: it's much harder. So really the only people who can design these things are chip designers who have had experience with them.

Naturally it's a lot more daunting. Incidentally, maybe this is a good time to mention that we are not the first company that got this idea or popularized it. RME, in my view, is arguably the first company that really took the approach of using FPGA chips, and they did so for, I think, a mixer. I don't remember what they call it; it has an interesting name, but it's the one that can do lots and lots of channels, and you can put a compressor of the same kind on each channel. That's what impressed me. RME are the pioneers, and they deserve a special fond mention on my part.

And that's really what inspired me. I took the idea of using FPGAs from them and then developed it: in their world, if they have a compressor and an EQ, it's always the same compressor and the same EQ, whereas I was able to generalize the idea so that you can have many types of compressors or many types of EQs.

But in terms of programming, yeah, programming FPGAs is a nightmare and a half, and very few people can do it. Whereas programming DSPs is done in C or C++, which is generally second-year student material; by the time you finish college you should be comfortable programming in C and C++. It's certainly a lot more accessible and popular.

And there is a large amount of already existing algorithms, on GitHub and other places, that you can practically take and use for signal processing on a DSP chip.

That sounds as though it must pose quite a challenge from the recruitment perspective, to find people who have the skills you need.

Well, we took a different approach: we tried to make these things more general-purpose. We design general-purpose processors on the FPGA, so that programming them becomes a task similar to programming a DSP chip.

Designing the processor itself is not trivial, but we've had over 10 years to work on this, and several people have worked on it, myself included, so a lot of work has gone into that. But also, because of the factor that you mentioned and the flexibility involved, yes, at a certain point in time we definitely considered augmenting the system of FPGAs with general-purpose DSP processors.
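Igor's "general-purpose processor on the FPGA" idea can be illustrated with a toy interpreter (all opcodes and the example program below are invented for illustration, not Antelope's instruction set): once the FPGA implements a small instruction set, a new effect becomes an ordinary program for that soft core rather than a new chip-wiring project.

```python
def run(program, sample):
    """Execute a tiny DSP-style instruction list on one sample."""
    acc = sample
    for opcode, operand in program:
        if opcode == "MUL":
            acc *= operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "MAX":           # clamp from below
            acc = max(acc, operand)
        elif opcode == "MIN":           # clamp from above
            acc = min(acc, operand)
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return acc

# A toy hard clipper written as data: boost, then clamp to [-1, 1].
# Swapping in a different effect means changing this list, not the chip.
CLIPPER = [("MUL", 4.0), ("MAX", -1.0), ("MIN", 1.0)]
```

The payoff is exactly the one Igor describes: the hard part (the soft processor) is designed once in HDL, after which effect developers work at the familiar program level.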

That, as I understand it, is what you've done with the latest generation of processing at the heart of your newest interfaces, which you're calling the Synergy Core. Is that right?

Yeah, that's exactly right. The idea is basically to have the best of both worlds, to have your cake and eat it too.

By that point we had done a lot with the FPGA, and we'd got huge performance and a number of vintage effects. But there is also weird and interesting stuff that doesn't easily reduce itself to a standard paradigm, and that requires more of a general-purpose processor, such as a DSP.

Plus, ultimately, we had done a lot of work on our own; we've developed lots and lots of effects, maybe 50 effects or so, which is quite a bit. But there is also a whole community of other people who have been working in DSP. So at that point we thought, OK, we want to open up the ecosystem and bring new partners into the game, and that's why we need a procedure or system that people are familiar with.

So we needed a standard DSP processing element as part of our ecosystem. Then we can open the system up, we can have partners, and we can run all these pre-made algorithms that other people have developed. There's no way in hell we would otherwise be able to use something such as Auto-Tune, for example; Auto-Tune would not be practical to put on an FPGA chip.

If we look at the range of AFX processors currently available, there's a huge number of vintage emulations, but not quite so many reverbs or delay-based effects. Is this to do with a limitation of the technology, or is there some other reason?

Some of it just has to do with the flow of how we got into the business of developing these things.

We first made a universal processor on the FPGA that was specialized in doing EQs. Then we made one that was specialized in compressors, then one that was a reverb, and we made things like preamps and guitar cabinets and so on and so forth.

So some of it is the order in which we were developing these things, and when it came to time-based effects we sort of ignored them a little bit. But currently we have seven or eight, I think, that are offered right now, when you take into account the various flangers and pedals, and I know there are a few more in development and in various stages of QA.

So by the end of the year we're going to see quite a few more of those. And now with the DSP there are certain types we can do that we just couldn't before, so we have more freedom and we should be able to do more of them. But with some of this stuff the question is also figuring out what we should be doing next, because we already have quite a huge library of these things.

What's great is that we've tried to do this by listening to our community and what people are asking for. What would really help and focus us is people telling us, OK, what do you want us to make next? We have a huge community, so that's also interesting, because maybe guitar players will have one idea and mastering engineers will have another idea, or whatever. But ultimately, yes, we are going to do a lot more of them.

And what would be great is to get people involved. What's your view? What kind of effects do you think we should make? How would you decide, if you had to decide?

Well, there's already a huge range of effects available for AFX; I haven't explored half of the ones that exist already, so it seems greedy to demand more. But one thing that is characteristic of AFX, and of some other systems too, is that a lot of the processes are designed to be used on the input, so you would actually record through them and print the processed signal.

For instance, if you have a mic preamp emulation, or in your case actual modeling microphones. Do the AFX actually have the ability to change the characteristics of the analog circuitry on the preamp side, as in some rival products, or is this all handled entirely in software?

Well, a couple of things on that question.

They're not specifically designed to be used on the input side, because by means of the routing matrix you can actually direct them and use them as inserts in your applications. Arguably this is not the most elegant way to do so; a more elegant signal flow, or workflow, would be to just open them up as plugins and have all that magic happen under the hood.

That's something we do with our AFX2DAW technology, which automatically wires these effects in while presenting them as VST plugins, with all the attendant automation working. So these things are certainly and absolutely not hardwired into any particular application. In fact, that's what's sort of cool about our router in general: it's very, very flexible. Especially with the high-end products, you get our drag-and-drop router, where you can really build amazing signal processing chains. I think initially a lot of people were using the effects with the signal coming in, but you don't need to do so.
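The routing idea described here, patching a hardware effect chain between any source and any destination so it can act as an insert for a DAW channel, can be sketched roughly as follows (invented names and structure, not Antelope's actual router):

```python
class Router:
    """A toy routing matrix: each destination pulls from one source,
    optionally through a chain of inserted effects."""

    def __init__(self):
        self.routes = {}    # destination name -> source name
        self.inserts = {}   # destination name -> list of effect fns

    def patch(self, source, dest, chain=None):
        """Connect `source` to `dest`, with an optional effect chain."""
        self.routes[dest] = source
        self.inserts[dest] = list(chain or [])

    def pull(self, dest, feeds):
        """Fetch one sample for `dest` from its routed source, running
        it through any inserted effects along the way."""
        sample = feeds[self.routes[dest]]
        for effect in self.inserts[dest]:
            sample = effect(sample)
        return sample

# Record a mic channel into the DAW through a stand-in limiter effect.
router = Router()
soft_clip = lambda x: max(-0.8, min(0.8, x))   # stand-in effect
router.patch("mic_1", "daw_in_1", chain=[soft_clip])
```

The point of the sketch is that nothing about the chain is tied to the input path: repatching `"daw_in_1"` to pull from a DAW send instead of `"mic_1"` turns the same hardware chain into a conventional insert, which is essentially what AFX2DAW automates.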

And certainly with AFX2DAW we expect a lot of people to use them rather conventionally, by just installing the plugins; that's why we developed AFX2DAW. The second part of your question, does this change the input impedance? I think that's what you were referring to. Our philosophy, or my philosophy, is that we want to take the signal, bring it through the amplification with the least amount of distortion or change, and have it converted as cleanly as possible.

Any other kind of modification that reflects the presence of some circuit would actually be done as DSP. So, to be consistent with that thinking, we do not bring in elements that would alter the signal, because, say, if you want to do controllable impedance, where do you draw the line?

How many different kinds of impedance would you allow? 200 ohms, 500 ohms, two kilohms, one megohm? What if you had a device that was none of those? Then you wouldn't be able to model it, right?

And what do you do with the people who bought your devices before? Now you have this plugin, but their hardware doesn't have that circuit, so they wouldn't be able to use the plugin. So, in order to stay clean and consistent with the idea, we bring the signal in as pure as possible and then do all the processing in DSP: we basically model the effect that a particular impedance would have.

That makes good sense. Another thing I wanted to ask about plans for the future: we've seen other manufacturers who make DSP-assisted plugin boxes make their technology available in other formats. For instance, you can buy a satellite unit that simply handles co-processing and doesn't have any I/O, and we've seen companies use their technology in guitar foot pedals, for example.

Do you have any plans to do that with the AFX?

Well, the technology is there; we certainly have the IP. So on a purely technical level it would be relatively straightforward to use it in various applications such as these. And it's great that you mention that, because we're definitely open to suggestions and comments from power users, media hosts and the community in general.

Everybody looks for ideas. But let's take these two questions separately. On the question of an accelerator box, my general feeling is this: we make a range of devices, starting at 499 and going up to multiple thousands of euros, so customers have somewhat different budgets and different expectations.

The guy at the lower end of the scale is really very, very budget-conscious; a box like that, with the attendant cost of various plugins for it, is probably out of his range at that point. And the guys at the very top really need some serious signal processing.

One advantage we have in this whole process is that we're using FPGAs; everything we have has FPGAs. By accelerating with both FPGAs and DSP chips, we generally offer several times the amount of processing that's available in other similar devices, and that leaves a lot of extra DSP power in the devices themselves to be used for AFX2DAW applications.

The more expensive, higher-level devices actually have more acceleration. For example, with Orion Studio, which is a very nice upper-mid-level device, you're already getting two FPGA chips and six DSP chips, so that's quite a bit of processing power. And if you bought Galaxy, you would have gotten 12 DSP chips and three FPGAs involved in the acceleration.

So the way we build them is basically with the thinking that each is somewhat matched to the needs of particular customers. At this point in time we think the boxes themselves provide enough excess capability that an external device is not necessary. But maybe we'll hear customers saying, "we love this stuff and we really want even more acceleration."

Then we'll listen to them, and you may expect something like that. Or maybe there's another application that I'm missing, and I'm trying to figure out how that would fit in. But at this point in time, I think what we have now is certainly a good starting point. In terms of your second question, I think that refers to pedals.

That's an interesting one, and I think there the question is more: how would an Antelope pedal be any different from any other pedal? We're not fond of making something just to put an Antelope logo on it, because ultimately people aren't that stupid, and if the product is not compelling, they're just not going to buy it.

So we need to really understand that again. We're a pretty open company, we're open to the community, and this is something our customers really need to do for us; we need to hear from more people. If they say, "OK, we like this effect and this effect", or "we like the way this thing sounds, and it could be used like that", then we would feel there is a compelling reason to actually build a pedal.

And we would do so.

That makes perfect sense. If I could ask you another future-focused question: Antelope Audio were early adopters of the Thunderbolt protocol, and you've also made a number of USB interfaces; in fact, many of your interfaces offer both connection protocols. Do you see much of a future in studios for Ethernet audio?

Well, we have actually gotten into it with the Galaxy products, where we have the Ethernet connection already available. But I think that's a different thing. I see Ethernet audio as more for larger, institutional audio-type projects. I think the biggest users of such things are educational institutions that like to have different rooms connected, or extremely large studios maybe, or some live applications.

Currently we have Dante connectivity. But at the same time, the world is scaling down, and that's something we've also seen during the COVID times. During the lockdowns we saw a lot of people staying at home, and interestingly for the industry, there was a boom in sales of these small devices.

Because all these people staying at home wanted to get back to their hobbies and so on. So with these small things, I don't see how Ethernet would fit in. Maybe wireless, if anything, if one day there was some sort of wireless thing. Currently, on a cost-benefit analysis, I think of some of our new devices, like Zen Go, which is powered from the laptop.

And that's much more practical at this point in time, as well as providing better quality and much lower latency. I guess I'm not that gung-ho on Ethernet audio, although it certainly does have its place in live applications. OK, so talking about the AFX2DAW plugin: if you're not familiar with it, it's a plugin that allows you to use the hardware-based AFX processors within your DAW software as if they were native plugins. That's been available for a while on Antelope interfaces running over Thunderbolt on macOS. Any news on when that might be extended to Windows or to USB?

I'm glad you asked. Actually, I have good news, and I love delivering good news. In a very short time, all the stuff that's currently running on Thunderbolt on the Mac will also run on Windows: AFX2DAW will be available on Windows Thunderbolt. In addition, we have finally made a bold step into supporting this technology over USB.

So, beginning with the Zen Go device, we will also be able to offer AFX2DAW over USB, and that's on both Mac and Windows. Oh, that's fantastic news. I'm sure you've made a lot of customers very happy there. And it made me happy, because it was just so difficult; it took a lot of effort to get this thing working, but I'm really happy with how it worked out.

So what's your general approach when you're creating a model of a vintage processor? What's your starting point, and how do you refine it until you know it's right? Well, that's an interesting thing, because it's sort of a combination of art and science, at least as I see it. So there's the way I like to work, and different people have different ways of working at it.

But I like to work with the schematics, because to me a schematic is sort of like a building plan. And so just by looking at the schematics, you get sort of an idea. I don't know why, but a quote from Einstein comes to mind, where he said, 'I want to know God's thoughts; the rest are just details.'

I think he said something similar to that. And I want to know the thoughts of the designers of these things. Some of these people are extremely brilliant, and they have signatures. So when you look at how the schematic is made, you can sort of feel a certain personality that, to me, comes across.

And so that's why I like to see the schematics. The schematic also often gives away what the designer was trying to accomplish. But it's interesting, because in some of these things you also see the imperfections. These days we're very spoiled, we're very lucky: we have various tools, CAD software and modeling software.

So you have something like SPICE or LTspice, software that allows you to really model and build these circuits without physically building them, send all kinds of signals and audio through them, and really record the result. And it's really, really amazing.
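As a flavour of the kind of transient analysis SPICE and LTspice perform, here is a minimal Python sketch (my own illustration, not Antelope's tooling): it steps the differential equation of a first-order RC low-pass with backward Euler integration, so you can 'send a signal through' a circuit without ever soldering it.

```python
def rc_lowpass_step(r_ohms, c_farads, fs, x):
    """Simulate a first-order RC low-pass with backward-Euler
    integration, the same style of transient solve SPICE performs.
    Returns the output voltage sample by sample."""
    dt = 1.0 / fs
    tau = r_ohms * c_farads
    a = dt / (tau + dt)          # smoothing coefficient per time step
    y, out = 0.0, []
    for sample in x:
        y += a * (sample - y)    # v_out relaxes toward v_in each step
        out.append(y)
    return out

# Feed a 1 V DC step through a 1 kOhm / 1 uF network (tau = 1 ms)
fs = 48_000
response = rc_lowpass_step(1_000, 1e-6, fs, [1.0] * fs)
# After one second (1000 time constants) the capacitor is fully charged
print(round(response[-1], 3))    # prints 1.0
```

The same loop structure scales to arbitrary networks once you replace the single coefficient with a solved system of node equations, which is essentially what the real simulators do.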

And watch the voltages and currents. But what you also find out is that, when it comes to vintage devices, there are a number of uncertainties and curious contradictions. For example, as they issue the same device over, say, four or five years, it may actually go through three, four or five internal revisions. The public thinks it's just one device, but certain resistor or capacitor values may be changed from version to version, and then you face an interesting choice as the person modeling it.

Which version do you want to model? Do you model the early version, the middle version, the last version? Or maybe model several of them and allow the user to switch between them? And then the other thing you find out is that some of these people, because they didn't have all the tools we have now, made mistakes and didn't realize they'd made them.

I've seen devices where things are not quite working perfectly, or there are mistakes in the component values, and even mistakes in how things are connected. I have also seen products where, in the manufacturing process, they didn't notice that one of the wires or connections on the board was actually missing, and they got into manufacturing without realizing it.

And so then the question is: OK, is that a feature or a side effect? Because sometimes it could be a lucky thing when something is missing; maybe it sounds special because of that. So do you fix it and then model it, or do you leave it like that? There are so many imperfections. A classic one is that the settings on the decals, which say it's five kilohertz or two kilohertz on the EQs, generally don't match the real values.

And then the question is, OK, do you fix it or do you leave it? Because with digital modeling, you can make three kilohertz be exactly three kilohertz. Or should you make it 2.6, as it is in the original? Those are the interesting choices, and that to me is fascinating. And then comes the question of the aging of components, which is fairly well explored in microphones.

When you model microphones, you know that membranes are physical things, subject to moisture and handling pressure. And when you model a given microphone, which one do you take? Those membranes are retightened by hand. So with all these things there's a huge ambiguity, which goes with all this vintage stuff.

And that's where the art of it comes in. I mean, the other element is that once you understand the circuit, you need to figure out what to model and what not to model, because you can't model everything. If you try to model every component literally, you're likely to bring the computer to a halt.

And so you need to really figure out what's quintessential, what makes the circuit, or what makes that device, and what it is that you want to emphasize, and some stuff you want to de-emphasize or maybe let go. Those are the fascinating things to me. And then comes the question of choosing the actual algorithm, because the algorithms, the digital methodology, are not perfect either.

Certain methods are worse at lower sample rates; certain methods are worse at higher sample rates. And then, what precision do you need? That's where, with the FPGA, we have the benefit that when we design these processors, we actually select the bit width at which we process a particular signal flow, and we adjust it as needed for that particular application or that particular case. Unlike, say, a DSP processor that's made to work with 32-bit floating point or whatever, we can actually work with wider bit widths, and we do so. With EQs, for example, to have an EQ sound good at the lower frequency cutoffs, you need a lot more bits. So sometimes we go to 64 bits or 70-something bits; those extra bits really do help, especially at higher sample rates. So that's the flexibility we have, but that comes from the analysis.
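To see why low cutoffs demand more bits, here is a small Python sketch with invented, illustrative numbers (nothing here reflects Antelope's actual coefficient formats): quantizing the feedback coefficient of a one-pole low-pass at 20 Hz and 192 kHz to a given number of fractional bits moves the pole, so the filter realises a different cutoff than the one you designed.

```python
import math

def quantize(value, frac_bits):
    """Round to the nearest representable fixed-point step."""
    step = 2.0 ** -frac_bits
    return round(value / step) * step

def effective_cutoff(frac_bits, fc=20.0, fs=192_000.0):
    """One-pole low-pass coefficient a = exp(-2*pi*fc/fs).
    Quantizing a moves the pole; recover the cutoff the quantized
    filter actually implements."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    a_q = quantize(a, frac_bits)
    return -math.log(a_q) * fs / (2.0 * math.pi)

# Intended cutoff is 20 Hz; see what each coefficient width delivers
for bits in (12, 16, 24):
    print(bits, "bits ->", round(effective_cutoff(bits), 2), "Hz")
```

At 12 fractional bits the realised cutoff drifts by over a hertz, while 24 bits is accurate to well under 0.1 Hz, which is the shape of the argument for going wide at low cutoffs and high sample rates.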

And then understanding these limitations. So we're talking about the analog limitations, the aging factors, the imperfections in the transistors, and the limitations of the algorithms. Only after you have a handle on this whole set of issues do you get to the fascinating choices you make as a designer.

And that's what excites me in the vintage stuff, because it's just really not black and white, unlike the photographs of the period. So what's been the most difficult item to model? Ah, interesting. It's the ones where you just don't know.

Did he mean it like this, or did he mean it like that? These days I'm on a German wave, and I find some of the German compressors just fascinating, as are the approaches they take. Like this one, with its release circuit.

In addition to the usual timing, the designer added a counter that counted two and a half cycles of the wave before engaging the release. This is happening in the background; they'd figured out that for lower frequencies, that makes it smoother. Interesting things like that are what's unusual. Right now I'm working on a device called the Dynaset U311, which I'm sort of doing as a hobby while I'm on vacation, but this is fun for me.
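The counting trick described here could be sketched as follows; this is my own guess at the behaviour (a zero-crossing counter, one crossing per half cycle, so five crossings make two and a half cycles), not the actual circuit.

```python
import math

def release_start_index(samples, crossings_needed=5):
    """Hold off a compressor's release until the signal has completed
    roughly two and a half cycles, counted as five zero crossings
    (each crossing marks half a cycle). Returns the sample index at
    which release may begin, or None if it never does."""
    count = 0
    prev_positive = samples[0] >= 0.0
    for i in range(1, len(samples)):
        positive = samples[i] >= 0.0
        if positive != prev_positive:     # sign change = zero crossing
            count += 1
            if count >= crossings_needed:
                return i
        prev_positive = positive
    return None

# A 100 Hz sine at 48 kHz: half a cycle is 240 samples, so release
# becomes possible roughly 5 * 240 = 1200 samples in.
sine = [math.sin(2 * math.pi * 100 * n / 48_000) for n in range(2_000)]
idx = release_start_index(sine)
```

The appeal of counting cycles rather than milliseconds is that the hold time automatically lengthens for low-frequency material, which matches the smoothing behaviour Igor describes.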

And that one has two separate thresholds. It's like a compressor with two thresholds, and each threshold has its own ratio. So you can have this really weird-looking compression curve, where the bottom threshold could be either compressor or expander, and the top one can be compressor or expander or anything in between.
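Expressed as code, such a static curve might look like the sketch below; the thresholds, ratios and dB convention are invented for illustration and are not the U311's actual values.

```python
def dual_threshold_gain(level_db, t1=-30.0, r1=0.5, t2=-10.0, r2=4.0):
    """Static curve of a hypothetical two-threshold dynamics processor:
    below t1, ratio r1 applies (r1 < 1 means downward expansion);
    above t2, ratio r2 applies (r2 > 1 means compression);
    between the thresholds the slope is unity.
    Returns the output level in dB."""
    if level_db < t1:
        return t1 + (level_db - t1) / r1   # expand below the low knee
    if level_db > t2:
        return t2 + (level_db - t2) / r2   # compress above the high knee
    return level_db

# -40 dB in is pushed further down, 0 dB in is reined in,
# and -20 dB passes through untouched
quiet, mid, loud = (dual_threshold_gain(x) for x in (-40.0, -20.0, 0.0))
```

Setting both ratios above 1 would instead give two cascaded compression knees, which covers the 'anything in between' cases he mentions.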

Unusual stuff like that is what fascinates me. But I recall some of the EMT compressors that were a nightmare to model: just too many diodes. When you get stuff with too many diodes, the equations become nonlinear, and it's difficult to solve them in real time.
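To make the 'nonlinear equations' point concrete, here is a sketch of a generic textbook diode clipper (not the EMT circuit): a resistor feeding an antiparallel diode pair creates an implicit node equation, v_in = v + R·2·Is·sinh(v/Vt), which has to be solved iteratively, for example with Newton-Raphson, for every sample.

```python
import math

def diode_clipper(v_in, r=2200.0, i_s=1e-12, v_t=0.02585, iters=50):
    """Solve v_in = v + r*2*i_s*sinh(v/v_t) for the node voltage v with
    Newton-Raphson. Component values are illustrative; inputs are
    assumed to be small audio-level voltages (a few volts)."""
    v = 0.0
    for _ in range(iters):
        f = v + r * 2.0 * i_s * math.sinh(v / v_t) - v_in
        df = 1.0 + (r * 2.0 * i_s / v_t) * math.cosh(v / v_t)
        step = f / df
        v -= step
        if abs(step) < 1e-12:             # converged
            break
    return v

# Drive the clipper with 1 V: the diode pair clamps the node voltage
# down near the diodes' conduction region
v = diode_clipper(1.0)
```

Every diode adds another exponential term to f, and with many diodes the per-sample solve becomes expensive, which is exactly why such circuits are hard to run in real time.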

But on the other hand, there was something where we got some kind of a version where some of the wires were missing on the PCB, because I think it was one of the last things they produced before the company went under. And so I was trying to figure out what the designer actually meant to do. He must have been under a lot of pressure at that point, or I don't know what was going on.

So it's interesting. I guess they're all great. It's sort of like your children: even the difficult child is nice, and in the end everything is lovely in its own way. Well, it's fantastic that we now have these devices available for all of us to use, courtesy of your emulations.

Yeah, and hopefully other people can enjoy them in a different way, because I'm not a musician. Although I do sing, I certainly don't produce music. I enjoy these things from a sort of dorky, quirky engineering point of view.

And so now I'm also hoping that I can get people to try out this double-threshold compressor thing on drums, and really see how they can get some really unique sounds out of it. Yeah, well, I look forward to hearing how they do. Thank you. As well as modeling classic studio hardware such as compressors and equalizers, Antelope have also launched quite a comprehensive range of modeling microphones. What are the special challenges involved in modeling how a microphone works? Ah, that's a very interesting one. And the main one is sort of along the lines of vintage, because I think some of these microphones are the most vintage things that we model, with the original designs going back quite a bit into the past.

And so one of the challenges is finding the right specimen. Because these microphones are now spread across different studios, and each person thinks his microphone sounds the best. But these things are just sort of like vintage wine. If you took a Burgundy, say from 1957, and got 10 bottles from the same place, you're going to find some inconsistency. So finding the right microphone, the one you think is the one you want to model, is more than half of the battle.

Then the next question is the measurements and the algorithms that you use. One of the unique things about our modeling algorithms is that we do this as 3D modeling, which models the sound both on axis and off axis. That's for when you use our modeling microphones that have two capsules, because you need at least two capsules to be able to do this 3D thing.

And so that generally does the best job of modeling microphones that are constructed similarly. There are a number of German microphones that have that very similar construction with the two capsules, and those microphones are modeled fairly accurately. Now, when you try to extend it out to different kinds of mics, or maybe dynamic mics, well, it's not the same thing, but it's similar. The algorithms will optimize something, but you don't have enough degrees of freedom to optimize for everything. So then you need to tweak it, and you need to do a lot of listening tests, to try to figure out which one really comes closest.
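As a sketch of why two opposed capsules are the natural starting point for this kind of re-modeling, here is the classic dual-capsule pattern algebra in Python; these are idealized first-order polar patterns, not Antelope's actual 3D algorithm.

```python
import math

def dual_capsule_pattern(theta, back_weight):
    """Two back-to-back cardioid capsules: mixing the rear capsule into
    the front one synthesizes any first-order polar pattern.
    back_weight = 1 -> omni, 0 -> cardioid, -1 -> figure-of-eight.
    theta is the angle of incidence in radians (0 = on axis)."""
    front = 0.5 + 0.5 * math.cos(theta)   # forward-facing cardioid
    back = 0.5 - 0.5 * math.cos(theta)    # rear-facing cardioid
    return front + back_weight * back

# At the rear (theta = pi): omni still picks up, cardioid has its null
omni_rear = dual_capsule_pattern(math.pi, 1.0)
cardioid_rear = dual_capsule_pattern(math.pi, 0.0)
```

Because the two capsule signals span the whole first-order family, a model can re-weight them after the fact to mimic another microphone's directivity, which is the 3D aspect being described.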

I think those are the three biggest tricks involved. In general, modeling is a little bit like drawing pictures. Imagine somebody sketching someone's face. There are artists who need just one or two strokes.

With just two lines, you immediately look at it and say, yeah, that is this person. And you can have people who just draw and draw and draw the eyes and the ears and everything, and at the end you're not so sure. So it's that ability to focus in on the essential that I think separates good models from not-so-good models.

That's a lovely analogy, I like that. Yeah. And the other thing you find out in audio and in video, and generally in the world of art, is that realism is not real. That's an interesting thing. Sometimes when you make it mathematically exactly right, it may not be exactly right in terms of the sound.

In the early days of television, or in photography, they found out that if a photographic image reproduces exactly the amount of contrast that's out there in the scene, then the image will look pale to you compared to the original. That's because there are psychological factors, such as the size of the image and the amount of lighting, that affect how you perceive contrast and color.

And so generally, color reproduction is a bit fake. It exaggerates certain things; there's stylization and exaggeration in order to make it seem right. It's just like being in a theater: the actors in a theater, or in the opera, their gestures and mimicry are so much larger than life.

But if you're sitting far away, it seems just about right. It's the same in modeling, for example with tube amplifiers. People have certain expectations of how tube amplifiers have to sound: warm. Well, it turns out that if you model the tube amplifiers exactly as they are, you'll find they don't sound like tube amplifiers.

So what a lot of people actually do is over-exaggerate the tubiness, or the tube distortion: just a touch, or sometimes not a touch, sometimes two, three, 10 times, and then it just about sounds right. So taking psychoacoustics into account is a big thing as well. Thank you, Igor.
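That exaggeration step can be sketched very simply; this is a generic illustration, with tanh standing in for a measured tube transfer curve, and not any particular product's algorithm.

```python
import math

def exaggerated_tube(x, amount):
    """Take a mild tube-style transfer curve (tanh), isolate its
    nonlinear residual, and scale that residual up, mimicking the
    practice of over-exaggerating 'tubiness' until it sounds right.
    amount = 1 reproduces the measured curve; amount = 3 overdoes
    the distortion three times without touching the linear part."""
    residual = math.tanh(x) - x        # what the tube adds to a clean signal
    return x + amount * residual

# The deviation from a straight wire grows with the exaggeration amount
clean = exaggerated_tube(0.5, 0.0)
mild = exaggerated_tube(0.5, 1.0)
pushed = exaggerated_tube(0.5, 3.0)
```

Scaling only the residual keeps the overall level roughly intact while multiplying the harmonic content, which matches the 'two, three, 10 times' adjustment described above.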

This has been absolutely fascinating, and I'm sure I'm not the only person who's been hugely impressed with what you've achieved with Antelope Audio over the years. So if I could leave you with one final question: where do you see the audio interface market going in the next five to 10 years? What's the future for audio interfaces?

Well, I always thought that the stuff I was doing was five or 10 years ahead of its time. So if you want to see what things will be like five or 10 years from now, look at the stuff I'm doing now. And it's a little bit funny, because of the work I did in the Aardvark days. In the year 2000, I introduced an eight-channel interface that had digitally controllable mic preamps, processing on each channel, and reverb.

I think it took about 10 to 20 years for other people to have products that are similar. So in some sense, that's what gives me the idea that some of the stuff we're doing today will become commonplace maybe 10 years from now. In terms of going further than that, it's a little bit difficult.

In general, there seems to be a trend towards virtualization. So the interesting thing is whether there will be an opposite trend. Because it's an interesting thing, in nature and in the philosophy or nature of people, that things tend to move in a sort of pendulum way.

In terms of fashion, you see short hair followed by long hair, followed by short hair; or tight clothing followed by loose clothing. So there are these cycles. The interesting thing to me is: will the virtualization continue? Because I myself have been involved in much of the effort to virtualize.

And maybe another interesting comment to make about this, because currently some of my focus is on preserving history: I've realized I'm sort of an archaeologist, or something of the sort, because I'm currently doing a lot of modeling of vintage German gear.

I'm very, very fond of these designs, which are very little known in the Anglo, the English-speaking, world. But there was a huge amount of activity in audio in Germany following World War Two, and some of the most brilliant designs were made there: EQs, or whatever weird things they had, and the microphones. And I try to figure out how they worked, to understand how these people were thinking, and to reconstruct it all in a digital way, so as to preserve it forever. Because some of the stuff I'm doing now, you can't buy it anymore, it's not made, and there are very few of them left in the world.

So in that sense, I'm now in this sort of archaeological effort, where I'm going into the past instead of into the future, to see if there's some stuff in the past that we have missed, and that would be very, very cool to make available to the young people of today, and the not-so-young people of today, to really enjoy.

But in terms of the future, if I'm thinking about these oscillations, I kind of see that people may at some point become tired of virtualization, and sort of go back to, OK, well, how many of us can actually play the piano? So instead of having all these plugins, I can foresee that it may become extremely interesting to actually start building these things again, box after box, with the huge knobs and the dials and whatever vacuum tubes you can find.

Or maybe there could even be factories that start making vacuum tubes again, just to swing back into the naturalness of things as they were. And then it's possible to have a synthesis: maybe we'll have some analog devices that are somehow integrated with the VST world. So that's what I see as some possibilities.

Thank you, Igor. I'm sure we all look forward to the new range of Antelope Audio boxes with enormous Bakelite dials on them. In the meantime, you've been listening to the Sound On Sound People & Music Industry podcast, with me, Sam Inglis, and Igor Levin from Antelope Audio. Thanks for listening, and be sure to check out the show notes page for this episode, where you'll find further information, along with web links and details of all the other episodes.

And just before you go, let me point you to the soundonsound.com/podcasts page, where you can explore what's playing on our other channels.