People & Music Industry

CEDAR Audio has led the world in audio restoration and noise suppression for over three decades. Hugh Robjohns talks to their Managing Director, Gordon Reid, about their technologies.

Chapters
00:00 - Introduction
00:19 - CEDAR Audio Beginnings
01:45 - The National Sound Archive
06:15 - Cleaning Up Old Recordings
08:12 - Working In Real-Time
10:36 - Developing Rackmount Units
11:52 - Making User-Friendly Products
14:27 - Building A Team Of Developers
17:06 - Introducing Machine Learning and AI
21:25 - The Move Into Post Production
25:34 - The Invention Of Spectral Editing And Retouch
29:26 - Algorithmic Technology
36:10 - Involvement In Forensics
39:02 - Blind Source Separation And AudioTelligence
44:04 - Future Opportunities

CEDAR Audio Biog

CEDAR Audio is committed to furthering the science and art of noise suppression, audio restoration and speech enhancement in all of their forms, and actively pursues research into each of them. The company pioneered real-time audio restoration, spectral editing, zero-latency dialogue noise suppression, and many other processes that are now industry standards. It has worked closely with the world’s most famous studios, record companies and bands to ensure that they obtain the highest audio quality, not just from vintage material but also from audio recorded today that suffers from some sort of noise problem. If you enjoy a trip to the cinema, you're almost certainly listening to sound that has been perfected using CEDAR and, from the newsrooms of major broadcasters to reality TV, to studio-based shows, to sporting events such as the Olympics, the World Cup and Premier League football, CEDAR is the standard for noise suppression. The company is also proud to include many of the world's archives and libraries as well as law enforcement agencies and security organisations as customers and friends.
https://www.cedar-audio.com/

Gordon Reid Biog

Gordon Reid is the Managing Director of CEDAR Audio and has helped guide the company to an Emmy®, an Academy Award®, two Cinema Audio Society Awards and numerous other accolades. He is widely known in the fields of audio restoration, noise suppression for post and broadcast, audio forensics and speech enhancement, and he co-founded AudioTelligence, which develops technologies for applications ranging from hearing assistance to advanced surveillance systems.

Hugh Robjohns Biog

Hugh Robjohns has been Sound On Sound's Technical Editor since 1997. Prior to that he worked in a variety of (mostly) sound-related roles in BBC Television, ending up as a Sound Operations Lecturer at the BBC's technical training centre. He continues to provide audio consultancy and bespoke broadcast audio training services all over the world, lectures at professional and public conventions, and occasionally records and masters acoustic and classical music too!

Catch more shows on our other podcast channels: https://www.soundonsound.com/sos-podcasts

What is People & Music Industry?

Welcome to the Sound On Sound People & Music Industry podcast channel. Listen to experts in the field, company founders, equipment designers, engineers, producers and educators.

More information and content can be found at https://www.soundonsound.com/podcasts | Facebook, Twitter and Instagram - @soundonsoundmag | YouTube - https://www.youtube.com/user/soundonsoundvideo

Hello and welcome to the Sound On Sound podcast, which is in our People & Music Industry channel. I'm Hugh Robjohns, the Technical Editor of Sound On Sound, and joining me today is Gordon Reid, the Managing Director of CEDAR Audio.

Does CEDAR stand for anything? It did in the past. The company was originally formed by the National Sound Archive, which is part of the British Library. Mm hmm. And the man there who caused it all to happen originally wanted to call the company Digital Audio Restoration, or DAR. But there was already a company in the audio industry called DAR.

Mm, they made some of the very earliest workstations. They did, yes. Audio workstations, yeah. And so he put the word 'computerized' in front of that, so he had C-DAR, obviously pronounced 'cedar'. So they looked for a word to put the E in, so that it could be CEDAR, and they chose 'enhanced'. So: Computer Enhanced Digital Audio Restoration.

Right. One of the first things I did was to point out to the British Library that audio restoration, which is what we did exclusively in those days, is not the same thing as enhancement. Restoration, by and large, is taking away things that you don't want to be there. Yes. And enhancement is adding things in that you think make it sound better.

Sure. So it was almost exactly wrong. Ha ha ha. But the name worked well, so we stuck with the name CEDAR, and we kept it in capitals because it was originally an acronym. But we tried to get the world to forget the idea of Computer Enhanced Digital Audio Restoration, which by and large it's been doing, until you just came along and asked that question.

Yeah, I'm sorry I brought that up then. So let's forget all that. You mentioned the National Sound Archive as being the founding basis of the company. When did that happen? And tell me a bit about how that came about. The original idea was born in 1983. The director of the National Sound Archive was a chap named Christopher Rhodes, now long retired, and Christopher was one of these great oddball British eccentrics who had ideas that seemed absolutely wild at the time to sensible people.

Many of which were, but some of which weren't. And one of his ideas was that, with the advent of digital technology, the National Sound Archive could digitize its collection: take these moldering discs and tapes and copy them onto modern digital media. Because at that time the idea was being put out there that you only had to copy something onto digital, and the phrase used was 'perfect sound forever'.

Mm hmm. I remember it well. Which of course was a great bit of marketing hype, but not true in any respect. And Christopher's idea was that, whilst copying from these aging analog media, wouldn't it be wonderful if we could get rid of the clicks and the crackle and the hisses and other noises? So he persuaded the British Library to create a grant to develop such technology.

And the original approach was to Neve Electronics, just down the road from Cambridge and, of course, hugely famous in the industry. The chaps at Neve spent a couple of years looking into the problem, the outcome of which was actually the first Neve digital desk, the DTC-1, which had a button on it that said 'de-click'.

But that button did nothing, because it wasn't even connected to anything. I think if you pressed it down it may have illuminated, but that was all. And after a couple of years the chaps at Neve admitted that this wasn't really something that they would be able to see through to a successful conclusion.

So they recommended that Christopher talk to the Engineering Department at Cambridge University, to a chap named Peter Rayner. And Christopher went and saw Peter, and Peter said, yes, we think we can do this; the technology is getting to that point. But it won't be a real-time solution: you're going to have to load the material onto a computer, some algorithms of some sort will chunter away, and then, at an indeterminate time later, a new file will pop out.

So they developed a prototype system on that basis over the course of three years, from '85 to '88, and then showed it on Tomorrow's World. I think we ought to explain, just for the youth out there who may not know, that Tomorrow's World was a BBC technology television programme. Yes, it was pretty much required watching for teenagers in its day, who remember all the demonstrations that never quite worked properly because it was live television.

Well, the CEDAR one did work, because the material had been pre-prepared. So there was this maybe 10-second clip of a 78 which had been snapped in half, going click-thump, click-thump, click-thump, and the recording that had been made of the restoration, which didn't. And this stirred up a hornet's nest, at which point both the British Library and Cambridge University realized that they weren't set up to form a company to either provide the service or further develop the technology.

So they went out into the jobs marketplace to look for someone with some technical background, some commercial experience and some musical background as well, and singularly failed to find anybody suitable, until, quite by chance, I bumped into this chap Christopher Rhodes and we got talking about it. I do have a scientific background, I had by that point some commercial experience and, of course, a lot of musical background. So he offered me the opportunity to form the company and develop it and, as we sit here, that was thirty-three years and four days ago.

Well, happy birthday. Actually, no, the conversation was earlier than that; it was a couple of months before then. But it's thirty-three years and four days since we actually opened the doors of the company called CEDAR Audio. And we had a room considerably smaller than your family lounge.

And just the two of us in there. And then we started looking for staff. So was the startup funded by the National Sound Archive, or did you have to raise money to get the company going, or how did that work? What happened was that the British Library formed a joint-venture company with Cable & Wireless, the telecommunications giant, which was very visible in the UK at that time, much less so now.

And they put in a really small amount of money just to establish CEDAR. So we had to find a way to generate some revenue, so that CEDAR could continue to exist until we could actually release a CEDAR product. And the way we did that was to use the prototype systems, even as we were developing them on a day-to-day basis, to undertake bureau work for record companies.

And we cleaned up some quite significant selections for companies like CBS and Den and Columbia. And that enabled us to earn enough money over the course of 18 months to get to the point where we could offer the first CEDAR system for sale. Right. And in fact the deposits for the first two CEDAR systems, which were sold in June 1990, came in literally on the day that we would have gone into the red for the first time.

Oh, wow. So we'd been eking out the funds to the point where we could sell the first systems. And they went to studios in Paris and Brussels called Digipro. Okay. They were fantastic people, really far-sighted, and they saw the commercial benefit of audio restoration almost before anybody else did. There was a real desire to hear recordings without the hiss, without the clicks, without the crackle.

Yes, and they were a mastering studio, they weren't a record company, and they did thousands, tens of thousands even, of restored recordings for all manner of people. So they were much better and faster and more professional at that side of things, and it freed us up to put all of our efforts into development, rather than having to do processing ourselves to eke out the funds.

An offline process? No. We were always very keen on the idea of real-time processing. I used to do seminars and try to explain to people the difference. Imagine you're sitting in front of your DAW today and you want to EQ the vocalist. Imagine setting up an EQ, boosting a little bit at 2kHz, rolling off a little bit below 200Hz or whatever.

And then having to wait until the following morning to hear the result, to see if you like it. And then, ah, I've rolled it off a bit too much; so backing that off a bit and having to wait till teatime to see if you backed it off by the right amount. So we wanted the first CEDAR systems to be real-time.

And when we supplied those first two to Digipro, the de-hissing was real-time. The de-click and de-crackle weren't quite finished; the real-time versions of those weren't quite finished. And it wasn't until we supplied the third system, a few weeks later, to BMG Studios in New York, that all three processors were completely real-time.

And, again, people have to realize that real-time in those days meant one process running in real time. Mm hmm. So you did a real-time pass to get rid of the clicks, then another real-time pass to get rid of the crackle, and then another real-time pass to diminish the hiss. Yep. So, for just a three-minute side of a 78, you were still looking at, I would say, at least half an hour to get a good result; well, a better-than-good result.

Whereas an offline system would have taken you maybe two or three days, because you'd have had to run each of these processes overnight, then come back and run the next one overnight, and so on. So it was an absolutely huge step forward. But real-time was always CEDAR's watchword right from the start, and it's why we were the first people to put these processes into boxes and make hardware units.

Yeah. We always kept ourselves at the forefront of signal processor development, and our links with the university made that easier than it might otherwise have been. So we adopted what in those days were lightning-fast floating-point processors before anybody else. By today's standards, they crawl.

Of course. We're talking 25 megaflops, which in those days was mind-boggling. And nowadays I don't think you could buy anything that slow. That's slow! So you built these first three units, all computer-based, but real-time. Yeah. Where did it go from there? Well, more of them. We're talking about DOS-based computers.

This was a number of years before even the earliest version of Windows. Yes. And DOS only supported one application at a time. So if you were de-clicking, that's all you could run; there was no way you could then launch a de-crackler at the same time. So, for about four years, that was the main limitation on the computer processing.

But we were developing algorithms much faster than the computers were developing, so in a way the PC-based side of things was held back by the technology of the time. And it was during that period that we built the first four rack-mount units, each of which performed one process. So a typical CEDAR installation often comprised a rack-mount de-clicker, a rack-mount de-crackler, and then a computer-based de-hisser running in real time.

The signal being cascaded through them in real time. Yes. So one would walk into a studio and there would be literally a rack of CEDAR equipment. Again, it's stuff that today you can do on your iPad, let alone a powerful computer. Yeah. But in those days it was absolutely radical, and some of them are still in use in that form.

I remember reviewing some of that early CEDAR hardware, and the thing that struck me about it at the time was how simple and elegant the user interfaces were. Despite the complexity of the process, it wasn't a complicated thing to set up and adjust and optimize. Yeah, it's a tricky one; that is a double-edged sword.

I fall firmly in the camp that products should be simple to use or, rather, that it should be simple to obtain excellent results from them. Things can be simple to use and rubbish. True, yes. But the trick is to create something that is simple to use and has a very wide sweet spot as well. There's been software over the years where 8 works perfectly on some parameter, but at 8.1 you can hear the artifacts, at 7.9 you can hear the artifacts, and you have to spend forever finding 8. What we wanted was a process where, to use the same analogy, 8 might be the center of the sweet spot but anywhere between 4 and 12 was absolutely fine.

So it was a co-development of the algorithms (because algorithms with a very wide sweet spot are much harder to develop than algorithms with a narrow one) and elegant user-interface design. The reason it's a double-edged sword is that there were numerous occasions in those early years where people would come to us at trade shows, seminars, conferences and so on and say, how can it be so expensive when it's so simple?

Hmm, I might have said that in some of my reviews, actually. Yeah, they were expensive boxes in their day. Yeah, they were, but they were unique; nothing else could do it. So the four boxes sold for, I think it was, £9,000 for the azimuth corrector, £10,000 for the de-clicker, and £12,000 each for the de-crackler and the de-hisser.

So, if my mental arithmetic still works, that's £43,000 for the set. Which is a lot of money today. That was a huge amount of money back then. But there was the idea of being able to cascade a signal through these and clean up all the major flaws in something that you wanted to re-release, as you copied it from the master tape to the new digital medium.

It was absolutely radical at the time. So many people came up to me and told me that real-time audio restoration was impossible, and that we were performing some kind of nasty and clever trick. But when you add in the cost of people's salaries, compared to spending days and days per track, just doing it in the transfer, you know, you could save a hundred thousand, two hundred thousand pounds' worth of salaries per year.

Yes, with these forty-three thousand pounds' worth of boxes. So they actually paid for themselves very quickly. Where did you find your brains to develop these algorithms? You mentioned that it started off in the University of Cambridge, but moving forward from that you obviously had to build a team of very cutting-edge software developers. So how did you go about finding those?

Well, our two top people came from the university: not just Professor Rayner, who's now retired, but also that first employee that we had. When I said there were two of us in the room, the other was a chap named Simon Godsill, now Professor Godsill, and both of those are still directors of CEDAR. Okay. And two of Simon's colleagues in those early days, Peter's protégés, joined us as the top people on the engineering side.

And so they provided what I think you just called the real brains behind it. And then it's a question of, as you say, staffing up with really good people: good, high-quality developers. So once you've developed the algorithm, the job is to implement it elegantly. Mm hmm. And in those days, the computer power didn't exist to be careless with it.

You know, you needed every processing cycle. So we looked for people with a really good understanding of the job, of the processes involved. And that's what we did: we kept our eyes open for people who clearly had talent and whom we could then train to be better. Excellent. And a number of our people have gone on to significant roles elsewhere but, having said that, the average length of service within CEDAR is currently sitting at something like 16, 17 years.

So we don't turn over employees rapidly and we build up a huge well of understanding and of talent within the building. Sure. You need to employ new people to bring in fresh ideas. Of course. So a steady but relatively low level of new employees supplementing people who've been around for, you know, nearly two decades and absolutely know what they're doing.

And that's worked for us. I'm not sure it would work for every company, but it's certainly been successful for us. So CEDAR has an international reputation but, presumably, it's actually a very small company? In terms of physical bodies in the building here, I imagine there aren't that many. Yes, that's true.

We're a small company that's worked very hard to create products that are worthy of the reputation. And if you do the job very well, and if you look after the customer equally well on the rare occasions that somebody needs some support, then you develop a good reputation. So, these algorithms: obviously it's not a static thing.

Technology evolves, ideas evolve. So presumably the de-clickers and the de-buzzers and all the rest of them that you're running today are quite different beasts from the originals? Or are they still very closely related? Some of them are related; others, much less so. Things have moved ahead hugely and, in the last few years, at an increasing pace, because the adoption of machine learning and AI is pushing these technologies forward at an ever-increasing rate.

Right. In fact, as far as I'm aware, we were the first people to use machine learning in an audio product, in a pro audio product. Okay. We talked about AI in the manual for the first DC-1 de-clicker in 1992. Yeah. But the first product which used machine learning in a way that I would consider really using it was the DH-1 de-hisser.

And again, I've got to give you a bit of history to explain why. Back in the '90s, it was considered that the only way to remove the hiss from a recording without immediately creating unmentionable artifacts was to use a process called spectral subtraction, which required that the user find a quiet bit of the material and take a noise fingerprint.

Mm hmm. And then you developed an algorithm that used that fingerprint to reduce the noise, hopefully avoiding artifacts. But we thought there had to be a better way of doing this, and the driving force behind it was the realization that the fingerprint is only accurate at the moment it's taken. Noise, by its very definition, is changing all the time.

It's random. So the fingerprint is accurate at the moment it's taken, but inaccurate for the whole of the rest of the material. What we wanted to develop was something that could take the idea of a fingerprint but update its noise estimate as the wanted material was running. And we developed a very early machine-learning algorithm that could make a really surprisingly accurate estimate of the noise content of the material.
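To make the distinction concrete, here is a minimal Python sketch of a de-hisser whose noise estimate tracks the material, rather than relying on a one-off fingerprint. The function name, parameters and update rule are illustrative assumptions for this page, not CEDAR's algorithm.

```python
# Sketch: de-hissing with a continuously updated noise estimate (illustrative only).
import numpy as np
from scipy.signal import stft, istft

def dehiss(x, fs, alpha=0.98, floor=0.1):
    """Spectral attenuation whose noise estimate tracks the material.

    A classic 'fingerprint' de-hisser measures the noise spectrum once, from a
    quiet passage. Here, bins whose magnitude stays close to the current
    estimate are assumed to be mostly noise, and they refresh the estimate.
    """
    f, t, X = stft(x, fs, nperseg=1024)
    mag = np.abs(X)
    noise = mag[:, :5].mean(axis=1)            # crude initial estimate
    Y = np.empty_like(X)
    for i in range(X.shape[1]):
        frame = mag[:, i]
        quiet = frame < 2.0 * noise            # bins plausibly noise-only
        noise[quiet] = alpha * noise[quiet] + (1 - alpha) * frame[quiet]
        # spectral subtraction with a gain floor to limit 'musical noise'
        gain = np.maximum(1.0 - noise / np.maximum(frame, 1e-12), floor)
        Y[:, i] = X[:, i] * gain
    _, y = istft(Y, fs, nperseg=1024)
    return y
```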

You didn't even need to start with a fingerprint. And that was the basis of the DH-1 de-hisser, which we launched at the AES Convention in Amsterdam in '94. And I just remember numerous people coming on the stand and saying, what you're doing is impossible, even whilst they were listening to it, turning the knob and hearing what it was doing.

Yes, I can hear this, but it's impossible! So, when we launched it, I wrote a press release which discussed machine learning, and various people in the company said, this is going to sound like technobabble; don't do it, Gordon. So I rewrote the press release, left out all mention of machine learning, and just talked about what it did and what the benefits were.

And the real benefit, of course, was that you could run your material in real time. You could actually put this box in line between a microphone and whatever, a broadcast for that matter, and just go straight through it. And I'm not surprised that people thought it was impossible in those days, because the idea of applying machine learning to audio was completely radical.

In fact, the idea of machine learning hadn't really seeped into the public consciousness at all at that point. Fast forward about 15 years, and various other people in the same product space as us started talking about machine learning and AI, and they got the timing exactly right.

You know, the population as a whole was really ready for AI concepts: artificial intelligence will do this for us in every sphere of human life; this is the way forward. And, if I'm honest, we were caught on our heels a bit because, having done it first about 15 years earlier, suddenly other people were shouting about machine learning and AI and we weren't.

So we had to try to say, yeah, this is great stuff, and I'm glad they mentioned it, because we did that 15 years ago. But today those technologies are underpinning a huge amount of audio development. The only word of caution is that people often use the term AI inappropriately. AI is a particular technology, and machine learning is part of AI.

But an awful lot of what is done in audio processing is actually machine learning; it's not AI itself. Okay, interesting. So, what processes do you now have? What can you offer? We've mentioned the de-click, the de-crackle, the de-hiss, but you do a lot more than that in the current range. Yeah. Those three processes underpinned the business of audio restoration throughout the nineties.

That was the era of the CD re-release, and everybody was really keen to clean up their commercial libraries and re-release them. It was also the beginning of the DVD re-release, and so a lot of soundtracks got done. One of our proudest moments was around about '97, I think, when we got credits on the Star Wars re-releases.

Even today, as you can tell, that puts a smile on my face. It's funny. But we realized around about that time that this was a moment in time: if we stayed in the restoration and remastering field, eventually we would be in a business that no longer existed, or at least one that wouldn't exist on a permanent basis.

So we looked around and said to ourselves, where can our skills next be applied where they're really going to be needed? And we identified post-production. Up to that point, there had been virtually no cleanup in post-production using the kind of digital tools that we were developing. This is for cleaning up dialogue? For cleaning up dialogue, yeah.

Yeah. And in fact the bulk of cleanup in post was still done with gates and downward expanders, and some boxes developed by Dolby. Yes, I remember them; the little orange boxes. And that was sort of the limit of the sophistication in post. So CD mastering and DVD pre-mastering had taken a step forward that post hadn't.

So one of our chaps flew over to meet a whole bunch of post-production studios in Canada, in Toronto as it happened, who'd come to us and said, we're moving all of our studios over to digital post-production, primarily in Pro Tools, and we need a tool to replace our Dolby and other processors.

So we talked to them and, rather than giving them what they asked for, we gave them what they needed. We developed a box called the DNS1000, where DNS stands for Dialogue Noise Suppression: a completely different algorithm from anything we'd done before; a new development for a new job in a new field of activity.

This wasn't progressing on from the hiss remover, then? No. An entirely different approach? Yes. The problem with de-hissers, even at the time we're talking about, when this was being developed, which was around about '98, '99, was that, firstly, they're designed for musical content. Right. The spectral subtraction algorithms are much more effective at mild de-hissing of musical content than they are at removing background noise from dialogue.

Right. It needed a much, much more tolerant algorithm, and it needed a completely different way of controlling it as well. We needed a control surface that felt comfortable to the potential end user, and Dolby had kind of pointed the way with the idea of multiple sliders controlling frequency ranges. But we also needed something that was much more surgical in its removal of the hiss. So what we did was design a new noise-reduction algorithm with a lot of filters, and an interface between the algorithm and the control surface which had only seven faders on it: one to enable the user to determine the amount of noise contained in the signal.

And again, it had a broad sweet spot; you weren't looking for a millimeter of fader movement. And six that controlled the shape of what you wanted to reduce, divided into frequency ranges so you could home in on specific things if you wanted to. And that was the basis of the DNS1000.
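As an illustration of that seven-fader idea (one control for the overall amount of noise to remove, six for its spectral shape), here is a rough Python sketch. The band edges, the noise estimate and the attenuation law are invented for the example; the DNS1000's actual algorithm is not public.

```python
# Sketch: one 'amount' fader plus six band faders shaping a noise suppressor.
import numpy as np
from scipy.signal import stft, istft

def dns_sketch(x, fs, amount=0.5, band_faders=(1.0,) * 6):
    f, t, X = stft(x, fs, nperseg=512)
    edges = np.geomspace(50.0, fs / 2, num=7)   # six log-spaced bands (assumed)
    noise = np.abs(X).min(axis=1)               # very crude per-bin noise floor
    gain = np.ones(X.shape)
    for b in range(6):
        bins = (f >= edges[b]) & (f < edges[b + 1])
        atten = amount * band_faders[b]         # global amount scaled per band
        g = 1.0 - atten * np.minimum(noise[bins, None] / (np.abs(X[bins]) + 1e-12), 1.0)
        gain[bins] = np.maximum(g, 0.05)        # keep a gain floor to avoid pumping
    _, y = istft(X * gain, fs, nperseg=512)
    return y
```

Note the deliberately wide sweet spot: because the attenuation is bounded and floored, nudging `amount` or a band fader changes the result gently rather than tipping it into artifacts.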

Yes, I have played with it and, well, I suppose revolutionize isn't too strong a word: it revolutionized the ability to clean up dialogue compared to what came before. I think so. The one thing that CEDAR did that really did revolutionize audio work, in my view, was Retouch, which we haven't mentioned yet.

Tell me all about Retouch. Yeah, Retouch came along two years later and, again, it came out of a development where we were thinking about something else. Right. Just as the idea of a fingerprint-less de-hisser came from the idea of modifying existing algorithms to track the noise, Retouch actually came about out of a desire to perform better de-clicking.

Because what we noticed was that clicks don't always occupy the entire frequency spectrum, and if you can avoid processing the bits that don't contain the click, you have a far better chance of an inaudible restoration. So what we were thinking about was developing something that was effectively a spectral de-clicker.

All de-clickers work on the basis of: the click starts here and it ends there; cut out that piece of signal, and then use what came before and what comes afterwards to interpolate what existed in the middle before the click happened. And we thought, wouldn't it be cool if we could say the click started here in time and ended there in time, but also extended from this frequency at the bottom to this frequency at the top, and leave out of the processing anything that was outside that window? The way to do that was to represent the click on a spectrogram, and then we noticed, of course, that what we were doing here was effectively selecting a spectral region.

Yes, and saying, we are going to do something in this region. Yes. And so the idea of de-clicking in that region got extended to all sorts of other unwanted sounds. The typical ones quoted in those days were a car horn, or a conductor dropping a baton, or somebody slamming a door in the background, or knocking something over on set: you get your perfect take and you've got to do ADR, not because the actors were unclear, but because somebody knocked over a fan in the background.

Well, what would happen if you could say, here's the splodge that I can see on screen, that is the fan falling over, and just remove that without affecting the voices at the same time? So that was the birth of Retouch. And once we'd had the idea, the development actually didn't take very long. It was one of those ideas which is blindingly obvious once somebody has thought of it.
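A minimal sketch of the spectral-editing idea just described: bound the unwanted event in both time and frequency, and process only inside that window, leaving everything outside untouched. The simple per-bin linear interpolation below is an assumption for illustration; it is not CEDAR's patented Retouch processing.

```python
# Sketch: repair a user-selected time/frequency region, touch nothing outside it.
import numpy as np
from scipy.signal import stft, istft

def spectral_patch(x, fs, t0, t1, f0, f1, nperseg=1024):
    f, t, X = stft(x, fs, nperseg=nperseg)
    ti = np.where((t >= t0) & (t <= t1))[0]     # frames inside the event
    fi = (f >= f0) & (f <= f1)                  # bins inside the event
    left = max(ti[0] - 1, 0)                    # last clean frame before it
    right = min(ti[-1] + 1, X.shape[1] - 1)     # first clean frame after it
    for k, col in enumerate(ti):
        w = (k + 1) / (len(ti) + 1)             # crossfade position, 0..1
        # interpolate magnitude across the gap; keep the original phase
        mag = (1 - w) * np.abs(X[fi, left]) + w * np.abs(X[fi, right])
        X[fi, col] = mag * np.exp(1j * np.angle(X[fi, col]))
    _, y = istft(X, fs, nperseg=nperseg)
    return y
```

Under these assumptions, a call such as spectral_patch(x, 48000, t0=1.20, t1=1.35, f0=2000, f1=8000) would rebuild only the marked splodge and leave the surrounding dialogue alone.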

And lots of companies saw what we'd done and went, Oh yeah, we can do that. It's the kind of thing that really competent companies could do. Right. But we patented it. Good. And we still hold the patents to spectral editing. Have you patented all of your algorithms, or is that unusual? It's unusual. You have to make a decision with patenting, because it's expensive, and it tells your competitors exactly how you do what you do.

And often industrial secrecy is a better way forward. Right. Just don't tell people what you're doing, and let them try to work it out for themselves. In fact, I have to give a lot of the credit for that to a chap named Joe Bull who, at the time, was the Managing Director of SADiE. And the first implementation of Retouch came out on SADiE.

Yes, it was a dedicated plug-in, effectively. And it was his enthusiasm for the idea of patenting it that caused us to rethink the industrial-secrecy approach. Right. Because it was exactly the right thing to do: it was a radical new technology and, as I say, pretty obvious once somebody's come up with the idea.

So it was always going to become a mainstay of how we handle audio. I mean, nowadays we meet younger engineers who've never encountered audio where spectral editing wasn't possible, and the idea that somebody had to invent it, and that the invention was only 20 years ago, comes as a huge surprise.

So where do we go from here? What came after Retouch? There were lots of developments on the DNS side of things. The DNS1000 was replaced; a large part of the driving force behind that was actually the so-called RoHS regulations that came in. R-o-H-S, which people pronounce 'Rosh' for reasons which escape me.

Mm hmm. But we had to redesign the unit anyway, because we couldn't use various components and we couldn't use lead solder and all that kind of thing. So we took the opportunity to devise two new DNSs, and the big development there was the addition of machine learning to the DNS concept.

So, just as we'd done with spectral subtraction, we realized that we could create a DNS that determined for itself, in a much more intelligent fashion, the nature of the noise that one was trying to remove. And then one could go one of two ways: either say, we're going to give you a control surface that enables you to manipulate that noise estimation and decide how you're going to reduce the noise; or one that says, we're not going to give you access to the noise estimate.

It's pretty damn good; leave it alone. And we're going to give you a very simple control surface that just enables you to determine the amount of attenuation of that noise, and maybe bias it slightly towards higher or lower frequencies. And we didn't have to make a choice between the two: we realized we could develop products that did both.

So nowadays you have a product like the DNS 8 Live, which is the former. It does a fantastic estimation of the noise with no a priori knowledge at all, but gives you the ability to burrow in and change the noise estimate and determine quite accurately the color of the noise reduction you want to perform.

And we designed it primarily for live broadcasting; we thought, this is a really good design for in-studio use. But it's gone much, much wider than that, and it's used live for things like sporting events, conferences, political rallies, would you believe, things of that sort.

And then there's a second product called the DNS 2, which we thought was ideal for location sound, and which is of the second sort: here's your noisy signal; it determines the noise and it enables you to determine how much of it you remove. Yeah. And on location you don't have time to set up complex noise reduction. We'd imagined that this would be used for dailies more than anything else.

Yeah. So somebody does a shoot, and they record both the raw audio and something through the DNS 2, just so that the location sound engineer can say, yep, this is going to clean up in post perfectly well. That's a take. That's a wrap. Thanks very much; go home and have a beer. But what we found was that budget productions with minimal post would use the DNS 2 processed audio, and it was fine.

Live to air, of course, suddenly became a big thing: you're interviewing somebody in a noisy environment and going live to air through a DNS 2. And that, if I can go sideways again, puts another huge constraint on the algorithm, which is that if you are going live to air, you can't have strange, nasty, funny things happening.

We're back to this whole idea of a wide sweet spot again. If the nature of the environment changes, you can't go from good processing to something riddled with artifacts or dropouts or whatever. Yes. So, two different approaches using the same underlying technology. And that all happened from 2002 onwards; 2012 was when we launched the DNS 8 Live.

Mm hmm. And then 2016 was the DNS 2, which was the one that doesn't give you all of the control. Yeah. On the Retouch side, we've just kept adding tools to Retouch as we thought of useful ones. But what we've been looking to do throughout all of that period is to come up with more new ideas that really increase the value of processing audio in that environment.

I think the ones that we're most proud of are both ML- and AI-based. The ML side of it is what we call Match. So let's say you've got a drum kit: you've recorded all your mics on the drum kit, and you want to EQ your snare top mic.

But it's just covered in spill from the hi-hat, right? So maybe you just want to make the snare brighter, and suddenly you've been killed by hi-hat. Yes. So we thought, is there a way that we can remove the hi-hats without damaging the snare, so that you can then mix them independently? And we realized that Retouch was the right vehicle for this. What you can do nowadays is mark one of these hi-hat hits in the spill, and Retouch itself will just go away and identify all of the others.

It's a perfect application for machine learning. Here's one, Mr. Algorithm: learn it, go away and find all of the others. You might have thousands of hi-hat hits. Sure. And if you had to retouch them out individually, it could be done, but it's going to take a lot of time. Take a lot of time, yes. And of course that's what people have been doing up to this point.
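To make the 'mark one, find the rest' workflow concrete, here is an illustrative Python sketch using normalized spectrogram template matching: learn a template from the single marked hit, then scan the track for frames that correlate strongly with it. Match itself is far more sophisticated; every name, window size and threshold here is an assumption.

```python
# Sketch: learn one marked event, then find similar ones by template matching.
import numpy as np
from scipy.signal import stft

def find_similar_events(x, fs, t_start, t_end, threshold=0.8):
    f, t, X = stft(x, fs, nperseg=512)
    S = np.log1p(np.abs(X))                       # log-magnitude spectrogram
    tmpl = S[:, (t >= t_start) & (t <= t_end)]    # the hit the user marked
    tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
    w = tmpl.shape[1]
    hits = []
    for i in range(S.shape[1] - w):
        win = S[:, i:i + w]
        win = (win - win.mean()) / (win.std() + 1e-12)
        if (tmpl * win).mean() > threshold:       # normalized correlation
            hits.append(t[i])                     # candidate event time (s)
    return hits
```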

Sure. So we developed Match, and you can see it happen. It takes a split second, and all of the hi-hats on that snare track just go ba-ba-ba-ba-ba, right through the whole track. Mm hmm. Then, what to do with them? Because you don't want to have to go into each of them individually. So what we did was come up with a tool which we called Repair.

And perhaps we were trying to be a little bit too clever, because 'repair' is an incredibly generic word, but it's got AI in the middle of it. Okay. So it's r-e-p, then AI in capitals, then a lower-case r. And that's actually an AI process, where the system is looking at this identified hi-hat spill and saying: this bit of what's identified is genuinely the hi-hat; this bit is perhaps overlapping with the actual snare.

I want to modify the process so that it gets rid of what I don't want and retains everything of what I do want. And the two together just work absolutely brilliantly. So that's just yet another example. I mean, all of the restoration processes are still being developed as well. De-clicking now is better than it was 10 years ago, 20 years ago, 30 years ago.

So is de-crackling, de-hissing, de-buzzing. We're doing some interesting work at the moment, actually, in the area of removing buzz and hum, because it's not a perfectly solved area of audio restoration. Yes. Well, buzzes are very complicated sound sources. Buzzes can be very complicated and, in fact, the machine-learning side of things offers some insights into better ways of doing that nowadays.
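Buzz and hum are periodic, so the classical baseline (as opposed to the machine-learning approaches Gordon alludes to) is a set of notch filters at the mains fundamental and its harmonics. A minimal sketch of that baseline, assuming SciPy:

```python
# Sketch: classical de-buzz via notches at the mains fundamental and harmonics.
import numpy as np
from scipy.signal import iirnotch, filtfilt

def debuzz(x, fs, fundamental=50.0, harmonics=10, q=35.0):
    """Notch out e.g. 50Hz (or 60Hz) mains hum and its buzz harmonics."""
    y = np.asarray(x, dtype=float)
    for k in range(1, harmonics + 1):
        f0 = k * fundamental
        if f0 >= fs / 2:                  # stay below Nyquist
            break
        b, a = iirnotch(f0, Q=q, fs=fs)   # narrow notch at each harmonic
        y = filtfilt(b, a, y)             # zero-phase, so no smearing in time
    return y
```

The weakness of this approach, and one reason the area isn't perfectly solved, is that each notch also removes any wanted signal that falls inside it.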

So that's just an example of another process that could benefit from the general development of algorithmic technology over the last decade; two, three decades, even. Mm hmm. Beyond pro audio is the forensic aspect of things, which I know you're involved with. Yeah, very much so. Forensics has become a very significant part of the company, and it happened in a strange way, as so many of these things do.

I was in Australia doing some demos of the CEDAR systems and I got a request to fly to New Zealand to show the system. It turned out that the request was from a police force, and they wanted to see whether the CEDAR system could offer them better results than the filters they were using for cleaning up all manner of forensic material, which could range from interviews through to surveillance, and it worked to a surprising degree.

And so both the Australian and New Zealand police forces bought CEDAR systems, which had been developed primarily for the sound archives and libraries. Yes. So right back in '94 we realized that there was another area of interest for cleaning up audio. But we didn't push down that line very hard because, from our viewpoint, the algorithms that we'd developed up to then weren't really suitable.

They might do a good job by accident, but they weren't specifically designed to do a good job in that arena. And the difference lies in the two words 'intelligibility' and 'listenability'. When you're cleaning up for CD, DVD, film soundtrack, broadcast or whatever, you're trying to increase the listenability.

Right. You want things to sound nicer. Mm hmm. So you develop algorithms that remove the problems whilst retaining the original tone and feel and everything to do with the wanted signal. In the forensics and security arenas, you don't really care about listenability; what you want is understanding.

Intelligibility. You want to understand what's being said, so you can design completely different types of filters that might turn a barrel-chested bloke into a chipmunk. But in the forensic community it doesn't matter if you make the voice sound squeaky, if you can now hear what's being said. So the algorithms are quite different.

We had to wait, actually, about six or seven years until computer technology provided us with the processing power we needed to implement the algorithms that we'd designed. But in 2003 we launched our first dedicated CEDAR forensic system, which was now based on Windows, so you could run lots of channels simultaneously, lots of processes simultaneously.

And you could try to hear what was being said in real time, as well as cleaning up back in the forensic laboratory. It was so successful that within a year we actually created an in-house division that we called CEDAR Forensic, and employed a forensic specialist who spends a lot of time supporting law enforcement and security agencies and so forth all around the world.

And are there any crossovers from the technology that you develop for the forensic side of things that you can then use in pro audio? It's a very good time to ask me that question, because although there have been a number of examples in the past, there's a really big one today. In 2008 we asked ourselves yet again, what's happening out there?

What should we know about? What should we potentially be involved with? And the technology that we alighted upon was a fledgling one called blind source separation. It's a good phrase; it means nothing at all. Does it not? Ah, okay. Blind source separation is where you have a mixed signal that's come from multiple directions.

Right. And you can separate out the individual sources. Okay. And the most obvious application of that for the general population is what's called the cocktail party problem. As we get older, we become less and less able to discriminate individual sources in a noisy environment. Right. And a lot of what people refer to as deafness isn't actually the inability to hear; it's the inability to discriminate.

It's the inability to discriminate. After 40, almost everybody starts developing the cocktail party problem, to a greater or lesser degree. So, blind source separation seemed like a very good vehicle for looking at hearing assistance. Mm hmm. And we started a research project to develop a blind source separation prototype.

As a consequence, we formed a whole new company, a spin-off called AudioTelligence, to further develop this blind source separation technology. Because what we realized was that if it could be used for hearing assistance, your marketplace is hundreds of millions. Yes. And we don't have the skills in CEDAR to address a customer base of hundreds of millions of people.

So we had to create a new company and find people who had experience in this very large-scale tech commerce, which is what we did. And they continued to develop the blind source separation technology, and recently announced some products in that arena for hearing assistance.

But CEDAR retained the opportunity to use this BSS technology in the forensic and surveillance arenas. Right. So there's a real sort of complementary set of skills in the two sister companies. And last year we announced a product called Isolate, which is based on a tiny little mic array, smaller than a beer mat, that you can just place on the table and that will capture the sound in a sort of 360-degree sphere.

And the software enables you to say, I want to listen to that direction and that direction and that direction, and hear, let's say, a conversation going on between two or three people out of a hugely noisy environment, where just a one- or two-mic recording wouldn't enable you to do so. So it does rely on a directional microphone as the source, then?

No, it relies upon a mic array, which is a very different thing from a directional microphone. Absolutely, yes, I see what you mean. A directional mic enables you to point at a wanted source. What Isolate enables you to do is separate all of the sources in the sound field. Yes. Without any a priori knowledge of where they are.

You have to be able to see your target with a directional mic and say, I'm pointing at him or pointing at her. With Isolate, you don't, and of course that's where its covert and forensic opportunities lie. So we announced Isolate about the middle of last year and, obviously, a number of people in that arena contacted us, and that's ongoing business.
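For contrast with the directional-microphone approach discussed above, here is a sketch of the classic way to 'point' a mic array at a chosen direction: delay-and-sum beamforming. This is a deliberately simple baseline for illustration, not the source-separation processing inside Isolate, and all names and parameters are assumptions.

```python
# Sketch: delay-and-sum beamforming, i.e. 'listening in a direction' with an array.
import numpy as np

C = 343.0  # speed of sound, m/s

def delay_and_sum(mics, positions, fs, azimuth):
    """Steer an array toward `azimuth` (radians, in the x-y plane).

    mics: (n_mics, n_samples) channel data; positions: (n_mics, 2) in metres.
    Each channel is time-shifted to compensate its arrival delay from the
    chosen direction, then the channels are averaged, reinforcing that
    direction and diluting everything else.
    """
    direction = np.array([np.cos(azimuth), np.sin(azimuth)])
    delays = positions @ direction / C              # seconds, per microphone
    n = mics.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    out = np.zeros(n)
    for ch, tau in zip(mics, delays):
        # fractional-sample shift applied as a phase ramp in the frequency domain
        spec = np.fft.rfft(ch) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(mics)
```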

But we were also somewhat surprised but very pleased to be contacted by a number of broadcasters. Because, of course, exactly the same problem lies in interviewing, and the idea of being able to put a single microphone down, whether it's, you know, people on a podium or around a table or whatever it is, and effectively point the software at each of those people, and be able to record each of those tracks individually, and then mix them or do whatever you want to with them, seems very appealing.

And of course it is, because all of the stuff that's not being pointed at doesn't reach the recording. So it's quite different from noise reduction, where you have noise in an existing signal and you want to remove it: you're stopping the noise actually getting into the signal in the first place. So there was a perfect example of what you just asked about: a forensic technology crossing over to a commercial one.

And what actually happened there is that we started off thinking of it as a commercial technology, in hearing assistance; then we realized that it had these great opportunities for doing good in the forensic arena; and when we launched that, we were approached by the commercial arena again, saying, we can see uses for this.

Yes. Interesting. So how many microphones do you need in an array to give you the resolution that you need? The minimum sensible number is four; the ideal at the moment is eight. Okay. And our mic array has eight microphones. So, where's CEDAR going from here? Obviously you can't tell me all your secret new algorithmic developments, but is there plenty to keep you busy?

Well, I think where we're going from here is to the pub with you, Hugh. That sounds like a good plan. But before that, is there plenty to keep you busy? Lots of lines of investigation to pursue. There are so many opportunities; I actually think that we have more opportunities today than we've ever had before.

There's so much we can do, technologically and in the way that we can deliver products to people. It's a really, really exciting time. There'll be a lot of product launches, some of which will contain really innovative technology, many of which won't: they'll be better versions of what's come before, or maybe not better versions of what has come before, but better ways of doing what you did before would be a better way of putting it.

Okay, yeah, I see that. There's so much to be done and so much that's exciting. Good, excellent. Well, it's been lovely talking to you. Thank you for spending your time with me today. Oh, my pleasure. And, Gordon Reid, thank you. Let's go to the pub. What a very fine idea that is. Thank you for listening.

Please check out the show notes page for this episode, where you'll find further information along with web links and details of all the other episodes. And lastly, please check out the soundonsound.com/podcast website page to explore what's available on all our other channels.