Industry & Product News
What Listeners Are Seeking in 2021: Global Audio Consumer Research
The use of audio devices has expanded since 2020. Many consumers today are heavily reliant on their audio products to aid connectivity, as well as for video watching, gaming, remote working, and music listening. The Qualcomm State of Sound research identifies audio device purchase drivers, and interest in current and future use cases, to better understand what today's users look for in earbuds, headphones, and speakers. Download the State of Sound report here.
Syntiant Accelerates Audio and Voice Product Development with New TinyML Development Board
Shortly after announcing its latest and most powerful Neural Decision Processor, Syntiant unveiled its TinyML Development Board, an easy-to-use developer kit aimed at both technical and non-technical users for building machine learning-powered applications in smart products, such as speech command recognition, wake word detection, acoustic event detection, and other sensor use cases. The new machine learning development board lets developers build low-power voice, audio, and sensor applications using Edge Impulse's Embedded ML platform. Read More
Sound Solutions Austria Presents World's First Actuator Loudspeaker for Stereo Sound
The micro-acoustics experts at Sound Solutions Austria GmbH have developed a compact actuator that, applied directly to the back of large tablet or smartphone screens, transforms the display of increasingly thin devices into a loudspeaker able to ensure stereo sound quality. According to the Vienna-based company, the new "Singing Display" actuator is ideal for space-saving, waterproof designs, extending the response beyond what is possible with conventional speaker technologies. Read More
CEVA, Beken, and VisiSonics Announce Reference Design for 3D Spatial Audio in Headsets and TWS Earbuds
CEVA, together with Beken Corporation, a key player in wireless communication solutions, and VisiSonics, the 3D spatial audio technologies specialists, announced the availability of a complete 3D audio reference design for the rapid deployment of headsets and true wireless stereo (TWS) earbuds supporting spatial audio for use in gaming, multimedia, and conferencing. The integration delivers a complete 3D spatial audio hardware and software solution to consumer electronics OEMs and ODMs. Read More
Perlisten Audio Debuts S7i In-Wall THX Certified Dominus Speaker
Perlisten Audio, the audio company founded by an all-star team of audio industry professionals in direct cooperation with THX Ltd., the audio and video certification and technology company, announced the seventh model in its cutting-edge lineup. With the new S7i (SRP: $7,495 each), Perlisten introduces the world's first in-wall speaker to achieve THX Certified Dominus status. The new speaker, scheduled to ship November 2021, features the same advanced technologies as the company's flagship S7t tower speaker. Read More
NAD Electronics Launches New C 700 BluOS Streaming Amplifier
NAD Electronics continues its expansion in the "just add speakers" category with the introduction of the C 700 BluOS Streaming Amplifier. As it did with the C 399 integrated amplifier, this new compact and modern solution renews the company's Classic Series with powerful HybridDigital UcD amplification (80W per channel) and an integral BluOS app interface. The C 700 carries a suggested retail price of $1499 USD / €1499 EUR and will ship globally in late October 2021. Read More
Skullcandy's New Under $100 True Wireless Earbuds Offer Skull-iQ Smart Feature Technology
True wireless earbuds that offer truly differentiated features, such as hands-free control via simple voice commands, personal audio profiles, and software updates, are now a reality under $100. Coming from Skullcandy, the popular headphone brand leading that price category, the new Grind Fuel and Push Active TWS designs offer those features thanks to Bragi OS, and over 18 months of development effort by the Skullcandy and Bragi teams. Read More
Merging Technologies' Anubis Runs SoundID Reference Correction Software
Merging+Anubis is now the world’s first audio interface and controller to run Sonarworks' SoundID Reference correction software without a computer, with the lowest possible latency, right out of the box. Launched three years ago, the powerful Anubis continues to show the power of integration and being open to third-party add-ons. Sonarworks and Merging Technologies have partnered up to simplify the music production process with the first hardware to enable SoundID Reference correction outside of a computer. Read More
Resonado Labs Unveils Concept Marine Sound System in Partnership with Lippert
Resonado Labs unveiled a concept marine sound system at IBEX 2021, North America's largest trade show for the marine industry. Resonado Labs continues to design and provide proprietary audio technology to brands and manufacturers, based on its flagship Flat Core Speaker (FCS) technology. The new system was designed by Resonado in collaboration with Lippert’s Marine Group as the companies plan to partner on more marine projects in 2022. Read More
T2 Software Qualifies First Bluetooth LE Audio Host Stack
T2 Software announced the Bluetooth SIG qualification and listing of its Bluetooth LE Audio Host Stack. With OEMs and semiconductor companies all busy building LE Audio solutions, T2 Software was the first company to fully qualify all of the adopted profiles and services supporting Bluetooth LE Audio. The team has been working closely with the Bluetooth SIG over a period of three years. Read More
Editor's Desk
J. Martins
(Editor-In-Chief)
Audio Digital Signal Processing
Perfected by ML, Powered by AI
The November 2021 edition of audioXpress - stubbornly, still a monthly magazine - features a Market Update report on Digital Signal Processing. This year, I focused specifically on the use of artificial intelligence (AI) in digital audio processing. As always happens with this type of feature article for a printed magazine, the work feels like it is never finished. To correctly frame key trends and technologies, year after year I am forced to revisit all the key announcements going back several years, and to spend more time reading, experimenting, and researching - leaving less time for writing.

This year, the writing effort was particularly painful given the wide scope of products, companies, markets, and application segments that we wanted to cover, knowing that this is an area that very few technical publications - never mind audio publications - are covering. And the companies involved in audio DSP, large and small, are not exactly the most prolific communicators - as I wrote in previous articles, they still think their business depends on being able to keep their craft a secret.

The ones who publicly brag about being "disruptive" in this field are normally the ones still in the earlier stages. And when talking about AI and machine learning (ML) applied to audio processing, those companies tend to keep the actual process hidden under a cloud "service" layer - understandably, allowing them time to perfect the algorithms, train the ML models, and build the AI engines.
Happening on-device with increasingly integrated silicon, the use of artificial intelligence in digital audio processing has massive implications for the audio industry. Read all about it in the November 2021 edition of audioXpress (I know, we released the October edition just two weeks ago).
This year, given the circumstances of the extended global pandemic effects, writing a DSP report was even harder. Usually, we would be able to work through a series of documents submitted by manufacturers and key vendors, some collected meticulously in the field during industry events over the past 12 months, others anticipating whatever these companies intended to release in the next trade-show or industry conference.

Such events are now a distant memory. While I see that InfoComm has not been cancelled yet - which is almost shocking under the serious circumstances of the pandemic in the US (and particularly Florida) - any chance to visit the always-secretive demo rooms of these companies is not going to happen before January, or even mid-2022... And while online interactions and webinars are a possibility, they are hardly a rewarding experience when you need to collect factual information that you can report to readers - unless you are focused on just one thing all the time, as some of my fortunate colleagues are.

Nevertheless, I believe the report that you will be able to read in audioXpress November 2021 - much like the ones published for the past two years - will be worth the subscription cost of this magazine. This edition was also reinforced with two great contributed pieces, one covering DSP applications in automotive interior acoustics (an article by Roger Shively) and another on the inspiring potential of an integrated DSP solution for home listening and high-end audio applications (written by Al Clark). I take this opportunity to formally invite more industry experts reading this newsletter to submit contributed pieces on the diverse topics we need to address every month, focused on audio development. (Email)
Shortly after announcing its latest and most powerful NDP102 Neural Decision Processor, Syntiant unveiled its TinyML Development Board, an easy-to-use developer kit supporting all sorts of low-power voice, audio and sensor applications.
But I didn't mean to write this newsletter just to promote the November edition of audioXpress. I am writing to highlight the fact that the use of AI and ML to drive digital signal processing of audio is something that all manufacturers, in all application fields, need to take into account. Because that is my main takeaway from the research I did recently. There's real progress and things are moving fast. Much faster than you think (and business is there - there's money to be made).

Unlike many other important audio research areas - where product developers and product managers will all tell you to forget about further progress because of the law of diminishing returns (unless you work in high-end audio and price is no obstacle) - the use of AI and ML in audio can be more disruptive than in any other field, meeting real market requirements and consumers' needs. And that's the reason why we are seeing so many advancements and announcements.

Syntiant just announced its latest NDP120 Neural Decision Processor (NDP) chip for audio and sensor processing in battery-powered devices. The low-power Syntiant Core 2 technology contained in this latest chip delivers 25x the power of NDP devices currently shipping, supporting echo cancellation, beamforming, noise suppression, speech enhancement, speaker identification, keyword spotting, multiple wake words, event detection, and local command recognition. At the same time, Cadence now offers decoder implementations supporting MPEG-H 3D Audio for designs using its Tensilica HiFi DSPs, opening up the potential to bring interactive and immersive sound to millions of users worldwide.

Just among the news published this week at audioXpress.com, we find important examples, with crucial announcements of strategic hardware and software partnerships, new development platforms, and new silicon being released. The full audio DSP connection between those announcements is often missing, but the actual applications are clear as AI/ML moves "on-device" - the current industry buzzword is "edge AI," which in audio terms means making things happen closer to the microphone, or even directly in the ear. Obvious references always include voice recognition applications and voice processing for speech optimization, as well as active noise cancellation (ANC), but current hot trends are adding spatial audio processing and audio enhancement for augmentation as well.

In the report I wrote for audioXpress, because of page limitations, I was forced to leave out numerous practical application examples of companies working in this area, particularly in professional audio. But I was able to browse through some of the most promising examples of audio processing that will be exponentially improved when combined with ML models and controlled with AI-enabled processes. To get an idea of what's coming, we just need to consider the power of such platforms, combined with sensor fusion, geolocation, and eventually connected services. All things available now.

Imagine being able to remove reverberation from a live audio input signal - not just fixed frequency-dependent noise. Or having an adaptive ANC engine able to adjust and filter noises or pass-through sounds depending on current activity profiles and user location. Applying effective hearing enhancement and communications using optimized algorithms that have been extensively trained, but are also able to adjust in real time throughout changing acoustic environments, purely based on AI, without the need for user input.
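To make the adaptive idea concrete, here is a minimal sketch - in Python with NumPy, illustrative only and not any vendor's actual implementation - of the classic least-mean-squares (LMS) adaptive filter that underlies many adaptive noise cancellation schemes: a reference microphone picks up noise correlated with the noise in the primary signal, and the filter continuously adapts to subtract it, without user input.

```python
import numpy as np

def lms_noise_canceller(primary, reference, num_taps=8, mu=0.002):
    """Classic LMS adaptive noise canceller (illustrative sketch).
    `primary` contains the wanted signal plus noise; `reference` is a
    correlated noise pickup. The filter learns to predict the noise in
    `primary` from `reference` and subtracts it, sample by sample."""
    w = np.zeros(num_taps)           # adaptive filter weights
    out = np.zeros(len(primary))     # error signal = cleaned output
    for n in range(num_taps - 1, len(primary)):
        x = reference[n - num_taps + 1:n + 1][::-1]  # newest sample first
        y = w @ x                    # current noise estimate
        e = primary[n] - y           # cleaned sample (the error signal)
        w += 2 * mu * e * x          # steepest-descent weight update
        out[n] = e
    return out

# Toy demonstration: a slow sine buried in filtered broadband noise
rng = np.random.default_rng(0)
n = np.arange(5000)
wanted = np.sin(2 * np.pi * 0.01 * n)
noise = rng.standard_normal(len(n))
primary = wanted + 0.5 * noise + 0.3 * np.roll(noise, 1)
cleaned = lms_noise_canceller(primary, noise)
```

Real ANC engines add psychoacoustic tuning, latency constraints, and (increasingly) ML-driven mode selection on top, but the core loop - estimate, subtract, adapt - remains this simple.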
Whether running offline as trained algorithm plug-ins, such as Acon Digital DeVerberate 3, or as an AI-based online service, as Descript does with Studio Sound, removing reverberation and acoustic backgrounds from a voice signal is not only possible, it works extremely well.
I still remember when I saw the Waves plug-ins that we all used to carefully manage on a multitrack DAW session running on the most powerful workstation we could buy, suddenly being able to run directly on each channel of a digital console. In professional audio, we are seeing progress advance with each product iteration and DSP become ubiquitous in all classes of products. The most sophisticated studio processing tools are now available to all creators working at home, through cloud-based services and resources, which are understandably pioneering AI/ML for differentiation. Now, consumer audio is moving even faster.

And many of the offline software tools still available for the studio are basically benefiting from highly trained ML models, which very soon we will be able to port to those edge AI platforms. A few examples can be found in tools such as the latest DeVerberate 3 plug-in from Acon Digital, which is able to attenuate unwanted reverb present in recorded audio. The just-released version includes an entirely new algorithm based on deep learning for fully automatic reverb reduction of recorded voice, even with multiple voices speaking at the same time. By training a neural network on thousands of high-quality voice recordings in a wide variety of acoustical surroundings, the plug-in can automatically distinguish speech from reverb. Sound familiar? Yes, the same type of processing is actually being applied in smartphones, communication headsets, and even true wireless earbuds with ANC.
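The common mechanism behind these deep-learning denoisers and dereverberators is time-frequency masking: the network predicts, per STFT bin, how much of the energy is speech versus reverb or noise, and the masked spectrogram is resynthesized. The Python/NumPy sketch below illustrates only that masking machinery, with a crude energy threshold standing in for the trained network's prediction - the products mentioned above obviously do something far more sophisticated.

```python
import numpy as np

def apply_tf_mask(x, frame=256, hop=128, floor_db=-20.0):
    """Toy time-frequency masking via windowed STFT and overlap-add.
    A real ML dereverberator replaces the energy threshold below with
    a per-bin mask predicted by a trained neural network."""
    win = np.hanning(frame)
    floor = 10 ** (floor_db / 20)
    out = np.zeros(len(x))
    norm = np.zeros(len(x))
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * win               # analysis window
        spec = np.fft.rfft(seg)
        mag = np.abs(spec)
        mask = (mag > floor * mag.max()).astype(float)   # stand-in "network"
        rec = np.fft.irfft(spec * mask, n=frame) * win   # synthesis window
        out[start:start + frame] += rec
        norm[start:start + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)

# Toy demonstration: a bin-centered tone with low-level broadband noise
rng = np.random.default_rng(1)
t = np.arange(4096)
clean = np.sin(2 * np.pi * 16 * t / 256)
noisy = clean + 0.01 * rng.standard_normal(len(t))
masked = apply_tf_mask(noisy)
```

The entire learning problem reduces to predicting a good mask from context, which is why massive speech datasets matter so much.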

Actually, an improved and much more sophisticated version was recently demonstrated by Descript - a company I have mentioned previously in this space, which converts speech to text and allows edit decisions on recorded audio and video content, simply by editing text. Descript is working on sophisticated AI applied to audio processing and they have announced Studio Sound, a powerful new feature that enhances speakers’ voices while reducing and removing background noise, room echo, and other undesired sounds. Studio Sound is still in beta but already shows great potential to separate voice from the recorded acoustic effects, helping anyone recorded over a Zoom call to sound like they are using a broadcast microphone in a properly treated studio. And Descript also offers AI-based ducking, which automatically lowers the volume on music and ambient sounds when speech is present. Amateur podcasters love this.
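Ducking itself is a simple sidechain gain computation; the "AI" part is deciding robustly when speech is actually present. As a hedged illustration (plain Python/NumPy, not Descript's algorithm), a basic ducker follows a speech-presence envelope with a fast attack and a slow release:

```python
import numpy as np

def duck(music, speech_env, threshold=0.02, duck_gain=0.25,
         attack=0.9, release=0.999):
    """Sidechain ducking sketch: when the speech envelope exceeds the
    threshold, pull the music gain down toward duck_gain quickly
    (attack) and let it recover slowly (release)."""
    gain = 1.0
    out = np.empty_like(music, dtype=float)
    for i, (m, env) in enumerate(zip(music, speech_env)):
        target = duck_gain if env > threshold else 1.0
        coeff = attack if target < gain else release   # fast down, slow up
        gain = coeff * gain + (1.0 - coeff) * target   # one-pole smoothing
        out[i] = m * gain
    return out

# Toy demonstration: constant music level, speech active in the middle
music = np.ones(3000)
speech_env = np.zeros(3000)
speech_env[1000:2000] = 0.5
ducked = duck(music, speech_env)
```

In an AI-based implementation, the threshold test is replaced by a voice activity model, which is what keeps the ducker from pumping on breaths, music vocals, or background chatter.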

I also mentioned AI being used for spatial processing. And that in itself is a topic for many other articles. I'll just mention an example from LANDR, the online audio mastering service that uses the power of artificial intelligence and data from more than 10 million professionally mastered music tracks, and which has become a major platform in the music industry. Musicians have been using LANDR to master and submit their work directly to digital streaming platforms, normally in stereo. But now that Apple Music, Tidal, and Amazon Music are promoting Dolby Atmos for music, LANDR is also offering spatial audio mastering services in that format. Given the "creative" possibilities of mixing music in "spatial audio," I am not surprised that an AI-based service could do better than stoned "music producers".

But as I feared, that includes "remastering" stereo audio into Atmos using LANDR AI-based, automated processes. Yes, artificial intelligence will not help much when the idea is stupid in the first place.
LANDR - the company that gave AI a bad reputation, because it delivers an effective service that no one wants to admit is popular and widely used - offers all the best and most sophisticated audio processing tools, enhanced with the company's own machine learning, automated approach.
R&D Stories
Getting Started with Automotive Audio Bus (Part 2)
Understanding A2B Features
By Brewster LaMacchia
For the many audioXpress readers following Brewster LaMacchia's four-part article series about Getting Started with Analog Devices' Automotive Audio Bus (A2B): after the first installment detailed the core A2B technology's features, this second article explores the technical underpinnings of the A2B bus design. The article examines how certain optimizations for automotive applications must be accounted for when targeting broader audio applications. He also includes a look at analog phantom power and how A2B provides that feature in a digital system. This article was originally published in audioXpress, November 2020. Read the Full Article Now Available Here
Voice Coil Test Bench
The Beyma 12LEX1300Nd Pro Sound Woofer
By Vance Dickason
This Test Bench characterizes the Beyma 12LEX1300Nd 12" driver from Beyma's new LEX Series. This latest design from Acustica Beyma, the company founded in 1969 and headquartered in Valencia, Spain, features a FEA-optimized neodymium magnet structure and a 4" voice coil, and is targeted at two- and three-way PA speakers, while also being optimized for band-pass subwoofer designs. The entire woofer is designed with a rather substantial set of features for very high performance. This Test Bench fully explores the design and its benefits and characterizes all the relevant design parameters of the Beyma 12LEX1300Nd, helping readers understand the results obtained from the company's unique Maltcross forced convection cooling system, which uses the aluminum shorting ring as part of the cooling process. This article was originally published in Voice Coil, July 2021. Read the Full Article Now Available Here
audioXpress October 2021: Digital Login
Audio Product Design | DIY Audio Projects | Audio Electronics | Audio Show Reports | Interviews | And More 

Don't Have a Subscription?
Voice Coil October 2021: Digital Login
Industry News & Developments | Products & Services | Test Bench | Acoustic Patents | Industry Watch | And More


Advancing the Evolution
of Audio Technology

audioXpress features great articles, projects, tips, and techniques for the best in quality audio. It connects manufacturers and distributors with audio engineers and enthusiasts eager for innovative solutions in sound, acoustic, and electronics.

Voice Coil, the periodical for the loudspeaker industry, delivers product reviews, company profiles, industry news, and design tips straight to professional audio engineers and manufacturers who have the authority to make powerful purchasing decisions.

The Loudspeaker Industry Sourcebook is the most comprehensive collection of listings on loudspeaker material in the industry. Purchasers and decision makers refer to the guide for an entire year when making selections on drivers, finished systems, adhesives, domes, crossovers, voice coils, and everything in between.

© 2021 KCK Media Corp. All Rights Reserved.