Industry & Product News
LG Electronics and Audioburst Collaborate to Build Audio Content Voice Search In-Car
LG Electronics and California-based Audioburst announced a collaboration to build next-generation in-car infotainment systems for leading automakers, giving consumers new ways to search and explore audio content by voice, and giving brands deeper analytics around in-car usage. LG Electronics will be the first company to integrate Audioburst's Deep Analysis API for Live Audio Streams. The experience, still in stealth mode, will be unveiled in January to select audiences at CES 2019 in Las Vegas, NV.   Read More


Qualcomm Announces New Snapdragon 855 Mobile Platform with aptX Adaptive and True Wireless Stereo Plus Support
During the second day of the annual Snapdragon Technology Summit, Qualcomm unveiled its newest-generation Snapdragon 855 Mobile Platform. This is the world's first commercial mobile platform supporting multi-gigabit 5G, the latest artificial intelligence (AI), and immersive extended reality (XR) technologies, ready for next-generation mobile devices arriving in the first half of 2019. The flagship platform also introduces support for the new aptX Adaptive and TWS+ (True Wireless Stereo Plus) wireless audio experiences.    Read More


VOIZ To Debut VOIZ AiRadio, the "Radio You Can Talk To" at CES 2019
There was a time when listening to a radio station meant being within the antenna coverage area, which for low-power FM stations also meant a restricted local audience. Today, all that has changed and radio content is available on-demand over the Internet. But while tuning a radio was never hard, finding the right audio content online still takes some work. Enter VOIZ, which has combined the latest audio and voice-control technologies and premium materials in a "retro-futuristic" receiver to create VOIZ AiRadio, the first "Radio You Can Talk To."    Read More


Vesper Launches VM1001 Piezoelectric MEMS Microphone Specific for Array Applications and Outdoor Environments
Vesper introduced the VM1001, the latest piezoelectric MEMS microphone in its product line, designed to ensure accurate voice assistant performance with extreme environmental robustness. The VM1001 is the first microphone in Vesper's portfolio specifically built for array applications, and it is ideal for voice user interfaces, beamforming arrays, outdoor devices, industrial sensors, smart home devices, and other applications where low noise, high stability, and durability are desired.   Read More


Bose Announces Frames, a New Wearable and Audio Augmented Reality Platform
Following an earlier presentation at SXSW 2018 of Bose AR, the company's concept for an audio augmented reality (AR) platform in the shape of glasses you can hear, the Massachusetts company has now announced Frames, a single product combining the protection and style of premium sunglasses, the functionality and performance of wireless headphones, and the world's first audio AR platform. Bose also intends to develop a family of dedicated AR apps for Frames.   Read More


New IsoAcoustics OREA Bronze Isolators for Lighter Audio Components and Turntables
IsoAcoustics announced that the OREA Bronze has been added to its OREA series of isolators. The new OREA Bronze isolators, now available, provide extraordinary levels of isolation for hi-fi audio components and turntables. To create this newest addition to the series, IsoAcoustics lowered the weight threshold of the OREA line in order to achieve adequate isolation performance with lighter components, including amplifiers, DACs, CD players, small speakers, and turntables.   Read More


Meyer Sound Conducts Research on the Effect of Variable Atmospheric Conditions on Concert Sound Reinforcement
In partnership with Denmark's Roskilde Festival, Meyer Sound realized a rare opportunity to make precision measurements on the effect of variable atmospheric conditions on concert sound reinforcement. Working under the direction of Meyer Sound Senior Scientist Roger Schwenke, a technical team installed multiple weather sensors and measurement microphones during last summer's event to measure the effects of atmospheric conditions on audio system response at varying distances above ground level.   Read More


It's Not That Smart, But It Sings! The Alexa-Compatible Big Mouth Billy Bass
Gemmy Industries debuted its highly anticipated Big Mouth Billy Bass with high-tech functionality: Alexa compatibility! "Get fishy with it. The popular Big Mouth Billy Bass is back and better than ever," says the company, which is obviously hoping to "fish" for all Alexa users this holiday season. Everyone's favorite singing and talking fish is now programmed to respond to Alexa voice commands delivered to a compatible Amazon Echo device.   Read More


Editor's Desk
J. Martins
Editor-in-Chief

Audio Software Quo Vadis?
Artificial Intelligence and Machine Learning Used in Audio Tools


When we look at Adobe's portfolio of solutions, which includes the famous Acrobat (for PDF), Photoshop, Premiere Pro (video editing), and After Effects (visual effects and animation), in a family of 21 core software programs, there is only one for audio: Adobe Audition. Audition, used for audio recording, mixing, and restoration, was acquired from Syntrillium Software in 2003, when it was known as Cool Edit Pro.
 
I recently interviewed Adobe's Audio Product Manager, Durin Gleaves, just before the company made the big announcement about its latest updates to Creative Cloud, previewed at the IBC 2018 show in Amsterdam. I really wanted to understand how Adobe is positioning Audition, other than as a companion tool for Premiere Pro, and why and how exactly the company is evolving its software using artificial intelligence and machine learning.

Adobe Audition might not be the most popular audio software out there, but it provides a comprehensive audio post solution, now with next-generation AI-based audio cleanup technologies and a modern - and much faster - multitrack environment.

For the latest 2018 release of Creative Cloud, Adobe focused on making its new generation of artificial intelligence (AI)-powered features, which the company calls Adobe Sensei, available across all its software. These features essentially help users accelerate mechanical tasks, such as auto lip-syncing an animation with performance-captured mouth movements and spoken sounds - a very smart and cool feature. Specifically for Audition, the new machine learning-based Auto-Ducking feature automatically lowers soundtrack volume during spoken dialog. There is also a new Intelligent Audio Cleanup (shared with Premiere Pro), whose Reduce Noise and Reduce Reverb sliders in the Essential Sound panel make removing reverb and background noise easier than ever. Noise removal and cleanup were always a strong area for Audition, but the process was very time consuming, and there are now many algorithmic plug-ins available that do it better and faster. Still, inside Audition and Premiere Pro, these enhancements are certainly welcome. Audition is now more tightly integrated with Premiere Pro and other tools than ever, and that's great for video editors.
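For readers curious about what "ducking" means under the hood, here is a minimal sketch of the classic sidechain technique: follow the envelope of the dialog track and attenuate the music whenever dialog is active. This is only an illustration of the general idea; all function names, thresholds, and time constants below are my own assumptions, not Adobe's implementation, which uses machine learning to automate these decisions.

```python
def envelope(signal, attack=0.2, release=0.01):
    """One-pole envelope follower over a list of samples (illustrative)."""
    env, out = 0.0, []
    for s in signal:
        rect = abs(s)  # rectify the sample
        # React quickly when level rises, slowly when it falls
        coeff = attack if rect > env else release
        env += coeff * (rect - env)
        out.append(env)
    return out

def duck(music, dialog, threshold=0.1, duck_gain=0.3):
    """Attenuate music samples while the dialog envelope exceeds threshold."""
    env = envelope(dialog)
    return [m * (duck_gain if e > threshold else 1.0)
            for m, e in zip(music, env)]
```

A real implementation would smooth the gain changes (fade in/out) rather than switch them instantly, which is one of the manual parameters that an "automatic" feature spares the user from tuning.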
 
I should also point out that Adobe often complements its own software with tools from other companies, when it is something Adobe doesn't do itself and there is a clear market reference and specialized software it can support. That's what the company recently did with 3D animation, by supporting Maxon's Cinema 4D software. But that becomes hard to fit within its membership model, as I'll explain.
 
Looking at the latest Adobe Audition CC, it clearly isn't a full-featured DAW, and certainly not a tool for music production (it doesn't support MIDI or virtual instruments), and as exciting as Auto-Ducking can be for editors, it's not something that's going to win new audio users for Adobe. It is certainly a more powerful version under the interface, allowing for 400% faster mixdowns and bounces, and more precise surround panning, among other enhancements.
 
Probably a good example of how Adobe looks at audio updates is the fact that this version introduces support for the good old Mackie HUI protocol, finally allowing Audition users to work with current control surfaces and consoles, including support for HUI-enabled timecode display and control devices (20 years later...). Also, funny enough, the 2018 Character Animator CC update introduces MIDI support for action triggers.
 
These are obviously welcome enhancements, considering that you get them automatically with your subscription, but it would not encourage anyone to pay for an upgrade. As Adobe explains in the announcement, "improvements we put in each release... often come from feature requests from customers and users and are specific solutions to real problems." So there.
 
When I asked Durin Gleaves to give me an idea of Adobe's focus for these Audition updates, he was quick to mention "simplification" and "collaboration," because as part of the Creative Cloud, users are no longer working on just one application.

Durin Gleaves is Product Manager for Audio at Adobe.

"Audition has a lot of adoption from people that are working on video editing, are visual designers, or are creating a podcast to promote a brand. There are a lot of different users and different skill levels, people for whom audio is not their expertise, but they need to create something with sound."
On the other hand, he also explained that they want to create tools that are more powerful but simpler to use. I think that characterizes very well what currently separates the smaller, specialized software houses, which make incredibly powerful and deep software (in most cases with an extensive learning curve), from large software companies (e.g., Adobe), which design tools for media production in general. Integrating machine learning and AI to make those tasks easier and faster for users is a completely new focus, and something that I think will determine how creative tools are defined from now on.
 
As Gleaves explains, it simply is no longer possible to teach every user every single manual task (I agree, and can attest to it from my own extensive use of Adobe's software). So, the software offers "automatic" features side by side with the traditional options to manually adjust things in detail and experiment as much as needed, without being intimidating. Very importantly, this allows the software foundation to scale, and allows users to grow in their ability to explore and make the best use of the available tools. The fact that the software (and the platform it runs on) is now much more powerful and faster also helps make these features "automatic" and highly effective, of course.
 
And as Gleaves explains, it does the processing faster and at the local level (CPU-based) without taking processes to the cloud. So "users can be on a desert island and apply noise-removal or reverb reduction to a hundred clips with low latency (milliseconds) and low overhead, for a project running on a laptop." 


The new DeReverb tool in Adobe Audition can dramatically improve audio by reducing unwanted echo from a clip, using adaptive algorithms that apply real-time adjustments based on the specific characteristics of each sound clip. Like DeReverb, the new DeNoise algorithm is trained using machine learning and gets better over time.
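To give a sense of what tools like DeNoise improve upon, here is a sketch of spectral subtraction, the classic pre-ML noise reduction technique: estimate the noise magnitude per frequency bin from a noise-only passage, then subtract it from each frame's spectrum while keeping the phase. This is only the textbook baseline, not Adobe's proprietary ML-based algorithm; all names below are illustrative. A naive DFT is used to keep the example self-contained.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a list of real samples."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning real samples."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtract(frame, noise_mag):
    """Subtract an estimated noise magnitude per bin, keeping the phase."""
    X = dft(frame)
    cleaned = []
    for Xk, Nk in zip(X, noise_mag):
        mag = max(abs(Xk) - Nk, 0.0)  # floor at zero: no negative magnitudes
        cleaned.append(cmath.rect(mag, cmath.phase(Xk)))
    return idft(cleaned)
```

The hard clipping at zero is what causes the "musical noise" artifacts this technique is known for; the ML-based approach replaces the fixed noise estimate with a learned model of what speech and noise look like.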

Interestingly, Adobe's Creative Cloud subscription model makes it possible for users to benefit from new features, and especially new AI-based tools such as speech-to-text transcription, without paying extra or buying additional products. While many software companies are currently proposing AI "as a service" (basically pay-per-use) - something I saw very often among broadcast technology manufacturers this year - Adobe can again leverage those AI features to increase the perceived value of its subscription model. Of course, that's valid for products developed in-house or technology that Adobe acquired. If, as with transcription engines, Adobe is licensing the technology, it becomes harder to integrate it into the workflow... for free. Adobe is working on getting text from video automatically transcribed and indexed on the cloud to increase production possibilities, especially for large media companies and broadcasters. If the integration is there, all users/subscribers can benefit. As Gleaves puts it, "I think we can do better."
 
I invite you to follow this link to read the full interview with Durin Gleaves, where we discuss Adobe's plans for immersive audio, how Adobe sees the use of audio production and post-production tools on the cloud, and more (too much to include in a newsletter... :)


Practical Test & Measurement
Advanced Test Methods for Improving ANC Headphone Performance
By Hans W. Gierlich (HEAD acoustics)
 
This article on advanced test methods for improving Active Noise Cancellation (ANC) headphone performance is a must-read for product developers and anyone interested in learning how to evaluate these systems. ANC in headphones has been available for several years and has seen significant improvements in the amount of peak attenuation achievable. Today, ANC is used in many different scenarios, including by users who want to shut themselves off from their environment and avoid unwanted background noises (e.g., when traveling by plane or bus, or sitting in a noisy room) - which is why users turn the ANC to maximum. Alternatively, users still desire maximum noise cancellation but want to play back audio such as music, news, or podcasts at the same time - or simply have speech in a phone call. This article focuses on these two scenarios and investigates how we can gain better data to draw stronger conclusions about ANC performance. This article was originally published in audioXpress, October 2018.   Read the Article Now Available Here


Voice Coil Test Bench
Two 28 mm Dome Tweeters from Dayton Audio: RST28A-4 and the RST28F-4 
By Vance Dickason
 
In this Test Bench, I characterized two new 28 mm dome tweeters from Dayton Audio - the RST28A-4 and the RST28F-4. These two transducers are both new tweeters from the Dayton Reference Series, perhaps a small step down from the new Epique series. Dayton's RST28A-4 and RST28F-4 are similar tweeters that share pretty much the same platform, the primary difference being that the RST28A-4 uses a 28 mm aluminum dome and the RST28F-4 a 28 mm coated silk dome. Features in common include a 4 Ω voice coil, a ferrite magnet motor structure, an aluminum faceplate, a tuned injection-molded rear cavity, low-viscosity ferrofluid in the gap area, a phase diffuser with a protective screen, 50 W RMS power handling, replaceable diaphragms, and gold-plated terminals. This article was originally published in Voice Coil, September 2018.   Check it out here!


AX December 2018: Digital Login
Audio Product Design | DIY Audio Projects | Audio Electronics | Audio Show Reports | Interviews | And More 

Don't Have a Subscription?
VC December 2018: Digital Login
Industry News & Developments | Products & Services | Test Bench | Acoustic Patents | Industry Watch | And More