Industry & Product News
L-Acoustics Announces L-ISA Studio Software Suite for Spatial Audio Creation
L-Acoustics just expanded the possibilities of spatial audio creation with the launch of the new L-ISA Studio software suite and upgraded L-ISA engine with simpler yet more powerful controls. For the past 18 months, the French company has also been porting the L-ISA Processor Audio Engine to work on a regular computer, allowing anyone with a laptop and a pair of headphones to do pre/post production work. The new L-ISA Studio software even includes a binaural rendering engine and scale simulation mode. Read More
APEX Releases New SMA-2 Compact 4-Channel Amplifier Module
Belgian manufacturer APEX introduced a new four-channel OEM amplifier module specifically designed for multi-way pro audio loudspeaker designs. The new Apex SMA-2 is capable of delivering up to 3000W into a single 4-ohm channel and uses APEX’s unique GlidePath direct-drive architecture, a proprietary Class-D technology in which the DSP system is an integral part of the amplifier, with a fully DC-coupled audio signal path. Read More
Jabra Launches PanaCast 50 Intelligent Videobar Hybrid Collaboration Solution
With the world transitioning to hybrid working environments, manufacturers of communication solutions are betting on a new category: videobars. A combination of soundbar and speakerphone design with a built-in webcam, these solutions can be placed under a monitor or display and quickly convert any space into a meeting room. Jabra, GN Group’s consumer arm, was the latest brand to join the category with the Jabra PanaCast 50 intelligent videobar. Read More
HEAD acoustics Releases Intuitive Structural Analysis Software Package
With the release of ArtemiS SUITE 12.5, HEAD acoustics introduced a complete structural analysis package, offering Time Domain Animation, Operational Deflection Shape (ODS), a Shape Comparison Tool, and Modal Analysis. This provides a unified software environment in which simulation and project engineers can address structural dynamics issues within the ArtemiS SUITE with ease and efficiency. Read More
Focusrite Group Acquires American Synthesizer Company Sequential
In a joint statement, Focusrite Group and Sequential LLC announced that British holding company Focusrite plc has acquired Sequential LLC, the respected American synthesizer manufacturer led by legendary electronic instrument designer and Grammy winner Dave Smith. It is a surprising development, given that it was only in 2015 that Dave Smith regained the rights to the Sequential name from Yamaha, allowing Dave Smith Instruments to be rebranded as Sequential in 2018. Read More
Sonarworks Announces SoundID Integration on Drop + THX Panda Wireless Headphones
Latvian audio technology company Sonarworks announced a new partnership with Drop, the community-driven commerce platform recently turned original product design company with the launch of the very successful Drop + THX Panda wireless headphones. In view of that popularity, Sonarworks is integrating its SoundID personalized audio technology directly into the planar magnetic headphones, increasing the options for target response calibration, personal hearing profiles, and sound preferences. Read More
Avantone Pro Introduces Gauss 7 Active Reference Monitors with GAU-AMT Tweeter
After its recreations of the classic NS-10M and Auratone studio monitors, Avantone Pro now pays tribute to Gauss, bringing a notable speaker name back with the Gauss 7. This active full-range, two-way reference monitor combines the character of vintage nearfields with new-school extended bandwidth and frequency response, decades after the much-missed Gauss Speaker Company’s closure. Read More
High End Munich Show Postponed From September 2021 to May 2022
The High End 2021 trade show, planned to take place in Munich from September 9 to 12, 2021, has now been moved to May 2022 due to the ongoing impact of the COVID-19 pandemic. The new parallel event, the International Parts + Supply (IPS), is also postponed. The High End Society Service GmbH reached this decision after a reassessment of the current situation and in close cooperation with the Board of the High End Society. Read More
Editor's Desk
J. Martins
(Editor-in-Chief)
Sound Design to Audio Product Design
What Can We Do With Spatial Audio?
What do we expect exactly from Spatial Sound?
A true tri-dimensional representation of sound in space? An effective way to translate virtual acoustics? A limitless soundstage for infinite creativity? What about sound “in your head”? Companies such as Embody are working hard to generate personalized spatial audio that focuses on a reference listening experience - meaning, you have the feeling of being in a real studio environment listening to a pair of studio monitors, while actually listening through headphones. Not sound “in your head,” but real reference room sound. And yet, most of the electronic music produced these days is optimized to sound as if it is “in your head,” because producers assume listeners use headphones - and “like” headphones. The concept of two speakers in a room is increasingly alien to most people.

Benefiting from an implementation of head-tracking that actually works, Apple proposes that we use the AirPods Pro earbuds or the new AirPods Max headphones to listen to immersive audio - as if we were in a real home theater environment - including a Dolby Atmos movie soundtrack, which would normally require ceiling speakers (typically a 7.1.4 channel layout). And yet, Dolby itself is promoting Dolby Atmos music, which is produced in a studio surrounded by speakers, but is actually mixed for binaural rendering, a smart speaker, a soundbar, or even smartphone speakers. If we are mixing music, what exactly should be done with that? What should the target be? Grammy Awards are already being given to “Immersive Audio” works. But what will consumers recognize as immersive in a Morten Lindberg Grammy award-winning album when listening on a smartphone that claims to support Dolby Atmos? Will they be able to tell a Lady Gaga song on TIDAL was mixed for “spatial audio,” or will they just say “nah… this is stereo!” when it’s not?

And what should we do with existing stereo conventions that are firmly established in our collective memory of how music should sound - such as an orchestral piece with the violins on the left, the clarinets center-left, the cellos on the right, and the trombones on the far right of the stage? Where can you start being creative without “breaking the rules”? How far can we go in sound design for immersive audio? Are there any boundaries? Or is spatial audio just wild territory, free to explore?
L-Acoustics just made it easier to create immersive content to be played by the company’s L-ISA systems with its new L-ISA Studio software.
I started to write about this topic with audio product developers in mind. But the questions apply equally to content creators, because basically we all have the same doubts. Audio developers designing new earphones are dealing with the signal processing and algorithms required by spatial audio, and with how those will be translated by (predominantly) two-channel systems. Others are attempting well-tuned, time-aligned multi-way transducer designs to convey some extra perception of tri-dimensionality. And will that sound the way the “musicians intended”? (If there even is such a thing in spatial audio…).

Apple engineers designing the new M1 iMac were told that the new desktop computer design needed to be able to play Dolby Atmos movies in a convincing way - and they had to do their best from a two-dimensional thin frame, with microspeakers placed relatively close to each other. In a way, not much different from designing a soundbar for immersive formats, just using smaller speaker drivers while benefiting from much more powerful “smart” DSP. But what did they tune the system for when it is playing music and not movies or games?

As spatial audio creation takes off for multiple applications, we are also seeing a diversity of creative approaches for immersive audio reproduction (also frequently marketed to consumers as “3D audio”). Things get more specific when we discuss audio formats, typically Dolby Atmos (the de facto mainstream immersive audio standard) and MPEG-H (the open, broadcast immersive audio standard, also used in Sony’s 360 Reality Audio) - both object-based. But there are many more “formats,” which all explore a combination of object-based audio, channel-based content, and/or scene-based audio for real-time user interaction (such as in gaming and interactive experiences like virtual reality).

All of that is extensively covered and summarized in Nuno Fonseca’s booklet “All You Need to Know About 3D Audio” - recommended. There is no simple explanation of what spatial or immersive audio entails, because things get complicated depending on whether you are looking at it from the content-creation perspective or from the actual implementation of playback systems.

Nuno Fonseca, a university professor and member of multiple Audio Engineering Society (AES) technical and standards committees on audio for cinema, audio for games, and spatial audio, is also the founder and CEO of Sound Particles, a fast-growing company that offers software tools for spatial audio creation - extensively adopted in Hollywood and by all major production houses (the list of Sound Particles’ users is an incredible Who’s Who of mainstream content production). The company’s tools and their “particles” approach are unique precisely because they can be both universal and agnostic to “formats.”
I have been skeptical about the potential to create immersive music experiences without the full capabilities of a true multichannel system. I mean, translating Dolby Atmos to a binaural system is already difficult and not very convincing, but doing it from a single speaker, as Amazon did with the Echo Studio? Can we even call that “immersive”?
Spatial/immersive audio is right now top of mind for all consumer electronics manufacturers and brands, but also for studio owners, live event managers, and sound engineers. If there’s one thing we know, it is that when we return to large-scale live music events, we will no longer work in left/right, or left/center/right formats as we did until now. As consumers are already streaming music in “spatial audio,” and watching immersive streaming content on Netflix, they will expect new levels of experience when they are finally able to see their favorite artists playing live again.

The questions remaining are: What can I do with spatial audio? How exactly should we conceptualize 3D sound?

The tools for “spatial audio” processing are now being created - and apart from ideas of moving sounds from back to front (great for flying helicopters - not so great for a guitar solo), there is no universally recognized idea of what it means. Particularly for music, everything remains relatively unfamiliar territory.

The live sound industry is now firmly focused on paving the way for experimentation. Immersive sound installations with multiple arrays and speakers surrounding auditoriums (with no speakers overhead, unlike Dolby Atmos cinema installations) are an attractive proposition. They tend to sound extremely clean and less fatiguing, because the whole dynamic changes: less compression is demanded when there is no need to “throw” sound from a single frontal location to cover the audience. With multiple speaker arrays, sound engineers gain extra headroom, giving them the margin to play a bit more with sound dynamics, and even with frequency-based timing/phase manipulation.

Ideal sound installations will surround the audience to precisely render any object-based sound creation, translated to whatever channel-based sound reproduction is in place - with hundreds, dozens, or just five channels, supported with a bit of virtual acoustics - as Meyer Sound, L-Acoustics, d&b audiotechnik, Yamaha, Spatial, FLUX, and many other companies are enabling.
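The object-to-channel translation described above can be sketched in a few lines. Below is a minimal, purely illustrative example of constant-power pairwise panning of a sound object across a horizontal ring of speakers - a deliberately simplified stand-in; the commercial systems named above use far more sophisticated rendering (VBAP, wave field synthesis, virtual acoustics), and all names here are the author's own illustration, not any vendor's API:

```python
import numpy as np

def ring_pan(azimuth_deg, speaker_azimuths_deg):
    """Return per-speaker gains for a sound object at a given azimuth,
    using constant-power panning between the two adjacent speakers
    of a horizontal ring (simplified 2D illustration)."""
    az = np.radians(azimuth_deg % 360.0)
    spk = np.radians(np.sort(np.mod(speaker_azimuths_deg, 360.0)))
    n = len(spk)
    # Close the ring: append the first speaker one full turn later,
    # and wrap the source azimuth into the covered range if needed.
    ext = np.append(spk, spk[0] + 2.0 * np.pi)
    if az < spk[0]:
        az += 2.0 * np.pi
    i = np.searchsorted(ext, az, side="right") - 1   # left speaker of the pair
    f = (az - ext[i]) / (ext[i + 1] - ext[i])        # position within the pair
    gains = np.zeros(n)
    gains[i % n] = np.cos(f * np.pi / 2.0)           # constant-power crossfade:
    gains[(i + 1) % n] = np.sin(f * np.pi / 2.0)     # g_a^2 + g_b^2 == 1
    return gains

# Five speakers evenly spaced around the audience; an object halfway
# between the first two speakers lands equally on both.
gains = ring_pan(36.0, [0, 72, 144, 216, 288])
```

An object-based mix stores only azimuths (and elevations, distances) per object; a renderer like this one, scaled up, is what lets the same mix play back over hundreds of channels or just five.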
Sound Particles’ software processes sound as particles in a tri-dimensional space. The company’s latest plug-ins use intensity or brightness, pitch or MIDI notes to control movement, evolving from traditional stereo to Ambisonics, or from 5.1 to binaural. And this can be used in a fully controlled form, or totally randomized for creative effects.
On a different playing field, manufacturers now designing the next generation of home audio speakers, consumer headphones, or true wireless earbuds to match the next big thing in “Dolby Atmos-capable smartphones” are facing much different dilemmas. Other than a movie soundtrack, what other “spatial” sound references should they be testing with?

While professional audio is moving confidently ahead, the consumer electronics industry is currently busy fitting its most powerful digital signal processors into low-power wearables, precisely to generate those spatial audio cues in binaural systems. In recent examples, CEVA is working with spatial audio pioneer VisiSonics to combine motion sensors for head-tracking with its powerful DSP family, in order to handle multichannel audio and 3D audio impressions rendered binaurally using generic or personalized head-related transfer function (HRTF) profiles. And Dirac offers a complete suite of spatial audio solutions for headphones, combining its patented Dynamic HRTF technology, magnitude response correction, impulse response correction, and digital signal enhancement, which can be ported to all major platforms from Qualcomm, Realtek, Airoha, and BES, with multiple SDKs available. In fact, like other DSP pioneers active in spatial audio, Dirac perfected many of those technologies originally for automotive and home theater applications.
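At its core, the binaural rendering these companies accelerate is a convolution of each source signal with a head-related impulse response (HRIR) per ear. The toy sketch below uses a synthetic delay-and-gain HRIR pair to fake a source on the listener's right - purely illustrative and not any vendor's implementation; real systems use measured or personalized HRTFs and update them with head tracking:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair,
    producing a 2-channel binaural signal."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Toy HRIRs standing in for measured responses: a source to the right
# arrives earlier and louder at the right ear (interaural time and
# level differences - two of the strongest localization cues).
fs = 48000
delay = int(0.0006 * fs)                  # ~0.6ms interaural time difference
hrir_right = np.zeros(delay + 1)
hrir_right[0] = 1.0                       # direct, full-level arrival
hrir_left = np.zeros(delay + 1)
hrir_left[delay] = 0.6                    # later, head-shadowed arrival

t = np.arange(fs // 10) / fs              # 100ms test signal
mono = np.sin(2.0 * np.pi * 440.0 * t)    # 440Hz tone
stereo = render_binaural(mono, hrir_left, hrir_right)
```

With head tracking, a renderer re-selects or interpolates the HRIR pair every few milliseconds as head orientation changes, which is what keeps the virtual sources anchored in the room rather than "in your head."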

But all those development tools still need reference material in order to become relevant for consumers. As the late Bruce Swedien (1934–2020) and Al Schmitt (1930–2021) reminded us, music “production” is as much about experimentation and creativity as it is about meeting established conventions - particularly when addressing specific music genres. An experienced music producer knows exactly how to position the percussion and drum sounds relative to the bass track, and how to use dynamic processing to create the required tri-dimensionality between the voice and all the different backing and solo instruments. In their minds, the mixing process of those “aural cues” is defined “spatially” through a two-channel system, and the result can be extremely impressive when translated to an actual “spatial audio” music mix. But what should we do with music created FOR spatial audio?

As Nuno Fonseca appropriately states, “Space is still one dimension that is not fully explored by musicians.”
Spatial proposes a simple to install and use product line, complemented with services, to make dynamic immersive audio available everywhere. "We've been crafting an integrated software platform including a real-time interactive engine, 3D creative tools, an intuitive control app, and service," says Michael Plitkins, Spatial's Co-founder.
Audio Electronics
Speaker ID Technology
By Vikrant Singh Tomar (Fluent.ai)
As part of its focus on voice convergence, in its April 2021 edition, audioXpress featured a contributed article by Vikrant Singh Tomar, the founder and CEO of Fluent.ai, describing how increasing consumer demand for personalized experiences has sparked the creation of the company’s speaker ID technology, and exploring the challenges that this type of technology still faces. A valuable perspective from a company positioned right at the edge of voice recognition, personalized experiences, voice profiling, and language challenges, implementing low-power designs using AI and edge processing. This article was originally published in audioXpress, April 2021. Read the Full Article Here
Voice Coil Test Bench
The BWX-6502 Midbass from MISCO’s Bold North Audio Line
By Vance Dickason
In this article, Vance Dickason characterizes the Bold North Audio BWX-6502 Midbass woofer, which comes from a series of exciting new drivers designed and manufactured in the US by MISCO (Minneapolis Speaker Company), the oldest OEM driver manufacturer in the US, founded in 1949. The recently expanded Bold North lineup now includes the BWX-6502, which is built on the same platform as the BWX-6501, with a similar XBL2 "dual-gap" motor structure. Its cone assembly consists of an Abaca fiber (paper) cone and a 60mm (2.4") diameter Abaca fiber dust cap, with compliance provided by a wide high-excursion NBR surround and a 3.5" diameter flat cotton spider (damper). Driving the cone assembly is a 1.5" diameter voice coil wound with round copper-clad aluminum wire (CCAW) on a nonconducting Kapton former. This article was originally published in Voice Coil, January 2021.
audioXpress May 2021: Digital Login
Audio Product Design | DIY Audio Projects | Audio Electronics | Audio Show Reports | Interviews | And More 

Don't Have a Subscription?
Voice Coil May 2021: Digital Login
Industry News & Developments | Products & Services | Test Bench | Acoustic Patents | Industry Watch | And More


Advancing the Evolution
of Audio Technology

audioXpress features great articles, projects, tips, and techniques for the best in quality audio. It connects manufacturers and distributors with audio engineers and enthusiasts eager for innovative solutions in sound, acoustics, and electronics.

Voice Coil, the periodical for the loudspeaker industry, delivers product reviews, company profiles, industry news, and design tips straight to professional audio engineers and manufacturers who have the authority to make powerful purchasing decisions.

The Loudspeaker Industry Sourcebook is the most comprehensive collection of listings on loudspeaker material in the industry. Purchasers and decision makers refer to the guide for an entire year when making selections on drivers, finished systems, adhesives, domes, crossovers, voice coils, and everything in between.

© 2021 KCK Media Corp. All Rights Reserved.