When we look at Adobe's portfolio of solutions - which includes the famous Acrobat (for PDF), Photoshop, Premiere Pro (video editing), and After Effects (visual effects and animation) - in a family of 21 core software programs, there is only one for audio: Adobe Audition. Audition, used for audio recording, mixing, and restoration, was acquired from Syntrillium Software in 2003 and was previously known as Cool Edit Pro.
Recently, I interviewed Adobe's Audio Product Manager, Durin Gleaves, just before the company made its big announcement about the latest updates to the Creative Suite, previewed at the IBC 2018 show in Amsterdam. I really wanted to understand how Adobe is positioning Audition, beyond being a secondary tool for Premiere Pro, and why and how exactly the company is evolving the software using artificial intelligence and machine learning.
|
Adobe Audition might not be the most popular audio software out there, but it provides a comprehensive audio post solution, now with next-generation AI-based audio cleanup technologies and a modern - and much faster - multitrack environment.
|
For the latest 2018 release of Creative Cloud, Adobe focused on making its new generation of artificial intelligence (AI)-powered features, which the company calls Adobe Sensei, available across all its software. These are features that essentially help users accelerate mechanical tasks, such as auto lip-syncing an animation with performance-captured mouth movements and spoken sounds - a very smart and cool feature. Specifically for Audition, the new Auto-Ducking feature uses machine learning to automatically lower soundtrack volume during spoken dialog. There's also a new Intelligent Audio Cleanup (shared with Premiere Pro), which, together with the Reduce Noise and Reduce Reverb sliders in the Essential Sound panel, makes removing reverb and background noise easier than ever. Noise removal and cleanup have always been a strong area for Audition, but the process was very time-consuming, and there are now many algorithmic plug-ins available that do it better and faster. Inside Audition and Premiere, though, these enhancements are certainly welcome. Audition is now more tightly integrated with Premiere Pro and other tools than ever, and that's great for video editors.
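For readers curious about what "ducking" amounts to conceptually, here is a minimal, hypothetical sketch in Python (the function name, thresholds, and time constants are my own illustration, not Adobe's implementation): track a rough loudness envelope of the speech and smoothly attenuate the music wherever speech is present.

```python
import numpy as np

def duck_music(music, speech, sr, threshold=0.02, duck_gain=0.25,
               attack_s=0.05, release_s=0.5):
    """Lower music gain wherever the speech envelope exceeds a threshold.

    music, speech: 1-D float arrays of equal length; sr: sample rate in Hz.
    Values here are illustrative, not taken from any real product.
    """
    # Rectify and smooth the speech signal to get a coarse loudness envelope.
    win = max(1, int(sr * 0.02))  # ~20 ms averaging window
    env = np.convolve(np.abs(speech), np.ones(win) / win, mode="same")

    # Target gain: ducked while speech is present, unity otherwise.
    target = np.where(env > threshold, duck_gain, 1.0)

    # One-pole smoothing so gain changes ramp instead of clicking:
    # fast attack (duck quickly), slower release (recover gently).
    gain = np.empty_like(target)
    g = 1.0
    for i, t in enumerate(target):
        coef = attack_s if t < g else release_s
        alpha = np.exp(-1.0 / (coef * sr))
        g = alpha * g + (1.0 - alpha) * t
        gain[i] = g

    return music * gain
```

Audition's version, of course, uses machine learning to decide where dialog actually is, rather than a fixed amplitude threshold, and exposes the results as editable keyframes.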
I should also point out that Adobe often integrates tools from other companies into its own software, when the functionality is something it doesn't offer and there is a clear market reference - a specialized product it can support. That's what the company recently did with 3D animation, by supporting Maxon's Cinema 4D software. But that becomes hard to fit within its membership model, as I'll expand on later.
Looking at the latest Adobe Audition CC, it clearly isn't a fully featured DAW, and certainly not a tool for music production (it doesn't support MIDI or virtual instruments), and as exciting as Auto-Ducking can be for editors, it's not something that's going to win new audio users for Adobe. Under the interface, however, this is certainly a more powerful version, allowing for 400% faster mixdowns and bounces and more precise surround panning, among other enhancements.
Probably a good example of how Adobe looks at audio updates is the fact that this version introduces support for the good old Mackie HUI protocol, finally allowing Audition users to work with current control surfaces and consoles, including support for HUI-enabled timecode display and control devices (20 years later...). Also, funny enough, the 2018 Character Animator CC update introduces MIDI support for action triggers.
These are obviously welcome enhancements, considering that you get them automatically with your subscription, but it would not encourage anyone to pay for an upgrade. As Adobe explains in the announcement, "improvements we put in each release... often come from feature requests from customers and users and are specific solutions to real problems." So there.
When I asked Durin Gleaves to give me an idea of Adobe's focus for these Audition updates, he was quick to mention "simplification" and "collaboration," because as part of the Creative Cloud, users are no longer working on just one application.
|
Durin Gleaves is Product Manager for Audio at Adobe.
|
"Audition has a lot of adoption from people that are working on video editing, are visual designers, or are creating a podcast to promote a brand. There are a lot of different users and different skill levels, people for whom audio is not their expertise, but they need to create something with sound."
On the other hand, he also explained that they want to create tools that are more powerful but simpler to use. I think that characterizes very well what currently separates the smaller, specialized software houses, which make incredibly powerful and deep software (with an extensive learning curve in most cases), from large software companies (e.g., Adobe), which design tools for media production in general. Integrating machine learning and AI to make tasks easier and faster for those users is a completely new focus, and something that I think will determine how creative tools are defined from now on.
As Gleaves explains, it simply is no longer possible to teach every user every single manual task (I agree, and can attest to that from my own extensive use of Adobe's software). So the software offers "automatic" features side by side with the traditional options to manually adjust things in detail and experiment as much as needed, without being intimidating. Just as important, this allows the software foundation to scale, and lets users grow in their ability to explore and make the best use of the tools that are available. The fact that the software (and the platform it runs on) is now much more powerful and faster also helps make these features "automatic" and highly effective, of course.
Gleaves also notes that the processing is fast and done locally (CPU-based), without taking processes to the cloud. So "users can be on a desert island and apply noise-removal or reverb reduction to a hundred clips with low latency (milliseconds) and low overhead, for a project running on a laptop."
|
The new DeReverb tool in Adobe Audition can dramatically improve audio by reducing unwanted echo from a clip, using adaptive algorithms that apply real-time adjustments based on the specific characteristics of each sound clip. Like DeReverb, the new DeNoise algorithm is trained using machine learning and gets better over time.
|
Interestingly, the Adobe model for the Creative Cloud subscription makes it possible for users to benefit from new features - and especially new AI-based tools such as speech-to-text transcription - without having to pay extra or buy additional products. While many software companies (and I saw that very often among broadcast technology manufacturers this year) are currently proposing AI "as a service" (basically pay-per-use), Adobe can again leverage those AI features to increase the perceived value of its subscription model. Of course, that's valid for products developed in-house or technology that Adobe acquired. If, as with transcription engines, Adobe is licensing the technology, it becomes harder to integrate it into the workflow... for free. Adobe is working on getting text from video automatically transcribed and indexed in the cloud to expand production possibilities, especially for large media companies and broadcasters. If the integration is there, all users/subscribers can benefit. As Gleaves puts it, "I think we can do better."
I invite you to follow this link to read the full interview with Durin Gleaves, where we discuss Adobe's plans for immersive audio, how Adobe sees the use of audio production and post-production tools in the cloud, and more (too much to include in a newsletter... :)