THE PRESENTATIONS & SPEAKERS
The ORPHEUS project, funded by the European Commission under the HORIZON 2020 programme, united ten major players in the European media industry.
Over the past 30 months, ORPHEUS has developed, implemented and validated a complete object-based media chain for audio content. We ran two successful pilots to demonstrate the key features and benefits of object-based broadcasting, including immersive sound, foreground/background control, language selection, and enhanced programme services. Pilot 1 was the first ever object-based live interactive radio drama. In pilot 2 we introduced variable-length, non-linear playback, which lets users adapt content in time or depth.
ORPHEUS’ technical coordinator Andreas Silzle explains the basic principles, gives an overview and outlines the specific challenges of the project.
Andreas Silzle received his PhD from the Institut für Kommunikationsakustik (IKA) of Prof. Blauert in Bochum, on the quality evaluation of auditory virtual environments. He is now a senior scientist at Fraunhofer IIS in Erlangen. His current research and development addresses the recording, processing and reproduction of three-dimensional sound fields, with a special focus on quality evaluation. He is actively involved in international standardization within the ITU-R.
The object-based approach to media offers the most fundamental opportunity to re-imagine the creation, management and enjoyment of media since the invention of audio recording and broadcasting. This change, however, requires certain adaptations within the existing technical workflows of broadcasters. This talk presents an example of an end-to-end object-based audio architecture based on open standards, defined by the ORPHEUS project consortium. The final ORPHEUS reference architecture specification has been shaped by taking into account typical channel-based broadcast workflows and, more importantly, by incorporating the knowledge and lessons learned during the pilot phases.
Michael Weitnauer is an R&D engineer and project manager at IRT, where he coordinates IRT’s activities in ORPHEUS. His current work includes the development, standardization and introduction of object-based audio as the new solution for media production and consumption. He is an active member and chair of standardization groups within ITU-R, EBU and DVB.
MPEG-H Audio, a Next Generation Audio codec, creates a new experience for the user by enabling immersive sound, interactivity and personalization. The presentation will describe the features of MPEG-H Audio, the steps necessary to upgrade the existing audio infrastructure to support a next generation audio codec, and the different phases of the transition towards full use of the MPEG-H Audio feature set.
Harald Fuchs received his diploma in electrical engineering from the University of Erlangen, Germany, in 1997 and joined Fraunhofer IIS in the same year. From 2002 onwards he concentrated on multimedia systems aspects and standardization, and from 2009 focused on audio technologies for broadcasting and broadband streaming applications. He has actively participated in several standardization organizations, including MPEG, DVB, ATSC, DLNA and HbbTV, and contributed to several standards documents in those groups. More recently he has focused on MPEG-H Audio, specifically on the delivery and transport of MPEG-H Audio in ATSC 3.0 systems and MPEG-2 Transport Stream based DVB systems. Since 2018 he has been head of the Media Systems and Applications department.
The world is constantly evolving, and with it, technologies and usage habits undergo real change. Television as we know it must also evolve and adapt to changing habits and changing needs. It is a matter of survival for our industry, and that is why we need to explore every possible way to improve the user experience, while keeping in mind the mission of public service and openness to all. This requires close work with creators and constant feedback from users.
For more than 20 years, Lidwine Hô has been working on different audio topics at France Télévisions. She is now in charge of the implementation of new audio tools in web creations and linear television. She works closely with the creators on VR programs, interactive programs, and all new forms of narration.
As part of ORPHEUS, project partner MAGIX explored challenges presented to pre-production tools in an object-based broadcasting context.
This talk aims to provide a concise overview of the features implemented in the DAW Sequoia.
Marius Vopel joined MAGIX as a software developer in 2016. He received his master’s degree in Applied Information Technologies from the University of Applied Sciences Dresden.
The Mermaid’s Tears is an immersive and interactive radio drama created by the BBC as part of Orpheus. Listeners can follow one of three characters and switch between them during the programme, which gives three different perspectives on the same story. Listeners can also experience the drama in stereo, surround sound and binaural. Chris will describe how the BBC produced the drama as a live object-based broadcast, the tools they developed in order to achieve this, and the feedback they received from a large-scale public trial.
Chris Baume is a Senior Research Engineer at BBC R&D in London, where he leads the BBC’s research into audio production tools and the BBC’s role in the Orpheus EU H2020 project. His research interests include semantic audio analysis, interaction design, object-based audio and spatial audio. Chris is a Chartered Engineer and received his PhD at the Centre for Vision, Speech and Signal Processing at the University of Surrey.
Offering different versions in terms of length and depth of information is one of the more challenging requests for personalisation.
Object-based media technologies now make it possible to let users decide how deep they want to go, or how much time to spend, on certain topics.
In ORPHEUS pilot phase 2, we applied a common principle of communication and turned it into a practical approach.
Werner Bleisteiner has more than 25 years’ experience in broadcast journalism across radio, television and the internet.
He has worked as a reporter, author, editor and producer for various broadcast editorial departments throughout the ARD network. He has also created numerous radio documentaries on the history and development of broadcasting and audio technology. Werner has been involved in BR’s digital radio and media development since 2005. As Creative Technologist, he now designs and coordinates internal and external innovation projects for BR-KLASSIK’s Online/Streaming/TV department.
The ORPHEUS radio player app is a development milestone for mobile devices. Never before have so many features of interactive audio been implemented: rendering for binaural or channel-based output, multi-language selection, enhanced information and navigation features in transcripts, plus variable-length playback. Niels will take you behind the scenes and explain the details.
Niels Bogaards co-founded elephantcandy to create innovative audio apps for mobile devices. Focused on usability, elephantcandy constantly seeks to bring advanced audio technology to new audiences by combining efficient implementations with well-designed user interfaces.
Previously, Niels was a member of the Analysis/Synthesis team at IRCAM and worked as a senior developer at the Society for Old and New Media in Amsterdam.
Immersive audio experiences in the home are rare because of the need for complex spatial audio systems. However, our homes contain many loudspeakers within devices such as mobile phones, laptops and smart assistants. “Media device orchestration” (MDO) uses an ad-hoc array of consumer devices to deliver or augment a media experience. This talk will discuss the commissioning and production of a radio drama that uses ad-hoc arrays, as well as data from psychoacoustic experiments evaluating people’s experiences of MDO.
Trevor Cox is Professor of Acoustic Engineering at the University of Salford. He is currently working on two major projects. Making Sense of Sound is a big data project examining everyday sounds combining psychoacoustics and machine learning. The other project is investigating future technologies for spatial audio in the home (S3A).
Trevor has presented 25 science documentaries on BBC Radio, authored articles for New Scientist and the Guardian, and written two popular science books. The first book, Sonic Wonderland, won the ASA prize for science writing. He has developed and presented science shows seen by 15,000 children, including appearances at the Royal Albert Hall, the Purcell Room and the Royal Institution. He currently holds the Guinness World Record for producing the longest echo, in one of the Inchindown Oil Tanks. His second popular science book, Now You’re Talking, has just been published.
So now we have seen how the technology works – how can we bring its benefits to the audience? What are possible scenarios? What steps need to be taken? Which obstacles lie in the way?
Let’s tackle these questions. And create some visions!
Participants:
Paola Sunna (EBU), Chris Baume (BBC), Matthieu Parmentier (France TV), Lars Hedh (Swedish Radio), Peter Fohrwikl (BR)
Moderator: Simon Tuff
Simon Tuff is currently a Project Engineering Partner at the BBC. He joined the BBC in 1988 as a trainee radio engineer and, having worked as an engineer across most of the BBC’s radio services, moved to TV projects in 2006 when managing part of BBC R&D. He has worked on numerous BBC and EBU projects since, including loudness [EBU R128], object-based audio, binaural audio, audio over IP and audio archiving. He is a member of the DPP Technical Standards Committee, is vice chair of the AVC group in the Technical Module of DVB, and co-chairs the audio specialist group of FAME, a European body looking at the audio technology that will accompany next generation UHDTV and its interoperation across Europe.
THE DEMOS
upHear enables professional-quality immersive audio capturing with custom microphone configurations. The demo showcases a VST plug-in that automatically outputs stereo, 5.1, or 7.1+4 channels using recordings from a compact 8-channel microphone array.
In this demo we will show how to record and monitor 3D audio scenes using a spherical microphone array (SMA) and the tools developed during the ORPHEUS project. Two tools will be presented: 1) the MicProcessor plugin, which converts the signals recorded by an SMA to the Higher-Order Ambisonics format; and 2) the HoaScope plugin, which displays a map of the incoming sound energy and allows the user to listen selectively to sounds arriving from a particular region of space. A sketch of the underlying encoding principle follows below.
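To give a feel for the spherical-harmonic weighting involved, here is a minimal Python sketch that encodes a single plane wave into first-order Ambisonics (ACN/SN3D). It is illustrative only: the actual MicProcessor performs full higher-order encoding from the array’s capsule signals, which additionally involves radial filtering.

```python
# Illustrative sketch (not the MicProcessor implementation): encode a
# mono signal arriving from (azimuth, elevation) into first-order
# Ambisonics, ACN channel order (W, Y, Z, X) with SN3D normalisation.
import numpy as np

def foa_encode(signal, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    gains = np.array([
        1.0,                      # W: omnidirectional (order 0)
        np.sin(az) * np.cos(el),  # Y: left-right
        np.sin(el),               # Z: up-down
        np.cos(az) * np.cos(el),  # X: front-back
    ])
    return gains[:, None] * signal[None, :]  # shape: (4, num_samples)
```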
As part of ORPHEUS, project partner MAGIX explored challenges presented to pre-production tools in an object-based broadcasting context.
Stop by to see, try and discuss new developments for object-based production in the DAW Sequoia.
In partnership with France Télévisions, which uses Pyramix software in its multichannel post-production studios, Merging Technologies has developed a function to export Audio Definition Model (ADM) metadata within BWF files, allowing the use of object- and channel-based content as well as scene-based formats. This step helps France TV prototype the mastering of Next Generation Audio programmes.
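For context, a BWF file carries the ADM metadata as an XML document inside an ‘axml’ RIFF chunk (EBU Tech 3285). The Python sketch below, which is illustrative and not Merging’s implementation, shows one way to locate and extract that chunk; a full implementation would also handle the 64-bit BW64/RF64 variant used for long recordings.

```python
# Minimal sketch: extract the ADM XML from the 'axml' chunk of a BWF file.
import struct

def read_axml(path):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a RIFF/WAVE file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # no axml chunk present
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"axml":
                return f.read(chunk_size).decode("utf-8")  # the ADM XML
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned
```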
The ADM Pre-Processor has been developed by IRT to enable interoperability in terms of object count and features during production. In an automatic process, ADM files are transformed to match specific ADM profiles for different distribution platforms while preserving the creator’s intent.
The ADMix suite provides a set of tools for the recording and rendering of ADM files. The ADMix Renderer is a standalone application able to render an ADM file over any 2D or 3D loudspeaker layout, or over headphones. The ADMix Recorder is a “plug-out” that can be interconnected with any Digital Audio Workstation (DAW) using the Open Sound Control (OSC) protocol to transmit the ADM metadata, as sketched below. The ADMix tools are available for macOS and Windows.
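As a rough illustration of what such an OSC link looks like, the Python sketch below sends position updates for one audio object using the python-osc package. The address paths, host and port are invented for illustration; they are not ADMix’s actual OSC scheme.

```python
# Hypothetical sketch of a DAW sending object metadata over OSC.
# Requires: pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # host/port are assumptions
# Position update for audio object 1 (azimuth/elevation in degrees, distance)
client.send_message("/adm/obj/1/azim", 30.0)  # address scheme invented
client.send_message("/adm/obj/1/elev", 10.0)
client.send_message("/adm/obj/1/dist", 1.0)
```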
IRT demonstrates the technical setup which was used by BR sound engineers for the ORPHEUS project to produce two pilot programmes. It uses IRT’s experimental production tools for object-based audio along with head-tracked binaural rendering.
The world’s first standardized ADM Renderer, published by the EBU, is explained and demonstrated by its developers to introduce its basic capabilities and concepts.
The ORPHEUS Radio app.
Experience our next generation radio player app: try interactive audio objects, immersive binaural or channel-based rendering and variable-length playback along with multi-language and enhanced information and navigation features in transcripts.
All of this is available in the productions made for the ORPHEUS pilots by BBC, BR and Fraunhofer IIS.
The object-based ORPHEUS iOS app feeds a prototype 3D soundbar that reproduces highly convincing immersive sound. The various interactive features will be presented.
Trinnov Audio presents immersive playback of object-audio sequences on a 3D setup. Immersive MPEG-H content is rendered over 32 loudspeakers using the multi-award-winning luxury home-cinema processor Trinnov Altitude32. The embedded Remapping algorithm ensures spatially accurate 3D playback, while the room acoustics are digitally corrected by the Optimizer technology.
The Mermaid’s Tears is an immersive and interactive radio drama created by the BBC as part of ORPHEUS.
You will be able to experience our audio drama production, which allows you to follow the characters in a story as they move in and out of the scene and around multiple locations in a house (the setting for the story), while images related to elements of the story appear on screen. This is all made possible by providing those elements in the production as separate ‘objects’, which are then re-assembled in the web browser. Here too, you can listen to the drama in stereo, surround sound or binaural. Come experience it for yourself and learn how it was made.
MPEG-H has been on air in Korea since May 2017 and has been selected for the upcoming Chinese UHD TV system. The demo shows a prototype set-top box connected to a TV, and various interactivity features that become possible with MPEG-H 3D Audio, including language selection, dialogue enhancement, audio description and additional audio channels such as “team radio” for DTM car racing.
Immersive audio experiences usually require complex spatial audio systems. But our homes contain many loudspeakers (smart assistants, laptops, mobiles, etc.) that can form ad-hoc arrays. This demo of “Media device orchestration” (MDO) uses synchronised wireless connections to allow a radio drama to be reproduced on an ad-hoc array formed from a laptop and two mobiles.
Based on the ORPHEUS project’s main pillars (the user requests and use cases, the pilot architecture, and the pilots themselves), we have developed a holistic model for the practical examination and evaluation of user experience within an object-based media ecosystem. The model comprises three main dimensions:
- audio experience
- usability experience
- information experience
In this poster demo we explain the methodology, the setting at JOSEPHS® and the results obtained.