[Sursound] Soundstack 2018
Angie M
2018-09-17 10:18:57 UTC
Soundstack - A free three day series of workshops & masterclasses on the art and technologies of spatial audio, from Ambisonics to object-based audio, interactivity to synthesis, 5-7 Oct 2018 in London.

Apply to attend (any/all days) before 25 Sept at https://goo.gl/forms/0LYvfbJIKCKwG6Rp1, find out more at https://soundstack.intersections.io, or contact ***@qmul.ac.uk with questions. This is an intermediate-level event, and requires some understanding of:

• Max MSP
• Pure Data
• Unity
• Spatial audio


Friday 5th Oct - Call & Response Enclave, Deptford, London SE8 4NT
and
Saturday 6th Oct / Sunday 7th Oct 2018 - Queen Mary University London, Mile End Campus, London E1 4NS

These workshops will introduce you to four artist-engineers working across binaural and multichannel audio, at the cutting edge of 3D and interactive applications for VR, AR, installations and performance. Over three days you will hear about specific software and techniques, as well as the aesthetic potential of working with immersive audio in fixed and real-time settings. You will receive hands-on instruction as well as demonstrations of work.

By the end of the workshop you will have a better understanding of how to spatially compose sound, how to use the open-source Heavy compiler to work with Pure Data in environments like Unity, how to build a wearable binaural audio system with head-tracking in Pure Data using the Bela platform, and how to use Ircam’s ‘Spat’ suite.

FRIDAY 5th October
Spatial aesthetics + performance, Friday 5th Oct, 10am - late, with Tom Slater (workshop) and Alo (performance) @ Call & Response Enclave, Deptford, London SE8 4NT

This one-day workshop will give the sound-curious the opportunity to learn a range of new skills in 3D audio. Working with free software, we use an open workshop model in which participants work towards self-directed projects. Participants will be able to diffuse their own work over C&R's unique 15-speaker 3D sound system and take part in an informal exhibition at our space. Areas covered include: binaural and Ambisonic recording with 3D microphones, Ambisonic processing and spatialisation, immersive sound installation design, and generative spatialisation techniques.
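
For a flavour of the Ambisonic processing covered, here is a minimal, tool-agnostic Python sketch of encoding a mono source into first-order B-format at a chosen direction. The FuMa channel conventions and all names here are assumptions for illustration; the workshop itself works with free spatialisation software rather than this code.

import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal (1-D array) into first-order B-format (4 x N).

    Assumes FuMa conventions: channels W, X, Y, Z with W attenuated by
    1/sqrt(2); azimuth is measured counter-clockwise from straight ahead.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)              # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front/back
    y = mono * np.sin(az) * np.cos(el)   # left/right
    z = mono * np.sin(el)                # up/down
    return np.stack([w, x, y, z])

# Example: one second of noise placed 45 degrees to the left, slightly raised.
sig = np.random.default_rng(0).standard_normal(48000)
print(encode_foa(sig, azimuth_deg=45, elevation_deg=10).shape)   # (4, 48000)
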
SATURDAY 6th October
Bela, Pure Data & head-tracking, Saturday 6th Oct, 10am - 1pm with Becky Stewart @ Queen Mary Uni London, Mile End Campus E1 4NS

Learn how to use Bela Mini, an embedded platform for low-latency audio signal processing, to generate interactive binaural audio. You will start by learning how to program Bela using Pure Data, and then how to have the code interact with the physical world using custom sensors. Using paper craft and circuitry, you will get to develop your own creative ideas and start prototyping a wearable musical performance or installation.
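
As a rough illustration of what head-tracked binaural involves (the workshop itself uses Pure Data on Bela, not Python), the sketch below rotates a first-order B-format sound field against a head-tracker's yaw reading so the scene stays anchored to the room rather than to the head. The FuMa convention and the names are assumptions for illustration only.

import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_radians):
    """Rotate a first-order B-format frame around the vertical axis.

    A pure yaw rotation leaves W (omni) and Z (height) untouched and only
    mixes the horizontal components X and Y. To compensate for a head turn
    of +yaw, rotate the field by -yaw before binaural decoding.
    """
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    return w, c * x - s * y, s * x + c * y, z

# Example: the listener turns 30 degrees to the left, so the field is
# rotated 30 degrees the other way.
head_yaw = np.radians(30.0)
print(rotate_bformat_yaw(0.707, 1.0, 0.0, 0.0, -head_yaw))
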
100% organic interactive audio for games and VR using Unity and the Heavy compiler (hvcc)

Saturday 6th Oct, 2pm - 5.30pm with Chris Heinrichs @ Queen Mary Uni London, Mile End Campus E1 4NS

This workshop covers the full workflow of designing, compiling and integrating interactive sound assets for Unity using the newly open-sourced Heavy compiler by the late Enzien Audio. After a brief introduction to computationally generated audio and its use cases, we will go through the process of designing a simple but flexible sound model. We will then use the hvcc Python tool (https://github.com/enzienaudio/hvcc) to convert this model into an audio plugin for Unity. Within Unity, we'll discover some of the interesting things we can accomplish with simple scripts and interactions, and how each instance of our model can be spatialised binaurally in order for us to begin converging on optimal hyperreal sonic sensation.
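
For orientation, here is a minimal sketch of driving hvcc from Python to turn a Pd patch into a Unity target. The paths, patch name and exact flags are illustrative assumptions; check the hvcc README or its --help output for the version you install.

import subprocess

PATCH = "patches/_main.pd"    # hypothetical top-level Pd patch (heavy-supported objects only)
OUT_DIR = "build/heavy"       # hypothetical output directory for generated sources/targets
NAME = "SpatialSynth"         # hypothetical name used for the generated plugin

subprocess.run(
    [
        "python", "hvcc.py", PATCH,
        "-o", OUT_DIR,        # output directory
        "-n", NAME,           # name for the generated code
        "-g", "unity",        # generator: Unity native audio plugin target
    ],
    check=True,
)
# Parameters exposed in the patch (receives annotated with @hv_param) should
# then appear on the generated Unity plugin, where C# scripts can drive them
# per instance at runtime.
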

SUNDAY 7th October
Introduction to Spat, Sunday 7th Oct, 10am - 5pm with Thibaut Carpentier @ Queen Mary Uni London, Mile End Campus E1 4NS

Thibaut will present an introductory workshop on Ircam Spat. Spat is a real-time spatial audio processor that allows composers, sound artists, performers, or sound engineers to control the localization of sound objects in 3D auditory spaces. The hands-on session will cover the practical implementation and usage of panning techniques (VBAP, Ambisonics, binaural, etc.), reverberation (convolution or parametric), object-based production, spatial authoring, 3D mixing and post-production.
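
To give a sense of the panning mathematics behind techniques like these (this is not Spat's API, which lives in the Max environment), here is a small Python sketch of 2-D VBAP for a single loudspeaker pair: the source direction is written as a weighted sum of the two enclosing loudspeaker directions, and the normalised weights become the channel gains. The function and angles are illustrative assumptions.

import numpy as np

def vbap_pair_gains(source_az_deg, spk_az_deg=(-30.0, 30.0)):
    """Gains for a loudspeaker pair enclosing the source (2-D VBAP).

    Solves source_direction = g1 * spk1_direction + g2 * spk2_direction,
    then normalises the gains for roughly constant power.
    """
    def unit(az_deg):
        a = np.radians(az_deg)
        return np.array([np.cos(a), np.sin(a)])

    base = np.column_stack([unit(spk_az_deg[0]), unit(spk_az_deg[1])])
    g = np.linalg.solve(base, unit(source_az_deg))
    return g / np.linalg.norm(g)

print(vbap_pair_gains(0.0))    # centred source: roughly [0.707, 0.707]
print(vbap_pair_gains(20.0))   # biased towards the +30 degree loudspeaker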

Participants are required to be conversant with digital audio tools and fluent with Max programming.
