History of Audiovisual Exploration - PhD Research Talk

Found a video online of me presenting my PhD research into the history of humanity's explorations into creating synaesthetic relationships between sound and image. It basically covers approaches from Isaac Newton up until the present day. Enjoy :)

MindBuffer Creative Workflow Discussion at SIAL RMIT 2015

MindBuffer was invited to speak at the first instalment of the public talk series from SIAL Sound Studios. Mitchell Nordine and I presented the evolution of MindBuffer and why we felt the need to develop custom tools instead of using off-the-shelf software to realise our creative vision.

Includes excerpts from:
Ryoichi Kurokawa – Syn
Ryoji Ikeda – The Transfinite
Granular Synthesis – Modell 5

Also includes live demonstrations of MindBuffer's generative music software (Jen) and realtime audiovisual granular synthesis software (Kortex), as well as others.

Pause Festival - Project Pixel Squared Panel Discussion

I was invited along with Richard De Souza, Kit Webster and Sean Healy to sit on a panel discussing modern realtime visual techniques and audiovisual installations as part of the 2015 Pause Festival in Melbourne. The panel was moderated by Drew Clarke, and you can view the discussion below. Beneath the video is a breakdown of the questions and answers from various members of the panel. It was really great to be involved and to share ideas and techniques with fellow realtime visual ninjas.

MindBuffer Live at Code

We got asked to play at the Melbourne Media Lab's Code event in 2013. Had a blast; see the video link below for a snapshot of the set. I think the video accurately depicts the kind of aesthetics we were going for with the setup back then.

P.S. Sorry about the sound, it was recorded on a friend's phone.

Hanna Remix Kortex Triple Screen

In 2013, MindBuffer was planning a tour of the US but had to postpone due to PhD commitments :( In the lead-up, we decided to test Kortex's abilities at the time by creating an AVGS (audiovisual granular synthesis) triple-screen remix synced to one of our tracks. We chose to remix Hanna as it's one of our favourite films from that time. The whole video is rendered in realtime with no post-production, everything synchronised from the inbuilt timeline in Kortex. Have a look below.

Realtime Audio Granular Synthesis in openFrameworks

I finally got round to making an example for openFrameworks that granulises live microphone input using the ofxMaxim library. I had to make a little hack in the maximilian.h file, specifically to the loopRecord() method. This is described in the GitHub repo along with the source, which can be found here -> https://github.com/JoshuaBatty/LiveAudioGranularSynthesis-Maxim

The length of the audioBuffer is directly related to the length of the .wav file you tell the maxi play method to use. I have included a 7-second silent .wav file in the example, but you can swap this out for a recording of whatever length you want your audio buffer to be.
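To make the moving parts concrete, here is a minimal, library-agnostic sketch of the idea: a fixed-length loop buffer that is continuously overwritten by the live input, plus a single grain voice that reads windowed slices back out of it. This is not the ofxMaxim code itself (the real example relies on maxiSample's loopRecord() and Maximilian's grain classes); the class and method names below are illustrative only.

#include <cmath>
#include <cstddef>
#include <vector>

// Fixed-length loop buffer + a single grain voice. Illustrative sketch only.
class LiveGranulator {
public:
    // bufferSeconds mirrors the length of the silent .wav used in the example.
    LiveGranulator(double sampleRate, double bufferSeconds)
        : sr(sampleRate),
          buffer(static_cast<std::size_t>(sampleRate * bufferSeconds), 0.0f) {}

    // Call once per input sample (e.g. from audioIn): overwrite the loop buffer.
    void record(float in) {
        buffer[writePos] = in;
        writePos = (writePos + 1) % buffer.size();
    }

    // Call once per output sample (e.g. from audioOut): plays one grain at a time.
    float play(double grainLengthSec = 0.1, double pitch = 1.0) {
        const std::size_t grainLen =
            static_cast<std::size_t>(grainLengthSec * sr);
        if (grainPhase >= grainLen) {
            // Grain finished: start the next one just behind the write head,
            // so we always read recently recorded material.
            grainPhase = 0;
            grainStart = (writePos + buffer.size() - grainLen) % buffer.size();
        }
        // pitch != 1.0 transposes by reading through the grain faster or slower.
        const std::size_t idx =
            (grainStart + static_cast<std::size_t>(grainPhase * pitch)) % buffer.size();
        // Hann window removes clicks at the grain boundaries.
        const double pi = 3.14159265358979323846;
        const double env =
            0.5 * (1.0 - std::cos(2.0 * pi * grainPhase / grainLen));
        ++grainPhase;
        return buffer[idx] * static_cast<float>(env);
    }

private:
    double sr;
    std::vector<float> buffer;
    std::size_t writePos = 0;
    std::size_t grainStart = 0;
    std::size_t grainPhase = 0;
};

In the actual openFrameworks example, record() roughly corresponds to what loopRecord() does inside audioIn(), and play() to the grain playback inside audioOut(); ofxMaxim also gives you overlapping grain voices, which this single-voice sketch leaves out for brevity.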

Any suggestions, other uses, etc. are welcome!

Also here is a little video demonstrating the example in action. Enjoy!

LaserBeam Exploration

In 2016 I purchased two KVANT Clubmax3000 lasers in order to add another dimension to MindBuffer shows. The awesome thing about these lasers is the new FB4 DAC module, which enables me to control the laser synthesis wirelessly from Kortex. As a result, I can now route and pipe music data generated by Jen to the various laser synthesis parameters in realtime. Obviously this is fairly exciting: within a few seconds of creating sonic-laser mappings you can quite easily be sitting inside a field of lasers playing drum and bass amen breaks in mid air.
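For a sense of what "routing music data to laser synthesis parameters" can look like, here is a purely hypothetical sketch. Jen, Kortex and the FB4 don't expose a public API, so everything below (MusicFrame, LaserFrame and the particular mapping choices) is invented for illustration; it only shows the shape of a sonic-to-laser mapping, not how the data is actually transmitted to the hardware.

#include <algorithm>
#include <cmath>

// Hypothetical snapshot of musical data arriving from a generative engine.
struct MusicFrame {
    double amplitude; // 0..1 envelope follower on the master bus
    double pitchHz;   // dominant pitch of the current event
    bool   kickOnset; // true on a kick-drum hit
};

// Hypothetical normalised laser-synthesis parameters (0..1).
struct LaserFrame {
    double scanRate;   // how fast the beam pattern oscillates
    double fanSpread;  // angular width of the beam fan
    double brightness; // overall output level
};

// One possible mapping: louder material scans faster, higher pitches spread
// the fan wider, and kick hits flash the output to full brightness.
LaserFrame mapMusicToLaser(const MusicFrame& m) {
    LaserFrame l;
    l.scanRate   = std::clamp(m.amplitude, 0.0, 1.0);
    l.fanSpread  = std::clamp(std::log2(std::max(m.pitchHz, 20.0) / 20.0) / 10.0, 0.0, 1.0);
    l.brightness = m.kickOnset ? 1.0 : 0.3 * std::clamp(m.amplitude, 0.0, 1.0);
    return l;
}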