Robotic percussionists typically rely on simple solenoid-based actuators to trigger drum strokes. The dynamical limitations of these actuators constrain the musical expressivity of such robotic drummers. In this work, I developed a physics-based generative model that captures the mechanics underlying a multiple-bounce drum stroke produced by a human drummer. The model's output serves as the reference trajectory for the PID controller that drives the motors of the robotic drumming device. The project used the Robotic Drumming Prosthesis (developed by Dr. Gil Weinberg's Robotic Musicianship Group at the Georgia Tech Center for Music Technology) as its test bed. In addition to the basic model, I developed a suite of transformation algorithms that take the output of the basic model and modify it in a musically sensible manner.
[Figure: Detailed torque diagram outlining the different forces at work on a drumstick during a drum stroke.]
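As a rough illustration of the approach (not the project's actual model), the sketch below approximates each rebound of the stick as a geometrically decaying half-sine of stick angle, then tracks that reference with a textbook discrete PID loop driving a toy first-order plant. The function names, gains, and decay parameters are all hypothetical.

```python
import numpy as np

def bounce_trajectory(n_bounces=5, amp=1.0, period=0.08, r=0.6, fs=1000):
    """Hypothetical multiple-bounce reference: each rebound is a half-sine
    whose amplitude and duration decay geometrically by factor `r`."""
    theta = []
    for k in range(n_bounces):
        t = np.arange(0, period * r**k, 1.0 / fs)
        theta.append(amp * r**k * np.sin(np.pi * t / (period * r**k)))
    return np.concatenate(theta)

def pid_track(ref, kp=8.0, ki=2.0, kd=0.05, dt=1e-3):
    """Textbook discrete PID tracking `ref` through a toy first-order plant
    standing in for the motor-plus-stick dynamics."""
    pos, vel, integ, prev_err = 0.0, 0.0, 0.0, 0.0
    out = np.empty_like(ref)
    for i, r_t in enumerate(ref):
        err = r_t - pos
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        vel += (u - 2.0 * vel) * dt   # crude motor damping, illustrative only
        pos += vel * dt
        out[i] = pos
    return out

ref = bounce_trajectory()
actual = pid_track(ref)
print(f"RMS tracking error: {np.sqrt(np.mean((ref - actual)**2)):.4f}")
```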
csSpectral is a real-time, Csound-based multi-effects processor featuring a Streaming Phase Vocoder and six other FFT-based spectral algorithms from Boulanger Labs. I was the lead iOS developer on this project, responsible for the front-end software architecture and UX implementation. csSpectral's diverse collection of real-time DSP effects lets you create unique textures and timbres by transforming your voice, instrument, or iTunes library. csSpectral also supports Audiobus, allowing you to send audio to and receive audio from other apps, making the processing possibilities nearly endless.
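csSpectral's streaming implementation lives in Csound, but the core phase-vocoder idea can be sketched offline in a few lines of Python. The following hedged example time-stretches a signal without changing its pitch by accumulating per-bin phase increments between analysis frames; the function name and parameters are illustrative, not the app's API.

```python
import numpy as np

def phase_vocoder_stretch(x, stretch=1.5, n_fft=1024, hop=256):
    """Time-stretch `x` by `stretch` with a basic offline phase vocoder."""
    win = np.hanning(n_fft)
    # analysis STFT frames
    frames = np.array([np.fft.rfft(win * x[i:i + n_fft])
                       for i in range(0, len(x) - n_fft, hop)])
    # expected per-hop phase advance of each bin
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    # fractional read positions into the analysis frames
    steps = np.arange(0, len(frames) - 1, 1.0 / stretch)
    phase = np.angle(frames[0])
    out = np.zeros(int(len(steps) * hop + n_fft))
    for t, step in enumerate(steps):
        i = int(step)
        mag = np.abs(frames[i])  # (could interpolate between frames i, i+1)
        # measured phase increment, wrapped to [-pi, pi), then accumulated
        dphi = np.angle(frames[i + 1]) - np.angle(frames[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += omega + dphi
        out[t * hop:t * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase))
    return out

# usage: stretch one second of a 440 Hz tone to ~1.5 s at the same pitch
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = phase_vocoder_stretch(tone, stretch=1.5)
print(len(tone) / sr, len(stretched) / sr)
```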
Co-developed with Takahiko Tsuchiya, Hypnos is a Virtual Studio Technology instrument (VSTi) that lets musicians touch and manipulate sound directly by modifying wavetables. Hypnos also offers additional shaping tools such as waveshaping and phase distortion, enabling the effortless creation of expressive music. You can find a demo of the original Hypnos here. Hypnos was sold to the premier audio technology company 2nd Sense Audio and is currently available as Wiggle.
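To illustrate the kind of wavetable reading and phase distortion involved (a sketch only, not Hypnos internals), the example below scans a single-cycle table with a piecewise-linear phase warp in the spirit of classic phase-distortion synthesis. The `render` function and its `distort` parameter are hypothetical.

```python
import numpy as np

def render(wavetable, freq, dur, sr=44100, distort=0.0):
    """Read a single-cycle wavetable at `freq` with optional phase distortion.

    `distort` in [0, 1) moves the warp knee from 0.5 toward 0, so the first
    half of the table is traversed faster than the second, brightening the
    tone. All names and parameters here are illustrative.
    """
    n = len(wavetable)
    phase = (np.arange(int(dur * sr)) * freq / sr) % 1.0
    knee = 0.5 * (1.0 - distort)
    warped = np.where(phase < knee,
                      0.5 * phase / knee,
                      0.5 + 0.5 * (phase - knee) / (1.0 - knee))
    # linear interpolation between adjacent table samples, wrapping at the end
    idx = warped * n
    i0 = idx.astype(int) % n
    frac = idx - np.floor(idx)
    return (1 - frac) * wavetable[i0] + frac * wavetable[(i0 + 1) % n]

table = np.sin(2 * np.pi * np.arange(2048) / 2048)  # start from a sine cycle
bright = render(table, freq=220.0, dur=0.5, distort=0.7)
```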
In this project, I developed a system for instrument identification in single-channel polyphonic audio mixtures using Non-Negative Matrix Factorization (NMF). The fundamental question of choosing the appropriate number of NMF components is resolved by estimating the number of distinct pitches in the audio and using these estimates to determine the rank during both training and testing. The project focused on four instruments: violin, flute, trumpet, and saxophone. The training set consisted of monophonic audio samples, and the test set was created synthetically by combining two to four monophonic musical phrases in a randomized fashion.
(work done jointly with Takahiko Tsuchiya)
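A minimal sketch of this pipeline, with random arrays standing in for real spectrograms and pitch estimates: per-instrument spectral dictionaries are learned with the NMF rank set by the estimated pitch count, then the concatenated dictionaries are held fixed while only activations are learned on the mixture. Instruments with high activation energy are declared present. All data, names, and sizes below are placeholders.

```python
import numpy as np

def nmf(V, rank, n_iter=200, W=None, eps=1e-9):
    """Multiplicative-update NMF: V (freq x time) ~= W @ H.
    If W is given, it is held fixed and only activations H are learned."""
    rng = np.random.default_rng(0)
    fixed_W = W is not None
    if W is None:
        W = rng.random((V.shape[0], rank))
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        if not fixed_W:
            W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Training (sketch): one dictionary per instrument, rank = estimated pitch count.
train_specs = {name: np.random.rand(513, 400) for name in
               ["violin", "flute", "trumpet", "saxophone"]}
n_pitches = {"violin": 12, "flute": 10, "trumpet": 8, "saxophone": 9}  # placeholders
dicts = {name: nmf(S, rank=n_pitches[name])[0]
         for name, S in train_specs.items()}

# Testing (sketch): fix the concatenated dictionaries, learn activations on the mix.
W_all = np.hstack(list(dicts.values()))
mix = np.random.rand(513, 600)                  # stand-in mixture spectrogram
_, H = nmf(mix, rank=W_all.shape[1], W=W_all)
start = 0
for name in dicts:
    r = dicts[name].shape[1]
    print(name, H[start:start + r].sum())       # summed activation energy
    start += r
```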