I’ve worked with a wide range of technologies, from front-end and back-end web stacks to desktop and mobile apps, graphics, and audio.

Here are a few of my recent projects. Visit GitHub to see some code, and email me or connect on LinkedIn to get in touch about a project.

Wireless Audio Visualiser

DST Innovations

I built an iOS app for DST Innovations to serve as a controller for their prototype multimedia fashion product.

The product features a wireless peripheral driving a visualiser display, which reacts to live music. The iOS app is a music player which analyses the audio in real time and computes a frequency spectrum, which is sent wirelessly to the peripheral.

The main requirement was that the music and visuals play in lockstep, so I devised a clock synchronisation mechanism. The iOS app uses a basic approximation of Network Time Protocol (NTP) to establish a common clock with the peripheral, accurate to within a few milliseconds. It then delays audio playout by a fixed safety margin, and sends each audio frame with a timestamp against the common clock so the peripheral knows exactly when each frame should be displayed.
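The offset calculation at the heart of that mechanism is the standard NTP one: the client timestamps its request and the reply, the server timestamps receipt and send, and the clock offset and round-trip delay fall out of the four values. A minimal sketch (the app itself is native iOS code; the timestamps below are illustrative):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP calculation.
    t0: request sent (client clock), t1: request received (server clock),
    t2: reply sent (server clock),   t3: reply received (client clock)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated clock offset
    delay = (t3 - t0) - (t2 - t1)            # round trip, minus server processing time
    return offset, delay

# Example: peripheral clock runs 100 ms ahead, ~20 ms transit each way.
offset, delay = ntp_offset_and_delay(t0=0.000, t1=0.120, t2=0.125, t3=0.045)
# offset ≈ 0.100 s, delay ≈ 0.040 s
```

Averaging several such exchanges, and discarding ones with unusually high delay, is what brings the estimate down to a few milliseconds in practice.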

I worked with iOS’s Audio Units API to play and capture the audio data, and used a lock-free circular buffer to pull samples out, running FFT analysis on them on a separate thread and queueing the results for transmission. Bluetooth LE’s limited data rate meant packing the data fairly tightly, so I designed and documented a binary wire protocol with a concise layout for sync and data packets.
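To give a flavour of that kind of layout: a spectrum frame can be a single type byte, a timestamp against the common clock, and one byte per frequency bin. The field sizes and packet types below are hypothetical, not the actual protocol:

```python
import struct

# Hypothetical data-packet layout (real field sizes will differ):
# 1-byte type | 4-byte big-endian timestamp (ms, common clock) | 16 one-byte bins
PKT_DATA = 0x01
N_BINS = 16

def pack_data_packet(timestamp_ms, bins):
    assert len(bins) == N_BINS
    return struct.pack(">BI%dB" % N_BINS, PKT_DATA, timestamp_ms, *bins)

def unpack_data_packet(buf):
    fields = struct.unpack(">BI%dB" % N_BINS, buf)
    return fields[0], fields[1], list(fields[2:])

pkt = pack_data_packet(123456, list(range(0, 128, 8)))
# len(pkt) == 21 bytes: one frame fits comfortably in a single BLE payload
```

Keeping each frame inside a single BLE payload avoids fragmentation and keeps per-frame overhead to the one type byte plus the timestamp.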

To test the system end to end, I created a simple Node.js tool running on a desktop computer which prints Bluetooth session lifecycle events and statistics on the incoming packets, and could be left running for long periods. This was especially helpful for the clock synchronisation mechanism, which would have been wildly unstable without timeout values and other tuning pulled from extended real-world use.
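The useful statistic for that kind of soak test is packet inter-arrival time: a drifting mean or growing jitter shows up long before playback audibly breaks. The original tool was Node.js; this is just the idea, with an assumed smoothing factor:

```python
class PacketStats:
    """Rolling inter-arrival statistics, of the kind a long-running monitor prints."""
    def __init__(self):
        self.count = 0
        self.last_arrival = None
        self.mean_interval = 0.0

    def record(self, arrival_time):
        self.count += 1
        if self.last_arrival is not None:
            interval = arrival_time - self.last_arrival
            # exponential moving average (0.1 is an arbitrary smoothing factor)
            self.mean_interval += 0.1 * (interval - self.mean_interval)
        self.last_arrival = arrival_time
```

Logging these alongside session lifecycle events makes it easy to correlate a stall with, say, a reconnect or a resync.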

Mobile Text-to-Speech Library


I wrote an iOS SDK for SpeechKit to give their customers an easy way to add podcast content to their apps. The kit is distributed via CocoaPods, and can be added to an app in minutes.

The implementation was relatively straightforward: a thin front end to a backend service which uses IBM’s Watson to perform text-to-speech synthesis and intelligently cache articles as apps request them.

The emphasis was on ease of use and reliability for the developer, so I used iOS’s built-in network and audio frameworks to avoid depending on lots of third-party libraries, wrote extensive tests (including tests of the actual audio playback), and spent time writing READMEs, API documentation, and code samples in both Objective-C and Swift.

To demonstrate the kit and help with real-world testing, I put together a React Native app which lists recent news stories from a range of publications, and features a mini player which reads the stories as a playlist. The app was released on the App Store, and its native bridge code was added to the documentation to make integration with React Native apps easier.

Multichannel Streaming Server


I created macOS and iOS apps for RIMMS TV to enable their users to hear custom live audio mixes on set.

The macOS server app acts as a live mixer, taking in up to 100 channels from an external sound card in the gallery and providing an unlimited number of separate output mixes fed back to the sound card and recorded using dedicated hardware.
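Conceptually, a matrix mixer is just a weighted sum: each output mix is a dot product of all input channels with that mix’s row of gains. A toy version of that core operation (the real mixer is an Audio Unit processing sample buffers, not single values):

```python
def mix(inputs, gains):
    """Matrix mix one sample frame.
    inputs: one sample per input channel.
    gains[o][i]: gain of input channel i in output mix o."""
    return [sum(g * s for g, s in zip(row, inputs)) for row in gains]

# Two inputs, two output mixes: mix 0 hears only input 0,
# mix 1 hears input 0 at half level plus input 1 at full level.
out = mix([1.0, 0.5], [[1.0, 0.0], [0.5, 1.0]])
# out == [1.0, 1.0]
```

Because each mix is just another row in the gain matrix, adding a mix costs one more dot product per frame, which is why the number of output mixes can be effectively unlimited.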

Producers and staff on set carry iPads running the iOS app, which connect to the server and each receive their own live audio mix. Each user can control levels using the iOS app, which talks to the server using a simple REST API.
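The level-control surface of that API amounts to setting one gain per channel per user’s mix. The endpoint shape and handler below are purely illustrative (the real routes and server aren’t shown here):

```python
# Hypothetical endpoint: PUT /mixes/<mix_id>/channels/<channel>/level  {"level": 0.0-1.0}
mixes = {}

def set_level(mix_id, channel, level):
    """Server-side handler sketch: clamp and store one channel's gain
    in one user's mix, then echo the applied value back to the client."""
    level = max(0.0, min(1.0, level))
    mixes.setdefault(mix_id, {})[channel] = level
    return {"mix": mix_id, "channel": channel, "level": level}
```

Echoing the clamped value back lets the iOS app keep its faders in sync with what the mixer actually applied.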

I used macOS’s Audio Units API to implement the matrix mixer, so the server is compatible with a range of audio hardware and can work at any sample rate or buffer size. The audio needed to be streamed from the server to the clients with very low latency (in the tens of milliseconds), so I used a lock-free circular buffer to lift samples from the Audio Unit graph and queue them on a separate thread for Opus encoding and transmission over TCP.
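The buffer in question is a single-producer/single-consumer ring: the audio callback only ever writes the head index and the encoder thread only ever writes the tail, so neither needs a lock. The real version depends on atomic index updates in native code; this Python sketch only illustrates the index logic:

```python
class SPSCRing:
    """Single-producer/single-consumer ring buffer sketch. In the real
    implementation head/tail are atomics so the audio render callback
    never blocks; here they are plain ints for illustration."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # written only by the producer (audio callback)
        self.tail = 0   # written only by the consumer (encoder thread)

    def push(self, item):
        nxt = (self.head + 1) % self.capacity
        if nxt == self.tail:          # full: drop rather than block the audio thread
            return False
        self.buf[self.head] = item
        self.head = nxt
        return True

    def pop(self):
        if self.tail == self.head:    # empty
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        return item
```

One slot is deliberately sacrificed so that “full” and “empty” are distinguishable from the indices alone, with no shared counter to contend on.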

There were many challenges in making the iOS client app reliable under varying network and battery conditions. I put together a simple wire protocol for the clients to report heartbeat messages carrying health status, so the server could adapt its outgoing streams, and did extensive testing of both TCP and UDP approaches to the socket connections.
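A heartbeat of that kind only needs a couple of bytes of status, and the server’s reaction can be a simple threshold policy. The layout, fields, and thresholds below are all assumptions for illustration, not the actual protocol:

```python
import struct

# Hypothetical heartbeat layout: 1-byte type | battery % | playback-buffer fill %
PKT_HEARTBEAT = 0x02

def pack_heartbeat(battery_pct, buffer_fill_pct):
    return struct.pack(">BBB", PKT_HEARTBEAT, battery_pct, buffer_fill_pct)

def should_reduce_bitrate(heartbeat):
    """Server-side policy sketch: back off when the client's playback
    buffer is running dry or its battery is nearly flat."""
    _, battery, fill = struct.unpack(">BBB", heartbeat)
    return fill < 25 or battery < 10
```

Because the heartbeat is tiny and periodic, losing one occasionally is harmless, which is part of what made the UDP variant worth testing alongside TCP.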