Dan Halliday


Mobile Text-to-Speech Library

I wrote an iOS SDK for SpeechKit to give their customers an easy way to add podcast content to their apps. The kit is distributed via CocoaPods and can be added to an app in minutes.
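Assuming the kit is published under the pod name `SpeechKit` (the name here is illustrative), installation is the usual two-line Podfile change followed by `pod install`:

```ruby
# Podfile — add the SDK to an app target (pod name is illustrative)
platform :ios, '9.0'
use_frameworks!

target 'MyApp' do
  pod 'SpeechKit'
end
```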

The implementation was relatively straightforward: a thin front end to a backend service which uses IBM’s Watson to perform text-to-speech synthesis and intelligently caches articles as apps request them.
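A minimal sketch of the client side of that arrangement, with a hypothetical endpoint and response shape (the real SDK’s API differs):

```swift
import Foundation
import AVFoundation

// Hypothetical thin client: POST article text to a synthesis endpoint,
// receive a URL for the (possibly cached) audio, and hand it to AVPlayer.
final class ArticlePlayer {
    private let endpoint = URL(string: "https://api.example.com/synthesize")! // illustrative
    private var player: AVPlayer?

    func play(articleText: String) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.httpBody = try? JSONSerialization.data(withJSONObject: ["text": articleText])
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        URLSession.shared.dataTask(with: request) { data, _, _ in
            // Assume the backend replies with { "audioUrl": "…" } once Watson
            // has synthesised — or the cache already holds — the audio.
            guard let data = data,
                  let json = try? JSONSerialization.jsonObject(with: data) as? [String: String],
                  let urlString = json["audioUrl"],
                  let audioURL = URL(string: urlString) else { return }

            DispatchQueue.main.async {
                self.player = AVPlayer(url: audioURL)
                self.player?.play()
            }
        }.resume()
    }
}
```

Because synthesis happens server-side and results are cached, the client stays small: a single request per article, then standard AVFoundation playback.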

The emphasis was on ease of use and reliability for the developer, so I used iOS’s built-in networking and audio frameworks to avoid depending on lots of third-party libraries, wrote extensive tests (including tests of actual audio playback), and spent time writing readmes, API documentation, and code samples for both Objective-C and Swift.
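Testing real playback, as opposed to merely asserting that `play()` was called, can be done by observing the player until it reports that it is genuinely playing. A sketch with XCTest (test and fixture names are illustrative):

```swift
import XCTest
import AVFoundation

final class PlaybackTests: XCTestCase {
    func testAudioActuallyPlays() {
        // A short bundled audio fixture stands in for a synthesised article.
        let url = Bundle(for: type(of: self))
            .url(forResource: "fixture", withExtension: "mp3")!
        let player = AVPlayer(url: url)

        // KVO on timeControlStatus tells us playback has genuinely started,
        // not just that play() was invoked.
        let started = expectation(description: "playback started")
        started.assertForOverFulfill = false
        let observation = player.observe(\.timeControlStatus) { player, _ in
            if player.timeControlStatus == .playing { started.fulfill() }
        }

        player.play()
        wait(for: [started], timeout: 5)
        observation.invalidate()
        XCTAssertGreaterThan(player.rate, 0)
    }
}
```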

To demonstrate the kit and help with real-world testing, I put together a React Native app which lists recent news stories from a range of publications, and features a mini player which reads the stories as a playlist. The app was released on the App Store, and its native bridge code was added to the documentation to make integration with React apps easier.
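The shape of such a bridge, following the standard React Native native-module pattern, might look like the sketch below (class and method names are hypothetical; a companion Objective-C file would register the module via `RCT_EXTERN_MODULE` and `RCT_EXTERN_METHOD`, as React Native requires):

```swift
import Foundation

// Hypothetical bridge module exposing playback control to JavaScript.
// JS would call it as NativeModules.SpeechPlayer.play(articleId).
@objc(SpeechPlayer)
class SpeechPlayer: NSObject {
    @objc func play(_ articleId: String) {
        // Look up the article and start playback via the SDK.
    }

    @objc func pause() {
        // Pause the shared player.
    }

    @objc static func requiresMainQueueSetup() -> Bool {
        // Playback state is driven from the main thread.
        return true
    }
}
```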