Using Keras & Theano for deep learning-driven jazz generation

I built deepjazz in 36 hours at a hackathon. It uses Keras & Theano, two deep learning libraries, to generate jazz music. Specifically, it builds a two-layer LSTM that learns from a given MIDI file. It uses deep learning, the AI technology that powers Google's AlphaGo and IBM's Watson, to make music -- something that's considered deeply human.
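
For the curious, here is a minimal sketch of what a two-layer LSTM of this kind might look like in Keras (with a Theano backend). The layer sizes, sequence length, and vocabulary size below are illustrative placeholders, not deepjazz's actual hyperparameters:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout

# Hypothetical dimensions: sequences of 20 timesteps, each a one-hot
# vector over 78 possible note/chord classes.
max_len, n_values = 20, 78

model = Sequential()
# First LSTM layer returns the full sequence so the second can stack on it.
model.add(LSTM(128, return_sequences=True, input_shape=(max_len, n_values)))
model.add(Dropout(0.2))
# Second LSTM layer returns only its final state.
model.add(LSTM(128, return_sequences=False))
model.add(Dropout(0.2))
# Softmax over the vocabulary to predict the next note/chord.
model.add(Dense(n_values))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```

Training on sequences extracted from a MIDI file then reduces to a call to `model.fit(X, y)`, after which the network can be sampled one timestep at a time to generate new music.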

deepjazz has been featured in The Guardian, Aeon, Inverse, Data Skeptic, the front page of HackerNews, and GitHub's trending showcase. Currently, it is being used as reference material for the course "Interactive Intelligent Devices" at the University of Perugia.

Want to listen?

Author

Ji-Sung Kim
Princeton University, Department of Computer Science
jisungk (at) princeton (dot) edu

Citations

This project adapts a lot of preprocessing code (with permission) from Evan Chow's jazzml.
