ChAI : ChucK for AI





(version: requires ChucK 1.5.0.0 or higher;
ChAI is also available in WebChucK)


Overview

ChAI (ChucK for AI) is a framework for music and artificial intelligence in the ChucK programming language. It contains a set of tools, algorithms, data, and examples for working with supervised learning, unsupervised learning, neural networks, interactive music generation, and education. Whenever possible, ChAI is designed to operate in a real-time, interactive context. By combining audio analysis, machine learning/AI, and sound synthesis, ChAI aims to support artful toolbuilding for music composition, performance, and the design of instruments and expressive toys, making use of AI as a tool for human expression.

The design and development of ChAI is led by Ge Wang and Yikai Li (2022-present), with contributions from the ChucK Research and Development Team. Aspects of ChAI evolved out of ChucK's unit analyzer (UAna) framework, sMIRk (Small Music Information Retrieval toolKit), the on-the-fly learning framework in ChucK by Rebecca Fiebrink, Ge Wang, and Perry R. Cook, Wekinator by Rebecca Fiebrink, and MARSYAS by George Tzanetakis. (See the historical note below.)

Tools and Examples

ChAI [beta] first appeared in chuck-1.4.2.0 (January 2023), with a first release in chuck-1.5.0.0 (May 2023). ChAI's development is ongoing, which means that some of the APIs may change moving forward. Presently, available tools and examples include MLP, Wekinator, KNN, KNN2, SVM, HMM, PCA, and Word2Vec, with more on the way. Additionally, new audio features have been added to the unit analyzer framework, including MFCC, Chroma, Kurtosis, and SFM. The API for these objects can be found in the ChAI Documentation or by calling their respective .help() method (e.g., MLP.help();), which will print the object's API to the console or terminal. Examples (also in beta!) can be found in examples/ai/. These can also be accessed in WebChucK IDE, under Examples->More Examples->ai.
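As a brief sketch of how a ChAI object might be used in practice (a hypothetical toy example with made-up data; exact signatures should be confirmed via MLP.help() or the ChAI Documentation), here is an MLP trained on a small hand-made dataset:

```chuck
// a toy sketch (assumed API; verify with MLP.help())
// train a small multilayer perceptron on XOR-like data
MLP mlp;
// network topology: 2 inputs, 4 hidden units, 1 output
[2, 4, 1] => mlp.init;
// toy training inputs and target outputs (illustrative only)
[ [0.0,0.0], [0.0,1.0], [1.0,0.0], [1.0,1.0] ] @=> float X[][];
[ [0.0], [1.0], [1.0], [0.0] ] @=> float Y[][];
// train the network on the dataset
mlp.train( X, Y );
// run a prediction on a new input
float y[1];
mlp.predict( [1.0, 0.0], y );
// print the predicted output
<<< "prediction:", y[0] >>>;
```

In a typical real-time ChAI workflow, the hand-made inputs above would instead be audio features (e.g., MFCC or Chroma, extracted via the unit analyzer framework) driving a learner such as MLP or KNN2.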

Teaching

Beginning in 2023, ChAI has been used in the graduate-level "critical making" course Music and AI (Music 356 / CS 470) at Stanford University. See some examples of the students' creative works in the homework gallery. The development of ChAI is deeply indebted to the intrepid students of "Music and AI", to their immense creativity, and to their willingness to think critically about AI.

A Historical Note + Acknowledgements

Much of ChAI and its ways of thinking and doing can be foundationally attributed to the work of Dr. Rebecca Fiebrink: her landmark Ph.D. Thesis, Real-time Human Interaction With Supervised Learning Algorithms for Music Composition and Performance [1], the Wekinator framework [2], her teaching at the intersections of AI, HCI, and Music [3], as well as earlier collaborations between Rebecca and Ge [4,5], including SMIRK [6] (Small Music Information Retrieval toolKit; 2008; early efforts in on-the-fly learning), MARSYAS and the foundational MIR work of Dr. George Tzanetakis [7,8] (along with the humanistic and musical motivations behind it), and the unit analyzer framework [9] (2007; which affords real-time audio feature extraction). All of these have directly or indirectly contributed to the creation of ChAI. Additionally, ChAI benefited from the teaching and design philosophy of Dr. Perry R. Cook (Ph.D. advisor to Rebecca, George, and Ge at Princeton, and ChucK co-author), who argued for the importance of real-time human interaction, parametric sound synthesis, and play in the design of technology-mediated musical tools.

[1] Fiebrink, Rebecca. 2011. Real-time Human Interaction With Supervised Learning Algorithms for Music Composition and Performance. Ph.D. Thesis, Princeton University.

[2] http://www.wekinator.org/ Wekinator | Software for real-time, interactive machine learning

[3] Fiebrink, Rebecca. "Machine Learning for Musicians and Artists." Kadenze online course.

[4] Fiebrink, R., G. Wang, P. R. Cook. 2008. "Foundations for On-the-fly Learning in the ChucK Programming Language." International Computer Music Conference.

[5] Fiebrink, R., G. Wang, P. R. Cook. 2008. "Support for Music Information Retrieval in the ChucK Programming Language." International Conference on Music Information Retrieval (ISMIR).

[6] http://smirk.cs.princeton.edu/ sMIRk | Small Music Information Retrieval toolKit

[7] Tzanetakis, G. and P. R. Cook. 2000. "MARSYAS: A Framework for Audio Analysis." Organised Sound 4(3).

[8] Tzanetakis, G. and P. R. Cook. 2002. "Musical Genre Classification of Audio Signals." IEEE Transactions on Speech and Audio Processing 10(5).

[9] Wang, G., R. Fiebrink, P. R. Cook. 2007. "Combining Analysis and Synthesis in the ChucK Programming Language." International Computer Music Conference.


"ChAI ON-THE-FLY" — a collaborative ChAI working logo