Home Page: http://www.ryankelln.com/project/coding-chaos/

License: GNU General Public License v3.0

Coding Chaos Concert

Instructions and scripts for the Coding Chaos concert in May 2019.

About the concert

The concert was a collaboration with Spectrum Music performed on May 2, 2019 at The Tell in Toronto, Canada.

Where will we be when what we create can create us? Technology continues to grow at an exponential rate as we discover new ways to improve our everyday lives - but at what point do we lose ourselves to a technology advanced far beyond our control? In a digital world where technology constantly blurs the lines between fact and fabrication, the Spectrum composers probe the possibility of losing our sense of identity and grasp on reality - or will technology give us room to grow?

Along with six composers and three musicians, we developed and performed music that had been created with, or was performed with the help of, artificial intelligence.

Spectrum composers included: Mason Victoria, Chelsea McBride, Suzy Wilde, and Jackson Welchner with guest composers Harrison Argatoff and Nebyu Yohannes. Zoe Brown, the Assistant Director of Spectrum Music, coordinated and managed the concert.

Musicians were Bruce Cassidy, Larnell Lewis, and Chris Pruden.

Many thanks to everyone; this was a great collaboration in which we all learned what it was like to work with AI tools.

Technology used in the Pieces

Google Magenta

https://magenta.tensorflow.org/

Magenta is an open source research project exploring the role of machine learning as a tool in the creative process. Using deep learning and reinforcement learning algorithms, it can generate songs, images, drawings, and other materials. Users can also build smart tools and interfaces that let artists and musicians extend their processes using different Recurrent Neural Network (RNN) models. Spectrum used AI Jam, an interactive AI jam session built around a custom RNN trained on over 900 jazz standards, classical etudes and licks. Used in: Source Activate, Love, As Though I Knew, The Process, Past Machinery

Voicebox by BOTNIK

https://botnik.org/apps/writer/

Both a tech company and a creative studio, Botnik builds creative writing tools that use artificial intelligence to write comedy that neither a human nor a computer could create on their own. Voicebox is a predictive text keyboard, much like the word suggestions in a text messaging app, that suggests the words most likely to come next in any body of text you choose. Users must upload a text document of over 100,000 sentences in order to make a coherent predictive model. In our case, we uploaded all of the lyrics written by Joni Mitchell, Leonard Cohen, Bob Dylan and Paul Simon. Used in: Love-Bot
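
Voicebox itself is a hosted tool, but the core idea it relies on (suggesting the most likely next words given a source corpus) is easy to sketch. The following is a hypothetical, simplified illustration in JavaScript, not Botnik's actual implementation:

// Hypothetical next-word suggester: count which words follow each word in the
// corpus, then rank candidates by frequency. Botnik's real model is more involved.
function buildModel(corpus) {
  const model = new Map()
  const words = corpus.toLowerCase().split(/\s+/).filter(Boolean)
  for (let i = 0; i < words.length - 1; i++) {
    const next = model.get(words[i]) || new Map()
    next.set(words[i + 1], (next.get(words[i + 1]) || 0) + 1)
    model.set(words[i], next)
  }
  return model
}

function suggest(model, word, n = 3) {
  const next = model.get(word.toLowerCase())
  if (!next) return []
  return [...next.entries()].sort((a, b) => b[1] - a[1]).slice(0, n).map(([w]) => w)
}

// e.g. build from a body of lyrics, then ask for likely continuations of a word
const model = buildModel("I've looked at clouds from both sides now from up and down")
console.log(suggest(model, "from"))  // -> ["both", "up"]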

VOCALOID5 by Yamaha

https://www.vocaloid.com/en/

Vocaloid is a singing synthesizer application developed by the Yamaha Corporation. The software enables users to synthesize "singing" by typing in lyrics and a melody. It uses synthesis technology built on specially recorded vocals of voice actors or singers. To create a song, the user inputs the melody and lyrics: a piano-roll-style interface is used to enter the melody, and lyrics can be entered on each note. The software can change the stress of the pronunciations, add effects such as vibrato, or change the dynamics and tone of the voice. Used in: Past Machinery

TTSReader by WellSource

https://ttsreader.com/

The TTSReader online service by WellSource provides text-to-speech with realistic AI-generated voices. Used in: Love-Bot

Learning to See by Memo Akten

http://www.memo.tv/portfolio/learning-to-see/

An artificial neural network looking out onto the world, trying to make sense of what it sees in the context of what it's seen before. It can only see what it already knows, just like us. Memo's work was in turn based on research by Isola et al. on pix2pix and Christopher Hesse's TensorFlow implementation of it. Used in: Love, As Though I Knew

Img2poem and Kanye West AI

https://github.com/researchmm/img2poem https://robbiebarrat.github.io/oth/rap.html

The img2poem project by Liu et al. is an AI trained on a large set of poetry that converts the images you give it into poems. The Kanye West AI, by Robbie Barrat, takes a set of sentences, for example all the lyrics to Kanye's songs, and remixes them into a new rap song. Poetry generated by img2poem was remixed by feeding it into the rap song AI. Used in: Love

Handwriting Synthesis with Recurrent Neural Networks

https://github.com/sjvasquez/handwriting-synthesis

Trained on 12 different sets of handwriting, this AI turns text into handwriting. Code by Sean Vasquez, with additional code by Sara Strong; the project is in turn based on a paper by Alex Graves. Used in: Love, Love-bot

AI and Software used by Piece:

Source Activate - Mason Victoria

Source Activate - Mason Victoria - still from live video

  • Audio performance: Magenta AI Duet

The Process - Chelsea McBride

The Process - Chelsea McBride - still from live video

  • Audio performance: Magenta AI Duet

Love - Nebyu Yohannes

Love - Nebyu Yohannes - still from live video - cathedral
Love - Nebyu Yohannes - still from live video - sunrise

  • Audio performance: Magenta AI Duet
  • Poetry: Img2poem and Kanye West AI
  • Handwriting: Handwriting Synthesis with Recurrent Neural Networks
  • Cathedral and sunrise visuals: Learning to See AI trained on 1500 images from Flickr.

Love-bot - Suzy Wilde

Love-bot - Suzy Wilde - still from live video

  • Audio performance: TTSReader
  • Poetry: Voicebox by BOTNIK
  • Handwriting: Handwriting Synthesis with Recurrent Neural Networks

As Though I Knew - Harrison Argatoff

As Though I Knew - Harrison Argatoff - still from live video

  • Audio performance: Magenta AI Duet, VOCALOID5
  • Tree visuals: Learning to See AI trained on 200 photos of trees by Harrison Argatoff.

Past Machinery - Jackson Welchner

Past Machinery - Jackson Welchner - still from live video

  • Audio performance: Magenta AI Duet, VOCALOID5

Bruce Cassidy - Bit Buddy

  • Audio performance: Magenta AI Duet

Other visuals for all pieces were by Ryan Kelln. Visuals were performed using the Hydra live coding editor created by Olivia Jack, customized heavily for the performance.

Using this repository

This README documents the concert, hopefully with enough detail that you can replicate it yourself.

Step-by-step?

This isn't a perfect guide, and it assumes a fair amount of knowledge of Linux, Python, and machine learning. That said, I've tried to improve on the documentation that I used to get it all working, which was of mixed value. There is little to no support for these instructions, but fixes and pull requests are welcome.

Links to other repositories for this concert

You will want to git clone all of these repos to your machine. See each repo for additional instructions.

Links to data sets and models not in this repo

You will need to download and put these models and files in the correct locations in the various repos.

Other helpful tools used in the project

    # Example from the concert setup: run the secondary output (DP-5, likely the projector)
    # at 1280x720 and warp it with a transform matrix to line it up with the primary display
    $ xrandr --fb 2560x1440 --output DVI-D-0 --mode 2560x1440 --pos 0x0 --primary --output DP-5 --mode 1280x720 --pos 80x90 --transform 0.989737,0.062409,-100.000000,-0.016774,1.041399,-70.000000,-0.000061,0.000054,1.000000

Training

Setting up your local Linux environment

This is more complicated than it needs to be, and I suggest using Google Colab for as much as possible, although the integration with local files is currently rather frustrating. See this document for setting up a local Ubuntu 18.04 environment.

For example, I had success setting up Colab notebooks for Nebyu, who first fed images into img2poem to create poems and then fed all of that text into the Love poem generator (aka the Kanye West rap AI).

Training Magenta AI Duet

  • Convert MIDI files into NoteSequences
convert_dir_to_note_sequences \
 --input_dir=/path/to/jazz/midi/ \
 --output_file=/path/to/datasets/tfsequences/songs.tfrecord \
 --recursive
  • Create SequenceExamples
melody_rnn_create_dataset \
 --config=attention_rnn \
 --input=/path/to/datasets/tfsequences/songs.tfrecord \
 --output_dir=/path/to/datasets/tfsequences/sequence_examples/ \
 --eval_ratio=0.10
  • Train model
    • Make training directory in magenta/runs/ (e.g. songs1/train, etc)
    • Note the filename of the training .tfrecord file in the sequence_examples directory
melody_rnn_train \
 --config=attention_rnn \
 --run_dir=/path/to/magenta/runs/songs1 \
 --sequence_example_file=/path/to/datasets/tfsequences/sequence_examples/songs_training.tfrecord \
 --num_training_steps=20000
  • (Optional) create a bundle file
melody_rnn_generate \
 --config=attention_rnn \
 --run_dir=/path/to/magenta/runs/songs1 \
 --bundle_file=/path/to/datasets/pretrained/jazz_song_attention_rnn.mag \
 --save_generator_bundle
  • Generate test midi
    • This uses a jazz primer midi but you can use whatever you like
melody_rnn_generate \
 --config=attention_rnn \
 --run_dir=/path/to/magenta/runs/songs1 \
 --output_dir=/path/to/generated \
 --num_outputs=10 \
 --num_steps=256 \
 --primer_midi=/path/to/datasets/jazz_primer.mid

Training Learning To See

See the Learning 2 See instructions. Training requires images; I used about 200 to 1000 images. You can use the scripts provided to get some from Flickr.

Get images from Flickr

Get a Flickr API key and clone this repo if you haven't already. Edit src/flickr_scrape/get_flickr_urls.py and add your API key info. Image retrieval happens in two steps: first find the URLs of images that match the tag you search for, then download those URLs. The first command looks like:

python get_flickr_urls.py tag number_of_images_to_attempt_to_download [urls_to_ignore.csv]

So you can do something like:

$ cd src/flickr_scrape
$ python get_flickr_urls.py sunrise 500

This creates sunrise_urls_DATE1.csv (where DATE1 is the actual date and time).

$ python get_flickr_images.py sunrise_urls_DATE1.csv

View the images, delete bad ones, and see how many you have left. Get more images if needed (this will get 500 new images that weren't already fetched):

$ python get_flickr_urls.py sunrise 500 sunrise_ids_DATE1.csv

This creates sunrise_urls_DATE2.csv (where DATE2 is the actual date and time); it will automatically append the ignored URLs from DATE1 to DATE2.

$ python get_flickr_images.py sunrise_urls_DATE2.csv

This gets another 500 new images, and so on. Finally, once all the images have been vetted, move them to a more appropriate folder.

Visuals with Hydra

The visuals for the project were performed live using the Hydra editor. Hydra can capture other windows running on the system, so the Learning To See project was modified to remove all user interface elements, allowing the real-time AI-generated images to be imported into Hydra (and then manipulated live as well).

Each song had its own visual treatment, and the design goal was to supplement the music without taking attention away from it. Consequently it was important to reduce flashing and strobing effects. For each song I created a mini-instrument to control the visuals that I could play using the keyboard's number pad (a conceptual sketch follows the list below). In addition, many songs included effects, or parameters of effects, that were connected to the audio signal frequencies.

These mini-instruments allowed for control of:

  • Size and shape of objects and effects
  • Playback speed, looping and seeking of videos
  • Triggering of sets of hydra effects
  • Triggering the playback of svg animations of text
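
The actual key handling lives in the customized atom-hydra package described below; the following is only a conceptual sketch of what one of these mini-instruments amounts to, using standard Hydra functions and made-up key assignments:

// Conceptual sketch only: map numpad keys to small functions that tweak the patch.
let size = 1
let speed = 0.1
const numpad = {
  '7': () => size = Math.max(0.2, size - 0.1),  // smaller shapes
  '8': () => size = Math.min(4, size + 0.1),    // larger shapes
  '4': () => speed = Math.max(0, speed - 0.05), // slower rotation
  '5': () => speed += 0.05,                     // faster rotation
  '1': () => shape(4, () => size * 0.2).rotate(() => time * speed).out(o0), // trigger an effect set
}
// the customized keybindings then simply call numpad[key]() on each keypress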

Hydra is available as a web-based tool but has additional capabilities and customization when used in the Atom editor. I built a number of customizations including:

Atom hydra modifications:

  • Work added on top of SHA a5aaa8ca0cb92bc63e4396735db6b63b0331526c

  • Add additional keybinding for eval all code and toggle messages

  • Add customizable keybinding for numpad

  • Allow for display of code to be toggled and add keybind for it

  • Allow initScreen() to be sent name partials for matching

  • Improved clean-up, including unbinding keys, etc

  • Improved display of sound

  • Add many helpers to util.js (an illustrative sketch of one follows this list):

    • create_video, load_video, queue_videos: helps manage playback of video files
    • load_image: helps load an image onto canvas
    • shuffle: randomly shuffle an array
    • create_slideshow: intelligently loads a directory of images into memory for load-free access
    • load_svg_stanza, reset_svg, svg_stanza: for loading animated poem stanzas made from svgs
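
As an illustration of the kind of helper involved, something like load_image can be built from standard browser APIs. This is a hypothetical sketch, not the code from the fork's util.js:

// Hypothetical load_image-style helper: draw an image file onto a canvas that
// Hydra can then use as a source (e.g. s1.init({src: canvas})).
function load_image(path, canvas) {
  const img = new Image()
  img.onload = () => {
    canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height)
  }
  img.src = path
}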

Hydra-synth modifications:

  • Work added on top of SHA da8d6d038ff6875a21251998345ae62fcd5a6992

  • Add stop() method that helps improve cleanup

  • Improvements to createFades() to allow for better stopping of fades

  • Changes to audio.js:

    • Allow for array parameters to many functions
    • Change how volume is calculated and used in detect beat
    • Allows cutoff, smooth, scale to be set per bin
    • Add time parameter to tick() and draw()
    • Indicate clipped audio graphically in audio display
  • Add a number of helpers to control rate of change smoothly (an illustrative sketch follows this list):

    • vel: takes a thruster function that applies thrust to generate a velocity
    • pos: takes a velocity and finish function and outputs a position
    • repeat: finish function that sets the position to the start position
    • sweep: finish function that reverses direction, without affecting velocity
    • bounce: finish function that reverses direction and inverts velocity and multiplies by bounciness
  • Add or modify GLSL:

    • Use alpha channel in more GLSL calculations
      • Still very experimental and not completely functional
  • Add tile, tileX and tileY: seamless tile, flipping textures as needed to line up sides

  • Add gray: converts color to grayscale

  • Add removeColor: turns a pixel to 0 alpha (based on how close the pixel color is to target color)

  • Add alpha: sets the alpha channel

  • Add channel: multiplies each channel by the parameter passed in, good for isolating channels
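
To make the velocity/position helpers more concrete, here is an illustrative sketch of the idea. The names follow the descriptions above, but this is not the actual code from the modified hydra-synth (in particular, the real sweep and bounce finish functions also manipulate the velocity):

// Illustrative only: a thruster function produces thrust, vel() integrates it into
// a velocity with friction, and pos() integrates the velocity into a 0..1 position,
// handing out-of-range values to a finish function.
function vel(thruster, friction = 0.9) {
  let v = 0
  return () => {
    v = v * friction + thruster()
    return v
  }
}

const repeat = (p) => p - Math.floor(p)  // finish function: wrap back to the start

function pos(velocity, finish = repeat) {
  let p = 0
  return () => {
    p += velocity()
    if (p < 0 || p > 1) p = finish(p)
    return p
  }
}

// e.g. drive an oscillator offset from the high-frequency audio bins (see Audio settings below)
const highPos = pos(vel(() => (a.fft[7] + a.fft[8]) / 2))
osc(10, 0.1, () => highPos()).out(o0)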

See my repos for details:

Setting up Atom Hydra locally

Installing Atom:

https://atom.io/
https://flight-manual.atom.io/getting-started/sections/installing-atom/

I used version 1.36.0 but newer versions should work similarly.

Installing npm

Manual installation of the versions of atom-hydra and hydra-synth used in the projects requires installing Node.js and the node package manager (npm). Tested with node v10.16.x and npm v6.9.0 and v6.10.3.

Instructions here: https://github.com/nodesource/distributions/blob/master/README.md#deb

$ curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
$ sudo apt install build-essential gcc g++ make
$ sudo apt install nodejs
$ npm --version

Installing custom concert packages

Once Atom is installed, you can clone the two required projects:

$ git clone https://github.com/RKelln/hydra-synth.git
$ git clone https://github.com/RKelln/atom-hydra.git
$ cd hydra-synth

For the concert code, check out the branch and then install:

$ git checkout coding_chaos_concert_2019
$ npm install

(Eventually this should work on master branch as well, but not as of yet.)

Similarly for the atom-hydra project:

$ cd ../atom-hydra
$ git checkout coding_chaos_concert_2019
$ npm install

Then install and link them into the Atom editor:

$ apm install
$ apm link

Now if you open the Atom editor (or reload it if already open) you should see atom-hydra under Community Packages in the Settings. Version should show as 0.2.10.

The basic idea here is that we create the local repositories and then link them into Atom. Please note that the atom-hydra package refers directly to the hydra-synth package through a relative directory (../hydra-synth).

Installing other atom packages

Besides Atom-Hydra, I used the distraction-free package (v0.1.0) to remove extra screen chrome during the show: https://atom.io/packages/distraction-free

Video materials

The videos associated with the concert cannot be distributed with this repository. Instead, a default video called demo.mp4 has been used in place of the real videos. Additional places where configuration needs to be set in the code have been marked with the comment CONFIGURE, e.g.:

video_path = '/path/to/videos/' // CONFIGURE

Because of the generic paths and the lack of videos, the scripts as presented won't work. After setting the video path to the same directory as the scripts, most of them should run, but all of the video references point to demo.mp4, so replacements will be required before anything looks interesting.

Audio settings

Also note that the scripts often have audio input settings that are configured for the concert space and will need to be adjusted for your space and the instruments being played. For example:

a.setBins(9)
a.setSensitivity(0.8)
a.setCutoff([2,2,2.6,2.6,2.4,2,2.1,2.2,2.4])
a.setScale(2)
a.setSmooth(0.1)

The audio levels display in the bottom right can help you set the values appropriately. Clipping is shown in red. Audio setting values can either be a single value for all 'bins' or an array with values for each bin.

All of these values, other than the number of bins, likely need to be tweaked. Generally 9 bins seemed the right number to separate low, mid, and high frequencies from the music, and if you change the number of bins you'll need to adjust the bin numbers used in the "thrusters", e.g.:

high.thruster = () => (a.fft[5]/2 + a.fft[6] + a.fft[7] + a.fft[8])/2

This relies on the last 4 bins (of 9). Bins are indexed from 0.

  • setSensitivity: Reduce below 1 if every bin is clipping, increase above 1 if every bin is too low.
  • setCutoff: Sets the expected maximum value for the bin. Beyond this is considered clipping.
  • setScale: Sets the lowest value, such that this bin's zero value starts from this number. Set high enough that bins are not constantly producing a value (green).
  • setSmooth: How much of the previous values is mixed into the current value (i.e. smoothing out the value). Lower values have better responsiveness but much more variable data.

Using Atom-Hydra and Learning To See together

Two of the songs used the Learning To See project to generate real-time visuals, which were fed into Hydra using initScreen().

Once Hydra is running in the Atom editor, this entailed starting the L2S project in a separate terminal:

$ activate learn_to_see
$ python webcam-pix2pix.py

Inside Hydra you could then connect the L2S window using:

s0.initScreen("Learn", 256, 256) // capture from L2S
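
Once connected, the captured window behaves like any other Hydra source. For example (standard Hydra syntax, not taken from the concert scripts):

src(s0).modulate(noise(3), 0.03).out(o0)  // show the L2S output, gently warped by noise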

See more details on the Learning 2 See repo.

Useful key commands

Inside Atom, these are very useful key commands:

  • Ctrl+Shift+F5: Reload Atom
  • Ctrl+Alt+o: Start Hydra
  • Ctrl+k,Ctrl+z: distraction free (remove window chrome)

Hydra commands:

  • Ctrl+Shift+h: Hide code
  • Ctrl+Alt+m: Remove messages (bottom output)
  • Ctrl+Enter: Run block of hydra code
  • Ctrl+Shift+Enter: Run all hydra code

The scripts are separated into blocks of lines that should be run together. Execute each block using Ctrl+Enter until you hit the // READY comment, at which point you're ready to begin the song, sometimes by running the block immediately after the // READY and sometimes by using the keypad to trigger it.

I generally find it is useful to turn off the sidebar and the top menu bar using the menus/settings. Starting a piece usually looks like:

Ctrl+Shift+F5
Ctrl+Alt+o
Ctrl+Alt+m
Ctrl+Shift+h
Ctrl+Enter (each block)
