AI image generator simulates synesthesia with the Teenage Engineering OP-Z




The audiovisual experiment uses the Teenage Engineering OP-Z sequencer as its music source. In real time, Modem and Bureau Cool’s “digital extension” translates musical properties into text prompts describing colors, shapes and movements; those prompts then feed into Stable Diffusion (an open-source image generator similar to DALL-E 2 and Midjourney) to produce dreamy, synesthetic animations.
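To make the pipeline concrete, here is a minimal sketch of the “musical properties to text prompt” step. Every function name, threshold, and vocabulary word below is an illustrative assumption; the actual Modem/Bureau Cool mapping has not been published.

```python
# Hypothetical sketch: mapping sequencer data (MIDI-style pitch, velocity,
# tempo) to a Stable Diffusion text prompt. All mappings are invented for
# illustration, not taken from the project described above.

def note_to_color(pitch: int) -> str:
    """Map a MIDI pitch (0-127) to a color word: low notes warm, high notes cool."""
    palette = ["deep red", "orange", "golden yellow", "green", "teal", "violet"]
    return palette[min(pitch * len(palette) // 128, len(palette) - 1)]

def tempo_to_movement(bpm: float) -> str:
    """Describe motion based on tempo."""
    if bpm < 90:
        return "slowly drifting"
    if bpm < 130:
        return "pulsing"
    return "rapidly swirling"

def build_prompt(pitch: int, velocity: int, bpm: float) -> str:
    """Combine note properties into a prompt string for an image generator."""
    color = note_to_color(pitch)
    shape = "sharp angular shapes" if velocity > 100 else "soft rounded forms"
    movement = tempo_to_movement(bpm)
    return f"{color} {shape}, {movement}, dreamlike, abstract"

# Example: a middle C played softly at 120 BPM
print(build_prompt(60, 80, 120))
# -> "golden yellow soft rounded forms, pulsing, dreamlike, abstract"
```

In a real-time setup, a function like this would run on each incoming note event, with the resulting string passed to a fast Stable Diffusion variant to keep the animation in sync with the music.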




This Post Has 6 Comments

  1. IrreverantRex

    I have to say, I have sound-sight synesthesia and it's… not really like this. More pulses, waves and sparks along the edge of my vision, and unlike what is being shown here, it is highly reactive to sound. In fact, I think the old Windows Media Player visualizers were closer to what I experience. Honestly, the visuals being presented here are more akin to a movie's presentation of a drug trip than anything remotely similar to what I see.
