Google's Magenta project just wrote its first piece of music, and thankfully it's not great

01.06.2016
Deep in the bowels of Google’s Google Brain research group, the company is asking a question: can neural networks create music? Google’s Magenta project has an answer: yes.

Whether that music is good, however... nah. Google may have annihilated its human competition in the ancient game of Go, but human artists don’t have to worry—yet—about losing their livelihoods. Still, it appears that Magenta could at least drop a riff or two that human artists could remix.

Want to hear it for yourself? Here it is:

[Embedded audio clip]

Why this matters: We’re on the cusp of something special in machine assistance: bots and digital assistants are learning how to help their users, and Google is investing heavily in the field. Google’s highest-profile example of creative machine learning may be DeepDream, the sometimes nightmarish remixing of photographs into pieces of art. In some ways, Magenta is the next iteration.

Google research scientist Douglas Eck said Google is using its TensorFlow machine learning platform to drive the creation of machine-generated art. (It isn’t clear, however, whether Google is also using the custom-designed Tensor Processing Unit chip it built to accelerate TensorFlow.)

“Magenta has two goals,” Eck wrote. “First, it’s a research project to advance the state of the art in machine intelligence for music and art generation. Machine learning has already been used extensively to understand content, as in speech recognition or translation. With Magenta, we want to explore the other side—developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own.”
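Magenta’s actual alpha code isn’t shown here, but the core idea Eck describes, algorithms that learn to generate music from examples, can be sketched in a few lines. The toy Python script below is purely illustrative and is not Magenta’s TensorFlow-based model: it trains a first-order Markov chain on two example melodies (written as MIDI pitch numbers) and then samples a new melody from the learned transitions. Real systems such as Magenta’s use recurrent neural networks, which can capture much longer-range musical structure.

# Illustrative only: a toy melody generator, NOT Magenta's actual code.
# It learns pitch-to-pitch transitions from example melodies and samples
# a new melody from those transitions.
import random
from collections import defaultdict

def train(melodies):
    """Count pitch-to-pitch transitions across the training melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """Sample a new melody by walking the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: fall back to the seed pitch
            choices = [start]
        melody.append(random.choice(choices))
    return melody

if __name__ == "__main__":
    # Toy training data: two short melodies as MIDI pitch numbers (60 = middle C).
    examples = [
        [60, 62, 64, 65, 67, 65, 64, 62, 60],
        [60, 64, 67, 72, 67, 64, 60],
    ]
    model = train(examples)
    print(generate(model, start=60))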

In addition to the actual creation of art, Google hopes to foster a community of like-minded individuals who can share their knowledge, including their machine learning models. Artists are encouraged to use the alpha code to create their first machine-generated music.

Over time, Eck wrote, Google expects several machine-learning models will end up in the hands of developers, who will use each of them to generate music. Judging the quality of the music is yet a further step, Eck continued. “To answer the evaluation question we need to get Magenta tools in the hands of artists and musicians, and Magenta media in front of viewers and listeners.” Eck is probably implying that there will be a crowdsourced rating system to determine which pieces of music users like.

What’s clear, though, is that Google is all in on teaching computers how to teach themselves. “Machine learning is a core, transformative way by which we’re rethinking everything we’re doing,” Sundar Pichai, Google’s chief executive, said last fall. We laugh off how Google knows everything about us, but the Google Brain’s getting smarter by the day.

(www.pcworld.com)

Mark Hachman
