Art is not feeding in data until something "cool" comes out. It's the expression of a mind. Miyazaki expresses a clear view on the subject https://www.youtube.com/watch?v=BfxlgHBaxEU and I'm deeply saddened that people don't get this beforehand.
Try expressing feelings, and stop using a word that has absolutely nothing to do with machine learning.
The very, very basic idea of machine learning is at odds with art.
Art, and in fact most creative endeavors, often DO involve creating many things until you discover something "cool".
With my personal experiments with generative art, I think of the machine almost as a work assistant. I have a rough idea of what I'm going for, and the ways to attempt it, and I teach that to my computer using code. But then I go away from the computer and out into the world and drink coffee, sketch in my notebook, and in the meantime my computerized assistant is back home working on draft after draft of my latest piece of art.
I come home and see what he's done, and most of it isn't great, but occasionally there is a gem! I look at what he did for that gem, encourage him to do more of that, and then go back out into the world. I strongly disagree that machine learning is the opposite of art.
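For what it's worth, here is a minimal sketch of that draft-and-review loop in Python (my own toy example, not the commenter's actual code; the SVG renderer and parameter ranges are stand-ins):

    # Toy sketch of the "assistant" workflow: generate many random drafts,
    # then review them by hand later and keep the gems.
    import random

    def render_draft(params):
        # Stand-in renderer: a tiny SVG of random strokes driven by the parameters.
        strokes = []
        for _ in range(params["strokes"]):
            x1, y1, x2, y2 = (random.uniform(0, 200) for _ in range(4))
            strokes.append(f'<line x1="{x1:.1f}" y1="{y1:.1f}" x2="{x2:.1f}" y2="{y2:.1f}" '
                           f'stroke="hsl({params["hue"]},60%,40%)" stroke-width="{params["width"]}"/>')
        return ('<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
                + "".join(strokes) + "</svg>")

    for i in range(100):                          # "draft after draft" while the artist is out
        params = {
            "strokes": random.randint(20, 200),   # the rough idea, encoded as parameter ranges
            "hue": random.randint(0, 359),
            "width": random.choice([0.5, 1, 2]),
        }
        with open(f"draft_{i:03d}.svg", "w") as f:
            f.write(render_draft(params))
    # Later: note which parameter ranges produced the gems, narrow them, and rerun.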
If you go to any traditional artist's studio you will often find notebooks filled with discarded sketches and ideas. Paintings with color combinations or techniques that weren't ideal. This is almost exactly the same process. And I would argue that process is exactly what art is.
By this logic no photographer is an artist either. They just 'selected' a composition they liked. They didn't work to create any of their subjects, just walked up and took a picture.
Slowly coaxing a desired result out of a computer by painstakingly tweaking an algorithm over the course of hours or days doesn't sound like work to you? The computer wouldn't have just done it on its own if an artist didn't tell it what to do.
I think your definition of art is narrow-minded. Who are you to tell someone their work isn't art? Art is (and always has been) in the eye of the beholder.
>But then, if you release an algorithm that does that and hand it to users, would their results be art?
In the same respect, did someone not create art in Photoshop because they used some fractal noise?
I've had this conversation, and thought about it a lot for what I'm doing, and I've come to this conclusion:
Art can be directed, it can be accidental, it can be both (every artist loves a happy accident), and it can come from a million iterations, whether by hand or by computer.
It's up to you, the viewer, to decide whether the end result is good art.
What is the difference between creation and generation?
What is the difference between me selecting the specific frame made by a generative algorithm and a photographer picking a specific moment in time to capture a frame?
What's the difference between me using a generative algorithm and Michelangelo using assistants to paint the Sistine Chapel?
"What is the difference between creation and generation?" -> that one has already been answered under.
"What is the difference between me selecting the specific frame made by a generative algorithm and photographer picking a specific moment in time to capture a frame?" -> he captures life and tries to create a embodiment of something that already has a meaning for everybody. He can have multiple ways of underlining different aspects of the same thing. He has total control over How and Why. It is not only technical but it also communicates something in very different ways depending on the subject and how he took the picture.
"What's the difference between me using a generative algorithm and Michelangelo using assistants to paint the Sistine Chapel?" -> these are people not machine.
We think of these models as ways to extend artistic expression... like guitar pedals, cameras, or drum machines. Perhaps ML will give rise to something new and aesthetically interesting in this direction.
You mean something like this? [0] As far as I know from the mailing list, they are always working on more tools for integration so musicians can use them in different environments and different ways.
As an amateur composer as well, I have found uses for these types of models even without plugins / with "data I have no control over" for small flashes of interesting motifs/progressions that I rework, modify, and expand on my own. Inspiration comes in many forms.
In a way I agree: machine learning should not (try to) replace human art.
However, I think it has its place if it's part of the artistic process. Many artists put the emphasis not on the end product but on the process itself. We can easily imagine an artist creating an entire collection using AI, finding something wicked in it, and exhibiting it as a criticism of our times. (Sounds hipster, yeah… I'm not an artist.)
I am trying to understand your strong negative reaction as charitably as I can. Would it be fair to say that, for you, something isn't art because there was no mind (and also no feelings) behind it? I am not saying I think it's good art, but there are clearly many pieces of art (say, found art, or lots of conceptual art) that don't involve those, and while I may not like it, it is generally accepted to be art.
Right. Now, my mind is full of thoughts and emotions about ML. And I use ML to make some videos that express my thoughts. And you judge my work and say it's not art?
Really, I profoundly disagree.
Saying something is "not art" is normally a tough argument to make. To say it about a whole category of practice is nearly impossible.
I really don't understand the point of this. Do we need a computer to help us add to the vast store of music nobody ever listens to? Reminds me of Douglas Adams's electric monks.
As a musician I've wondered what generating actually good music would mean. Music is such a cultural experience, tied in with a sense of being part of a social group, that I don't think we would ever let computers completely replace humans in producing it - until they become sentient and we start seeing them as friends. (How could a computer be a part of my social group unless it was sentient?)
I think the non-research value will be like the value of chess computers to chess communities. It enriches the community, helping them develop their craft, but the community is still a community of humans.
I can think of a few ways machines have beaten humans before, but where human experience is still valued in a way the machine achievement isn't. Modern weapons are valued for what they do, and they can beat any human, but the human practice of martial arts is still valued despite that fact. A player piano with a roll performed by Rachmaninoff will deliver an excellent reproduction of a historic performance, but I'd still rather listen to my friend play, even if what I'm hearing comes down to objectively worse key pressing.
Which might actually be quite possible. At the very least, I'm sure machine learning can have a better understanding of your general musical taste than most of your friends do.
Personally, I find that Spotify doesn't "get" me in that way, not really. Maybe it hits the broad center of a normal distribution, but I think a lot of people lie outside of that. Even then, surely the challenge of curating existing music that is heavily tagged is much smaller than composing it with a mood or taste in mind. Just my gut feeling, that.
Part of me agrees, but at the same time computers can explore spaces far faster than we can. Perhaps it will end up like Go where an AI exploring music will lead to new inspiration for humans.
I think we're only at the place where the computer can propose ideas for the artist to dispose of mostly, and perhaps find a gem occasionally.
Music is indeed a language, so while it evolves, it still requires a network of meaning to rely upon. So doing something wild like music in 5/6 time will, for the most part, bounce off people's heads and not resonate with meaning.
A wide-ranging talk on computer-generated art, music, stories, humor, etc.
I liked listening to it, but am disappointed by the field's disregard for existing methods of structure. "We don't use rules"... well, rules have helped mankind progress for hundreds of thousands of years.
Just because they don't easily transfer into your ML model doesn't mean they are useless. Maybe the model needs to learn some new tricks.
There is a lot of interesting work going on in this area right now in the research community - using ML for things like program synthesis, meta-learning, and combining ideas from constraint satisfaction with ML approaches.
The rules-based approach has (as you mention) years of history, so people are currently exploring the greener fields of raw learning approaches, with good success on tasks where rules-based approaches performed much worse or didn't work at all (cf. image recognition, speech recognition). In some areas it seems that the more you let the model learn and get rid of classical rules-based approaches (given enough data), the better it does. Whether that is true for field X, not true yet for field X, or will never be true for field X depends on who you ask.
There is definitely a recent tide of models which are focused more on rule learning, function generation, and so on. The general thing I see is that rule based approaches with good approximators/probability models to guide heuristic or exact search can do crazy things - this is the story of AlphaGo at a 10k foot level. People in the ML community are just more focused on the new-ish part (learning good probability/function approximators from data) right now.
Just because rules aren't incorporated widely yet doesn't mean they won't be in the future. I am personally very interested in this direction, and a bunch of work from Sony CSL (Pachet et al.) has focused heavily on this idea in the past.
As an aside, whenever you hear an ML researcher say "prior", it is generally functioning as some kind of soft or occasionally hard rule - so maybe there are more rules floating around than it seems. Soft rules aka priors are generally (much!) easier for gradient descent style optimization and incorporating directly into models, so we tend to have priors rather than hard rules as seen in many other parts of computer science. Even the structure of the model itself can be seen as a prior.
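To make that concrete, here is a toy sketch (mine, not from the talk) of the same one-parameter fit with a soft rule (an L2 prior pulling the weight toward 0) versus a hard rule (clipping the weight into an allowed range); the data and constants are made up for illustration:

    # Fit y ~= w * x by gradient descent, with optional soft and hard rules.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.1]                        # roughly y = 2x

    def grad_data(w):
        # gradient of mean squared error for the model y = w * x
        return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

    def fit(soft_lambda=0.0, hard_bound=None, steps=200, lr=0.01):
        w = 0.0
        for _ in range(steps):
            g = grad_data(w) + 2 * soft_lambda * w   # soft rule: a penalty term, still differentiable
            w -= lr * g
            if hard_bound is not None:               # hard rule: enforced outside the gradient
                w = max(-hard_bound, min(hard_bound, w))
        return w

    print(fit())                      # ~2.0, no rules
    print(fit(soft_lambda=5.0))       # pulled partway toward 0 by the prior
    print(fit(hard_bound=1.0))        # pinned at the constraint boundary

The soft version plays nicely with gradient descent because the rule is just another term in the loss; the hard version has to be enforced as a separate projection step.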
I heard a quote about automobiles: "When you remove the horse from the cart, you no longer have a horse & buggy." I suspect that if convincing music can be generated procedurally, people will no longer put much value in a linear composition. It's possible music will become interactive via interfaces like AR and VR.
Re: the latest Performance RNN, discussing their "best sounding" output.
>Note that this isn’t a performance of an existing piece; the model is also choosing the notes to play, “composing” a performance directly. The performances generated by the model lack the overall coherence that one might expect from a piano composition; in musical jargon, it might sound like the model is “noodling”— playing without a long-term structure. However, to our ears, the local characteristics of the performance (i.e. the phrasing within a one or two second time window) are quite expressive.
It's understandable to be encouraged by progress, but to my ears, it's not really expressive other than the first thought that I had when listening to the piece:
"That piano player is having a stroke, somebody should do something to help."
I've used the same analogy. It's unconditional and produces no coherent structure, but at least it captures some pianist-like timing and dynamics. Magenta is a GitHub project... please do something to help! :-)
Eh, asking a musician to write code is like asking a programmer to learn a scale on an accordion. Either you're going to see the failures in your own art, and be open to criticism, or ask for somebody to do it for you.
I'll put it thusly: As a broke-ass musician barely making ends meet, I'll be more than happy to contribute if Alphabet pays off what's left of the mortgage on my house.
Until then, I've only got the perspective that a project like this is trying to take food off my plate, money out of my pocket, and replace me. If you want my help, Son, you got to come better at me than with Altruism.
Otherwise, the best thing I can do to help is to say "This project is a heap of shit and your time is better spent elsewhere" without tinging it with too much condescension. Your child is brain damaged. Take it to Oregon.
My take (as someone working in the area) on how/why we are encouraged by this progress:
The model is unconditional, i.e., any concept of form at all was learned directly from data, with no real structural hints about what it should learn.
Generally you get much stronger "global structure" the more prior information you provide either in the model itself or in the input/targets (such as chord constraints, etc.). This is one reason that harmonization tasks often sound notably better than pure generation - the "backbone" in harmonization was human provided, whereas in pure generation the model needs to be self-consistent.
The analogous task in language would be character-level language modeling, whereas something like harmonization or conditional generation from chords would be more akin to machine translation. Character-level language models may look a bit wonky when generating, but any structure at all was discovered purely by the model under a simple training objective (generally, maximum likelihood), whereas in conditional models a lot of extra help is given by the conditioning variable.
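Schematically, the difference looks something like this (a sketch with a stand-in `step` function, not any real Magenta or model API; its signature is assumed purely for illustration):

    import random

    def sample(probs):
        # draw one token index according to the given probabilities
        return random.choices(range(len(probs)), weights=probs)[0]

    def generate(step, length, condition=None):
        # `step` is assumed to return (next-token probabilities, new state)
        # given the previous token, the recurrent state, and an optional
        # conditioning input for this timestep (e.g. the current chord).
        state, token, out = None, 0, []
        for t in range(length):
            cond_t = None if condition is None else condition[t]
            probs, state = step(token, state, cond_t)
            token = sample(probs)        # fed back in as the next input
            out.append(token)
        return out

    # Unconditional: any long-range structure must be invented by the model itself.
    #   melody = generate(step, length=64)
    # Conditional: the chord sequence supplies a human-provided "backbone".
    #   melody = generate(step, length=64, condition=chords_per_step)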
This model is a pretty big jump in quality from previous efforts in the same vein of unconditional generation, and should combine well with techniques for conditioning in sequence models from all over the research community (translation, speech recognition, question answering, summarization, speech synthesis). On top of that, the model is IMO quite elegant, and much simpler to understand than many other attempts at polyphonic generation including my own.
It is worth comparing this output to some previous efforts (from Magenta and others), some with more complicated internal structure such as [0][1][2][3][4][5][6], and also some models with stronger conditioning or different input representations such as [7][8][9], the extreme case being [10] where a human (Benoît Carré) used a modeling/ML toolkit along with standard arrangement and production approaches to produce a wacky, awesome pop song.
Magenta is putting out lots of neat stuff around interactive usage of these models, with web demos and plugins to standard music tools [11]. Indeed, Doug's quotes in this article [12] sum it up for me, excerpt: “I don’t think that machines themselves just making art for art’s sake is as interesting as you might think,” he explained. “The question to ask is can machines help us make a new kind of art?”
These things have nothing to do with each other.