"AI is one of the most important things humanity is working on. It's more profound than, I don't know, electricity or fire. So my point is, we have to be concerned about it."

Yeah, you've probably already guessed it: today's video is going to be about artificial intelligence, which can be a really difficult topic. I sort of didn't want to make this video, because everyone seems to be using artificial intelligence as an excuse: "Well, we don't need to learn anything, because artificial intelligence will be able to do everything better than us in the future." Which might be true in some cases, but it's definitely not the case right now.
right now if you look at machine learning which is an aspect of
artificial intelligence then it requires a lot of data and it requires a lot of
processing power it's just not the most efficient solution if you have a linear
problem what I mean by this this is from tensorflow they have a great website
where you can experience with artificial intelligence and machine learning in
general they have this playground place where you can try actual problems and if
we take this problem for instance we have this square where there's two
different colored dots in the middle and depending on how you arrange the dots
you sort of create different problems for the artificial intelligence and if
you are able to split the dots into two different categories by a single line
then we call it a linear problem the more complicated the structures the
harder it is for the machine and the more processing power and the more
hidden layers you need see you can't solve a nonlinear problem without hidden
layers that's where we have this machine learning algorithm that builds these
neural networks they can have as many hidden layers as we want but it's not
always ideal to have many hidden layers because it will add to the amount of
processing power it takes to train the algorithm so in this example for
instance we have a chequered set of pieces which is a nonlinear problem we
have to use at least two lines to solve this so you need to have at least one
hidden layer but apart from that you can kind of experiment with what works best
it is definitely possible to get this split up even if you don't have an ideal
solution when it comes to a more complex problem like this the spiral the
infamous spiral intensive flow it's difficult this is the solution that I
came up with by googling it's really that simple
it is possible to do it even on a pretty bad computer and get really good results
but this kind of shows how you don't need to use a machine learning algorithm
to solve every problem in the world with that being said keep in mind that there
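If you want to poke at that same idea in code, here is a minimal sketch in Keras on a made-up "circle inside a circle" dataset (my own stand-in, not the Playground's data): a purely linear model can't separate the two colors, while one small hidden layer can.

```python
# Minimal sketch of the Playground idea: a nonlinear "circle inside a circle"
# dataset can't be split by a single line, so a model with no hidden layer fails,
# while one small hidden layer is enough. Dataset and layer sizes are illustrative.
import tensorflow as tf
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.4, random_state=0)

def build(hidden_units):
    inputs = tf.keras.Input(shape=(2,))
    x = inputs
    if hidden_units:
        x = tf.keras.layers.Dense(hidden_units, activation="tanh")(x)  # hidden layer
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)        # the "line"
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

for hidden in (0, 4):   # 0 = purely linear model, 4 = one small hidden layer
    model = build(hidden)
    model.fit(X, y, epochs=50, verbose=0)
    loss, acc = model.evaluate(X, y, verbose=0)
    print(f"hidden units: {hidden}  accuracy: {acc:.2f}")
```

On this data the linear model usually hovers around chance, while the hidden-layer version gets most points right, which is exactly what the Playground shows visually.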
With that being said, keep in mind that there are still different kinds of neural networks and different ways to set these machines up. When people want generated output right now, they usually use what's called a GAN, a generative adversarial network: basically, two different machines inside the program that fight against each other. One of them is trying to cheat the other, and when it succeeds, the output at the other end is completely generated by an artificial intelligence. That's how people make some of these outputs.
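As a rough sketch of that two-player setup (not how any actual art or music GAN is built; the "real data" here is just numbers drawn from a normal distribution, so the whole thing stays a few lines long):

```python
# Toy GAN: a generator tries to fake data, a discriminator tries to catch it.
# The "real data" is numbers near 4.0; after training, the generator's outputs
# should drift toward that region. Sizes and step counts are arbitrary choices.
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),          # a fake "data point" comes out
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),          # logit: does this look real or fake?
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

for step in range(2000):
    real = tf.random.normal([64, 1], mean=4.0, stddev=0.5)   # the "real" data
    noise = tf.random.normal([64, 8])                        # generator's input
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise)
        real_logits = discriminator(real)
        fake_logits = discriminator(fake)
        # discriminator: label real samples 1 and fakes 0
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # generator: try to make the discriminator label its fakes as 1 (cheat it)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

print(generator(tf.random.normal([5, 8])).numpy())   # samples, hopefully near 4.0
```

The important part is the tug of war between the two losses: once the generator gets good enough that the discriminator can't tell its output from the real data, that output is, as described above, completely generated by the AI.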
But without further ado, enjoy.

It's widely accepted that millions of jobs are going to disappear, and in this future, the robots and algorithms that will replace us could either reduce us to poverty or set us free.

"I think we might well experience a new renaissance of creativity and of social interaction, in a very positive way. If people have the leisure to do whatever they want to do, they will be more fulfilled, because they'll be doing the things that interest them."

It may sound a long way away; after all, at the moment robots only do one thing at a time. But what if one could do all the things that we can? It's expected that within about a generation from now, a machine will be built which is better than a human, and that changes everything. According to the Korea Employment Information Service, AI-powered robots will be able to replace 29.1% of the local job market. They were also judged competent enough to replace 70% of the duties performed by doctors, 59.3% of university professors' duties, and 48.1% of the duties done by lawyers in Korea. In terms of memory formation, physical tenacity, sight, hearing, and spatial skills, artificial intelligence is incomparably better than the human workforce; humans perform better in tasks measuring creativity and in persuasive or negotiating situations.
"Help us understand what machine learning is, because that seems to be the key driver of so much of the excitement, and also the concern, around artificial intelligence. How does machine learning work?"

"Today we have reached the scale of computing and datasets that was necessary to make machines smart. So here's how it works: if you program a computer today, say your phone, then you hire software engineers who write a very, very long kitchen recipe, like 'if the water is too hot, turn down the temperature.' The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is give the computer examples and have it infer its own rules. That is exciting, because it relieves the software engineer of the need to be super smart."
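To make that difference concrete, here is a toy comparison (the data and the water-temperature rule are invented for illustration): the "kitchen recipe" is a hand-written rule, and the machine-learning version only gets examples and infers its own rule.

```python
# Hand-written rule vs. a rule learned from examples.
from sklearn.tree import DecisionTreeClassifier

# The "kitchen recipe": an engineer spells out the rule for every contingency.
def hand_written_rule(temperature_c):
    return "turn_down" if temperature_c > 40 else "leave_alone"

# The machine-learning way: examples in, rule out. (Made-up data.)
temperatures = [[20], [30], [38], [45], [55], [70]]            # observed temperatures
actions = ["leave_alone", "leave_alone", "leave_alone",
           "turn_down", "turn_down", "turn_down"]              # what a person did

learned_rule = DecisionTreeClassifier().fit(temperatures, actions)

print(hand_written_rule(42))            # rule written by a human
print(learned_rule.predict([[42]])[0])  # rule the computer inferred from examples
```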
Here's an idea: the question is not whether computers and artificial intelligence can make art; the question is whether we will allow them to make it. Engineers and artists alike are experimenting with artificial intelligence to see what kind of imagery, stories, poetry, and music machines can generate. For this track, called "Daddy's Car," engineers at Sony's Computer Science Laboratory developed an AI called Flow Machines to create music in the style of the Beatles. After sheet music was fed into the algorithm to teach the AI, the machines generated the melody and harmony. AI is becoming part of the toolbox for designers and filmmakers as well; Adobe's Wetbrush, for example, uses algorithms that simulate brushstrokes and the way liquid paint is distributed by different painting techniques. We've wondered whether the fundamentally human-seeming endeavor of art-making can be done by machines. I say yes, absolutely it can.
My name is Doug, and I'm going to talk to you about Magenta, a project that we're doing in Google Brain that's focused on music and art with machine learning. I want to point out, for those of you that are paying attention to deep learning, that deep learning in some sense is not new; we've had neural networks since at least the 1980s, but they haven't always shown themselves to be the best models for the job. One explanation for this is that neural networks are really good at scale: they're really good when you have a lot of data or when you have large models. And so as we get more compute power, what we find is that neural networks end up winning out over other technologies.
This project is about teaching a machine learning model to learn to draw, and these are some of the pictures that this machine learning model drew. The input is stroke-based drawings done by people when they play the game Quick, Draw!, and we encode them using a recurrent neural network that is actually moving through the sequence of strokes, trying to predict the next stroke. It's called a bidirectional recurrent neural network, or a bidirectional LSTM, and the whole job of that network is to create this vector; it's going to be used to condition the decoding. So we have this embedding, this string of numbers in latent space, that we can sample from, that we can add some noise to and generate new instances of data from. Those are then driven through the decoder, which is in this case another recurrent neural network, though only going in one direction, from left to right. It's going to drive a mixture of Gaussians: a mixture of possible places where the pen would land next.
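Here is a rough Keras sketch of that shape, just to show where the pieces sit. The layer sizes, and the choice of conditioning the decoder through its initial state, are my own assumptions for illustration; this is not the real sketch-rnn code, and the training loss isn't shown.

```python
# Encoder: bidirectional LSTM over the stroke sequence -> latent vector (with noise).
# Decoder: one-directional LSTM conditioned on that vector, emitting the parameters
# of a mixture of Gaussians over where the pen lands next at every step.
import tensorflow as tf

STROKE_DIM = 3   # (dx, dy, pen-up) per time step
LATENT = 32      # length of the latent vector, "the string of numbers"
MIX = 20         # number of Gaussians in the output mixture

strokes = tf.keras.Input(shape=(None, STROKE_DIM), name="strokes")
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256))(strokes)
z_mean = tf.keras.layers.Dense(LATENT)(h)
z_logvar = tf.keras.layers.Dense(LATENT)(h)

def add_noise(args):                      # sample around the embedding
    mean, logvar = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps

z = tf.keras.layers.Lambda(add_noise)([z_mean, z_logvar])

# Condition the decoder on z by using it to set the LSTM's initial state.
prev_strokes = tf.keras.Input(shape=(None, STROKE_DIM), name="previous_strokes")
init_h = tf.keras.layers.Dense(512, activation="tanh")(z)
init_c = tf.keras.layers.Dense(512, activation="tanh")(z)
dec = tf.keras.layers.LSTM(512, return_sequences=True)(
    prev_strokes, initial_state=[init_h, init_c])

pi = tf.keras.layers.Dense(MIX, activation="softmax")(dec)   # which Gaussian
mu = tf.keras.layers.Dense(MIX * 2)(dec)                     # its centre (dx, dy)
log_sigma = tf.keras.layers.Dense(MIX * 2)(dec)              # its spread

model = tf.keras.Model([strokes, prev_strokes], [pi, mu, log_sigma])
model.summary()
```

Because the noise is injected between the encoder and the decoder, sampling several times from the same drawing gives different but related results, which is exactly what the demo shows next.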
What I'm going to do is draw something, and then we're going to sample from the model nine times. Remember, the model has some noise in it; it's not completely deterministic, so we're going to get nine different drawings. So let's say I'm going to draw a raindrop. All right, you'll see my raindrop appearing nine times; I did a nice big round raindrop, and now I'm going to let the model go, and it's going to make rain happen. You could also just draw rain like this, because some people will draw rain like this, and notice the model kind of follows my lead and draws rain like I did. If you draw a cloud, right, in your mind's eye, what's going to happen when I let that cloud go? It's going to rain. I just think that's so cool. I actually don't know how to draw a cruise ship; I've never actually been on a cruise ship, I don't think. So I'll just do that, and then sketch-rnn, trained on Quick, Draw! data, will fill it in with different kinds of cruise ships.
Let's look back at the paper from last year called WaveNet. It's trying to learn to generate audio from audio; it's actually learning on the raw PCM, pulse-code modulation, sampled sixteen thousand times a second, and it's trying to predict the next sample conditioned on about the last two seconds of samples. What it uses is something called dilated convolution: you see the arrows get spread further and further apart, almost like they're being dilated in time, so that the next prediction is conditioned not only on the sample that came last, but on samples, or representations of samples, that happened further and further in the past.
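A toy version of that dilation pattern looks like this (layer and channel counts are illustrative and much smaller than the real WaveNet, which repeats the dilation cycle many times to reach roughly two seconds of context):

```python
# A small stack of causal, dilated 1-D convolutions: each layer doubles the dilation,
# so the receptive field grows exponentially and a prediction is conditioned on
# samples further and further in the past, never on the future.
import tensorflow as tf

SAMPLE_RATE = 16000                         # raw audio, 16,000 samples per second

audio = tf.keras.Input(shape=(None, 1))     # waveform, one value per sample
x = audio
receptive_field = 1
for dilation in (1, 2, 4, 8, 16, 32, 64, 128):
    x = tf.keras.layers.Conv1D(32, kernel_size=2, dilation_rate=dilation,
                               padding="causal", activation="relu")(x)
    receptive_field += dilation             # kernel size 2 adds `dilation` samples
next_sample = tf.keras.layers.Conv1D(1, kernel_size=1)(x)   # prediction at each step

model = tf.keras.Model(audio, next_sample)
print(f"receptive field: {receptive_field} samples "
      f"(~{1000 * receptive_field / SAMPLE_RATE:.0f} ms of context)")
```

This tiny stack only sees about 16 ms of context; the real model stacks many more of these dilation cycles to reach the couple of seconds mentioned in the talk.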
Let's play... let's play Dizzy, please. The model here is trained on Dizzy Gillespie only, learning to get something out of that. Let's play Metallica. So what we see is that WaveNet alone, trained on this data, does some cool things, but it doesn't give us our desired goal, which is to have kind of coherent musical notes. So what we decided to do was add an autoencoder to WaveNet, so that we can constrain it and help it understand how sound is unfolding in time. This basic diagram should look familiar: you have some input, only now it's not a cat, it's an input waveform. We're going to encode that in time using a kind of convolutional model; it's not a WaveNet, but it's also using deep dilated convolutions. That's going to give us some sort of embedding, and in this case the embedding actually unfolds in time, so it's 16 values that change every few milliseconds. Then we're going to have a WaveNet decoder, the same WaveNet that we just saw, and the WaveNet is actually going to have the input audio available when it's training, but it's also going to see this conditioning information from our Z, and if it wants to take advantage of it, it can. And in fact it does, to great effect.
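As a very rough sketch of that diagram (the layer sizes, the downsampling factor, and the simplified causal decoder are my own stand-ins, not the real NSynth model): the waveform goes through a dilated-convolution encoder, gets squeezed into a small embedding that unfolds in time, and a causal decoder predicts the audio while seeing both the past audio and that conditioning signal.

```python
# Autoencoder-on-WaveNet idea: encode the waveform into a coarse, 16-channel
# embedding that changes slowly in time, then let a causal decoder see both the
# input audio and that conditioning signal while it predicts the next samples.
import tensorflow as tf

EMBED = 16                  # "16 values that change every few milliseconds"
HOP = 64                    # how much the encoder downsamples in time (assumed)

audio = tf.keras.Input(shape=(6400, 1))     # 0.4 s of 16 kHz audio, fixed for clarity

# Temporal encoder: dilated (non-causal) convolutions, then coarsen in time.
e = audio
for dilation in (1, 2, 4, 8):
    e = tf.keras.layers.Conv1D(32, 3, dilation_rate=dilation,
                               padding="same", activation="relu")(e)
e = tf.keras.layers.AveragePooling1D(pool_size=HOP)(e)
embedding = tf.keras.layers.Conv1D(EMBED, 1)(e)          # the Z sequence: (100, 16)

# Decoder: causal convolutions over the input audio, with the conditioning attached.
z_up = tf.keras.layers.UpSampling1D(size=HOP)(embedding)     # back to sample rate
d = tf.keras.layers.Concatenate()([audio, z_up])             # decoder sees both
for dilation in (1, 2, 4, 8, 16, 32):
    d = tf.keras.layers.Conv1D(32, 2, dilation_rate=dilation,
                               padding="causal", activation="relu")(d)
next_sample = tf.keras.layers.Conv1D(1, 1)(d)

model = tf.keras.Model(audio, next_sample)
model.summary()
```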
So now what we can do is encode an entire note, and we can then decode from it. Let's listen to the original bass. Now, if we run that bass through our model and decode it, in the same way that we ran the cat through our model and looked at the cat, it sounds like the bass on the bottom; it's a little bit distorted, but more or less it captures the sound of the bass. So now you're asking: why would you want to reconstruct noisy versions of these samples? Because we're living in this embedding space, we can do exactly what we did with the images of the cats: we can move between sounds we know and listen to what the model does in spaces that we don't know. So let's listen to what bass and flute sound like. Original: it sounds like a bass and a flute, right? You just average the signals together. Now let's listen to bass and flute from NSynth. What it does, in my mind's eye, is make a really big bass flute, right?
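Numerically, the contrast he's describing looks something like the sketch below. The encoder and decoder here are deliberately trivial stand-ins, and the "bass" and "flute" are just sine waves; the point is only to show where the averaging happens in each case (the real thing would use the trained NSynth encoder and WaveNet decoder).

```python
# Averaging raw waveforms vs. averaging embeddings and decoding.
import numpy as np

SR = 16000
t = np.arange(SR) / SR
bass = np.sin(2 * np.pi * 110 * t)      # a low sine standing in for a bass note
flute = np.sin(2 * np.pi * 880 * t)     # a high sine standing in for a flute note

def encode(w, hop=512):                 # stand-in encoder: coarse summary over time
    return w[: len(w) // hop * hop].reshape(-1, hop).mean(axis=1)

def decode(z, hop=512):                 # stand-in decoder: stretch back to audio rate
    return np.repeat(z, hop)

# "Original": average the two signals -> you simply hear both sounds at once.
naive_mix = 0.5 * bass + 0.5 * flute

# Embedding-space version: move between the sounds in latent space, then decode.
z_between = 0.5 * encode(bass) + 0.5 * encode(flute)   # sweep the weight 0..1 to morph
bass_flute = decode(z_between)

print(naive_mix.shape, bass_flute.shape)
```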
So, we're at Stanford right now; our first class starts here, and I'm going to try to talk to the teacher afterwards. [The conversation that follows was recorded in the lecture hall and is only partly audible.] ...the main thing is getting it done with cameras, just doing it with cameras... the first thing is they have to do it with cameras, otherwise it will be too heavy... it's interesting that that's the approach, because I actually thought the tactic, with some of the regulation coming in, like here in Silicon Valley, would be to start small and say, okay, we're not going to take on the whole of the US; even just San Francisco, or a local area, or my driveway first... ways to get around it: simpler, smaller... and when do you get there, in ten years, fifteen years? Two years? In two years you get demonstrations, but the trick is, can you actually buy one? Yeah, thank you.
Perfect, that is it. I really hope you guys liked this video. As you can see, artificial intelligence can be used to make art, and it can make art on its own. I found this super interesting when I first heard about it; there was this 18-year-old speaker named Robbie Barrat who came to the lecture, and I mean, he's done crazy stuff, he's insane, but yeah, very, very impressive. If you're new to the channel, then don't forget to subscribe; I really do put a lot of effort into these videos, especially this one, and I've been super stoked to share it with you guys. I'm a little sorry that the audio is so bad on the lecture; that is just kind of how it has to be when you're in a huge hall with a lot of people. I'm very sorry. I put on subtitles, I hope that helped, but yeah, I'll see you guys next week. Take care.