The pooling operation used in convolutional neural networks is a big mistake, and the fact that it works so well is a disaster.
— Geoffrey Hinton
Most people at CMU thought it was perfectly reasonable for the U.S. to invade Nicaragua. They somehow thought they owned it.
Humans are still much better than computers at recognizing speech.
I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things.
Computers will understand sarcasm before Americans do.
I am scared that if you make the technology work better, you help the NSA misuse it more. I'd be more worried about that than about autonomous killer robots.
My main interest is in trying to find radically different kinds of neural nets.
My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively.
I get very excited when we discover a way of making neural networks better - and when that's closely related to how the brain works.
I had a stormy graduate career, where every week we would have a shouting match. I kept doing deals where I would say, 'Okay, let me do neural nets for another six months, and I will prove to you they work.' At the end of the six months, I would say, 'Yeah, but I am almost there. Give me another six months.'
I have a Reagan-like ability to believe in my own data.
In science, you can say things that seem crazy, but in the long run, they can turn out to be right. We can get really good evidence, and in the end, the community will come around.
The paradigm for intelligence was logical reasoning, and the idea of what an internal representation would look like was it would be some kind of symbolic structure. That has completely changed with these big neural nets.
The brain has about ten thousand parameters for every second of experience. We do not really have much experience about how systems like that work or how to make them be so good at finding structure in data.
In A.I., the holy grail was how do you generate internal representations.
Making everything more efficient should make everybody happier.
Early AI was mainly based on logic. You're trying to make computers that reason like people. The second route is from biology: You're trying to make computers that can perceive and act and adapt like animals.
A deep-learning system doesn't have any explanatory power.
Once your computer is pretending to be a neural net, you get it to be able to do a particular task by just showing it a whole lot of examples.
In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.
The NSA is already bugging everything that everybody does. Each time there's a new revelation from Snowden, you realise the extent of it.
Everybody right now, they look at the current technology, and they think, 'OK, that's what artificial neural nets are.' And they don't realize how arbitrary it is. We just made it up! And there's no reason why we shouldn't make up something else.
I think the way we're doing computer vision is just wrong.
I am betting on Google's team to be the epicenter of future breakthroughs.
Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they'd never do anything are now happy to call them AI and try and get some of the money.
I feel slightly embarrassed by being called 'the godfather.'
Deep learning is already working in Google search and in image search; it allows you to image-search a term like 'hug.' It's used to get you Smart Replies in your Gmail. It's in speech and vision. It will soon be used in machine translation, I believe.
We now think of internal representation as great big vectors, and we do not think of logic as the paradigm for how to get things to work. We just think you can have these great big neural nets that learn, and so, instead of programming, you are just going to get them to learn everything.
I got fed up with academia and decided I would rather be a carpenter.
Machines can do things cheaper and better. We're very used to that in banking, for example. ATM machines are better than tellers if you want a simple transaction. They're faster, they're less trouble, they're more reliable, so they put tellers out of work.
I think it's very clear now that we will have self-driving cars.
Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning.
Any new technology, if it's used by evil people, bad things can happen. But that's more a question of the politics of the technology.
All you need is lots and lots of data and lots of information about what the right answer is, and you'll be able to train a big neural net to do what you want.
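The recipe in the quote above, lots of labeled examples plus a net trained to give the right answers, can be sketched as a toy supervised learning loop. This is an illustrative example only (a one-layer logistic unit learning the AND function), not code from Hinton or Google:

```python
import math
import random

def train(examples, lr=0.5, epochs=2000, seed=0):
    """Learn weights for a single logistic unit from (input, label) pairs."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)]
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            # forward pass: weighted sum squashed to (0, 1)
            z = w[0] * x[0] + w[1] * x[1] + b
            y = 1.0 / (1.0 + math.exp(-z))
            # gradient step: nudge the weights toward the right answer
            err = y - target
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Threshold the learned unit's output at 0.5 (i.e. z > 0)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if z > 0 else 0

# "lots of data and the right answer": every input paired with its label
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

After training, the unit reproduces the target function on all four examples, never having been given a rule, only data and answers.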
Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don't get irony either. Computers are going to reach the level of Americans before Brits.
The question is, can we make neural networks that are 1,000 times bigger? And how can we do that with existing computation?
We want to take AI and CIFAR to wonderful new places, where no person, no student, no program has gone before.
The brain sure as hell doesn't work by somebody programming in rules.
In a sensibly organised society, if you improve productivity, there is room for everybody to benefit.
My father was an entomologist who believed in continental drift. In the early '50s, that was regarded as nonsense. It was in the mid-'50s that it came back. Someone named Alfred Wegener had thought of it 30 or 40 years earlier, and he never got to see it come back.
In the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses.
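The idea that all knowledge lives in modifiable connection strengths can be illustrated with a classic Hopfield-style associative memory, where a pattern is stored entirely in a weight matrix by a Hebbian rule (units that fire together get a stronger connection). This is a standard textbook illustration, not something specific to the quote:

```python
def store(patterns, n):
    """Hebbian storage: the whole memory is the weight matrix W."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    # strengthen the "synapse" between co-active units
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Repeatedly update each unit from its weighted inputs."""
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            total = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = store([pattern], 6)
noisy = [1, -1, 1, -1, 1, 1]  # last unit flipped by "noise"
```

Calling `recall(W, noisy)` restores the stored pattern from the corrupted cue: nothing about the pattern is written down anywhere except in the connection strengths.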
Most people in AI, particularly the younger ones, now believe that if you want a system that has a lot of knowledge in it, an amount of knowledge that would take millions of bits to quantify, the only way to get a good system with all that knowledge in it is to make it learn it. You are not going to be able to put it in by hand.