When you think of driverless cars, there's a huge potential for these cars to save lives by preventing accidents and by reducing congestion on highways.
— Oren Etzioni
I love sophisticated algorithms that help consumers in a tangible way.
Life is short. Don't do the same thing everyone else is doing - that's such a herd mentality. And don't do something that's two percent better than the other person. Do something that changes the world.
Taking new technology and incorporating it into how people work and live is not easy.
When you have a large amount of data that is labeled so a computer knows what it means, and you have a large amount of computing power, and you're trying to find patterns in that data, we've found that deep learning is unbeatable.
The truth is that behind any AI program that works is a huge amount of, A, human ingenuity and, B, blood, sweat and tears. It's not the kind of thing that suddenly takes off like 'Her' or in 'Ex Machina.'
I could do a whole talk on the question 'Is AI dangerous?' My response is that AI is not going to exterminate us. It's a tool that's going to empower us.
My dream is to achieve AI for the common good.
I became interested in AI in high school because I read 'Goedel, Escher, Bach,' a book by Douglas Hofstadter. He showed how all their work in some ways fit together, and he talked about artificial intelligence. I thought 'Wow, this is what I want to be doing.'
Some people have proposed universal basic income, UBI, basically making sure that everybody gets a certain amount of money to live off of. I think that's a wonderful idea. The problem is, we haven't been able to guarantee universal healthcare in this country.
I'm not so worried about super-intelligence and 'Terminator' scenarios. Frankly I think those are quite farfetched.
The Turing Test was a brilliant idea, but it's evolved into a competition of chatbots.
I think that there are so many problems that we have as a society that AI can help us address.
Israel is a wonderful place to grow up.
Infrastructure investment in science is an investment in jobs, in health, in economic growth and environmental solutions.
AI is a tool. The choice about how it gets deployed is ours.
We have an obligation to figure out how to help people cope with the rapidly changing nature of technology.
My family is my number one priority.
Machines and people are both necessary for Facebook, Twitter, Wikipedia, Google, and neither is sufficient on its own.
To take intellectual risks is to think about something that can't be done, that doesn't make any sense, and go for it responsibly.
I'm not a big fan of self-driving cars where there's no steering wheel or brake pedal. Knowing what I know about computer vision and AI, I'd be pretty uncomfortable with that. But I am a fan of a combined system - one that can brake for you if you fall asleep at the wheel, for example.
AI works really well when you couple AI in a raisin bread model. AI is the raisins, but you wrap it in a good user interface and product design, and that's the bread. If you think about raisin bread, it's not raisin bread without the raisins. Right? Then it's just bread, but it's also not raisin bread without the bread. Then it's just raisins.
It's much more likely that an asteroid will strike the Earth and annihilate life as we know it than AI will turn evil.
Science is going to be revolutionized by AI assistants.
I'd like to make a fundamental impact on one of the most exciting, intelligent questions of all time. Can we use software and hardware to build intelligence into a machine? Can that machine help us solve cancer? Can that machine help us solve climate change?
What are we going to do as automation increases, as computers get more sophisticated? One thing that people say is we'll retrain people, right? We'll take coal miners and turn them into data miners. Of course, we do need to retrain people technically. We need to increase technical literacy, but that's not going to work for everybody.
I think it's important for us to have a rule that if a system is really an AI bot, it ought to be labeled as such. 'AI inside.' It shouldn't pretend to be a person. It's bad enough to have a person calling you and harassing you, or emailing you. What if they're bots? An army of bots constantly haranguing you - that's terrible.
Understanding of natural language is what sometimes is called 'AI complete,' meaning if you can really do that, you can probably solve artificial intelligence.
AI is neither good nor evil. It's a tool. It's a technology for us to use.
A universal basic income doesn't give people dignity or protect them from boredom and vice.
Just as our roads and bridges are overdue for investment, so is the infrastructure for scientific research; that is, the body of scientific thought and the tools for searching through it.
If you step back a little and say we want to do A.I., then you will realize that A.I. needs knowledge, reasoning, and explanation.
Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models. This is a real problem.
An AI utopia is a place where people have income guaranteed because their machines are working for them. Instead, they focus on activities that they want to do, that are personally meaningful, like art or, where human creativity still shines, science.
The best students are ones that are willing to take intellectual risks and challenge conventional thinking.
Everybody should do at least one startup sometime in life. It's such an amazing ride.
The only rollercoasters I get on are startups.
It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.
The biggest reason we want autonomous cars is to prevent accidents.
When there are hiring decisions and promotion decisions to be made, people are hungry for data.
At least inside the city of Seattle, driving is going to be a hobby in 2035, not a mode of commuting, in the same way that hunting is a hobby for some people but not how most of us get our food.
I like to say I've been working on big data for so long, it used to be small data when I started working on it.
I don't think that all the coal miners - or even more realistically, say, the truck drivers whose jobs may be put out by self-driving cars and trucks - they're all going to go and become web designers and programmers.
All these things that we've contemplated, whether it's space travel or solutions to diseases that plague us, Ebola virus, all of these things would be a lot more tractable if the machines are trying to solve these problems.
Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines. Yet at the same time, the things that are easiest for a person - like making sense of what they see in front of them, speaking in their mother tongue - the machines really struggle with.
A lot of people are scared that machines will take over the world, machines will turn evil: the Hollywood 'Terminator' scenario.
It's paradoxical that things that are hard for people are easy for the computer, and things that are hard for the computer, any child can understand.
Scientists need the infrastructure for scientific search to aid their research, and they need it to offer relevancy and ways to separate the wheat from the chaff - the useful from the noise - via AI-enabled algorithms. With AI, such an infrastructure would be able to identify the exact study a scientist needs from the tens of thousands on a topic.
Driverless cars are a great thing.
I'm trying to use AI to make the world a better place. To help scientists. To help us communicate more effectively with machines and collaborate with them.