Giving control over powerful AI to the highest bidder is unlikely to lead to the best world we can imagine.
— Jaan Tallinn
I was born behind the Iron Curtain, and I remember heated discussions about large-scale terraforming projects, such as reversing the direction of the river Ob or putting large reflectors into space to heat up Siberia.
Incentive schemes, whereby people who have done the most good for humanity are rewarded 20 years into the future, would create the expectation that doing long-term good is valuable.
I used to have a very standard worldview. I can easily identify with people who see computers getting faster and smarter, and technology getting more and more beneficial, without seeing the other side.
It really sucks to be the number two intelligent species on this planet; you can just ask gorillas.
In my view, the fact that computers have already caught up to and completely dominate humans in chess and some other domains is evidence that, yes, in principle, they can be better programmers than humans.
Healthcare is the only civil system where new technology makes prices go up instead of down.
The main reason I backed DeepMind was strategic: I see my role as bridging the AI research and AI safety communities.
Your evolutionary heuristics keep coming back to the idea of a future roughly similar to the present. You give to the community as it is now, to benefit a similar community in the future.
Shaming people into being virtuous doesn't change behaviour.
If you happen to start a new country in the 1990s, you have the advantage of drafting new laws with the knowledge that the Internet is out there.
Once you acknowledge that human brains are basically made of atoms and acknowledge that atoms are governed by simple laws of physics, then there is no reason, in principle, why computers couldn't do anything that people do, and we don't really see any evidence that this is not the case.
I myself am also a small investor in Slack, and one can count four to five IM platforms that were launched by Skype alumni alone.
Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.
We don't exactly have the opposite interests to chimpanzees. However, things are not looking up for the chimpanzees, because we control their environment. Our interests are not perfectly aligned with theirs, and it turns out it's not easy to get interests aligned.
I sometimes joke that I can take personal responsibility for saving one million human relationships.
The interesting thing about blockchain is that it has made it possible for humanity to reach a consensus about a piece of data without having any authority to dictate it.
Once we have something that is no longer under control, once technological development is yanked out of our hands, it doesn't have to continue to be beneficial to humans.
Once computers can program, they basically take over technological progress because already, today, the majority of technological progress is run by software, by programming.
There is no shortage of embarrassing facts about healthcare, and people die every day in the U.S. due to preventable errors - would you fly on planes if you knew several of them would drop out of the sky every day?
Technology keeps progressing. Young people follow the curve. But as they get older, they get inertia, and they start deviating from that curve.