
Voices in AI – Episode 85: A Conversation with Ilya Sutskever



About this Episode

Episode 85 of Voices in AI features host Byron Reese and Ilya Sutskever of OpenAI talking about the future of general intelligence and the ramifications of building a computer smarter than us.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Ilya Sutskever. He is the co-founder and the chief scientist at OpenAI, one of the most fascinating institutions on the face of this planet. Welcome to the show Ilya.

Ilya Sutskever: Great to be here.

Just to bring the listeners up to speed, talk a little bit about what OpenAI is, what its mission is, and kind of where it’s at. Set the scene for us of what OpenAI does.

Great, for sure. The best way to describe OpenAI is this: at OpenAI we take the long-term view that eventually computers will become as smart as or smarter than humans in every single way. We don’t know when it’s going to happen — some number of years, something [like] tens of years, it’s unknown. And the goal of OpenAI is to make sure that when this does happen, when computers which are smarter than humans are built, when AGI is built, its benefits will be widely distributed. We want it to be a beneficial event, and that’s the goal of OpenAI.

And so we were founded three years ago, and since then we’ve been doing a lot of work in three different areas. We’ve done a lot of work in AI capabilities, and over the past three years we’ve done a lot of work we are very proud of. Some of the notable highlights are: our Dota results, where we had the first and very convincing demonstration of an agent playing a real-time strategy game, trained with reinforcement learning with no human data. We’ve trained robot hands to reorient a block. This was really cool; it was cool to see it transfer.

And recently we’ve released GPT-2 — a very large language model which can generate very realistic text as well as solve lots of different language problems [with] a very high level of accuracy. And so this has been our work in capabilities.

Another thrust of the work that we are doing is AI safety, which at [its] core is the problem of finding ways of communicating a very complicated reward function to an agent, so that the agent we build can achieve goals with great competence, and will do so while taking human values and preferences into account. And so we’ve done a significant amount of work there as well.

And the third line of work we’re doing is AI policy, where we basically have a number of really good people thinking hard about what kind of policies should be designed and how governments and other institutions should respond to the fact that AI is improving pretty rapidly. But overall our goal, eventually the end game of the field, is that AGI will be built. The goal of OpenAI is to make sure that the development of AGI will be a positive event and that its benefits are widely distributed.

So 99.9% of all the money that goes into AI is working on specific narrow AI projects. I tried to get an idea of how many people are actually working on AGI and I find that to be an incredibly tiny number. There’s you guys, maybe you would say Carnegie Mellon, maybe Google, there’s a handful, but is my sense of that wrong? Or do you think there are lots of groups of people who are actually explicitly trying to build a general intelligence?

So explicitly… OK, a great question. So explicitly, most people, most research labs, are indeed not having this as their goal, but I think that the work of many people indirectly contributes to this. For example, much better learning algorithms, better network architectures, better optimization methods — all tools which are classically categorized as conventional machine learning — are also likely to be directly contributing to those…

Well let’s stop there for a second, because I noticed you changed your word there to “likely.” Do you still think it’s an open question whether narrow AI, whatever technologies we have that do that, has anything to do with general intelligence? Or is it still the case that a general intelligence might have absolutely nothing to do with backpropagation, neural nets and machine learning?

So I think that’s very highly unlikely. Sorry — I want to make it clear, I think that the tools that the field of machine learning is developing today, such as deep networks and backpropagation — I think those are immensely powerful tools, and I think that it is likely that they will stay with us, with the field, for a long time, all the way until we build true general intelligence. At the same time I also believe, and I want to emphasize, that important missing pieces exist and we haven’t figured out everything. But I think that deep learning has proven itself to be so versatile and so powerful, and it’s basically been exceeding our expectations at every turn. And so for these reasons I feel that deep learning is going to stay with us.

Well let’s talk about that though, because one could summarize the techniques we have right now as: let’s take a lot of data about the past, let’s look for patterns in that data and let’s make predictions about the future, which isn’t all that exciting when you say it like that. It’s just that we’ve gotten very good at it.

But why do you believe that method is the solution to things like creativity, intuition, emotion and all of these kinds of human abilities? It seems, at an intuitive level, that if you want to teach a machine to play Dota or Go or whatever, yeah, that works great. But really when you come down to human-level intelligence, with its versatility, with transfer learning, with all the things we do effortlessly, it’s not even… it doesn’t seem at first glance to be a match. So why do you suspect that it is?

Well, I mean, I can tell you how I look at it. So for example you mentioned intuition as one thing — so you used a certain phrase to describe the current tools, where you kind of look for patterns in the past data and you use that to make predictions about the future, and therefore it sounds not exciting. But I don’t know if I’d agree with that statement. And on the question of intuition, I can tell you a story about AlphaGo. So… if you look at how AlphaGo works, there is a convolutional neural network.

OK, actually let me give you a better analogy — I believe there is a book by Malcolm Gladwell where he talks about experts, and one of the things he has to say about experts is that an expert, as a result of all their practice, can look at a very complicated situation and instantly tell, like, the three most important things in that situation. And then they think really hard about which of those things is really important. And apparently the same thing happens with Go players, where a Go player might look at the board and instantly see the most important moves, and then do a little bit of thinking about those moves. And like I said, instantly seeing what those moves are — this is their intuition. And so I think it’s basically unquestionable that the neural network that’s inside AlphaGo captures this very well. So I think it’s not correct to say that intuition cannot be captured.



Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

