Machine learning was the central topic at the NeurIPS conference in Vancouver, Canada, last week. About 13,000 researchers from around the world gathered to discuss neuroscience, how to explain neural network outputs, and how artificial intelligence can help solve major real-world problems.
During the meeting, Jeff Dean, the head of Google AI, gave an exclusive interview to VentureBeat and shared his views on machine learning trends for 2020. In Jeff Dean's view:
In 2020, the field of machine learning will see major progress in multi-task learning and multi-modal learning, and new devices will allow machine learning models to play a more effective role.
The following are excerpts from the English interview, each followed by a brief summary:
1. On AI chips
VentureBeat: What do you think are some of the things that, in a post-Moore's Law world, people are going to have to keep in mind?
Jeff Dean: Well I think one thing that's been shown to be pretty effective is specialization of chips to do certain kinds of computation that you want to do that are not completely general purpose, like a general-purpose CPU. So we've seen a lot of benefit from more restricted computational models, like GPUs or even TPUs, which are more restricted but really designed around what ML computations need to do. And that actually gets you a fair amount of performance advantage, relative to general-purpose CPUs. And so you're then not getting the great increases we used to get in sort of the general fabrication process improving year-over-year substantially. But we are getting significant architectural advantages by specialization.
Specializing chips for particular kinds of computation, rather than relying on general-purpose CPUs, has proved very effective. GPUs and TPUs are more restricted, but they are designed around what machine learning computations need, which gives them a significant performance advantage over general-purpose CPUs.
As a result, while we no longer see the huge year-over-year gains in computing power that general fabrication-process improvements used to deliver, we are gaining significant architectural advantages through specialization.
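One concrete form this specialization takes is reduced numeric precision: TPUs, for instance, operate natively on the bfloat16 format, which keeps float32's exponent range but only 8 bits of significand. As a rough illustration only (a toy sketch in pure Python, not how any accelerator is implemented), bfloat16 rounding can be simulated by truncating the low 16 bits of a float32, and an ML-style computation such as matrix multiplication still lands close to the full-precision answer:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Simulate bfloat16 by truncating a float32 to its top 16 bits.

    bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits,
    so each value carries roughly 2-3 significant decimal digits.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def matmul(a, b, round_fn=lambda v: v):
    """Naive matrix multiply; round_fn lets us inject low-precision operands."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            out[i][j] = sum(round_fn(a[i][t]) * round_fn(b[t][j])
                            for t in range(k))
    return out

a = [[1.1, 2.2], [3.3, 4.4]]
b = [[5.5, 6.6], [7.7, 8.8]]
exact = matmul(a, b)
approx = matmul(a, b, round_fn=to_bfloat16)

# The low-precision result stays within about 2% of the full-precision one --
# close enough for most ML training, while the narrower datapath is what lets
# specialized hardware pack in far more multipliers per chip.
for row_e, row_a in zip(exact, approx):
    for e, v in zip(row_e, row_a):
        assert abs(e - v) / abs(e) < 0.02
```

The trade-off this sketch illustrates is the essence of Dean's point: by giving up generality (full float32 precision everywhere), specialized hardware buys large gains on the restricted workload it targets.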
2. On machine learning for chip design
VentureBeat: You also got a little into the use of machine learning for the creation of machine learning hardware. Can you talk more about that?
Jeff Dean: Basically, right now in the design process you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over. It's a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design rules or whatever fabrication process you're doing.
So it turns out that we have early evidence in some of our work that we can use machine learning to do much more automated placement and routing. And we can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip.
In the design process, tools can help with layout, but human placement and routing experts must still iterate with those tools many, many times.
It usually takes weeks to go from the design you want to a physical layout on the chip that satisfies constraints on area, power, and wire length while meeting all the design rules of the fabrication process being used.
It turns out, based on early evidence from some of our work, that machine learning can do much more automated placement and routing.
We can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip. We have been testing this internally on some chips, with good results.
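To make "playing the game of placement" concrete, here is a deliberately tiny sketch of the underlying optimization problem (a hypothetical toy, not Google's actual method, which uses learned models over real design rules): cells are placed on a grid, and the placer searches for an arrangement that shortens the total Manhattan wirelength of the nets connecting them.

```python
import random

def wirelength(positions, nets):
    """Total cost: Manhattan distance between every pair of cells in each net."""
    total = 0
    for net in nets:
        for i in range(len(net)):
            for j in range(i + 1, len(net)):
                (x1, y1), (x2, y2) = positions[net[i]], positions[net[j]]
                total += abs(x1 - x2) + abs(y1 - y2)
    return total

def place(num_cells, grid, nets, steps=2000, seed=0):
    """Greedy local search: start from a random placement, keep swaps that
    do not lengthen the wiring, undo the ones that do."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(slots)
    positions = {c: slots[c] for c in range(num_cells)}
    start_cost = cost = wirelength(positions, nets)
    for _ in range(steps):
        a, b = rng.sample(range(num_cells), 2)
        positions[a], positions[b] = positions[b], positions[a]
        new_cost = wirelength(positions, nets)
        if new_cost <= cost:
            cost = new_cost  # keep the improving swap
        else:
            positions[a], positions[b] = positions[b], positions[a]  # undo it
    return positions, start_cost, cost

# Six cells on a 3x3 grid; each net lists cells that must be wired together.
nets = [(0, 1, 2), (2, 3), (3, 4, 5), (0, 5)]
placed, start_cost, end_cost = place(num_cells=6, grid=3, nets=nets)
assert end_cost <= start_cost  # local search never makes the wiring worse
```

A real placer must also respect area, power, and timing constraints; Dean's point is that a learned policy can explore this enormous search space far faster than weeks of human-in-the-loop iteration with conventional tools.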
3. Google's challenges
VentureBeat: What do you feel are some of the technical or ethical challenges for Google in the year ahead?
What technical or ethical challenges do you think Google will face in the coming year?
Jeff Dean: In terms of AI or ML, we've done a pretty reasonable job of getting a process in place by which we look at how we're using machine learning in different product applications and areas consistent with the AI principles. That process has gotten better-tuned and oiled with things like model cards and things like that. I'm really happy to see those kinds of things. So I think those are good and emblematic of what we should be doing as a community.
And then I think in the areas of many of the principles, there [are] real open research directions. Like, we have kind of the best known practices for helping with fairness and bias and machine learning models or safety or privacy. But those are by no means solved problems, so we need to continue to do longer-term research in these areas to progress the state of the art while we currently apply the best known state-of-the-art techniques to what we do in an applied setting.
In terms of AI and machine learning, we have done a reasonable job of putting a process in place for reviewing how we use machine learning in different product applications and areas, consistent with the AI principles. That process has been refined and streamlined with tools like model cards.
Then, in many of the principle areas, there are genuinely open research directions. We have best known practices for addressing fairness, bias, safety, and privacy in machine learning models, but these are by no means solved problems. We need to continue long-term research in these fields to advance the state of the art, while applying the best known state-of-the-art techniques in our applied work.
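The model cards Dean mentions are structured summaries that document a model's intended use, limitations, and performance broken out by subgroup, so that disparities are visible rather than hidden in a single aggregate number. A minimal illustrative sketch (the field names here are hypothetical, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card sketch: what the model is for, how it performs."""
    name: str
    intended_use: str
    limitations: str
    # Accuracy broken out by subgroup, so disparities are visible.
    subgroup_accuracy: dict = field(default_factory=dict)

    def fairness_gap(self) -> float:
        """Largest accuracy difference between any two subgroups."""
        scores = list(self.subgroup_accuracy.values())
        return max(scores) - min(scores) if scores else 0.0

card = ModelCard(
    name="toy-sentiment-v1",
    intended_use="Ranking short product reviews; not for medical text.",
    limitations="Trained only on English data.",
    subgroup_accuracy={"en-US": 0.91, "en-GB": 0.89, "en-IN": 0.84},
)

gap = card.fairness_gap()  # largest subgroup disparity, here 0.91 - 0.84
```

Surfacing a per-subgroup gap like this does not solve fairness, which is exactly Dean's caveat, but it makes the open problem measurable enough to review before a model ships.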
4. On artificial intelligence trends
VentureBeat: What are some of the trends you expect to emerge, or milestones you think may be surpassed in 2020 in AI?
Jeff Dean: I think we'll see much more multitask learning and multimodal learning, of sort of larger scales than has been previously tackled. I think that'll be pretty interesting.
And I think there’s going to be a continued trend to getting more interesting on-device models — or sort of consumer devices, like phones or whatever — to work more effectively.
I think obviously AI-related principles-related work is going to be important. We’re a big enough research organization that we actually have lots of different thrusts we’re doing, so it’s hard to call out just one. But I think in general [we’ll be] progressing the state of the art, doing basic fundamental research to advance our capabilities in lots of important areas we’re looking at, like NLP or language models or vision or multimodal things. But also then collaborating with our colleagues and product teams to get some of the research that is ready for product application to allow them to build interesting features and products. And [we’ll be] doing kind of new things that Google doesn’t currently have products in but are sort of interesting applications of ML, like the chip design work we’ve been doing.
In my opinion, we will see much more multi-task learning and multimodal learning, at larger scales than previously attempted. I think that will be quite interesting.
I also think there will be a continued trend toward more capable on-device models, on consumer devices such as phones, that work more effectively.
Work related to AI principles is obviously important. Google is a large enough research organization that we are pursuing many different directions, so it is hard to single out just one.
In general, though, we will keep advancing the state of the art and doing fundamental research to improve our capabilities in many important areas, such as NLP, language models, vision, and multimodal work.
At the same time, we will collaborate with our colleagues and product teams to bring them research that is ready for product application, so they can build interesting features and products.
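The multi-task setup Dean predicts can be pictured as a shared representation feeding several task-specific heads, trained on a weighted sum of per-task losses. A minimal hypothetical sketch in pure Python (real systems would use a framework, learned weights, and far larger networks; all names and numbers here are illustrative):

```python
def linear(x, w):
    """One fully connected layer, no bias: y[j] = sum_i x[i] * w[i][j]."""
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w)]

def relu(v):
    return [max(0.0, a) for a in v]

def multitask_forward(x, shared_w, head_ws):
    """Shared trunk -> one output per task-specific head."""
    h = relu(linear(x, shared_w))           # representation shared by all tasks
    return [linear(h, w) for w in head_ws]  # each head reads the same features

def weighted_loss(outputs, targets, weights):
    """Total objective = sum_t w_t * MSE_t, the usual multi-task combination."""
    total = 0.0
    for out, tgt, w in zip(outputs, targets, weights):
        mse = sum((o - t) ** 2 for o, t in zip(out, tgt)) / len(out)
        total += w * mse
    return total

x = [1.0, 2.0]
shared_w = [[0.5, -0.5], [0.25, 0.75]]  # 2 inputs -> 2 shared hidden units
heads = [
    [[1.0], [0.0]],  # task A head: hidden -> 1 output
    [[0.0], [1.0]],  # task B head: hidden -> 1 output
]
out_a, out_b = multitask_forward(x, shared_w, heads)
loss = weighted_loss([out_a, out_b], targets=[[2.0], [0.0]], weights=[1.0, 0.5])
```

Because both heads backpropagate through the same trunk in a real training loop, each task regularizes the representation the other uses; scaling that sharing across many tasks and modalities is the trend Dean expects to accelerate.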
Link to the original English interview: