When it comes to AI conferences, NeurIPS is the highlight of the year. It’s not just the swag, Geoffrey Hinton sightings, and great demos – the real magic happens at the workshops, talks, and poster presentations. This is what happens when you mix millennia worth of combined education, experience, and learning from pioneering AI scholars.

The ideas that circulated in the Palais des Congrès, and well beyond its walls, will have a strong and direct impact on how AI develops throughout the coming year.

Our scientists spent well over 200 combined hours at the conference over the course of the week. I wanted to get some of their insights into what they saw at this year’s event.

Neural Network Compression Techniques

Rafik Gouiaa

There were many interesting ideas throughout the week. I was particularly interested in advancements in neural network compression techniques, such as pruning and quantization, with a couple of neat workshops and papers on these ideas.

The basic idea of pruning is to remove weights from the neural network (and, by extension, some feature maps). With quantization, instead of storing weights as full-precision floats (typically 32 or 64 bits each), we use lower-precision representations – say 8 or 16 bits, or even binary. This reduces memory storage by a significant amount. Combined, these compression ideas that were shared at NeurIPS present interesting means of reducing computational requirements.
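To make the two ideas concrete, here is a minimal sketch (my own illustrative example, not code from the talks or our paper) of magnitude-based pruning and uniform 8-bit quantization applied to a single weight matrix, using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)  # one dense layer

# Magnitude pruning: zero out the 80% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Uniform 8-bit quantization: map floats to int8 with a single scale factor.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)   # 1 byte per weight
dequantized = quantized.astype(np.float32) * scale     # used at inference time

sparsity = (pruned == 0).mean()
error = np.abs(dequantized - pruned).max()
print(f"sparsity: {sparsity:.0%}, max quantization error: {error:.4f}")
print(f"memory: {weights.nbytes} bytes -> {quantized.nbytes} bytes (dense int8)")
```

In practice, structured pruning removes whole filters or feature maps rather than scattered individual weights, which is what turns the nominal compression into real compute savings on standard hardware.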

In May, we published a paper on pruning that distinguishes between class-specific and generic feature maps. The pruning and quantization ideas from NeurIPS build on this line of work and further help to reduce computational requirements.

Pruning at a Glance: A structured Class-Blind Pruning Technique for Model Compression

A Quantization-Friendly Separable Convolution Architecture for MobileNets

The State of Quantum Computing

Zhi Chen

While still very much in its infancy, quantum computing was exciting because of its potential impact on computational speed – for certain problems it could be much faster than any CPU or GPU. It’s still immature, though. I got to speak to people from IBM who are working on this, and today’s quantum computing is essentially a “toy model” able to work with around 32 binary variables (in binary quadratic form).

With time, quantum computing will play a major role because of its processing speeds, making it a very interesting space to watch as it relates to AI. IBM has demonstrated that quantum computing can outperform classical processors on certain tasks, but there’s a way to go before it’s a practical reality. It’s exciting to watch these developments, to say the least.

Have a look at Qiskit, an open-source quantum computing framework.
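To give a small taste of what working with Qiskit looks like (my own minimal sketch, not something shown at the conference), here is a two-qubit Bell-state circuit; the exact simulator imports vary by Qiskit version, so only circuit construction is shown:

```python
from qiskit import QuantumCircuit

# Build a two-qubit Bell-state circuit: H on qubit 0, then CNOT(0 -> 1).
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

print(qc.draw())  # ASCII diagram of the circuit

# To sample measurement outcomes, run the circuit on a simulator backend
# (e.g. Aer's qasm simulator); the import path depends on your Qiskit version.
```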

The BERT Language Model

Yanping Lu

There were interesting conversations around the BERT language model in Natural Language Processing (NLP). We’ve been aware of it for a while and have even implemented the core part of its training strategy (the Transformer) in one of our projects. It was interesting to hear other researchers’ thoughts on it and how it is shaping a lot of NLP work.

BERT addresses one of the biggest challenges we face in NLP: we usually have very small amounts of labeled data, yet deep learning needs a lot of labeled data to perform well. The language model that was presented helps close that gap.

BERT is pre-trained in an unsupervised fashion on large plain-text corpora, which presents a very good opportunity to boost the performance of deep learning models on many NLP tasks. It uses a masked language model to learn the contextual relationships of words in a sentence, and next-sentence prediction to model the relationship between two sentences. Ultimately, it was interesting to hear about this and discuss the details with the BERT authors and conference-goers, as it presents a highly optimized model and a strong direction for NLP.

More on the BERT Language Model
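To illustrate the masked-language-model idea, here is a minimal sketch using the Hugging Face transformers library (my own illustrative example, not code discussed at the conference): a pre-trained BERT is asked to fill in a masked word from its context.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Masked language modelling: hide a word and let BERT predict it from context.
text = "The conference was held in [MASK] this year."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and take the most likely token for it.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```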

What Bodies Think (and Regeneration Instructions)

Younes Zerouali

There was a lot of talk about the “What Bodies Think” presentation; it was probably the most widely discussed talk of the whole conference. It wasn’t even given by a typical AI researcher, but by Professor Michael Levin, a biologist. The talk was about his work on a particular group of flatworms, which are remarkable because they can regenerate. Using AI methodology, he was able to map out the regeneration process and ultimately modify and control it.

What is fascinating is that the regeneration instructions are encoded in the electric fields across the body, and these can predict the type of regenerated tissue needed after injury. It’s a brand new application of machine learning that could lead to the decoding of patterns in the electrical fields of animals and humans. To caveat this, though, its application to human tissue is at the moment much less clear, given the technical and important ethical challenges to tackle.

Meta-Learning

Nuri Hurtado

One of the most interesting topics for me was meta-learning, or models learning to learn – that is, building models that learn how to learn across tasks. Lise Getoor explained in her workshop that there’s an opportunity for meta-learning methods that can mix probabilistic and logical inference, and data- and knowledge-driven modelling with meta-modelling for meta-learning. All of these topics represent the present and future of ML, and marrying them presents an incredible opportunity for optimization.
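To give a feel for the “learning to learn” loop, here is a toy MAML-style sketch in PyTorch on sine-wave regression tasks (my own illustrative example; the workshop itself focused on combining probabilistic and logical inference rather than this particular algorithm):

```python
import math
import torch

# Toy meta-learning (MAML-style) sketch: the outer loop learns an initialisation
# that adapts to a new sine-wave task after a single inner gradient step.

def sample_task():
    amplitude = torch.rand(1) * 4.0 + 1.0
    phase = torch.rand(1) * math.pi
    return lambda x: amplitude * torch.sin(x + phase)

def forward(params, x):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

params = [
    (torch.randn(1, 40) * 0.1).requires_grad_(),
    torch.zeros(40, requires_grad=True),
    (torch.randn(40, 1) * 0.1).requires_grad_(),
    torch.zeros(1, requires_grad=True),
]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    task = sample_task()
    x_support = torch.rand(10, 1) * 10 - 5
    x_query = torch.rand(10, 1) * 10 - 5

    # Inner loop: one gradient step on the support set (keep the graph).
    support_loss = ((forward(params, x_support) - task(x_support)) ** 2).mean()
    grads = torch.autograd.grad(support_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]

    # Outer loop: update the shared initialisation via the adapted parameters.
    query_loss = ((forward(adapted, x_query) - task(x_query)) ** 2).mean()
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()

    if step % 200 == 0:
        print(f"step {step}: query loss {query_loss.item():.3f}")
```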

Image-Based Reasoning and Domain-Specific Reasoning

Mehrdad Valipour

The work on reasoning presented at NeurIPS was particularly exciting. A few papers were presented in the context of image-based question answering: by taking semantics from the image and referencing the text, the models were able to answer relevant questions. It’s early stage, the data is synthetic, and it requires supervised training, but the ability to mimic reasoning over an image was quite interesting.
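As a rough sketch of how these models combine the two modalities (my own toy PyTorch example, not one of the presented architectures), an image encoder and a question encoder are fused before predicting an answer from a fixed vocabulary:

```python
import torch
import torch.nn as nn

class ToyVQA(nn.Module):
    """Minimal image-based question answering sketch: fuse image features
    with a question encoding, then classify over a small answer vocabulary."""
    def __init__(self, vocab_size=1000, num_answers=50):
        super().__init__()
        self.cnn = nn.Sequential(  # stand-in for a pretrained image encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128))
        self.embed = nn.Embedding(vocab_size, 64)
        self.rnn = nn.LSTM(64, 128, batch_first=True)
        self.classifier = nn.Linear(128 * 2, num_answers)

    def forward(self, image, question_ids):
        img_feat = self.cnn(image)                       # (B, 128)
        _, (h, _) = self.rnn(self.embed(question_ids))   # h: (1, B, 128)
        fused = torch.cat([img_feat, h[-1]], dim=-1)     # simple fusion
        return self.classifier(fused)                    # answer logits

model = ToyVQA()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 50])
```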

On a bit of a tangent, the conversational AI work on domain-specific chatbots (from groups like Facebook) was interesting. Where it’s going, chatbots will have the ability to hold longer and more complex conversations, build better datasets, and make better use of previous conversations to ensure greater fluidity of communication. Previously, getting the context of the conversation was challenging, but these developments are important pieces of building better conversations.

Making Good

With all the AI talent in town, this was an opportunity to set a plan for the adoption and scaling of AI in an ethical way. There was an entire day-long event on Saturday on “AI for Social Good”. It covered a wide range of topics, from health to education, agriculture to economic inequality, social welfare to justice, and nearly everything in between. The broad focus was on the UN’s Sustainable Development Goals, and ultimately how we can use this technology to change the world for the positive.

Framing these conversations against the impact these technologies can have is a must. It was clear all over the floor, and in the conversations that followed, that we must continue to implement AI for good. Whether it’s meta-learning or quantum computing, model compression or the bio-regeneration work, we must keep pursuing AI within an ethical framework. This way, if or when we hit the technological singularity, it will be for the better.

We are hiring across the board. Come join us!