April 2018 marks the 50th anniversary of Stanley Kubrick’s seminal film 2001: A Space Odyssey. One of its central characters is a seemingly sentient artificial intelligence named HAL 9000, who appears to possess human-like intelligence and emotional capabilities.

HAL is depicted as what we in the industry would refer to as Strong AI or Artificial General Intelligence (AGI). This means that he appears to be as conscious and capable of abstract reasoning, empathy, and emotional experience as a human. This class of AI doesn’t currently exist, although some are hopeful that it may emerge in our lifetimes.

Most of the current work being done in AI is in the domain of Weak AI or Narrow AI. This class of AI is only able to perform specific tasks in a specific environment. These applications have become ubiquitous, and affect many aspects of our daily lives, from the content we consume to the product recommendations we receive.

Although many of HAL’s abilities were pure science fiction in 1968, it seems an appropriate time to ask: what can we build today?

Playing Chess

One of the events that kicked off this generation’s fascination with AI came back in 1997, when IBM’s Deep Blue defeated Garry Kasparov, arguably the best chess player in history.

However, we can’t really say that Deep Blue “learned” how to play chess. Rather, it was endowed with a large amount of human-curated chess knowledge and then given the computationally intensive task of searching for the best move at any given time.

Fast forward to last year, when AlphaZero defeated Stockfish. The most incredible development here isn’t just that AlphaZero defeated one of the best chess programs in the world, but that it did so without any of the prior knowledge Deep Blue had access to.

AlphaZero taught itself how to play chess from scratch using a technique called reinforcement learning, in which the algorithm receives rewards or punishments based on the success of its actions. The only prior knowledge programmed into AlphaZero was the rules of the game and the maximum number of moves that could be made per game. It is clear that modern AI can play chess. The only question left is whether it knows what chess actually is.
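To make the reward-driven idea concrete, here is a minimal tabular Q-learning sketch on a toy “corridor” world of my own invention (AlphaZero’s actual method is far more sophisticated, combining deep neural networks with Monte Carlo tree search, but the core loop is the same: act, observe a reward, update your estimates):

```python
import random

def q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9,
               epsilon=0.1, seed=0):
    """Toy corridor: start at state 0, actions are move left (0) or
    right (1); reaching the last state earns reward +1, every other
    step earns 0. The agent knows only the rules and the reward."""
    rng = random.Random(seed)
    goal = n_states - 1
    # Q[state][action] -> estimated long-term reward
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # update driven purely by the reward signal,
            # with no human-curated domain knowledge
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    # greedy policy read off the learned Q-table
    return [0 if Q[s][0] > Q[s][1] else 1 for s in range(n_states)]

policy = q_learning()
```

After training, the greedy policy moves right in every non-goal state, having discovered that behavior from rewards alone, which is the essence of how a system can “teach itself” a game.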

Can we build this part of HAL today? Basically.

Natural Language

HAL was able to hold incredibly human-like conversation in real time. This involves three main components: speech comprehension, language understanding, and speech generation.

With the resurgent popularity of neural networks and deep learning, the AI community has made massive progress in these areas. We are approaching human parity in all of these skills, but there is still a lot of work to be done before we can build a program that converses as naturally as HAL.

In speech comprehension and language understanding, benchmarking studies are often performed on isolated datasets, and algorithms perform poorly when applied to different datasets or to the real world. Custom solutions still provide the best results. Humans remain superior at capturing the nuance, content, and complexity of natural language, and there’s no indication that computers actually understand language at a higher level of abstraction. Those interested in the philosophy behind this should check out the Chinese Room thought experiment.

Speech generation, on the other hand, is becoming strikingly good. Recent innovations in voice cloning have produced programs that can replicate a human speaker’s voice to the point of near indistinguishability.

Can we build this part of HAL today? Not yet, but results are promising.


Reading Lips

In what is one of the most famous scenes in the movie, co-pilots Dave Bowman and Frank Poole isolate themselves in a space pod to hold a private discussion. Even though HAL is unable to hear them, he catches the drift of their conversation by reading their lips.

Although researchers at Oxford have developed a system for AI-powered lip-reading that can outperform humans on certain datasets, it comes with severe limitations. The algorithm was trained on BBC news clips with subtitles aligned with the lip movements of the speakers. This means that all the videos featured people who were facing forward in well-lit conditions and speaking in structured sentences.

Since the algorithm was trained exclusively on news clips, it would be less accurate at identifying phrases outside of the dataset. We are still far from being able to replicate the scene in the movie, where the lip-reading was performed from the side, and in a dimly lit room.

Can we build this part of HAL today? We have good results but are still far off.


Emotions

Throughout the film, HAL gives the impression on multiple occasions that he can both read and experience emotions. When HAL refuses to let Dave back into the ship, he says “I can see you’re upset about this”, and later, when Dave is about to shut down his cognitive functions, HAL pleads “Dave, I’m afraid”. Although it is debatable whether HAL actually experienced these emotions, these qualities allow the audience to empathize with him as if he were truly alive.

Building emotion into an AI is such a large undertaking that some are skeptical it can ever be done, but nonetheless there are many groups working to achieve this goal. Startups like Affectiva are working on detecting emotions from videos of faces, and Huawei is working to integrate emotional understanding into its smartphone assistants. Many companies are building chatbot applications that mimic emotional responses. But at this point in time, we are nowhere close to building anything that acts believably human, and even farther from creating something that genuinely feels emotion.

Can we build this part of HAL today? No. No, we cannot.

Rest assured, HAL is still a thing of the future. At least for now!