
Turing AI Fellowship TEAMER: Teaching Machines To Reason Like Humans

Funder: UK Research and Innovation
Project code: EP/W002876/1
Funded under: EPSRC
Funder Contribution: 4,026,220 GBP
Status: Ongoing
Started: 30 Sep 2021
Ending: 29 Sep 2026

In recent years deep learning has brought a revolution in the area of artificial intelligence (AI), producing remarkable results in a variety of application domains including computer vision, natural language processing, speech recognition, robotics, and clinical decision making. Despite the success of deep learning over a wide spectrum of real-world tasks, there is no doubt that many of the problems that are really at the core of AI are far from being solved. Reasoning -- taking pieces of information, combining them, and using them to draw logical conclusions or derive new information -- is not a general-purpose capability of modern AI. Imagine an airplane passenger sitting in an exit row, studying the emergency guide, which is often a combination of images and text. Their brain combines visual and textual information in order to infer the intended message -- open the door in the unlikely event of an emergency. A computer system seeing the same document would first employ an image recognition model to scan the image; an Optical Character Recognition (OCR) system would read the text; and a third system would correlate the image and text to understand the complete picture. Although the fundamental principles of analyzing the world around us and the approach a machine takes to process complex information are both based on breaking down the data into its core elements, humans are instinctively better at correlating and integrating information from different modalities, and at re-using previously acquired experience and expertise to transfer it to radically different challenges and domains.
Today's neural networks fail disastrously when exposed to data outside the distribution they were trained on; they overly adhere to superficial and potentially misleading statistical associations instead of learning true causal relations; they are unable to reason on an abstract level, which makes it difficult to implement high-level cognitive functions; and they are essentially black boxes with respect to human understanding of their predictions. This fellowship aims to alleviate these deficiencies by developing a new class of neural network models which will demonstrate reasoning capabilities, a skill required to enhance many AI applications. Rather than relying on a monolithic network structure, we propose to assemble a network from a collection of more specialized modules, making use of an explicit, modular reasoning process which allows for differentiable training (with backpropagation) but without expert supervision of reasoning steps. We will develop a theoretical framework which characterizes what it means for neural network models to reason, design various reasoning modules, and showcase their practical importance in applications which understand requests and act on them, process and aggregate large amounts of data (e.g., from multiple modalities), make generalizations (e.g., robots cannot be pretrained on all possible scenarios they might encounter), deal with changing situations and causality, manifest creativity (e.g., in writing a story or a poem), co-ordinate various agents (e.g., in game playing), and are able to explain their predictions and decisions. The proposed Fellowship will have a transformative effect on AI theory and practice. It sets an ambitious agenda which unifies multiple strands of AI research, bridging the gap between the neural and symbolic views of AI and integrating their complementary strengths.
It will provide the means for developing a UK skill base in AI, and will have wide-ranging impact in academia, industry, the UK economy, and society, e.g., by embedding AI in many domains of daily life and rendering tools such as neural networks more explainable.
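The core idea above -- composing specialized modules into one pipeline trained end-to-end with backpropagation, with no supervision of the intermediate reasoning step -- can be illustrated with a deliberately tiny toy sketch. This is purely illustrative and is not the fellowship's actual model: the two "modules" are single scalar weights standing in for a perception module and a reasoning module, and the names used here are hypothetical.

```python
# Toy sketch: two "modules" (scalar weights a and b) are composed into one
# pipeline and trained end-to-end by gradient descent. Only the final output
# is supervised; the intermediate representation h receives no labels.

def train_modular_pipeline(steps=2000, lr=0.01):
    a, b = 0.5, 0.5  # parameters of module A (perception) and module B (reasoning)
    # Toy supervised task: learn the overall mapping y = 6x from (x, y) pairs.
    data = [(x, 6.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
    for _ in range(steps):
        for x, y in data:
            h = a * x        # module A produces an intermediate representation
            out = b * h      # module B maps it to the final answer
            err = out - y
            # Gradients of the squared error flow through the composition
            # (chain rule), updating both modules jointly.
            grad_b = 2 * err * h
            grad_a = 2 * err * b * x
            a -= lr * grad_a
            b -= lr * grad_b
    return a, b

a, b = train_modular_pipeline()
print(a * b)  # the composed pipeline recovers the overall mapping: a*b is close to 6
```

The point of the sketch is the training signal: neither module is told what its own output should be, yet end-to-end backpropagation drives the composition a*b toward the target mapping, mirroring how modular reasoning steps can be learned without expert supervision of each step.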
