2 Projects
Over the past decade, deep learning has brought a revolution in the area of artificial intelligence (AI), producing remarkable results in a variety of application domains including computer vision, natural language processing, speech recognition, robotics, and clinical decision making. Despite this success across a wide spectrum of real-world tasks, there is no doubt that many of the problems at the core of AI remain far from solved. Reasoning, namely taking pieces of information, combining them, and using them to draw logical conclusions or derive new information, is not a general-purpose capability of modern AI. Imagine an airplane passenger sitting in an exit row, studying the emergency guide, which is often a combination of images and text. Their brain combines visual and textual information to infer the intended message -- open the door in the unlikely event of an emergency. A computer system presented with the same document would first employ an image recognition model to scan the image; an Optical Character Recognition (OCR) system would read the text; and a third system would correlate the image and text to understand the complete picture. Although humans and machines alike analyze complex information by breaking it down into its core elements, humans are instinctively better at correlating and integrating information from different modalities, and at transferring previously acquired experience and expertise to radically different challenges and domains.
Today's neural networks fail disastrously when exposed to data outside the distribution they were trained on; they overly adhere to superficial and potentially misleading statistical associations instead of learning true causal relations; they are unable to reason at an abstract level, which makes it difficult to implement high-level cognitive functions; and they are essentially black boxes with respect to human understanding of their predictions. This Fellowship aims to alleviate these deficiencies by developing a new class of neural network models which will demonstrate reasoning capabilities, a skill required to enhance many AI applications. Rather than relying on a monolithic network structure, we propose to assemble a network from a collection of more specialized modules, making use of an explicit, modular reasoning process which allows for differentiable training (with backpropagation) without expert supervision of reasoning steps. We will develop a theoretical framework which characterizes what it means for neural network models to reason, design various reasoning modules, and showcase their practical importance in applications which understand requests and act on them, process and aggregate large amounts of data (e.g., from multiple modalities), make generalizations (e.g., robots cannot be pretrained on all possible scenarios they might encounter), deal with changing situations and causality, manifest creativity (e.g., in writing a story or a poem), coordinate multiple agents (e.g., in game playing), and are able to explain their predictions and decisions. The proposed Fellowship will have a transformative effect on AI theory and practice. It sets an ambitious agenda which unifies multiple strands of AI research, bridging the gap between the neural and symbolic views of AI and integrating their complementary strengths.
It will provide the means for developing a UK skill base in AI, and will have wide-ranging impact in academia, industry, the UK economy, and society, e.g., by embedding AI in many domains of daily life and rendering tools such as neural networks more explainable.
Our mission is to train the next generation of innovators in responsible, data-driven and knowledge-intensive human-in-the-loop AI systems. Our innovative, cohort-based training programme will deliver cohorts of highly trained PhD graduates with the skills to design and implement complex interactive AI pipelines solving societally important problems in responsible ways. While fully autonomous artificial intelligence dominates today's headlines in the form of self-driving cars and human-level game play, the key AI challenges of tomorrow are posed by the need for interactive, knowledge-intensive systems in which the human plays an essential role, be it as an end-user providing relevant case-specific knowledge or interrogating the system, an operator requiring crucial information to be presented in an intelligible form, a supervisor requiring confirmation that the system's performance remains within acceptable limits, or a regulator assessing to what extent the system operates according to exacting standards concerning transparency, accountability and fairness. Each of these examples demonstrates a need for specific and meaningful interaction between the AI system and humans. The examples also demonstrate the importance of knowledge for achieving human-level interaction, in addition to the data driving the machine learning aspects of the system. In close conversation with our industry partners, we thus identified Interactive Artificial Intelligence (IAI) as a core sub-discipline of AI where the need for, and deficit in, advanced AI skills is abundantly evident, while being homogeneous enough to have intellectual integrity and to be taught and researched within the context of a single CDT.
The most important aspects of the training programme are:
- Knowledge-Driven AI and Data-Driven AI are core components treated in a close symbiotic relationship: the former uses knowledge in processes such as reasoning, argumentation and dialogue, but in such a way that data is treated as a first-class citizen; the latter starts from data but emphasises knowledge-intensive forms of machine learning, such as relational learning, which take knowledge as an additional input.
- Human-AI Interaction is another core component addressing all human-in-the-loop aspects, overseen by a co-investigator from the human-computer interaction field.
- Responsible AI underpins not just the taught first year but the students' doctoral training throughout all four years, overseen by two dedicated co-investigators with backgrounds in IT law and industrial codes of practice.
Other skill requirements identified by stakeholders include: the ability to design and implement complete end-to-end systems; depth in selected AI-related subjects without sacrificing breadth; the ability to work in teams of people with diverse skill sets; and the capacity to act as "AI ambassadors" who are able to inspire but also to manage expectations through their in-depth understanding of the strengths and weaknesses of different AI techniques. The IAI training programme is designed to achieve this by strongly emphasising cohort-based training. Students will develop their projects and coursework within an innovative software environment which enables easy integration of their work with that of others. This virtual hub is complemented by a physical hub where all cohorts are colocated -- together, both hubs will strongly promote interaction within and between cohorts: e.g., projects can aim at improving or extending software produced by the previous cohort, so that senior students can be involved in mentoring their juniors.
In summary, the IAI training programme pulls together Bristol's unique and comprehensive strengths in doctoral training and AI to deliver highly trained AI innovators, equipping them with essential skills to deliver the interactive AI technology society requires to deal with current and future challenges.