Facebook UK

10 Projects
  • Funder: UKRI Project Code: EP/S013008/1
    Funder Contribution: 974,935 GBP
    Partners: University of London, Facebook UK

    While the traditional, deductive approach to logic begins with premisses and in step-by-step fashion applies proof rules to derive conclusions, the complementary reductive approach instead begins with a putative conclusion and searches for premisses sufficient for a legitimate derivation to exist by systematically reducing the space of possible proofs. Not only does this picture more closely resemble the way in which mathematicians actually prove theorems and, more generally, the way in which people solve problems using formal representations, it also encapsulates diverse applications of logic in computer science, such as the programming paradigm known as logic programming, the proof-search problem at the heart of AI and automated theorem proving, precondition generation in program verification, and more. It is also reflected at the level of truth-functional semantics --- the perspective on logic utilized for the purpose of model checking, and thus for verifying the correctness of industrial systems --- wherein the truth value of a formula is calculated from the truth values of its constituent parts.

    Despite the reductive viewpoint reflecting logic as it is actually used, and in stark contrast to deductive logic, a uniform mathematical foundation for reductive logic does not exist. Substantial background is provided by the work of Pym, Ritter, and Wallen, but this is essentially restricted to classical and intuitionistic logic and, even then, lacks an explicit theory of the computational processes involved. We believe coalgebra --- a unifying mathematical framework for computation, state-based systems, and decomposition, of which Silva is a leading contributor and exponent --- can be applied to this end. Deduction is essentially captured by inductive constructions, but reduction is captured through the coalgebraic technique of coinduction, which decomposes goals into subgoals (see the sketch below). Existing work shows that coalgebra generalizes truth-functional semantics and can represent basic aspects of search spaces. We will systematize and extend this work to logics in full generality and, by utilizing the coalgebraic approach to the modelling of computation, also capture the control procedures required for proof-search. The algebraic properties of coalgebra should ensure that all aspects of this modelling, including the definitions of logics, their search spaces, and their search procedures, are compositional.

    Beyond this advance on the state of the art in semantic approaches to proof-search, we can hope to use coalgebraic presentations of computation to achieve much more. By interfacing coalgebraic models of proof-search with coalgebraic models of, for example, probabilistic computation or programming languages, we can hope to give a clean, generic, and modular presentation of applications of the reductive-logic viewpoint as diverse as inductive logic programming and abduction-based Separation Logic tools such as Facebook's Infer. Abstracting the key features of such systems into a modular semantic framework can help with more than simply understanding how existing tools work and how they can be improved: such a framework can also guide the design and implementation of new tools. Thus, in tandem with our theoretical development, we will develop efficient, semantically driven automated reasoning support with wide application. In doing so we can hope to implement tools capable of deployment for a large range of reasoning problems, and to guide the design of theorem provers for specific logics.
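    To make the coalgebraic reading concrete, here is a minimal, hypothetical sketch, not the project's actual formalism: a search space is presented as a coalgebra sending each goal to the set of rule applications that could justify it, each yielding a list of subgoals, and proof-search is the unfolding of that coalgebra under some control procedure (here, naive depth-first search). All names (Goal, reductions, provable) and the toy logic of conjunctive goals over atoms are illustrative assumptions.

    ```haskell
    -- Hypothetical toy logic: goals are atoms or conjunctions.
    data Goal = Atom String | Goal :/\: Goal
      deriving (Eq, Show)

    -- The "coalgebra" of the search space: each goal maps to the set of
    -- rule applications that could justify it, each yielding its subgoals.
    -- A conjunction reduces to its two conjuncts; an atom is closed (no
    -- subgoals) exactly when it appears among the axioms.
    reductions :: [String] -> Goal -> [[Goal]]
    reductions axioms (Atom a)
      | a `elem` axioms = [[]]   -- axiom: no subgoals remain
      | otherwise       = []     -- stuck: no rule applies
    reductions _ (g1 :/\: g2) = [[g1, g2]]

    -- Proof-search by unfolding: a goal is provable if some reduction
    -- produces subgoals that are all provable. Depth-first traversal is
    -- one (naive) choice of control procedure among many.
    provable :: [String] -> Goal -> Bool
    provable axioms g = any (all (provable axioms)) (reductions axioms g)

    main :: IO ()
    main = print (provable ["p", "q"] (Atom "p" :/\: Atom "q"))  -- True
    ```

    In realistic logics the unfolding is an infinitely branching or infinitely deep tree, which is why coinduction, rather than induction, is the natural tool; the finite toy logic above merely makes the shape of the construction visible.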

  • Funder: UKRI Project Code: EP/P004172/1
    Funder Contribution: 326,972 GBP
    Partners: AU, QMUL, Facebook UK, Yale University, MICROSOFT RESEARCH LIMITED

    The recent work on System-Level Games provides a semantic framework for modelling low-level code interactions involving resources shared between a program and its environment. This project will apply the framework to derive compositional analysis techniques for software compilation and verification. Semantically, the project will produce a paradigmatic model for programs made of combinations of arbitrary low- and high-level code fragments. By directly examining the interaction traces of code, as produced by our model, we will extract trace-based analyses based on co-induction, as well as behavioural types governing game plays. Through trace analysis we will also produce syntax-independent compilation analyses of tamper-resistance and code linking. These techniques will be applied to the prototype language Verity and will deliver significant improvements to its existing GOS compiler, which is a particularly suitable test-bench because of its high degree of heterogeneity, both in terms of languages (functional vs. hardware) and platforms (CPU-based vs. FPGA).
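    The trace-based flavour of such analyses can be illustrated with a small, hypothetical example: treat game plays as sequences of question/answer moves and check a simple safety property directly on the trace. The property chosen here, well-bracketing (every answer closes the most recent pending question), is a standard condition in game semantics; the types Move and Play and the function wellBracketed are illustrative names, not the System-Level Games model itself.

    ```haskell
    -- Hypothetical representation of game plays as move sequences.
    data Move = Question String | Answer String
      deriving (Eq, Show)

    type Play = [Move]

    -- One simple "behavioural type": well-bracketing. We scan the play,
    -- keeping a stack of pending questions; each answer must close the
    -- question on top of the stack.
    wellBracketed :: Play -> Bool
    wellBracketed = go []
      where
        go _ [] = True
        go pending (Question q : rest) = go (q : pending) rest
        go (q : pending) (Answer a : rest)
          | a == q    = go pending rest
        go _ _ = False  -- answer with no matching pending question

    main :: IO ()
    main = do
      print (wellBracketed [Question "f", Question "g", Answer "g", Answer "f"])  -- True
      print (wellBracketed [Question "f", Answer "g"])                            -- False
    ```

    Richer behavioural types would constrain plays further; the point is only that such properties are checked directly on interaction traces, independently of the syntax of the code that produced them.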

  • Project: 2022 - 2025
    Funder: UKRI Project Code: EP/W025493/1
    Funder Contribution: 839,659 GBP
    Partners: Heriot-Watt University, Fix The Glitch, End Violence Against Women, Education Scotland, Alana AI, Facebook UK

    We address the timely topic of online gender-based violence (GBV): almost 1 in every 2 women and non-binary people (46%) reported experiencing online abuse since the beginning of COVID-19 (Glitch report, 2020). Our aim is to create 'equally safe' online spaces by prevention, intervention and support for online GBV through the development of advanced Machine Learning algorithms. In contrast to previous research in this area, our team will include experts on GBV and online harassment to ensure that we frame the problem in the way that is most helpful to victims/survivors of GBV. In other words, we not only focus on *how* to automatically detect online abuse, but also re-think *what* it is we need to detect, how we can *support* the victims, and how to *prevent* online GBV by promoting digital citizenship (i.e. prevention and intervention aimed at perpetrators and bystanders). Our methodology is based on the Scottish Government's Equally Safe strategy and implements a Responsible Innovation Approach in close collaboration with third-sector charities with a long-standing track record in this domain. Our proposal aims to create the following impacts:
    1. Create long-term technical solutions to support safer online spaces, including advanced abuse-detection tools, tools to automatically generate counter-narratives aimed at perpetrators and bystanders, and a chatbot providing proactive support to victims/survivors.
    2. Promote 'equitable' algorithms that reflect multiple perspectives/viewpoints and do not marginalise minority views.
    3. Increase digital literacy concerning the safe use of social media from an early age.

  • Funder: UKRI Project Code: EP/T007656/1
    Funder Contribution: 1,585,890 GBP
    Partners: Imperial College London, University of Southampton, Ogily Group UK, RAFC, AoC, Portsmouth College, PHE, Southampton Voluntary Services, Facebook UK, IBM Research...

    We urgently need proactive health support at the level of the general population: we have become, on average, an unhealthy nation. The new statistical norm is overweight to obese (60% of men and 49% of women). Co-related conditions, from heart disease to type II diabetes, cost the NHS £48Bn/year. Lack of sleep costs £40Bn. Stress costs £40Bn. 6% of our GDP goes to preventable "lifestyle conditions." Of the top 20 western nations, the UK ranks 18th or lower in QoL, Health, Wealth, Education and Democracy. Our productivity is 20% lower than the rest of the G7. While there is incredible optimism and investment in the potential benefits of ubiquitous, pervasive technology to help redress these conditions, digital health approaches to date have had low impact.

    This fellowship hypothesises that the lack of broad and sustained uptake of digital health technology is not a fault of the technology per se, but of the range of models that inform how these technologies are designed. The current state of the art in digital health tech (i) targets individuals, although health practices are significantly influenced by social contexts; (ii) assumes that, given the right data, we will make a rational decision to adopt a health practice, without taking into account how the rest of our bodies - from our gut to our nervous system - is involved in decision processes; and (iii) offers tools that can be antagonistic to, rather than supportive of, how the body works. For example, a "smart alarm" that still disrupts sleep, rather than finding ways to help us get sleep, is antagonistic to a physiology that requires certain amounts of sleep to stay healthy. While current digital health technologies can and do work for some of the people some of the time, they have not been sufficient to deliver health in the complex contexts in which the UK lives and works. We need to develop better models to inform health tech design.

    This fellowship proposes to develop and test Inbodied Interaction (the alignment of health tech with how the body optimally performs) as a foundation to deliver and sustain personal and social Health Resilience: the capacity for individuals and their groups to build health knowledge, skills and practice to recover from and redress health challenges, from stress at home to shift changes at work. In line with EPSRC's challenge to "transform community health" by enabling better "self-management," digital interactive technologies must be aligned with how we work as organic-physical-cognitive-social complex systems. In respect of that model of "self", the fellowship will innovate on three strands of inbodied interaction technology:
    1) Environment-Body Aligned: designing technology to support our physiology, from displays that help us maintain peripheral vision to stay more creative, to light use in VR lenses to improve cognitive performance.
    2) Experience-to-Practice Aligned: providing rapid access to the effects of better health experiences, and connecting these with personally effective means to maintain them.
    3) Group-to-Culture Aligned: supporting groups in identifying and building more health-resilient practices that work for their contexts.
    Thus "self-management" is transformed into our three-level model of how this "self" is empowered by health tech in various contexts to create, build and maintain "health." Through our co-design we will engage directly with hundreds of participants, and with thousands more citizens virtually through our nation-wide Citizen Scientist web trials. We also have regular engagement with our expert advisory team representing industry, policy, and a range of disciplines. The team is committed to helping translate our work from project to practice, and from policy to process, for transformational impact. By the fellowship's end, we will have new digital health technologies and validated models for those tools to deliver Health Resilience for a Healthy Nation, and so help #makeNormalBetter@scale, for all.

  • Funder: UKRI Project Code: EP/R034567/1
    Funder Contribution: 1,579,790 GBP
    Partners: AU, Imperial College London, University of Cambridge, INRIA, Google Inc, Facebook UK, IBM, Amazon Web Services (UK), University of Toronto, GCHQ...

    Modern society faces a fundamental problem: the reliability of the complex, evolving software systems on which it critically depends cannot be guaranteed by established, non-mathematical techniques such as informal prose specification and ad-hoc testing. Modern companies move fast, leaving little time for code analysis and testing; concurrent and distributed programs cannot be adequately assessed via traditional testing methods; users of mobile applications neglect to apply software fixes; and malicious users increasingly exploit programming errors, causing major security disruptions. Trustworthy, reliable software is becoming harder to achieve, whilst new business and cyber-security challenges make it of escalating importance.

    Developers cope with complexity using abstraction: the breaking up of systems into components and layers connected via software interfaces. These interfaces are described using specifications: for example, documentation in English; test suites of varying degrees of rigour; static typing embedded in programming languages; and formal specifications written in various logics. In computer science, despite widespread agreement on the importance of abstraction, specifications are often seen as an afterthought and a hindrance to software development, and are rarely justified. Formal specification as part of the industrial software design process is in its infancy.

    My over-arching research vision is to bring scientific, mathematical method to the specification and verification of modern software systems. A fundamental unifying theme of my current work is my unique emphasis on what it means for a formal specification to be appropriate for the task in hand, properly evaluated, and useful for real-world applications. Specifications should be validated, with proper evidence that they describe what they should; this validation can come in many forms, from formal verification through systematic testing to precise argumentation that a formal specification accurately captures an English standard. Specifications should be useful, identifying compositional building blocks that are intuitive and helpful to clients both now and in the future. Specifications should be just right, providing a clear logical boundary between implementations and client programs.

    VeTSpec has four related objectives, exploring different strengths of program specification, real-world program library specification, and mechanised language specification, in each case determining what it means for the specification to be appropriate, properly evaluated, and useful for real-world applications.

    Objective A: Tractable reasoning about concurrency and distribution is a long-standing, difficult problem. I will develop the fundamental theory for the verified specification of concurrent programs and distributed systems, focussing on safety properties for programs based on primitive atomic commands, safety properties for programs based on the more complex atomic transactions used in software transactional memory and distributed databases, and progress properties.

    Objective B: JavaScript is the most widespread dynamic language, used by 94.8% of websites. Its dynamic nature and complex semantics make it a difficult target for verified specification. I will develop logic-based analysis tools for the specification, verification and testing of JavaScript programs, intertwining theoretical results with properly engineered tool development.

    Objective C: The mechanised specification of real-world programming languages is well-established, but such specifications are difficult to maintain and their use is not fully explored. I will provide a maintainable mechanised specification of JavaScript, together with systematic test generation from this specification.

    Objective D: I will explore the fundamental, conceptual questions associated with the ambitious VeTSpec goal of bringing scientific, mathematical method to the specification of modern software systems.
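    As a toy illustration of the abstract's point that a specification should be validated, for instance by systematic testing, the following hypothetical sketch states an executable specification as a property and checks it with the QuickCheck library. The property (a law relating reverse and list concatenation) is purely illustrative and is not one of VeTSpec's case studies.

    ```haskell
    import Test.QuickCheck

    -- An executable specification: reversing a concatenation equals
    -- concatenating the reversals in the opposite order.
    prop_reverseAppend :: [Int] -> [Int] -> Bool
    prop_reverseAppend xs ys =
      reverse (xs ++ ys) == reverse ys ++ reverse xs

    main :: IO ()
    main = quickCheck prop_reverseAppend  -- checks 100 random cases
    ```

    Running main samples random inputs and, if a counterexample is found, shrinks it to a minimal failing case: one concrete sense in which a specification can be "properly evaluated" rather than merely written down.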