MICROSOFT RESEARCH LIMITED
- Project, 2010-2010. Funder: UKRI. Project Code: EP/H043055/1. Funder Contribution: 31,980 GBP. Partners: University of Edinburgh, MICROSOFT RESEARCH LIMITED
FLoC (the Federated Logic Conference) is a quadrennial two-week event that brings together eight top conferences applying methods of logic in computer science, along with about 60 workshops. The area is of great importance to computing research: logic provides computer science with both a unifying foundational framework and a tool for modelling computing systems; it has been called "the calculus of computer science," and has played a crucial role in areas as diverse as artificial intelligence, computational complexity, distributed computing, database systems, hardware design, programming languages, and software engineering. The research of just over 20% of Turing Award winners has been on the application of logic to computer science; in the UK, five of the current eight UKCRC grand challenges are in FLoC areas. The proposal is to enable us to invite top-level researchers to deliver plenaries and tutorials at FLoC, to make FLoC particularly attractive to students, and to make it easier for researchers from the UK and elsewhere to attend FLoC.
- Project, 2010-2011. Funder: UKRI. Project Code: EP/H017143/1. Funder Contribution: 95,873 GBP. Partners: University of Lincoln, MICROSOFT RESEARCH LIMITED
Seabird populations are a valuable and accessible indicator of marine health: population changes have been linked with fish stock levels, climate change, and pollution. However, the most recent reports indicate that many UK seabird colonies are in decline and suffering from low breeding rates. The mechanisms causing these declines are not well understood, though it has been suggested that low breeding returns in a population of Common Guillemots may be related to increases in attacks on chicks while parents spend more time foraging for food. Identification and interpretation of these types of behavioural artefacts is a key factor in understanding the development of the ecosystems in which these populations live, and in securing the future health and sustainability of coastal marine environments. This project will pilot the development of computer vision techniques that support automated surveillance of nesting seabirds, and will collect behavioural data on a scale not currently available to ecology researchers. The Computational Ecology and Environmental Science group (CEES) at Microsoft Research, Cambridge, UK is currently working with ornithologists from the Evolutionary Biology and Behavioural Ecology group at Sheffield University on a programme to monitor a population of Common Guillemots on Skomer Island (west Wales). Currently, manual inspection is used to estimate the size of the population. However, this process is highly laborious, and it is not feasible to gather more detailed data about individual birds; this places severe limitations on further analysis of this population. This project will be conducted in collaboration with CEES, and will develop computer vision algorithms that analyse video data of a Guillemot cliff nesting area on Skomer and automatically determine the amount of time that parent birds spend at their nests.
It is not feasible to extract this data using manual methods, so this new data will allow CEES to investigate the relationship between chick survival and nest attendance. There is little existing work using computer vision to monitor wildlife, so this project will engage with the specific technical problems of automated visual surveillance and image processing in natural environments. Whilst the project is focussed primarily on the Skomer Guillemots, the proposed techniques could readily be deployed for monitoring other seabird species, and could also support other applications of computer vision.
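The core measurement described above - how long parent birds spend at the nest - can be approximated by comparing each video frame's nest region against an empty-nest reference frame. The sketch below is purely illustrative: the function name, region-of-interest layout, and threshold are assumptions for exposition, not the project's actual algorithm.

```python
def nest_attendance(frames, roi, threshold=30.0, fps=1.0):
    """Estimate seconds a bird is present at a nest region of interest (ROI)
    by comparing each frame's ROI against an empty-nest reference frame.

    frames: list of 2D greyscale images (lists of rows of pixel values);
            frame 0 is assumed to show the nest empty (an assumption).
    roi:    (y0, y1, x0, x1) bounds of the nest within each frame.
    """
    y0, y1, x0, x1 = roi

    def roi_pixels(frame):
        return [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]

    reference = roi_pixels(frames[0])        # empty-nest appearance
    seconds = 0.0
    for frame in frames[1:]:
        patch = roi_pixels(frame)
        # mean absolute difference from the empty-nest reference
        diff = sum(abs(p - r) for p, r in zip(patch, reference)) / len(patch)
        if diff > threshold:                 # large change => bird present
            seconds += 1.0 / fps
    return seconds
```

A real deployment would need to cope with lighting changes, camera shake, and occlusion, which is precisely where the project's computer vision research would come in; this sketch only shows the shape of the attendance measurement.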
- Project, 2020-2023. Funder: UKRI. Project Code: EP/V000365/1. Funder Contribution: 879,241 GBP. Partners: Imperial College London, MICROSOFT RESEARCH LIMITED
Programming and deployment models for cloud native applications have shifted from virtual machines (VMs), to container-based microservices, and now to serverless function-as-a-service (FaaS) applications, yet security concerns for cloud native applications remain. Tenants must trust bespoke and opaque software security mechanisms in large cloud stacks; cloud providers must protect themselves from untrusted tenant code with heavyweight mechanisms. A key open research challenge is therefore how to design appropriate isolation mechanisms that can be used to compartmentalise cloud native applications and also shield them from the rest of a complex, untrusted cloud software stack. We believe that hardware-based capabilities, as offered by Arm CHERI hardware, can act as a building block for lightweight yet principled isolation abstractions, and can be used to compartmentalise the full cloud stack, including cloud native applications. By leveraging hardware capabilities for isolation, it becomes possible to give unprivileged userspace code strong guarantees about isolation and about the impact of the rest of the untrusted cloud stack. The CloudCAP project will conduct research at the intersection of systems and programming languages. Its overall goal is to investigate and devise new abstractions and mechanisms for capability-based hardware to support flexible, lightweight and scalable compartmentalisation as part of future cloud stacks and cloud native applications. The project will result in capability-based cloud compartments, a new abstraction that can express policies about the confidentiality and integrity of data and computation, both within and across the components of a cloud stack and cloud native applications.
A fundamental contribution of CloudCAP will be that, through CHERI's hardware capability support, cloud compartments become practical: they will be implementable efficiently and compatible with existing cloud stacks and programming language runtimes.
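CHERI capabilities are enforced in hardware, but the idea they embody - a pointer that carries bounds and permissions, is checked on every access, and can only ever be narrowed - can be illustrated with a small software sketch. All class and method names below are hypothetical teaching devices, not CHERI's actual API.

```python
class Capability:
    """Software sketch of a CHERI-style capability: an address range plus a
    permission set, checked on every access (done by the hardware on CHERI)."""

    def __init__(self, memory, base, length, perms=("load", "store")):
        self.memory, self.base, self.length = memory, base, length
        self.perms = frozenset(perms)

    def _check(self, offset, perm):
        if perm not in self.perms:
            raise PermissionError(f"capability lacks {perm!r} permission")
        if not 0 <= offset < self.length:
            raise IndexError("capability bounds violated")

    def load(self, offset):
        self._check(offset, "load")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "store")
        self.memory[self.base + offset] = value

    def restrict(self, base_off, length, perms=None):
        """Derive a narrower capability. Monotonicity: the derived capability
        can never exceed the bounds or permissions of its parent."""
        if base_off < 0 or base_off + length > self.length:
            raise ValueError("cannot widen bounds")
        new_perms = self.perms if perms is None else self.perms & frozenset(perms)
        return Capability(self.memory, self.base + base_off, length, new_perms)
```

A "cloud compartment" in this picture would be handed only restricted capabilities to the data it may touch; any stray access outside those bounds faults immediately, rather than silently corrupting another tenant's state.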
- Project, 2019-2020. Funder: UKRI. Project Code: EP/R033064/1. Funder Contribution: 429,478 GBP. Partners: University of Edinburgh, MICROSOFT RESEARCH LIMITED
Individuals are increasingly reliant on digital applications and services to store photos, documents, notes and other valued personal data. They are also accustomed to - and tacitly accept as a hidden cost of using otherwise 'free' services - these applications amassing activity data and metadata from which companies derive significant business value. For example, Facebook makes much of its £22bn yearly revenue by precisely targeting advertisements to users, deciphering their unique preferences from their likes, tags, contacts, updates, photos, travel patterns etc. (some accessed through permissions granted to Facebook via other apps) [BBC]. There is a highly lucrative, if shadowy, traffic in users' data: data brokering companies such as Acxiom and Epsilon compile thorough dossiers on people's physical and mental health conditions, sexual orientation, personal vices, and vulnerabilities to aid companies in identifying likely consumers [CBS,SCH]. Meanwhile, there are no corresponding tools accessible to individuals for learning about themselves through their personal data. In recent years, a growing literacy has developed around data as a medium for generating information and key insights. This is represented in the Quantified Self movement (see, e.g. http://feltron.com), with individuals self-tracking their patterns of behaviour, physiological responses, productivity, correspondence etc. with a view toward enabling personal reflection and gaining greater self-knowledge [LI]. Wearable activity trackers have been appropriated by some for self-diagnostic purposes, e.g. finding correlations between activities and symptoms to make informed changes that improve personal wellbeing [ROO].
There is untapped potential in applying this sensibility toward broader and deeper personal sense-making by drawing connections between the full diversity of one's personal data currently siloed in various services and applications - from the wide array of web services, to mobile applications, to wearable and home IoT devices. Personal Information Management (PIM) is a growing ICT sector with an estimated market worth of £16.5bn [NES]. Focusing on four major activities - keeping, finding, organizing and maintaining - PIM offers valuable insights into how to develop and sustain practices for effectively managing one's own data [KLI]. A particular challenge in developing PIM solutions is the individuality of lay data management techniques and strategies, which map onto people's individual strengths and familiar, established practices; in short, individuals thrive when they are able to develop strategies that work for them and for the particular goals they have defined. Given that many services ostensibly offer information management to users (albeit with pre-set UX constraints), an especially interesting frontier for extending PIM research lies in lifting data out from the applications that currently manage them to support individualised, goal-oriented collection and management of personal data - and further, in offering techniques for managing across diverse data types (e.g. the minutiae of metadata, narrative/textual data, photographic data, activity data, etc.).
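Lifting data out of individual applications, as described above, presupposes some way of normalising differently-shaped records into a common form. A minimal sketch of such a cross-silo timeline follows; the source names, field names and adapter functions are invented for illustration and do not reflect any real export format.

```python
from datetime import datetime

def unify(sources):
    """Merge records from differently-shaped per-app exports into one
    chronological timeline of (timestamp, source, summary) events.

    sources: list of (name, records, adapt) triples, where adapt maps one
             raw record to a (datetime, summary_string) pair.
    """
    timeline = []
    for name, records, adapt in sources:
        for record in records:
            when, summary = adapt(record)   # per-source adapter
            timeline.append((when, name, summary))
    timeline.sort(key=lambda event: event[0])
    return timeline


# Hypothetical exports from two siloed services:
photos = [{"taken": "2019-05-02T09:00", "caption": "Skomer trip"}]
steps = [{"date": "2019-05-01", "count": 12000}]

sources = [
    ("photos", photos,
     lambda r: (datetime.fromisoformat(r["taken"]), r["caption"])),
    ("tracker", steps,
     lambda r: (datetime.fromisoformat(r["date"]), f"{r['count']} steps")),
]
timeline = unify(sources)
```

The adapter-per-source design keeps each application's quirks at the edge, so the sense-making layer only ever sees one uniform event shape.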
This project will fill several important gaps in our understanding of personal sense-making, including: 1) in contrast to the commercial ends of extracting, collecting and analysing people's personal data, understanding what kinds of self-knowledge would offer significant value to individuals, and how bridging personal data between applications and services might uniquely afford these personal insights; and 2) understanding how people can derive meaning from mixed data types and across applications, unbounded by the goal orientations of the individual applications or services they use to capture their personal data.
- Project, 2015-2023. Funder: UKRI. Project Code: EP/M020576/1. Funder Contribution: 2,027,640 GBP. Partners: Newcastle University, MICROSOFT RESEARCH LIMITED
The Cloud is an emerging technology that offers democratic access to computing power, data storage, software and services, often for a small pay-per-use cost. Like any new technology, the Cloud has potential for great good, but in the wrong hands it can facilitate criminal activity. Within this project we seek to understand the different types of crime that can happen in the Cloud, build systems that will allow the detection of this criminal behaviour, and enable the use of digital evidence to lead to successful prosecution of Cloud crime perpetrators. In order to achieve this goal we are forming a truly inter-disciplinary research centre leveraging the strengths of both Durham and Newcastle Universities, bringing together Durham's strengths in criminology, law and ethics with Newcastle's strengths in (computer) systems security, artificial intelligence, data mining and psychology. We are convinced that Cloud crime can only be detected and tackled by such a truly inter-disciplinary centre. Such a centre will actively create the research foundations for successful computational methods in crime detection combined with good user engagement, generating research that can cross disciplines, directly inform public policy, police and prosecution practices, and transform public understanding of Cloud crime. This will involve developing a true understanding of what crime can be conducted on the Cloud, facilitated through the development of cloud crime scripts defining the activities of a criminal act, which will aid discussion between the different disciplines and must be presentable in a format understandable by our key stakeholders: Cloud providers/users/developers, law enforcement agencies and the criminal justice system.
The detection of criminal activity in the cloud requires the integration of heterogeneous sensors and of aggregation and analysis techniques, where we draw upon existing expertise in cloud security assurance (Gross, IBM), host monitoring and anomaly detection with Ben-ware (McGough, Wall, DSTL), and fuzzy search on unstructured data, intrusion detection and analysis (Nifty, Yan). We propose combining this systems expertise with complementary techniques in artificial intelligence, including data mining (McGough), behavioural machine learning and anomaly detection (Ploetz), and hierarchical machine learning and knowledge extraction (Bacardit). This portfolio gives rise to multiple means of deriving and combining intelligence and of presenting bespoke visualisations, situational awareness, and grammar or language generation for the cloud crime scripts, allowing the centre to tailor the intelligence, and its presentation, to a given stakeholder's needs. We propose using additional human computation and crowdsourcing techniques to reduce the number of situations where the system incorrectly identifies a criminal act. The use of human computation and crowdsourcing will also allow us to hone the machine learning system, developing a suite of hybrid techniques that, together, will improve cloud crime detection while framing the results in such a way as to support subsequent crown prosecution processes. This latter achievement will require expertise in the disciplines of criminology, forensic sciences, law and ethics, and will require collaboration with police forces throughout the UK and with Action Fraud.
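The pipeline this paragraph describes - machine scoring of activity for anomalies, followed by human review to weed out false positives - might be sketched roughly as follows. The z-score detector and the verdict dictionary are illustrative assumptions, not the centre's actual methods.

```python
import statistics

def flag_anomalies(values, z_threshold=3.0):
    """Flag indices of observations whose z-score (distance from the mean
    in standard deviations) exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if stdev > 0 and abs(v - mean) / stdev > z_threshold]

def review(flagged, human_verdicts):
    """Keep only the machine-flagged events a human reviewer confirms,
    modelling the human-computation filtering step described above."""
    return [i for i in flagged if human_verdicts.get(i, False)]
```

In practice the confirmed/rejected verdicts would also be fed back as labels to retrain the detector, which is the "honing" of the machine learning system the proposal mentions.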
In addition, we will bring in relevant work around (i) forensic psychology (Oxburgh), which will deliver case-sensitive interview and investigative procedures for witnesses, victims and investigators; (ii) prosecution procedures, which will ensure that evidence going to court is not compromised by intelligence gathering methodologies; and (iii) prevention of underreporting of Cloud crime and improvement of public understanding and confidence.