IBM (United Kingdom)
107 Projects
- Project . 2021 - 2025 . Funder: UKRI . Project Code: MR/T041862/1 . Funder Contribution: 1,106,090 GBP . Partners: IBM (United Kingdom)
The construction of high-fidelity digital models of complex physical phenomena, and more importantly their deployment as investigation tools for science and engineering, are some of the most critical undertakings of scientific computing today. Without computational models, the study of spatially irregular, multi-scale, or highly coupled, nonlinear physical systems would simply not be tractable. Even when computational models are available, however, tuning their physical and geometrical parameters (sometimes referred to as control variables) for optimal exploration and discovery is a colossal endeavour. In addition to the technological challenges inherent to massively parallel computation, the task is complicated by the scientific complexity of large-scale systems, where many degrees of freedom can team up and generate emergent, anomalous, resonant features that become increasingly pronounced as the model's fidelity is increased (e.g., in turbulent scenarios). These features may correspond to highly interesting system configurations, but they are often too short-lived or too isolated in the control space to be found using brute-force computation alone. Yet most computational surveys today are guided by essentially random guesses, educated at best by instinct. The potential for missed phenomenology is simply unquantifiable.
In many domains, anomalous solutions could describe life-threatening events such as extreme weather. A digital model of an industrial system may reveal, under special conditions, an anomalous response to the surrounding environment, which could lead to decreased efficiency, material fatigue, and structural failure. Precisely because of their singular and catastrophic nature, as well as their infrequency and short lifetimes, these configurations are also the hardest to predict. Any improvement in our capacity to locate where anomalous dynamics may unfold could therefore tremendously improve our ability to protect against extreme events. More fundamentally, establishing whether the set of equations implemented in a computational model is at all able to reproduce specific, exotic solutions (such as rare astronomical transients [1]) for certain configuration parameters can expose (or exclude) the manifestation of new physics, and shed light on the laws that govern our Universe.
Recently, the long-lived but sparse attempts [2] to instrument simulations with optimisation algorithms have grown into a mainstream effort. Current trends in Intelligent-Simulation orchestration stress the need for computational surveys to learn from previous runs, but they do not address the question of which information would be most valuable to extract. A theoretical formalism to classify the information processed by large computational models is simply absent. The main objective of this project is to develop a roadmap for the definition of such a formalism. The key question is how one can optimally learn from large computational models. This is a deep, overarching issue affecting experimental as well as computational science, and it has recently been proven to be NP-hard [3]. Correspondingly, the common approach to simulation data reduction is often pragmatic rather than formal: if solutions with specific properties (such as a certain aerodynamic drag coefficient) are sought, those properties are directly turned into objective functions, taking the control variables as input arguments.
This is reasonable when these properties depend only mildly on the input; in the case of anomalous solutions, however, this is often not the case, so one wonders whether more powerful predictors of a simulation's behaviour could be extracted from other, apparently unrelated information contained in the digital model. If so, exposing this information to the machine-learning algorithms could arguably lead to more efficient and exhaustive searches. The investigation of this possibility is the core task that this project aims to undertake.
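As a concrete illustration of the pragmatic approach described above, the sketch below wraps a toy stand-in simulation in an objective function over its control variables and hands it to an off-the-shelf black-box optimiser. The toy physics, the variable names, the bounds, and the solver choice are hypothetical placeholders, not part of the project; the point is only that everything the model computes besides the target property is discarded before the optimiser ever sees it.

```python
# A minimal sketch of the "pragmatic" approach described above: the sought
# property (here a stand-in drag coefficient) is exposed as an objective
# function of the control variables and handed to an off-the-shelf black-box
# optimiser. The toy solver, variable names and bounds are hypothetical
# placeholders, not part of the project.
import numpy as np
from scipy.optimize import differential_evolution

def run_simulation(control):
    """Stand-in for an expensive solver: returns several diagnostics."""
    angle, roughness = control
    drag = 0.3 + 0.05 * np.sin(3.0 * angle) + 0.2 * roughness**2
    energy = np.cos(angle) * (1.0 - roughness)   # computed but never optimised over
    return {"drag": drag, "energy": energy}

def objective(control):
    # Only the target property reaches the optimiser; every other quantity
    # computed by the model is discarded at this point.
    return run_simulation(control)["drag"]

result = differential_evolution(objective,
                                bounds=[(0.0, np.pi), (0.0, 1.0)],
                                maxiter=50, seed=0)
print("best control variables:", result.x, "  drag:", result.fun)
```

In the project's terms, the open question is whether the information discarded inside the objective function (here the unused "energy" diagnostic) could yield more powerful predictors of where anomalous behaviour unfolds, and hence guide a more efficient and exhaustive search.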
- Project . 2019 - 2022 . Funder: UKRI . Project Code: 105145 . Funder Contribution: 997,792 GBP . Partners: IBM (United Kingdom)
This project brings innovative and disruptive technologies together from IBM, Rothamsted Research, The University of Sheffield, 2Excel, STFC-Hartree Centre and Syngenta to transform the crop management market, with blackgrass as its first use case. Blackgrass is a weed costing farmers more than £0.58bn/year; however, data, management strategies and expertise are fragmented across the agronomy sector, holding back UK production and competitiveness. This project aims to end this fragmentation by provisioning an artificial-intelligence (AI) and Big Data platform in which all data and expertise are collated, allowing researchers to create new evidence-based models and offering easy exploitation routes. Our newly generated blackgrass forecasting models will be served from this platform through targeted apps or integration into existing offerings from agri-service providers. The platform will be built in an open, innovative way to enable collaboration and innovation and to ease the route to market for generated insights. Such disruptive, data-driven approaches will empower the UK agriculture sector to become a world leader in smart agriculture.
- Project . 2014 - 2016 . Funder: UKRI . Project Code: EP/L024624/1 . Funder Contribution: 94,734 GBP . Partners: University of Edinburgh, IBM (United Kingdom)
Most software systems evolve over time to adapt to changing environments, needs, new concepts, new technologies and standards. Modification to the software is inevitable in software evolution and may take the form of growth in the number of functions, components and interfaces, or alteration of existing modules. Commonly seen examples are changes or upgrades to operating systems, browsers and communication software. Software maintenance to understand, implement and validate changes is a very expensive activity, costing billions of dollars each year in the software industry.
In this proposal, we focus on the problem of validating changes and their effects. This is typically carried out using regression testing, defined as "selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements". Regression testing is an expensive and time-consuming process and accounts for the majority of maintenance costs. Balancing confidence in the software's correctness (gained from extensive and frequent regression testing) against the cost of regression test effort is one of the major challenges in software maintenance. Estimates of regression test cost will help developers and managers achieve this balance. Accurate regression test cost estimates are crucial for planning project schedules, allocating resources, ensuring software reliability, and monitoring and controlling projects. Cost models estimating maintenance effort and development effort have been developed in the past. Nevertheless, these models cannot be used directly, since crucial elements for predicting test effort, such as testing requirements, test cases and test quality metrics, are for the most part ignored.
Our goal in this proposal is to define a model that accurately estimates regression test cost for evolving software, taking into account the effects of changes on software behaviour, testing requirements, industry- and process-specific information, and the quality metrics to be satisfied. By the end of this project, we will have tools that, given the software changes and quality requirements as inputs, can predict the cost of validating the changes. We will evaluate the test effort estimates on large open-source software evolution repositories such as Debian GNU/Linux distribution releases and OpenSSH. We will also collaborate with IBM and evaluate our estimates on their software. The prototype analysis tools that we develop during the course of the project will be made freely available in open-source form. We will package and ship our analysis tools as part of well-established distributions such as Debian. This will allow numerous developers of large-scale software (including our project partner) to directly apply our techniques to their problems, improving their development and maintenance processes.
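Purely as an illustration of the kind of estimator the proposal describes, the sketch below fits a simple linear cost model to a handful of invented historical releases and then predicts validation effort for a new change set. The feature set, the figures, and the linear form are assumptions made for the example, not the project's actual model.

```python
# A purely illustrative sketch of a regression-test cost estimator of the kind
# the proposal describes: a simple linear model is fitted to invented
# historical releases and then predicts validation effort for a new change
# set. The features, figures and linear form are assumptions for the example,
# not the project's actual model.
import numpy as np

# Hypothetical history: [changed modules, affected test cases, required coverage]
features = np.array([
    [12.0, 340.0, 0.80],
    [ 3.0,  45.0, 0.70],
    [25.0, 910.0, 0.90],
    [ 8.0, 150.0, 0.85],
])
effort_person_days = np.array([14.0, 2.5, 38.0, 9.0])   # invented outcomes

# Fit a linear cost model by least squares (with an intercept column).
X = np.hstack([features, np.ones((len(features), 1))])
coef, *_ = np.linalg.lstsq(X, effort_person_days, rcond=None)

def estimate_cost(changed_modules, affected_tests, required_coverage):
    """Predict validation effort (person-days) for a proposed change set."""
    x = np.array([changed_modules, affected_tests, required_coverage, 1.0])
    return float(x @ coef)

print(round(estimate_cost(10, 400, 0.85), 1))
```

Least squares is used here only to keep the sketch self-contained; the proposal leaves open how behavioural change effects, testing requirements and quality metrics would actually enter the model.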
- Project . 2010 - 2015 . Funder: UKRI . Project Code: EP/H02185X/1 . Funder Contribution: 2,510,410 GBP . Partners: IBM (United Kingdom), University of London
Most of the science used to inform policy makers about future social and economic events has been built for systems that are local rather than global and are assumed to behave in ways that are relatively tractable and thus responsive to policy initiatives. Any examination of the degree to which such policy-making has been successful, or even informative, yields a very mixed picture, with such interventions being only partly effective at best and positively disruptive at worst. Human policy-making shows all the characteristics of a complex system. Many of our interventions make problems worse rather than better, leading to the oft-quoted accusation that the solution is part of the problem. Complexity theory recognises this dilemma.
In this research programme, we will develop new forms of science which address the most difficult of human problems: those that involve global change, where there is no organised constituency and whose agencies are largely regarded as being ineffective. We will argue that global systems tend to be treated in isolation from one another, and that the unexpected dynamics that characterise their behaviour are due to coupling and integration that are all too often ignored. To demonstrate these dynamics and to develop appropriate policy responses, we will study four related global systems: trade, migration, security (which includes crime, terrorism and military disputes) and development aid, which tends to be determined as a consequence of the other three systems. The idea that these dynamics result from coupling suggests that, to get a clear view of them and a better understanding of global change, we need to develop integrated and coupled models whose dynamics can be described in the conventional, and perhaps not so conventional, language of complexity theory: chaos, turbulence, bifurcations, catastrophes, and phase transitions.
We will develop three related styles of model: spatial interaction models embedded in predator-prey-like frameworks which generate bifurcations in system behaviour, reaction-diffusion models that link location to flow, and network models in which epidemic-like diffusion processes can be used to explain how events cascade into one another. We will apply spatial interaction models to trade and migration, reaction-diffusion models to military disputes and terrorism, and network models to international crime (a toy sketch of the spatial-interaction style follows this abstract). We will extend these models to incorporate the generation of qualitatively new events, such as the emergence of new entities (e.g., countries), coupling the models together in diverse ways. We will ultimately develop a generic framework for a coupled global dynamics that spans many spatial and temporal scales and pertains to different systems whose behaviours can be simulated both quantitatively and qualitatively. Our models will be calibrated to data which we will assemble during the project and which we already know exists in usable form.
We will develop various models which incorporate all these ideas into a global intelligence system to inform global policy makers about future events. This system (and we intend there to be many versions of it) will allow policy makers to think the unthinkable and to explore all kinds of "what if" questions with respect to our four key global systems: trade, migration, security and development, while at the same time enabling global dynamics to be considered as a coupling of these systems.
We will begin by developing these models for the UK in relation to the rest of the world, and then extend them to embrace all the key countries and events relevant to these global dynamics. Our partners, who in the first instance are UK government departments and multinational companies with a global reach, will champion this extension to the global arena. The programme will be based on ten academic faculty at UCL spanning a wide range of centres and departments.
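To make the first of the three modelling styles concrete, the sketch below implements a tiny production-constrained spatial interaction (gravity-type) model of the sort the programme proposes to apply to trade and migration. The zones, outflows, attractiveness values, costs and decay parameter are all invented for illustration.

```python
# A toy, production-constrained spatial interaction (gravity-type) model of
# the sort the programme proposes to apply to trade and migration. The zones,
# outflows, attractiveness values, costs and decay parameter are all invented
# for illustration.
import numpy as np

zones   = ["UK", "EU", "US", "CN"]
outflow = np.array([100.0, 250.0, 300.0, 400.0])   # O_i: total exports per zone
attract = np.array([1.0, 2.5, 3.0, 3.5])           # D_j: destination attractiveness
cost    = np.array([[0.0, 1.0, 3.0, 5.0],
                    [1.0, 0.0, 3.5, 4.5],
                    [3.0, 3.5, 0.0, 4.0],
                    [5.0, 4.5, 4.0, 0.0]])         # c_ij: interaction cost
beta = 0.8                                          # cost-decay parameter

# T_ij = A_i * O_i * D_j * exp(-beta * c_ij), with A_i chosen so that the
# flows leaving each origin sum to O_i (the production constraint).
weight = attract[None, :] * np.exp(-beta * cost)
np.fill_diagonal(weight, 0.0)                       # no self-flows
A = 1.0 / weight.sum(axis=1)
flows = A[:, None] * outflow[:, None] * weight

print(zones)
print(np.round(flows, 1))       # predicted origin-destination flow matrix
print(flows.sum(axis=1))        # recovers the origin totals O_i
```

The production constraint mirrors the accounting identities that observed trade and migration tables satisfy; in the programme's framing, the attractiveness terms would themselves evolve under predator-prey-like dynamics, which is where bifurcations in system behaviour can arise.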
- Project . 2007 - 2008 . Funder: UKRI . Project Code: BB/E013201/1 . Funder Contribution: 200,452 GBP . Partners: University of London, IBM (United Kingdom)
With the massive increase in the amount of biological data, there is an increasing need for very powerful computing facilities to allow this data to be analysed. One of the key strengths of the Bloomsbury Centre for Bioinformatics is its world-leading expertise in structural bioinformatics, i.e. the use of computing to analyse and predict the structures of complex biological molecules. The projects range from simulations of how proteins fold to novel image processing that allows the structures of proteins to be determined by electron microscopy. Other projects entail mapping protein structures onto the sequences of all the genes in a genome and clustering all known protein families in order to better annotate their functions. In this project we are seeking to set up three new clusters of computers in the three main departments which form the Centre. Each department focuses on different projects and so has specific hardware requirements for its cluster; however, for the very largest projects (e.g. folding all of the proteins in a genome) we are able to combine the power of all three clusters using special software (called Jyde) developed in an earlier BBSRC project.