
Sapienza University of Rome

Country: Italy
4 Projects
  • Funder: SNSF Project Code: 184005
    Funder Contribution: 72,517
    Partners: Sapienza University of Rome, University of Oxford, Faculty of Music
  • Funder: SNSF Project Code: 195003
    Funder Contribution: 51,083
    Partners: Sapienza University of Rome; Department of Strategic Management and Entre, Rotterdam School of Management, Erasmus University Rotterdam
  • Funder: CHIST-ERA Project Code: COACHES
    Partners: UNICAEN, Sapienza University of Rome, VUB, SU

    Public spaces in large cities are increasingly becoming complex and unwelcoming environments, made hostile and unpleasant to use by overcrowding and by the complexity of the information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors, and safer for the growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots that operate in dynamic, complex, and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to provide services that help humans. Inspired by these challenges, the COACHES project addresses fundamental issues in the design of a robust system of self-directed autonomous robots with high-level skills in environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: knowledge-based representation of the environment; estimation of human activities and needs using Markov and Bayesian techniques; distributed decision-making under uncertainty to collectively plan assistance, guidance, and delivery tasks, using Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) with efficient algorithms that improve their scalability; and multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated in real robots. COACHES will be deployed in the city of Caen, in a shopping mall called “Rive de l’Orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking, and detection of abnormal events (objects or behaviour).
The robots combine this information with what they perceive through their own sensors to provide information through their multi-modal interfaces, guide people to their destinations, show the way to tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with mall visitors, shopkeepers, and mall managers. The project has enlisted an important end-user (Caen la mer), which provides the scenarios where the COACHES robots and systems will be deployed, and it gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov decision processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
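
    The collective planning idea behind a Dec-POMDP can be illustrated with a minimal one-step sketch. All names, states, and numbers below are hypothetical and only illustrate the structure (a shared belief over hidden states, per-robot actions, and a single team reward); they are not taken from the COACHES project:

    ```python
    import itertools

    # Hypothetical two-robot scenario: each robot independently chooses to
    # "assist" a visitor or "wait"; the true state is hidden, so planning
    # is done against a shared prior belief over states.
    belief = {"visitor_needs_help": 0.6, "no_visitor": 0.4}
    actions = ["assist", "wait"]

    def reward(state, joint_action):
        """Single team reward: helping a visitor in need pays off,
        while assisting when nobody needs help wastes robot time."""
        n_assist = sum(a == "assist" for a in joint_action)
        if state == "visitor_needs_help":
            return 5.0 if n_assist >= 1 else -2.0
        return -1.0 * n_assist

    def expected_reward(joint_action):
        # Expectation of the team reward under the belief.
        return sum(p * reward(s, joint_action) for s, p in belief.items())

    # Evaluate every joint action and pick the best one-step plan.
    best = max(itertools.product(actions, repeat=2), key=expected_reward)
    ```

    Even this toy version shows why Dec-POMDPs are hard: the number of joint actions grows exponentially with the number of robots, which is why the project emphasizes efficient algorithms for scalability.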

  • Funder: CHIST-ERA Project Code: CHIST-ERA-19-XAI-009
    Partners: University Politehnica of Bucharest, University of Liverpool, Sapienza University of Rome, INFN, MedLea S.r.l.s., University of Sofia “St. Kl. Ohridski”

    Developing and testing methodologies that make it possible to interpret the predictions of AI algorithms in terms of transparency, interpretability, and explainability has become one of the most important open questions in AI today. This proposal brings together researchers from different fields with the complementary skills needed to understand the behaviour of AI algorithms. Their behaviour will be studied through a set of multidisciplinary use-cases in which explainable AI can play a crucial role; these use-cases will be used to quantify the strengths and to highlight, and possibly solve, the weaknesses of the available explainable-AI methods in different applicative contexts. One aspect that has so far hindered substantial progress towards explainability is that several proposed solutions in explainable AI proved effective only after being tailored to specific applications, and were frequently not easily transferred to other domains. In this project, we will apply the same array of explainability techniques to use-cases intentionally chosen to be heterogeneous with respect to data types, learning tasks, and scientific questions. The proposed use-cases range from AI applications in High Energy Physics, to applied AI in medical imaging, to applied AI for the diagnosis of pulmonary, tracheal, and nasal airways, to machine-learning explainability techniques used to improve analysis and modelling in neuroscience. For each use-case, the research project will consist of three phases. In the first phase, we will apply state-of-the-art explainability techniques, chosen according to the requirements of the case under consideration. In the second phase, shortcomings of the techniques will be identified; most notably, issues of scalability to high-dimensional and raw data, where noise can dominate the signal of interest, will be taken into consideration, as well as the level of certifiability afforded by each algorithm.
In the final phase, new algorithmic methodologies suited to the HEP, medical, and neuroscience use-cases will be designed on the basis of these considerations.
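
    One simple member of the model-agnostic explainability family the project could apply is permutation feature importance: shuffle one input feature and measure how much the model's error grows. The toy model and data below are purely illustrative (the abstract does not specify which techniques will be used):

    ```python
    import random

    def model(x):
        # Toy "trained" model: its prediction depends only on feature 0.
        return 2.0 * x[0]

    def mse(xs, ys):
        # Mean squared error of the model on a dataset.
        return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    def permutation_importance(xs, ys, feature, seed=0):
        """Importance = increase in error after shuffling one feature column."""
        rng = random.Random(seed)
        column = [x[feature] for x in xs]
        rng.shuffle(column)
        permuted = [list(x) for x in xs]
        for row, value in zip(permuted, column):
            row[feature] = value
        return mse(permuted, ys) - mse(xs, ys)

    # Labels generated by the same rule the model learned, so baseline error is 0.
    xs = [[float(i), float(i % 3)] for i in range(20)]
    ys = [2.0 * x[0] for x in xs]

    imp0 = permutation_importance(xs, ys, 0)  # relevant feature
    imp1 = permutation_importance(xs, ys, 1)  # ignored feature
    ```

    Shuffling the feature the model relies on inflates the error, while shuffling an ignored feature leaves it unchanged, which is exactly the kind of diagnostic the project plans to stress-test on high-dimensional, noisy data.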