Organization: University of Sheffield (Sheffield) / Department of Computer Science

Country: United Kingdom
3 Projects
  • Funder: SNSF Project Code: 151615
    Funder Contribution: 55,700
    Partners: University of Sheffield (Sheffield) / Department of Computer Science
  • Project 2012 - 2016
    Funder: CHIST-ERA Project Code: ViSen
    Partners: IRII, University of Surrey (Surrey) / Department of Electronic Engineering, ECL, University of Sheffield (Sheffield) / Department of Computer Science

    Today a typical Web document contains a mix of visual and textual content. Most traditional tools for search and retrieval can successfully handle textual content, but are not prepared to handle heterogeneous documents. This new type of content demands the development of new, efficient tools for search and retrieval. The Visual Sense (ViSen) project aims to automatically mine the semantic content of visual data to enable “machine reading” of images. In recent years, we have witnessed significant advances in visual concept recognition (VCR). These advances have enabled systems that automatically generate keyword-based image annotations. The goal of this project is to move a step forward and predict semantic image representations that can be used to generate more informative sentence-based image annotations, thus facilitating search and browsing of large multi-modal collections. More specifically, the project targets three case studies, namely image annotation, re-ranking for image search, and automatic image illustration of articles. It will address the following key open research challenges: (i) developing methods that can predict a semantic representation of visual content, going beyond the detection of objects and scenes to also recognize a wide range of object relations; (ii) extending state-of-the-art natural language techniques to mining large collections of multi-modal documents and to generating image captions, using both semantic representations of visual content and object/scene type models derived from semantic representations of the multi-modal documents; and (iii) developing learning algorithms that can exploit available multi-modal data to discover mappings between visual and textual content, leveraging ‘weakly’ annotated data and remaining robust to large amounts of noise (an illustrative sketch of such a cross-modal mapping appears at the end of this listing). To this end, the project builds on expertise from multiple disciplines, including computer vision, machine learning and natural language processing (NLP), and brings together four research groups from the University of Surrey (Surrey, UK), the Institut de Robòtica i Informàtica Industrial (IRI, Spain), the Ecole Centrale de Lyon (ECL, France), and the University of Sheffield (Sheffield, UK), each with well-established and complementary expertise in their respective areas of research.

  • Funder: CHIST-ERA Project Code: uComp
    Partners: Vienna University of Economics and Business / Research Institute for Computational Methods, MODUL University Vienna / Department of New Media Technology, Computer Sciences Laboratory for Mech. & Eng. Sciences / Man-Machine Comm. Department, University of Sheffield (Sheffield) / Department of Computer Science

    The rapid growth and fragmented character of social media and publicly available structured data challenge established approaches to knowledge extraction. Many algorithms fail when they encounter noisy, multilingual and contradictory input, and efforts to increase their reliability and scalability face a lack of suitable training data and gold standards. Given that humans excel at interpreting contradictory and context-dependent evidence, the uComp project will address the above-mentioned shortcomings by merging collective human intelligence and automated methods in a symbiotic fashion. The project will build upon the emerging field of Human Computation (HC), in the tradition of games with a purpose and crowdsourcing marketplaces. It will advance the field of Web Science by developing a scalable and generic HC framework for knowledge extraction and evaluation, delegating the most challenging tasks to large communities of users and continuously learning from their feedback to optimise automated methods as part of an iterative process (a toy sketch of this delegation-and-feedback loop is given below). A major contribution is the proposed foundational research on Embedded Human Computation (EHC), which will advance and integrate the currently disjoint research fields of human and machine computation. EHC goes beyond mere data collection and embeds the HC paradigm into adaptive knowledge extraction workflows. An open evaluation campaign will validate the accuracy and scalability of EHC in acquiring factual and affective knowledge. In addition to novel evaluation methods, uComp will also provide shared datasets and benchmark EHC against established knowledge processing frameworks. While uComp methods will be generic and evaluated across domains, climate change was chosen as the main use case for its challenging nature, being subject to fluctuating and often conflicting interpretations. Collaborating with international organisations such as the EEA, NOAA and NASA will increase impact, provide a rich stream of input data, attract and retain a critical mass of users, and promote the adoption of EHC among a wide range of stakeholders.
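
    The following is a minimal, purely illustrative Python sketch of the kind of embedded human computation loop the uComp abstract describes: an automated extractor labels documents, low-confidence items are delegated to human contributors, and their answers are fed back to improve the automated method over successive rounds. The toy keyword classifier, the confidence threshold and the ask_crowd stand-in are assumptions made for illustration only, not the project's actual framework.

    from collections import Counter

    def classify(text, keyword_votes):
        # Toy classifier: each known keyword votes for the label it was last
        # associated with; confidence is the share of the winning label.
        votes = [keyword_votes[w] for w in text.split() if w in keyword_votes]
        if not votes:
            return "unknown", 0.0
        label, count = Counter(votes).most_common(1)[0]
        return label, count / len(votes)

    def ask_crowd(text):
        # Stand-in for a human computation task (e.g. a game with a purpose or
        # a crowdsourcing marketplace); here a hard-coded oracle for toy data.
        return "concerned" if ("melting" in text or "risk" in text) else "neutral"

    def ehc_loop(documents, rounds=3, threshold=0.8):
        keyword_votes = {}                  # the automated method being optimised
        for _ in range(rounds):
            uncertain = [d for d in documents
                         if classify(d, keyword_votes)[1] < threshold]
            for doc in uncertain:           # delegate challenging items to humans
                human_label = ask_crowd(doc)
                for word in doc.split():    # learn from human feedback
                    keyword_votes[word] = human_label
        return keyword_votes

    if __name__ == "__main__":
        docs = [
            "glaciers are melting faster than expected",
            "public debate on climate policy remains balanced",
            "sea level rise poses a serious long term risk",
        ]
        model = ehc_loop(docs)
        for d in docs:
            print(d, "->", classify(d, model))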
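
    As a companion to the ViSen entry above, here is a minimal, purely illustrative sketch of learning a mapping between visual and textual feature vectors, the kind of cross-modal mapping mentioned in that project's challenge (iii). It uses a simple ridge-regression projection from image features into a sentence-embedding space and ranks candidate captions by cosine similarity; the feature dimensions, the regularisation and the synthetic noisy data are assumptions made for illustration, not the project's actual method.

    import numpy as np

    def fit_visual_to_text_map(X, T, reg=1.0):
        # Ridge regression: W minimising ||X W - T||^2 + reg * ||W||^2, where
        # X holds visual feature vectors and T the matching textual vectors.
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ T)

    def rank_captions(image_vec, W, caption_vecs):
        # Project the image into textual space and rank candidate caption
        # vectors by cosine similarity (best match first).
        projected = image_vec @ W
        sims = caption_vecs @ projected / (
            np.linalg.norm(caption_vecs, axis=1) * np.linalg.norm(projected) + 1e-12
        )
        return np.argsort(-sims)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic 'weakly annotated' pairs: 200 noisy image/text feature pairs.
        X = rng.normal(size=(200, 64))                        # e.g. image descriptors
        hidden = rng.normal(size=(64, 32))
        T = X @ hidden + 0.1 * rng.normal(size=(200, 32))     # e.g. sentence embeddings

        W = fit_visual_to_text_map(X, T, reg=10.0)
        ranking = rank_captions(X[0], W, T[:10])
        print("Best-matching caption index for image 0:", ranking[0])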