
EURECOM

Country: France
97 Projects, page 1 of 20
  • Open Access mandate for Publications and Research data
    Funder: EC
    Project Code: 771844
    Overall Budget: 1,991,500 EUR
    Funder Contribution: 1,991,500 EUR
    Partners: EURECOM

    The vast majority of research in computer security is dedicated to the design of detection, protection, and prevention solutions. While these techniques play a critical role in increasing the security and privacy of our digital infrastructure, a glance at the news is enough to understand that it is not a matter of "if" a computer system will be compromised, but only a matter of "when". It is a well-known fact that there is no 100% secure system and that there is no practical way to prevent attackers with enough resources from breaking into sensitive targets. It is therefore extremely important to develop automated techniques for the timely and precise analysis of computer security incidents and compromised systems. Unfortunately, the area of incident response has received very little research attention and is still largely considered more an art than a science, owing to the lack of a proper theoretical and scientific foundation. The objective of BITCRUMBS is to rethink the Incident Response (IR) field from its foundations by proposing a more scientific and comprehensive approach to the analysis of compromised systems. BITCRUMBS will achieve this goal in three steps: (1) by introducing a new systematic approach to precisely measure the effectiveness and accuracy of IR techniques and their resilience to evasion and forgery; (2) by designing and implementing new automated techniques to cope with advanced threats and the analysis of IoT devices; and (3) by proposing a novel forensics-by-design development methodology and a set of guidelines for the design of future systems and software. To provide the right context for these new techniques and to show the impact of the project in different fields and scenarios, BITCRUMBS plans to address its objectives using real case studies borrowed from two different domains: traditional computer software and embedded systems.
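
    One way to picture the forgery-resilience and forensics-by-design themes above is a tamper-evident audit log, sketched below in Python. This is only an illustrative toy (the hash-chained record format, the function names, and the sample events are hypothetical), not the methodology that BITCRUMBS proposes.

      # Illustrative sketch only: a hash-chained audit log in which every entry
      # commits to all previous entries, so a post-incident analyst can detect
      # whether an attacker rewrote history. Not the BITCRUMBS methodology.
      import hashlib
      import json
      import time

      def append_entry(log, event):
          """Append an event, chaining it to the hash of the previous entry."""
          prev_hash = log[-1]["hash"] if log else "0" * 64
          entry = {"ts": time.time(), "event": event, "prev": prev_hash}
          payload = json.dumps({k: entry[k] for k in ("ts", "event", "prev")},
                               sort_keys=True).encode()
          entry["hash"] = hashlib.sha256(payload).hexdigest()
          log.append(entry)

      def verify(log):
          """Return True only if no entry was altered, reordered, or removed."""
          prev_hash = "0" * 64
          for entry in log:
              payload = json.dumps({"ts": entry["ts"], "event": entry["event"],
                                    "prev": entry["prev"]}, sort_keys=True).encode()
              if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(payload).hexdigest():
                  return False
              prev_hash = entry["hash"]
          return True

      log = []
      append_entry(log, "user alice logged in")
      append_entry(log, "service restarted")
      print(verify(log))                      # True
      log[0]["event"] = "nothing happened"    # an attacker tampers with the record
      print(verify(log))                      # False: the forgery is detectable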

  • Open Access mandate for Publications
    Funder: EC
    Project Code: 670896
    Overall Budget: 2,358,770 EUR
    Funder Contribution: 2,358,770 EUR
    Partners: EURECOM

    Advances in theory, integration techniques, and standardization have led to huge progress in wireless technologies. Despite successes with past and current (5G) research, new paradigms leading to greater spectral efficiencies and intelligent network organizations will be in great demand to absorb the continuous growth in mobile data. Our ability to respond suitably to this challenge in the next decade will ensure sustained competitiveness in the digital economy. With few exceptions, such as ad-hoc topologies, classical wireless design places the radio device under the tight control of the network. Promising technologies envisioned in 5G, such as (i) Coordinated MultiPoint (CoMP) techniques, (ii) Massive MIMO, or (iii) Millimeter-wave (MMW), by and large abide by this model. Pure network-centric designs, such as optical cloud-supported ones, raise cost and security concerns and do not fit all deployment scenarios. They also make the network increasingly dependent on a large amount of signaling and device-created measurements. Our project envisions a radically new approach to designing the mobile internet, one that taps into the devices' new capabilities. Our approach recasts devices as distributed computational nodes that jointly solve multi-agent problems, maximizing network performance by exploiting local measurement and information-exchange capabilities. The success of the project relies on the understanding of new information-theoretic limits for systems with decentralized information, the development of novel device communication methods, and advanced team-based statistical signal processing algorithms. The potential gains associated with exploiting the devices' collective, network-friendly intelligence are huge. The project will demonstrate the long-term impact of the new paradigm in pushing the frontiers of mobile internet performance, as well as short- to mid-term impact through its adaptation to currently known communication scenarios and techniques.
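
    As a toy illustration of devices acting as distributed decision makers from purely local measurements, the Python sketch below lets two interfering transmitters choose on/off transmission from their own channel gains via simple threshold rules, and evaluates the average sum-rate by Monte Carlo. The channel model, the threshold policy, and every parameter are illustrative assumptions, not the team-based algorithms the project will develop.

      # Toy team-decision example: each transmitter sees only its OWN direct
      # channel gain and decides whether to transmit; no instantaneous
      # measurements are exchanged. All modelling choices are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 50_000                        # Monte Carlo samples
      P, noise = 1.0, 0.1               # transmit power and noise level

      # Direct (g11, g22) and cross (g12, g21) channel power gains.
      g11, g22 = rng.exponential(1.0, N), rng.exponential(1.0, N)
      g12, g21 = rng.exponential(0.5, N), rng.exponential(0.5, N)

      def sum_rate(on1, on2):
          """Average 2-user sum-rate for given per-sample on/off decisions."""
          sinr1 = on1 * P * g11 / (noise + on2 * P * g21)
          sinr2 = on2 * P * g22 / (noise + on1 * P * g12)
          return np.mean(np.log2(1 + sinr1) + np.log2(1 + sinr2))

      # Decentralised policy: device k transmits only if its local gain g_kk
      # exceeds a threshold t_k; the thresholds are tuned offline.
      best = max(((t1, t2, sum_rate(g11 > t1, g22 > t2))
                  for t1 in np.linspace(0, 2, 21)
                  for t2 in np.linspace(0, 2, 21)),
                 key=lambda x: x[2])
      always_on = sum_rate(np.ones(N), np.ones(N))
      print(f"always transmit  : {always_on:.3f} bit/s/Hz")
      print(f"local thresholds : {best[2]:.3f} bit/s/Hz (t1={best[0]:.1f}, t2={best[1]:.1f})")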

  • Open Access mandate for Publications and Research data
    Funder: EC
    Project Code: 101101031
    Funder Contribution: 150,000 EUR
    Partners: EURECOM

    Delivering video-on-demand (VoD) content over various networks is very costly. VoD drives up the costs of purchasing bandwidth and overprovisioning infrastructure, as well as the costs to content providers (Netflix, Amazon, etc.), which must pay large fees to network operators to deliver content to their clients. To reduce the massive VoD fees, networks employ caches. Our findings in our ERC project DUALITY reveal a new method of exploiting caches that boosts the performance of various networks. We wish to explore the practical ramifications of these findings in real networks. This project will carry out the algorithmic and testing work needed to render these ideas practical.
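
    The cache gain alluded to above can be pictured with the textbook coded-caching construction (Maddah-Ali and Niesen): after a placement phase, a single XOR-ed multicast packet serves two users with different requests at once. The Python sketch below is a minimal rendering of that classic idea under simplifying assumptions; the specific schemes studied in DUALITY may differ.

      # Minimal coded-caching sketch: two files, two users, each user caches
      # half of every file, and one XOR-ed multicast packet serves both demands.
      # Textbook construction only, not necessarily the project's scheme.
      import os

      def split(data):                  # first half / second half of a file
          return data[:len(data) // 2], data[len(data) // 2:]

      def xor(x, y):
          return bytes(a ^ b for a, b in zip(x, y))

      A, B = os.urandom(1024), os.urandom(1024)   # two equally popular "videos"
      A1, A2 = split(A)
      B1, B2 = split(B)

      # Placement (off-peak): user 1 caches (A1, B1), user 2 caches (A2, B2).
      cache1, cache2 = {"A1": A1, "B1": B1}, {"A2": A2, "B2": B2}

      # Delivery: user 1 requests file A, user 2 requests file B. Uncoded
      # delivery would send A2 and B1 separately; coded delivery multicasts
      # a single packet that is simultaneously useful to both users.
      packet = xor(A2, B1)                        # half a file's worth of data

      user1_file = cache1["A1"] + xor(packet, cache1["B1"])   # recovers A2
      user2_file = xor(packet, cache2["A2"]) + cache2["B2"]   # recovers B1
      assert user1_file == A and user2_file == B
      print("both demands served with one multicast of", len(packet), "bytes")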

  • Open Access mandate for Publications
    Funder: EC
    Project Code: 725929
    Overall Budget: 1,978,780 EUR
    Funder Contribution: 1,978,780 EUR
    Partners: EURECOM

    We propose to develop the theoretical foundations of transforming memory into data rates, and to explore their practical ramifications in wireless communication networks. Motivated by the long-lasting open challenge of inventing a communication technology that scales with the network size, we have recently discovered early indications of how the preemptive use of distributed data storage at the receiving communication nodes (well before transmission) can offer unprecedented throughput gains, surprisingly bypassing the dreaded bottleneck of real-time channel feedback. For an exploratory downlink configuration, we unearthed a hidden duality between feedback and the preemptive use of memory, which reduced the needed memory size doubly exponentially and consequently offered unbounded throughput gains compared to all existing solutions with the same resources. This was surprising because feedback and memory were thought to be mostly disconnected: one is used on the wireless PHY layer, the other on the wired MAC. This development prompts our key scientific challenge, which is to pursue the mathematical convergence between feedback information theory and preemptive distributed data storage, and to then design ultra-fast memory-aided communication algorithms that pass real-life testing. This is a structurally new approach, which promises to reveal deep links between feedback information theory and memory for a variety of envisioned wireless-network architectures of exceptional potential. In doing so, our newly proposed theory stands to identify the basic principles of how a splash of memory can surgically alter the informational structure of these networks, rendering them faster, simpler, and more efficient. In the end, this study has the potential to directly translate the continuously increasing data-storage capabilities into gains in wireless network capacity, and ultimately to avert the looming network overload caused by these same indefinite increases in data volumes.
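
    For orientation, the standard coded-caching baseline already quantifies how memory converts into rate: with K users, a library of N files, and a per-user cache of M files, XOR multicasts serve 1 + KM/N users at a time. The expressions below state only that well-known baseline; the feedback/memory duality results pursued in DUALITY go beyond it.

      % Well-known coded-caching baseline (Maddah-Ali and Niesen), given here
      % only for scale; not the project's duality results.
      \begin{align*}
        R_{\text{uncoded}}(M) &= K\Bigl(1 - \tfrac{M}{N}\Bigr)
            && \text{(each user served separately)} \\
        R_{\text{coded}}(M)   &= \frac{K\bigl(1 - \tfrac{M}{N}\bigr)}{1 + \tfrac{KM}{N}}
            && \text{(each multicast serves } 1 + \tfrac{KM}{N} \text{ users)} \\
        \text{global gain}    &= 1 + \frac{KM}{N},
            && \text{which grows with the network size } K.
      \end{align*}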

  • Open Access mandate for Publications and Research data
    Funder: EC
    Project Code: 101077361
    Overall Budget: 1,499,060 EUR
    Funder Contribution: 1,499,060 EUR
    Partners: EURECOM

    SENSIBILITÉ describes a novel theory for the distributed computing of nonlinear functions over communication networks. Motivated by the long-lasting open challenge of inventing technologies that scale with the network size, this intriguing and far-reaching theory elevates the distributed encoding and joint decoding of information sources to the critical network computing problem for a class of network topologies and a class of nonlinear functions of dependent sources. Our theory will elevate distributed communication to the realm of distributed computation of any function over any network. Overall, this problem requires communicating correlated messages over a network, coding distributed sources for the computation of functions, and meeting the desired fidelity given a distortion criterion for the given function. In such a scenario, the classical separation theorem of Claude Shannon, which modularizes the design of source and channel codes to achieve the capacity of communication channels, is in general inapplicable. SENSIBILITÉ envisions a networked computation framework for nonlinear functions. It will use the structural information of the sources and the decomposition of nonlinear functions to devise efficient distributed compression algorithms. For scalability, it will design message sets that are oblivious to the protocol information. For parsimonious representations across networks, it will address the trade-off between the quantization and compression of functions. SENSIBILITÉ pursues a contemporary vision of network-driven functional compression that accounts for description-length and time complexities, towards alleviating the load on the large-scale, real-world networks of the future. The advanced theory will be tested in real-life settings on applications of grand societal impact, such as over-the-air computing for the Internet of Things, massive data compression for computational imaging, and zero-error computation for real-time holographic communications.
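
    A tiny numerical example of why computing a function can be far cheaper than communicating the sources themselves: for two uniform bits X and Y that disagree with probability p, a sink that only needs the modulo-two sum Z = X xor Y can be served at about 2·H(Z) bits per symbol (the classic Korner-Marton scheme), well below the 2 bits per symbol needed to ship both sources. The Python sketch below merely evaluates these textbook rates; SENSIBILITÉ targets far more general nonlinear functions, distortion criteria, and network topologies.

      # Textbook rate comparison for distributed function computation.
      # X is a uniform bit, Y = X xor noise with P[noise = 1] = p, and the
      # sink only wants Z = X xor Y. Numbers are illustrative, not project results.
      import math

      def h2(p):
          """Binary entropy in bits."""
          if p in (0.0, 1.0):
              return 0.0
          return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

      p = 0.05                      # the two sources are highly correlated
      rate_separate = 2.0           # send X and Y independently: H(X) + H(Y)
      rate_joint = 1.0 + h2(p)      # Slepian-Wolf joint coding: H(X, Y)
      rate_function = 2 * h2(p)     # Korner-Marton: enough to recover Z only

      print(f"send both sources separately : {rate_separate:.3f} bit/symbol")
      print(f"send both sources jointly    : {rate_joint:.3f} bit/symbol")
      print(f"compute only Z = X xor Y     : {rate_function:.3f} bit/symbol")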
