The goal of SEED is to fundamentally advance the methodology of computer vision by exploiting a dynamic analysis perspective in order to acquire accurate yet tractable models that can automatically learn to sense our visual world; localize still and animate objects (e.g. chairs, phones, computers, bicycles, cars, people, and animals), actions, and interactions; and infer qualitative geometrical and physical scene properties, by propagating and consolidating temporal information with minimal system training and supervision. SEED will extract descriptions that identify the precise boundaries and spatial layout of the different scene components and the manner in which they move, interact, and change over time. For this purpose, SEED will develop novel high-order compositional methodologies for the semantic segmentation of video data acquired by observers of dynamic scenes, adaptively integrating figure-ground reasoning based on bottom-up and top-down information, and using weakly supervised machine learning techniques that support continuous learning towards an open-ended number of visual categories. The system will not only recover detailed models of dynamic scenes but also forecast future actions and interactions in those scenes over long time horizons, by means of contextual reasoning and inverse reinforcement learning. Two demonstrators are envisaged: the first for scene understanding and forecasting in indoor office spaces, and the second for urban outdoor environments. The methodology emerging from this research has the potential to impact fields as diverse as automatic personal assistance, video editing and indexing, robotics, environmental awareness, augmented reality, human-computer interaction, and manufacturing.
Advances in tracking technology during the last decade have shown that migratory birds can fly longer and faster than we previously thought possible. Yet we do not know how birds perform these seemingly impossible journeys, as until recently only spatiotemporal patterns could be recorded. The overall aim of this project is to reveal the constraints on migratory flight and the behavioural and physiological adaptations that have evolved to overcome them, thus making the extreme performances of migratory birds possible. This goal will be met using novel tracking devices, multisensor data loggers, which in addition to spatiotemporal patterns also record behaviour, including flight altitudes, temperature, and the detailed timing of flights and stopovers throughout the entire migration cycle. The few multisensor tracking studies carried out to date have hinted at stunning new insights and have seriously challenged previously assumed limits on peak flight altitudes, in-flight altitude changes, and the duration of individual flights. In particular, together with colleagues I have discovered a wholly unexpected altitudinal behaviour: some bird species change their flight altitude between night and day, flying at extremely high altitudes during the day (up to 6000-8000 m). But what makes a migratory bird fly as high as Mount Everest, even when there are no mountains to cross? By launching an extensive multisensor data-logging programme, combined with wind tunnel experiments and field studies, the proposed project will change our understanding of the possibilities and limitations of bird migration. This will be done by disentangling the causes and consequences of birds' altitudinal behaviour; the flexibility, timing, and duration of migratory flights (whether birds use only diurnal or nocturnal flights, prolong flights to last both day and night, or even fly nonstop between wintering and breeding grounds); and the costs and consequences of these seemingly extreme behaviours.
This project lies at the crossroads of attosecond science, photoionization of atoms and molecules, and quantum optics. Progress in the performance of attosecond sources, in particular regarding repetition rate, now enables us to perform photoionization studies of atoms and molecules using advanced coincidence and three-dimensional momentum techniques. By adding a further dimension, the phase, which is accessible through attosecond interferometric techniques, we will be able to follow in time the quantum properties of the studied processes. The aim of the present application is to perform quantum optics experiments not with photons, as in conventional quantum optics, but with electron wave packets created by absorption of attosecond light pulses. Our objectives are (i) to characterize and study in the time domain the quantum coherence of attosecond electron wave packets, (ii) to control quantum interferences of electron wave packets using a small number of attosecond pulses, and (iii) to create and follow in time entangled two-electron attosecond wave packets. The experiments will use advanced laser systems, attosecond sources, and electron detectors. A unique 200-kHz repetition-rate laser system based on optical parametric chirped-pulse amplification technology, combined with an efficient attosecond source and a three-dimensional momentum electron detector, will open the door to attosecond experiments in which the kinematics of the light-matter interaction can be recorded. Success in achieving the above objectives will not only lead to a major leap forward in attosecond science and in atomic and molecular physics in general; it may also shed new light on fundamental quantum physics, given the originality of the studied systems, attosecond electron wave packets, and the versatility of the tools, which provide four-dimensional information (momentum and time) for multiple particles.