Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds, 1st ed. 2020, from €79.99 as PDF eBook or paperback.
Simultaneous Localization and Mapping (SLAM), comprising the estimation of robot ego-motion and the construction of a map of the surrounding environment, is one of the most fundamental tasks in mobile robotics. Many SLAM systems proposed in the past rely on the Global Positioning System (GPS), which makes them both expensive and overly dependent on the availability of a GPS signal. We propose an alternative, low-cost approach to portable SLAM based on monocular vision, a promising technique owing to its flexibility, ease of use, and ease of calibration. To perform this task we use an Extended Kalman Filter, one of the most efficient and robust methods employed in SLAM systems. We show how the estimated position can be improved and its uncertainty reduced by fusing data from different sensors, in particular a simple 3-axis accelerometer. Through careful selection and tuning of image-analysis algorithms, we demonstrate that real-time, low-cost SLAM is feasible. This work is useful to professionals developing SLAM systems and to readers in the wider field of computer vision, especially those interested in feature detection and tracking.
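The sensor-fusion idea above can be sketched as a minimal linear, one-dimensional Kalman filter: the accelerometer drives the motion prediction, and a visual position fix corrects it. All constants and noise values below are illustrative assumptions, not the book's actual parameters; a full EKF would additionally linearize a nonlinear motion and measurement model at every step.

```python
import numpy as np

# Minimal 1-D Kalman filter sketch (illustrative only): the camera provides
# noisy position fixes, while the accelerometer (reduced here to one axis)
# drives the motion prediction as a control input.

dt = 0.1                      # time step [s] (assumed)
F = np.array([[1, dt],        # state transition for [position; velocity]
              [0, 1]])
B = np.array([[0.5 * dt**2],  # control model: acceleration input
              [dt]])
H = np.array([[1, 0]])        # vision measures position only
Q = np.eye(2) * 1e-3          # process noise (assumed)
R = np.array([[0.05]])        # vision measurement noise (assumed)

x = np.zeros((2, 1))          # state estimate [position; velocity]
P = np.eye(2)                 # state covariance

def predict(x, P, accel):
    """Propagate the state with the accelerometer reading."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the prediction with a visual position measurement."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One filter cycle: predict with the accelerometer, update with vision.
x, P = predict(x, P, accel=0.2)
x, P = update(x, P, z=np.array([[0.01]]))
print(x.ravel(), P[0, 0])
```

Note how the position variance `P[0, 0]` drops sharply after the vision update, which is the uncertainty reduction the fusion is meant to achieve.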
Many applications of video technology require multi-target tracking, for example indoor or outdoor surveillance, intelligent vehicles, robotics, or experiments in biology. This book shows how the specifics of the targets influence the design of tracking methods. Four entirely different types of targets are tracked, namely fruit flies (Drosophila melanogaster), vehicles, pedestrians, and lane markings. The book presents methods for tracking such targets under various recording settings (i.e. static or moving cameras; monocular, binocular, or trinocular setups). To further improve tracking performance, more specific methods are proposed and discussed for the different targets. The book opens with a comprehensive review of state-of-the-art multi-target tracking methods. It is suitable for students or researchers who would like advice on tracking methods applicable to their application, or who contribute to the design of tracking methods in the computer vision field.
Visual scene understanding is one of the ultimate goals of computer vision and has been a focus of the field since its beginnings. Despite continuous effort over many years, applications such as autonomous driving and robotics remain subjects of active research. In recent years, improved probabilistic methods have become a popular tool in state-of-the-art computer vision algorithms, while high-resolution digital imaging devices and increased computational power have become widely available. By leveraging these methodical and technical advances, current methods obtain encouraging results in well-defined environments for robust object-class detection, tracking, and pixel-wise semantic scene labeling, giving rise to renewed hope for further progress in scene understanding in real environments. This book advances state-of-the-art scene understanding with monocular cameras and aims at applications on mobile platforms such as service robots or driver assistance for automotive safety. It develops and improves approaches for object-class detection and semantic scene labeling and integrates them into models for global scene reasoning that exploit context at different levels.
Recovering dense geometry and camera motion from a set of monocular images is a well-known problem that can be solved quite reliably in well-conditioned environments. Typical algorithms for this problem assume static lighting and the presence of sufficient scene texture. There are, however, many situations in which these prerequisites are not met and common algorithms fail. One example is medical video endoscopy, where surfaces exhibit little texture and lighting conditions change because the light source is mounted on the moving camera. We address the problem with a purely intensity-based approach that also accounts for changes in lighting conditions. In this thesis, we investigate the applicability of sliding-window, intensity-based bundle-adjustment methods to this problem.
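As a rough illustration of an intensity-based objective with lighting compensation, the sketch below fits a per-patch affine brightness model (gain and offset) in closed form and evaluates the resulting photometric residual. The function names, the affine model, and the synthetic data are illustrative assumptions, not the thesis' actual formulation.

```python
import numpy as np

# Photometric residual with an affine brightness model
#   I_ref ≈ a * I_cur + b
# compensating for an illumination change such as a light source that
# moves with the camera. Illustrative sketch, not the thesis' method.

def photometric_residual(I_ref, I_cur, a, b):
    """Per-pixel intensity error after affine lighting compensation."""
    return I_ref - (a * I_cur + b)

def fit_affine_lighting(I_ref, I_cur):
    """Closed-form least-squares estimate of the gain/offset (a, b)."""
    A = np.stack([I_cur, np.ones_like(I_cur)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, I_ref, rcond=None)
    return a, b

# Synthetic patch: the reference equals the current patch viewed under a
# gain of 1.5 and an offset of 0.1 (no noise, so the fit is exact).
rng = np.random.default_rng(0)
I_ref = rng.uniform(0.2, 0.8, size=64)
I_cur = (I_ref - 0.1) / 1.5

a, b = fit_affine_lighting(I_ref, I_cur)
r = photometric_residual(I_ref, I_cur, a, b)
print(a, b, np.abs(r).max())
```

In a sliding-window bundle adjustment, a residual of this form (with the geometric warp included) would be minimized jointly over camera poses, scene depth, and the lighting parameters.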
Computational imaging systems combine optical and digital signal processing in order to extract information from the light of a scene. This work presents raytracing as a simulation tool to holistically describe, assess, and optimize computational imaging systems. Using monocular depth estimation as an example, the simulation is compared against a real prototype of a camera with a programmable aperture, and the presented methods are evaluated.
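To illustrate the defocus cue that a camera with a programmable aperture can exploit for monocular depth estimation, here is a minimal thin-lens sketch relating scene depth to the circle-of-confusion diameter. The formula is standard geometric optics; all numbers are illustrative assumptions, not the prototype's parameters.

```python
# Thin-lens sketch: the circle-of-confusion diameter grows as an object
# moves away from the focused distance, which is the blur cue a coded or
# programmable aperture makes depth-dependent and thus measurable.

def coc_diameter(depth, focus_dist, focal_len, aperture_diam):
    """Circle-of-confusion diameter (same units as the inputs)."""
    return aperture_diam * focal_len * abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

f = 0.05                 # 50 mm lens (assumed)
aperture_diam = f / 2.0  # aperture diameter at f/2 (assumed)
focus = 1.0              # focused at 1 m (assumed)

# In focus there is no blur; farther away, the blur diameter grows.
print(coc_diameter(1.0, focus, f, aperture_diam))  # 0.0
print(coc_diameter(2.0, focus, f, aperture_diam))
```

A raytracing simulation of such a system reproduces this blur (including aperture shape and aberrations) per pixel, which is what allows the whole imaging pipeline to be assessed and optimized before building hardware.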