Publication date: 31.08.2005, Medium: Paperback, Binding: Softcover, Title: Monocular Model-Based 3D Tracking of Rigid Objects, Subtitle: A Survey, Authors: Lepetit, Vincent // Fua, Pascal, Publisher: Now Publishers Inc, Language: English, Keywords: COMPUTERS // Computer Vision & Pattern Recognition // Machine Vision // Image Understanding, Category: Computer Science, Pages: 104, Information: Paperback, Weight: 172 g, Seller: averdo
Many applications of video technology require multi-target tracking, for example in indoor or outdoor surveillance, intelligent vehicles, robotics, or biological experiments. This book shows how the specifics of the targets influence the design of tracking methods. Four very different types of targets are tracked, namely fruit flies (Drosophila melanogaster), vehicles, pedestrians, and lane markings. The book presents methods for tracking such targets under various recording settings (i.e. static or moving cameras; monocular, binocular, or trinocular setups). To further improve tracking performance, more specific methods are proposed and discussed for the different targets. The book starts with a comprehensive review of state-of-the-art multi-target tracking methods. It is suitable for students or researchers who would like advice on tracking methods applicable to their application, or who contribute to the design of tracking methods in the computer vision field.
We present a visual servoing system for an amphibious legged robot. This monocular-vision-based servoing mechanism enables the robot to track and follow a target both underwater and on the ground. We use three different tracking algorithms to track and localize the target in the image, with color as the tracked feature: one based on the color of the target object, one on the target's color distribution, and one on a color distribution with a probabilistic kernel. Output from the tracker is fed to a proportional-integral-derivative (PID) controller, which generates steering commands for the robot controller. The robot controller in turn converts the steering commands into motor commands for the robot's six legs. A large class of significant applications can be addressed by allowing such a robot to follow a diver or some other moving target. The system has been evaluated in open-water environments under natural lighting conditions, and has successfully tracked and followed a wide variety of target objects.
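The tracker-to-controller pipeline described above can be sketched in a few lines. The sketch below is a minimal illustration under assumed names and gains (`PIDController`, `steering_from_tracker`, and the pixel-error normalisation are all assumptions), not the book's implementation:

```python
class PIDController:
    """Textbook PID controller; gains here are illustrative, not from the book."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and time step dt."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def steering_from_tracker(target_x, image_width, pid, dt=0.1):
    """Map the tracked target's horizontal pixel position to a steering command.

    The error is the target's offset from the image centre, normalised to
    [-1, 1]; a positive command steers toward a target left of centre.
    """
    error = (image_width / 2.0 - target_x) / (image_width / 2.0)
    return pid.update(error, dt)
```

In practice the derivative term is usually low-pass filtered and the integral clamped to avoid wind-up, which matters especially underwater where disturbances are large.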
Simultaneous Localization and Mapping (SLAM), comprising the estimation of robot ego-motion and the building of a map of the surrounding environment, is one of the most fundamental tasks in mobile robotics. Many SLAM systems proposed in the past make use of the Global Positioning System (GPS), which renders them both expensive and overly dependent on the presence of a GPS signal. We propose an alternative, low-cost approach to portable SLAM based on monocular vision, a promising technique due to its flexibility, ease of use, and ease of calibration. To perform this task we use an Extended Kalman Filter, one of the most efficient and robust methods used in SLAM systems. We show how the estimated position can be improved and its uncertainty reduced by fusing data from different sensors, in particular a simple 3-axis accelerometer. We demonstrate, through careful selection and tuning of image-analysis algorithms, that real-time, low-cost SLAM is feasible. This work is useful to professionals developing SLAM systems and to readers in the larger field of computer vision, especially those interested in feature detection and tracking.
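The sensor-fusion idea, a vision-derived position fix correcting an accelerometer-driven prediction, can be illustrated with a single scalar Kalman filter step. The book uses a full Extended Kalman Filter over the robot pose and map; the one-dimensional state, noise values, and function name below are illustrative assumptions only:

```python
def kf_step(x, P, u, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    u    : control input, e.g. displacement integrated from the accelerometer
    z    : measurement, e.g. a position fix from monocular vision
    q, r : process and measurement noise variances (illustrative values)
    """
    # Predict: propagate the state with the accelerometer-derived motion
    # and grow the uncertainty by the process noise.
    x_pred = x + u
    P_pred = P + q
    # Update: fuse the vision measurement; the Kalman gain K weights the
    # measurement more heavily the larger the predicted uncertainty is.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

The key property the book exploits is visible even in this toy version: after the update, the posterior variance `P_new` is always smaller than the prior, i.e. fusing the two sensors reduces the position uncertainty.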
This work proposes novel approaches for object tracking in challenging scenarios such as severe occlusion, deteriorated vision, and long-range multi-object re-identification. All of these solutions rely only on image sequences captured by a monocular camera and do not require additional sensors. Experiments on standard benchmarks demonstrate improved state-of-the-art performance for these approaches. Because all of the presented approaches are designed with efficiency in mind, they can run at real-time speed.
Developing on-board driver assistance systems (DAS) requires understanding various events involving the motion of vehicles in the vicinity of the host vehicle. Determining the position of other vehicles on the road provides key information for driver assistance systems; robust and reliable vehicle detection and tracking are therefore the basic steps in these systems. Since monocular vision-based systems are particularly attractive for their reduced cost and maintenance and for the high-fidelity information they provide about the driving environment, the problem can be addressed with computer vision techniques. This work focuses mainly on detecting and tracking vehicles viewed from a camera mounted inside a vehicle, in daylight conditions. The approach presented in the book uses vehicle shadow cues and vehicle edge information to obtain cost-effective and fast estimation. After extracting vehicles, the algorithm tracks them effectively using a Kalman-filter-based tracking algorithm. Several sequences from real traffic situations have been tested, yielding highly accurate detection of multiple vehicles.
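A tracking loop of the kind described, predict each vehicle's image position, associate the nearest detection, then correct, can be sketched as follows. This is a simplified fixed-gain (alpha-beta-style) stand-in for the book's Kalman filter; the constant-velocity model, the gating threshold, and all names are assumptions:

```python
def predict(track, dt=1.0):
    """Constant-velocity prediction of a track (x, y, vx, vy) in image space."""
    x, y, vx, vy = track
    return (x + vx * dt, y + vy * dt, vx, vy)


def associate(pred, detections, gate=30.0):
    """Nearest-neighbour association within a gating distance (in pixels).

    Returns the closest detection centre, or None if none falls in the gate.
    """
    px, py = pred[0], pred[1]
    best, best_d = None, gate
    for (dx, dy) in detections:
        d = ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5
        if d < best_d:
            best, best_d = (dx, dy), d
    return best


def update(pred, meas, alpha=0.5):
    """Blend prediction and measurement with a fixed gain (alpha-beta style).

    A real Kalman filter computes this gain from the covariances instead.
    """
    if meas is None:
        return pred  # coast on the prediction when the vehicle is missed
    x, y, vx, vy = pred
    mx, my = meas
    return (x + alpha * (mx - x), y + alpha * (my - y),
            vx + alpha * (mx - x), vy + alpha * (my - y))
```

Gating keeps a shadow or edge false positive far from the predicted position from corrupting the track, and coasting bridges short occlusions by other vehicles.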
Networked 3D virtual environments allow multiple users to interact with each other over the Internet. Users can share some sense of telepresence by remotely animating an avatar that represents them. However, avatar control may be tedious and still render user gestures poorly. This work aims at animating a user's avatar from real-time 3D motion capture by monocular computer vision, thus making virtual telepresence available to anyone using a personal computer with a webcam. The approach followed consists of registering a 3D articulated upper-body model to a video sequence. The first contribution of this work is a method of allocating computing iterations under a real-time constraint that achieves optimal robustness and accuracy. The major issue for robust 3D tracking from monocular images is the 3D/2D ambiguity that results from the lack of depth information. As a second contribution, this work enhances particle filtering for 3D/2D registration under limited computation constraints with a number of heuristics, whose contribution is demonstrated experimentally. A parameterization of the arm pose based on its end-effector is proposed to better model uncertainty in the depth direction.
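The particle-filtering backbone of such 3D/2D registration, diffuse, weight, resample, can be sketched with a one-dimensional state. The book's state is the articulated upper-body pose and its likelihood is computed from the image; the Gaussian likelihood, noise levels, and names below are illustrative assumptions:

```python
import math
import random


def pf_step(particles, observation, motion_std=0.1, obs_std=0.2, rng=random):
    """One particle-filter cycle over a 1-D state (a stand-in for the pose).

    1. Diffuse each particle with the motion model.
    2. Weight each particle by how well it explains the observation
       (here a Gaussian likelihood; the book scores against image features).
    3. Resample particles in proportion to their weights.
    """
    moved = [p + rng.gauss(0.0, motion_std) for p in particles]
    weights = [math.exp(-0.5 * ((p - observation) / obs_std) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return rng.choices(moved, weights=weights, k=len(moved))
```

Because many particles survive each resampling step, the filter can carry several pose hypotheses at once, which is exactly what makes it attractive under the 3D/2D depth ambiguity: competing depth interpretations coexist until the images disambiguate them.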
Visual scene understanding is one of the ultimate goals of computer vision and has been a focus of the field since its early days. Despite continuous effort over many years, applications such as autonomous driving and robotics are still subject to active research. In recent years, improved probabilistic methods have become a popular tool for state-of-the-art computer vision algorithms. Additionally, high-resolution digital imaging devices and increased computational power have become available. By leveraging these methodological and technical advances, current methods obtain encouraging results in well-defined environments for robust object-class detection, tracking, and pixel-wise semantic scene labeling, and give rise to renewed hope for further progress in scene understanding in real environments. This book improves state-of-the-art scene understanding with monocular cameras and aims at applications on mobile platforms such as service robots or driver assistance for automotive safety. It develops and improves approaches for object-class detection and semantic scene labeling and integrates them into models for global scene reasoning that exploit context at different levels.