A Fuzzy-Mathematical Model to Motion Detection with Monocular Vision (Vision Based Mobile Robots)
Vision-based inference for obstacle avoidance in mobile robots is one of the most attractive research areas in both Computer Vision and Robotics. Computer Vision, and more specifically vision for intelligent machines, enables mobile robots to perceive the external world with 'wisdom'. Vision-based obstacle avoidance has therefore become one of the major research areas of Robotics. Estimating the motion path and predicting the motion behavior of a dynamic object with only a single camera (monocular vision) is a real challenge. This can be achieved by analyzing a sequence of image frames extracted from a live video stream, but the analytic techniques must be extremely fast, since a decision drawn within a reasonably short response time is the only safeguard for the robot and for the safety of others in its environment. We therefore postulate a fuzzy-mathematical model, an Artificial Intelligence approach, which achieves the ultimate objective with significantly reduced complexity and minimized computational overhead compared with conventional mathematical modeling.
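As a hedged illustration of the fuzzy approach described above, the sketch below implements two hypothetical trapezoidal membership functions and a single min-rule. The chosen inputs (apparent image growth, horizontal drift of the object in the image) and all thresholds are assumptions made for illustration, not the book's actual model.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def threat_level(apparent_growth, horizontal_drift):
    """Fuzzy rule (illustrative): an object whose image grows quickly
    and drifts little horizontally is likely on a collision course."""
    growing = trapezoid(apparent_growth, 0.02, 0.1, 1.0, 2.0)
    centred = trapezoid(1.0 - abs(horizontal_drift), 0.5, 0.8, 1.0, 1.1)
    # IF growing AND centred THEN high threat (min models fuzzy AND)
    return min(growing, centred)
```

The appeal in a real-time setting is that each membership evaluation is a handful of comparisons, so the per-frame cost stays negligible compared with conventional model-based motion estimation.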
Depth estimation from a single monocular image is a difficult problem. The task is even more challenging because depth cues such as motion and stereo correspondences are absent in a single image. Hence, a machine-learning approach for extracting depth information from a single image is proposed. First, depth is generated by manifold learning using the Locally Linear Embedding (LLE) algorithm, a nonlinear dimensionality-reduction method that preserves the neighborhood structure of the input set while mapping it from a higher-dimensional space to a lower-dimensional one. The resulting depth maps are then refined by a fixed-point algorithm, a supervised learning step that extracts those image features with strong correspondences to the depth labels.
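A minimal from-scratch sketch of the LLE step described above: each point is reconstructed from its neighbors, and those reconstruction weights are preserved in the low-dimensional embedding. The parameter values are illustrative, and the fixed-point refinement stage is not reproduced.

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=1, reg=1e-3):
    """Locally Linear Embedding sketch: solve for per-point neighbor
    weights, then embed so the same weights still reconstruct each point."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:n_neighbors + 1]     # skip the point itself
        Z = X[nbrs] - X[i]                          # center neighbors
        C = Z @ Z.T
        C += np.eye(n_neighbors) * reg * np.trace(C)  # regularize Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs] = w / w.sum()                    # weights sum to one
    # Embedding = bottom eigenvectors of (I - W)^T (I - W),
    # discarding the constant eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

In the book's setting, X would hold image feature vectors and the low-dimensional coordinates would serve as the initial depth estimate; here it is only demonstrated on synthetic points.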
Simultaneous Localization and Mapping, comprising estimation of robot ego-motion and building a map of the surrounding environment, is one of the most fundamental tasks of mobile robotics. Many SLAM systems proposed in the past make use of the Global Positioning System (GPS), which renders them both expensive and overly dependent on the presence of the GPS signal. We propose an alternative, low-cost approach for portable SLAM which is based on monocular vision, a promising technique due to its flexibility, ease of use, and ease of calibration. In order to perform this task we use an Extended Kalman Filter, one of the most efficient and robust methods used in SLAM systems. We show how it is possible to improve the estimated position and reduce its uncertainty by fusing data from different sensors, in particular using a simple 3-axis accelerometer. We prove, through careful and intelligent selection and tuning of image analysis algorithms, that real-time, low-cost SLAM is feasible. This work is useful to professionals developing SLAM systems and to people in the larger field of computer vision, especially those interested in feature detection and tracking.
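The sensor-fusion idea can be sketched with a minimal one-dimensional Kalman filter (the book uses a full EKF-SLAM state; the two-component state, noise values, and motion model below are simplifying assumptions): the accelerometer drives the prediction step, and a visual position fix drives the update step, shrinking the position uncertainty.

```python
import numpy as np

def predict(x, P, accel, dt, q=0.1):
    """Prediction step: constant-velocity model driven by acceleration."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    B = np.array([0.5 * dt**2, dt])         # how acceleration enters
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)         # inflate uncertainty
    return x, P

def update(x, P, z, r=0.5):
    """Update step: fuse a (e.g. vision-derived) position measurement z."""
    H = np.array([[1.0, 0.0]])              # we observe position only
    S = H @ P @ H.T + r                     # innovation covariance
    K = (P @ H.T) / S                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P             # uncertainty shrinks
    return x, P
```

The same predict/update structure carries over to the full EKF, where the state additionally contains landmark positions and the measurement model is nonlinear and linearized at each step.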
Networked 3D virtual environments allow multiple users to interact with each other over the Internet. Users can share some sense of telepresence by remotely animating an avatar that represents them. However, avatar control may be tedious and still render user gestures poorly. This work aims at animating a user's avatar from real-time 3D motion capture by monocular computer vision, thus allowing virtual telepresence to anyone using a personal computer with a webcam. The approach consists of registering a 3D articulated upper-body model to a video sequence. The first contribution of this work is a method of allocating computing iterations under a real-time constraint that achieves optimal robustness and accuracy. The major issue for robust 3D tracking from monocular images is the 3D/2D ambiguity that results from the lack of depth information. As a second contribution, this work enhances particle filtering for 3D/2D registration under limited computational constraints with a number of heuristics, whose contribution is demonstrated experimentally. A parameterization of the arm pose based on its end-effector is proposed to better model uncertainty in the depth direction.
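A minimal particle-filter sketch of the registration loop described above (diffuse, weight, resample). The one-dimensional pose and Gaussian likelihood below are stand-ins for the articulated upper-body model and the image-based likelihood actually used; they only illustrate the mechanics.

```python
import math
import random

def resample(particles, weights):
    """Multinomial resampling: draw particles proportionally to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

def step(particles, observation, noise=0.1, sigma=0.2):
    """One filter iteration: diffuse each particle (motion model), weight
    it by how well it explains the observation (likelihood), resample."""
    moved = [p + random.gauss(0.0, noise) for p in particles]
    weights = [math.exp(-(p - observation) ** 2 / (2 * sigma ** 2))
               for p in moved]
    return resample(moved, weights)
```

Because likelihood evaluation dominates the cost, the number of particles per frame is exactly the kind of "computing iteration" budget the work allocates under its real-time constraint.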
Recovery of dense geometry and camera motion from a set of monocular images is a well-known problem that can be solved quite reliably in well-conditioned environments. Typical algorithms addressing this problem assume static lighting and the presence of sufficient scene texture. There are, however, many situations where these prerequisites are not met and common algorithms fail. One example is medical video-endoscopy, where surfaces do not exhibit much texture and lighting conditions change due to the moving light source mounted on the camera. We suggest addressing the problem with a purely intensity-based approach that also takes changes in lighting conditions into account. In this thesis, we investigate the applicability of sliding-window intensity-based bundle-adjustment methods to this problem.
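One common way to make an intensity-based objective robust to lighting change is an affine illumination model, i.e. a per-frame gain and bias applied to the reference intensities. The sketch below uses that simplification (an assumption for illustration, not necessarily the thesis's exact formulation): residuals compare raw pixel intensities after compensating the global brightness change.

```python
import numpy as np

def photometric_residuals(I_ref, I_cur, gain, bias):
    """Intensity residual r = I_cur - (gain * I_ref + bias) over
    corresponding pixels; this is what bundle adjustment would minimize."""
    return I_cur - (gain * I_ref + bias)

def fit_gain_bias(I_ref, I_cur):
    """Least-squares gain/bias compensating a global lighting change."""
    A = np.stack([I_ref, np.ones_like(I_ref)], axis=1)
    (gain, bias), *_ = np.linalg.lstsq(A, I_cur, rcond=None)
    return gain, bias
```

In a full system the gain and bias would be estimated jointly with camera poses and depth inside the sliding window, rather than in a separate least-squares fit as shown here.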
This book proposes a complete pipeline for monocular (single-camera) 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. To further increase the accuracy of the resulting maps, a new method is presented that detects images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to substantially reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a simpler and more efficient solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards developing methods for efficient mapping of large areas (especially with the costs of map storage, transmission, and rendering in mind), an online 3D model simplification algorithm is proposed. This new algorithm has the advantage of selecting only those vertices that are geometrically representative of the scene.
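The crossover-detection idea can be sketched with a plain BoW histogram comparison (the book's training-free variant is not reproduced here; the word ids are assumed to come from some descriptor quantizer): each image becomes a normalized visual-word histogram, and image pairs with high cosine similarity are crossover candidates.

```python
import numpy as np

def histogram(word_ids, vocab_size):
    """Normalized visual-word histogram for one image."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    norm = np.linalg.norm(h)
    return h / norm if norm > 0 else h

def similarity(h1, h2):
    """Cosine similarity between two normalized histograms."""
    return float(h1 @ h2)
```

Pairs scoring above a threshold would then be verified geometrically and fed to global alignment as loop-closure constraints.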
The field of robotic vision has advanced dramatically in recent years with the development of new range sensors. Tremendous progress has been made, with significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques for understanding scenes from 2D/3D data, such as estimation of planar structures, recognition of multiple objects in a scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, and an approach to recognizing partially occluded objects. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, to improve positioning accuracy with a visual-servoing-based alignment strategy for microassembly, and to increase object recognition reliability using related manipulation motion models. For autonomous robot navigation, different vision-based localization and tracking strategies and algorithms are discussed. New approaches using probabilistic analysis for robot navigation, online learning of vision-based robot control, and 3D motion estimation via intensity differences from a monocular camera are described. This collection will be beneficial to graduate students, researchers, and professionals working in the area of robotic vision.