Local Model Predictive Control for Navigation of a Wheeled Mobile Robot Using Monocular Information
We present a visual servoing system for an amphibious legged robot. The monocular-vision-based servoing mechanism enables the robot to track and follow a target both underwater and on the ground. Three different algorithms are used to track and localize the target in the image, all based on color: the raw color of the target object, the target's color distribution, and a color distribution weighted by a probabilistic kernel. The tracker output is fed to a proportional-integral-derivative (PID) controller, which generates steering commands; the robot controller in turn converts these steering commands into motor commands for the robot's six legs. Allowing such a robot to follow a diver or some other moving target addresses a large class of significant applications. The system has been evaluated in open-water environments under natural lighting conditions, and has successfully tracked and followed a wide variety of target objects.
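The tracker-to-controller pipeline described above can be sketched as follows. This is a minimal illustration, not the book's implementation; the gains and the 0.1 s time step are illustrative assumptions, and the "error" is simply the tracked target's horizontal offset from the image center.

```python
# Minimal PID sketch: tracker output (target x-position in pixels) is turned
# into a steering command. Gains kp, ki, kd are illustrative, not from the book.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def steering_from_target(target_x, image_width, pid, dt=0.1):
    # Error = horizontal offset of the tracked target from the image center.
    error = target_x - image_width / 2.0
    return pid.step(error, dt)
```

In a full system this steering value would be mapped by the robot controller onto gait parameters for the six legs.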
Simultaneous Localization and Mapping (SLAM), comprising estimation of robot ego-motion and construction of a map of the surrounding environment, is one of the most fundamental tasks of mobile robotics. Many SLAM systems proposed in the past rely on the Global Positioning System (GPS), which renders them both expensive and overly dependent on the presence of a GPS signal. We propose an alternative, low-cost approach to portable SLAM based on monocular vision, a promising technique due to its flexibility, ease of use, and ease of calibration. To perform this task we use an Extended Kalman Filter (EKF), one of the most efficient and robust methods used in SLAM systems. We show how the estimated position can be improved and its uncertainty reduced by fusing data from different sensors, in particular a simple 3-axis accelerometer. We demonstrate, through careful selection and tuning of image-analysis algorithms, that real-time, low-cost SLAM is feasible. This work is useful to professionals developing SLAM systems and to readers in the wider field of computer vision, especially those interested in feature detection and tracking.
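The accelerometer fusion idea can be illustrated with a one-dimensional toy filter, not the book's full EKF: the accelerometer drives the prediction step and a vision-based position fix drives the update step. The state is [position, velocity]; for this linear toy model the EKF reduces to a standard Kalman filter, and the noise values q and r are illustrative assumptions.

```python
# 1-D Kalman-style fusion sketch. State x = [position, velocity],
# covariance P is a 2x2 matrix stored as nested lists.

def kf_predict(x, P, accel, dt, q=0.01):
    # Motion model driven by the accelerometer reading.
    pos = x[0] + x[1] * dt + 0.5 * accel * dt * dt
    vel = x[1] + accel * dt
    F = [[1.0, dt], [0.0, 1.0]]
    # P <- F P F^T + Q  (Q = q * I, an assumed process noise)
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)] for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + (q if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    return [pos, vel], P

def kf_update(x, P, z, r=0.1):
    # Vision measurement z observes position only: H = [1, 0].
    y = z - x[0]                        # innovation
    s = P[0][0] + r                     # innovation covariance
    k = [P[0][0] / s, P[1][0] / s]      # Kalman gain
    x = [x[0] + k[0] * y, x[1] + k[1] * y]
    P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
         [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x, P
```

The update step pulls the predicted position toward the visual fix and shrinks the position variance, which is exactly the uncertainty reduction the abstract describes.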
This book introduces a new strategy for mobile robot navigation. The complete navigation strategy is based on local landmark detection; however, the work presented here covers only the local navigation results. In this context, local artificial potential fields are used to attract the mobile robot towards a local goal that can act as a passage point and a featured landmark. To accomplish this objective, a simple perception system and reactive control behaviours were implemented and tested. Concretely, a single on-robot camera was used to infer the robot's immediate surroundings, from which a free approaching path was computed. Moreover, the proposed control strategy is based on online model predictive control techniques, where only short prediction horizons are considered in order to deal with reactive behaviours and dynamic environments.
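The attractive potential field mentioned above can be sketched in its most common quadratic form; this is an assumed formulation for illustration, not necessarily the book's exact one. The robot follows the negative gradient of U(p) = 0.5 * k * ||p - goal||^2 towards the local goal.

```python
# Quadratic attractive potential field sketch; gain k and step dt are
# illustrative assumptions.
def attractive_force(pos, goal, k=1.0):
    # Negative gradient of U(p) = 0.5 * k * ||p - goal||^2.
    return (k * (goal[0] - pos[0]), k * (goal[1] - pos[1]))

def step_towards_goal(pos, goal, k=1.0, dt=0.1):
    # Simple gradient-descent step: move along the attractive force.
    fx, fy = attractive_force(pos, goal, k)
    return (pos[0] + fx * dt, pos[1] + fy * dt)
```

Iterating the step shrinks the distance to the goal geometrically, which is what lets the local goal act as a passage point that the robot is drawn through.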
This book presents a hardware architecture for the Simultaneous Localization And Mapping (SLAM) problem applied to embedded robots. The architecture is composed of highly specialized modules for robot localization and feature-based map building from images obtained directly from CMOS cameras in real time. The system is completely embedded on a Field-Programmable Gate Array (FPGA) device, where several hardware-oriented optimizations are exploited. The main modules of the architecture are the Extended Kalman Filter (EKF) and the feature-detection system based on the SIFT (Scale-Invariant Feature Transform) algorithm. Additionally, the book presents basic concepts of mapping and state-of-the-art algorithms for SLAM with monocular and stereo vision.
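The difference-of-Gaussians (DoG) stage at the heart of SIFT-style feature detection, the kind of pipeline the book's FPGA modules implement in hardware, can be sketched in software. This is a pure-Python illustration with assumed sigmas and thresholds, not the book's hardware design.

```python
import math

def gaussian_kernel(sigma, radius):
    # Normalized 1-D Gaussian kernel.
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma):
    # Separable Gaussian blur: horizontal pass, then vertical pass,
    # with edge clamping at the image borders.
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    h, w = len(img), len(img[0])
    tmp = [[sum(k[r + radius] * img[y][min(max(x + r, 0), w - 1)]
                for r in range(-radius, radius + 1)) for x in range(w)] for y in range(h)]
    return [[sum(k[r + radius] * tmp[min(max(y + r, 0), h - 1)][x]
                 for r in range(-radius, radius + 1)) for x in range(w)] for y in range(h)]

def dog_keypoints(img, s1=1.0, s2=1.6, thresh=0.01):
    # DoG response = fine blur minus coarse blur; keypoints are local maxima.
    a, b = blur(img, s1), blur(img, s2)
    d = [[a[y][x] - b[y][x] for x in range(len(img[0]))] for y in range(len(img))]
    kps = []
    for y in range(1, len(d) - 1):
        for x in range(1, len(d[0]) - 1):
            v = d[y][x]
            if v > thresh and all(v >= d[y + dy][x + dx]
                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                  if (dy, dx) != (0, 0)):
                kps.append((x, y))
    return kps
```

In the FPGA architecture, the two blur passes and the extremum search map naturally onto pipelined hardware modules fed directly by the CMOS camera stream.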
This manuscript addresses the problem of obstacle avoidance for semi-autonomous and autonomous terrestrial platforms in dynamic and unknown environments. Based on monocular vision, it proposes a set of tools that continuously monitors the way forward, providing appropriate road information in real time. Taking into account the temporal coherence between consecutive frames, a new Dynamic Power Management methodology is proposed and applied to robotic visual perception; it includes a new environment-observer method that optimizes the energy consumed by the visual machine. A remarkable characteristic of these methodologies is their independence from the image-acquisition system and from the robot itself. This real-time perception system has been evaluated on several test benches as well as on real data obtained from two intelligent platforms. In semi-autonomous tasks, tests were conducted at speeds above 100 km/h; autonomous displacements were also carried out successfully.
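The temporal-coherence idea behind the power management can be sketched with an assumed gating mechanism (not the manuscript's exact policy): when consecutive frames barely differ, the expensive perception stage is skipped and the processor can remain in a low-power state.

```python
# Dynamic-power-management sketch: gate the perception stage on frame change.
# The 0.05 change threshold is an illustrative assumption.
def frame_change(prev, curr):
    # Mean absolute pixel difference between two equally sized frames.
    n = len(prev) * len(prev[0])
    return sum(abs(curr[y][x] - prev[y][x])
               for y in range(len(prev)) for x in range(len(prev[0]))) / n

def should_process(prev, curr, thresh=0.05):
    # Run full perception only when the scene changed noticeably.
    return prev is None or frame_change(prev, curr) > thresh
```

Because the gate looks only at raw frames, it is independent of both the image-acquisition system and the robot, matching the independence property the abstract highlights.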
Vision-based inference for obstacle avoidance by mobile robots is one of the most active areas in both Computer Vision and Robotics. Computer vision, and more specifically vision for intelligent machines, enables mobile robots to perceive the external world 'with wisdom'; vision-based obstacle avoidance has therefore become a major research area of robotics. Estimating the motion path and predicting the motion behaviour of a dynamic object with only a single camera (monocular vision) is a real challenge. This can be done by analyzing a sequence of image frames extracted from a live video stream, but the analysis must be extremely fast, since a decision reached within a reasonably short response time is the only safeguard for the robot and for others in its environment. We therefore postulate a fuzzy-mathematical model, an artificial-intelligence approach to this objective, which offers a significant advantage in terms of simplicity (reduced complexity) together with efficiency (minimized computational overhead and resource consumption) over conventional mathematical modelling.
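A toy fuzzy-inference rule base, illustrative only and not the book's actual model, shows why such controllers are computationally cheap: the apparent size of a tracked obstacle in the image is fuzzified into "near"/"far" memberships, and a weighted-average defuzzification yields a braking factor in [0, 1] with a handful of arithmetic operations per frame.

```python
# Toy fuzzy controller; the 0.1-0.5 membership ramp is an assumed shape.
def mu_near(size_ratio):
    # Membership of 'obstacle is near', from obstacle size / image size.
    return min(1.0, max(0.0, (size_ratio - 0.1) / 0.4))

def mu_far(size_ratio):
    return 1.0 - mu_near(size_ratio)

def brake_command(size_ratio):
    # Rule 1: IF near THEN brake hard (1.0); Rule 2: IF far THEN cruise (0.0).
    # Weighted-average (centroid-style) defuzzification.
    near, far = mu_near(size_ratio), mu_far(size_ratio)
    return (near * 1.0 + far * 0.0) / (near + far)
```

The whole decision reduces to a few comparisons and multiplications, which is the "reduced complexity, minimized overhead" property the abstract claims for the fuzzy approach.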
Vision-based control of wheeled mobile robots is an interesting field of research from a scientific and even social point of view due to its potential applicability. This book presents a formal treatment of some aspects of control theory applied to vision-based pose regulation of wheeled mobile robots. In this problem, the robot has to reach a desired position and orientation, which are specified by a target image. The problem is addressed in such a way that vision and control are unified to achieve stability of the closed loop, a large region of convergence without local minima, and good robustness against parametric uncertainty. Three different control schemes that rely on monocular vision as the only sensor are presented and evaluated experimentally. A common benefit of these approaches is that they are valid for any imaging system that approximately obeys a central projection model, e.g., conventional cameras, catadioptric systems, and some fisheye cameras; the presented control schemes are thus generic. A minimum set of visual measurements, integrated into adequate task functions, is taken from a geometric constraint imposed between corresponding image features. In particular, the epipolar geometry and the trifocal tensor are exploited, since they apply to generic scenes. A detailed experimental evaluation is presented for each control scheme.
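The epipolar constraint that such controllers exploit can be checked with a small numeric example; the camera setup here is an illustrative assumption, not tied to the book's experiments. With normalized image coordinates and extrinsics [R | t] = [I | t], the essential matrix is E = [t]_x and corresponding points satisfy x2^T E x1 = 0.

```python
# Epipolar-constraint sketch: verify x2^T E x1 = 0 for two views related
# by a pure translation t = (1, 0, 0) with extrinsics [I | t].
def skew(t):
    # Skew-symmetric matrix [t]_x such that [t]_x v = t x v.
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def epipolar_residual(x1, x2, E):
    # x2^T E x1 for homogeneous normalized image points.
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

# World point X seen by camera 1 ([I | 0]) and camera 2 ([I | t]).
X = (2.0, 1.0, 5.0)
t = (1.0, 0.0, 0.0)
x1 = (X[0] / X[2], X[1] / X[2], 1.0)              # projection in camera 1
x2 = ((X[0] + t[0]) / X[2], X[1] / X[2], 1.0)     # projection in camera 2
E = skew(t)
```

In the control schemes, measurements derived from this constraint (and its three-view analogue, the trifocal tensor) feed the task functions that drive the robot to the pose encoded by the target image.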