Amarasinghe, Dilan (2008) Multisensor based environment modelling and control applications for mobile robots. Doctoral (PhD) thesis, Memorial University of Newfoundland.
Accepted Version
Available under License - The author retains copyright ownership and moral rights in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.
For autonomous operation, a mobile robot requires three key functionalities: (a) knowledge of the structure of the world in which it operates, (b) the ability to navigate autonomously between positions using path-planning algorithms, and (c) the ability to localize itself precisely for task execution. This thesis addresses issues related to the first and third requirements. Knowledge of the structure of the environment can be represented in several forms, such as 3D models, 2D wall plans, 2D plans of landmarks, and the positions and velocities of moving objects. Efficient navigation and obstacle-avoidance methods are often aided by information about the environment in any of these forms. At the end of each navigation task the robot must execute an assigned task, such as pick-and-place or parking; in most cases these tasks require precise localization of the robot, where the degree of precision depends on the task specification.

Taking these functions into consideration, this thesis addresses learning the structure of the world by constructing a visual landmark map of static landmarks, and additionally provides a solution to the precise localization problem of the mobile robot using a vision-based hybrid controller. On the subject of the visual landmark map, the thesis describes a landmark position measurement system using an integrated laser-camera sensor. A traditional laser range finder can detect only landmarks that appear as direction-invariant features in the laser data; processes that depend on such features, including navigation and simultaneous localization and mapping (SLAM) algorithms, fail to function in their absence. In many instances, however, it is possible to find a larger number of visually salient landmarks using computer vision.
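The integration idea, taking the bearing of a visually salient feature from the camera and the corresponding depth from the laser, can be sketched as follows. The function name, the linear pixel-to-bearing mapping, and the co-located, pre-calibrated sensor geometry are all illustrative assumptions, not the calibration model developed in the thesis:

```python
import math

def landmark_position(pixel_u, image_width, hfov_deg, laser_ranges, laser_fov_deg):
    """Estimate a visual landmark's 2D position (sensor frame: x forward,
    y left) by pairing the camera bearing of an image feature with the
    laser range measured along that bearing.

    Simplifying assumptions (illustration only): a pinhole camera with a
    linear pixel-to-bearing mapping, and a laser range finder co-located
    and pre-calibrated with the camera.
    """
    # Bearing of the feature from its pixel column; positive to the left.
    bearing = math.radians((image_width / 2 - pixel_u) / (image_width / 2)
                           * (hfov_deg / 2))

    # Pick the laser beam whose angle is closest to the camera bearing.
    n = len(laser_ranges)
    angle_min = -math.radians(laser_fov_deg) / 2
    angle_inc = math.radians(laser_fov_deg) / (n - 1)
    beam = min(max(round((bearing - angle_min) / angle_inc), 0), n - 1)

    r = laser_ranges[beam]  # the laser supplies the depth the camera lacks
    return r * math.cos(bearing), r * math.sin(bearing)
```

The camera alone fixes only the ray to the feature; intersecting that ray with the laser's range measurement recovers the full landmark position that an EKF-based SLAM filter can consume.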
Calculating the depth of a visual feature is non-trivial because depth information is lost in the camera's projective sensor model. Considering the drawbacks and limitations of the laser and the camera as individual sensors, this thesis proposes a novel integrated-sensor method to calculate the positions of visual features. In addition, a comprehensive experimental analysis is presented to verify the sensor integration method within an EKF-based SLAM algorithm.

For effective operation of a robot's SLAM algorithm, it is necessary to identify dynamic objects in the environment. To achieve this objective, a novel, robust technique for detecting moving objects using a laser range finder mounted on a mobile robot is presented. After initial alignment of two consecutive laser scans, each laser reading is segmented and classified by object type: stationary, non-stationary, or indeterminate. The laser-reading segments are then analyzed by an algorithm that maximally recovers the moving objects, i.e., recovers all possible laser readings belonging to them. The algorithm is verified experimentally by detecting a walking person from a moving robot.

Finally, a novel vision-based hybrid controller for parking mobile robots is proposed. Parking, or docking, is an essential behavioural unit for autonomous robots. The proposed hybrid controller comprises a discrete-event controller that changes the direction of travel and a pixel-error-driven proportional controller that generates the continuous motion commands. At the velocity-control level, the robot is driven by a built-in PID control system. The feedback system uses image-plane measurements in pixel units to perform image-based visual servoing (IBVS). The constraints imposed by the nonholonomic nature of the robot and the camera's limited field of view are taken into account in the design of the IBVS-based controller.
The controller continuously compares the current view of the parking station against a reference view until the desired parking condition is achieved. A comprehensive analysis is provided to prove the convergence of the proposed method. Once the parking behaviour is invoked, the robot can start from any arbitrary position and park successfully, provided the parking station is initially in its field of view. Because the method is purely vision based, the hybrid controller requires no position information (localization) for the robot. Several experiments with a Pioneer 3AT robot validate the method; the experimental system achieves the parking state and aligns laterally within 1 cm of the target pose.
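A minimal sketch of one cycle of such a hybrid law follows, assuming a single tracked feature, a hypothetical edge-of-view reversal rule, and illustrative gains and thresholds; the thesis's actual controller and its convergence analysis are more involved:

```python
def ibvs_parking_step(feature_u, target_u, image_width, k_p=0.005,
                      v_forward=0.1, state="FORWARD"):
    """One control cycle of a simplified vision-based parking controller:
    a proportional law on the image-plane pixel error sets the turn rate,
    while a discrete-event supervisor flips the direction of travel when
    the feature drifts toward the edge of the field of view.

    All gains and thresholds are illustrative, not the thesis values.
    Returns (linear velocity, angular velocity, new discrete state).
    """
    error = target_u - feature_u   # feedback is a pixel error, not a pose
    omega = k_p * error            # continuous proportional steering command

    # Discrete-event layer: reverse near the image border so the
    # nonholonomic robot can re-acquire a feasible approach without
    # losing the parking station from view.
    margin = 0.1 * image_width
    if feature_u < margin or feature_u > image_width - margin:
        state = "REVERSE" if state == "FORWARD" else "FORWARD"

    v = v_forward if state == "FORWARD" else -v_forward
    return v, omega, state
```

The key design point the sketch preserves is that all feedback lives in the image plane: the loop runs on pixel coordinates alone, which is why the controller needs no localization of the robot.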
Item Type: Thesis (Doctoral (PhD))
Additional Information: Includes bibliographical references (leaves 143-160).
Department(s): Engineering and Applied Science, Faculty of
Library of Congress Subject Heading: Genetic algorithms; Mobile robots--Automatic control; Navigation