Real Time Mini-Robot Using Improved Q-learning

Authors

  • Mohannad Abid Shehab Ahmed, Electrical Engineering Department, Al-Mustansiriyah University, Baghdad, Iraq

Keywords:

Reinforcement Learning, Q-Learning, Mobile Robot, 89C52 MCU

Abstract

Task planning becomes easier for a robot when it has the requisite knowledge about its world and the ability to improve itself. In many artificial intelligence research areas, such as robot navigation, path planning, and autonomous control, features must be extracted precisely from the environment to find the shortest obstacle-free path and to smooth that path. The choice of path depends on many variables, such as the random position and movement of obstacles, changes in obstacle speed, the robot's size, and variations in the robot's speed. Scaling robots down to miniature size introduces further challenges, including memory and program-size limitations, low processor performance, and limited power autonomy. Simplified Q-learning addresses these problems because it learns the robot's behavior online and in real time. In this paper, numerically efficient methods (a sparse reward function and a directed explorer) are presented and added to the simplified Q-learning scheme to make its operation self-improving with respect to the number of trials, the task time, and the hazard encountered, and thereby to reduce the number of states, actions, and the overall time. The overall analysis results in an accurate and numerically stable method for improving Q-learning.
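For orientation only, the sketch below illustrates a tabular Q-learning update combined with a sparse reward and a simple count-based "directed" exploration rule, in the spirit of the methods the abstract names. The state/action dimensions, reward values, learning parameters, and function names (sparse_reward, choose_action, q_update) are assumptions for illustration and are not the paper's implementation on the 89C52.

```c
/*
 * Illustrative sketch only: tabular Q-learning with a sparse reward and a
 * count-based directed-exploration rule. Sizes, reward values, and names
 * are assumptions, not the paper's actual MCU implementation.
 */
#include <stdio.h>

#define N_STATES  16   /* assumed coarse grid of sensor states            */
#define N_ACTIONS 4    /* assumed actions: forward, back, left, right     */

static float    q[N_STATES][N_ACTIONS];       /* Q-table                   */
static unsigned visits[N_STATES][N_ACTIONS];  /* visit counts (exploration) */

/* Sparse reward: nonzero only on collision or at the goal (assumed values). */
static float sparse_reward(int collided, int at_goal)
{
    if (collided) return -1.0f;
    if (at_goal)  return  1.0f;
    return 0.0f;
}

/* Directed exploration: prefer the least-tried action in this state,
 * breaking ties by the current Q-value. */
static int choose_action(int s)
{
    int a, best = 0;
    for (a = 1; a < N_ACTIONS; ++a) {
        if (visits[s][a] < visits[s][best] ||
            (visits[s][a] == visits[s][best] && q[s][a] > q[s][best]))
            best = a;
    }
    return best;
}

/* One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). */
static void q_update(int s, int a, float r, int s_next, float alpha, float gamma)
{
    float max_next = q[s_next][0];
    int a2;
    for (a2 = 1; a2 < N_ACTIONS; ++a2)
        if (q[s_next][a2] > max_next) max_next = q[s_next][a2];
    q[s][a] += alpha * (r + gamma * max_next - q[s][a]);
    visits[s][a]++;
}

int main(void)
{
    /* Toy demonstration of a single update with assumed parameters. */
    int s = 3, s_next = 4;
    int a = choose_action(s);
    float r = sparse_reward(0 /* no collision */, 0 /* not at goal */);
    q_update(s, a, r, s_next, 0.5f /* alpha */, 0.9f /* gamma */);
    printf("Q[%d][%d] = %f\n", s, a, q[s][a]);
    return 0;
}
```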

Key Dates

Published

2011-09-01

How to Cite

Real Time Mini-Robot Using Improved Q-learning. (2011). Journal of Engineering and Sustainable Development, 15(3), 14-27. https://jeasd.uomustansiriyah.edu.iq/index.php/jeasd/article/view/1330
