nips2005-143: reference knowledge graph
Author: Urs Muller, Jan Ben, Eric Cosatto, Beat Flepp, Yann LeCun
Abstract: We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50 cm off-road truck with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.
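To make the end-to-end idea concrete, here is a minimal PyTorch sketch of a convolutional network that maps a stereo pair of low-resolution images to a scalar steering angle and is trained by regression against human-provided steering labels. This is not the authors' code: the layer sizes, the 60x80 resolution, the tanh nonlinearities, and the name SteeringNet are illustrative assumptions, and the paper's actual 6-layer architecture and training procedure differ.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Stereo pair: two RGB images stacked along the channel axis -> 6 channels.
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=2), nn.Tanh(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.Tanh(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.Tanh(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 8, 100),  # 64 x 5 x 8 feature map for 60x80 inputs
            nn.Tanh(),
            nn.Linear(100, 1),           # scalar steering angle
        )

    def forward(self, left, right):
        x = torch.cat([left, right], dim=1)  # (N, 6, H, W)
        return self.head(self.features(x))

# One supervised training step: minimize the squared error between the
# network's output and the human driver's recorded steering angle.
model = SteeringNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

left = torch.rand(8, 3, 60, 80)    # batch of low-resolution left frames
right = torch.rand(8, 3, 60, 80)   # matching right frames
angle = torch.rand(8, 1) * 2 - 1   # human steering labels, scaled to [-1, 1]

pred = model(left, right)
loss = loss_fn(pred, angle)
opt.zero_grad()
loss.backward()
opt.step()

Stacking the left and right frames along the channel axis lets the first convolutional layer learn stereo disparity cues directly from pixels, which is the point of skipping any explicit stereo or obstacle-detection pipeline.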
[1] N. Ayache and O. Faugeras. Maintaining representations of the environment of a mobile robot. IEEE Trans. Robotics and Automation, 5(6):804–819, 1989.
[2] C. Bergh, B. Kennedy, L. Matthies, and A. Johnson. A compact, low power two-axis scanning laser rangefinder for mobile robots. In The 7th Mechatronics Forum International Conference, 2000.
[3] S. B. Goldberg, M. Maimone, and L. Matthies. Stereo vision and rover navigation software for planetary exploration. In IEEE Aerospace Conference Proceedings, March 2002.
[4] T. Jochem, D. Pomerleau, and C. Thorpe. Vision-based neural network road and intersection detection and traversal. In Proc. IEEE Conf. Intelligent Robots and Systems, volume 3, pages 344–349, August 1995.
Figure 4: Snapshots from the left camera while the robot drives itself through various environments. The black bar beneath each image indicates the steering angle produced by the system. Top row: four successive snapshots showing the robot navigating through a narrow passageway between a trailer, a backhoe, and some construction material. Bottom row: narrow obstacles such as table legs and poles (left), and solid obstacles such as fences (center-left), are easily detected and avoided. Highly textured objects on the ground do not distract the system from the correct response (center-right). One scenario where the vehicle occasionally made wrong decisions is when the sun is in the field of view: the system seems to systematically drive toward the sun whenever the sun is low on the horizon (right). Videos of these sequences are available at http://www.cs.nyu.edu/~yann/research/dave/index.html.
[5] A. Kelly and A. Stentz. Stereo vision enhancements for low-cost outdoor autonomous vehicles. In International Conference on Robotics and Automation, Workshop WS-7, Navigation of Outdoor Autonomous Vehicles, (ICRA ’98), May 1998.
[6] D.J. Kriegman, E. Triendl, and T.O. Binford. Stereo vision and navigation in buildings for mobile robots. IEEE Trans. Robotics and Automation, 5(6):792–803, 1989.
[7] E. Krotkov and M. Hebert. Mapping and positioning for a prototype lunar rover. In Proc. IEEE Int’l Conf. Robotics and Automation, pages 2913–2919, May 1995.
[8] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In G. Orr and K. Muller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[10] Y. LeCun, F.-J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of CVPR'04. IEEE Press, 2004.
[11] L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, and T. Litwin. Mars microrover navigation: Performance evaluation and enhancement. In Proc. IEEE Int’l Conf. Intelligent Robots and Systems, volume 1, pages 433–440, August 1995.
[12] M. Osadchy, M. Miller, and Y. LeCun. Synergistic face detection and pose estimation with energy-based models. In Advances in Neural Information Processing Systems (NIPS 2004). MIT Press, 2005.
[13] D. A. Pomerleau. Knowledge-based training of artificial neural networks for autonomous robot driving. In J. Connell and S. Mahadevan, editors, Robot Learning. Kluwer Academic Publishers, 1993.
[14] C. Thorpe, M. Hebert, T. Kanade, and S. Shafer. Vision and navigation for the Carnegie-Mellon Navlab. IEEE Trans. Pattern Analysis and Machine Intelligence, 10(3):362–372, May 1988.
[15] S. Thrun. Learning metric-topological maps for indoor mobile robot navigation. Artificial Intelligence, 99(1):21–71, February 1998.