
Binocular vision-based 3-D trajectory following for autonomous robotic manipulation

Published online by Cambridge University Press:  01 September 2007

Wen-Chung Chang*
Affiliation:
Department of Electrical Engineering, National Taipei University of Technology, NTUT Box 2125, Taipei 106, Taiwan, R.O.C.
Corresponding author. E-mail: wchang@ee.ntut.edu.tw

Summary

Robotic manipulators interacting with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using available features in those environments. Specifically, to perform 3-D trajectory following operations employing binocular vision, it appears necessary to have a priori knowledge of pointwise correspondence information between the two image planes. However, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed that does not assume pointwise binocular correspondence information. The proposed control approach can guarantee task precision while employing only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator to follow a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. The approach is validated in a real task environment through experiments with an industrial robotic manipulator.
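The paper's specific encoded error is not reproduced here, but the summary's notion of an image-based feedback control law can be illustrated with the classical image-based visual servoing (IBVS) scheme it builds upon: image-plane feature errors are mapped to a camera velocity command through the pseudoinverse of a stacked interaction matrix. The following is a minimal sketch of that standard scheme, not the author's method; the function names, the fixed gain, and the assumed known point depths are all illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Standard IBVS interaction matrix for one image point (x, y)
    # in normalized image coordinates at depth Z: it relates the
    # 6-D camera velocity screw to the point's image-plane velocity.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Velocity command v = -gain * pinv(L) @ (s - s*): stack the
    # per-point interaction matrices (features from both cameras of
    # a binocular rig can be stacked the same way) and drive the
    # image error exponentially toward zero.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features, dtype=float)
         - np.asarray(desired, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# When the observed features coincide with the desired ones,
# the image error is zero and so is the commanded velocity.
v = ibvs_velocity([(0.1, 0.0), (0.0, 0.1)],
                  [(0.1, 0.0), (0.0, 0.1)],
                  [1.0, 1.0])
```

The contribution summarized above goes further than this baseline: it replaces the pointwise image error with an encoded error that tolerates missing binocular point correspondences and only approximate stereo calibration.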

Type
Article
Copyright
Copyright © Cambridge University Press 2007

