Learning-Based Robot Vision

Principles and Applications

  • Book
  • © 2001

Overview

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2048)


Table of contents (5 chapters)

Keywords

About this book

Industrial robots carry out simple tasks in customized environments for which it is typical that nearly all effector movements can be planned during an off-line phase. Continual control based on sensory feedback is necessary at most at effector positions near target locations, utilizing torque or haptic sensors. It is desirable to develop new-generation robots showing higher degrees of autonomy for solving high-level deliberate tasks in natural and dynamic environments. Obviously, camera-equipped robot systems, which take and process images and make use of the visual data, can solve more sophisticated robotic tasks. The development of a (semi-) autonomous camera-equipped robot must be grounded on an infrastructure, based on which the system can acquire and/or adapt task-relevant competences autonomously. This infrastructure consists of technical equipment to support the presentation of real-world training samples, various learning mechanisms for automatically acquiring function approximations, and testing methods for evaluating the quality of the learned functions. Accordingly, to develop autonomous camera-equipped robot systems one must first demonstrate relevant objects, critical situations, and purposive situation-action pairs in an experimental phase prior to the application phase. Second, the learning mechanisms are responsible for acquiring image operators and mechanisms of visual feedback control based on supervised experiences in the task-relevant, real environment. This paradigm of learning-based development leads to the concepts of compatibilities and manifolds. Compatibilities are general constraints on the process of image formation which hold more or less under task-relevant or accidental variations of the imaging conditions.
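To make the two-phase paradigm concrete, the following is a minimal illustrative sketch (not the book's algorithm): demonstrated situation-action pairs from an experimental phase are used to fit a simple linear function approximation, which is then applied to new visual situations in the application phase. The function names, feature vectors, and least-squares model are assumptions chosen for brevity.

```python
# Illustrative only: supervised learning of a situation-action mapping.
# The linear least-squares model and all names below are hypothetical,
# not the method described in the book.
import numpy as np

def fit_situation_action_map(situations, actions):
    """Fit a linear map from image-derived feature vectors to effector actions.

    situations: (N, d) feature vectors gathered in the experimental phase.
    actions:    (N, k) purposive actions demonstrated for those situations.
    Returns a weight matrix of shape (d + 1, k), including a bias term.
    """
    X = np.hstack([situations, np.ones((situations.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)                 # least-squares fit
    return W

def predict_action(W, situation):
    """Apply the learned function approximation to a new visual situation."""
    x = np.append(situation, 1.0)
    return x @ W

# Experimental (demonstration) phase: hypothetical training pairs.
demo_situations = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
demo_actions    = np.array([[1.0], [0.2], [-0.7]])

W = fit_situation_action_map(demo_situations, demo_actions)

# Application phase: use the learned mapping on a newly observed situation.
print(predict_action(W, np.array([0.5, 0.4])))
```

In practice the book's framework also covers testing methods for evaluating the learned functions; in this sketch that would amount to checking prediction error on held-out demonstration pairs.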

Authors and Affiliations

  • Christian-Albrechts-Universität zu Kiel, Institut für Informatik und Praktische Mathematik, Kiel, Germany

    Josef Pauli

Bibliographic Information
