Robot learning is an exciting and interdisciplinary field. This state is reflected in the range and form of the papers presented here. Techniques that have become well established in robot learning are present: evolutionary methods, neural network approaches, and reinforcement learning, as are techniques from control theory, logic programming, and Bayesian statistics. It is notable that in many of the papers presented in this volume several of these techniques are employed in conjunction. In papers by Nehmzow, Grossmann and Quoy, neural networks are used to provide landmark-based representations of the environment, but different techniques are used in each paper to make inferences based on these representations. Biology continues to provide inspiration for the robot learning researcher. In their paper Peter Eggenberger et al. borrow ideas about the role of neuromodulators in switching neural circuits. These are combined with standard techniques from artificial neural networks and evolutionary computing to provide a powerful new algorithm for evolving robot controllers. In the final paper in this volume Bianco and Cassinis combine observations about the navigation behaviour of insects with techniques from control theory to produce their visual landmark learning system.
Hopefully this convergence of engineering and biological approaches will continue. A rigorous understanding of the ways techniques from these very different disciplines can be fused is an important challenge if progress is to continue. All these papers are also testament to the utility of using robots to study intelligence and adaptive behaviour.
Among the topics addressed in these papers are map building for robot navigation, multi-task reinforcement learning, neural network approaches, example-based learning, situated agents, planning maps for mobile robots, path finding, autonomous robots, and biologically inspired approaches.
Map Building through Self-Organisation for Robot Navigation
Learning a Navigation Task in Changing Environments by Multi-task Reinforcement Learning
Toward Seamless Transfer from Simulated to Real Worlds: A Dynamically-Rearranging Neural Network Approach
How Does a Robot Find Redundancy by Itself?
Learning Robot Control by Relational Concept Induction with Iteratively Collected Examples
Reinforcement Learning in Situated Agents: Theoretical Problems and Practical Solutions
A Planning Map for Mobile Robots: Speed Control and Paths Finding in a Changing Environment
Probabilistic and Count Methods in Map Building for Autonomous Mobile Robots
Biologically-Inspired Visual Landmark Learning for Mobile Robots
Springer Book Archives
Includes supplementary material: sn.pub/extras
GPSR Compliance
The European Union's (EU) General Product Safety Regulation (GPSR) is a set of rules that requires consumer products to be safe and defines our obligations to ensure this.
If you have any concerns about our products, you can contact us at ProductSafety@springernature.com.
If the publisher is established outside the EU, the EU authorised representative is:
Springer Nature Customer Service Center GmbH
Europaplatz 3
69115 Heidelberg, Germany
ProductSafety@springernature.com
Product details
ISBN
9783540411628
Published
2000-10-11
Publisher
Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
Height
233 mm
Width
155 mm
Audience level
Research, UU, UP, P, 05, 06
Language
English
Format
Paperback