Learning with Recurrent Neural Networks

  • Book
  • © 2000

Overview

  • The book details a new approach, folding networks, which enables neural networks to deal with symbolic data
  • It presents both practical applications and a precise theoretical foundation

Part of the book series: Lecture Notes in Control and Information Sciences (LNCIS, volume 254)


Table of contents (6 chapters)

About this book

Folding networks, a generalisation of recurrent neural networks to tree-structured inputs, are investigated as a mechanism to learn regularities in, for example, classical symbolic data. The architecture, the training mechanism, and several applications in different areas are explained. Afterwards, a theoretical foundation is presented, proving that the approach is in principle appropriate as a learning mechanism. The universal approximation ability is investigated, including several new results for standard recurrent neural networks, such as explicit bounds on the required number of neurons and the super-Turing capability of sigmoidal recurrent networks. The information-theoretical learnability is examined, including several contributions to distribution-dependent learnability, an answer to an open question posed by Vidyasagar, and a generalisation of the recent luckiness framework to function classes. Finally, the complexity of training is considered, including new results on the loading problem for standard feedforward networks with an arbitrary multilayered architecture, a correlated number of neurons and training-set size, a varying number of hidden neurons but fixed input dimension, or the sigmoidal activation function, respectively.
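The core mechanism described above can be sketched in a few lines: a folding network computes a fixed-dimensional code for a tree by recursively applying one encoding map to each node's label and the codes of its subtrees. The sketch below is a minimal illustration under assumptions not taken from the book: binary trees, real-valued node labels, a 4-dimensional internal code, and small random weights; all names (`W_label`, `encode`, etc.) are hypothetical.

```python
import numpy as np

# Minimal sketch of a folding (recursive) network on binary trees.
# Dimensions, weights, and tree encoding are illustrative assumptions,
# not the book's concrete construction.

rng = np.random.default_rng(0)
DIM = 4                                   # dimension of the internal code
W_label = rng.normal(size=(DIM, 1)) * 0.5 # weights for the node label
W_left = rng.normal(size=(DIM, DIM)) * 0.5
W_right = rng.normal(size=(DIM, DIM)) * 0.5
b = np.zeros(DIM)
EMPTY = np.zeros(DIM)                     # code assigned to the empty tree

def encode(tree):
    """Fold a tree (label, left, right) into a DIM-dimensional code.

    The same map is applied at every node, so trees of arbitrary
    shape and depth are reduced to one fixed-size vector.
    """
    if tree is None:
        return EMPTY
    label, left, right = tree
    pre = (W_label @ np.array([label])
           + W_left @ encode(left)
           + W_right @ encode(right)
           + b)
    return np.tanh(pre)                   # sigmoidal activation

# A small binary tree: root 1.0 with leaf children 2.0 and 3.0.
tree = (1.0, (2.0, None, None), (3.0, None, None))
code = encode(tree)
print(code.shape)  # (4,)
```

A sequence is the special case of a degenerate tree with only left children, which is why this construction generalises standard recurrent networks.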

Bibliographic Information

  • Book Title: Learning with Recurrent Neural Networks

  • Author: Barbara Hammer

  • Series Title: Lecture Notes in Control and Information Sciences

  • DOI: https://doi.org/10.1007/BFb0110016

  • Publisher: Springer London

  • eBook Packages: Springer Book Archive

  • Copyright Information: Springer-Verlag London 2000

  • Softcover ISBN: 978-1-85233-343-0 (Published: 30 May 2000)

  • eBook ISBN: 978-1-84628-567-7 (Published: 03 October 2007)

  • Series ISSN: 0170-8643

  • Series E-ISSN: 1610-7411

  • Edition Number: 1

  • Number of Pages: 150

  • Topics: Control, Robotics, Mechatronics
