Neural Representations of Natural Language

  • Book
  • © 2019

Overview

  • Enriches readers’ understanding of how neural networks create a machine-interpretable representation of the meaning of natural language
  • Absolutely packed with useful insights drawn from experience using and implementing these algorithms
  • Includes two introductory chapters on neural networks, allowing novice readers to quickly understand how machine learning is revolutionizing the field of natural language processing

Part of the book series: Studies in Computational Intelligence (SCI, volume 783)

Table of contents (6 chapters)

About this book

This book offers an introduction to modern natural language processing using machine learning, focusing on how neural networks create a machine-interpretable representation of the meaning of natural language. Language is crucially linked to ideas – as Webster’s 1923 “English Composition and Literature” puts it: “A sentence is a group of words expressing a complete thought”. The representation of sentences, and of the words that make them up, is therefore vital in advancing artificial intelligence and other “smart” systems currently being developed.

Providing an overview of research in the area, from Bengio et al.’s seminal 2003 work on a “Neural Probabilistic Language Model” to the latest techniques, the book enables readers to understand how the techniques relate to one another and which are best suited to their purposes. As well as an introduction to neural networks in general and recurrent neural networks in particular, it details the methods used for representing words, senses of words, and larger structures such as sentences or documents. The book highlights practical implementations and discusses many aspects that are often overlooked or misunderstood, including thorough instruction on challenging areas such as hierarchical softmax and negative sampling, so that readers fully understand how the algorithms function.

Combining practical aspects with a more traditional review of the literature, the book is directly applicable to a broad readership. It is an invaluable introduction for early graduate students working in natural language processing; a trustworthy guide for industry developers wishing to make use of recent innovations; and a sturdy bridge for researchers already familiar with linguistics or machine learning who wish to understand the other field.
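To give a flavour of one technique the blurb names, the following is a minimal sketch (not taken from the book) of the skip-gram negative-sampling objective for a single word–context pair. The vocabulary size, embedding dimension, word indices, and randomly initialised vectors are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, purely illustrative: 10-word vocabulary, 4-dimensional embeddings.
vocab_size, dim = 10, 4
in_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))   # "input" (center-word) vectors
out_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))  # "output" (context-word) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context, negatives):
    """Skip-gram negative-sampling loss for one (center, context) pair.

    Rather than normalising over the whole vocabulary (a full softmax),
    the objective pushes up the score of the true pair and pushes down
    the scores of k randomly drawn "negative" words.
    """
    pos = sigmoid(in_vecs[center] @ out_vecs[context])        # true-pair probability
    negs = sigmoid(-in_vecs[center] @ out_vecs[negatives].T)  # per-negative probabilities
    return -(np.log(pos) + np.log(negs).sum())

# Hypothetical word indices: center word 1, observed context word 2,
# and three sampled negative words.
loss = neg_sampling_loss(center=1, context=2, negatives=[3, 7, 9])
```

In a full implementation this loss would be minimised by gradient descent over a corpus of (center, context) pairs; the sketch only shows why the objective is cheap to evaluate compared with a softmax over the entire vocabulary.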


Authors and Affiliations

  • Department of Electrical, Electronic and Computer Engineering, School of Engineering, Faculty of Engineering and Mathematical Sciences, The University of Western Australia, Perth, Australia

    Lyndon White, Roberto Togneri

  • Department of Computer Science and Software Engineering, School of Physics, Mathematics and Computing, Faculty of Engineering and Mathematical Sciences, The University of Western Australia, Perth, Australia

    Wei Liu, Mohammed Bennamoun
