Overview
- An explicit and thorough treatment of conjugate gradient algorithms for unconstrained optimization, including their properties and convergence
- A clear illustration of the numerical performance of the algorithms described in the book
- Provides a deep analysis of the performance of the algorithms
- Maximizes the reader’s insight into the implementation of the conjugate gradient methods in professional computing programs
Part of the book series: Springer Optimization and Its Applications (SOIA, volume 158)
About this book
Two approaches are known for solving large-scale unconstrained optimization problems: limited-memory quasi-Newton (and truncated Newton) methods, and conjugate gradient methods. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons to the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include: linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid methods, modifications of the standard scheme, memoryless BFGS-preconditioned methods, and three-term methods. Other conjugate gradient methods, based on clustering the eigenvalues or on minimizing the condition number of the iteration matrix, are also treated. For each method, the convergence analysis, the computational performance, and comparisons with other conjugate gradient methods are given.
The theory behind the conjugate gradient algorithms, presented as a methodology, is developed with a clear, rigorous, and friendly exposition; the reader will gain an understanding of their properties and convergence behavior and will learn to develop and prove the convergence of his or her own methods. Numerous numerical studies are supplied, with comparisons and comments on the behavior of conjugate gradient algorithms for solving a collection of 800 unconstrained optimization problems of different structures and complexities, with the number of variables in the range [1000, 10000]. The book is addressed to all those interested in developing and using new advanced techniques for solving complex unconstrained optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering and industry, as well as graduate students in mathematics and Ph.D. and master's students in mathematical programming, will find plenty of information and practical applications for solving large-scale unconstrained optimization problems by conjugate gradient methods.
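As a taste of the standard scheme the book analyzes, the following is a minimal sketch of a nonlinear conjugate gradient iteration, assuming the Fletcher-Reeves update and a simple backtracking (Armijo) line search; the function and parameter names here are illustrative and are not taken from the book's own codes.

```python
import numpy as np

def cg_fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=5000):
    """Nonlinear CG with the Fletcher-Reeves beta and Armijo backtracking (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:  # safeguard: restart when d is not a descent direction
            d = -g
        # Backtracking line search enforcing the Armijo sufficient-decrease condition.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # new search direction
        x, g = x_new, g_new
    return x

# Example: minimize the two-variable Rosenbrock function.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(cg_fletcher_reeves(rosen, rosen_grad, np.array([-1.2, 1.0])))  # converges toward [1., 1.]
```

The book studies many refinements of this basic scheme, such as hybrid choices of the beta coefficient, acceleration, preconditioning, and three-term directions.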
Reviews
“… convergence properties. … The book will be very useful for researchers, graduate students and practitioners interested in studying nonlinear CG methods.” (Hiroshi Yabe, Mathematical Reviews, April 2022)
Bibliographic Information
Book Title: Nonlinear Conjugate Gradient Methods for Unconstrained Optimization
Authors: Neculai Andrei
Series Title: Springer Optimization and Its Applications
DOI: https://doi.org/10.1007/978-3-030-42950-8
Publisher: Springer Cham
eBook Packages: Mathematics and Statistics, Mathematics and Statistics (R0)
Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
Hardcover ISBN: 978-3-030-42949-2 (published 24 June 2020)
Softcover ISBN: 978-3-030-42952-2 (published 24 June 2021)
eBook ISBN: 978-3-030-42950-8 (published 23 June 2020)
Series ISSN: 1931-6828
Series E-ISSN: 1931-6836
Edition Number: 1
Number of Pages: XXVIII, 498
Number of Illustrations: 3 b/w illustrations, 90 illustrations in colour
Topics: Optimization, Mathematical Modeling and Industrial Mathematics