
Deep Learning: Fundamentals, Theory and Applications

K. Huang, A. Hussain, Q-F Wang, R. Zhang, eds.
Publisher: Springer
Publication Date: 2019
Number of Pages: 163
Format: Hardcover
Series: Cognitive Computation Trends
Price: 179.99
ISBN: 978-3-030-06072-5
Category: Collection
[Reviewed by Manan Shah, on 09/22/2019]
Deep Learning: Fundamentals, Theory and Applications is a collection of research papers intended to be educational for the student and to offer a deeper route of exploration for the experienced practitioner. There are six papers in total, each representing one chapter. The main topics covered are density models, recurrent neural networks, and convolutional neural networks.
 
For anyone seeking an introduction, the book is best read sequentially from Chapter 1 to Chapter 6. The text is loosely organized so that each paper introduces a key topic utilized by the next. Each paper endeavors to introduce a practical framework followed by an application to an available dataset or a general case study. The reader should not expect theory in the form of theorems and proofs; rather, the reader is presented with mathematical outlines of the deep learning models under review.
 
The context and data of each chapter vary. Chapter 1 compares the performance of various (deep) density models on the ULC-3 (urban land cover), Coil-4-proc (image), Leuk72_3k (artificial), and USPS1-4 (handwritten digit) datasets. Chapters 2 and 3 use CASIA-OLHWDB, a Chinese handwriting database. Chapter 4 uses well-known datasets in NLP (e.g., Reuters Corpora, News Group, Movie Review Sentiment Classification), some of which ship by default with machine learning libraries and toolsets. Chapter 6 utilizes data from NOAA.
 
As this is a deep learning text, the basics of machine learning and its prerequisites are assumed. With that said, Chapter 1 does introduce a few foundational concepts (maximum likelihood and maximum a posteriori estimation), though not at the pedagogical depth a textbook would provide. Chapter 2 gives an overview of forward and backward propagation. Chapter 3 briefly discusses regularization. Chapter 4 summarizes RNNs. Chapter 5 defines TF-IDF and other classical concepts in NLP. In this way, there is enough treatment of machine learning and NLP basics that a student seeking to close knowledge gaps has a starting point of internet-searchable terms.
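Since TF-IDF is one of those searchable terms, a minimal sketch may help orient the student. The toy corpus below and the particular term-frequency and IDF variants are my own choices for illustration; they are not taken from the book, whose exact formulation may differ.

    import math
    from collections import Counter

    # Toy corpus, invented purely for illustration (not from the book).
    docs = [
        "deep learning models learn representations",
        "recurrent networks model sequences",
        "convolutional networks model images",
    ]
    tokenized = [d.split() for d in docs]
    N = len(tokenized)

    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))

    def tf_idf(term, tokens):
        # Raw term frequency, normalized by document length.
        tf = tokens.count(term) / len(tokens)
        # Inverse document frequency; a term in every document scores 0.
        idf = math.log(N / df[term])
        return tf * idf

    for i, tokens in enumerate(tokenized):
        scores = {t: round(tf_idf(t, tokens), 3) for t in sorted(set(tokens))}
        print(f"doc {i}: {scores}")

Running this weights terms that are frequent within a document but rare across the corpus, which is the intuition behind using TF-IDF as a feature representation in the classical NLP settings the book discusses.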
 
The book is written in typical academic prose, but it is in need of additional copy editing (e.g., "tahn" instead of "tanh"). That said, these shortcomings do not materially detract from the overall readability of the text.
 
For the student, this book is not a textbook with problem sets, but it can act as a reference for searchable terms as well as a large bibliography for further reading. Without guidance, however, perusing the bibliography can be a daunting task. Additionally, there are references to publicly available datasets that can serve as a common source of truth when comparing one's own results to the authors'. This reviewer maintains skepticism about how accessible this book is to the typical undergraduate; a senior-level graduate student, however, may find incredible value in the exposition.
 
The practitioner may enjoy this text as a companion to an existing library, as well as a muse for modifying current methodologies in light of those cited in the research papers.
Dr. Manan Shah is a mathematician writing at mathmisery.com. He also works in industry building and leading data science and analytics teams.