Neural Network Design

Year of publication: 2016
Authors: Martin T. Hagan, Howard B. Demuth, Mark H. Beale, Orlando De Jesús
Genre or subject: Neural networks

Publisher: Self-published
Language: English

Format: PDF
Quality: Publisher's layout or text (eBook)
Interactive table of contents: Yes
Number of pages: 1012

Description: NEURAL NETWORK DESIGN (2nd Edition) provides a clear and detailed survey of fundamental neural network architectures and learning rules. The authors emphasize a fundamental understanding of the principal neural networks and the methods for training them, and they discuss applications of networks to practical engineering problems in pattern recognition, clustering, signal processing, and control systems. Readability and a natural flow of material are emphasized throughout the text.
Features

Extensive coverage of performance learning, including the Widrow-Hoff rule, backpropagation, and several enhancements of backpropagation, such as the conjugate gradient and Levenberg-Marquardt variations (a short LMS sketch follows this list).

Training of both feedforward networks (including multilayer and radial basis networks) and recurrent networks is covered in detail. The text also covers Bayesian regularization and early stopping, training methods that help ensure good network generalization (an early-stopping sketch follows this list).

Associative and competitive networks, including feature maps and learning vector quantization, are explained with simple building blocks (a competitive-learning sketch follows this list).

A chapter of practical training tips for function approximation, pattern recognition, clustering and prediction applications is included, along with five chapters presenting detailed real-world case studies.

Detailed examples, numerous solved problems, and comprehensive demonstration software are included.

Optional exercises incorporating the use of MATLAB are built into each chapter, and a set of Neural Network Design Demonstrations make use of MATLAB to illustrate important concepts. In addition, the book's straightforward organization -- with each chapter divided into the following sections: Objectives, Theory and Examples, Summary of Results, Solved Problems, Epilogue, Further Reading, and Exercises -- makes it an excellent tool for learning and continued reference.
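
As promised above, here is a minimal NumPy sketch of the Widrow-Hoff (LMS) rule for a single linear neuron (ADALINE), using the update w ← w + 2αep described in the book; the toy data, initialization, and learning rate are illustrative choices, not taken from the text:

```python
import numpy as np

def lms_train(P, T, alpha=0.04, epochs=50):
    """Widrow-Hoff (LMS) training of a single linear neuron (ADALINE).

    P: (n_samples, n_inputs) input patterns
    T: (n_samples,) target outputs
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=P.shape[1])  # initial weights
    b = 0.0                                     # initial bias
    for _ in range(epochs):
        for p, t in zip(P, T):
            a = w @ p + b              # linear neuron output
            e = t - a                  # error
            w = w + 2 * alpha * e * p  # LMS weight update
            b = b + 2 * alpha * e      # LMS bias update
    return w, b

# Toy problem: learn t = 2*p1 - p2 + 1
P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
T = 2 * P[:, 0] - P[:, 1] + 1
w, b = lms_train(P, T)
print(w, b)  # w should approach [2, -1], b should approach 1
```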
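
The early stopping method from the Generalization chapter reduces to a short loop: track the error on a held-out validation set and keep the weights that minimized it. In this generic sketch, `model`, `train_step`, and `val_error` are placeholders the caller supplies; they are not interfaces from the book's demonstration software:

```python
import copy

def train_with_early_stopping(model, train_step, val_error,
                              patience=5, max_epochs=200):
    """Stop when validation error fails to improve for `patience`
    consecutive epochs; return the best weights seen."""
    best_err, best_model, strikes = float("inf"), copy.deepcopy(model), 0
    for _ in range(max_epochs):
        train_step(model)       # one pass over the training set
        err = val_error(model)  # error on the validation set
        if err < best_err:
            best_err, best_model, strikes = err, copy.deepcopy(model), 0
        else:
            strikes += 1
            if strikes >= patience:
                break           # validation error has stopped improving
    return best_model, best_err
```

Because the best weights are restored at the end, training a few epochs past the validation minimum does no harm.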
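
And for the competitive-network material, a sketch of a competitive layer trained with the Kohonen rule, where only the winning (closest) prototype moves toward each input; the cluster data and parameters are invented for illustration:

```python
import numpy as np

def competitive_train(X, n_neurons=3, alpha=0.1, epochs=20, seed=0):
    """Competitive layer trained with the Kohonen rule."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_neurons, replace=False)].copy()  # prototypes
    for _ in range(epochs):
        for x in X:
            i = np.argmin(np.linalg.norm(W - x, axis=1))  # winning neuron
            W[i] += alpha * (x - W[i])                    # Kohonen update
    return W

# Toy data: three loose clusters in the plane
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(20, 2))
               for c in [(0, 0), (1, 1), (0, 1)]])
print(competitive_train(X))  # prototypes near the three cluster centers
```
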
New in the 2nd Edition

The 2nd edition contains new chapters on Generalization, Dynamic Networks, Radial Basis Networks, and Practical Training Issues, as well as five new chapters on real-world case studies. In addition, a large number of new homework problems have been added to each chapter.




Contents
Preface
Introduction
Objectives 1-1
History 1-2
Applications 1-5
Biological Inspiration 1-8
Further Reading 1-10
Neuron Model and Network Architectures
Objectives 2-1
Theory and Examples 2-2
Notation 2-2
Neuron Model 2-2
Single-Input Neuron 2-2
Transfer Functions 2-3
Multiple-Input Neuron 2-7
Network Architectures 2-9
A Layer of Neurons 2-9
Multiple Layers of Neurons 2-10
Recurrent Networks 2-13
Summary of Results 2-16
Solved Problems 2-20
Epilogue 2-22
Exercises 2-23
An Illustrative Example
Objectives 3-1
Theory and Examples 3-2
Problem Statement 3-2
Perceptron 3-3
Two-Input Case 3-4
Pattern Recognition Example 3-5
Hamming Network 3-8
Feedforward Layer 3-8
Recurrent Layer 3-9
Hopfield Network 3-12
Epilogue 3-15
Exercises 3-16
Perceptron Learning Rule
Objectives 4-1
Theory and Examples 4-2
Learning Rules 4-2
Perceptron Architecture 4-3
Single-Neuron Perceptron 4-5
Multiple-Neuron Perceptron 4-8
Perceptron Learning Rule 4-8
Test Problem 4-9
Constructing Learning Rules 4-10
Unified Learning Rule 4-12
Training Multiple-Neuron Perceptrons 4-13
Proof of Convergence 4-15
Notation 4-15
Proof 4-16
Limitations 4-18
Summary of Results 4-20
Solved Problems 4-21
Epilogue 4-33
Further Reading 4-34
Exercises 4-36
Signal and Weight Vector Spaces
Objectives 5-1
Theory and Examples 5-2
Linear Vector Spaces 5-2
Linear Independence 5-4
Spanning a Space 5-5
Inner Product 5-6
Norm 5-7
Orthogonality 5-7
Gram-Schmidt Orthogonalization 5-8
Vector Expansions 5-9
Reciprocal Basis Vectors 5-10
Summary of Results 5-14
Solved Problems 5-17
Epilogue 5-26
Further Reading 5-27
Exercises 5-28
Linear Transformations for Neural Networks
Objectives 6-1
Theory and Examples 6-2
Linear Transformations 6-2
Matrix Representations 6-3
Change of Basis 6-6
Eigenvalues and Eigenvectors 6-10
Diagonalization 6-13
Summary of Results 6-15
Solved Problems 6-17
Epilogue 6-28
Further Reading 6-29
Exercises 6-30
Supervised Hebbian Learning
Objectives 7-1
Theory and Examples 7-2
Linear Associator 7-3
The Hebb Rule 7-4
Performance Analysis 7-5
Pseudoinverse Rule 7-7
Application 7-10
Variations of Hebbian Learning 7-12
Summary of Results 7-14
Solved Problems 7-16
Epilogue 7-29
Further Reading 7-30
Exercises 7-31
Performance Surfaces and Optimum Points
Objectives 8-1
Theory and Examples 8-2
Taylor Series 8-2
Vector Case 8-4
Directional Derivatives 8-5
Minima 8-7
Necessary Conditions for Optimality 8-9
First-Order Conditions 8-10
Second-Order Conditions 8-11
Quadratic Functions 8-12
Eigensystem of the Hessian 8-13
Summary of Results 8-20
Solved Problems 8-22
Epilogue 8-34
Further Reading 8-35
Exercises 8-36
Performance Optimization
Objectives 9-1
Theory and Examples 9-2
Steepest Descent 9-2
Stable Learning Rates 9-6
Minimizing Along a Line 9-8
Newton’s Method 9-10
Conjugate Gradient 9-15
Summary of Results 9-21
Solved Problems 9-23
Epilogue 9-37
Further Reading 9-38
Exercises 9-39
Widrow-Hoff Learning
Objectives 10-1
Theory and Examples 10-2
ADALINE Network 10-2
Single ADALINE 10-3
Mean Square Error 10-4
LMS Algorithm 10-7
Analysis of Convergence 10-9
Adaptive Filtering 10-13
Adaptive Noise Cancellation 10-15
Echo Cancellation 10-21
Summary of Results 10-22
Solved Problems 10-24
Epilogue 10-40
Further Reading 10-41
Exercises 10-42
Backpropagation
Objectives 11-1
Theory and Examples 11-2
Multilayer Perceptrons 11-2
Pattern Classification 11-3
Function Approximation 11-4
The Backpropagation Algorithm 11-7
Performance Index 11-8
Chain Rule 11-9
Backpropagating the Sensitivities 11-11
Summary 11-13
Example 11-14
Batch vs. Incremental Training 11-17
Using Backpropagation 11-18
Choice of Network Architecture 11-18
Convergence 11-20
Generalization 11-22
Summary of Results 11-25
Solved Problems 11-27
Epilogue 11-41
Further Reading 11-42
Exercises 11-44
Variations on Backpropagation
Objectives 12-1
Theory and Examples 12-2
Drawbacks of Backpropagation 12-3
Performance Surface Example 12-3
Convergence Example 12-7
Heuristic Modifications of Backpropagation 12-9
Momentum 12-9
Variable Learning Rate 12-12
Numerical Optimization Techniques 12-14
Conjugate Gradient 12-14
Levenberg-Marquardt Algorithm 12-19
Summary of Results 12-28
Solved Problems 12-32
Epilogue 12-46
Further Reading 12-47
Exercises 12-50
Generalization
Objectives 13-1
Theory and Examples 13-2
Problem Statement 13-2
Methods for Improving Generalization 13-5
Estimating Generalization Error 13-6
Early Stopping 13-6
Regularization 13-8
Bayesian Analysis 13-10
Bayesian Regularization 13-12
Relationship Between Early Stopping and Regularization 13-19
Summary of Results 13-29
Solved Problems 13-32
Epilogue 13-44
Further Reading 13-45
Exercises 13-47
Dynamic Networks
Objectives 14-1
Theory and Examples 14-2
Layered Digital Dynamic Networks 14-3
Example Dynamic Networks 14-5
Principles of Dynamic Learning 14-8
Dynamic Backpropagation 14-12
Preliminary Definitions 14-12
Real Time Recurrent Learning 14-12
Backpropagation-Through-Time 14-22
Summary and Comments on Dynamic Training 14-30
Summary of Results 14-34
Solved Problems 14-37
Epilogue 14-46
Further Reading 14-47
Exercises 14-48
Associative Learning
Objectives 15-1
Theory and Examples 15-2
Simple Associative Network 15-3
Unsupervised Hebb Rule 15-5
Hebb Rule with Decay 15-7
Simple Recognition Network 15-9
Instar Rule 15-11
Kohonen Rule 15-15
Simple Recall Network 15-16
Outstar Rule 15-17
Summary of Results 15-21
Solved Problems 15-23
Epilogue 15-34
Further Reading 15-35
Exercises 15-37
Competitive Networks
Objectives 16-1
Theory and Examples 16-2
Hamming Network 16-3
Layer 1 16-3
Layer 2 16-4
Competitive Layer 16-5
Competitive Learning 16-7
Problems with Competitive Layers 16-9
Competitive Layers in Biology 16-10
Self-Organizing Feature Maps 16-12
Improving Feature Maps 16-15
Learning Vector Quantization 16-16
LVQ Learning 16-18
Improving LVQ Networks (LVQ2) 16-21
Summary of Results 16-22
Solved Problems 16-24
Epilogue 16-37
Further Reading 16-38
Exercises 16-39
Radial Basis Networks
Objectives 17-1
Theory and Examples 17-2
Radial Basis Network 17-2
Function Approximation 17-4
Pattern Classification 17-6
Global vs. Local 17-9
Training RBF Networks 17-10
Linear Least Squares 17-11
Orthogonal Least Squares 17-18
Clustering 17-23
Nonlinear Optimization 17-25
Other Training Techniques 17-26
Summary of Results 17-27
Solved Problems 17-30
Epilogue 17-35
Further Reading 17-36
Exercises 17-38
Grossberg Network
Objectives 18-1
Theory and Examples 18-2
Biological Motivation: Vision 18-3
Illusions 18-4
Vision Normalization 18-8
Basic Nonlinear Model 18-9
Two-Layer Competitive Network 18-12
Layer 1 18-13
Layer 2 18-17
Choice of Transfer Function 18-20
Learning Law 18-22
Relation to Kohonen Law 18-24
Summary of Results 18-26
Solved Problems 18-30
Epilogue 18-42
Further Reading 18-43
Exercises 18-45
Adaptive Resonance Theory
Objectives 19-1
Theory and Examples 19-2
Overview of Adaptive Resonance 19-2
Layer 1 19-4
Steady State Analysis 19-6
Layer 2 19-10
Orienting Subsystem 19-13
Learning Law: L1-L2 19-17
Subset/Superset Dilemma 19-17
Learning Law 19-18
Learning Law: L2-L1 19-20
ART1 Algorithm Summary 19-21
Initialization 19-21
Algorithm 19-21
Other ART Architectures 19-23
Summary of Results 19-25
Solved Problems 19-30
Epilogue 19-45
Further Reading 19-46
Exercises 19-48
Stability
Objectives 20-1
Theory and Examples 20-2
Recurrent Networks 20-2
Stability Concepts 20-3
Definitions 20-4
Lyapunov Stability Theorem 20-5
Pendulum Example 20-6
LaSalle’s Invariance Theorem 20-12
Definitions 20-12
Theorem 20-13
Example 20-14
Comments 20-18
Summary of Results 20-19
Solved Problems 20-21
Epilogue 20-28
Further Reading 20-29
Exercises 20-30
Hopfield Network
Objectives 21-1
Theory and Examples 21-2
Hopfield Model 21-3
Lyapunov Function 21-5
Invariant Sets 21-7
Example 21-7
Hopfield Attractors 21-11
Effect of Gain 21-12
Hopfield Design 21-16
Content-Addressable Memory 21-16
Hebb Rule 21-18
Lyapunov Surface 21-22
Summary of Results 21-24
Solved Problems 21-26
Epilogue 21-36
Further Reading 21-37
Exercises 21-40
Practical Training Issues
Objectives 22-1
Theory and Examples 22-2
Pre-Training Steps 22-3
Selection of Data 22-3
Data Preprocessing 22-5
Choice of Network Architecture 22-8
Training the Network 22-13
Weight Initialization 22-13
Choice of Training Algorithm 22-14
Stopping Criteria 22-14
Choice of Performance Function 22-16
Committees of Networks 22-18
Post-Training Analysis 22-18
Fitting 22-18
Pattern Recognition 22-21
Clustering 22-23
Prediction 22-24
Overfitting and Extrapolation 22-27
Sensitivity Analysis 22-28
Epilogue 22-30
Further Reading 22-31
Case Study 1: Function Approximation
Objectives 23-1
Theory and Examples 23-2
Description of the Smart Sensor System 23-2
Data Collection and Preprocessing 23-3
Selecting the Architecture 23-4
Training the Network 23-5
Validation 23-7
Data Sets 23-10
Epilogue 23-11
Further Reading 23-12
Case Study 2: Probability Estimation
Objectives 24-1
Theory and Examples 24-2
Description of the CVD Process 24-2
Data Collection and Preprocessing 24-3
Selecting the Architecture 24-5
Training the Network 24-7
Validation 24-9
Data Sets 24-12
Epilogue 24-13
Further Reading 24-14
Case Study 3: Pattern Recognition
Objectives 25-1
Theory and Examples 25-2
Description of Myocardial Infarction Recognition 25-2
Data Collection and Preprocessing 25-3
Selecting the Architecture 25-6
Training the Network 25-7
Validation 25-7
Data Sets 25-10
Epilogue 25-11
Further Reading 25-12
Case Study 4: Clustering
Objectives 26-1
Theory and Examples 26-2
Description of the Forest Cover Problem 26-2
Data Collection and Preprocessing 26-4
Selecting the Architecture 26-5
Training the Network 26-6
Validation 26-7
Data Sets 26-11
Epilogue 26-12
Further Reading 26-13
Case Study 5: Prediction
Objectives 27-1
Theory and Examples 27-2
Description of the Magnetic Levitation System 27-2
Data Collection and Preprocessing 27-3
Selecting the Architecture 27-4
Training the Network 27-6
Validation 27-8
Data Sets 27-13
Epilogue 27-14
Further Reading 27-15
Appendices
Bibliography
Notation
Software
Index
Additional information: http://hagan.okstate.edu/CaseStudyData.zip
http://hagan.okstate.edu/nndesign_2014b.zip