Neural Networks and Learning Machines; Simon Haykin; 2009
Neural Networks and Learning Machines

Third Edition

Simon Haykin

McMaster University, Canada

 

This third edition of a classic book presents a comprehensive treatment of neural networks and learning machines, two closely related pillars of the field. The book has been revised extensively to provide an up-to-date treatment of a subject that is continually growing in importance. Distinctive features of the book include:

On-line learning algorithms rooted in stochastic gradient descent; small-scale and large-scale learning problems (see the sketch after this description).

Kernel methods, including support vector machines, and the representer theorem.

Information-theoretic learning models, including copulas, independent components analysis (ICA), coherent ICA, and information bottleneck.

Stochastic dynamic programming, including approximate and neurodynamic procedures.

Sequential state-estimation algorithms, including Kalman and particle filters.

Recurrent neural networks trained using sequential-state estimation algorithms.

Insightful computer-oriented experiments.

 

Just as importantly, the book is written in a readable style that is Simon Haykin's hallmark.
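
To give a flavour of the first feature listed above, here is a minimal, purely illustrative Python sketch of on-line learning with stochastic gradient descent, updating a least-squares linear model one example at a time. The synthetic data, learning rate, and model choice are assumptions made for this sketch; none of it is code from the book.

import numpy as np

# Minimal on-line (sample-by-sample) stochastic gradient descent for a
# least-squares linear model. Illustrative only; data, step size, and
# model are arbitrary choices, not material from the book.

rng = np.random.default_rng(0)

# Synthetic data: targets from a known weight vector plus noise.
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(1000, 3))
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)   # model weights, updated on-line
eta = 0.01        # learning rate (step size)

# One pass over the data, updating after every example.
for x_n, y_n in zip(X, y):
    error = y_n - x_n @ w    # prediction error on this sample
    w += eta * error * x_n   # SGD step on the squared-error loss

print(w)  # should land close to true_w

Each update uses only the current example, which is what makes the procedure suitable for streaming data and large-scale problems where a full pass over the dataset per step is too expensive.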
Edition: 3rd edition
Published: 2009
ISBN: 9780131471399
Publisher: Pearson
Format: Hardcover
Language: English
Pages: 936