Neural network methods for natural language processing / Yoav Goldberg.
Goldberg, Yoav, 1980- author.
|Format||Book and Print|
|Publication Info||[San Rafael, California] : Morgan & Claypool, [2017]|
|Copyright Notice||© 2017.|
|Description||xxii, 287 pages : illustrations ; 24 cm|
|Series||Synthesis lectures on human language technologies, 1947-4059 ; # 37|
|Contents||1. Introduction -- 2. Learning basics and linear models -- 3. From linear models to multi-layer perceptrons -- 4. Feed-forward neural networks -- 5. Neural network training -- 6. Features for textual data -- 7. Case studies of NLP features -- 8. From textual features to inputs -- 9. Language modeling -- 10. Pre-trained word representations -- 11. Using word embeddings -- 12. Case study: a feed-forward architecture for sentence meaning inference -- 13. Ngram detectors: convolutional neural networks -- 14. Recurrent neural networks: modeling sequences and stacks -- 15. Concrete recurrent neural network architectures -- 16. Modeling with recurrent networks -- 17. Conditioned generation -- 18. Modeling trees with recursive neural networks -- 19. Structured output prediction -- 20. Cascaded, multi-task and semi-supervised learning -- 21. Conclusion -- Bibliography -- Author's biography.|
|Summary||Neural networks are a family of powerful machine learning models. This book focuses on the application of neural network models to natural language data. The first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which makes it possible to easily define and train arbitrary neural networks and underlies the design of contemporary neural network software libraries. The second half of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, the book discusses tree-shaped networks, structured prediction, and the prospects of multi-task learning.|
|General note||Part of: Synthesis digital library of engineering and computer science.|
|Bibliography note||Includes bibliographical references (pages 253-285).|
|Library||Location||Call Number||Status|
|Joyner||General Stacks||QA76.9.N38 G655 2017||Available|