Neural Network Learning: Theoretical Foundations
by Anthony, Martin; Bartlett, Peter L.

      • GET 20% OFF

      • The discount is available only to recipients of the 'Alert of Favourite Topics' newsletter.
      • Publisher's list price: GBP 119.00
      • The price is an estimate: at the time of ordering we do not know the HUF conversion rate that will apply to the product currency when the book arrives. If the HUF is weaker, the price rises slightly; if the HUF is stronger, it falls slightly.

        60 225 Ft (57 358 Ft + 5% VAT)
      • Discount: 20% (approx. 12 045 Ft off)
      • Discounted price: 48 181 Ft (45 886 Ft + 5% VAT); the arithmetic is sketched just below this list.
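    As a minimal sketch of how the figures above fit together, assuming a 5% VAT rate and a 20% discount applied to the gross price (names and rounding are illustrative, not Prospero's actual billing code; the listing rounds net and gross amounts independently, so results can differ from it by a forint):

    ```python
    VAT_RATE = 0.05   # 5% VAT, as shown in the listing
    DISCOUNT = 0.20   # 20% newsletter discount

    def gross(net_huf: float) -> float:
        """Gross price: net price plus VAT."""
        return net_huf * (1 + VAT_RATE)

    def fmt(huf: float) -> str:
        """Space-separated thousands, as the listing prints forint amounts."""
        return f"{huf:,.0f}".replace(",", " ") + " Ft"

    net_list = 57_358                  # net list price in HUF, from the listing
    gross_list = gross(net_list)       # ~60 225 Ft gross list price
    saving = gross_list * DISCOUNT     # ~12 045 Ft off
    gross_disc = gross_list - saving   # ~48 181 Ft discounted gross price

    print(fmt(gross_list), fmt(saving), fmt(gross_disc))
    # -> roughly: 60 226 Ft  12 045 Ft  48 181 Ft (an off-by-one forint versus
    #    the listing is possible, since it rounds net and gross independently)
    ```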

    Availability

    Estimated delivery time: approx. 3-5 weeks. The book is in stock at the publisher, but not at Prospero's office.

    Why don't you give an exact delivery time?

    Delivery time is estimated from our previous experience. We can only give estimates because we order from outside Hungary, and the delivery time depends mainly on how quickly the publisher supplies the book. Deliveries are sometimes faster and sometimes slower, but we do our best to supply each order as quickly as possible.

    Product details:

    • Publisher: Cambridge University Press
    • Date of Publication: 4 November 1999
    • ISBN: 9780521573535
    • Binding: Hardback
    • No. of pages: 404
    • Size: 229x152x27 mm
    • Weight: 760 g
    • Language: English

    Short description:

    This book describes theoretical advances in the study of artificial neural networks.

    Long description:

    This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of that dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real-valued prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
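    For orientation, the Vapnik-Chervonenkis dimension mentioned above has a one-line definition via the growth function; this is the standard formulation (treated in chapters 3 and onwards of the book), not a quotation from it:

    ```latex
    % Growth function of a class H of {0,1}-valued hypotheses, and its
    % VC-dimension: the largest sample size m that H can label in all
    % 2^m possible ways.
    \Pi_H(m) = \max_{x_1,\dots,x_m}
        \bigl|\{(h(x_1),\dots,h(x_m)) : h \in H\}\bigr|,
    \qquad
    \mathrm{VCdim}(H) = \max\{\, m : \Pi_H(m) = 2^m \,\}.
    ```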

    'The book is a useful and readable monograph. For beginners it is a nice introduction to the subject, for experts a valuable reference.' (Zentralblatt MATH)

    Table of Contents:

    1. Introduction

    Part I. Pattern Recognition with Binary-output Neural Networks:
    2. The pattern recognition problem
    3. The growth function and VC-dimension
    4. General upper bounds on sample complexity
    5. General lower bounds
    6. The VC-dimension of linear threshold networks
    7. Bounding the VC-dimension using geometric techniques
    8. VC-dimension bounds for neural networks

    Part II. Pattern Recognition with Real-output Neural Networks:
    9. Classification with real values
    10. Covering numbers and uniform convergence
    11. The pseudo-dimension and fat-shattering dimension
    12. Bounding covering numbers with dimensions
    13. The sample complexity of classification learning
    14. The dimensions of neural networks
    15. Model selection

    Part III. Learning Real-Valued Functions:
    16. Learning classes of real functions
    17. Uniform convergence results for real function classes
    18. Bounding covering numbers
    19. The sample complexity of learning function classes
    20. Convex classes
    21. Other learning problems

    Part IV. Algorithmics:
    22. Efficient learning
    23. Learning as optimisation
    24. The Boolean perceptron
    25. Hardness results for feed-forward networks
    26. Constructive learning algorithms for two-layered networks
