


Learning Theory: An Approximation Theory Viewpoint

by Felipe Cucker and Ding Xuan Zhou
  • ISBN13:

    9780521865593

  • ISBN10:

    052186559X

  • Format: Hardcover
  • Copyright: 2007-05-14
  • Publisher: Cambridge University Press

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $98.00 (save up to $28.17)
  • Rent Book: $69.83 (free shipping)

    Availability: Special order, 1-2 weeks.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

How To: Textbook Rental

Looking to rent a book? Rent Learning Theory: An Approximation Theory Viewpoint [ISBN: 9780521865593] for the semester, quarter, or short term, or search our site for other textbooks by Felipe Cucker and Ding Xuan Zhou. Renting a textbook can save you up to 90% off the cost of buying.

Summary

The goal of learning theory is to approximate a function from sample values. To attain this goal, learning theory draws on a variety of subjects, notably statistics, approximation theory, and algorithmics. Ideas from all these areas have blended to form a subject whose many successful applications have triggered rapid growth over the last two decades. This is the first book to give a general overview of the theoretical foundations of the subject with an emphasis on approximation theory, while still offering a balanced treatment. It is based on courses taught by the authors and is reasonably self-contained, so it will appeal to a broad spectrum of researchers in learning theory and adjacent fields. It will also serve as an introduction for graduate students and others entering the field who wish to see how the problems raised in learning theory relate to other disciplines.
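The core task the summary describes, approximating an unknown function from noisy sample values, can be sketched in a few lines. The following is a minimal, illustrative example of least squares regularization in a reproducing kernel Hilbert space with a Gaussian Mercer kernel (the setting of the book's chapters on RKHSs and regularization); the function names and the parameter values `sigma` and `gamma` are assumptions chosen for the sketch, not taken from the book.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    """Gaussian (Mercer) kernel matrix: K[i, j] = exp(-(x_i - y_j)^2 / (2 sigma^2))."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma**2))

def fit_regularized(x, y, gamma=1e-3, sigma=0.5):
    """Regularized least squares in the RKHS of the kernel.

    Solves (K + gamma * m * I) c = y; the empirical target function is
    f(t) = sum_i c_i k(t, x_i).
    """
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    c = np.linalg.solve(K + gamma * m * np.eye(m), y)
    return lambda t: gaussian_kernel(np.atleast_1d(t), x, sigma) @ c

# Draw noisy samples of a target function and learn it from the data.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 60)                         # sample points
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=60)      # noisy sample values
f = fit_regularized(x, y)

# Measure how well the learned function approximates the target on a grid.
t = np.linspace(-1.0, 1.0, 200)
err = float(np.max(np.abs(f(t) - np.sin(np.pi * t))))
print(round(err, 3))
```

With more samples or a well-tuned regularization parameter `gamma`, the approximation error shrinks; the trade-off between the two error terms is exactly the sample/approximation-error decomposition the table of contents below refers to.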

Table of Contents

Foreword p. ix
Preface p. x
The framework of learning p. 1
Introduction p. 1
A formal setting p. 5
Hypothesis spaces and target functions p. 9
Sample, approximation, and generalization errors p. 11
The bias-variance problem p. 13
The remainder of this book p. 14
References and additional remarks p. 15
Basic hypothesis spaces p. 17
First examples of hypothesis spaces p. 17
Reminders I p. 18
Hypothesis spaces associated with Sobolev spaces p. 21
Reproducing Kernel Hilbert Spaces p. 22
Some Mercer kernels p. 24
Hypothesis spaces associated with an RKHS p. 31
Reminders II p. 33
On the computation of empirical target functions p. 34
References and additional remarks p. 35
Estimating the sample error p. 37
Exponential inequalities in probability p. 37
Uniform estimates on the defect p. 43
Estimating the sample error p. 44
Convex hypothesis spaces p. 46
References and additional remarks p. 49
Polynomial decay of the approximation error p. 54
Reminders III p. 55
Operators defined by a kernel p. 56
Mercer's theorem p. 59
RKHSs revisited p. 61
Characterizing the approximation error in RKHSs p. 63
An example p. 68
References and additional remarks p. 69
Estimating covering numbers p. 72
Reminders IV p. 73
Covering numbers for Sobolev smooth kernels p. 76
Covering numbers for analytic kernels p. 83
Lower bounds for covering numbers p. 101
On the smoothness of box spline kernels p. 106
References and additional remarks p. 108
Logarithmic decay of the approximation error p. 109
Polynomial decay of the approximation error for C[infinity] kernels p. 110
Measuring the regularity of the kernel p. 112
Estimating the approximation error in RKHSs p. 117
Proof of Theorem 6.1 p. 125
References and additional remarks p. 125
On the bias-variance problem p. 127
A useful lemma p. 128
Proof of Theorem 7.1 p. 129
A concrete example of bias-variance p. 132
References and additional remarks p. 133
Least squares regularization p. 134
Bounds for the regularized error p. 135
On the existence of target functions p. 139
A first estimate for the excess generalization error p. 140
Proof of Theorem 8.1 p. 148
Reminders V p. 151
Compactness and regularization p. 151
References and additional remarks p. 155
Support vector machines for classification p. 157
Binary classifiers p. 159
Regularized classifiers p. 161
Optimal hyperplanes: the separable case p. 166
Support vector machines p. 169
Optimal hyperplanes: the nonseparable case p. 171
Error analysis for separable measures p. 173
Weakly separable measures p. 182
References and additional remarks p. 185
General regularized classifiers p. 187
Bounding the misclassification error in terms of the generalization error p. 189
Projection and error decomposition p. 194
Bounds for the regularized error D([gamma], [pi]) of f[subscript gamma] p. 196
Bounds for the sample error term involving f[subscript gamma] p. 198
Bounds for the sample error term involving f[superscript pi][subscript z,gamma] p. 201
Stronger error bounds p. 204
Improving learning rates by imposing noise conditions p. 210
References and additional remarks p. 211
References p. 214
Index p. 111
Table of Contents provided by Ingram. All Rights Reserved.

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
