Preprints, Working Papers, ... Year: 2007

Suboptimality of Penalized Empirical Risk Minimization in Classification

Abstract

Let $\mathcal{F}$ be a set of $M$ classification procedures with values in $[-1,1]$. Given a loss function, we want to construct a procedure that mimics the best procedure in $\mathcal{F}$ at the best possible rate. This fastest rate is called the optimal rate of aggregation. Considering a continuous scale of loss functions with various types of convexity, we prove that the optimal rate of aggregation is either $((\log M)/n)^{1/2}$ or $(\log M)/n$. We also prove that, if all $M$ classifiers are binary, (penalized) Empirical Risk Minimization procedures are suboptimal (even under the margin/low-noise condition) when the loss function is somewhat more than convex, whereas, in that case, aggregation procedures with exponential weights achieve the optimal rate of aggregation.
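To make the contrast in the abstract concrete, here is a minimal sketch of the two kinds of procedures it compares: selecting a single classifier by empirical risk minimization versus forming a convex combination with exponential weights (the aggregate being the weighted sum of the $M$ base procedures). The function names, the `temperature` parameter, and the toy numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def erm_select(empirical_risks):
    """Select a single procedure in the ERM style: return the index
    of the smallest empirical risk (illustrative only)."""
    return int(np.argmin(empirical_risks))

def exponential_weights(empirical_risks, n, temperature=1.0):
    """Aggregate with exponential weights: convex weights proportional to
    exp(-temperature * n * empirical risk). The aggregated classifier would
    be the corresponding weighted sum of the M base procedures."""
    risks = np.asarray(empirical_risks, dtype=float)
    logits = -temperature * n * (risks - risks.min())  # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy example: M = 3 procedures evaluated on n = 200 samples.
n = 200
risks = [0.31, 0.28, 0.35]
print(erm_select(risks))              # index of the empirical risk minimizer
print(exponential_weights(risks, n))  # convex weights concentrating on low-risk procedures
```

The sketch only illustrates the mechanics: ERM commits to one procedure, while exponential weights spread mass over all of them according to their empirical risks, which is the kind of aggregation the paper shows can attain the faster $(\log M)/n$ rate where selection cannot.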
Main file: 154-SubERMGuillaumeLecue.pdf (209.2 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00138840, version 1 (27-03-2007)

Cite

Guillaume Lecué. Suboptimality of Penalized Empirical Risk Minimization in Classification. 2007. ⟨hal-00138840⟩
