AGGREGATION FOR REGRESSION LEARNING - Archive ouverte HAL
Preprints, Working Papers, ... Year: 2004

AGGREGATION FOR REGRESSION LEARNING

Abstract

This paper studies statistical aggregation procedures in the regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing estimators. We consider three different types of aggregation: model selection (MS) aggregation, convex (C) aggregation, and linear (L) aggregation. The objective of (MS) is to select the optimal single estimator from the list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators. We are interested in evaluating the rates of convergence of the excess risks of the estimators obtained by these procedures. Our approach is motivated by recent minimax results in Nemirovski (2000) and Tsybakov (2003). There exist competing aggregation procedures achieving optimal convergence separately for each of the (MS), (C), and (L) cases. Since the bounds in these results are not directly comparable with each other, we suggest an alternative solution. We prove that all three optimal bounds can be nearly achieved via a single "universal" aggregation procedure. We propose such a procedure, which consists in mixing the initial estimators with weights obtained by penalized least squares. Two different penalties are considered: one of them is related to hard thresholding techniques; the second is a data-dependent L1-type penalty.
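To make the three aggregation problems concrete, here is a minimal toy sketch in Python. The data, the three base estimators, and the penalty level are all illustrative assumptions, not the authors' actual construction; the L1-penalized step is solved by plain coordinate descent only to indicate the flavor of the paper's data-dependent L1-type penalty.

```python
import numpy as np

# Toy regression data (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)

# Predictions of M = 3 hypothetical base estimators, stacked as columns.
F = np.column_stack([x, x**3, np.sin(np.pi * x)])

def risk(pred):
    """Empirical quadratic risk of a prediction vector."""
    return np.mean((pred - y) ** 2)

# (MS) model selection aggregation: keep the single best estimator.
risks = np.array([risk(F[:, j]) for j in range(F.shape[1])])
ms_index = int(np.argmin(risks))

# (L) linear aggregation: unrestricted least-squares weights.
w_linear, *_ = np.linalg.lstsq(F, y, rcond=None)

# L1-penalized least squares over the weights (a sketch in the spirit of
# the paper's L1-type penalty), via naive coordinate descent.
lam = 0.05                        # penalty level chosen arbitrarily here
w_l1 = np.zeros(F.shape[1])
col_norms = np.mean(F**2, axis=0)
for _ in range(200):
    for j in range(F.shape[1]):
        # Partial residual with coordinate j removed, then soft-threshold.
        residual = y - F @ w_l1 + F[:, j] * w_l1[j]
        rho = np.mean(F[:, j] * residual)
        w_l1[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
```

By construction, the (L) solution is at least as good in empirical risk as the (MS) solution, since the single best estimator is itself one admissible linear combination; the L1 penalty trades a little risk for sparsity of the mixing weights.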
Main file: BTW20oct.pdf (249.88 KB)

Dates and versions

hal-00003205 , version 1 (01-11-2004)

Identifiers

  • HAL Id : hal-00003205 , version 1

Cite

Florentina Bunea, Alexandre B. Tsybakov, Marten H. Wegkamp. AGGREGATION FOR REGRESSION LEARNING. 2004. ⟨hal-00003205⟩