Title: The Impacts of Overparameterization under Distribution Shift
Abstract: Modern machine learning often relies on overparameterized models that are typically trained to interpolate, i.e., to fit the training data exactly. Recent studies have suggested that such overfitting can be "benign," yielding estimators with strong generalization performance. In this talk, we further examine the consequences of overparameterization under distribution shift and present the following findings:
1. When test data are adversarially perturbed, benign overfitting can produce models that lack adversarial robustness, even when the true underlying model is robust (illustrated in the first sketch below).
2. On the other hand, when test data are perturbed non-adversarially but exhibit a distribution shift from the training data, overparameterization proves advantageous: as the model size increases, out-of-distribution performance improves (see the second sketch below). We support our theoretical results with experimental evidence.
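
To make the first finding concrete, here is a minimal NumPy sketch (not the construction analyzed in the talk): a minimum-norm linear interpolator in a setting with one strong signal feature and many weak features fits noisy labels exactly and attains reasonable clean test error, yet its large l1 norm makes it far more sensitive to l_inf test-time perturbations than the robust ground-truth model. All dimensions, scales, and the perturbation budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy benign-overfitting setup: one strong signal feature plus many weak
# features, with far more parameters than samples (d >> n).
n, d_weak = 50, 2000            # training samples, number of weak features
sigma_weak, noise_std = 0.05, 0.5
eps = 0.05                      # l_inf adversarial budget at test time

w_star = np.zeros(1 + d_weak)
w_star[0] = 1.0                 # true model uses only the signal feature -> robust

def sample(m):
    X = np.concatenate([rng.normal(size=(m, 1)),                    # signal
                        sigma_weak * rng.normal(size=(m, d_weak))], # weak features
                       axis=1)
    return X, X @ w_star + noise_std * rng.normal(size=m)

X, y = sample(n)
w_hat = np.linalg.pinv(X) @ y   # minimum-norm interpolator: zero training error

def adv_mse(w, X, y, eps):
    # Worst case under an l_inf perturbation of radius eps: a linear
    # predictor's output can be shifted by exactly eps * ||w||_1.
    return np.mean((np.abs(X @ w - y) + eps * np.abs(w).sum()) ** 2)

Xt, yt = sample(5000)
print("train MSE (interpolator):     ", np.mean((X @ w_hat - y) ** 2))   # ~ 0
print("clean test MSE (interpolator):", np.mean((Xt @ w_hat - yt) ** 2))
print("l1 norm,  true vs interpolator:", np.abs(w_star).sum(), np.abs(w_hat).sum())
print("adv. MSE, true vs interpolator:",
      adv_mse(w_star, Xt, yt, eps), adv_mse(w_hat, Xt, yt, eps))
```

The interpolator spreads weight across the many weak features to absorb label noise, which inflates its l1 norm and hence its worst-case sensitivity, while the sparse true model remains robust.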
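
For the second finding, the following is a small harness, again only an illustrative sketch rather than the experiments from the talk, for probing how out-of-distribution error varies with model size: minimum-norm regression on random ReLU features of increasing width, evaluated on a covariate-shifted test distribution. The target function, feature map, widths, and the particular shift are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.sin(3 * x)                         # illustrative ground-truth function

def relu_features(x, W, b):
    # Random one-dimensional ReLU features: shape (len(x), width).
    return np.maximum(0.0, np.outer(x, W) + b)

n = 100
x_train = rng.uniform(-1, 1, size=n)
y_train = target(x_train) + 0.1 * rng.normal(size=n)

x_id  = rng.uniform(-1.0, 1.0, size=1000)        # in-distribution test inputs
x_ood = rng.uniform(-1.3, 1.3, size=1000)        # shifted (wider) test distribution

for width in [150, 500, 2000, 4000]:             # all widths exceed n (overparameterized)
    W = rng.normal(size=width)
    b = rng.uniform(-1, 1, size=width)
    Phi = relu_features(x_train, W, b)
    coef = np.linalg.pinv(Phi) @ y_train         # minimum-norm interpolating fit
    for name, x_test in [("ID ", x_id), ("OOD", x_ood)]:
        pred = relu_features(x_test, W, b) @ coef
        print(f"width={width:5d}  {name} MSE = {np.mean((pred - target(x_test)) ** 2):.4f}")
```

Sweeping the width past the interpolation threshold lets one track in-distribution and out-of-distribution error side by side as the model grows.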