*Presentation will be recorded.
Incorporating side observations in decision-making can reduce uncertainty and boost performance, but it also requires that we tackle a potentially complex predictive relationship. While one may use off-the-shelf machine learning methods to separately learn a predictive model and plug it in, a variety of recent methods instead integrate estimation and optimization by fitting the model to directly optimize downstream decision performance. Surprisingly, in the case of contextual linear optimization, we show that the naïve plug-in approach actually achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance. We show this by leveraging the fact that specific problem instances do not have arbitrarily bad near-dual-degeneracy. While there are other pros and cons to consider, as we discuss and illustrate numerically, our results highlight a nuanced landscape for the enterprise of integrating estimation and optimization. Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
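To make the plug-in approach concrete, here is a minimal hypothetical sketch: fit a predictive model for the cost vector from context by least squares, then plug the prediction into the downstream linear program. All names, dimensions, and the simplex feasible set are illustrative assumptions, not the talk's actual setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical contextual linear optimization instance:
# minimize c(x)^T z over the simplex, where the cost vector c depends on context x.
n, p, d = 200, 3, 4                        # samples, context dim, decision dim
W_true = rng.normal(size=(d, p))           # unknown linear relation: c = W_true x + noise
X = rng.normal(size=(n, p))
C = X @ W_true.T + 0.1 * rng.normal(size=(n, d))

# Plug-in ("estimate then optimize"): first fit the predictive model...
W_hat, *_ = np.linalg.lstsq(X, C, rcond=None)  # shape (p, d)

def plug_in_decision(x):
    """...then predict the cost vector and solve the downstream LP."""
    c_hat = x @ W_hat
    res = linprog(c_hat, A_eq=np.ones((1, d)), b_eq=[1.0], bounds=[(0, 1)] * d)
    return res.x

x_new = rng.normal(size=p)
z = plug_in_decision(x_new)   # a point on the simplex; here an LP vertex
```

An integrated ("end-to-end") method would instead fit the model to minimize the realized decision cost directly; the talk's result is that, under a near-dual-degeneracy condition, the simple two-stage pipeline above already attains faster regret rates.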
Yichun Hu is a Ph.D. candidate in the School of Operations Research and Information Engineering at Cornell University, advised by Nathan Kallus at Cornell Tech. Her research leverages machine learning, stochastic optimization, and statistics to develop fast and reliable methods for personalized, data-driven decision-making. She has interned at several tech companies, including Google and Facebook. Before coming to Cornell, she received her BS in Mathematics and Applied Mathematics and BA in Economics from Peking University.