Here, we develop new shrinkage methods that produce optimal prediction under aggregate asymmetric piecewise-linear (check) losses in a high-dimensional Gaussian model. Because of the nature of this loss, our inferential target is a pre-chosen quantile of the predictive distribution rather than its mean. In common with many other problems, we find that shrinkage rules provide better performance than simple coordinate-wise rules. However, the problem here differs in fundamental respects from estimation or prediction under the symmetric quadratic loss considered in most of the previous literature, and this necessitates different strategies for constructing effective empirical Bayes predictors. We develop new methods for constructing uniformly efficient asymptotic risk estimates for conditionally linear predictors. These risk estimates are adaptive to the asymmetric nature of the loss function, and by minimizing them we obtain an empirical Bayes prediction rule with asymptotic optimality properties not shared by EB strategies that use maximum likelihood or method-of-moments estimates of the hyperparameters.
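As a minimal numerical sketch (not the paper's estimator), the following illustrates why the inferential target under check loss is a quantile rather than the mean: the predictor minimizing expected check loss with asymmetry parameter b is the b-th quantile of the predictive distribution. The loss parameterization and the grid-search minimization below are illustrative choices, not taken from the text.

```python
import numpy as np

def check_loss(u, b):
    # Piecewise-linear (check / pinball) loss with asymmetry b in (0, 1):
    # under-prediction (u >= 0) is charged at slope b,
    # over-prediction (u < 0) at slope (1 - b).
    return np.where(u >= 0, b * u, (b - 1) * u)

# Draw from a Gaussian predictive distribution N(2, 1) and locate the
# predictor q minimizing the Monte Carlo estimate of E[check_loss(Y - q, b)].
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=200_000)
b = 0.8

grid = np.linspace(0.0, 5.0, 1001)
risks = [check_loss(y - q, b).mean() for q in grid]
q_hat = grid[int(np.argmin(risks))]

# The minimizer sits near the theoretical 0.8-quantile, 2 + z_{0.8} ~= 2.84,
# well away from the mean 2.0 that quadratic loss would target.
print(q_hat)
```

For b = 0.5 the check loss is symmetric and the minimizer reverts to the median, which for a Gaussian coincides with the mean; the asymmetry is what moves the target away from the mean.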