The Gibbs posterior is a useful tool for risk minimization: it adopts a Bayesian framework and can incorporate convenient computational algorithms such as Markov chain Monte Carlo. We derive risk bounds for the Gibbs posterior using general nonasymptotic inequalities, which yield nearly optimal convergence rates and allow model selection that optimally balances approximation error against stochastic error. These inequalities are formulated generally enough that the empirical risk need not be a sample average over independent observations. We apply this framework to study the convergence rate of the GMM (generalized method of moments) risk and to derive an oracle inequality for the ranking risk, where models are selected via the Gibbs posterior with a nonadditive empirical risk.
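To make the construction concrete, the following minimal sketch samples from a Gibbs posterior of the form exp(-lambda * n * R_n(theta)) * prior(theta) using a random-walk Metropolis step. It is an illustration under stated assumptions, not the paper's setup: the squared-error empirical risk, the standard normal prior, the learning rate lam, and all names are hypothetical choices for a toy location problem (the paper's framework notably does not require R_n to be such a sample average).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: estimate a location parameter theta under squared-error loss.
# (Illustrative assumption; the general theory allows nonadditive risks.)
data = rng.normal(loc=1.5, scale=1.0, size=100)

def empirical_risk(theta):
    # R_n(theta): average loss over the sample (here, squared error).
    return np.mean((data - theta) ** 2)

def log_gibbs_posterior(theta, lam=1.0):
    # log of exp(-lam * n * R_n(theta)) * prior(theta),
    # with an assumed standard normal prior on theta.
    n = len(data)
    log_prior = -0.5 * theta ** 2
    return -lam * n * empirical_risk(theta) + log_prior

def metropolis(log_target, init=0.0, n_iter=5000, step=0.2):
    # Random-walk Metropolis sampler targeting the Gibbs posterior.
    theta = init
    log_p = log_target(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        log_p_prop = log_target(prop)
        # Accept with probability min(1, target(prop) / target(theta)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = prop, log_p_prop
        samples.append(theta)
    return np.array(samples)

samples = metropolis(log_gibbs_posterior)
# Discard the first half as burn-in and report the posterior mean.
print("Gibbs posterior mean:", samples[len(samples) // 2 :].mean())
```

Here lam plays the role of the inverse temperature that trades off the empirical risk against the prior; replacing empirical_risk with a nonadditive risk (e.g., a ranking risk over pairs) leaves the sampler unchanged.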