A class of estimators for Monte Carlo integration is proposed that leverages gradient information on the sampling distribution to improve statistical efficiency. The novel contributions of this work are based on two important insights: (i) a trade-off between random sampling and deterministic approximation, and (ii) a new gradient-based function space. The proposed estimators can be viewed as a non-parametric development of control variates. Unlike control variates, however, our estimators achieve super-root-n rates of convergence, often requiring orders of magnitude fewer simulations to achieve a fixed level of precision. Theoretical and empirical results are presented, the latter focusing on integration problems arising in hierarchical models and in models based on non-linear ordinary differential equations.