Abstract: Achieving balance between experimental groups is a cornerstone of causal inference. Without balance, any observed difference may be attributable to differences other than the treatment alone. In controlled/clinical trials, where the experimenter controls the administration of treatment, complete randomization of subjects has been the gold standard for achieving this balance because it allows for unbiased and consistent estimation and inference in the absence of any a priori knowledge or measurements. However, since estimator variance under complete randomization may be slow to converge, experimental designs that balance pre-treatment measurements (baseline covariates) are in pervasive use, including randomized block designs, pairwise-matched designs, and re-randomization. We formally argue that no balance better than complete randomization's can be achieved without partial structural knowledge about the treatment effects. Therefore, the fact that balancing designs are in popular use, are advocated, and have proven themselves in practice implies that some structural knowledge is in fact available to the researcher. We propose a novel framework for formulating such knowledge using functional analysis. It subsumes all of the aforementioned designs in that it recovers them as optimal under different choices of structure, thus theoretically characterizing their underlying motivations and comparative power under different assumptions, and it provides extensions of these designs to multi-arm trials. Furthermore, it suggests new optimal designs that are based on more robust nonparametric modeling and that offer extensive gains in precision and power. In certain cases we are able to argue linear convergence O(2^(-n)) to the sample average treatment effect (as compared to the usual root-n rate O(1/sqrt(n))).
We theoretically characterize the unbiasedness, variance, and consistency of any estimator arising from our framework; solve the design problem using modern optimization techniques; and develop appropriate inferential algorithms to test differences in treatments. We uncover connections to Bayesian experimental design and extend the framework to deal with non-compliance.
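The core motivation above, that designs balancing baseline covariates can sharply reduce estimator variance relative to complete randomization when outcomes depend on those covariates, can be illustrated with a minimal Monte Carlo sketch. This is not the paper's method; it compares only two of the classical designs mentioned (complete randomization and pairwise matching on a single covariate), with a hypothetical outcome model y = x + tau*t + noise chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, reps = 100, 1.0, 2000  # hypothetical sample size, effect, replications

def run_experiment(assign):
    """One trial: outcomes depend strongly on a baseline covariate x."""
    x = rng.normal(size=n)
    t = assign(x)                              # 0/1 treatment assignment
    y = x + tau * t + 0.1 * rng.normal(size=n)  # illustrative outcome model
    return y[t == 1].mean() - y[t == 0].mean()  # difference-in-means estimate

def complete_randomization(x):
    """Assign exactly n/2 subjects to treatment, ignoring covariates."""
    t = np.zeros(n, dtype=int)
    t[rng.choice(n, n // 2, replace=False)] = 1
    return t

def matched_pairs(x):
    """Sort by the covariate, then randomize within adjacent pairs."""
    order = np.argsort(x)
    t = np.zeros(n, dtype=int)
    for i in range(0, n, 2):
        t[rng.choice(order[i:i + 2])] = 1
    return t

results = {}
for design in (complete_randomization, matched_pairs):
    estimates = np.array([run_experiment(design) for _ in range(reps)])
    results[design.__name__] = estimates.std()
    print(design.__name__, "sd of estimate:", round(results[design.__name__], 4))
```

Both designs are unbiased for tau, but the matched-pairs estimates concentrate far more tightly because each pair's covariate imbalance is nearly zero; this is the variance gap that motivates balancing designs in the first place.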