Over the past two decades, a wave of Bayesian explanations has swept through cognitive science, explaining behaviour in domains from intuitive physics and causal learning to perception, motor control and language. Yet people produce stunningly incorrect answers to even the simplest questions about probabilities. How can a supposedly Bayesian brain reason so poorly with probabilities? Perhaps Bayesian brains do not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain could be approximating Bayesian inference through sampling: drawing samples over time from its distribution over likely hypotheses. This project aims to put meat on the bones of this hypothesis by identifying the kinds of algorithms the brain uses to draw samples. Previous proposals of simple sampling algorithms neither match human data nor scale well to more complex probability distributions and hypothesis spaces. In our first work programme, we will investigate advanced sampling algorithms developed in computer science and statistics to identify which one the brain employs. A catalogue of reasoning errors has been used to argue against a Bayesian brain, but a Bayesian sampler conforms to the laws of probability only in the limit of infinite samples. In our second work programme, we will show how, with finite samples, the sampling algorithm we identify systematically generates classic probabilistic reasoning errors in individuals, upending the longstanding consensus on these effects. In our third work programme, we will apply the algorithm to group decision making: investigating how it offers a new perspective on biases in group decision making and errors in financial decision making, and harnessing it to produce novel and effective ways for human and artificial experts to collaborate.
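The finite-samples point can be made concrete with a minimal simulation (a sketch, not the project's actual model: the event probabilities, sample counts, and the assumption that each judgement draws its own independent samples are all illustrative). If each probability judgement is a Monte Carlo estimate from a handful of mental samples, a conjunction can be judged more probable than one of its own constituents, even though every individual estimate is unbiased:

```python
import random

random.seed(0)

P_A = 0.4        # hypothetical true probability of event A (assumed value)
P_A_AND_B = 0.3  # true probability of A-and-B; necessarily <= P_A
N_SAMPLES = 10   # assumed number of mental samples per judgement
N_TRIALS = 10_000

def estimate(p: float, n: int) -> float:
    """Monte Carlo estimate of a probability p from n independent samples."""
    return sum(random.random() < p for _ in range(n)) / n

# Fraction of trials in which the conjunction A-and-B is judged MORE
# probable than A alone -- impossible under the laws of probability,
# yet frequent when each judgement rests on separate finite samples.
fallacy_rate = sum(
    estimate(P_A_AND_B, N_SAMPLES) > estimate(P_A, N_SAMPLES)
    for _ in range(N_TRIALS)
) / N_TRIALS

print(f"conjunction-fallacy rate with {N_SAMPLES} samples: {fallacy_rate:.1%}")
```

As the number of samples grows, the fallacy rate shrinks towards zero, which is the sense in which a sampler conforms to the laws of probability only with infinite samples.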