We have two new preprints out on the arXiv stemming from our collaboration with Rob Spekkens at the Perimeter Institute. The first presents a new way of operationally determining the ‘Generalized Probabilistic Theory’ (GPT) that explains experimental results; we applied this method to optical three-level systems, which quantum mechanics would describe as qutrits. The second applies different causal accounts of a Bell experiment and uses statistical model selection to determine the most likely explanation for the results.

**Experimentally bounding deviations from quantum theory for a photonic three-level system using theory-agnostic tomography**

Michael Grabowecky, Christopher Pollack, Andrew Cameron, Robert Spekkens, Kevin Resch

Abstract: If one seeks to test quantum theory against many alternatives in a landscape of possible physical theories, then it is crucial to be able to analyze experimental data in a theory-agnostic way. This can be achieved using the framework of Generalized Probabilistic Theories (GPTs). Here, we implement GPT tomography on a three-level system corresponding to a single photon shared among three modes. This scheme achieves a GPT characterization of each of the preparations and measurements implemented in the experiment without requiring any prior characterization of either. Assuming that the sets of realized preparations and measurements are tomographically complete, our analysis identifies the most likely dimension of the GPT vector space describing the three-level system to be nine, in agreement with the value predicted by quantum theory. Relative to this dimension, we infer the scope of GPTs that are consistent with our experimental data by identifying polytopes that provide inner and outer bounds for the state and effect spaces of the true GPT. From these, we are able to determine quantitative bounds on possible deviations from quantum theory. In particular, we bound the degree to which the no-restriction hypothesis might be violated for our three-level system.

https://arxiv.org/abs/2108.05864
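A key fact underlying theory-agnostic tomography is that the matrix of outcome probabilities, with rows indexed by preparations and columns by measurement effects, has rank equal to the dimension of the GPT vector space. As a rough numerical illustration of that one fact (my own sketch, not code or data from the paper), one can simulate random qutrit preparations and effects and check that this rank comes out to 3² = 9:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3  # a qutrit

def random_state(d):
    # Random pure state, returned as a density matrix.
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def random_effect(d):
    # Random rank-1 effect, scaled so it lies between 0 and the identity.
    return 0.5 * random_state(d)

# Probability matrix: entry (i, j) = Tr(rho_i E_j), i.e. the probability
# of effect j given preparation i, as in the Born rule.
states = [random_state(d) for _ in range(40)]
effects = [random_effect(d) for _ in range(40)]
D = np.array([[np.real(np.trace(r @ E)) for E in effects] for r in states])

# The numerical rank of D is the dimension of the GPT vector space:
# d**2 = 9 for a quantum three-level system.
sv = np.linalg.svd(D, compute_uv=False)
rank = int(np.sum(sv > 1e-10 * sv[0]))
print(rank)  # 9
```

In the actual experiment the entries of this matrix are estimated from finite-count frequencies, so identifying the "most likely" rank requires the statistical model-fitting analysis described in the paper rather than a simple singular-value cutoff.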

**Experimentally adjudicating between different causal accounts of Bell inequality violations via statistical model selection**

Patrick J. Daley, Kevin J. Resch, Robert W. Spekkens

Abstract: Bell inequalities follow from a set of seemingly natural assumptions about how to provide a causal model of a Bell experiment. In the face of their violation, two types of causal models that modify some of these assumptions have been proposed: (i) those that are parametrically conservative and structurally radical, such as models where the parameters are conditional probability distributions (termed ‘classical causal models’) but where one posits inter-lab causal influences or superdeterminism, and (ii) those that are parametrically radical and structurally conservative, such as models where the labs are taken to be connected only by a common cause but where conditional probabilities are replaced by conditional density operators (these are termed ‘quantum causal models’). We here seek to adjudicate between these alternatives based on their predictive power. The data from a Bell experiment is divided into a training set and a test set, and for each causal model, the parameters that yield the best fit for the training set are estimated and then used to make predictions about the test set. Our main result is that the structurally radical classical causal models are disfavoured relative to the structurally conservative quantum causal model. Their lower predictive power seems to be due to the fact that, unlike the quantum causal model, they are prone to a certain type of overfitting wherein statistical fluctuations away from the no-signalling condition are mistaken for real features. Our technique shows that it is possible to witness quantumness even in a Bell experiment that does not close the locality loophole. It also overturns the notion that it is impossible to experimentally test the plausibility of superdeterminist models of Bell inequality violations.
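The train/test logic at the heart of the model-selection method can be illustrated with a deliberately simple toy (hypothetical data and models of my own, not the paper's causal models): a binary "setting" with no real influence on a binary outcome, where a chance fluctuation in the training half makes an over-flexible model mistake noise for a real feature, just as the overfitting-prone causal models mistake fluctuations away from no-signalling for real signalling.

```python
import math

# Hypothetical toy data (not from the paper): outcomes grouped by a
# binary setting that has no real influence. A chance fluctuation in the
# training half makes the setting *look* influential; the balanced test
# half reveals it was noise.
train = {0: [1, 1, 1, 0], 1: [0, 0, 0, 1]}  # outcomes by setting
test = {0: [1, 0, 1, 0], 1: [0, 1, 0, 1]}

def loglik(p, outcomes):
    # Log-likelihood of binary outcomes under outcome probability p.
    p = min(max(p, 1e-9), 1 - 1e-9)
    return sum(math.log(p) if o == 1 else math.log(1 - p) for o in outcomes)

# Model A (over-flexible): a separate outcome probability per setting.
# It fits the training fluctuation exactly -- and pays for it on the test set.
llA = sum(loglik(sum(train[s]) / len(train[s]), test[s]) for s in (0, 1))

# Model B (conservative): one shared outcome probability.
all_train = train[0] + train[1]
pB = sum(all_train) / len(all_train)
llB = sum(loglik(pB, test[s]) for s in (0, 1))

print(llA, llB)  # Model B achieves the higher test log-likelihood
```

Here Model A fits the training set perfectly yet predicts the test set worse than Model B; in the paper, the analogous comparison is between classical causal models with inter-lab influences or superdeterminism and the quantum common-cause model, scored on real Bell-experiment data.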