A Quantum Probability Framework for Causal Inference
Reasoning about the causal relationships between events is an important component of cognition, allowing us to make sense of the world. Arguably, the most successful models of causal reasoning, causal Bayes nets, perform well in some situations, but there is considerable variation in how well they are able to account for data, both across scenarios and between individuals. More generally, decades of research have shown that human decision-making often violates the rules of classical (Bayesian) probability theory. Quantum probability (QP) theory provides an exciting new approach for modeling human cognition and decision-making.
In this talk, I will discuss how QP theory can be used to construct a framework for causal reasoning that accounts for behavior in situations where Bayes nets fail. I will discuss how changing assumptions about compatibility (i.e., how joint events are represented) leads to the construction of a hierarchy of models, from ‘fully’ quantum to ‘fully’ classical, that could be adopted by different individuals in different situations.
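As a toy illustration of the compatibility idea (not a model from the talk; the belief state and angles below are made up), sequential judgments of two incompatible events can be sketched with non-commuting projectors, producing the order effects that a single classical joint distribution cannot:

```python
import numpy as np

def proj(v):
    """Projector onto the ray spanned by vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

def seq_prob(psi, P1, P2):
    """QP probability of affirming event 1 and then event 2: ||P2 P1 psi||^2."""
    return float(np.linalg.norm(P2 @ P1 @ psi) ** 2)

phi, theta = 0.3, 1.0                                  # illustrative angles
psi = np.array([np.cos(phi), np.sin(phi)])             # belief state (unit vector)
P_A = proj(np.array([1.0, 0.0]))                       # event A
P_B = proj(np.array([np.cos(theta), np.sin(theta)]))   # event B, incompatible with A

p_ab = seq_prob(psi, P_A, P_B)   # judge A first, then B
p_ba = seq_prob(psi, P_B, P_A)   # judge B first, then A
print(p_ab, p_ba)                # the two orders yield different probabilities

# Compatible (commuting) projectors recover the classical, order-free case:
P_C = proj(np.array([0.0, 1.0]))   # P_A @ P_C == P_C @ P_A
assert np.isclose(seq_prob(psi, P_A, P_C), seq_prob(psi, P_C, P_A))
```

In this sketch, treating more pairs of events as compatible (commuting) pushes the representation toward the fully classical end of the hierarchy described above.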
I will illustrate the approach with new laboratory experiments and model comparisons, and discuss two factors that determine the form of the representation: individual differences in cognitive thinking style and familiarity with the causal reasoning domain. I will conclude by showing how the framework can be used to understand real-world causal judgments, using a large (N=1200) experiment conducted during the US Presidential primaries involving judgments about the outcomes of primaries and the eventual nominations.
Integrative Physiological Modeling: Looking at a Larger Picture
One approach to modeling is the use of minimal models that portray only the elements believed to be most causative of a particular phenomenon. An alternate approach is to connect many such minimal models together through their inputs and outputs to generate an integrated model in which larger phenomena can emerge. These emergent features do not belong to the minimal models, but rather are characteristic of their interactions. By integrating well-understood mechanisms into a consistent whole, the role of the individual pieces can be more fully understood. If the simple models and their linkages are viewed as the hypothesis of a theory, the integrated model is the testable part of that hypothesis.
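As a schematic of this integration idea (purely illustrative; the parameters are not physiological values and this is not a model from the talk), two minimal models can be wired through their inputs and outputs so that the coupled system settles at an operating point neither component determines alone:

```python
# Toy sketch of integrative modeling; all numbers are illustrative.

class Heart:
    """Minimal model: cardiac output increases with venous return (a Starling-like gain)."""
    def __init__(self, gain=0.9):
        self.gain = gain

    def output(self, venous_return):
        return self.gain * venous_return + 1.0

class Circulation:
    """Minimal model: venous return relaxes toward cardiac output with time constant tau."""
    def __init__(self, tau=5.0):
        self.tau = tau
        self.venous_return = 0.0

    def step(self, cardiac_output, dt=0.1):
        self.venous_return += (dt / self.tau) * (cardiac_output - self.venous_return)
        return self.venous_return

# Integrated model: connect each minimal model's output to the other's input.
heart, circulation = Heart(), Circulation()
vr = 0.0
for _ in range(5000):
    co = heart.output(vr)        # heart responds to current venous return
    vr = circulation.step(co)    # circulation responds to cardiac output

# The equilibrium vr* = 1 / (1 - gain) = 10 is a property of the coupled loop;
# neither minimal model contains it on its own.
print(round(vr, 3))
```

Here the steady state is the "testable part of the hypothesis": it emerges only from the linkage, so comparing it against data tests the linked structure, not just the individual pieces.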
Such models have been used to great effect in physiology to create cohesive scientific theories where no single causative agent could be found. Examples include the role of the kidney in establishing hypertension and the complex interplay between the left and right heart in determining cardiac output. Integrative models have been valued in physiology for nearly 50 years, yet enormous gaps remain to be studied. Among these is the relationship between cognitive state and physiological function.
In this talk, I will summarize past and current efforts in integrative physiological modeling from groups around the world, with special attention paid to the knowledge that flowed from studying the emergent properties of such models. Additionally, I will discuss domains in physiology that we believe will require cognitive models for deeper understanding.
Automaton Theories of Human Sentence Comprehension
The ability to understand what other people are saying, in a language that you know, is an impressive feat of cognition. Within this domain, many fundamental questions remain open. Among them: how does sentence structure figure in the comprehension process? Why is comprehension so fast and effortless most of the time? And which parts of the brain do which subtasks? This talk argues that cognitive architecture gives us a good head start on these questions.
Presenting a few proposals based on Hale (2014), the talk invites modelers to join in the enterprise.
John Hale. Automaton Theories of Human Sentence Comprehension. CSLI Press, 2014.