Nelson J.D., Crupi V., Meder B., Cevolani G., and Tentori K., Beyond Shannon entropy:
A unified mathematical framework for entropy measures and its importance for understanding human active learning, 48th Annual Meeting of the Society for Mathematical Psychology (Newport Beach, July 20, 2015).
Abstract. Among the most important decisions people make are decisions about which test (or experiment) to conduct next. In medical diagnosis, for instance, a carefully chosen test can helpfully narrow the range of plausible diseases that the patient might have. In a probabilistic framework, test selection can often be predicted under the assumption that people aim to reduce the entropy (uncertainty) of their beliefs about the possible states of the world. For instance, the goal could be to conduct the test that, in expectation, leads to the lowest posterior uncertainty (entropy) about the patient's true illness. In psychology and medical decision making, reduction in Shannon entropy (information gain) is the predominant measure. But a variety of entropy measures (Hartley, Tsallis, Rényi, Arimoto, quadratic) are popular in different fields within the social sciences, the natural sciences, artificial intelligence, and the philosophy of science. Particular entropy measures have been predominant in particular fields, and it is not clear whether a measure's predominance in an individual domain is due to historical accident. We show that many entropy and information-gain measures arise as special cases of the Sharma-Mittal family of entropy measures. Using mathematical analyses, analyses of earlier human behavioral data, and simulations, we address:
(1) How do these different entropy models relate to each other? What insight can we obtain by considering the individual entropy models within this unified framework?
(2) What is the psychological plausibility of each possible entropy model?
(3) What important new questions for empirical research arise from these analyses, both with human subjects and in applied domains?
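To make the unifying claim concrete, the sketch below uses one common two-parameter formulation of the Sharma-Mittal entropy, with order r and degree t; this parameterization is a standard convention in the literature and is offered as an illustration, not necessarily the authors' exact formulation. Shannon (r = t = 1), Rényi (t = 1), Tsallis (r = t), Hartley (r = 0, t = 1), and quadratic (r = t = 2) entropies then fall out as special cases or limits.

```python
import math

def sharma_mittal(p, r, t, tol=1e-9):
    """Sharma-Mittal entropy of order r and degree t for a discrete
    distribution p (natural-log units). The limits r -> 1 and t -> 1
    are handled explicitly, since the general formula is singular there."""
    p = [x for x in p if x > 0]          # ignore zero-probability outcomes
    if abs(r - 1) < tol:
        h_shannon = -sum(x * math.log(x) for x in p)
        if abs(t - 1) < tol:
            return h_shannon             # r = t = 1: Shannon entropy
        # r -> 1 with t != 1: exponential transform of Shannon entropy
        return (math.exp((1 - t) * h_shannon) - 1) / (1 - t)
    s = sum(x ** r for x in p)
    if abs(t - 1) < tol:
        return math.log(s) / (1 - r)     # t -> 1: Renyi entropy of order r
    # General Sharma-Mittal case
    return (s ** ((1 - t) / (1 - r)) - 1) / (1 - t)

uniform = [0.25] * 4
print(sharma_mittal(uniform, 1, 1))      # Shannon: ln 4
print(sharma_mittal(uniform, 2, 1))      # Renyi order 2: ln 4 for uniform p
print(sharma_mittal(uniform, 0, 1))      # Hartley: ln(support size) = ln 4
print(sharma_mittal(uniform, 2, 2))      # Tsallis/quadratic: 1 - sum(p^2) = 0.75
```

For a uniform distribution all Rényi entropies coincide at ln N, which is why the first three calls agree; distinguishing the measures behaviorally requires non-uniform beliefs, which is precisely where the expected-entropy-reduction predictions of the family members diverge.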