Why are some behavior explanations more satisfying than others?
People are constantly trying to explain other people’s behavior. Just think about the last time you saw someone do something and asked yourself, “Why did they do that?”
Some behavior explanations seem better than others. For example, imagine that you saw a woman driving way too fast and you wanted to explain her behavior. One explanation is that she was late for an appointment — makes sense. Another explanation is that she was late for an appointment, her car’s speedometer was broken, and she didn’t know the speed limit. This explanation still makes sense but it might feel a little less satisfying because it’s too complicated; it over-explains the speeding. A much less satisfying explanation is that she was wearing jeans. What does wearing jeans have to do with driving too fast?
Based on examples like this one, Austin Derrow-Pinion, AJ Piergiovanni, and I hypothesized, in a paper published in Cognition, that people rely on two main factors when judging behavior explanations: simplicity and rational support. That is, people prefer simpler explanations, and they prefer explanations that make the behavior “make sense” on the assumption that the person is rational. We showed how both of these factors can be formalized using a framework called decision networks.
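To make those two factors a bit more concrete, here is a rough sketch of how they might be scored. Everything in it — the softmax choice rule, the count-based simplicity penalty, the utility numbers — is an illustrative simplification for this post, not the decision-network formalization from the paper.

```python
import math

# Toy sketch of the two factors (an illustrative simplification, not the paper's
# actual decision-network definitions). An explanation is a set of stated
# reasons; each reason bumps the utility of some actions for the actor.

def rational_support(observed_action, actions, utilities):
    """How strongly do the stated reasons favor the observed action?
    Here: the softmax choice probability of an approximately rational agent."""
    exps = {a: math.exp(utilities.get(a, 0.0)) for a in actions}
    return exps[observed_action] / sum(exps.values())

def simplicity(reasons, penalty=0.3):
    """Simpler explanations state fewer reasons; the score decays with their count."""
    return math.exp(-penalty * len(reasons))

# Toy version of the speeding example. "Wearing jeans" states a reason that
# gives no utility bump at all, so it lends the behavior no rational support.
actions = ["speed", "drive normally"]
explanations = {
    "late for an appointment": {"late": {"speed": 2.0}},
    "late + broken speedometer + unknown limit": {
        "late": {"speed": 2.0},
        "broken speedometer": {"speed": 1.0},
        "unknown limit": {"speed": 1.0},
    },
    "wearing jeans": {"jeans": {}},
}

for name, reasons in explanations.items():
    utilities = {}
    for bumps in reasons.values():
        for action, u in bumps.items():
            utilities[action] = utilities.get(action, 0.0) + u
    print(f"{name}: simplicity={simplicity(reasons):.2f}, "
          f"rational support={rational_support('speed', actions, utilities):.2f}")
```

In this toy version, “late for an appointment” scores well on both factors, the padded three-reason explanation gains a little rational support but loses a lot of simplicity, and “wearing jeans” is simple but gives the speeding no rational support at all.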
Do people actually rely on these factors when comparing different behavior explanations? After running a series of experiments, the answer seemed to be: It depends.
In one set of experiments, we told people about Lori, who arrived last to a meeting with three other people, all seated in a single row of chairs. Lori likes some of these people, dislikes some, and is indifferent toward others. People saw where Lori ended up sitting and then rated different explanations for why she sat there. For example, did she sit closest to Alice because she likes Alice, because she dislikes Carol, or because she likes Alice and dislikes Carol?
People did rely on both simplicity and rational support in their ratings, but they didn’t combine the two factors in the way predicted by our decision network model. Instead, their ratings were better predicted by another model that simply multiplied the scores for the two factors together (what we called our non-probabilistic model).
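Read literally, that multiplicative idea is easy to state: score each factor for an explanation, then multiply. Here is a minimal illustration for Lori’s seating choice, with made-up scores rather than the actual numbers from our models or experiments:

```python
# Made-up simplicity and rational-support scores, for illustration only;
# the real models assign these values in a more principled way.
candidates = {
    # explanation: (simplicity score, rational-support score)
    "likes Alice":                  (0.8, 0.6),
    "dislikes Carol":               (0.8, 0.6),
    "likes Alice + dislikes Carol": (0.5, 0.9),
}

# Predicted rating of each explanation = simplicity * rational support.
predicted = {name: simp * support for name, (simp, support) in candidates.items()}
for name, rating in sorted(predicted.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.2f}")
```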
In another experiment, we told participants about people who had tickets to a show with three stages, each featuring a different performer. Each person could choose which seating section they wanted (different sections were closer to different stages), but they might not know in advance which performer would be on which stage. After learning which section each person picked, participants rated different explanations for that choice. For example, if someone who doesn’t like clowns chose the section closest to the clown’s stage, was it because they didn’t know where any of the performers would be when they made their choice, or because they only didn’t know where the clown specifically would be (and perhaps knew where the magician would be)?
This time, we found that people again relied on rational support, but we found no evidence that they relied on simplicity. A follow-up experiment, designed specifically to test whether people use simplicity when comparing explanations that differ in what someone knew, also found no evidence of a simplicity preference.
In sum, our results indicate that people do prefer behavior explanations that provide rational support. They also suggest that people prefer simpler behavior explanations, but not when the explanations differ in what the person knew, like whether they knew the clown would be on Stage A.
Why is that? The result is somewhat surprising given past studies showing a preference for simpler causal explanations. More research is needed to say for sure, but one possibility is that people represent preferences and knowledge differently. With preferences, you can like, dislike, or be indifferent toward something; with knowledge, you either know something or you don’t. As a result, when an explanation doesn’t explicitly say that something is known, people might assume it is not known. If so, people may treat “Jacob believed that the clown would be on Stage A” and “Jacob believed that the clown would be on Stage A and the magician would be on Stage B” as equally simple, because the first explanation gets mentally filled in as “Jacob believed that the clown would be on Stage A and did NOT know which stage the magician would be on”.
Regardless, our work is a step toward a formal understanding of behavior explanation. Robots and AI systems are taking on increasingly social roles, and as they do, they will need to understand and explain our behavior, just as we do when we try to figure out why that woman was speeding. And they will need a formal method for doing so.