Category Archives: Jury Research

Articles and posts on the psychology of juries in criminal cases.

Recent Study Suggests Older Jurors Are More Likely To Convict

I saw that the folks at The Jury Room blogged today about a study from 2012 examining how age affects one’s likelihood to convict. Unsurprisingly, the authors found that juries with an average age over 50 were more likely to convict. From what I can tell, the researchers ran statistics on actual juries and their verdicts…which necessarily precludes controlling for the “strength” of the prosecution’s case. But it seems worth a closer look. I’m traveling for Christmas, so hopefully I’ll have some time to review the article and follow up in a few days with a full commentary.

Merry Christmas and Happy New Year in advance!

-Zachary Cloud

Psychology Weekends: What Does “Reasonable Doubt” Really Mean to a Jury?

There are some concepts in the law that are frustratingly amorphous. One of those is “proof beyond a reasonable doubt.” What does that even mean? Jury instructions purport to explain it but tend only to complicate matters. We know from past precedent that it can’t be quantified into some percentage of certainty and that it’s not a requirement of absolute, 100% certainty. But that’s about it…leaving one to wonder how jurors grapple with the concept and apply it in actual criminal trials.

Today, I’m gonna tell you how.

1. Jury Instructions Lower The Amount of Certainty Jurors Need to Feel When Deciding to Convict

There are two pieces of research that we should look at. The first, by Daniel Wright and Melanie Hall, is an interesting attempt to determine what level of confidence may actually be the threshold for reasonable doubt in the real world. The design of the study is fairly simple: have participants read a fact pattern and make a decision about the criminal defendant’s guilt. Next, have each individual rate his or her level of confidence in that guilt. Finally, determine the confidence “point” at which 50% or more of participants will vote to convict. Essentially, this applies a statistical method most common in pharmacology and biochemistry: finding the “LD-50,” the lethal dose at which 50% of a population would die.

The study then examines a far more intriguing question. Namely, can a judge’s instructions defining reasonable doubt have an effect on the belief of guilt itself? Or, put a different way: would a jury that’s been given no definition of reasonable doubt tend to have a different threshold for what’s enough evidence than a jury instructed on the concept? The researchers theorized that jury instructions telling jurors they didn’t need to be absolutely certain would have the effect of increasing the conviction rate.

To investigate this question, the researchers conducted two experiments. In the first, participants were assigned to one of two conditions: jury instruction given or no jury instruction given. Half of the participants were told only to find the fictitious defendant in a fact pattern guilty if they believed him guilty beyond a reasonable doubt. The other half were told this and then given a jury instruction clarifying that “beyond a reasonable doubt” does not require 100% certainty. Lastly, participants rated their confidence in their decision and provided a brief written explanation of why. The second experiment used the same methods as the first but with a substantially larger group of participants.

Some interesting results came out of the experiments. There was no statistically significant evidence to support the theory that detailed jury instructions make people more likely to convict. However, the instruction did affect the “LD-50” for how much proof is needed. That is, when jury instructions told people they needn’t be absolutely certain of guilt, the confidence level at which 50% of jurors chose to convict decreased.

This is both an intuitive and remarkable finding if you think about it. The results tell us that when people are given a jury instruction saying that total certainty isn’t necessary, people will convict on a 63% level of confidence. However, when no jury instruction is given explaining reasonable doubt, people will not convict unless they are 77% convinced of guilt.
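The LD-50-style threshold the authors report can be estimated from raw data in a fairly simple way. Here is a minimal sketch of one such estimator; it assumes hypothetical (confidence, verdict) pairs and uses linear interpolation between confidence levels, which is an illustration only and not necessarily the estimation method Wright and Hall actually used:

```python
from collections import defaultdict


def conviction_threshold(ratings):
    """Estimate the confidence level at which 50% of mock jurors vote
    to convict (analogous to an LD-50).

    ratings: list of (confidence_pct, convicted) pairs, one per juror,
    where convicted is truthy for a guilty vote.
    """
    by_conf = defaultdict(list)
    for conf, guilty in ratings:
        by_conf[conf].append(1 if guilty else 0)

    # Conviction rate at each confidence level, in increasing order.
    points = sorted((c, sum(v) / len(v)) for c, v in by_conf.items())

    # Find where the conviction rate first crosses 50% and interpolate.
    for (c0, r0), (c1, r1) in zip(points, points[1:]):
        if r0 < 0.5 <= r1:
            return c0 + (0.5 - r0) * (c1 - c0) / (r1 - r0)
    return None  # the rate never crosses 50%


# Hypothetical data: 1 of 4 jurors convicts at 60% confidence,
# 3 of 4 convict at 70% confidence -> estimated threshold of 65%.
sample = [(60, 1), (60, 0), (60, 0), (60, 0),
          (70, 1), (70, 1), (70, 1), (70, 0)]
print(conviction_threshold(sample))  # → 65.0
```

A logistic-regression fit would be the more conventional choice for real data; the interpolation above just makes the “point at which 50% convict” idea concrete.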

How do we square this with what I said earlier—that jury instructions don’t make people more likely to convict? It’s a little tricky, but the authors explain it as follows: “[A]lthough the instruction lowered the threshold for a guilty verdict, because belief in guilt was also lowered, there was no significant effect on the number of guilty verdicts.” (Wright & Hall, 2007: 96).

This study, like so many others, has the downside of relying on students for its participants. It also has the limitation of occurring in a setting remarkably different from an actual jury—these were participants in a university setting rather than jurors assembled in a courtroom. Further still, the study assesses how a juror makes decisions prior to group deliberation, and the power of group dynamics to alter one’s perspective can be strong. However, the limitations of this study can simultaneously be assets. Its crisp, simple design lets us have more confidence that the results are due to the variables manipulated rather than to intervening or moderating variables. It is also worth remembering that every jury is a group of jurors who sat down, watched a trial, heard judicial instructions about the law, and formed opinions before deliberations. Understanding how people form the individual opinions they then take with them into deliberations is quite important in and of itself.

2. Alternative Explanations for What Happened Tend to Decrease Likelihood of Conviction

That jury instructions may actually cause people to convict under lower certainty is alarming—at least to a practitioner and believer in a high burden of proof. Is there anything we can do to counteract this effect?

Thankfully, yes.

Researchers at the University of Virginia and Georgetown University considered whether creating an alternate narrative could reduce the likelihood of conviction. You see, in the early ‘90s, Nancy Pennington and Reid Hastie suggested the “Story Model” of jury deliberations. I talked a little bit about it here. A quick recap is that jurors work to build a story of what happened rather than performing a surgical breakdown of proof by each legal element.

The theory makes sense and is intuitive enough; one of its implications is that, where jurors are not sure which of two stories is the right one, there should probably be reasonable doubt. For example, if prosecutors allege a defendant committed a crime and the defendant presents credible evidence that it had to be someone else because he wasn’t at the scene, a jury should acquit. In fact, unless the jury decides that the defendant’s “story” is lacking in believability, they should acquit. But does this work in practice?

To find out, the authors designed an experiment where participants read a trial transcript. The prosecution case was designed to make it appear virtually certain that the defendant was guilty and the defense transcript introduced evidence that suggested another individual was actually the culprit. The experimenters varied the specifics of the alternate story such that some participants received a greater number of potential alternatives. Participants read the trial transcript for their assigned condition, then read a description of “beyond a reasonable doubt,” and finally made determinations of guilt. Notably, participants were asked to indicate their level of confidence in the guilt of the defendant and, where applicable, of any other suspects that the defense suggested.

The results show that an “alternative story” does indeed increase the likelihood of acquittal. It’s worth pointing out that participants who chose to acquit didn’t merely feel “less certain” of guilt but actually felt more certain in innocence. This effect is not a minor one—the researchers found a sizeable change in confidence when an alternative story was provided to jurors.

Hold your horses, because there’s more. Although an alternate story in which the defense offered additional suspects beyond the defendant decreased the likelihood of conviction, jurors still rated the defendant as “more likely” to be guilty than the other suggested perpetrators. This is quite an important point because it highlights that jurors actually were operating on the principles of “beyond a reasonable doubt.” They may have felt the defendant was the most likely of a group of suspects, but they didn’t feel convinced enough to convict. That’s justice as it should be.

This research seems like common sense. After all, providing juries with another story should, if it’s believed, make them less sure of the prosecution’s theory. Some might say this research just confirms what we already know…and yet, I’m not sure I’d agree. First, research can be valuable merely for providing empirical support for what we assume to be true. Second and more importantly, this disconfirms a concern I’ve had for some time: that jurors see all defense attorneys as just out to trick them or manufacture reasonable doubt. If jurors felt that way, they would pay no heed to an alternative theory or explanation. Accordingly, this gives me some encouragement that jurors take their jobs seriously and aren’t out just to convict.

3. Final Thoughts

These two pieces of research show us interesting things about juries. They show us that jury instructions on reasonable doubt can actually lower an individual’s threshold for the amount of certainty he or she needs to feel in order to convict. On the other hand, they also show us that jurors are willing to examine a defense’s contentions rather than flatly rejecting them or tending to presume guilt. If there are any take-home points, they would be: (1) push for a jury instruction that does not overly emphasize the nature of “beyond a reasonable doubt” as a lower standard than complete certainty, and (2) build your theory of the case to explain why the prosecution’s story is the incorrect one. Those two methods appear to have an impact on how juries process and evaluate their confidence in guilt.

The concept of proving something “beyond a reasonable doubt” has a long history of evading any real meaning. Judges who have tried to come up with concrete jury instructions have sometimes been reversed on appeal or otherwise criticized for meddling with the standard. Legally speaking, no one really knows exactly what it means. In truth, that’s probably no accident. For example, the law often uses a “reasonableness” standard or a “totality of the circumstances” test to avoid bright-line rules and make the determination of some legal issue rest on the facts. It is a way of turning fact-finders into legal decision makers. It is a way of building flexibility into the system. This is both a blessing and a curse because it allows for flexibility and equity while simultaneously removing any guidance. Jurors are almost always lay persons who don’t practice law for a living and would probably very much like more guidance and clarity than they receive. Yet the law refuses to tell them what to do with a detailed definition or explanation of “beyond a reasonable doubt.” Essentially, the law tells them: “use your best judgment based on what you understand this legal concept to mean.” In the face of such an instruction, jurors have worked hard to comply. And as the research presented here tends to show, those jurors tend to be doing a decent job.

-Zachary Cloud


Tenney, E.R., Cleary, H. M. D., & Spellman, B. A. (2009). Unpacking the Doubt in “Beyond a Reasonable Doubt”: Plausible Alternative Stories Increase Not Guilty Verdicts. Basic and Applied Social Psychology, 31, 1-8.

Wright, D.B. & Hall, M. (2007). How a “Reasonable Doubt” Instruction Affects Decisions of Guilt. Basic and Applied Social Psychology, 29, 91-98.

Psychology Weekends: Ingroup Effects on Sentencing

Last week, I noted that I went on a hunt for interesting studies. One of the ones I found is an interesting look at sentencing. Specifically, do people sentence members of their ‘ingroup’ more harshly than outgroup members?

A couple of thoughts up front before I dive into the research. First, this is a piece with some definite, real-world implications. People get sentenced every day in courts across this country. For the vast majority, the sentence is imposed by a judge, though, more broadly, prosecutors’ plea-deal recommendations also play a significant role. Second, I think this study has a particularly limiting component reducing external validity. It, like so much other research that psychologists do, relied upon college students. I think that such reliance could be particularly problematic in the realm of sentencing, where it’s essentially guaranteed that the person passing judgment will be older—and usually much older—than college-aged people. Generational effects can be powerful and likely play a role here that the current study was simply unable to identify.

That said, let’s take a look at the research. German researchers Mario Gollwitzer and Livia Keller (2010) investigated how members of a group punished repeat offenders considered to be part of the ingroup. They hypothesized that (1) people would punish a repeat ingroup offender more severely than a first-time ingroup offender, and (2) people would not display this effect when sentencing outgroup members.

In order to test their hypotheses, the researchers designed an experiment in which participants read a narrative describing a student who hid library material where only he could find it, in order to keep others from using it or gaining access to it. Participants were randomly assigned to an “ingroup” or “outgroup” condition. The ingroup condition held the perpetrator out to be a fellow psychology student, while the outgroup condition held the perpetrator out to be majoring in a different subject. Participants were also randomly assigned to either a “first-time” or “repeat offender” condition. Participants read the fact pattern they received, then answered questions measuring their “anger/outrage” at the crime and their “societal concerns” about such conduct. Finally, participants indicated whether they thought the perpetrator should be punished and, if so, how severely.

The results provided support for the researchers’ hypotheses. People gave repeat offenders harsher sentences when they were members of the ingroup. The evidence did not show the same effect for members of the outgroup: their prior records did not appear to drive the severity of their punishment.

It’s worth noting the authors’ discussion of these results. They suggest that the results are due to what the judging person (participant) thinks is threatened. A participant might see an ingroup perpetrator with a prior history as someone who threatens the “normative cohesion” of the group and its values. On the other hand, a perpetrator in the outgroup is more likely to threaten the ingroup’s “power and status” as a whole.

My thought on reading the study is that we should consequently expect outgroup members to be judged more harshly on average. Here’s what I mean: a judge may be particularly harsh with a member of the ingroup who has a history of bad acts but may be especially lenient on an ingroup member with no prior transgressions. However, if a judge sees outgroup behavior as a threat to his group overall, even first-time offenders should see at least a moderate punishment. So on average, the two types (ingroup v. outgroup) of perpetrators should either end up getting roughly equal treatment or, more likely, outgroup members should get it worse.

In addition to my caveat at the beginning about generalizing due to age, I also worry about how useful this particular case is. The researchers paint a student doing a bratty thing (hiding library books). Would the same results occur if the crime in question were severe such as beating someone to death or burning a building to the ground? What effect would it have if the act was a victimless one (e.g. speeding or possessing illegal narcotics)? I tend to think that punishment doesn’t act linearly. In other words, I think that a sentencing individual responds differently when the consequences of punishment are severe (e.g. life in prison) than when they are minor (e.g. probation). This point about the nonlinear psychology of sentencing is, from what I can tell, greatly under-researched. I would like to see more treatment of it.

Another flaw with the study is the questionnaire method. This is an easy way of conducting research and not necessarily as bad as you might think. It lets a researcher control for some otherwise observable traits such as race. For example, a written narrative can hide a person’s race by simply not mentioning that detail, but a live mock sentencing of a human being necessarily reveals it. So, there are indeed some advantages to using questionnaires. There are also some downsides. It is likely easier to punish a hypothetical individual on paper than to do so to a live person. Now, perhaps the Stanford Prison Experiment should suggest to me that participants would be just as willing to exact punishments on their living colleagues, but I would still like to see an acknowledgement of this limitation.

All in all, this is some interesting research. If true, it no doubt has some implications for how practitioners might want to approach sentencing matters. You may want to take into account if your client is from the same background as the judge who may sentence that client. I suspect usually, the client will be an outgroup member. But it’s never good to assume. And moreover, if your client has no prior record, it might be worth the effort to paint him or her as similar to the judging figure—it might just make the difference in getting lenient treatment.

-Zachary Cloud


Gollwitzer, M. & Keller, L. (2010). What You Did Only Matters if You Are One of Us: Offenders’ Group Membership Moderates the Effect of Criminal History on Punishment Severity. Social Psychology, 41, 20-26.

Psychology Weekends: What Makes A Police Officer Tick?

I went trawling around this afternoon through a variety of psychology journals to see if there was anything worth writing about. Lo and behold, I came across a brand new, intriguing article by two researchers in France about the psychological motivations of police officers. For this weekend’s post, I thought I’d take some time to review their work and give my thoughts on it.

The study by Gatto & Dambrun, entitled “Authoritarianism, Social Dominance, and Prejudice Among Junior Police Officers,” attempts to explain more or less why police exhibit observed social psychological traits such as Social Dominance Orientation and Right-Wing Authoritarianism. Before we dig in to the current study, let’s take a quick look at what those two traits are. Social Dominance Orientation refers to “the degree to which individuals desire and support the group-based hierarchy and the domination of ‘inferior’ groups by ‘superior’ groups” (Gatto & Dambrun, 2012: 61, quoting Sidanius & Pratto, 1999). The authors do a poorer job of defining Right-Wing Authoritarianism, though they note that it is a construct made up of three subcategories: aggression, submission, and conventionalism. I don’t want to get too hung up on defining the constructs because it’s not critical to understanding the study’s design or results. I will just note, however, that a little more time defining the constructs on the authors’ part would not have been time wasted.

At any rate, the authors note previous studies showing that officers tend to score high in these two traits…meaning that they strongly exhibit characteristics of Social Dominance Orientation and Right-Wing Authoritarianism. But why? What is it that makes a police officer exhibit these traits? That’s the question these researchers set out to solve. In essence, they put forward two possible hypotheses: (1) group socialization, or (2) social projection. That is, perhaps officers adopt these traits as a result of being socialized by their fellow officers, or perhaps they begin exhibiting traits they perceive as valued in the group in order to fit in.

So how did they test their hypotheses? The researchers administered a survey to newly hired police officers about to begin training, with questions assessing traits for the two constructs and also with questions measuring what they thought the desirable traits were among their peers and others in their organization. Finally, the participants answered questions designed to measure prejudice toward “disadvantaged groups.”

To determine the results, the authors examined descriptive data then ran the data through “EQS Structural Equation Program.” They found significant correlations between the two constructs and they also found that each construct showed a strong, positive correlation with bias against disadvantaged groups. More to the heart of the matter, the authors found statistical support for their theory that expected group norms drive individuals’ personality traits in the police context.

Okay. So here are my thoughts on the paper. First and foremost, I was a little dissatisfied with the reliance solely on a self-report measure. I am no opponent of using surveys; I think they can be particularly effective when used correctly. But here, I don’t think they were. It’s not really enough to analyze survey-item response correlations by themselves absent some sort of experimental manipulation. The authors didn’t have any “experimental” or “control” groups that might have allowed causal inferences to be drawn. Why not run the same survey with civilians who have no connection to law enforcement, then compare the civilians scoring similarly high on these two constructs with their police counterparts?

Another complaint with methodology is the reliance on “Structural Equations.” I know a fair amount about statistics in general and the statistical methods in the social sciences in particular…and I hadn’t ever heard of this term. It turns out to be a generic one that can refer to a variety of modeling techniques such as ANOVA, ordinary least squares regression, and the like. I would have preferred a little bit more detail regarding the specific tests being run…I certainly see familiar symbols like chi-square statistics but I am not entirely sure what to make of the data without a more descriptive explanation of the statistical models involved.

But perhaps the largest problem I see with the study is that it isn’t all that helpful in explaining or even investigating what the authors wanted to examine. In other words, if we’re interested in knowing why police tend to be so hierarchical and authoritarian, we’re no closer to knowing now than before the study was conducted. First, we’re facing a chicken-and-egg problem: do people high in these constructs tend to become police, such that the role reinforces pre-existing character traits, or does the job actually change a person? Secondly, it might be fine to say that groups of other officers tend to amplify or drive a new officer’s character traits, but then you have to turn around and ask what caused those other officers to act the way they act. Again, this study is no help in answering that.

All in all, I found this research a little disappointing. I had hopes when I started to read through it that we might have some pretty interesting insights into why police have the observed character traits that we often see (e.g. strong on authoritarianism, big on hierarchy, etc…). Regrettably, this study only took some baby steps toward answering that mystery. Perhaps next week, I can share some research that’s a little bit more insightful.

-Zachary Cloud


Gatto, J. & Dambrun, M. (2012). Authoritarianism, Social Dominance, and Prejudice Among Junior Police Officers. Social Psychology, 43, 61-66.

Sidanius, J. & Pratto, F. (1999). Social dominance: An intergroup theory of social hierarchy and oppression. New York: Cambridge University Press.

Psychology Weekends: You’re Probably Bad at Spotting Liars

I am dying to find some time to sit down and write a post about the various revelations about what the National Security Agency is doing. If you don’t know what I’m talking about then you live under a rock and need to go read here and here and here IMMEDIATELY. However, my discussion of that is for another time. Right now, it’s the weekend and we’re talking psychology…and I thought it might be sort of timely to talk about lying. Given all the news coverage about spying, a post on deception seems fitting. So, let’s talk about how good we are about spotting a person telling a lie and some steps we might take to improve our personal lie-detection abilities.

For quite some time, psychologists have been looking into humans’ ability to perceive deception. For instance, almost 20 years ago, DePaulo (1994) noted that people are generally quite bad at accurately rating the deceptiveness of other people. Participants who watched others discuss something (e.g., what they did for a living, how they felt about something, what they believed, what they knew) could not reliably determine which people were lying and which weren’t. The same effect holds true for law enforcement; when pitted against untrained college students, officers were no more accurate at detecting when a person was lying. Though it’s worth highlighting that the law enforcement officers thought they were better.

Some might be quick to point out that detecting a liar requires a baseline. In other words, you need to have some knowledge of how people respond when being truthful in order to spot when they act differently (e.g. give off some sort of tell). And indeed, polygraphs are useless unless a physiological baseline can be established. Nevertheless, most people are also bad at detecting lies told by people they know well (DePaulo, 1994).

All of this is old news. Let’s talk about what you may not have heard about. The first big point to note is that there are some minor exceptions to the general rule that we’re bad at spotting lies. Consider the work of Ekman and colleagues (1999), which shows that a small number of professionals have much greater accuracy than the rest of us. The researchers gathered a large number of different law enforcement and federal personnel including members of the CIA and Secret Service. Additionally, the study compared these law enforcement agents against clinical psychologists. Interestingly enough, the authors did find that federal agents trained in detecting behavior cues (e.g. facial muscle movements) outperformed other categories and in fact had a 70% or better accuracy rate.

Ok, but let’s say you don’t have the time or money to go through lots of formal training on what types of body cues suggest potential deception. Is there anything the average Joe can do to help spot a lie?

There might just be…

A fairly recent study suggests that the key may be in avoiding mimicking. You see, we very commonly imitate the physical behaviors of people with whom we’re communicating. This is so for a variety of reasons mostly related to rapport-building. It allows us to “smooth[] interaction” and “facilitate emotional understanding” (Stel, et al., 2009: 693). However, the researchers wondered if this tendency to mimic might also mask our ability to see others’ deception. To find out, they designed a study where participants were assigned to one of three conditions: (1) control group, (2) do mimic the target, or (3) don’t mimic the target. Thus, some participants received explicit instruction on whether they should mimic the potential liar during a conversation, while those in the control group received no instruction at all. Meanwhile, other participants were recruited and assigned as “targets.” They were given an opportunity to donate to a charity, then later told either to be “truthful” or “dishonest” with their conversation partner about whether they decided to donate. Then the participants paired up and had three-minute conversations in which the regular participants could interact and ask questions of the target. These sessions were videotaped for analysis by the researchers.

The results showed what the authors of the study hypothesized. Participants who were instructed not to mimic their target conversation partners were more accurate at judging whether the partner was lying. The effect held in the control group as well: people who naturally didn’t mimic their partners enjoyed the same advantage as participants instructed not to mimic.

Additionally, there is one other option you might consider if you want to improve your own accuracy in spotting lies. Recent research suggests that creating high cognitive loads and time pressures makes it much more difficult for liars to lie well. Walczyk and colleagues tested their “time restricted integrity confirmation (TriCon)” system of detection using methods similar to an Implicit Association Test. Namely, the experimenters had participants answer questions on a laptop designed to measure how long it took them to provide their answers. Of course, participants were told ahead of time to lie when they got to certain questions. The basic result—though the authors’ paper goes into more depth and describes two related experiments—is that people respond faster when they are being truthful.

The take-home point here seems to be that people under pressure will have a harder time lying. So, if you keep the heat on the person you suspect and put them under other cognitive load (e.g. multitasking while answering questions), you will probably see them take longer to answer on questions where they are not being truthful.
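In spirit, the response-time idea is just anomaly detection against a person's own baseline. Here is a minimal sketch of that logic; the function name, the z-score approach, and the 1.5 cutoff are all illustrative assumptions of mine, not values from the Walczyk study:

```python
from statistics import mean, stdev


def flag_slow_answers(times_ms, z_cutoff=1.5):
    """Return the indices of answers whose response time is unusually
    long relative to the respondent's own baseline -- a crude proxy for
    the extra cognitive load associated with deception.

    times_ms: per-question response times in milliseconds (at least two).
    z_cutoff: how many standard deviations above the mean counts as
    "unusually slow" (an illustrative choice, not a validated value).
    """
    mu, sd = mean(times_ms), stdev(times_ms)
    if sd == 0:
        return []  # no variation, nothing stands out
    return [i for i, t in enumerate(times_ms) if (t - mu) / sd > z_cutoff]


# Four quick answers and one very slow one: only index 4 is flagged.
print(flag_slow_answers([400, 420, 410, 405, 900]))  # → [4]
```

A slow answer is of course only a cue, not proof of a lie; the research merely says latency tends to rise when someone is fabricating under pressure.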

That’s all I have for this weekend. I should note that each of these studies presents at least a couple of areas worthy of skepticism or criticism—aspects of the study that might properly discourage you from relying too heavily on it. While I say that much, I leave it to you to read the studies and be your own critical thinkers…I just don’t have the time to do an in-depth analysis of each one’s methods, nor do you probably want that. Even still, I encourage you to read the studies yourself!

Until next time…

-Zachary Cloud


DePaulo, B.M. (1994). Spotting Lies: Can Humans Learn to Do Better? Current Directions in Psychological Science, 3, 83-86.

Ekman, P., O’Sullivan, M., & Frank, M.G. (1999). A Few Can Catch A Liar. Psychological Science, 10, 263-266.

Stel, M., van Dijk, E., & Oliver, E. (2009). You Want to Know the Truth? Then Don’t Mimic! Psychological Science, 20, 693-699.

Walczyk, J.J., Mahoney, K.T., Doverspike, D., & Griffith-Ross, D.A. (2009). Cognitive Lie Detection: Response Time and Consistency of Answers as Cues to Deception. Journal of Business and Psychology, 24, 33-49.

Psychology Weekends: Three New Studies You May Want to Read

This time around, I decided to do something shorter and more focused than last week’s psychology post. Today, I decided to peruse the latest volume of Law & Human Behavior to see what interesting studies have recently hit the press. Below, I’ll do a quick review of three that caught my eye.

1. “Postincident Conferring by Law Enforcement Officers: Determining the Impact of Team Discussions on Statement Content, Accuracy, and Officer Beliefs.”

This article by Lorraine Hope and Joanne Fraser (University of Portsmouth), and Fiona Gabbert (University of Abertay) presents work in which the authors investigated police discussing an incident with each other prior to writing up their official police reports. Essentially, the authors were interested in determining what effects it might have on police to “get their stories straight” (my phrasing, not theirs) before writing up an account.

To do that, the authors recruited roughly 300 police officers and organized them into teams averaging six officers each. The teams were randomly assigned to a “conferring permitted” or “conferring not permitted” condition and then responded to a mock incident. Afterward, officers either wrote up what happened and what actions they took without any discussion with colleagues, or were allowed to talk with their teammates before drawing up reports. All mock incidents were filmed so that the officers’ written statements could be checked for accuracy.

The results?

Interestingly, there was no evidence of a main effect on accuracy, improved or decreased, when officers were allowed to confer with each other before writing up reports. BUT, far more interesting is that officers who were allowed to chat with their teammates felt more confident about their accuracy.
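
To make that pattern concrete, here is a minimal sketch, on invented numbers, of the kind of comparison the authors report: conferring raises self-rated confidence even when accuracy is assumed similar. The 1–10 confidence scale, group sizes, means, and the use of Welch’s t-test are all my assumptions for illustration, not details from the paper.

```python
import math
import random

random.seed(1)

# Invented confidence ratings (1-10 scale): conferring officers report
# higher confidence, while accuracy is assumed similar across groups.
confer_conf = [random.gauss(7.5, 1.0) for _ in range(50)]
solo_conf = [random.gauss(6.5, 1.0) for _ in range(50)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

print(welch_t(confer_conf, solo_conf))  # well above 2: a reliable confidence gap
```

The real study, of course, measured both accuracy and confidence on actual report content; this only shows the shape of the confidence comparison.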

The take-home?

As I see it, the big take-home point is this: police who have had a chance to “get their stories straight” post-incident will likely present as more sure of themselves. That could be a double-edged sword at trial. If the officer’s version of events is effectively contradicted by other evidence, his credibility may be called into question; then again, that confidence might also make jurors more likely to believe him. In much the same way, an officer who is less sure may carry less weight with the jury overall but simultaneously look more human and less biased for acknowledging the shortcomings of memory.

Full Citation:

Hope, L., Gabbert, F., & Fraser, J. (2013). Postincident Conferring by Law Enforcement Officers: Determining the Impact of Team Discussions on Statement Content, Accuracy, and Officer Beliefs. Law and Human Behavior, 37, 117-127.

2. “False Alibi Corroboration: Witnesses Lie for Suspects Who Seem Innocent, Whether They Like Them or Not.”

This is an interesting piece of research. The authors wanted “[t]o test the commonly held assumption that individuals who share a personal relationship are more likely to lie for one another than are strangers.”

Here’s how the study was set up: in a 2 x 2 design, participants were assigned to either a “friendship-enhancing” or “stranger-maintaining” condition. They were placed in a room with a confederate who either worked with them to complete a group project or did not. Next, the confederate left the room and returned either with some money or empty-handed, creating the “apparent theft” and “no apparent theft” conditions.

The next stage was interrogation. Both the confederates and participants were questioned by an experimenter. At this point, the confederate denied ever leaving the testing room and claimed to have been with the participant the whole time. The experimenter then split the confederate and participant up and asked the participant whether that statement was true.

The results?

The participants would lie. They would back up the confederate’s false statement about never leaving the room. However, they were more likely to lie when there was no incriminating evidence, that is, when the confederate had returned to the testing room without any money. When the confederate returned with money, participants were less often willing to lie and cover for the confederate.

Particularly noteworthy, though: there was no evidence that the “friendship-enhancing” or “stranger-maintaining” condition had any effect on the likelihood of a participant lying. As the authors put it, “it would seem that people tend to underestimate how likely it is that strangers, or individuals sharing distant social relationships, would lie for one another.”
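
The pattern of results can be sketched with a simple two-proportion z-test on the 2 x 2 table. The cell counts below are invented for illustration (the paper’s actual numbers aren’t reproduced here); they merely encode “evidence matters, relationship doesn’t”:

```python
import math

# Invented (lied, total) counts per cell of the 2 x 2 design:
# keys = (relationship condition, evidence condition).
counts = {
    ("friend", "no_money"): (18, 30),
    ("friend", "money"): (8, 30),
    ("stranger", "no_money"): (17, 30),
    ("stranger", "money"): (9, 30),
}

def two_prop_z(l1, n1, l2, n2):
    """Two-proportion z statistic for a difference in lying rates."""
    p1, p2 = l1 / n1, l2 / n2
    pooled = (l1 + l2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Main effect of evidence: collapse over the relationship factor
lied_no_evidence = counts[("friend", "no_money")][0] + counts[("stranger", "no_money")][0]
lied_evidence = counts[("friend", "money")][0] + counts[("stranger", "money")][0]
z_evidence = two_prop_z(lied_no_evidence, 60, lied_evidence, 60)

# Main effect of relationship: collapse over the evidence factor
lied_friend = counts[("friend", "no_money")][0] + counts[("friend", "money")][0]
lied_stranger = counts[("stranger", "no_money")][0] + counts[("stranger", "money")][0]
z_relationship = two_prop_z(lied_friend, 60, lied_stranger, 60)

print(z_evidence, z_relationship)  # sizable z for evidence, near zero for relationship
```

With these made-up counts, the evidence comparison is large while the relationship comparison hovers near zero, mirroring the study’s reported null.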

The take-home?

I find this study encouraging. It seems to provide evidence that people are less concerned with whether they like you than with whether there is evidence against you. Of course, absence of proof isn’t proof of absence: the authors simply didn’t find a statistically significant difference between the “friendship-enhancing” and “stranger-maintaining” conditions when evaluating likelihood of lying. That doesn’t prove there is no effect; it’s just a lack of hard evidence for one. Perhaps the bigger take-home is that we’re fairly willing to cover for strangers who stand accused. There must be something to this… that we’re willing to stick up for each other even at the potential cost of our own reputation for truthfulness. Interesting indeed.

Full Citation:

Marion, S.B. & Burke, T.M. (2013). False Alibi Corroboration: Witnesses Lie for Suspects Who Seem Innocent, Whether They Like Them or Not. Law and Human Behavior, 37, 136-143.

3. “Callous-Unemotional Traits Robustly Predict Future Criminal Offending in Young Men.”

This study’s result is fairly intuitive, and I won’t spend too much time on it. I thought it worth pointing out because it provides further (and perhaps better) empirical support for the general notion that certain traits in adults tend to predict the likelihood of committing crimes. Specifically, the traits include lack of empathy, deficient guilt/remorse, and shallow affect.

The authors point out that prior research on this topic has not necessarily separated the predictive effect of such personality traits from intermediary variables such as prior offense records, association with delinquent individuals, and so on. This study sought to isolate the potentially confounding variables driving likelihood to commit crime and see whether Callous-Unemotional (CU) traits had predictive value in and of themselves.

In order to do this, the authors assessed participants in their homes. They administered self-report scales measuring various CU traits and also gathered information on the participants’ arrest records, convictions, alcohol/substance use, and demographics (education, employment, marital status, ethnicity, etc.). The authors followed up with each participant (the average follow-up period was 3.5 years) to see what criminal offending, if any, the participants had engaged in. They then built a logistic regression model to estimate the effect of the CU measures and other variables on likelihood to offend.
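
Since the authors describe building a logistic regression, here is a minimal sketch of that kind of model on simulated data. The predictors (a composite CU score and a prior-arrests covariate), effect sizes, and sample size are all assumptions of mine, used only to illustrate the logic of a predictor retaining its effect after a covariate is controlled for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Invented predictors: a standardized composite CU score and prior arrests
cu_score = rng.normal(0.0, 1.0, n)
prior_arrests = rng.poisson(1.0, n).astype(float)

# Simulate offending so CU has an effect beyond the prior-arrests covariate
logits = -1.5 + 0.8 * cu_score + 0.5 * prior_arrests
offended = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit the logistic regression by Newton's method (intercept + two predictors)
X = np.column_stack([np.ones(n), cu_score, prior_arrests])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    gradient = X.T @ (offended - p)
    hessian = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hessian, gradient)

# beta[1] (the CU coefficient) stays positive with prior arrests in the model
print(beta)
```

In the study itself, the question is whether the real CU coefficient survives real covariates; the simulation only shows the shape of that test.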

The result?

Self-reported CU factors were a “robust” predictor of a person’s likelihood to offend and held their predictive power even after other “well-established risk factors” had been controlled for in the regression model. The authors found that CU traits predicted arrests and criminal charges. However, they did not find that any one particular CU trait (e.g. lack of empathy) had predictive power on its own.

The take-home?

I don’t find it all that surprising that people whose personality traits don’t align with the status quo would frequently be at odds with the law. A person who doesn’t place the same value on your right to property or your right to be left alone might well do things that cause him trouble with the law. What I found worth pondering is that no one trait is predictive; in other words, you need some critical mass of CU traits. The wider implications of that could be scary. Might employers, insurers, or the government start evaluating people on their CU traits and trying to predict who will offend before it happens, Minority Report style? And even more fundamentally, does this observed effect indicate a health issue we need to address? Would it be arrogant to label people scoring high on these CU predictors as ‘unhealthy’ or ‘ill’? Those are larger questions, perhaps to be answered another time.

Full Citation:

Kahn, R.E., Byrd, A.L., & Pardini, D.A. (2013). Callous-Unemotional Traits Robustly Predict Future Criminal Offending in Young Men. Law and Human Behavior, 37, 87-97.

4. Conclusion

Well, that’s it, folks. I just wanted to take a glimpse at some new work and leave you with food for thought and material to digest should you so choose. Until next time…

-Zachary Cloud

Psychology Weekends: Why We Tend To Presume A Criminal Suspect is Guilty

For this inaugural weekly post on psychology, I want to explore a topic that has intrigued me for a long time: the tendency to assume suspects are guilty of the crimes they’re charged with. I hear, with some frequency, the basic sentiment that people arrested for crimes are probably guilty. People say things such as “why would the police arrest him if he weren’t?” or “he looks like he did it” or things of a similar nature. Of course, these are snap judgments made by uninvolved people… not informed decisions of jurors who have heard evidence from both sides and come to a reasoned conclusion. Why does this happen? Why are people quick to presume guilt, particularly when they have such limited information (i.e. that which they hear, see, or read in the media)? That’s the topic I want to delve into. This is an informal piece in which I’ll propose a set of possible factors at play, based on a review of known psychological phenomena.

1. A Hypothetical

Let me start with a scenario. A rash of break-ins occurs in a middle-class neighborhood of your city. Over the past couple of weeks, almost a dozen good, honest, hard-working people like you have had their privacy invaded and their possessions stolen. The hot items tend to be electronics (e.g. flat screens, Blu-ray players, laptops) and cash. Less frequently, jewelry goes missing. The vast majority of these break-ins occur during the day, when the homes are unoccupied.

The local news outlets have all been reporting on this disturbing trend. Then, one day the media announce that police have made an arrest in connection with the burglaries. A shot of the perp in cuffs appears on the TV along with a sound bite from the local police head saying: “We got him!”

Okay, now let’s pause. How do you feel?

Yes, yes, I know this is just a hypothetical, but I have little doubt you’ve come across real-life scenarios quite similar to the one I described above. So, how does it make you feel? I’m probably not too far off the mark to guess that it made you feel relieved. And why would you feel relieved? Because you presumed that the person who posed a threat to people like you (i.e. good, honest, hard-working people) is now in custody and that you won’t be at risk of a break-in yourself.

Your assumption is in opposition to the law. Any person who is charged with a crime is legally presumed innocent. The United States Supreme Court said so way back in the 19th century, at which point the presumption of innocence was already deemed “undoubted law, axiomatic and elementary … .” Coffin v. United States, 156 U.S. 432, 453 (1895). Now, if we wanted to, we could get into an extended discussion of what it means to be found legally guilty of committing a crime because there is, after all, a distinction between whether a person acted in a certain way and whether such action would subject him to criminal penalty. Honestly, though, I don’t want to get bogged down in that discussion. What I’m interested in is why we tend to believe people did what they’re accused of doing.

2. Basics of Ingroup / Outgroup Bias

It seems to me that at least part of the reason people tend to assume a criminal suspect’s guilt has to do with the basics of the “ingroup” and the “outgroup.” There is plenty of research on this topic, and the basic effect is that we tend to prefer people we view as similar to ourselves and to dislike people who are dissimilar. People like us are part of the ingroup; people we view as different are part of the outgroup. Researchers have provided evidence that we often see members of our ingroup as relatively diverse while perceiving outgroup members as much more similar to each other (termed the outgroup homogeneity bias). This is the essential psychological mechanism at play when we say people of a different race “all look the same” (Ackerman et al., 2006). Jost and colleagues (2004) provide a decent review of some of the major implications for those wishing to get up to speed.

As Ackerman and colleagues (2006) also rightly noted, the outgroup homogeneity bias is a function of our reliance on heuristics. Rodin (1987) provided a series of experiments on how we allocate our cognitive resources and “disregard” things and people we are not interested in paying much attention to. In other words, “[w]hen there are multiple events or stimuli competing for limited attentional resources, attention is mainly allocated to those that are most informative or surprising” (Rodin, 1987: 146).

Now let’s apply this. The label of “criminal” is a way of categorizing people, presumably people in your outgroup. I’m guessing you likely don’t view yourself as a criminal. In fact, you probably don’t consider yourself a criminal even if you have had some trouble with the law in the past. Say you got a DUI once, or maybe as a kid you were charged with trespassing. It was a slip-up, a limited circumstance. You’re not a criminal, just a person who made a mistake. And if you’ve never been charged with breaking the law, then your belief that you aren’t a criminal will be doubly true.

Similarly, you probably aren’t arrested very often, if ever. Again, some of you might have had a minor incident here or there, but that’s it. So, when you see someone on TV in handcuffs, it’s hard to relate.

From every angle, your conception of yourself as different from a “criminal” helps create an “outgroup” mentality. People like you don’t do bad things, nor do the police hassle you or arrest you. Therefore, the people who are arrested must be different. They must be in the outgroup.

3. Trusting Authority’s Ability to Get the Right Guy

The next component that I think feeds the tendency to presume a criminal suspect’s guilt is our predisposition to trust certain authority figures. As Glanville and Paxton (2007) demonstrated, our tendency to trust is influenced in large part by social learning. That is, we learn from our particular life experiences whether people are generally trustworthy or not. This general method of forming trust can apply to the government and its actors. For example, concerns over security relate to our willingness to trust government, a dynamic that was particularly clear in our response to government after 9/11 (Chanley, 2002). And, as one political philosopher and psychologist has described it, “when in doubt, follow a lead” (Searing, 1995: 680). Indeed, Searing (1995) might agree that a small portion of our trust is reciprocity-based: we reward the government for keeping us safe by giving it our trust.

So again, some application. The police say that they have caught a burglar or a bank robber. We assume that they are acting in good faith…that they have pieced together evidence and leads to find the right guy and place him under arrest.

4. Why Trusting Authority’s Ability Is Problematic

Here’s why that’s a bad assumption. Police are not necessarily any better at figuring out whether someone is guilty or innocent than you are. Kassin and colleagues (2003) examined the ability of police to determine guilt through a series of experiments. Drawing on the work of Akehurst et al. (1996), Ekman & O’Sullivan (1991), and DePaulo & Pfeifer (1986), Kassin sought to confirm that police “hold many false beliefs about the behavior indicators of deception and that groups of ‘experts’ who make such judgments for a living … are highly prone to error” (Kassin et al., 2003: 188).

The result? Kassin found that investigators with “guilty expectations” were not only predisposed to interpret suspects’ behavior and responses as indications of guilt, but also led to utilize interrogation techniques “which constrained the behavior of suspects and led others to infer their guilt—thus confirming the initial presumption” (Kassin et al., 2003: 199). In other words, police and other law enforcement seem particularly prone to confirmation bias: the tendency to interpret information in a way that “confirms” a preconceived belief. For example, a person who dislikes Chevrolets will give stronger weight to negatives he learns about the brand and is less likely to pay attention to, or give equal weight to, positives. In sum, Kassin and colleagues suggest that law enforcement’s habit of entering the interrogation room already convinced they have the right guy both makes them particularly prone to confirmation bias and probably leads them to subject a criminal suspect to high levels of coercion at the same time. The problem with coercion, of course, is that it reliably translates into false confessions, because suspects think saying what police want to hear will help them end the encounter.

In other words, investigators are every bit as prone to bias and to getting it wrong as you and I are. What’s worse, there is some evidence that a suspect’s attempt to explain his innocence or present another story will not only be disregarded but may actually be taken as further evidence of his guilt. Kassin observed that “it appears that [investigators] interpreted [an innocent person’s] denials as proof of a guilty person’s resistance—and redoubled their efforts to elicit a confession” (2003: 200). This observation aligns with newer research from Esposo and colleagues (2013), which illustrates that we generally refuse to legitimately consider the possibility that a person’s argument is correct when that person is a member of an outgroup.

5. Concluding Remarks

In this post, I’ve put forward the mechanisms that seem to be at play in our tendency to presume a criminal suspect guilty. I have not come across research that empirically tests how common it is for us to consider an arrested person guilty, nor any articles proposing models for the apparent effect. Accordingly, this is my informal, original suggestion of what is probably going on… or at least of which known psychological tendencies probably play a role.

I have suggested that many people presume criminal suspects guilty because they view “criminals” as members of an outgroup and because they trust that police do a good job of catching the right guys. I then reviewed studies providing strong psychological evidence that law enforcement aren’t necessarily any better than the rest of us at figuring out who is guilty and who is innocent.

For the most part, I aimed to keep this post descriptive rather than normative. I didn’t set out to go on a long tirade about how awful people are for presuming everyone guilty rather than following the grand tradition of “innocent until proven guilty.” However, I do think it’s appropriate at this point to remind ourselves of a valuable college lesson: be a critical thinker.

We have a lot of information to process every day. It’s overwhelming, and it has led us, quite understandably, to use short-cuts for processing. These heuristics let us parse all the data we receive quickly and efficiently. We put people into ingroups and outgroups. We make generalized determinations about whom to trust. All of this saves us time and psychological energy. Unfortunately, it’s also lazy and dangerous. When a person’s liberty is at stake, as it is for the criminal suspect, we owe it not just to him but to ourselves to be critical thinkers. We owe it to ourselves because the legitimacy of our justice system depends upon it. The respect for process and justice that we desire can only come from treating such judicial mechanisms seriously and giving them a rigorous foundation to rest upon. Our ability to think through things carefully, critically, and rationally is not in question; our discipline to actually behave that way is. If we are willing to treat matters like guilt and innocence with proper gravity, then we have gone a long way toward ensuring that our system is fair and just. And a system that is fair and just is good for all of us.

-Zachary Cloud


Ackerman, J., et al. (2006). They All Look the Same to Me (Unless They’re Angry): From Out-Group Homogeneity to Out-Group Heterogeneity. Psychological Science, 17, 836-840.

Akehurst, L., Kohnken, G., Vrij, A., & Bull, R. (1996). Lay persons’ and police officers’ beliefs regarding deceptive behaviour. Applied Cognitive Psychology, 10, 461-471.

Chanley, V. (2002). Trust in Government in the Aftermath of 9/11: Determinants and Consequences. Political Psychology, 23, 469-483.

DePaulo, B. & Pfeifer, R. (1986). On-the-job experience and skill at detecting deception. Journal of Applied Social Psychology, 16, 249-267.

Ekman, P. & O’Sullivan, M. (1991). Who can catch a liar? American Psychologist, 46, 913-920.

Esposo, S., Hornsey, M., & Spoor, J. (2013). Shooting the messenger: Outsiders critical of your group are rejected regardless of argument quality. British Journal of Social Psychology, 1-10.

Glanville, J. & Paxton, P. (2007). How do We Learn to Trust? A Confirmatory Tetrad Analysis of the Sources of Generalized Trust. Social Psychology Quarterly, 70, 230-242.

Jost, J., Banaji, M., & Nosek, B. (2004). A Decade of System Justification Theory: Accumulated Evidence of Conscious and Unconscious Bolstering of the Status Quo. Political Psychology, 25, 881-919.

Kassin, S., Goldstein, C., & Savitsky, K. (2003). Behavioral Confirmation in the Interrogation Room: On the Dangers of Presuming Guilt. Law and Human Behavior, 27, 187-203.

Rodin, M. (1987). Who is Memorable to Whom: A Study of Cognitive Disregard. Social Cognition, 5, 144-165.

Searing, D. (1995). The Psychology of Political Authority: A Causal Mechanism of Political Learning Through Persuasion and Manipulation. Political Psychology, 16, 677-696.