The polygraph machine may be a failed experiment in reading minds, but authorities still want reliable lie detection. Criminal courts could one day peer directly into a person’s brain for evidence.
This would be accomplished using functional magnetic resonance imaging (fMRI), turning a medical procedure into the new lie detector. fMRI measures blood oxygenation in the brain over time, correlating brain states with ongoing behavior under different conditions. Inferences are drawn by contrasting measurements taken under “lying” and “not-lying” conditions.
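To make the contrast logic concrete, here is a minimal, purely illustrative sketch (not any vendor’s actual method): it simulates per-trial BOLD signal values from a single hypothetical brain region under the two conditions and computes a two-sample t statistic, the kind of condition-contrast test such studies rely on. All numbers are made up for illustration.

```python
# Illustrative sketch of a condition contrast, the statistical core of
# fMRI lie detection studies. The signal values below are simulated, not
# real BOLD data.
import random
import statistics

random.seed(0)

# Hypothetical per-trial BOLD signal from one region of interest.
lying = [1.2 + random.gauss(0, 0.3) for _ in range(20)]      # "lying" trials
truthful = [1.0 + random.gauss(0, 0.3) for _ in range(20)]   # "not-lying" trials

def t_statistic(a, b):
    """Two-sample t statistic: difference of means over the standard error."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = t_statistic(lying, truthful)
print(f"t = {t:.2f}")  # a large |t| would suggest the conditions differ
```

In a real study this test is run voxel by voxel across the whole brain, which is exactly why small samples and individual variability (the criticisms discussed below) make the results hard to generalize.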
Government and private companies alike are pushing acceptance of brain-scanning for evidence. In its post-9/11 paranoia, the US poured money into the development of fMRI lie detection as a tool of interrogation.
Private companies such as No Lie MRI and Cephos Corporation already provide the service of fMRI lie detection for employment screenings, and are seeking its use in criminal court. No Lie MRI cited a study funded by the US Defense Advanced Research Projects Agency (DARPA) as part of its justification.
fMRI is a valuable tool in the field of cognitive science, giving us insight into psychological and neural processes. However, its usefulness in lie detection is questionable.
Zachary Shapiro of Harvard Law points out that neuroimaging for the purpose of lie detection is not generally accepted in the scientific community, as peer-reviewed studies rely on small samples that are not representative of the general population.
Other factors, such as a subject’s ability to fool the machine with countermeasures and the risk of jury bias created by the veneer of scientific technology, may outweigh its potential benefits.
“Finally, there is a concern that deception is far from a homogeneous behavior, and is still poorly understood by neuroscience. Indeed, there are many types, gradations, and motivations for not telling the absolute truth. It is possible that different types of lying have different neural correlates, which could confound anybody trying to generalize fMRI results to create a specific image of what a ‘lying’ brain looks like. It is also possible that the level of stress, degree of rehearsal, and other factors could influence neuroimaging techniques in ways we do not yet fully understand.”
At present, courts generally do not accept fMRI evidence. If it does become admissible, as the technology improves and proponents refine their arguments, its effects would be multifaceted.
On one hand, it could exonerate innocent people or help convict murderers and rapists. On the other hand, compelling a person to submit to fMRI can be viewed as an invasion of privacy. And the possibility of error would always be present.
Psychological research shows that interrogators can convince individuals that they committed a crime that never happened. Hold a person in an isolated and uncomfortable setting long enough and they will begin to believe they’ve committed a crime.
Beyond memories fabricated under interrogation, research shows that all humans are capable of creating, and in fact do create, false memories of events that never occurred.
Could fMRI evidence be admitted without the consent of the defendant? Or would our brain processes be treated as speech?
“…if classified as physical evidence, the recording of brain activity by way of neuroimaging could be compelled in criminal cases and used against the accused by the prosecution. On the other hand, if neuroimaging evidence is considered testimony, then the argument could be made that any inclusion would violate the defendant’s right against self-incrimination.”
Will our brain patterns be the last frontier of privacy? If anything is our own, it is our thoughts. Considering how convoluted and hypocritical the justice system is, introducing brain-scanning for evidence leaves too much room for manipulation. The police state will have no reservations about latching onto this technology.