Spotting AI Fakes: Can Art Historians Help?


IF RECENT HEADLINES are any indication, one of the most pressing issues right now is the threat posed by fake or manipulated images. The wide availability of generative AI, along with the increasingly user-friendly interface of image editing software like Photoshop, has enabled most people with a computer and internet access to produce images that are liable to deceive. The potential dangers range from art forgery to identity fraud to political disinformation. The message is clear: images can mislead, and the stakes are high. You should learn to tell the real from the fake.

Or should you?


The most recent headline grabber is an instructive case in point. A suspect photo of Princess Kate offered grist to the churning mill of royal conspiracy theorists. To mark British Mother’s Day, Kensington Palace released a photo of Middleton with her three children, the first photograph of her to be published since she had surgery in January. Major news agencies like the Associated Press promptly killed the photograph, citing anomalies that cast doubt on its authenticity. Rumors exploded, and Middleton subsequently issued an apology, claiming responsibility for the bad Photoshop job before revealing why she had sought privacy: the princess has cancer.

Before all this was clarified, journalists identified the characteristic tells of a manipulated, or outright fabricated, image in the Middleton photo. Their close attention to these attributes is not unlike how I, as
an art historian, examine a painting. Such signs, amounting to what one might think
of as connoisseurship in the age of digital images, include:

  • things that do not line up (patterns in tile, parts of garments)
  • skin that looks unnaturally smooth (a crude programmatic version of this check appears below)
  • hands that are excessively elongated or unnaturally posed
  • spatial warping, or incongruous planes
  • misaligned reflections and shadows
  • a wholly blurred background lacking in specificity
  • problems with object permanence (say, whether a cane appears over and under the same limb)
  • garbled text

Illustration by Kat Brown.
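
At least one of these tells can be roughed out in code. The sketch below is a minimal, strictly illustrative Python example rather than a forensic tool: it uses the common variance-of-Laplacian measure to score how much fine texture an image contains, and unnaturally smooth skin tends to depress that score. The file name is hypothetical, and a low score is a prompt for closer looking, not a verdict.

```python
# Illustrative sketch only: score an image's fine texture with the
# variance of a Laplacian filter. Retouched or AI-smoothed regions
# tend to lack high-frequency detail, which lowers the score.
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def smoothness_score(path: str) -> float:
    """Return the variance of the Laplacian; lower means less fine detail."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    return float(laplace(gray).var())

# Hypothetical usage: compare a suspect photo against known-genuine
# images from the same source before drawing any conclusion.
# print(smoothness_score("suspect.jpg"))
```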

In gathering a credible team to search for these traits, the Associated Press performed a task that ought now to become a standard service offered by news agencies: arbitrating the authenticity of news imagery disseminated to the public. The reliability of this task, of course, requires that news agencies remain free from state, corporate, and political influence, further incentive to protect democracy. Because, useful as this list may be for the moment, when it comes to combating AI, it’s more of a stopgap measure that misses three bigger issues.

One issue is that every image is worth scrutinizing as a cultural object that conveys values—but only if we can be certain about its origins. How can we interpret a photograph of an event from 1924 if the photograph was digitally fabricated in 2024?

The second issue is that the responsibility for assessing the authenticity of images has fallen to untrained citizen volunteers.

And the third is that, shortly after this piece is published, the list above will be obsolete: Both image editing programs and generative AI are perpetual works in progress. Individuals can try to keep pace with these developments, but the effort can never amount to more than a rearguard maneuver, any damage done by deceptive images already a fait accompli. And none of these concerns even begins to address the biases inherent in generative AI, which is trained on datasets overwhelmingly populated by white faces.

The Middleton episode is telling not because it involved a manipulated photo: celebrities have been the subject of doctored images forever, from the earliest idealized sculptures of emperors to every photoshop fail a Kardashian has committed. And it is easy to empathize with Middleton’s wanting privacy at such a time. But still, the affair is suggestive of a new regime of mistrust prompted by the broad availability of AI-generated imagery. Far more alarming than the misleading images themselves is the crisis of confidence we are experiencing, accompanied as it is by the erosion of public consensus about what constitutes a credible source. This consensus is the basis for productive communication and good-faith debate. Yet the barrage of bullshit on the internet cultivates an environment of acute cynicism that is detrimental to civic participation.

To be clear, skepticism is healthy, and gullibility is dangerous. Images can lie not simply because they have been generated or manipulated algorithmically. Images can lie because of the words that caption them, or because of what they leave out.

But the problem is not skepticism. Nor is it only that anyone can create and widely distribute a faked image. It’s that this ability has given everyone a permission structure to doubt. Everyone, in other words, has been granted license to choose which images they will and will not believe, and they can elect to unsee an image simply because it doesn’t confirm their priors: the mere possibility of its algorithmic generation opens it to suspicion.

This then encourages people to become their own image detectives, exacerbating the boom in conspiracy theories that gave us anti-vaccination campaigns and allegations of voter fraud. It not only normalizes suspicion as everyone’s default setting, it also suggests that the algorithmic tools at everyone’s disposal (e.g., Google) can themselves reverse-engineer algorithms, and that they are all anyone needs to discover the truth.

Illustration by Kat Brown: three images of white women at parties, with close-ups revealing that they have atypical numbers of fingers.

WHAT, IF ANYTHING, can art history offer us in this regard? Close looking can’t solve the problem: soon enough, the target will move. The problem concerns the culture of images, and that’s something that art history can help us assess, and perhaps even resolve. More than 30 years ago, art historian Jonathan Crary opened his book Techniques of the Observer by commenting that “the rapid development in little more than a decade of a vast array of computer graphics techniques is part of a sweeping reconfiguration of relations between an observing subject and modes of representation.” Unchecked, the ultimate outcome of this reconfiguration will be profound doubt that threatens to plunge us all into nihilism and paralysis. One could argue that this, and not the faked images themselves, is the endgame of those who wish to weaken people’s belief in the value of basic civic institutions and the fourth estate.

If the tips I offered above about sussing out photoshopped or AI-generated images are useful, then by all means, apply this form of close looking to every image online. But the better solution, I think, lies not in connoisseurship but in provenance: not in close looking but in sourcing.

Art historians look carefully at images to search for incongruities. In authenticating or attributing a painting, we don’t just look at brushstrokes and pigments. We consider the painting’s ownership, the hands through which it has passed, and other information about the history that the painting has accumulated along the way. Our present situation demands a similar process for digital images—known as digital forensics—but the public at large cannot be responsible for this process. At some point, every person needs to accept that they cannot claim impartiality or universal expertise: I cannot tell if a bridge is safe to drive over or determine whether my lettuce contains E. coli. So I value agencies and organizations that employ experts who can. The same goes for the sources of information I consume, including those that provide images illustrating current events, which should be responsible for doing the provenance research outlined here. That’s as far as my own provenance research can go.
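
To make the analogy concrete: the nearest everyday equivalent of provenance research for a digital file is inspecting the metadata it carries, such as camera model, capture time, and editing software. Here is a minimal sketch, assuming Python with the Pillow library and a hypothetical file name. EXIF data is easily stripped or forged, which is exactly why the deeper forensic work belongs with institutions rather than individual readers.

```python
# Minimal provenance sketch, not a forensic pipeline: list whatever
# EXIF metadata an image file still carries. Absent or inconsistent
# fields are a cue to ask sourcing questions, never proof on their own.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Map human-readable EXIF tag names to their values."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical usage:
# for field, value in exif_summary("newsphoto.jpg").items():
#     print(field, value)
```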

One model for alleviating the paranoia may be as simple as supporting news agencies and image archives that employ professionals to authenticate the images they reproduce. The Associated Press has now shown this can be done.

If this seems impractical, I have to ask: which is more impractical, strengthening journalistic integrity or requiring that all consumers of news become their own digital forensics experts?

This article is part of our latest digital issue, AI and the Art World. Follow along for more stories throughout this week and next.


