Is it AI — or a plane crash?

In the age of artificial intelligence, a top federal accident investigator worries about the technology’s potential influence on the public following disasters.

An illustration featuring a now-unavailable X post of a suspected AI-generated image of January’s plane crash in Washington. | Illustration by Bill Kuchman/POLITICO (source image via X)

About 11 hours after the nation’s worst airline disaster in more than two decades, an X user posted a dramatic image of rescuers climbing atop the wreckage in the Potomac River, emergency lights illuminating the night sky.

But it wasn’t real.

The image, which got more than 21,000 views following January’s deadly crash between a regional jet and an Army Black Hawk helicopter, doesn’t match photos of the mangled fuselage captured after the Jan. 29 disaster — or the observations of Washington police officers responding to the scene, according to police department spokesperson Tom Lynch.

One media fact-check quickly flagged it as a forgery — probably created using artificial intelligence, according to a “DeepFake-o-meter” developed by the University at Buffalo. Three AI-checking tools used by POLITICO also labeled it as likely AI-generated. The post is no longer available; X says the account has been suspended.

But the image is not an isolated occurrence online. A POLITICO review found evidence that AI-created content is already becoming routine in the wake of transportation disasters, including after a UPS cargo plane crash earlier this month that killed 14 people. Posts about aviation incidents highlighted in this story were from users who didn’t respond to requests for comment.

The burgeoning phenomenon alarms experts such as Jennifer Homendy, the nation’s top transportation accident investigator.

“I’m very concerned that with the use of AI,” it will “really sway the perception of the public and passengers” given the widespread interest in the causes of a crash, Homendy, the chair of the National Transportation Safety Board, said at an Air Line Pilots Association forum in September. The NTSB is pursuing a yearlong investigation into what led to the January disaster in Washington, which left 67 dead.

Aviation safety consultant Jeff Guzzetti agreed, saying AI-generated content spreading so soon after a crash could amplify misinformation and “undercut the integrity of the real investigation.”

The worst-case scenario is that “there’s a delay in preventing the next accident because everyone is distracted,” said Guzzetti, a former longtime official at both the NTSB and Federal Aviation Administration.

But he didn’t have an easy answer for how the FAA could respond to AI-driven content on social media, saying that it will take a “whole of government solution.”

In a statement, the aviation agency said “misinformation campaigns can increase safety risks and erode public confidence,” adding that it uses “every method and platform available to deliver accurate and real-time updates” to people.

‘Nonsensical text’
But the fakes keep coming.

University at Buffalo computer science and engineering professor Siwei Lyu referenced a purported 911 recording from last year’s Francis Scott Key Bridge collapse in Baltimore that circulated soon after the disaster on social media, allegedly from a motorist whose car was sinking after falling into the water. He said the audio was likely AI-created — with a “telltale sign” being the “calm cadence in the conversation that does not fit the urgency of the situation.”

More recently, POLITICO found a seemingly AI-generated video on TikTok of a UPS cargo plane crash in Louisville, Kentucky, and a subsequent NTSB news conference about the early November incident. Two AI-checking tools used by POLITICO flagged the content as probably AI-created.

When sent the video, which includes high-resolution imagery showing the burning plane from underneath as it took off, NTSB spokesperson Peter Knudson noted that it depicts part of the media briefing held by the safety board but said he ultimately wasn’t sure whether that portion was made with AI.

National Transportation Safety Board Chair Jennifer Homendy at a July board hearing. She is raising the alarm on disaster fakes. | Rod Lamkey Jr./AP

But according to Lyu, the video is composed of multiple 4- to 5-second AI-generated clips “stitched together, accompanied by synthetic text-to-speech narration.”

“The content contains numerous detectable AI artifacts, including shifting fire positions inconsistent with the real accident, object pop-ins, static or mismatched background elements, incorrect aircraft markings, geographically inaccurate scenes, and hallucinated details such as fake police lights and nonsensical text on the black box,” Lyu said.

He noted, for instance, that a plane in the video remained frozen midair as mourners gathered near a fence and, during the NTSB news conference, a man in the background “exhibits unnatural hand deformation, where his fingers and wrist change shape abruptly.”

The poster, whose username is “pulsenewsreels,” didn’t respond to a request for comment, though the account’s description says: “Based on the facts that we know at the time these videos were made.”

‘That’s not real’
Even the NTSB can be fooled.

Homendy, speaking at the ALPA forum, recalled that after the June crash of an Air India flight in Ahmedabad, India, which left 260 people dead, “somebody in one of our operational offices … came up to me and said, ‘Did you see that video from Air India?’ And I was like, ‘Oh God, what video?’ … She shows me and I’m like, ‘That’s not real.’”

“People believe this stuff,” Homendy said.

POLITICO found several examples of apparently AI-generated content related to the Air India incident.

One video on TikTok, for instance, was so realistic that even three AI-checking tools available online diverged when assessing whether it was machine-made. (One said the answer was likely yes; the other two disagreed.) But the footage doesn’t match actual photos of the disaster published by the internationally respected Reuters news service.

The poster, using the name “drsun0715,” didn’t respond to a request for comment, nor did TikTok. The video is now unavailable.

In the case of the January crash near Ronald Reagan Washington National Airport, AI-checking tools weren’t the only tipoff that the purported image wasn’t real. The University at Buffalo’s Lyu also pointed to “several oddities” in it, such as “vague human-like shapes,” a person with a head-to-body ratio that doesn’t look natural and a “mysterious blue light reflection” on the water.

When shown the image, NTSB spokesperson Eric Weiss added that the plane’s windscreen appeared to be that of a McDonnell Douglas DC-10, a much larger jet than the Bombardier CRJ-700 involved in the collision.

The user who shared the image, posting under the name “254Debug,” didn’t respond to a request for comment. Neither did X, though the account is now suspended; it’s unclear why.
