DoorDash has banned a delivery driver for allegedly using an AI-generated image, in a suspected account hack, to fake a delivery.
According to Nexstar, the on-demand delivery platform confirmed it had removed the Dasher’s account after the driver allegedly used an AI-generated photo to convince a customer that their order had been delivered.
The customer, Byrne Hobart of Austin, TX, first shared details of the incident, including the photo he allegedly received, in a post on X on Dec. 26, 2025.
“Amazing. DoorDash driver accepted the drive, immediately marked it as delivered, and submitted an AI-generated image of a DoorDash order … at our front door,” Hobart wrote.
Hobart later said he does not believe the DoorDash driver was ever at his home, and that several tactics were used to deceive both him and the company, Nexstar reports. He thinks the person behind the phony delivery was a scammer who hacked an actual driver’s account, changed the account’s information, and attempted the stunt multiple times before the company caught on.
Hobart shared in a follow-up post on X that DoorDash refunded his order and issued him a credit. He later received an actual delivery of his poke bowl.
In a statement to Nexstar, DoorDash said the company has “zero tolerance” for fraud and uses systems designed to prevent bad actors from exploiting the platform, adding that it continues to improve safeguards as new issues arise.
“After quickly investigating this incident, our team permanently removed the Dasher’s account and ensured the customer was made whole,” a spokesperson said. “We have zero tolerance for fraud and use a combination of technology and human review to detect and prevent bad actors from abusing our platform.”
They added, “Our teams are constantly working on improving those systems as new tactics emerge.”
The Misuse Of AI-Generated Images
As AI becomes more widely used, concerns continue to grow about its ethical and responsible use, even as the technology proves helpful in many ways.
As AFROTECH™ previously reported, individuals used AI-generated content to rapidly spread misinformation across social media during the California wildfires that affected the Los Angeles metropolitan area and Ventura County from Jan. 7 to Jan. 31, 2025.
Several AI-generated visuals circulated widely during the fires, including one of the iconic Hollywood sign engulfed in flames. Another viral clip inaccurately showed firefighters using women’s handbags to extinguish flames. However, an LAFD spokesperson clarified that crews were using standard canvas bags commonly deployed to smother small fires, per AFROTECH™.
How To Identify And Combat Phony AI
Everypixel Journal estimates that more than 15 billion AI-generated images, roughly 34 million per day on average, have been created since 2022, AFROTECH™ noted.
Walid Saad, an engineering and machine learning expert at Virginia Tech, told Virginia Tech News that human input is key to fighting AI misinformation.
“Addressing this challenge requires collaboration between human users and technology,” he noted. “While LLMs have contributed to the proliferation of fake news, they also present potential tools to detect and weed out misinformation. Human input — be it from readers, administrators, or other users — is indispensable. Users and news agencies bear the responsibility not to amplify or share false information, and additionally, users reporting potential misinformation will help refine AI-based detection tools, speeding the identification process.”

