DoorDash’s new AI-powered ‘SafeChat+’ tool automatically detects verbal abuse
DoorDash hopes to reduce verbally abusive and inappropriate interactions between consumers and delivery people with its new AI-powered feature that automatically detects offensive language.
Dubbed “SafeChat+,” the feature uses AI to review in-app conversations and determine whether a customer or Dasher is being harassed. Depending on the scenario, there will be an option to report the incident and either contact DoorDash’s support team if you’re a customer or quickly cancel the order if you’re a delivery person. If a driver is on the receiving end of the abuse, they can cancel the delivery without impacting their ratings. DoorDash will also send the offending user a warning to refrain from using inappropriate language.
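To picture how that branching might work, here is a minimal sketch, not DoorDash’s actual implementation: the message fields, the abuse score, the threshold, and the option names are all illustrative assumptions based only on the behavior described above.

```python
# Hypothetical sketch of routing a flagged chat message to in-app options.
# All names and values are assumptions for illustration, not DoorDash's code.
from dataclasses import dataclass

@dataclass
class FlaggedMessage:
    sender_role: str    # "customer" or "dasher"
    text: str
    abuse_score: float  # score from a hypothetical abuse-detection model

def escalation_options(msg: FlaggedMessage, threshold: float = 0.9) -> list[str]:
    """Return the in-app actions offered once a message crosses the threshold."""
    if msg.abuse_score < threshold:
        return []
    options = ["report_incident", "warn_sender"]
    if msg.sender_role == "customer":
        # The Dasher is on the receiving end: offer a rating-protected cancellation.
        options.append("cancel_delivery_without_rating_impact")
    else:
        # The customer is on the receiving end: route them to support.
        options.append("contact_support")
    return options

print(escalation_options(FlaggedMessage("customer", "…", 0.95)))
```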
The company says the AI analyzes more than 1,400 messages a minute and covers “dozens” of languages, including English, French, Spanish, Portuguese, and Mandarin. Human team members will investigate every incident the AI flags.
The feature is an upgrade from SafeChat, an existing program in which DoorDash’s Trust & Safety team manually screens chats for verbal abuse. The company tells TechCrunch that SafeChat+ is “the same concept [as SafeChat] but backed by even better, even more sophisticated technology. It can understand subtle nuances and threats that don’t match any specific keywords.”
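The distinction the company is drawing, between keyword screening and a model that scores whole messages, can be illustrated with a toy sketch. The keyword list, classifier, and threshold below are placeholders, not anything DoorDash has described.

```python
# Toy contrast between keyword screening and model-based scoring.
# Keyword list, classifier, and threshold are illustrative assumptions.

ABUSIVE_KEYWORDS = {"idiot", "stupid"}  # toy list for illustration only

def keyword_flag(message: str) -> bool:
    """Keyword-style screening: flag only when a known trigger word appears."""
    words = set(message.lower().split())
    return bool(words & ABUSIVE_KEYWORDS)

def model_flag(message: str, classifier, threshold: float = 0.9) -> bool:
    """Model-style screening: a learned classifier scores the whole message,
    so veiled threats with no trigger words can still be caught."""
    return classifier(message) >= threshold

# Stand-in classifier; in practice this would be a trained language model.
fake_classifier = lambda text: 0.95 if "know where you live" in text else 0.1

veiled_threat = "I know where you live."
print(keyword_flag(veiled_threat))                 # False: no keyword match
print(model_flag(veiled_threat, fake_classifier))  # True: flagged by the model
```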
“We know that verbal abuse or harassment represents the largest type of safety incident on our platform. We believe that introducing this feature could meaningfully reduce the overall number of incidents on our platform even further,” DoorDash adds.
DoorDash claims that more than 99.99% of deliveries on its platform are completed without safety-related incidents.
The platform also has “SafeDash,” an in-app toolkit that connects Dashers with ADT agents who can share a Dasher’s location and other information with 911 services in an emergency.