Ever use one of those mobile food delivery apps, only to realize your delivery person isn't who you expected? There's a lesson here about identity, authentication, and what happens when the best-laid tech plan meets human beings.

One of the oldest IT jokes is the CIO who says, "IT operations would go so much more smoothly if it weren't for these end users mucking everything up." It's true: humans have a tendency not to do what they should, or, more likely, what someone in IT wants them to do.

This is a lesson now being learned by the major food delivery services, which have run into some of the same authentication and security issues other industries face daily. What started out as a perfectly reasonable authentication effort, intended to make customers feel safer because they could see that the person delivering their food is the same person who's supposed to deliver it, has largely failed in the field.

Sam Amrani, the CEO of PassBy, a retail technology firm, recently took to a LinkedIn forum to complain about the problem, and was quickly joined by others who'd experienced the same issue. "I have no way of knowing whether (the delivery person) was a legitimate user of the app or whether there was something more malicious going on," Amrani said. "Systemic technical error or black market for illegal workers? A bit of both, it seems."

People, he continued, are "hopping onto these apps courtesy of gig-work brokerages who sell or lease accounts. It's a loophole in these gig-economy apps [that] isn't being safeguarded. Some 80 percent of the things I've ordered through a gig-economy app have been facilitated by a completely unknown person. No background checking. No ID validation. We're letting people into buildings and getting into cars with zero regulation.
Whilst I am sure 99 percent of these people are just trying to make a grey-market living, there are dangerous consequences to the level of exploitation that this can lead to."

"As long as apps allocate and communicate the details of the driver, my view is that they are responsible for ensuring that the correct person is the person who arrives," said TrustD Director Siofra Neary.

According to Riccardo Russo, head of growth marketing at China-based Yodo1 Games, the situation has been dealt with there "with a facial recognition check every two hours or so from major ride-hailing and delivery apps. It used to be a big issue."

(The LinkedIn discussion went off-topic when one commenter suggested this as a streaming TV series, featuring a murder-for-hire team that takes jobs with food delivery services to carry out its hits. Tagline: "It won't be the saturated fat that kills you.")

Computerworld reached out to three of the largest food-delivery services in the US: Grubhub, UberEats and DoorDash. All three either confirmed identity swapping is a known issue or didn't deny it, and none would agree to an on-the-record interview to explore the issue.

Grubhub responded with a generic statement that "we conduct background checks on all our delivery partners, and while reports of this kind are rare, misrepresentation or fraudulent activity of any kind could lead to deactivation of that delivery partner's account." Note the absence of any plan to proactively stop the practice.

UberEats offers a link on its app, fairly well hidden but there, for customers to report that a different driver made a delivery. There's no explicit indication of what will happen to that driver if the accusation is confirmed. If UberEats were serious about the issue, it could offer customers an incentive, such as a $50 credit, for completing the form, and require proof, such as video doorbell footage, to reduce bogus reports.
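The periodic identity check Russo describes can be sketched in a few lines. This is a minimal illustration only, not any app's actual implementation: the cosine-similarity comparison stands in for a real face-recognition service, and the threshold, interval, and function names are all hypothetical.

```python
import random


def cosine_similarity(a, b):
    """Similarity between two face embeddings (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)


def selfie_matches_id(id_embedding, selfie_embedding, threshold=0.9):
    """True if a fresh selfie plausibly matches the face on the ID on file."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold


def schedule_spot_checks(shift_minutes, mean_interval=120, rng=None):
    """Pick randomized check times (minutes into the shift), roughly every
    two hours, jittered so a driver can't predict when a check will fire."""
    rng = rng or random.Random()
    times, t = [], 0
    while True:
        t += rng.randint(mean_interval // 2, mean_interval * 3 // 2)
        if t >= shift_minutes:
            return times
        times.append(t)
```

In a real system the embeddings would come from a trained face-recognition model, and a mismatch would lock the account pending human review; the point of the jittered schedule is that a fixed two-hour cadence is trivially gamed by swapping back right before each check.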
One DoorDash employee at corporate headquarters did agree to an interview, but only on background, about the delivery drivers the company calls "Dashers."

"We do not believe that this has anything to do with a technical issue or coding error," the DoorDash source said. "Sometimes Dashers choose to dash with their friends, partners or family members. Although Dashers are free to do this, the person completing the actual dash must be the Dasher listed on the account. We've also seen instances where Dashers share accounts with individuals who are not Dashers, which is in strict violation of our policies. Any Dasher who engages in this type of behavior faces consequences, including removal from the DoorDash platform."

DoorDash is the only service that seems to be making an effort to thwart bogus drivers. In August 2023, it implemented a re-verification mechanism that periodically asks a driver to take an immediate selfie to check against the government ID already on file. Still, DoorDash drivers are often not who they're supposed to be. (I've seen this on many occasions when using the service.) It is not clear how often the spot checks are supposed to happen and, far worse, how often they actually take place.

Amrani noted the criminal potential of these misrepresentations. "An organized crime group could scout out locations for robberies or break-ins," because this tactic gets them into apartment buildings and office complexes easily. "This is a supply-and-demand issue. There aren't enough legitimate people (so) they have to loosen the strings a little bit. It's a loophole and they could choose to tighten up those loopholes much more, but they are not doing it."

To a limited degree, the delivery services have brought this problem on themselves. Knowing they have drivers who bend rules the companies can't sufficiently enforce, partly because they need the drivers more than the drivers need them, why not stop sharing the name and image of the driver? The problem is two-fold.
First, drivers often switch in the middle of a delivery. There's nothing nefarious about it, but sometimes the first driver will cancel and another will pick up the order, which makes the upfront identification rather pointless. Second, there is the issue DoorDash acknowledged: drivers share their duties with friends and family.

Until that's resolved, which may never happen, why not just stop posting the identities? At least that would end the customer confusion. It wouldn't help with the authentication situation, but that isn't going to improve either way.

My point is this: If you're going to do authentication, think seriously about how the end users, whether they're consumers or colleagues, are realistically going to use it. Is there an easy way for them to get around your plan? How hard is it for the bad guys to circumvent?

This is broader than just the food industry. Consider banks. Many still use Caller ID to verify customers for limited but sensitive financial purposes, such as checking the current balance and the five or 10 most recent transactions. But faking Caller ID is easy. Yet again, we see convenience winning out over security. Or think about a major investment house that relies on voice recognition, despite the fact that edited digital audio files can fool that kind of biometric system. (Need we even get into the fact that powerful new generative AI tools can easily fool voice-recognition systems?)

Technology would, in most cases, work just fine, if it weren't for those darned human beings.