A quick guide to not driving to Canada

Tell me if you’ve heard this before — a Lyft driver with a programmed GPS (almost) ends up in the wrong country. No, not the one where the Uber driver got stuck on stairs, or for that matter any of the myriad examples of people blindly following a magical machine voice, with morals about the dehumanizing power of machines. My story is about how machines, or perhaps, more appropriately, the algorithms that run them, need to be considered as active agents in social interactions, albeit ones that do not behave in the same way as human actors.

At the end of the Global Business Anthropology Summit at Wayne State University, a couple of colleagues and I arranged to travel to the airport together. A Lyft was called, arrived, and we all piled in. I happened to sit in the front and talked with our driver, a Detroit native who is also rehabbing homes for veterans. The other women chatted in the back. We hit a bit of traffic on the freeway, and the GPS directed the driver to exit. He narrated this and noted that he assumed it was routing us around the backup. After a few minutes I started to notice that our new route seemed to be going the opposite way, and I saw that the GPS instructions were to take the exit for the Ambassador Bridge to Windsor, Ontario. I thought it was odd, but having never been to the area I figured the exit must have a road that turned back around toward Detroit Metropolitan Airport. As we exited I did note a sign that said “no return to US,” and all four of us in the car started questioning what, exactly, was going on, and why. My colleague figured out that when she had called for the Lyft, she had simply typed “airport” and the algorithm chose the closest one, no matter that it was in another country.

“No worries,” said our driver, “I will turn us around.” As cars were being shunted into different lanes, he followed a truck on a path that looked like it would turn around, until it was clear that we were headed up the bridge, at which point he deftly maneuvered a (possibly illegal) U-turn and overrode the GPS directions. A reasonably quick trip through US immigration later, we were on the way to the airport on the correct side of the border and laughing about the adventure.

Looking back on it, no one in the vehicle was blindly following a machine without thinking, least of all the driver navigating his home town. We were all thinking, discussing, questioning, and aware of what was happening. The issue was that we were sharing our space, with its social expectations and conventions, with a social actor — the algorithm — whose behaviors are not those we expect from other humans. The elements in this mix demonstrate the complexities of the social space we share with machines, and the assumptions we make based on human interactions.

Element I: Three smart and well-traveled women need to get to the airport, and also figure the trip affords them time to catch up with each other before they go their separate ways. The Lyft caller simply and logically states her destination as “airport.” As a visitor to the city, she has no way of knowing there might be more than one option, and she also probably expects some feedback to let her know if there are decisions to be made in this regard.

Element II: An algorithm that presents the nearest matching destination as the top choice, and has not been programmed for a thoughtful back-and-forth or to ask, “are you sure that is where you want to go?” (A hypothetical sketch of this “nearest match wins” logic appears after these elements.)

Element III: A driver who knows his city but, by working for Lyft, has agreed to follow programmed instructions to ensure his customers get where they have requested. He notices he is exiting when he doesn’t expect to, but he also notices the traffic, so he makes some assumptions about what the algorithm must be doing and tells his passengers as much, and they are content with the assessment.

Element IV: Collective action to resolve the situation. Besides reassuring one another, there was discussion of the best course of action, and reprogramming of the recalcitrant member of the party.
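To make Element II concrete, here is a minimal sketch of what a “nearest match wins” destination resolver might look like. It is purely illustrative: Lyft’s actual matching logic is not public, and the place names and coordinates below are approximate assumptions. The point is only that ranking candidates by distance alone, with no confirmation step, will happily select an airport on the other side of an international border.

```python
# Purely illustrative sketch of a "nearest match wins" destination resolver.
# NOT Lyft's actual code; coordinates are approximate, candidates hypothetical.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt


@dataclass
class Place:
    name: str
    lat: float
    lon: float


def distance_km(a: Place, b: Place) -> float:
    """Great-circle distance in kilometers (haversine formula)."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def resolve_destination(query: str, rider: Place, places: list[Place]) -> Place:
    """Pick the closest place whose name contains the query.
    No confirmation step, no check of which country the result is in."""
    matches = [p for p in places if query.lower() in p.name.lower()]
    return min(matches, key=lambda p: distance_km(rider, p))


rider = Place("Wayne State University, Detroit", 42.357, -83.066)
candidates = [
    Place("Detroit Metropolitan Airport (DTW)", 42.212, -83.353),
    Place("Windsor International Airport (YQG)", 42.276, -82.956),
]

print(resolve_destination("airport", rider, candidates).name)
# Windsor's airport is closer as the crow flies, so it wins,
# even though it sits across an international border.
```

A human dispatcher given the same one-word request would almost certainly ask a follow-up question; the sketch above shows how easily a distance-only ranking skips that social step entirely.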

There are probably many other elements I could pick apart with more reflection, even around my own behavior. I didn’t vocalize any questions about the exit when I saw it coming up, even though my husband will tell you I am generally not silent in the car when I see a sign I think needs to be noticed — how does being in a car with a professional driver and business friends differ from being with one’s spouse? How do our individual areas of expertise, and our confidence, figure into how we interact with others in a given situation? Certainly the GPS does not seem to lack for confidence even when objectively wrong.

We take for granted many elements of our day-to-day interactions with other people, and more and more we take for granted our interactions with connected things, be they GPS devices, intelligent agents, or other tools. But just as our assumptions about other people sometimes end up being wrong, so too do our assumptions about systems, especially when we forget that they do not behave like people. They are nonetheless a part of our social systems, and need to be treated as tangible actors in our everyday life.