Four Smart Takes on SF's Tech Ban
San Francisco Blows Off Facial Recognition Technology. Our Innovators Respond
What happens when a major American city takes a stand against a cutting-edge piece of tech? San Francisco recently decided to find out… by banning facial recognition technology. This was a bold and provocative move by The City by The Bay, one laden with design, law enforcement, political, philosophical, and ethical implications. To better understand this complicated situation, we asked four of our colleagues – Dustin Boutet, Clare Bond, Jonathan Lupo, and Alison Kotin – for their read on the ban.
Dustin Boutet, Principal Interaction Designer
San Francisco’s facial recognition ban has far-reaching implications, way beyond policing. It sends a strong signal about what the citizens of that tech hub really value, and adds a bit of shine to a somewhat tarnished record on government interventions.
With the rise of authoritarian governments and the growing use of facial recognition as a tool to extinguish dissent and opposition speech, locals and travelers alike will be on the lookout for safe spaces where they can have the peace of mind of knowing that their every move isn’t being tracked.
Question is: What will prevent repressive countries from using facial recognition to blacklist travelers whose behavior doesn’t align with their narrow worldview? Countries like China are already looking at this tech for use with their own citizens.
There may be a place for facial recognition, but without a clear regulatory framework, we’re speeding toward a dystopian future. Broad use of this technology with retroactive rationale is a recipe for a civil rights disaster. We’re already seeing how concerned citizens are taking steps to avoid this type of government tracking in the Hong Kong protests.
I’m proud to live in the second U.S. city (after San Francisco) to ban the use of this technology. It sends a clear message to those who live here and those who visit: we’re open and progressive in the way we think about the rights of our citizens. As we think about mobility in the future, safe spaces that are free from this kind of tracking will be a major consideration for savvy travelers.
Clare Bond, Senior Director, Experience Design
Where we go, who we see, what we buy, our emails, phone conversations, photographs, our ethnicity, sexuality and more, are all readily available to any hacker with $70 of equipment, as we connect to the Wi-Fi in Starbucks while grabbing our morning coffee.
And it’s not just hackers. We freely give up our data to apps on our cell phones and other devices, allowing them to capture our location or access our cameras, and think nothing of it. The exchange of data is worth the convenience of not having to type in an address or password. Integration with social media adds more layers of data that can be readily accessed by third parties. Further, data trackers are fed by those same apps sharing information not just on your location, but on what you open and tap on and, in some cases, personally identifiable information. We simply don’t know where our data is going, who is accessing it and to what purpose. Very little of our identity and behavior is hidden and private.
Your Wi-Fi router is also watching you. Every time you move around your house you interrupt the Wi-Fi signal in ways that can be used to identify individuals with a very high level of accuracy; the tracking can be so nuanced that keystrokes on a keyboard can be recognized and used to reconstruct what you are typing. No cameras – or access to your home – needed. Alexa is listening in, as is your smart TV.
Security websites speak coyly of implicit and explicit digital footprints, but here’s the reality: our data is a highly valuable commodity that blazes a trail right through our front door, and beyond. The government tracking our faces in public, though unsettling, seems almost quaint in comparison.
The outcry against tracking our physical selves through facial recognition is starkly contrasted by the seeming lack of care we have for our digital bodies. The government having our facial data raises panopticon fears of totalitarianism. The principle of the panopticon is that a centralized ‘watcher’ oversees the activities of a population, and corrects their behavior, without being observed. The power of a panopticon is that people are aware they are being observed, but contemporary technology is more subversive. We do not know we are being watched. We do not know to what end our data is being used, or whether that use is serving – or harming – our best interests.
Like most forms of prohibition, banning facial recognition, while seemingly worthy, misses a bigger point. We’ve already swapped our digital right to privacy for the convenience of avoiding a few thumb taps. We are already tracked and quantified. There is already nowhere to hide.
Jonathan Lupo, VP, Experience Design
Products have given way to convenient services, all at the cost of our privacy. I've unconsciously accepted it.
As a modern digital-service-consuming zombie, I blindly opt to share my location, personal health history, payment info, family connections, personally identifying information (even my DNA) with brands that I perceive provide the most value, in the form of convenient services. These services include, for example, having a driver come to my current location (not even my residence) to pick me up. Digital transactions and service fulfillment have become seamless, taking advantage of device innovations to eliminate all friction when exposing my data to service providers. My phone is a beacon, beaming my identity and location, everywhere I go. When I move about the world, my identity is announced, in addition to my presence. I, like millions of others (perhaps billions), have willingly exposed myself to the public. I haven't yet suffered major consequences for this foolishness. I hope I never do.
So, as careless as I am with my personal data, I take comfort in the fact that my city cares about my privacy. I thank the city of San Francisco for proactively banning facial recognition technology until it can be properly regulated. I understand that proponents claim to want to use the tech to keep me safe...but, until I am convinced that political and other biases can be eliminated when these technologies are used, I applaud the action.
Alison Kotin, Senior Innovation Consultant, Interaction Design
Let’s step back from technology-induced excitement and alarm to ask: Does facial recognition make sense as a policing tool? Is this something that will make people feel safer around the police officers in their neighborhoods, and help those officers work more effectively?
Tasked with understanding future technology needs for first responders, we conducted field research to understand how police officers might make use of new information streams such as data from real-time facial recognition systems. “First responders having situational awareness improves their safety because it provides them additional information to apply to that specific incident,” we were told, helping us understand why continuous monitoring with facial recognition technology seems so initially appealing to police working to navigate increasingly complex, technology-mediated interactions. At the same time, serious concerns around privacy surfaced immediately, as well as pushback from officers already overloaded with data and sensory inputs.
So, does the reality of today’s facial recognition capabilities make sense for police officers in the field? Officers we rode along with told us that relationships and their ability to connect with community members make their work effective. When communication breaks down, either among officers or between officers and people on the street, tensions rise, and confrontations become unsafe. “I have to dictate my response, not the computer, and people don’t want the privacy invasion,” one senior officer told us.
The need to constantly take in and analyze facial recognition data adds another layer of distance between officers and the people they serve, preventing officers from focusing their full attention on the situation in front of them and eroding trust. Until this kind of information can be collected transparently, in a manner aligned with community members’ preferences for privacy and autonomy (and it’s questionable whether this is even possible), universal facial recognition poses an emotional and social barrier to effective policing. Until this data can be sorted, parsed, and curated to the point where it is relevant, actionable, and not distracting, it will only serve as a further challenge for officers already struggling to act thoughtfully within tense, complex situations.