Scrolling through my Twitter feed on a crowded New York subway train, I came across Kara Swisher’s latest column for the New York Times entitled, “Who Will Teach Silicon Valley to be Ethical?” Swisher focuses on whether one answer for big tech companies could be hiring a Chief Ethics Officer. It made me wish there were more of these companies at the All Tech Is Human Conference I attended on Saturday, which was mostly filled with NGOs, academics and designers. The focus was on ethics in technology, and a few key themes emerged for me.
‘We’re not prepared for the world we programmed’
I loved this quote from Sara Holoubek of Luminary Labs, who talked about the unintended consequences of innovation and new inventions, such as the first death caused by an autonomous vehicle or the inadvertent revelation of U.S. soldiers’ locations in Afghanistan after they jogged wearing Fitbits. Yoav Schlessinger, Chief of Staff for the Tech and Society Solutions Lab at Omidyar Network, made an apt comparison between the state of big tech today and the automotive industry of the early 1960s, when the Chevy Corvair was both at the height of its popularity and implicated in numerous accidents. Auto engineers worried that safety improvements would make cars more expensive, so the emphasis was placed on drivers being responsible for driving safely rather than on the cars themselves.
Ralph Nader’s book, Unsafe at Any Speed, marked the moment safety culture shifted from reactive to proactive (Schlessinger also acknowledged the Ad Council’s role in raising awareness of safety issues with our PSAs around seatbelts and drinking and driving!). He argued that tech is at a similar point in its evolution, requiring new industry standards to rebuild trust between users and the industry. He shared an ecosystem of organizations leading this effort, including Data for Democracy, Mozilla’s support of ethics in computer science education, tools like the Ethical OS, movements like the Center for Humane Technology, infrastructure like Dot Everyone, and networks connecting employees such as Coworker.org.
New word: De-Metricize
I’m pretty sure this is a new word, which came up during a panel focusing on technology’s impact on trustworthy news (i.e. fake news!). The panelists were unpacking why Silicon Valley has embraced a culture of “neutrality” or is often perceived as being terrified to take a stand on issues related to hate speech and harassment.
Cennydd Bowles, a designer and writer who used to work at Twitter, argued it’s not so much that these CEOs hold an ideology like libertarianism or free speech absolutism as that they are beholden to performance metrics. In such a polarized environment, taking a stand can bring backlash (alienating the other side) and harm the bottom line. The panelists wondered what would happen if product managers were liberated from the “yoke” of hitting targets, or if the metrics could be changed to something like “what someone learns from a news story.” One suggestion was to push tech investors toward more ethical expectations as a way of deemphasizing the performance metrics that make it difficult for these companies to take action.
‘Less brogrammer, more da Vinci’
The other theme woven throughout the discussions was the importance of bringing expertise and perspectives from across disciplines into the field of computer science and onto product teams — specifically, people with deep experience in what it means to be human. Holoubek also talked about looking to sources such as speculative fiction, or even movies that hint at AI gone wrong, to help us think more about unintended consequences.
I wrote about how these issues were beginning to surface at this past year’s SXSW Interactive – All Tech Is Human was a deeper dive into what we as humans can do to address and anticipate important ethical issues as we become an increasingly digital society.