Chelsea Manning, Caroline Sinders, and Kristian Lum: “Technologists, It’s Time to Decide Where You Stand On Ethics”

By Lisa Rein

Chelsea Manning at the Aaron Swartz Day Ethical Algorithms Panel, November 4, 2017

A lot of folks were wondering what Chelsea Manning meant when she discussed a “Code of Ethics” during her SXSW talk last March. Well, there’s no need to wonder, because Chelsea discussed this in detail with her co-panelists Kristian Lum (Human Rights Data Analysis Group) and Caroline Sinders (Wikimedia Foundation) during the Ethical Algorithms track at the last Aaron Swartz Day at the Internet Archive.

Here’s a partial transcript to shed some light on the situation.

 

(Left to Right) Kristian Lum, Caroline Sinders, Chelsea Manning.

The panel discussed the issue of valid data, and how companies have become more concerned with whether they can sell an algorithm-driven software product, and less concerned with whether the damned thing works and whom its results might affect, should they be inaccurate.

Link to the complete video of the Ethical Algorithms panel.

Lisa Rein: Okay, first question: Are the software companies making these algorithm-based products just selling them to whomever they can, for whatever they can apply them to? And if so, how can we stop this from happening?

These companies are powerful and getting larger and more powerful all the time. Yet no one seems to care, not even the companies buying the snake-oil products, as long as they can resell the services somehow and the money keeps coming in. What can we do?

Chelsea Manning: Me personally, I think that we in technology have a responsibility to make our own decisions in the workplace – wherever that might be. And to communicate with each other, share notes, talk to each other, and really think – take a moment – and think about what you are doing. What are you doing? Are you helping? Are you harming things? Is it worth it? Is this really what you want to be doing? Are deadlines being prioritized over – good results? Should we do something? I certainly made a decision in my own life to do something. It’s going to be different for every person. But you really need to make your own decision as to what to do, and you don’t have to act individually. We can work as a community. We can work as a collective entity. People in technology, we’re not going to be able to explain to people – all this stuff. People know that everything’s messed up, and they know that things are messed up because of the algorithms that we have. We’ve educated them on that. They understand that. They understand that viscerally, because they see the consequences and the results of these things that are happening every day.

The problem is that people in technology aren’t paying attention, and even some of us who are paying attention, aren’t doing anything about it. We’re waiting for somebody to do something about it. Somebody else isn’t going to do it. We’re going to have to do it ourselves.

Kristian Lum and Caroline Sinders.

Caroline Sinders: Even if you feel like a cog in the machine, as a technologist, you aren’t. There are a lot of people like you trying to protest the systems you’re in. Especially in the past year, we’ve heard rumors of widespread groups and meetings of people inside of Facebook, inside of Google, really talking about the ramifications of the U.S. Presidential election, of questioning, “how did this happen inside these platforms?” – of wanting there even to be accountability inside of their own companies. I think it’s really important for us to think about that for a second. That that’s happening right now. That people are starting to organize. That they are starting to ask questions.

I think, especially looking at where we are right now in San Francisco – inside the hub of Silicon Valley – in a space that’s very amenable to protest, that’s very amenable to supporting ethical technology: how do we build more support for other people? Is it going to spaces we’re not usually in? Is it going to other tech meetups? Maybe. Is it having hard conversations with other technologists? Probably. How do we push the politics of our community into the wider community? We have to go and actually evangelize that, I think.

Is the company even using the algorithm in the way it was intended to be used? Often a company purchases an algorithm that is made for one kind of analytics and it gets used for a completely different thing, and then you get these really skewed results.

Lisa Rein: Wait a minute. I can’t believe I’m asking this, but, are you saying that, as long as they like the results, nobody cares if the results are accurate?

Caroline Sinders: ‘How sure are we that it’s true?’ is not the question that I’m hearing in the conference room. It’s more like ‘we’ve gotten these results and these people have purchased it,’ or ‘It’s selling really well.’ ’Cause we are in the age of people building software as a product, capabilities as a product, APIs as a product. (Meaning that you buy access to an API that’s like a pipeline.) And if it’s returning certain results that a company can then use and put in a portfolio to sell to other different kinds of clients, like, it doesn’t actually matter how much it works, if it has the appearance of working; if it’s pumping out ‘results.’ So, I can’t speak to, like, the academic verifiability of different kinds of APIs. I can speak to the fact that I have not ever heard people really talk about that.

Chelsea Manning: Yeah. I’ve had experience with this in particular… For verifiability of data, it’s purely academic. That’s what I’ve found. Whether you are working in a corporate or a business setting, or whether you’re working in a government setting – military context or whatever – it’s ‘results, results, results.’ Nobody cares how verifiable the data is. Everybody’s cutting corners. Everybody’s trying to meet deadlines. That’s why we – people in technology – need to be thinking more ethically. We need to be very cognizant of the systems that we’re building, and not just sitting there, continually meeting deadlines and priorities that are set by our leadership, or by clients, or by senior corporate, ya know, C-suite people.

You really need to think about what you’re doing, and what the consequences of what you’re doing are. Because these (questions) are not being asked, and they should be. In many cases, for some of these systems, maybe the question of whether we should be doing a system like this at all is a question that should be asked – at least asked – in some of these rooms. It’s not, and it’s not going to be.

Caroline Sinders: I think there is a big push, though, if you work in industry software, to really understand the ethical ramifications of the products you’re using, or the software that you’re using, and how it affects your users. And how this affects even unintended bystanders – people that have not opted in to the system, or into the product, right? And that’s where you get into, like, different surveillance systems, or systems that are in the whole vein of the Internet of Things, right? How many people are “accidentally” a part of a data set that they didn’t get to opt in to?

Aaron Swartz Ceramic Statue (by Nuala Creed) and Kristian Lum.

Kristian Lum: And in some cases, remember, you yourself can act as a sponge for accountability. Because now, let’s say you have a system that’s been purchased, that’s been created by “peer-reviewed science” or very expensive technology, and it’s saying to do the thing that your organization kind of wants to do anyway. Well, maybe do some research and show the people you’re working with, and say, “Hey, we may be over-policing this community.” Because, otherwise, it’s like, “Hey, this software we spent all this money on is telling us to do it,” which gives them justification to do what they want to do anyway. So, try to maybe act like a buffer between these viewpoints, by being able to ask, and question, ‘Why are you doing that?’

Lisa Rein: Would opening up the “black box” solve everything?

Chelsea Manning: It’s not just that it’s a black box. Even when the code is available to you, sometimes how it’s actually coming up with the predictions it’s coming up with – apart from doing pure math, when you’re trying to come up with an explanation that humans can understand – can escape you. So I think that’s one of the dangers of depending on showing the entire algorithm. We have to fully understand these algorithms, and not just see how they work from a code perspective or an algorithmic perspective.

What scares me with that is some of these algorithms being used in, like, bail hearings… You’re literally changing this person’s life, because they are going to stay in jail, because they are in a bail hearing where an algorithm — made by some company — decided that they’re more likely to be arrested. It’s not evidentiary in any way, but it’s being used in an evidentiary manner. It’s just a mathematical prediction based on false data — or poor data — and it’s actually tearing people’s lives apart. And it’s also feeding into this feedback loop, because they’re seen as being re-arrestable. Therefore, it reinforces the data set.

Kristian Lum: There are a lot of models now predicting whether an individual will be re-arrested in the future. Here’s a question: what counts as a “re-arrest”? Say someone fails to appear for court and a bench warrant is issued, and then they are arrested. Should that count? So I don’t see a whole lot of conversation about this data munging.
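The feedback loop Chelsea describes can be made concrete with a toy simulation. The sketch below is purely illustrative: the district names, numbers, and patrol rule are invented, not drawn from any deployed system. It assumes two districts with identical true crime rates, patrols allocated in proportion to past recorded arrests, and arrests that can only be recorded where patrols go.

```python
# Purely illustrative sketch of the feedback loop described above, not any real system.
# Patrols follow *recorded* arrests, arrests can only be recorded where patrols go,
# so the recorded data drifts further from the (equal) underlying crime rates each year.
import random

random.seed(0)

TRUE_CRIME_RATE = {"district_a": 0.10, "district_b": 0.10}  # identical by construction
recorded_arrests = {"district_a": 60, "district_b": 40}     # historical record is already skewed
TOTAL_PATROLS = 200

for year in range(1, 6):
    total_recorded = sum(recorded_arrests.values())
    for district, rate in TRUE_CRIME_RATE.items():
        # Allocate patrols in proportion to the skewed historical record.
        patrols = round(TOTAL_PATROLS * recorded_arrests[district] / total_recorded)
        # Each patrol has the same chance of producing an arrest in either district.
        new_arrests = sum(random.random() < rate for _ in range(patrols))
        recorded_arrests[district] += new_arrests
    print(f"year {year}: {recorded_arrests}")
```

In expectation, the district that starts with more recorded arrests receives more patrols, records more arrests, and so receives even more patrols the following year: exactly the reinforcement the panelists are pointing at, with nothing about the underlying behaviour differing between districts.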

Caroline Sinders: I think some of the investigative reporting that ProPublica has done specifically on this is really worth highlighting.

(Editor’s Note: Parts of this partial transcript were rearranged slightly for flow and readability.)

(Author’s Note: Lisa Rein thanks ThoughtWorks for their support of the Ethical Algorithms Panel and Track at Aaron Swartz Day 2017.)

Facecatraz: Becoming the Warden or Facebook as a Penal Colony

or

How Facebook is becoming the digital Alcatraz of Social Media

by E.F. Fluff

Written early 2016, extract from a larger work

A few weeks ago, for reasons still unknown to me, my Facebook account was suspended. Upon attempting to log in, I was directed to a page requesting various types of ID to prove I was who my profile said I was. The foremost of these requests was a scan of my passport with its ID number unobscured.

I am remaining anonymous for a variety of reasons, including but not limited to needing to remain hidden from the man who attempted to blind and kill me. The same man I am trying to prosecute; the same man who has since been convicted of an unrelated attempted manslaughter. With no privacy or safety guarantees for the information, the knowledge that it would be handled by obtuse “subcontractors,” and given their poor track record in everything, I provided Facebook with real documents bearing the artist pseudonym I have used for over seven years. None of them included a photo, as I have never linked a photo to that account.

Other equally intrusive options are available, though a quick search of the net will tell you depressing stories of people whose IDs were not accepted, even one or two whose passports were, apparently, not accepted. In some cases, people are using their real names or names slightly altered (middle name spelt differently, a common nickname such as Bob, no surname, etc.).

There are very few times in life you will ever be required to provide your passport with its number.

Border control upon entering and leaving a country. Registering as a foreign resident in a country. Opening a bank account in a foreign country as a freelance worker. In some places, dealings with welfare, or perhaps when going to prison.

The passport is a very important document and was historically a document of “safe conduct.” Passport-like documents can be traced back to the Bible. With the current refugee crisis, it is clear the importance of the document has not diminished.

For example, in Finland, male citizens aged 18–30 years require military approval, or must prove that they have completed, or are exempt from, their obligatory military service to be granted an unrestricted passport. Otherwise, to ensure that they return to carry out military service, a passport is issued that is valid only until the end of their 28th year. Other countries with obligatory military service, such as Syria, have similar requirements. In Ireland, you do not own your passport; it is essentially on loan from the Department of Foreign Affairs and the Government.

For a company such as Facebook to begin requesting passports, driver’s licenses, employment pay stubs, and other varied forms of confidential ID, you would think it were an extension of a State body rather than a stealth advertising company whose largest commodity is its “free” users. Users whose information it corrals and spins into billions. Some people are there by choice; others are there against their better judgement but feel compelled to use it due to its huge reach. One could possibly draw analogies to the Prison-Industrial Complex, where prisoners become the bread-and-butter commodity, spinning money any way they are turned, in subsidies, contracts and penal labour.

In these days of doxxing, identity theft and swatting, the maxim should be, “You don’t know me, and that, unless I decide otherwise, is the way I want it.” Indeed, we should encourage obfuscation of identity, for safety, for cultural richness and truth-telling.

Increasingly, Facebook is being used as a means to background and credit check. Now, unless carefully hidden with maintained privacy and anonymity settings, soon your disparate…