On Monument Lab’s Bulletin, we recently published “The New Gatekeepers: Will Google Decide How We Remember Syria’s Civil War?” Written by Global Voices Advocacy Director Ellery Roberts Biddle, “The New Gatekeepers” examines how big tech companies like Google and Facebook are shaping our view of the historical record of war atrocities and other traumatic events. These companies increasingly use artificial intelligence to handle, and sometimes censor, content posted to social media from the frontlines of conflict zones, affecting how we learn about and will remember historical events.
In her roles at Global Voices and the Berkman Klein Center for Internet and Society at Harvard University, Roberts Biddle reports on and defends the rights of journalists around the world. For Monument Lab, Roberts Biddle looked specifically at a counterexample, the case of the Syrian Archive, a Syrian-led group of technologists based in Berlin seeking to organize and back up more than five million images and video files shared online from the war, including devastating images of violence and chemical attacks on civilians. This archive can be used by journalists and perhaps one day in war crimes trials, if the regime ever faces charges.
For this episode, we are also joined by Jackie Zammuto of Witness, where she is a program manager focused on video and media to defend human rights. Zammuto works with community organizations documenting police accountability, immigrant rights, and indigenous rights.
Paul Farber: Ellery Roberts Biddle and Jackie Zammuto, welcome to Monument Lab.
Jackie Zammuto: Thanks for having me.
Ellery Roberts Biddle: Yeah, excited to be here.
Farber: I want to start with Ellery. What led you to the Syrian Archive?
Roberts Biddle: I have been working for the last seven years in this emerging field of human rights and technology, where there is a sort of global community of people who are asking and investigating questions about how the internet and other kinds of digital tools and technology can help us better exercise our fundamental rights, like our right to free speech, freedom of association, and also our rights to personal privacy. They are asking how technology can help us exercise our rights, but, at the same time, looking at the ways in which our rights might be put at risk in a different way in the digital era. The Syrian Archive is a really important project that was started by our friend and colleague, Hadi Al Khatib, who I had met at different moments and different events. It occurred to me that what he and his colleagues are doing is something that addresses both sides of this question about the interaction between technology and human rights. On one hand, what this team of people is doing is gathering and preserving video and photographic documentation of what has happened in Syria since 2011, of the war and, actually, of different kinds of things happening in people's lives as a result of this war. They're organizing that material in a way that is really, really powerful when you start to think about the importance of individuals being able to express themselves and to document and offer testimony about what is happening to them in their lives and their cities and communities. They were really putting this incredible amount of power into this collective exercise of testimony. At the same time, they have run into challenges that I found really emblematic of so many of the problems that we face now.
In attempting to use the internet to express ourselves, we're running into significant barriers, legal in some cases, technical in others, but barriers that exist not because of the fundamental way that the internet works but because of the enormous private companies that actually take up most of the space of the internet. Companies like Google and Facebook are not the internet, but for most people, especially in contexts like Syria where you have limited access to the internet, low bandwidth, and, for some people, not a whole lot of time or capacity to learn different ways to use the internet, Facebook and YouTube are really easy tools to use. People are using these incredibly popular, easy-to-use, really well-functioning platforms to document their stories and to show what they're seeing and what's happening to them. When they do that, when you upload a video onto YouTube, let's say, you're not just sharing it publicly, you're actually putting it into the hands of a private company based in the United States, and that company, Google, which is the parent company of YouTube, has a tremendous amount of power to decide what to do with that material, how to present it or not present it to the public. Effectively, they decide what happens to it.
Farber: I want to talk about the inner workings of that sharing platform you mention in a moment. But how does the Syrian Archive work? How does it operate, and how would someone utilize it?
Roberts Biddle: Sure. This is a group of about five developers and technologists. They have an incredible amount of knowledge about the different, typically small, media groups who are working on the ground in Syria. They have personal connections with most of these groups. They are also very connected with civil society groups and emergency organizations that are working to provide humanitarian aid. All of the groups in this network that they've built are doing some kind of documentation of incidents of bombings, of attacks of different kinds, of the human effects of those attacks. There's a lot of video and images from hospitals, for example. All of these groups are collecting this material, and for many of the reasons I mentioned before, they're uploading it and often really storing it on big social media platforms. What the Syrian Archive is doing is grabbing those files as quickly as they can and putting them into their own archive and onto their own servers. When they do that, they go through a process in which they identify the key contextual details of the files. Here's a video of several people outside on the street where a sarin gas attack has just occurred, and people are convulsing in pain, and you can see others running to try to help them, to spray water on them, to get some of the gas off. So where did it happen? What day and time? What are some of the details that you can collect from a media file's metadata, and then also what can you see in the image or in the video that might give an indication of what caused the attack and who might have been responsible? The group at the Syrian Archive is taking care to document all of those details and to hold them in an independent space, something akin to a library where they know the material will be safe.
At the same time, they're adding tags and categories to each file so that any person, including you, Paul, could go along and search through the database and see what you find. Their intention, and what they're achieving, is to provide a resource for journalists, researchers, human rights workers, and people in the legal field to be able to find documentation of incidents and actually understand something about the context, about what happened here. That's useful right now for all kinds of reasons, for the current coverage and documentation of what's happening, but it also may become really important in the future if and when some of the major perpetrators of this war, probably, most importantly, the regime of Bashar al-Assad, are held to account for what's happened here.
Farber: In 'The New Gatekeepers', you frame the violence in Syria's civil war with a story about an image from the Vietnam War, an iconic and painful photograph informally known as "napalm girl." The image of Kim Phúc, taken by Nick Ut, circulated around the world; it was a Pulitzer Prize-winning photograph that also laid bare the massive violence wrought on civilians during that conflict. Why was it important for you to bring in that legacy image to talk about this contemporary moment regarding Syria?
Roberts Biddle: I chose to start with that image because it's well-known, it's very powerful, and it is really emblematic of documentation of war at a time when photography as a technology and practice had become pretty common. But we did not yet have the technology of the internet. This was before anybody had a phone where they could just take pictures all day long. The work of documentation in media and in news outlets was really held by people like Nick Ut who were working with major media outlets, in his case, the Associated Press. The image is powerful. The way that the image came into the public archive, got onto the covers of newspapers around the world, and then became memorialized after that, it didn't get there accidentally. It was put there very intentionally. There has been some good writing about how editors at the New York Times had a whole debate about whether to publish this photograph because it depicted a naked child, which the New York Times editorial code says, "We will never do that." But because the context and the story that the image told about this war, and about the role of the United States in the war, were so important, the editors decided it was worth it to put this in front of the eyes of millions of readers. That, to me, seemed really significant and powerful, because we have moved so far from that kind of careful decision-making about what we might see when it comes to documentation of war. The other reason that I was so interested in that particular photo is that many years after it was taken, it reappeared on Facebook. A newspaper in Norway was doing a retrospective on the Vietnam War and they included this image. They posted it on their Facebook page and Facebook censored the image. And the reason that Facebook censored the image was the same reason that originally made this a difficult discussion for the New York Times editors: they don't publish images of naked children.
The difference is that for Facebook there isn't a group of editors carefully sitting and thinking about the meaning, the context, and the value of an image for purposes of public history. It's a machine. A machine that doesn't know anything about the picture. All the machine sees are enough visual clues. So, away the image goes. To me, I was struck by that shift and the scary idea that so many images and videos are collected in situations of human rights violations, whether we're talking about the war in Syria or any number of other conflicts, that there's a lot of images and documentation that individuals, just people with phones, are capturing and putting online but that we will never see. The reason we won't see it is because of a technology that is built not to understand context or the public history value of an image. It understands solely something as simple as explosion, beheading, naked child, and that's it, and then it's gone.
Farber: In your Bulletin piece, you dug back and looked at statements made by Mark Zuckerberg. Can you share what you found in terms of his remarks, and also perhaps give us insight into how to balance what might seem, on one hand, like obvious footage to flag with the complications of having machine-run technology be the censor for this kind of content?
Roberts Biddle: Although people addressed Mark Zuckerberg in the conversation about this, I don't think he ever individually responded. There was, as Facebook often will have, an unnamed spokesperson give a statement, in this case to The Guardian, and the statement basically said something to the effect of, we made the wrong decision here, and that they had changed the decision and allowed the photo to stay up. The reason they gave was that they decided it was an iconic image of historical importance, but the only reason Facebook was able to make that decision was this loud response that they received. After the image was removed, Aftenposten, the newspaper in Norway, wrote all about it. The Guardian wrote about it and so did other media. In a way, this was probably an easy choice to make because there was such a clear, resounding consensus from all of these different entities that this was very important and that this company had made a mistake. But what about images today that might have an equal amount of historical importance in the future? We can't know. So that, to me, was very important, and I wanted to dig into that question. Silicon Valley companies like Facebook and Google are in a legitimately difficult position because they are having to contend with millions of hours of video uploaded to their platforms every day. In their quest to become ubiquitous, to become the platforms that everybody around the world uses to communicate, share information, seek information, and advertise, they have found themselves in a place where all kinds of information and documentation that they probably never imagined having to deal with is getting uploaded to their sites at a rate that is really, really difficult to manage. I don't envy them. They have been through many phases of considering the different ways to handle this.
They receive significant and legitimate pressure from many governments to get violent content off of their platforms. There's been a particular emphasis on removing videos, images, and other content associated with violent extremism and with the recruitment of people to go and fight for ISIS, for example. The companies receive some pressure from the public to do this. They receive serious pressure from governments, and there's an increasing push to actually create financial and legal consequences for companies that cannot keep this material off of their platforms. Most of what drives that is a concern for public safety, which is really serious and legitimate. The problem is that the scale at which these companies are taking up and managing all of this documentation makes it impossible to carry out the exercise of looking at a video or a photo and trying to figure out: is this important for the public record? Is this going to be important 40 years from now? If Assad is taken to the International Criminal Court, which may or may not happen in the future, do we want to be able to show this video as evidence of something that happened as a result of the actions of his regime? Those are really difficult questions to answer, and I don't think there's a way to train a technology to make really thoughtful choices in that realm. With the kinds of images that come up, you can get a roomful of human rights lawyers who can debate for hours whether something should be preserved and made public, or preserved and kept private. The idea that you could somehow train a technology to make that decision seems really unrealistic and preposterous to me.
Farber: I want to bring Jackie Zammuto from Witness into this conversation and ask you a two-part question. In the U.S. and internationally, who are the kinds of people who would be uploading this footage that you're speaking about, and who are the people, or what are the technological forces, that are interpreting it and making sense of it to see if it will go public?
Zammuto: I can respond to the first part, at least to start. In terms of who is uploading this type of footage here in the United States, there is nothing quite on the scale of what we're seeing coming out of Syria in terms of footage of potential war crimes being documented and uploaded. But there is a lot of footage of issues like police abuse, police killings, and violations by immigration enforcement and other private security forces that people are documenting, and are risking their lives to document and upload, for the purpose of exposing these abuses and, hopefully, securing some sort of accountability. There's a quote from Ta-Nehisi Coates that resonates a lot with me, which, especially in the context of police violence, really states that this violence is not new. This has been happening in this country for a very long time. The systemic abuse by police forces has been happening since the police forces were created. But what's new are the cameras. Because everyone has such easy access to these tools, it's become much easier, as Ellery mentioned in Syria, for people to document what they're seeing in their own communities and neighborhoods and to make that available for other people around the country and around the world to understand those people's reality and the types of abuses that they're facing. I can briefly touch on the second part [of your question], which is basically the same thing as what Ellery was saying before: it is these major companies like YouTube or Google and Facebook who are making the decisions on what can be uploaded, what is taken down, and what is censored. We have seen here in the U.S.
a number of occasions in which people's footage has been taken down because it's been violent, or, especially on Facebook, a number of instances where people are even just voicing their own frustrations around police violence and those posts do not get circulated because they violate the community protocols. Even Witness sometimes, when we post resources on our institutional account about how to film the police in a safe and effective way, we're not allowed to post that information because they say it violates their community standards. We haven't gotten a very satisfactory answer as to why that is.
Farber: I seem to recall a time when even big technology companies, and correct me if I'm wrong, even big technology companies had individuals responding to specific cases. In each of your work, you've been able to highlight the ways that artificial intelligence is making these calls about what content moves online and how. At what point did big technology companies like Google, YouTube, and Facebook turn to artificial intelligence and what are some of the ramifications of that turn?
Zammuto: Well, I don't know the exact point at which it shifted, and I know that there still are humans behind this decision-making process here in the U.S. and in other countries. As we saw with the whole situation in Burma, the heightened hate speech on Facebook, and the fact that it led people to actually commit murders and enact violence on a certain group of people, all fueled by language that was being used on the platform, caused Facebook to recognize that they were playing a role in the genocide there. Because of that, they put into place a number of human reviewers who were looking at the content, who are familiar with the local languages. They are trying both approaches, using AI as well as human moderation. In terms of what types of issues it brings about when it's artificial intelligence reviewing people's videos, to me, it's just the fact that it's artificial intelligence and it's not looking at it with a human perspective. It's not looking at it from a legal perspective or a historical memory perspective. It's just looking for things that trigger it to say, "This is really violent. This is a bad word." It's not looking at the context in which that's being used, or perhaps, as we've seen a number of times, sometimes people post stuff online responding to hateful comments that they've received, yet they're the ones who get censored and not the person who posted that original content. One of the issues, both with artificial intelligence as well as human intelligence, is a lack of consistency from these platforms, as well as a lack of transparency around what their procedures and protocols are when stuff gets taken down. What actually happens when you file a complaint? How can you follow up on that?
These are some of the things that my colleagues are really advocating for from these platforms, to try to make sure that people have a clear sense of how their content is being used, the choices that are being made when it's taken down, and how they can facilitate some sort of dialogue with those platforms to either say, this was unjust and you should put it back up there, or, I don't understand why you're not taking this down because it is very hateful or violent.
Roberts Biddle: I would add that the companies have almost no obligation to the public interest in the United States or anywhere else. They actually are beholden to higher legal standards in Europe, and in Germany in particular, because of regulations that have been passed in those places that effectively say, "If you don't remove this type of hate speech or Nazi propaganda," in the case of Germany, "we will fine you." Once money gets involved, the company has a very different response to the situation. But in the United States, there still is no real mechanism or regulation that holds these companies to account for how they treat our speech. That means, naturally, they do what is in their best interest. What we've seen over time is that they're constantly, as Jackie says, changing the way they respond to things. It's not consistent. It has evolved over time and, in some ways, they have improved their systems for people to report problems that they see on the platform. But, at the same time, the companies also seem to be constantly responding to different kinds of political pressure regarding certain kinds of content. It's a moving target, and a tremendous amount of what is actually happening when Google or Facebook decides to take down a video or leave one up is unknown to the public, and even to experts like Jackie or me who do actually communicate with and work with these companies because we have to. There is so little that is known about their decision-making processes and, as far as the artificial intelligence that they are building and "perfecting" goes, that code, the actual technology, is protected as a trade secret. There are technologists who could look at it and evaluate it on the basis of a human rights standard, but that's not going to happen. These are private companies and their interest is in their business. It's in making money. Their interest is not to protect or promote the public interest or human rights.
Farber: You both mentioned the challenges of transparency and consistency. What have your dealings in your respective jobs been like with big tech companies around those issues?
Roberts Biddle: I have been interacting mainly with Google and Facebook, some of the other companies as well, but those are the two where I've really had a chance to talk and negotiate with different teams and members of their staff for six or seven years. I think that there have been some really significant shifts over that time. The two companies are often mentioned together and thought of in similar ways for a lot of good reasons, but I have found overall that Google, as a company whose search engine is its central, flagship technology and platform, does think in a, let's say, deeper, a little bit more sophisticated way about how information is organized in general. You feel that in interactions with the company. That said, there has been a shift away from engaging with human rights discussions, which the company really did in a way that seemed like a good-faith effort for some years. Then there was a little bit of a shift, probably around 2013/14, and it's gotten a little more serious ever since. My sense is that their commitment to human rights is not really what they said it was. The biggest piece of concrete evidence of that is probably that the company had been building a search engine for use in China, where any technology that is showing people information needs to comply with a really sophisticated, serious censorship apparatus established by the state. That was an enormous shift for Google, which had actually left China on the grounds of wanting to protect the rights of its customers. In contrast, Facebook built a technology that, in the beginning, was just about letting people connect to each other, and that is very different from a technology that is designed to help people find information.
With Facebook, I have had many interactions, particularly with their policy staff and staff that address issues of human rights and situations of violence where safety is a real concern and where the company is aware that threats of violence or incidents of violence are playing out on the platform or escalating from the platform. But the company does not have the capacity to understand what is actually happening on the ground in whatever place or country an incident is playing out. In my role with Global Voices, which is an international citizen media network with contributors all over the world, a lot of times I play a bridging role and say, "You know, there's a really serious series of attacks going on against Muslim women in Southern India. The attacks are being planned on Facebook and they stay up on the platform. No one does anything about it." It would be my job to say to the company, "Hey, this is really serious, and I know it's happening in a language, say Malayalam, that maybe Facebook doesn't have any staff who read." They're hoping that the artificial intelligence systems will catch things, or hoping that there are human reviewers who will look at something and make a judgment call, maybe using a translation tool, but that's really different from understanding the actual context. I've gotten a mix of responses over the years, but overall, there was more compassion in the earlier years. I could sense that the person I was talking to wanted to try to make things better despite the many barriers that they might have been facing internally in the company. Today, it feels like the people in those staff roles are mostly trying to just calm me down or create a little bit of a steam-valve barrier between people in civil society like myself and the company.
Farber: Recently, Facebook announced that they were seeking to end the posting of white nationalist material and content. In this conversation, it's clear that that's a difficult task. How would they go about doing it and, as to your point about creating buffers between civil society and their business, is that even an attainable goal?
Zammuto: I think that it raises some really crucial questions around censorship and who is making these decisions. I think, more importantly, all of these issues and questions you're raising bring up this issue of who is deciding what gets to be remembered. At Witness, we've been in touch with Facebook and YouTube and Google for a number of years and have seen similar evolutions to what Ellery described. I think that the way we've approached it is really trying to be a presence in saying, "Look, this is what's happening on the ground. These are the issues that our partners are experiencing," and raising that awareness. Sometimes that's met with people who are interested in helping out and sometimes it's not. One of my colleagues, Dia Kayyali, who does a lot of work directly advocating to Facebook and Google, has brought up a term that gets used a lot when we see announcements like the one that Facebook just made: that we're entering a slippery slope. If we're making decisions on what is hate speech and what is not, and we're leaving that decision up to corporations that are not being fully regulated and whose number one priority is to make money, that is a slippery slope. But I think in the work that we're doing, which is more based in human rights and working with people who are documenting abuses on the ground, it's not just a slippery slope. I think we would say that it's been a slippery slope for a very long time, and that a lot of the people who are already very marginalized in our society have already been very marginalized online. They've already been kicked off or received warnings or had their accounts suspended for posting information about white supremacy, about police killings, about immigration enforcement. That has gotten much less press than when Facebook suddenly decides that they're going to ban white supremacists.
Zammuto (cont): I think that, again, it all bubbles up into this bigger question: it's not just who is making the call and who is censoring what, it's about what, as a society, we really want to be preserved and remembered. As we've seen historically, when it's left up primarily to corporations and institutions, a lot of voices and a lot of experiences get left out. These are the voices and experiences, again, that are often the most marginalized, the people who are many times the most victimized by these same institutions. So, a lot of the work that we're doing at Witness is really trying to help bring knowledge and tools into communities to support them, to create their own archives and their own historical memory, and to preserve content in the way they want it to be remembered, in a more complete history that takes into account their own personal experiences and their own personal lives.
Farber: Can you unpack the difference for us? How is posting on social media different from an archive?
Zammuto: One of the things that we like to say a lot in trainings, and it always makes some light bulbs pop up, is that YouTube is not an archive. It is a social media platform, and it is not a safe place to store valuable human rights content. We absolutely recognize that in some cases, especially in conflict zones like Syria, there really are very few other options if people want to get their content out. But because of all of these issues around takedowns, content moderation, and censorship, platforms like YouTube and Google cannot be seen as archives, because they do not have the intention of preserving this content for the long term. Whereas an archive within a library or a museum or a community organization exists for the purpose of preserving history for the long term, making it accessible for the community, or at least that would be the hope, and really trying to curate content in a way that is meaningful to the people it belongs to. That would be the ideal. That, of course, does not happen in all archives. But I just think that many people have this idea that, "Oh, I just shot this video on my phone," whether it's of a human rights violation or their own child or pet or something. They put it online and think it's going to be there forever. As we've seen time and time again, this stuff gets lost. It gets destroyed. It gets removed. I think it was just last week that there was an announcement that MySpace lost thousands and thousands of files and people's content because of a glitch in backing up one of their servers. So, when you put your media on these platforms, you're really entrusting them to preserve it, and I think the bottom line is they can't be trusted for that purpose.
Farber: In thinking about the impact of archives, if there are people who are able to gather their own archives, whether in the form of the Syrian Archive or in others, what are some of the short-term and perhaps longer-term impacts or possibilities that can come out of those memory projects?
Zammuto: I think one of the short-term impacts is really helping build a sense of ownership within the community and helping people recognize that they can have a say in how they are remembered and how the story of their community or their family is remembered. Giving them the tools to make those decisions and to preserve and maintain and make accessible these types of stories. I think that there's just a real sense of empowerment that comes with that type of ownership. I think that it also makes it possible, with the Syrian Archive, for example, for outside people like researchers and investigators to utilize that content for their own purposes. Whether it's putting together a report about chemical weapon usage in Syria or proposing that a deeper investigation into war crimes be started, I think that the really sophisticated work that groups like the Syrian Archive are doing has made it possible for this content not just to be stored in a way that it's going to be preserved; they're also doing a really good job of tagging and describing and contextualizing the content so that it's useful to people who are not there, who might not have any prior knowledge of what is actually happening in Syria. They come across projects like the Syrian Archive and they're able to glean a lot of information. Whereas if you're going onto a platform like YouTube, things are a lot more scattered. A lot of times the uploaders don't include that valuable context. That makes it really easy for the footage to be taken out of context, for its message to be confused, or for it to be seen as propaganda. I think these efforts to contextualize and add nuance to the content that they're working with are really crucial in being able to report to a broader international community about the types of violations that are happening.
One of the projects we worked on here in the United States: we partnered with a group in Sunset Park, Brooklyn, to organize, curate, and preserve some of the content that they've been shooting over the past several decades. Almost all of this footage documents abuses by the police in their neighborhood. The purpose of this was really two-fold. Actually, we had a lot of purposes. This is a big project, but one of them was to better organize this content in a way that they could actually go back and locate it more quickly when they were speaking to journalists. They also really wanted to find a way to start to identify patterns of behavior within the precinct that is patrolling their community. So many of these community members know certain police officers. They know that they've been involved in misconduct and abusive behavior time after time after time, but because they don't have a way to connect those dots, it's been hard for them to make that case to the media or to lawyers. Especially here in New York, where there's a super Draconian law that makes it essentially impossible for the public to access police personnel records. We don't know anything about their misconduct history or whether or not they've been disciplined. It really puts the burden of proof on the community to show that certain officers have repeatedly engaged in misconduct and violent behavior. One of the goals of the project we were working on with this group, El Grito de Sunset Park, was to try to find a way to connect the dots between officers' behavior and then also pull in other open source data like officers' salaries to show, "Hey, this officer has been involved in several different abusive incidents, but he continues to get a raise year after year after year. As far as we know, he's not getting disciplined." 
I think just that immediate power to start connecting the dots, even if we're not identifying widespread patterns, that ability to see the bigger picture of the stories this type of media can tell collectively, is really powerful.
Roberts Biddle: One of the things I was thinking about, Paul, as I was preparing for this, were the other projects of Monument Lab and your focus on public monuments. There are so many challenges and questions, which you know far better than I do, about how a decision is made to erect a monument or to tell a story in a public, real-life space. There's a lot of government power around how that happens. One of the really, still, very powerful things about the internet is that the barriers to building the kind of archive that Jackie was telling us about, the one El Grito de Sunset Park built, are much lower. The internet is still there for us. To build a website and maintain it, you need some skills and some resources, but the barriers to getting those things and to doing it are significantly lower than they are to building something in real life. I'm pretty sure. I feel it's important to highlight the enduring value of the internet as a space where we can still build our own archives, build spaces for testimony where a community that's trying to tell its own stories gets to decide how they want to do that. This is especially true in the United States, where we have fairly strong protections for freedom of speech. While there is censorship on the internet here, there's a pretty strong protection against the censorship of individual websites and projects like this. When our content gets into the hands of these companies, the equation completely changes, but the relatively low barrier to building your own archive or story on the internet is something that I hope people are thinking about and can get excited about. Because it is really powerful.
Zammuto: I think another thing that's really exciting about this idea of community-led archives is that it is helping facilitate a lot of collaborations in spaces where people might not always be talking to each other. I think a lot of that has to do with the fact that an archive is not an easy thing to create. A lot of times when people hear the word archive, they think of a static document or a series of documents or a hard drive that's just being protected. But it's actually very much a living thing, and it requires human input to maintain and keep up and, again, to make that content accessible if that's the purpose of the archive. I think what a lot of community-led organizations are recognizing is that it's not something they can do on their own. So, they're reaching out to different types of organizations, whether it's media groups, universities, or other community organizations, to think about how they can strategically work together to create decision-making processes for what is and is not stored in the archive, and how they can train volunteers who might not have any archival background in maintaining and promoting the archive. We've seen some kind of interesting collaborations come out of this that make me feel hopeful about a way forward in terms of really making sure that this important historical memory is preserved.
Farber: We're aware of the wave of online content that any day could include incredibly harrowing scenes of violence, or important documentation of brutality against residents. Do you, not just professionally but also personally, have a sense of a respectful way for yourself and others to be in digital spaces and to handle images that may be violent or traumatic? And of figuring out when to share them, when to view them, or the moments you may feel compelled to turn your head the other way, if that's an option as well?
Zammuto: Witness has done quite a bit of work in thinking about how we make decisions around whether or not to share this content: the ethical considerations about protecting the people shown in the video, the people who filmed the video, and the viewers. Because even if it's not an issue that you're directly involved with, watching a horrific video of a killing or a beheading, or even just a violent incident between two people, can be really traumatic and can trigger people in a lot of different ways. So, we've put out a resource to help people think through that process: when it is appropriate to share something, how it should be credited, and when you might not want to show something, or when you might want to blur somebody's face out or reach out directly to the person who uploaded it before sharing it more broadly. There are a lot of very practical considerations that can be put into place, and I've seen a lot of these implemented in newsrooms, which I think is really important. On a more personal level, it's definitely difficult to see this type of content on such a regular basis, and it can mess with your head in ways that you don't always fully realize until something else triggers you and you recognize that you're maybe not in the best space. I've tried to be a little bit more selective about when I watch video content, and sometimes I choose not to watch it. Sometimes I will watch stuff, and if I have to watch it multiple times, I'll turn the sound off, because I've heard and I've experienced that that can be a really good tactic to create a little bit of distance between the video and you as a viewer. Those are some of the things. I know that there is an emerging field around vicarious trauma that is really trying to help people who are experiencing this type of violence secondhand and grappling with it in a whole lot of different ways. 
I know the Columbia Journalism School, the Dart Center there, has some really fantastic resources around how to cope with this for newsrooms as well as individuals. That's been something that I've pulled from quite a bit.
Roberts Biddle: One example that comes to mind: at Global Voices, we are a media organization, so we're publishing images all the time, videos too. We have a lot of discussions about all the things that Jackie raised. Who is in the image? Who is tied to the image? Who is responsible for creating it and sharing it? Then, how are we putting it into context? When we have something that's pretty strong or graphic, we walk through all those steps in order to make a determination about whether to show it and how. There are cases in which we'll actually describe an image and then offer a link so that if a reader wants to go see something, they can, but they don't have to. We're also working on a little tool that some sites have. It's effectively like a little curtain. The image is covered, and if you would like to pull back the curtain and look at it, you may, but you get a little summary, or what's sometimes called a trigger warning, so that you know that when you do, you might see something that will really disturb you. There was a really significant controversy and backlash that happened in Kenya in January. There was a violent attack on an office and hotel complex in Nairobi, and many people were held hostage. A number of people were killed. There were several big media photographers who came and took pictures of what they could see from the outside of the building while there were still hostages inside. Among other outlets, the New York Times published a breaking news story that included a photograph in which there were two men who were dead. Both had been sitting at tables in an outdoor covered café space, clearly working at laptops, but both bodies were slumped over so you couldn't see either person's face. Because you couldn't see their faces, the photograph was technically permissible by the New York Times' standards. 
The response from people in Kenya who saw that photo online, while there were still hostages in the building and while people were still trying to figure out if their loved ones, colleagues, and friends were alive or dead, was really extreme. People were furious. Many people started calling for the government to deport the New York Times bureau chief in Nairobi. It was a moment of crisis in which, I think, some of the energy behind those calls had to do with the severity of the situation, but it also raised a lot of questions about the ethics of that type of photography and the quick posting and sharing of it, even by a big media organization that has the capacity to make a well-thought-out decision about how that will affect everybody who's touched by an image. There was some really good writing about it afterwards, and it struck me as a really important moment in which to think carefully about all of the people who can be affected by a single image and to take a moment before deciding to push it out to the whole world. The other thing I would say, something that we're still learning as a society and that it will be important to bring into education and into family life, is the ways to talk about an image or a video. If you post something or share it in some way, can you frame it by offering some kind of invitation for thoughtful, critical discussion about it? Or can you share it and say, "This made me feel this way, and I'm concerned about X and Y"? Are there ways to set norms around talking about something so that people can express how they feel about it in such a way that it feels safe and possible to have a dialogue, in contrast to these super rapid-fire decisions and sharing, and the sometimes careless, incredibly offensive, irresponsible comments that people will make? 
I have hope for that, and hope that we'll develop stronger, more thoughtful, and more informed norms and tendencies about how to talk about images online and offline.
Zammuto: One of the things that we recommend to people we're training is that before you post any image online ... Say you record an incident of police abuse. The first thing that you should do is pause, just take a breath, and really think about what happened. Then, start to think strategically: how are you going to share this, what consequences might it have for the person you just filmed, and what consequences might it have for you as the filmer? We've seen a number of instances of this. For example, Ramsey Orta was the young man who filmed the death of Eric Garner at the hands of the NYPD. After he posted the video of Eric Garner being killed, Ramsey was targeted, harassed, and surveilled online and offline, and his family received a lot of hateful messages and had people showing up outside their home. Currently, he is serving four years in prison on what are technically unrelated charges, but many believe that he really was targeted by the police because of his involvement in that case. That is a really important consideration, and it's not just about safety and ethics. I think it's really important to remember that sometimes waiting to share an image, especially something around police violence or state violence, can be a really good tactic, because it gives space for the police or the government to put out their own official statement. Then, when that citizen or eyewitness footage is posted, it can do a really powerful job of contradicting the official statement. One really clear example of this was the killing of Walter Scott in South Carolina, where the eyewitness who filmed the shooting by the police officer initially did not want to post it online because he feared retaliation against himself. He waited until the police statement came out, in which the officer said that Walter Scott was running towards him and trying to grab his taser. 
That was in no way what the video showed, let alone what actually happened. When the eyewitness, Feidin Santana, saw the police report, he decided to go to a local community organizer, who helped him get in touch with the family, who gave their consent for posting the video. Then, he worked with a journalist, and they were able to get the video out after the statement was released. Because the video so strongly contradicted the official statement, that officer was actually fired that day and charged. He is now serving time in prison. It's one of the few instances of police accountability for an incident of police violence that we can point to. I think a lot of it has to do with the strategy with which that video was shared.
Farber: Ellery Roberts Biddle and Jackie Zammuto, thank you so much for joining Monument Lab.
Roberts Biddle: Thank you. Thank you for having us.
Zammuto: Thank you so much.