http://themonthlymuktidooth.blogspot.com

Friday, October 11, 2019

WSJ on DeepFakes: ‘It’s a cat & mouse game’

How does the committee categorise deepfakes? What kinds of technology are you most worried about and why?


Content generated with the help of AI can be broadly called Synthetic Media. When that process is used to deceive audiences, we call these synthesized pieces of content deepfakes. Deepfakes can be segmented into fake video, images, audio and text.
While most of the conversation around deepfakes is focused on video right now, we have also seen audio deepfakes improve rapidly. The first examples of AI-generated audio that we heard sounded very robotic and were easy to identify as fakes. One of the latest examples, a virtual copy of the voice of podcast host Joe Rogan, sounds eerily lifelike and is very difficult to tell apart from Rogan’s real voice. This technology could be used for fraud: in August, the WSJ reported the case of criminals who used AI-based software to mimic a chief executive’s voice and demand a fraudulent transfer of €220,000.
Over the past few months, there have been several examples of a different form of doctored video, so-called ‘cheapfakes’ or ‘shallowfakes’. These are videos that have not been altered with the help of artificial intelligence, but with rudimentary video editing tools. A good example is a video of House Speaker Nancy Pelosi that appeared on social media in May. Forgers slowed down footage of Pelosi speaking at a conference and readjusted the pitch of her voice, making it appear as if she were intoxicated.
Even though it was relatively easy to spot that this was doctored footage, the video spread widely on social media. This shows that it does not take much to fool some online users. That is the effect of what experts call ‘confirmation bias’: when people see a video that seems to confirm something they already believe, they are more likely to think the footage is real and to share it online.
This infamous video of Nancy Pelosi on Facebook spurred the same old question about the balance between freedom of speech and the fight against misinformation. How could the fact that deepfakes are based on AI and machine learning reframe the question of freedom of speech?
As mentioned, the Pelosi video is not a deepfake, since it doesn’t falsely show her saying or doing anything she didn’t actually say or do, and it wasn’t created using deep learning (which is where the “deep” part of deepfake comes from). It’s a ‘shallowfake’ in which the footage is simply slowed down and taken out of context to make it misleading. A deepfake refers to synthetic media that has been modified through AI, specifically generative adversarial networks.
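For readers curious about what a generative adversarial network actually does, here is a minimal, illustrative Python (PyTorch) sketch of the idea. It is a toy example with made-up dimensions, not a deepfake pipeline: a generator learns to produce samples that a discriminator can no longer distinguish from real ones.

```python
# Toy GAN training loop: illustrative only, with hypothetical dimensions.
# Real face-swap tools add face detection, encoder-decoder architectures
# and far larger datasets on top of this adversarial idea.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # arbitrary toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)      # stand-in for real samples (e.g. face crops)
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator: learn to tell real samples from generated ones.
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```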
The second issue is whether social networks should take the video down. It’s not as simple as it may seem. Broadly speaking, could content of this nature be considered satire? Where should the line be drawn? We have seen that platforms like Facebook, YouTube and Twitter have found different answers to these questions so far: in the case of the Pelosi video, Facebook decided to leave it on its website, downrank it and add a note for viewers that it had been debunked. YouTube decided to pull it. Twitter left it on its platform.
In an interview with Digiday, you spoke of ‘a massive proliferation in technology in a couple of years which will coincide with the U.S. general election’. Do you think the US media is ready to take on this challenge? What can medium- and small-sized newsrooms do to be better prepared?
Media forensics experts predicted that the 2018 midterm elections would be the first major political event in the U.S. to see the spread of deepfakes on social media. That was not the case. But we have to stay vigilant. Our goal is to be prepared when deepfakes become prevalent. The Wall Street Journal is one of the first newsrooms to tackle the looming threat that deepfakes pose, but we are seeing heightened awareness at other news organizations, too. Reuters, for example, is training its reporters in deepfake detection by showing them examples of synthetic videos that the news agency generated in collaboration with a startup. The New York Times launched its News Provenance Project and the Washington Post developed a guide to manipulated video.
However, there’s still a massive knowledge gap between large newsrooms like the WSJ, NYT or WaPo and small news organizations when it comes to deepfakes. One of the solutions is collaboration and having the news industry come together as one to address this challenge. Our team at the Journal has published a deepfake guide on Nieman Lab. It is important to us to advance our own understanding of the issue, but also to share best practices with the rest of the news industry.
What are your thoughts on media forensics training, meaning an understanding of the underlying technologies behind the various tools used and the ability to detect hidden or deleted data? Do you think it should be part of all journalists’ education? How can curricula keep up with the fast-developing technology?
At the moment, it is still relatively easy to spot a deepfake video if you know what to look for. Basic video manipulation and digital research skills are most likely enough to recognize most altered videos. However, we have to keep up with the advancements in deepfake technology and constantly update our detection methods. Some startups and tech companies are already preparing for more intricate deepfakes by developing automatic detection software based on machine learning, which will help social networks and newsrooms spot altered videos and photos faster and more reliably. Understanding deepfakes should be part of the training program of newsrooms. However, misinformation has always existed and it’s not going away anytime soon.
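As a rough, hedged illustration of what such machine-learning detection software can look like under the hood, the sketch below fine-tunes a small off-the-shelf image classifier to label individual video frames as real or fake. The folder layout and training data are hypothetical; real detectors typically add face cropping, temporal models and much larger datasets.

```python
# Hedged sketch: a simple "real vs. fake" frame classifier.
# The "frames/real" and "frames/fake" folders are a hypothetical layout.
# Assumes torchvision >= 0.13 for the weights= argument.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects subfolders named by class, e.g. frames/real/... and frames/fake/...
train_set = datasets.ImageFolder("frames", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```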
Some universities are also being proactive in educating their students and treat this as a broader media literacy issue. For example, the Missouri School of Journalism is running a student competition to develop deepfake detection tools, while New York University has a media literacy class called “Faking the News”, where students learn (among other things) how to create deepfakes.
Most of the technology that could aid deepfake verification is still not publicly available or is inaccessible within newsroom workflows. Besides, the more deepfake detection algorithms are developed, the better deepfake creators learn how to improve their technology. Do you see any upcoming solutions to resolve this issue?
Most of this research is still in early stages. We can see that many universities are focused on finding a verification method, like UC Berkeley or SUNY. The Department of Defense is also interested in finding forensics techniques: the Defense Advanced Research Projects Agency (DARPA) has some programs to address media verification. There are also some startups and IT security companies trying to tackle this issue, including Deeptrace, Amber and ZeroFox.
Tech companies are also starting deepfake detection initiatives. Google has shared a dataset of deepfake videos, which allows researchers to test their detection tools. Facebook announced a deepfake detection challenge and will release a similar dataset. The company will fund this effort with $10 million and give out cash prizes to researchers who come up with the best detection methods.
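A hedged sketch of how a researcher might use such a labeled dataset to benchmark a detector: compare the detector’s predicted probability that each clip is fake against the ground-truth labels. The filenames and scores below are invented for illustration.

```python
# Hedged sketch: scoring a detector's predictions against a labeled dataset
# of real (0) and fake (1) clips. All filenames and probabilities are made up.
import numpy as np
from sklearn.metrics import log_loss, roc_auc_score

labels = {"clip_001.mp4": 1, "clip_002.mp4": 0, "clip_003.mp4": 1}            # ground truth
predictions = {"clip_001.mp4": 0.92, "clip_002.mp4": 0.10, "clip_003.mp4": 0.65}  # P(fake)

keys = sorted(labels)
y_true = np.array([labels[k] for k in keys])
y_pred = np.array([predictions[k] for k in keys])

print("log loss:", log_loss(y_true, y_pred))   # penalizes confident wrong answers
print("ROC AUC :", roc_auc_score(y_true, y_pred))
```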
Tech giants are increasingly investing in deepfake detection. Given their technological and financial resources, do you worry about a handful of tech companies having a monopoly over the control of deepfakes?
We are going through a period where the public is becoming less positive about the impact of big tech companies, and naturally journalists may be wary of using deepfake detection tools developed by these giants. But deepfakes seem to be a concern for these companies as well, and we see them releasing training datasets that could be helpful for creating a detection tool. This is a great first step. However, I imagine that journalists may be a bit more comfortable using independently developed or open-source tools that leverage these and other datasets.
Tools that were developed by tech companies for innocuous purposes have eventually ended up being used for dis- and malinformation. How can this risk be mitigated in the future? What is the responsibility of the companies in addressing it?
It’s important for tech companies to consider following ethical and transparency guidelines and to put processes in place that attempt to prevent their technology from being used to spread misinformation. One possible way for companies to mitigate the risk posed by video and image manipulation tools is to introduce a watermarking or blockchain system that makes it easy to detect whether content has been altered using specific software.
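One simple flavor of that idea is cryptographic fingerprinting at publication time. The sketch below is an illustration of the general principle, not any particular company’s system: the publisher records a keyed hash of the file when it is released, and anyone holding that record can later check whether the bytes have changed.

```python
# Hedged sketch of the fingerprinting idea behind watermarking/provenance
# systems: record a signed hash at publication, re-check it later.
# The key, bytes and registry are hypothetical; real systems are far richer.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the publisher

def fingerprint(video_bytes: bytes) -> str:
    """Return an HMAC-SHA256 fingerprint of the published file."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def is_unaltered(video_bytes: bytes, published_fingerprint: str) -> bool:
    """True only if the file matches what was originally fingerprinted."""
    return hmac.compare_digest(fingerprint(video_bytes), published_fingerprint)

original = b"...raw video bytes at publication time..."
record = fingerprint(original)           # would be stored in a registry or ledger

tampered = original + b"extra frame"
print(is_unaltered(original, record))    # True
print(is_unaltered(tampered, record))    # False
```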
Deepfakes are not an issue only for video, but also for audio, which is arguably even harder to tackle. Some are now also warning against ‘fake text’. What are your thoughts on the particularities of deepfakes across different media types?
Each media format — video, audio, text, images — has its own particularities and there are different detection tips for each one. Fake text is very difficult to catch. A well-known example is the GPT-2 model, created by OpenAI. This model was trained to predict text and can also translate between languages and answer questions. Because of the potential for misuse of this technology, OpenAI decided to release its model in stages. One potential misuse could be, for example, generating misleading news articles. In fact, research from OpenAI partners found that people find synthetic text almost as convincing as real articles. Here’s a site that lets you test the GPT-2 model: https://talktotransformer.com/ .
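For readers who want to experiment beyond that site, here is a short sketch that generates text locally with the publicly released GPT-2 weights via the Hugging Face transformers library (the small "gpt2" checkpoint is assumed).

```python
# Hedged sketch: sampling text from the publicly released GPT-2 weights
# using the Hugging Face `transformers` pipeline. The prompt is arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Scientists announced today that"
outputs = generator(prompt, max_length=60, num_return_sequences=2, do_sample=True)

for out in outputs:
    print(out["generated_text"])
    print("---")
```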

Courtesy: GEN 
 
