Held on November 10 and 11 in São Paulo, the third edition of Coda.Br (Brazil's annual conference on data journalism and digital methods) brought together more than 300 participants for dozens of hours of talks and hands-on workshops. I wasn't able to attend, but fortunately the organizers gathered and shared all the conference presentations in one place!
Here I highlight some lectures and workshops on subjects such as data visualization and machine learning:
The US military is funding an effort to determine whether AI-generated video and audio will soon be indistinguishable from the real thing, even for another AI.
The Defense Advanced Research Projects Agency (DARPA) is holding a contest this summer to generate the most convincing AI-created videos and the most effective tools to spot the counterfeits.
Some of the most realistic fake footage is created by generative adversarial networks, or GANs. A GAN pits two AI systems against each other: one generates content while the other tries to tell real from fake, and each round of competition refines the forgeries until they are real enough to fool the detector. In other words, the final videos are literally made to dupe detection tools.
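The adversarial loop described above can be sketched in a few dozen lines. The toy below (pure Python, no ML libraries) trains a miniature "GAN" in which a one-line generator learns to mimic a Gaussian distribution centered at 4.0; the target distribution, learning rate, and step counts are all invented for illustration, and real video-generating GANs use deep convolutional networks rather than these scalar models.

```python
import math
import random

random.seed(42)

def sigmoid(t):
    t = max(-60.0, min(60.0, t))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-t))

# "Real" data: samples from a Gaussian centered at 4.0 (an invented
# stand-in for real footage in this toy example).
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c), P(x is real)

lr = 0.05
for _ in range(5000):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    x = real_sample()
    g = a * random.gauss(0.0, 1.0) + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * ((1 - d_real) * x - d_fake * g)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake), i.e. learn to
    # fool the discriminator (the "non-saturating" GAN loss)
    z = random.gauss(0.0, 1.0)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator output mean: {fake_mean:.2f} (real data mean: 4.0)")
```

Note that the generator never sees a real sample directly: it learns only from the discriminator's verdicts, which is exactly why the finished fakes are tuned to slip past detection.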
Why does it matter? The software to create these videos is becoming increasingly advanced and accessible, which could cause real harm. Earlier this year, actor and filmmaker Jordan Peele warned of the dangers of deepfakes by manipulating a video of a Barack Obama speech.
Have you heard of the so-called deepfakes? The word, a portmanteau of “deep learning” and “fake”, refers to a new AI-assisted human image synthesis technique that generates realistic face swaps.
The technology behind deepfakes is relatively easy to understand. In short, you show sets of images of two individuals to a machine (a computer program or an app such as FakeApp) and, through an artificial intelligence approach, it finds common ground between the two faces and stitches one over the other.
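As I understand it, tools in the FakeApp family implement this "common ground" with an autoencoder: a single shared encoder learns features common to both faces, a separate decoder per person learns to reconstruct that person, and the swap is simply encoding person A's face and decoding it with person B's decoder. The sketch below shows that structure with toy linear maps over 4-number "faces"; PATTERN_A, PATTERN_B, the dimensions, and the learning rate are all invented for illustration, and real deepfake tools use deep convolutional networks on actual images.

```python
import random

random.seed(0)

DIM, LATENT = 4, 2

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rand_matrix(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

# Toy "faces": each person's face is a fixed pattern scaled by a varying
# "expression" factor (real systems of course work on image pixels).
PATTERN_A = [1.0, 0.0, 1.0, 0.0]
PATTERN_B = [0.0, 1.0, 0.0, 1.0]

def face(pattern):
    s = random.uniform(0.5, 1.5)
    return [s * p for p in pattern]

E = rand_matrix(LATENT, DIM)      # shared encoder
DEC_A = rand_matrix(DIM, LATENT)  # decoder for person A
DEC_B = rand_matrix(DIM, LATENT)  # decoder for person B

lr = 0.05
def train_step(x, dec):
    """One SGD step on ||dec(E x) - x||^2, updating E and dec in place."""
    h = matvec(E, x)
    y = matvec(dec, h)
    err = [yi - xi for yi, xi in zip(y, x)]
    for i in range(DIM):          # gradient for the decoder
        for j in range(LATENT):
            dec[i][j] -= lr * 2 * err[i] * h[j]
    for j in range(LATENT):       # gradient for the shared encoder
        for k in range(DIM):
            E[j][k] -= lr * 2 * sum(err[i] * dec[i][j]
                                    for i in range(DIM)) * x[k]

for _ in range(4000):
    train_step(face(PATTERN_A), DEC_A)  # encoder + A-decoder see A's faces
    train_step(face(PATTERN_B), DEC_B)  # encoder + B-decoder see B's faces

# The shared encoder + A's decoder should reconstruct A's faces well...
xa = list(PATTERN_A)
recon_a = matvec(DEC_A, matvec(E, xa))
err_a = sum((r - x) ** 2 for r, x in zip(recon_a, xa))
print("reconstruction error for A:", round(err_a, 4))

# ...and the "face swap" is encoding A's face, decoding with B's decoder.
swapped = matvec(DEC_B, matvec(E, xa))
print("A's face decoded with B's decoder:", [round(v, 2) for v in swapped])
```

In real deepfake training, the decoder's job of imposing its person's identity onto any encoded expression is what makes the swapped output look like person B making person A's expression; this linear toy only demonstrates the shared-encoder/dual-decoder plumbing, not that visual effect.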
The deepfake phenomenon started to draw attention after a 2017 porn scandal, when an anonymous Reddit user posting under the pseudonym “Deepfakes” shared several face-swapped porn videos on the Internet.
Deepfakes in politics
Deepfakes have been used to misrepresent well-known politicians on video portals and in chatrooms. For example, the face of Argentine President Mauricio Macri was replaced with that of Adolf Hitler:
Also, Angela Merkel’s face was replaced with Donald Trump’s.
In April 2018, Jordan Peele, working with BuzzFeed, demonstrated the dangerous potential of deepfakes with a video in which a man who looks just like Barack Obama says the following: “So, for instance, they could have me say things like ‘Killmonger was right’ or ‘Ben Carson is in the Sunken Place,’ or ‘President Trump is a total and complete dipshit.'”
I wonder whether Peter Burke would rethink the documentary and historical status of photography now that AI and deep learning systems (like generative adversarial networks, or GANs) are being used to create believable fake images at scale.