From tulip-mania speculation to the cryptocurrency market

Drawing a historical parallel between the “tulip-mania” that swept across the Netherlands and the rest of Europe in the 1630s and the speculation currently surrounding cryptocurrencies, Mosaic Virus, created by the artist and researcher Anna Ridler, is a video work generated by a GAN (Generative Adversarial Network), an artificial intelligence (AI) technique in which two neural networks compete to produce convincing synthetic images.

The video shows a tulip blooming, an updated version of a Dutch still life for the 21st century. The appearance of the tulip is controlled by the price of bitcoin. “Mosaic” is the name of the virus that causes the stripes in a tulip’s petals, which increased the flowers’ desirability and helped fuel the speculative prices of the 1630s. In this piece, the stripes depend on the value of bitcoin, changing over time to show how the market fluctuates.

Text adapted from Anna Ridler’s website.

AI may have outwitted itself with fake video

The US military is funding an effort to determine whether AI-generated video and audio will soon be indistinguishable from the real thing—even for another AI.

The Defense Advanced Research Projects Agency (DARPA) is holding a contest this summer to generate the most convincing AI-created videos and the most effective tools to spot the counterfeits.

Some of the most realistic fake footage is created by generative adversarial networks, or GANs. GANs pit two AI systems against each other: one generates fake content while the other tries to detect it, and the generator keeps refining its output until it can fool its opponent. In other words, the final videos are literally made to dupe detection tools.
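To make that adversarial setup concrete, here is a minimal, hypothetical sketch of one GAN training step in PyTorch. The layer sizes, data, and hyperparameters are placeholders for illustration, not the architecture behind any of the systems in DARPA’s contest.

```python
# Minimal GAN training step (PyTorch). All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. 28x28 images, flattened

# Generator: turns random noise into a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fakes = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_images), real_labels) + bce(D(fakes), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce images the discriminator calls real,
    #    i.e. the forger is graded on how well it fools the detector.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to training_step moves both networks one step: the detector gets better at spotting fakes, and the forger gets better at evading the improved detector.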

Why does it matter? The software to create these videos is becoming increasingly advanced and accessible, which could cause real harm. Earlier this year, actor and filmmaker Jordan Peele warned of the dangers of deepfakes by manipulating a video of a Barack Obama speech.

Introducing deepfakes

Have you heard about so-called deepfakes? The word, a portmanteau of “deep learning” and “fake”, refers to an AI-assisted human image synthesis technique that generates realistic face swaps.

The technology behind deepfakes is relatively easy to understand. In short, you show a set of images of an individual to a machine (a computer program or an app such as FakeApp) and, using a deep learning approach, it learns the common structure between two faces and stitches one over the other.
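As a rough illustration of how such face-swapping tools are commonly built, here is a hedged sketch of the shared-encoder, two-decoder autoencoder idea often described in connection with deepfakes, written in PyTorch. The layer sizes, the 64x64 input resolution, and the training details are assumptions for the sketch, not the internals of FakeApp or any specific app.

```python
# Sketch of a face-swap setup: one shared encoder, one decoder per person.
# Architecture and sizes are illustrative placeholders only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

def reconstruction_loss(decoder, faces):
    # The shared encoder and this person's decoder learn to reconstruct that person's face.
    return nn.functional.mse_loss(decoder(encoder(faces)), faces)

def swap(face_of_a):
    # The "swap": encode a frame of person A, decode it with person B's decoder,
    # so B's face is rendered with A's pose and expression.
    with torch.no_grad():
        return decoder_b(encoder(face_of_a))
```

Because both decoders share the same encoder, the latent code captures pose and expression that transfer across the two identities.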

The deepfake phenomenon started to draw attention after a 2017 porn scandal, when an anonymous Reddit user posting under the pseudonym “Deepfakes” published several face-swapped porn videos on the Internet.

Deepfakes in politics

Deepfakes have been used to misrepresent well-known politicians on video portals and in chatrooms. For example, the face of Argentine President Mauricio Macri has been replaced with that of Adolf Hitler:

Also, Angela Merkel’s face was replaced with Donald Trump’s.

In April 2018, Jordan Peele, in a video produced with BuzzFeed, demonstrated the dangerous potential of deepfakes: a synthetic Barack Obama appears to say the following: “So, for instance, they could have me say things like ‘Killmonger was right’ or ‘Ben Carson is in the Sunken Place,’ or ‘President Trump is a total and complete dipshit.'”

Mind-reading machines

A new AI model can roughly reconstruct what you are seeing from brain scans.

Schematics of the reconstruction approach, from the figure caption in Shen et al. (2018):

(A) Model training. We use an adversarial training strategy adopted from Dosovitskiy and Brox (2016b), which consists of 3 DNNs: a generator, a comparator, and a discriminator. The training images are presented to a human subject, while brain activity is measured by fMRI. The fMRI activity is used as an input to the generator. The generator is trained to reconstruct the images from the fMRI activity to be as similar to the presented training images in both pixel and feature space. The adversarial loss constrains the generator to generate reconstructed images that fool the discriminator to classify them as the true training images. The discriminator is trained to distinguish between the reconstructed image and the true training image. The comparator is a pre-trained DNN, which was trained to recognize the object in natural images. Both the reconstructed and true training images are used as an input to the comparator, which compares the image similarity in feature space.

(B) Model test. In the test phase, the images are reconstructed by providing the fMRI activity of the test image as the input to the generator.

Check the journal article here

Reference: Shen, Guohua, Kshitij Dwivedi, Kei Majima, Tomoyasu Horikawa, and Yukiyasu Kamitani. “End-to-End Deep Image Reconstruction from Human Brain Activity.” BioRxiv, February 27, 2018, 272518. https://doi.org/10.1101/272518.
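To make the three-network setup described in the caption above concrete, here is a rough PyTorch sketch of how the generator’s objective could combine the pixel, feature, and adversarial terms. The networks, voxel count, feature extractor, and loss weights are placeholders, not the exact configuration used by Shen et al.

```python
# Sketch of the generator objective: pixel loss + feature loss (via a fixed,
# pre-trained "comparator") + adversarial loss. All architectures are placeholders.
import torch
import torch.nn as nn
from torchvision.models import vgg19

fmri_dim = 4000  # hypothetical number of voxels in the fMRI input

# Generator: maps an fMRI activity vector to a 64x64 image.
G = nn.Sequential(nn.Linear(fmri_dim, 4096), nn.ReLU(),
                  nn.Linear(4096, 3 * 64 * 64), nn.Tanh())
# Discriminator: reconstructed image vs. true training image.
D = nn.Sequential(nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))
# Comparator: a pre-trained object-recognition network used only as a frozen feature extractor.
comparator = vgg19(weights="IMAGENET1K_V1").features.eval()
for p in comparator.parameters():
    p.requires_grad = False

bce = nn.BCEWithLogitsLoss()

def generator_loss(fmri, true_images):
    """fmri: (batch, fmri_dim); true_images: (batch, 3, 64, 64) in [-1, 1]."""
    recon = G(fmri).view(-1, 3, 64, 64)
    pixel_loss = nn.functional.mse_loss(recon, true_images)      # pixel space
    feature_loss = nn.functional.mse_loss(comparator(recon),     # feature space
                                          comparator(true_images))
    adv_loss = bce(D(recon.flatten(1)),                          # fool the discriminator
                   torch.ones(recon.size(0), 1))
    return pixel_loss + feature_loss + adv_loss  # relative weights omitted for brevity
```

The discriminator would be trained in alternation with the usual real-versus-reconstructed objective, as in the GAN sketch earlier in this post.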

AI is learning how to invent new fashions

In a paper published on arXiv, researchers from the University of California, San Diego, and Adobe have outlined a way for AI not only to learn a person’s style but also to create computer-generated images of items that match that style. This kind of computer vision task is being called “predictive fashion” and could let retailers create personalized pieces of clothing.

The model can be used for both personalized recommendation and design. Personalized recommendation is achieved by using a ‘visually aware’ recommender based on Siamese CNNs; generation is achieved by using a Generative Adversarial Net to synthesize new clothing items in the user’s personal style. (Kang et al., 2017).
Reference: Kang, Wang-Cheng, Chen Fang, Zhaowen Wang, and Julian McAuley. “Visually-Aware Fashion Recommendation and Design with Generative Image Models.” arXiv:1711.02231 [Cs], November 6, 2017. http://arxiv.org/abs/1711.02231.
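As a hedged illustration of the recommendation half of that idea, here is a small sketch of a “visually aware” recommender: a single CNN tower embeds every item image (the Siamese part), a per-user vector captures personal style, and a pairwise ranking loss pushes items the user actually bought above random ones. The backbone, dimensions, and loss here are assumptions for this sketch, not the exact model from Kang et al.

```python
# Sketch of a visually aware recommender: shared CNN image tower + user embeddings,
# trained with a pairwise (BPR-style) ranking loss. Sizes and backbone are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet18

embed_dim, num_users = 128, 1000

# Shared ("Siamese") image tower: the same CNN embeds every item image.
backbone = resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)

user_embeddings = nn.Embedding(num_users, embed_dim)

def preference_score(user_ids, item_images):
    """Higher score = the item image better matches the user's visual style."""
    item_vecs = backbone(item_images)          # (batch, embed_dim)
    user_vecs = user_embeddings(user_ids)      # (batch, embed_dim)
    return (user_vecs * item_vecs).sum(dim=1)  # dot-product affinity

def ranking_loss(user_ids, purchased_images, random_images):
    # Items the user actually bought should outscore randomly sampled items.
    diff = (preference_score(user_ids, purchased_images)
            - preference_score(user_ids, random_images))
    return -nn.functional.logsigmoid(diff).mean()
```

The generation half would then condition a GAN on the learned user vector so that newly synthesized garments score highly under this same preference function.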

Artificial Intelligence that can create convincing spoof photo and video

I wonder whether Peter Burke would rethink the documentary and historical status of photography now that AI and deep learning systems (such as generative adversarial networks, or GANs) are being used to create believable fake images at scale.

Reproduction from Ian Goodfellow’s presentation at EmTech MIT 2017.
Reference: J. Snow, “AI could send us back 100 years when it comes to how we consume news,” MIT Technology Review. [Online]. Available: https://www.technologyreview.com/s/609358/ai-could-send-us-back-100-years-when-it-comes-to-how-we-consume-news/. [Accessed: 09-Nov-2017].

Machine Learning and Logo Design

The rise of neural networks and generative design is impacting the creative industry. One recent example is Adobe using AI to automate some of designers’ tasks.

This Fast Company article looks at the application of machine learning to logo design and touches on the question of whether robots and automation are coming to take designers’ jobs.

More specifically, the article describes Mark Maker, a web-based platform that generates logo designs.

In Mark Maker, you start by typing in a word.
The system then generates logos for the given word.

But how does it work? I’ll quote Fast Company’s explanation: “In Mark Maker, you type in a word. The system then uses a genetic algorithm–a kind of program that mimics natural selection–to generate an endless succession of logos. When you like a logo, you click a heart, which tells the system to generate more logos like it. By liking enough logos, the idea is that Mark Maker can eventually generate one that suits your needs, without ever employing a human designer”.
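To see what “a kind of program that mimics natural selection” can look like in practice, here is a toy, purely illustrative sketch of the like-driven loop: each logo is reduced to a vector of style parameters, the user’s hearts act as the fitness signal, and liked logos are recombined and mutated into the next batch. None of this is Mark Maker’s actual code.

```python
# Toy genetic algorithm for "like-driven" logo generation. Purely illustrative.
import random

GENOME_SIZE = 8        # e.g. hue, font index, icon index, spacing, ...
POPULATION_SIZE = 20
MUTATION_RATE = 0.1

def random_logo():
    return [random.random() for _ in range(GENOME_SIZE)]

def crossover(parent_a, parent_b):
    # Each style parameter is inherited from one of the two liked parents.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(logo):
    return [random.random() if random.random() < MUTATION_RATE else gene for gene in logo]

def next_generation(population, liked_indices):
    # Liked logos survive as parents; the rest of the population is refilled
    # with mutated crossovers of those parents.
    parents = [population[i] for i in liked_indices] or [random_logo()]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    return parents + children

# Usage: render the population as logos, record which ones the user hearts,
# then breed the next generation and repeat until something clicks.
population = [random_logo() for _ in range(POPULATION_SIZE)]
liked = [2, 7]  # indices the user liked in this round (hypothetical)
population = next_generation(population, liked)
```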

I’m not sure we can say this tool is actually doing design when it creates logos. Either way, it’s still a fun web toy. Give it a try!