The Google Arts & Culture initiative promotes experiments at the crossroads of art and technology, created by artists and creative coders. I selected two experiments that apply machine learning methods to detect objects in photographs and artworks and to generate machine-based tags. These tags are then used to make cultural collections easier to access and explore.
Tags and Life Tags
These two demo experiments explore how computers read and tag artworks using machine learning.
Tags: keywords were generated without human intervention by an algorithm also used in Google Photos, which analyzed the artworks by looking only at the images, without any metadata.
The user interface shows a list of tags (keywords), each followed by its number of occurrences in the artwork collection. Selecting the tag ‘man’ reveals artworks containing what an intelligent machine understands to be a man. Hovering over an artwork reveals the other tags detected in that specific representation.
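The experiment's code is not public, but the tag index it describes is easy to picture. Here is a minimal sketch, using invented artwork IDs and tags, of how machine-generated tags could be aggregated into the keyword list and drill-down the interface shows:

```python
from collections import Counter

# Hypothetical machine-generated tags per artwork (illustrative data only).
artwork_tags = {
    "artwork_001": ["man", "portrait", "beard"],
    "artwork_002": ["man", "horse", "landscape"],
    "artwork_003": ["woman", "portrait"],
    "artwork_004": ["man", "portrait"],
}

# Build the tag list shown in the interface: each keyword with its
# number of occurrences across the collection.
tag_counts = Counter(tag for tags in artwork_tags.values() for tag in tags)
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")

# Selecting a tag reveals the artworks it appears in.
def artworks_for(tag):
    return [art for art, tags in artwork_tags.items() if tag in tags]

print(artworks_for("man"))  # → ['artwork_001', 'artwork_002', 'artwork_004']
```

In the real experiment the per-artwork tag lists come from an image-classification model; everything downstream is just this kind of counting and filtering.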
Life Tags: organizes over 4 million images from the Life magazine archives into an interactive interface that looks like an encyclopedia. The terms of the “encyclopedia” were generated by an algorithm based on a deep neural network used in Google photo search, trained on millions of images and labels to recognize categories of labels and pictures.
Labels were clustered into categories using a nearest-neighbor algorithm, which finds related labels based on image feature vectors. Each image carries multiple labels linked to the elements recognized in it. The full-size image viewer shows dotted lines revealing the objects the computer detected.
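The article doesn't detail the clustering step, but the idea of grouping labels by proximity in feature space can be sketched with scikit-learn's `NearestNeighbors`. The labels and feature vectors below are synthetic stand-ins (in the real system they would come from a deep network's embedding layer):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical image-feature vectors for a handful of labels.
labels = ["dog", "wolf", "cat", "car", "truck", "bus"]
features = rng.normal(size=(6, 128))
# Nudge related labels toward each other so the toy data has structure.
features[1] = features[0] + 0.1 * rng.normal(size=128)   # wolf ~ dog
features[4] = features[3] + 0.1 * rng.normal(size=128)   # truck ~ car
features[5] = features[3] + 0.1 * rng.normal(size=128)   # bus ~ car

# Find each label's nearest neighbor in feature space; neighbors that
# land close together would be merged into one encyclopedia category.
nn = NearestNeighbors(n_neighbors=2).fit(features)
_, idx = nn.kneighbors(features)
for i, label in enumerate(labels):
    print(label, "->", labels[idx[i][1]])  # idx[i][0] is the label itself
```

With real embeddings the same query returns, for each label, the labels whose images look most alike, which is what lets "wolf" and "dog" end up under one heading.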
As Digital Humanities 2018 approaches, I took some time to look at its program. Unfortunately, I didn’t have a contribution to submit this year, so I won’t attend the conference. But I had the pleasure of being a reviewer for this edition, and I’ll also stay tuned on Twitter during the conference!
My main interest in Digital Humanities bridges the analysis of large-scale visual archives and the graphical user interfaces used to browse and make sense of them. So I selected the following contributions that I would like to attend if I were at DH2018.
Distant Viewing with Deep Learning: An Introduction to Analyzing Large Corpora of Images
Taylor and Lauren coordinate Distant Viewing, a laboratory which develops computational techniques to analyze moving-image culture on a large scale. Previously, they contributed to Photogrammar, a web-based platform for organizing, searching, and visualizing 170,000 photographs. This project was first presented at Digital Humanities 2016 (abstract here), and I mentioned this work in my presentation at HDRIO2018 (slides here, Portuguese only).
Beyond Image Search: Computer Vision in Western Art History, with Miriam Posner, Leonardo Impett, Peter Bell, Benoit Seguin and Bjorn Ommer;
Computer Vision in DH, with Lauren Tilton, Taylor Arnold, Thomas Smits, Melvin Wevers, Mark Williams, Lorenzo Torresani, Maksim Bolonkin, John Bell, Dimitrios Latsis;
Building Bridges With Interactive Visual Technologies, with Adeline Joffres, Rocio Ruiz Rodarte, Roberto Scopigno, George Bruseker, Anaïs Guillem, Marie Puren, Charles Riondet, Pierre Alliez, Franco Niccolucci
Extracting and Aligning Artist Names in Digitized Art Historical Archives by Benoit Seguin, Lia Costiner, Isabella di Lenardo, Frédéric Kaplan (from EPFL, Switzerland);
Métodos digitales para el estudio de la fotografía compartida. Una aproximación distante a tres ciudades iberoamericanas en Instagram [Digital methods for the study of shared photography: a distant reading of three Ibero-American cities on Instagram] (by Gabriela Elisa Sued)
Paper session: Visual Narratives
Computational Analysis and Visual Stylometry of Comics using Convolutional Neural Networks, by Jochen Laubrock and David Dubray (from University of Potsdam, Germany);
Automated Genre and Author Distinction in Comics: Towards a Stylometry for Visual Narrative, by Alexander Dunst and Rita Hartel (from University of Paderborn, Germany);
Metadata Challenges to Discoverability in Children’s Picture Book Publishing: The Diverse BookFinder Intervention, by Kathi Inman Berens, Christina Bell (from Portland State University and Bates College, United States of America)
Chromatic Structure and Family Resemblance in Large Art Collections — Exemplary Quantification and Visualizations (by Loan Tran, Poshen Lee, Jevin West and Maximilian Schich);
Modeling the Genealogy of Imagetexts: Studying Images and Texts in Conjunction using Computational Methods (by Melvin Wevers, Thomas Smits and Leonardo Impett);
A Graphical User Interface for LDA Topic Modeling (by Steffen Pielström, Severin Simmler, Thorsten Vitt and Fotis Jannidis)
The last seminar held by the Vision and Graphics Laboratory was about data mining historical documents. Marcelo Ribeiro, a master’s student at the Applied Mathematics School of the Getúlio Vargas Foundation (EMAp/FGV), presented the results obtained by applying topic modeling and natural language processing to the analysis of historical documents. This work was previously presented at the first international Digital Humanities conference held in Brazil (HDRIO2018) and had Renato Rocha Souza (professor and researcher at EMAp/FGV) and Alexandre Moreli (professor and researcher at USP) as co-authors.
The database used is part of the CPDOC-FGV collection and essentially comprises historical documents from the 1970s that belonged to Antonio Azeredo da Silveira, Brazil’s former Minister of Foreign Affairs:
• over 10 thousand documents
• over 66 thousand pages
• over 14 million tokens/words (dictionary words or not)
• 5 languages, mainly Portuguese
Existing projects on visualization-based interfaces (interfaces that enable navigation through visualization) for cultural collections usually focus on making their content more accessible to specialists and the public.
Possibly one of the first attempts to explore new forms of knowledge discovery in cultural collections was SFMOMA ArtScope, developed by Stamen Design in 2007 (now decommissioned). The interface allowed users to explore more than 6,000 artworks in a grid-based, zoomable visualization. Navigating the collection follows a visualization-first paradigm that is mainly exploratory (although the interface enables navigation through keyword search, the visualization canvas is clearly the protagonist). The artworks’ thumbnails are visually organized by when they were purchased by the museum. The user can pan the canvas by dragging it, and the lens serves as a selection tool that magnifies the selected work and reveals detailed information about it.
ArtScope is an attractive interface that offers the user an overview of the size and content of SFMOMA’s collection. However, the artworks on the canvas are organized only by time of acquisition, a feature of little use to most users (except perhaps museum staff). Other dimensions (authorship, creation date, technique, subject, etc.) can be neither filtered nor used to visually organize the structure of the canvas.
The video below illustrates the interface navigation:
Multiplicity is a collective photographic portrait of Paris. Conceived and designed by Moritz Stefaner for the 123 data exhibition, this interactive installation provides an immersive dive into the image space spanned by hundreds of thousands of photos taken across the Paris city area and shared on social media.
Content selection and curation aspects
The original image dataset consisted of 6.2 million geolocated social-media photos posted in Paris in 2017. However, for reasons not fully clarified (perhaps technical?), a custom selection of 25,000 photos was chosen according to a list of criteria. Moritz highlights that his intention was not to measure the city but to portray it: “Rather than statistics, the project presents a stimulating arrangement of qualitative contents, open for exploration and to interpretation — consciously curated and pre-arranged, but not pre-interpreted.” This curatorial method wasn’t used just for data selection but also for bridging the t-SNE visualization and the grid visualization. Watch the transition effect in the video below. As a researcher interested in user interfaces and visualization techniques that support knowledge discovery in digital image collections, I wonder whether such a curatorial method could be considered in a Digital Humanities approach.
Using machine learning techniques, the images are organized by similarity and image content, allowing users to visually explore niches and microgenres of image styles and content. More precisely, it applies t-SNE dimensionality reduction to features from the last layer of a pre-trained neural network to cluster the images of Paris. The author says: “I used feature vectors normally intended for classification to calculate pairwise similarities between the images. The map arrangement was calculated using t-SNE — an algorithm that finds an optimal 2D layout so that similar images are close together.”
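The feature-to-map step Stefaner describes can be sketched with scikit-learn's t-SNE. The vectors below are synthetic stand-ins for CNN features (two made-up "visual themes" instead of real photo embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)

# Stand-in for per-photo feature vectors from the last layer of a
# pre-trained classification network; two synthetic "visual themes".
theme_a = rng.normal(loc=0.0, size=(40, 64))
theme_b = rng.normal(loc=3.0, size=(40, 64))
features = np.vstack([theme_a, theme_b])

# t-SNE computes a 2D layout in which photos with similar features
# end up close together, forming the clusters seen on the map.
coords = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(features)
print(coords.shape)  # one (x, y) map position per photo
```

Each row of `coords` becomes a thumbnail position on the map; the microgenres the author mentions appear as dense neighborhoods in this 2D layout.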
While the t-SNE algorithm takes care of the clustering and neighborhood structure, manual annotations help identify curated map areas. These areas can be zoomed on demand, enabling close viewing of similar photos.