IMS Photography Research Grant

To stimulate research on the history of Brazilian photography and on the works in its collection, Instituto Moreira Salles, holder of one of the most significant photographic collections in Brazil, is promoting the first edition of the IMS Photography Research Grant. A previously unpublished research project will be selected on the production of Marc Ferrez (1843-1923), one of the most important Brazilian photographers of the 19th century, whose work has been in the custody of IMS since 1998.

Entries will be accepted from July 16 to August 31, 2018, and the final result will be announced in October.

The grant announcement, application form, and additional information are available here (Portuguese only).

Digital Humanities 2018: a selection of sessions I would like to attend

As Digital Humanities 2018 approaches, I took some time to look at its program. Unfortunately, I didn't have a contribution to submit this year, so I won't attend the conference. But I had the pleasure of serving as a reviewer for this edition, and I'll stay tuned on Twitter during the conference!

My main topic of interest in digital humanities bridges the analysis of large-scale visual archives and the graphical user interfaces that help people browse and make sense of them. With that in mind, I selected the following contributions I would like to attend if I were at DH2018.

Workshop

Distant Viewing with Deep Learning: An Introduction to Analyzing Large Corpora of Images

by Taylor Arnold, Lauren Tilton (University of Richmond)

Taylor and Lauren coordinate Distant Viewing, a laboratory that develops computational techniques to analyze moving image culture at a large scale. Previously, they contributed to Photogrammar, a web-based platform for organizing, searching, and visualizing 170,000 historical photographs. The project was first presented at Digital Humanities 2016 (abstract here), and I mentioned it in my presentation at HDRIO2018 (slides here, Portuguese only).

Panels
  • Beyond Image Search: Computer Vision in Western Art History, with Miriam Posner, Leonardo Impett, Peter Bell, Benoit Seguin and Björn Ommer;
  • Computer Vision in DH, with Lauren Tilton, Taylor Arnold, Thomas Smits, Melvin Wevers, Mark Williams, Lorenzo Torresani, Maksim Bolonkin, John Bell, Dimitrios Latsis;
  • Building Bridges With Interactive Visual Technologies, with Adeline Joffres, Rocio Ruiz Rodarte, Roberto Scopigno, George Bruseker, Anaïs Guillem, Marie Puren, Charles Riondet, Pierre Alliez, Franco Niccolucci

Paper session: Art History, Archives, Media

  • The (Digital) Space Between: Notes on Art History and Machine Vision Learning, by Benjamin Zweig (from Center for Advanced Study in the Visual Arts, National Gallery of Art);
  • Modeling the Fragmented Archive: A Missing Data Case Study from Provenance Research, by Matthew Lincoln and Sandra van Ginhoven (from Getty Research Institute);
  • Urban Art in a Digital Context: A Computer-Based Evaluation of Street Art and Graffiti Writing, by Sabine Lang and Björn Ommer (from Heidelberg Collaboratory for Image Processing);
  • Extracting and Aligning Artist Names in Digitized Art Historical Archives by Benoit Seguin, Lia Costiner, Isabella di Lenardo, Frédéric Kaplan (from EPFL, Switzerland);
  • Digital Methods for the Study of Shared Photography: A Distant Approach to Three Ibero-American Cities on Instagram (by Gabriela Elisa Sued)

Paper session: Visual Narratives
  • Computational Analysis and Visual Stylometry of Comics using Convolutional Neural Networks, by Jochen Laubrock and David Dubray (from University of Potsdam, Germany);
  • Automated Genre and Author Distinction in Comics: Towards a Stylometry for Visual Narrative, by Alexander Dunst and Rita Hartel (from University of Paderborn, Germany);
  • Metadata Challenges to Discoverability in Children’s Picture Book Publishing: The Diverse BookFinder Intervention, by Kathi Inman Berens, Christina Bell (from Portland State University and Bates College, United States of America)

Poster sessions:
  • Chromatic Structure and Family Resemblance in Large Art Collections — Exemplary Quantification and Visualizations (by Loan Tran, Poshen Lee, Jevin West and Maximilian Schich);
  • Modeling the Genealogy of Imagetexts: Studying Images and Texts in Conjunction using Computational Methods (by Melvin Wevers, Thomas Smits and Leonardo Impett);
  • A Graphical User Interface for LDA Topic Modeling (by Steffen Pielström, Severin Simmler, Thorsten Vitt and Fotis Jannidis)

Multiplicity project at the 123data exhibition in Paris

Multiplicity is a collective photographic portrait of Paris. Conceived and designed by Moritz Stefaner for the 123data exhibition, this interactive installation provides an immersive dive into the image space spanned by hundreds of thousands of photos taken across the Paris city area and shared on social media.

Content selection and curation aspects

The original image dataset consisted of 6.2 million geolocated social media photos posted in Paris in 2017. However, for reasons not fully explained (perhaps technical ones?), a custom selection of 25,000 photos was made according to a list of criteria. Moritz highlights that his intention was not to measure the city, but to portray it. He says: “Rather than statistics, the project presents a stimulating arrangement of qualitative contents, open for exploration and to interpretation — consciously curated and pre-arranged, but not pre-interpreted.” This curatorial method was used not only for data selection but also for bridging the t-SNE visualization and the grid visualization. Watch the transition effect in the video below. As a researcher interested in user interfaces and visualization techniques that support knowledge discovery in digital image collections, I wonder whether such a curatorial method could be adopted in a digital humanities approach.
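
Moritz doesn't explain how the t-SNE layout is bridged to the grid layout. As a hedged sketch (not necessarily his method): a common way to snap a 2D scatter onto an even grid is to solve an assignment problem that minimizes total point displacement, in the spirit of tools like RasterFairy. Everything below is illustrative.

```python
# Hypothetical sketch: snap a t-SNE scatter to an even grid by solving
# an assignment problem (minimal total point-to-cell displacement).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

points = np.random.rand(100, 2)  # stand-in for 2D t-SNE coordinates

# Build a 10x10 target grid covering the unit square.
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Cost of moving each point to each grid cell, then the assignment
# that minimizes the total movement.
cost = cdist(points, grid)
row_ind, col_ind = linear_sum_assignment(cost)
grid_positions = grid[col_ind]  # grid cell for each point, in point order
```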

Data Processing

Using machine learning techniques, the images are organized by similarity and image content, allowing viewers to explore niches and microgenres of image styles and subjects. More precisely, the installation uses t-SNE dimensionality reduction over features from the last layer of a pre-trained neural network to cluster images of Paris. The author says: “I used feature vectors normally intended for classification to calculate pairwise similarities between the images. The map arrangement was calculated using t-SNE — an algorithm that finds an optimal 2D layout so that similar images are close together.”
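
A minimal sketch of such a pipeline, assuming a torchvision ResNet-50 as the pre-trained network (the post doesn't say which architecture Moritz used) and a hypothetical photos/ folder:

```python
# Sketch: extract last-layer CNN features and lay them out with t-SNE
# so visually similar photos end up close together.
import glob

import numpy as np
import torch
from PIL import Image
from sklearn.manifold import TSNE
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # drop the classifier, keep the 2048-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def feature_vector(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0).numpy()

paths = sorted(glob.glob("photos/*.jpg"))  # hypothetical image folder
features = np.stack([feature_vector(p) for p in paths])

# 2D layout preserving pairwise similarity: similar images land nearby.
xy = TSNE(n_components=2, metric="cosine").fit_transform(features)
```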

While the t-SNE algorithm takes care of the clustering and neighborhood structure, manual annotations help identify curated map areas. These areas can be zoomed in on demand, enabling close viewing of similar photos.


About machine capabilities versus human sensitivities

For Recognition, an artificial intelligence program that pairs Tate's masterpieces with news photographs provided by Reuters, there are visual or thematic similarities between the photo of a woman with the phrase #foratemer ("Temer out") written on her face during a protest against the Brazilian constitutional amendment known as PEC 55, and the portrait of a seventeenth-century aristocrat in costume that signals his sovereignty and authority. At a time when intelligent, thinking machines such as chatbots are widely discussed, I wonder whether the algorithms that created the dialogue between these two images could be aware of the conflicting, yet no less interesting, relationship between resistance and power established between them.

Visualizing cultural collections

Browsing the content from the Information Plus Conference (2016 edition), I bumped into a really interesting presentation regarding the use of graphical user interfaces and data visualization to support the exploration of large-scale digital cultural heritage.

One View is Not Enough: High-level Visualizations of Large Cultural Collections is a contribution by the Urban Complexity Lab, from the University of Applied Sciences Potsdam. Check the talk by Marian Dörk:

As many cultural heritage institutions, such as museums, archives, and libraries, digitize their assets, a pressing question arises: how can we give access to these large-scale and complex inventories? How can we present them in ways that let people draw meaning from them, get inspired, entertained, and maybe even educated?

The Urban Complexity Lab tackles this open problem by investigating and developing graphical user interfaces and different kinds of data visualizations to explore cultural collections in ways that reveal high-level patterns and relationships.

In this specific talk, Marian presents two projects conducted at the Lab. The first, DDB visualized, is a project in partnership with the Deutsche Digitale Bibliothek. Four interactive visualizations make the vast extent of the German Digital Library visible and explorable. Periods, places and persons are three of the categories, while keywords provide links to browsable pages of the library itself.


The second, GEI-Digital, is a project in partnership with the Georg Eckert Institute. This data dossier provides multi-faceted perspectives on GEI-Digital, a digital library of historical schoolbooks created and maintained by the Georg Eckert Institute for International Textbook Research.


DH2017 – Computer Vision in DH workshop (Papers – Third Block)

Third block: Deep Learning
Chair: Thomas Smits (Radboud University)

6) Aligning Images and Text in a Digital Library (Jack Hessel & David Mimno)

Abstract
Slides
Website David Mimno
Website Jack Hessel

Problem: correspondence between text and images.
  • In this work, the researchers train machine learning algorithms to match images from book scans with the text in the pages surrounding those images.
  • Using 400K images collected from 65K volumes published between the 14th and 20th centuries and released into the public domain by the British Library, they build information retrieval systems capable of cross-modal retrieval, i.e., searching images using text and vice versa (see the sketch after this list).
  • Previous multi-modal work:
    • Datasets: Microsoft Common Objects in Context (COCO) and Flickr (images with user-provided tags);
    • Tasks: Cross-modal information retrieval (ImageCLEF) and Caption search / generation
  • Project Goals:
    • Use text to provide context for the images we see in digital libraries, and as a noisy “label” for computer vision tasks
    • Use images to provide grounding for text.
  • Why is this hard? Most relationships between text and images are weakly aligned, that is, very vague. A caption is an example of strong alignment between text and images; an article is an example of weak alignment.
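
To make the retrieval task concrete, here is a minimal sketch of cross-modal search, assuming image and text embeddings have already been projected into a shared vector space (the paper's actual model is described in the abstract and slides; everything below is illustrative):

```python
# Sketch: rank images against a text query by cosine similarity in a
# shared embedding space. Random vectors stand in for real embeddings.
import numpy as np

rng = np.random.default_rng(0)

def normalize(m):
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

image_embeddings = normalize(rng.random((400, 128)))  # stand-in image vectors
text_embedding = normalize(rng.random(128))           # stand-in query vector

def search_images_by_text(query, images, k=5):
    scores = images @ query          # cosine similarity (vectors are unit-norm)
    return np.argsort(-scores)[:k]   # indices of the k best-matching images

print(search_images_by_text(text_embedding, image_embeddings))
```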

7) Visual Trends in Dutch Newspaper Advertisements (Melvin Wevers & Juliette Lonij)

Abstract
Slides

Live Demo of SIAMESE: Similar advertisement search.
  • The context of advertisements for historical research:
    • “insight into the ideals and aspirations of past realities …”
    • “show the state of technology, the social functions of products, and provide information on the society in which a product was sold” (Marchand, 1985).
  • Research question: How can we combine non-textual information with textual information to study trends in advertisements?
  • Data: ~1.6M advertisements from two Dutch national newspapers, Algemeen Handelsblad and NRC Handelsblad, published between 1948 and 1995;
  • Metadata: title, date, newspaper, size, position (x, y), OCR text, page number, total number of pages.
  • Approach: Visual Similarity:
    • Group images together based on visual cues;
    • Demo: SIAMESE: SImilar AdvertiseMEnt SEarch;
    • Approximate nearest neighbors in the penultimate layer of an ImageNet Inception model (see the sketch after this list).
  • Final remarks:
    • Object detection and visual similarity approaches reveal trends at different levels, similar to close and distant reading;
    • Visual Similarity is not always Conceptual Similarity;
    • Combination of text/semantic and visual similarity as a way to find related advertisements.
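
As a hedged illustration of the nearest-neighbor step (I don't know which ANN implementation SIAMESE uses; Annoy is one common choice, and the 2048-dimension figure is an assumption based on typical Inception feature vectors):

```python
# Sketch: approximate nearest-neighbor search over penultimate-layer
# image features, here with the Annoy library.
import numpy as np
from annoy import AnnoyIndex

dim = 2048                           # assumed feature size
index = AnnoyIndex(dim, "angular")   # angular distance ~ cosine similarity

features = np.random.rand(1000, dim)  # stand-in advertisement features
for i, vec in enumerate(features):
    index.add_item(i, vec)
index.build(10)  # number of random projection trees

# The 10 advertisements most visually similar to advertisement 0.
print(index.get_nns_by_item(0, 10))
```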

8) Deep Learning Tools for Foreground-Aware Analysis of Film Colors (Barbara Flueckiger, Noyan Evirgen, Enrique G. Paredes, Rafael Ballester-Ripoll, Renato Pajarola)

The research project FilmColors, funded by an Advanced Grant of the European Research Council, aims at a systematic investigation into the relationship between film color technologies and aesthetics.

Initially, the research team analyzed 400 films from 1895 to 1995 with a protocol consisting of about 600 items per segment to identify stylistic and aesthetic patterns of color in film.

This human-based approach is now being extended with advanced software that detects the figure-ground configuration and plots the results into corresponding color schemes based on a perceptually uniform color space (see Flueckiger 2011 and Flueckiger 2017, in press).
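
The FilmColors software itself isn't described in detail here, but as a minimal sketch of one of its ingredients, extracting a color scheme in a perceptually uniform space (the figure-ground segmentation is omitted, and the file name is hypothetical):

```python
# Sketch: extract a small color palette from a film frame by clustering
# its pixels in CIELAB, a perceptually uniform color space.
import numpy as np
from skimage import color, io
from sklearn.cluster import KMeans

frame = io.imread("frame.png")[:, :, :3]   # hypothetical film frame
lab = color.rgb2lab(frame).reshape(-1, 3)  # Lab distances ~ perceived differences

kmeans = KMeans(n_clusters=6, n_init=10).fit(lab)
palette_lab = kmeans.cluster_centers_

# Convert the palette back to RGB in [0, 1] for display.
palette_rgb = color.lab2rgb(palette_lab.reshape(1, -1, 3)).reshape(-1, 3)
print((palette_rgb * 255).astype(int))
```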

ERC Advanced Grant FilmColors