A keynote by Lindsay King & Peter Leonard (Yale University) on “Processing Pixels: Towards Visual Culture Computation”.
Abstract: This talk will focus on an array of algorithmic image analysis techniques, from simple to cutting-edge, on materials ranging from 19th century photography to 20th century fashion magazines. We’ll consider colormetrics, hue extraction, facial detection, and neural network-based visual similarity. We’ll also consider the opportunities and challenges of obtaining and working with large-scale image collections.
Project: Robots Reading Vogue, at the Digital Humanities Lab, Yale University Library
1. The project:
- 121 years of Vogue (2,700 covers, 400,000 pages, 6 TB of data). First experiments: n-grams, topic modeling.
- Humans are better at spotting patterns in images (“distant vision”) with their own eyes than in text (“distant reading”)
- A simple interface laying out covers by month and year reveals Vogue’s seasonal patterns
- The interface is not technically difficult to implement (a minimal sketch follows this list)
- Does not use computer vision for analysis
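The month-by-year grid is indeed simple to build. A minimal sketch in Python with Pillow, assuming a hypothetical covers/cover_YYYY_MM.jpg naming scheme (not necessarily the project’s actual layout):

```python
from pathlib import Path

from PIL import Image

THUMB_W, THUMB_H = 60, 80             # per-cover thumbnail size
YEARS = list(range(1892, 2013))       # 121 years of Vogue
MONTHS = list(range(1, 13))

# One column per year, one row per month.
grid = Image.new("RGB", (THUMB_W * len(YEARS), THUMB_H * len(MONTHS)), "white")

for col, year in enumerate(YEARS):
    for row, month in enumerate(MONTHS):
        path = Path(f"covers/cover_{year}_{month:02d}.jpg")  # hypothetical naming
        if not path.exists():         # not every month has a digitized cover
            continue
        thumb = Image.open(path).resize((THUMB_W, THUMB_H))
        grid.paste(thumb, (col * THUMB_W, row * THUMB_H))

grid.save("vogue_grid.png")
```

No computer vision is involved here: the seasonal patterns emerge simply from placing the covers side by side.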
2. Image analysis in RRV (sorting covers by color to enable browsing)
- Media visualization (Manovich) showing saturation and hue by month. Result: differences by season of the year. Tool used: ImagePlot.
- “The average color problem”: averaging all of a cover’s pixels collapses its palette toward a muddy brown. Solutions:
- Slice histograms: the visualization Peter showed (see the sketch after this list).
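A minimal sketch of the histogram idea, in Python with Pillow and NumPy: keep a hue histogram per cover rather than a single mean color, then sort covers by dominant hue for color browsing. The file layout is hypothetical:

```python
from pathlib import Path

import numpy as np
from PIL import Image

def hue_histogram(path, bins=36):
    """Hue histogram of an image; unlike a single average color,
    it preserves the palette instead of collapsing it to brown."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"))
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (s > 40) & (v > 40)             # skip near-gray / near-black pixels
    hist, _ = np.histogram(h[mask], bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)       # normalize

def dominant_hue(path):
    return int(hue_histogram(path).argmax())

# Sort covers by dominant hue to enable browsing by color.
covers = sorted(Path("covers").glob("*.jpg"), key=dominant_hue)
```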
3. Experiment: face detection + geography
- Photogrammar (Yale’s platform mapping U.S. government photographs of the 1930s–40s)
- Code on GitHub
- Idea: place each image as a thumbnail on a map
- Face detection + composition (where faces sit within the frame); a sketch follows this list
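The talk does not say which detector was used; a common baseline is OpenCV’s stock Haar cascade. A sketch that records each face’s box and its relative position in the frame (the raw material for composition analysis); geographic metadata would then place the thumbnail on the map:

```python
import cv2

# OpenCV ships a stock frontal-face Haar cascade with its data files.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(path):
    """Return each face's bounding box and its relative center in the
    frame, a simple basis for composition analysis."""
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    height, width = img.shape
    return [{"box": (x, y, w, h),
             "center": ((x + w / 2) / width, (y + h / 2) / height)}
            for (x, y, w, h) in faces]
```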
4. Visual Similarity
- What if we could search for pictures that are visually similar to a given image?
- Neural-network approach (see the sketch after the demo notes)
- Demo of Visual Similarity experiment:
- In the main interface, you select an image and it shows its closest neighbors.
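The network behind the demo isn’t named; a common recipe for such demos is to take a pretrained CNN’s penultimate-layer activations as an image descriptor and rank neighbors by cosine similarity. A sketch with PyTorch/torchvision (model choice, file names, and the 5-neighbor cutoff are assumptions; requires a recent torchvision for the weights API):

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained CNN with the classification head removed: the remaining
# 2048-d activations act as a generic visual descriptor.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    v = model(x).squeeze(0)
    return v / v.norm()                  # unit length -> dot product = cosine

# Rank a (hypothetical) collection against a query image.
paths = sorted(Path("photos").glob("*.jpg"))
vectors = torch.stack([embed(p) for p in paths])
scores = vectors @ embed("query.jpg")    # cosine similarities
neighbors = [paths[i] for i in scores.topk(min(5, len(paths))).indices]
```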
Other related work on visual similarity:
- John Resig’s Ukiyo-e project (Japanese woodblock prints). Article: Resig, John. “Aggregating and Analyzing Digitized Japanese Woodblock Prints.” Japanese Association for Digital Humanities conference, 2013.
- TinEye’s MatchEngine, which Resig used (finds duplicate, modified, and even derivative images in an image collection).
- Carl Stahmer – Arch-V (English Broadside Ballad Archive)
- Article: Stahmer, Carl. (2014). “Arch-V: A platform for image-based search and retrieval of digital archives.” Digital Humanities 2014: Conference Abstracts
- ARCHIVE-VISION: code available on GitHub
- Peter refers to the paper Benoit presented in Kraków.
5. Final thoughts and next steps
- Towards Visual Culture Computation
- NNs are “indescribable”… but we can dig in to see which pixels contribute to a classification: http://cs231n.github.io/understanding-cnn/
- The Digital Humanities Lab at Yale University Library is currently applying a deep-learning approach to an image dataset from the Yale library to detect visual similarities.
- This project is called Neural Neighbors, and there is a live demo of neural-network visual similarity on 19th-century photographs
- The idea is to combine signal from pixels with signal from text (a sketch follows this list)
- Question: how to organize this logistically?
- Consider intrinsic metadata of available collections
- Approaches to handling copyright licensing restrictions (perpetual license and transformative use)
- Increase the number of open image collections available: museums, government collections, social media
- Partner with computer science departments working on computer vision, which rely on training datasets.
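How the pixel and text signals would be combined is left open in the talk; one simple, hypothetical scheme is to concatenate unit-normalized image embeddings with TF-IDF vectors of accompanying text (captions, OCR) and weight the two modalities. A sketch with NumPy and scikit-learn:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def combine(image_vecs, texts, alpha=0.5):
    """Concatenate unit-normalized image embeddings with TF-IDF text
    vectors; alpha weights pixels vs. text (hypothetical scheme)."""
    txt = TfidfVectorizer(max_features=512).fit_transform(texts).toarray()
    img = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    txt = txt / np.maximum(np.linalg.norm(txt, axis=1, keepdims=True), 1e-9)
    return np.hstack([alpha * img, (1 - alpha) * txt])
```

Nearest-neighbor search in the combined space would then reflect both what an image looks like and what its accompanying text says.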