Webinar: ImageNet – Where have we been? Where are we going?

ACM Learning Webinar
ImageNet: Where have we been? Where are we going?
Speaker: Fei-Fei Li
Chief Scientist of AI/ML at Google Cloud; Associate Professor at Stanford, Director of Stanford A.I. Lab

Slides

Webinar abstract: It took nature and evolution more than 500 million years to develop a powerful visual system in humans. The journey for AI and computer vision is about half a century. In this talk, Dr. Li will briefly discuss the key ideas and the cutting-edge advances in the quest for visual intelligence in computers, focusing on work done to develop ImageNet over the years.

_____

Some highlights of this webinar:

1) The impact of ImageNet on AI/ML research:
  • First, what’s ImageNet? It’s an image database, a “… large-scale ontology of images built upon the backbone of the WordNet structure”;
  • The article “ImageNet: A Large-Scale Hierarchical Image Database” (1) had ~4,386 citations on Google Scholar at the time of the webinar;
  • The dataset gave rise to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (2), a benchmark in image classification and object detection that has run annually since 2010;
  • Many ImageNet Challenge contestants became startups (e.g. Clarifai, VizSense);
  • ImageNet became a key driving force for the adoption of deep learning and helped spread the culture of building structured, annotated datasets for specific domains:
Annotated datasets for specific domains.
  • Kaggle: a platform for predictive modeling and analytics competitions, in which companies and researchers post data, and statisticians and data miners compete to produce the best models for predicting and describing that data.

“Datasets – not algorithms – might be the key limiting factor to development of human-level artificial intelligence.” (Alexander Wissner-Gross, 2016)

2) The background of ImageNet
  • The beginning: Publication about ImageNet in CVPR (2009);
  • Many earlier image datasets should be acknowledged:
Previous image datasets.
  • ImageNet became so popular because the dataset has the right characteristics for tackling Computer Vision (CV) tasks with a Machine Learning (ML) approach;
  • By 2005, the marriage of ML and CV had become a trend in the scientific community;
  • There was a shift in the way ML was applied to visual recognition tasks: from a modeling-oriented approach to one driven by large amounts of data;
  • This shift was partly enabled by the rapid growth of internet data, which created the opportunity to collect large-scale visual data.
3) From WordNet to ImageNet
  • ImageNet was built upon the backbone of WordNet, a foundational dataset that enabled work in Natural Language Processing (NLP) and related tasks.
  • What’s WordNet? It’s a large lexical database of English. The original paper (3) by George Miller et al. has over 5,000 citations. The database organizes over 150K words into 117K categories, establishing ontological and lexical relationships used in NLP and related tasks.
  • The idea to move from language to image:
From WordNet to ImageNet.
  • A three-step process (a minimal sketch follows this list):
    • Step 1: build ontological structures based on WordNet;
    • Step 2: populate the categories with thousands of images from the internet;
    • Step 3: clean bad results manually. By cleaning the errors you ensure the dataset is accurate.
From WordNet to ImageNet: three steps.
  • There were three attempts to populate, train and test the dataset. The first two failed; the third succeeded thanks to a technology that had just become available: Amazon Mechanical Turk, a crowdsourcing platform. ImageNet had the help of 49K workers from 167 countries (2007–2010).
  • After three years of work, ImageNet went live in 2009 (50M images organized into 10K concept categories).
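To make the three steps concrete, here is a minimal Python sketch of the idea (not the actual ImageNet pipeline). It assumes NLTK with the WordNet corpus downloaded; the image-search and cleaning functions are hypothetical placeholders for web image search and Mechanical Turk verification.

```python
# Minimal sketch of the WordNet -> ImageNet idea (not the actual pipeline).
# Assumes nltk is installed and the WordNet corpus has been downloaded with
# nltk.download('wordnet'); candidate_images() and is_good_image() are
# hypothetical placeholders for web image search and manual cleaning.
from nltk.corpus import wordnet as wn

def candidate_images(query):
    """Hypothetical web image search; ImageNet queried several engines."""
    return []  # placeholder

def is_good_image(url, synset):
    """Hypothetical manual check; ImageNet used Amazon Mechanical Turk."""
    return True  # placeholder

# Step 1: take an ontological structure from WordNet, e.g. "dog" and its kinds.
dog = wn.synset('dog.n.01')
categories = [dog] + dog.hyponyms()              # immediate sub-categories

dataset = {}
for synset in categories:
    queries = synset.lemma_names()               # e.g. ['dog', 'domestic_dog', ...]
    # Step 2: populate the category with candidate images from the internet.
    candidates = [url for q in queries for url in candidate_images(q)]
    # Step 3: clean bad results (in ImageNet, done by crowd workers).
    dataset[synset.name()] = [u for u in candidates if is_good_image(u, synset)]

print({name: len(urls) for name, urls in dataset.items()})
```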
4) What did they do right?
  • Based on ML needs, ImageNet targeted scale:
ImageNet: large-scale visual data
  • Beyond scale, the database also cared about:
    • image quality (high resolution, to better replicate human visual acuity);
    • accurate annotations (to create a benchmarking dataset and advance the state of machine perception);
    • being free of charge (to ensure immediate adoption and a sense of community -> democratization).
  • Emphasis on community: the ILSVRC challenge was launched in 2010;
  • ILSVRC was inspired by the PASCAL VOC challenge (Pattern Analysis, Statistical Modelling and Computational Learning; Visual Object Classes), which ran from 2005 to 2012.
  • Participation and performance: the number of entries increased; classification errors (top-5) went down; the average precision for object detection went up:
Participation and performance at ILSVRC (2010-2017)
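For reference, the “top-5” classification error used in ILSVRC counts a prediction as correct if the true label appears among a model’s five highest-scoring classes. A minimal NumPy sketch with made-up scores (not ILSVRC data):

```python
import numpy as np

# Toy illustration of the ILSVRC top-5 classification error.
# scores: one row of class scores per image (here 3 images, 10 classes).
rng = np.random.default_rng(0)
scores = rng.random((3, 10))
true_labels = np.array([2, 7, 5])

top5 = np.argsort(scores, axis=1)[:, -5:]              # 5 highest-scoring classes
hit = np.any(top5 == true_labels[:, None], axis=1)     # true label among them?
top5_error = 1.0 - hit.mean()
print(f"top-5 error: {top5_error:.2f}")
```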
5) Where has ImageNet invested effort, and where is it still investing?
  • Lack of detail: originally, just one category was annotated per image. Object detection made it possible to recognize more than one class per image (through bounding boxes);
  • Hierarchical annotation:
Confusion matrix and sub-matrices of classifying the 7404 leaf categories in ImageNet7K, ordered by a depth-first traversal of the WordNet hierarchy (J. Deng, A. Berg & L. Fei-Fei, ECCV, 2010) (4)
  • Fine-grained recognition: recognizing visually similar objects (different classes of cars, for example):
Fine-Grained Recognition (Gebru, Krause, Deng, Fei-Fei, CHI 2017)
6) Expected outcomes
  • ImageNet became a benchmark
  • It meant a breakthrough in object recognition
  • Machine learning advanced and changed dramatically
7) Unexpected outcomes
  • Neural nets became popular in academic research again;
  • Together with the increased availability of accurate datasets and high-performance GPUs, they drove a deep learning revolution;
  • Maximize specificity in ontological structures:
Maximizing specificity (Deng, Krause, Berg & Fei-Fei, CVPR 2012)
  • Still, relatively few works use ontological structures;
  • Comparing human and machine performance:
How humans and machines compare (Andrej Karpathy, 2014)
8) What lies ahead
  • Moving from object recognition to human-level understanding (from perception to cognition):
This means more than recognizing objects: AI will enable scene understanding, that is, understanding the relations between people, actions and artifacts in an image.
  • That’s the concept behind Microsoft COCO (Common Objects in Context) (5), a “dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding”;
  • More recently there is the Visual Genome (6), a dataset, a knowledge base, an ongoing effort to connect structural image concepts to language:
    • Specs (a toy scene-graph sketch follows this list):
      • 108,249 images (COCO images)
      • 4.2M image descriptions
      • 1.8M visual question answers (7W)
      • 1.4M objects, 75.7K object classes
      • 1.5M relationships, 40.5K relationship classes
      • 1.7M attributes, 40.5K attribute classes
      • Vision and language correspondences
      • Everything mapped to WordNet synsets
    • Exploratory interface:
The interface allows searching for images and selecting different image attributes.
  • The Visual Genome dataset was further used to advance the state of the art in CV:
    • paragraph generation;
    • relationship prediction;
    • image retrieval with scene graphs;
    • visual question answering.
  • The future of vision intelligence relies upon the integration of perception, understanding, and action;
  • From now on, ImageNet ILSVRC challenge will be organized by Kaggle, a data science community that organizes competitions and makes datasets available.
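To illustrate what connecting structured image concepts to language can look like, here is a toy scene-graph record in the spirit of Visual Genome. The field names are invented for illustration and are not the dataset’s actual JSON schema.

```python
# Toy scene-graph record in the spirit of Visual Genome (illustrative only).
# The real dataset ships as JSON with its own schema and maps objects,
# attributes and relationships to WordNet synsets.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    synset: str                              # e.g. a WordNet synset id
    bbox: tuple                              # (x, y, w, h) in pixels
    attributes: list = field(default_factory=list)

@dataclass
class Relationship:
    subject: SceneObject
    predicate: str
    obj: SceneObject

man = SceneObject("man", "man.n.01", (10, 20, 80, 200), ["standing"])
horse = SceneObject("horse", "horse.n.01", (100, 60, 150, 180), ["brown"])
graph = [Relationship(man, "feeding", horse)]

for r in graph:
    print(f"{r.subject.name} -{r.predicate}-> {r.obj.name}")
```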
References 

(1) Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. “ImageNet: A Large-Scale Hierarchical Image Database.” In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

(2) Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” International Journal of Computer Vision 115, no. 3 (December 2015): 211–52. doi:10.1007/s11263-015-0816-y.

(3) Miller, George A. “WordNet: A Lexical Database for English.” Communications of the ACM 38, no. 11 (1995): 39–41.

(4) Deng, Jia, Alexander C. Berg, Kai Li, and Li Fei-Fei. “What Does Classifying More than 10,000 Image Categories Tell Us?” In European Conference on Computer Vision, 71–84. Springer, 2010. https://link.springer.com/chapter/10.1007/978-3-642-15555-0_6.

(5) Lin, Tsung-Yi, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. “Microsoft COCO: Common Objects in Context.” arXiv:1405.0312 [Cs], May 1, 2014. http://arxiv.org/abs/1405.0312.

(6) Krishna, Ranjay, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, et al. “Visual Genome.” Accessed September 27, 2017. https://pdfs.semanticscholar.org/fdc2/d05c9ee932fa19df3edb9922b4f0406538a4.pdf.

Your face in 3D

Reconstructing a 3-D model of a face is a fundamental Computer Vision problem that usually requires multiple images. But a recent publication presents an artificial intelligence approach to tackle this problem. And it does an impressive job!

In this work, the authors train a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. See more information at their project website.
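As a rough illustration of the idea (a network that maps a single 2D image directly to a voxel volume), here is a minimal PyTorch sketch. The layer sizes are arbitrary and this is not the authors’ stacked-hourglass architecture; training would compare the predicted occupancy grid against voxelized 3D scans, e.g. with a binary cross-entropy loss.

```python
# Minimal sketch of direct volumetric regression: a CNN maps a 2D face image
# to a 3D voxel occupancy grid. Layer sizes are arbitrary placeholders; the
# paper uses a stacked hourglass network and a much larger volume.
import torch
import torch.nn as nn

class VolumetricRegressor(nn.Module):
    def __init__(self, depth=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, depth, 1)   # `depth` occupancy slices per pixel

    def forward(self, image):                 # image: (B, 3, H, W)
        feats = self.encoder(image)           # (B, 64, H/4, W/4)
        return torch.sigmoid(self.head(feats))  # voxel occupancy probabilities

model = VolumetricRegressor()
fake_face = torch.randn(1, 3, 128, 128)
print(model(fake_face).shape)                 # torch.Size([1, 32, 32, 32])
```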

Try their online demo!
Reference: Jackson, Aaron S., Adrian Bulat, Vasileios Argyriou, and Georgios Tzimiropoulos. “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression.” arXiv:1703.07834 [Cs], March 22, 2017. http://arxiv.org/abs/1703.07834.

Machine Learning and Logo Design

The rise of neural networks and generative design is impacting the creative industry. One recent example is Adobe using AI to automate some designers’ tasks.

This Fast Company article discusses the application of Machine Learning to logo design and touches on the issue of whether or not robots and automation are coming to take designers’ jobs.

More specifically, the article describes Mark Maker, a web-based platform that generates logo designs.

In Mark Maker, you start typing in a word.
The system then generates logos for the given word.

But how does it work? I’ll quote Fast Company’s explanation: “In Mark Maker, you type in a word. The system then uses a genetic algorithm–a kind of program that mimics natural selection–to generate an endless succession of logos. When you like a logo, you click a heart, which tells the system to generate more logos like it. By liking enough logos, the idea is that Mark Maker can eventually generate one that suits your needs, without ever employing a human designer”.
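For intuition, here is a toy genetic-algorithm loop in the spirit of that description. The “logo” representation, the fitness stand-in and all parameters are invented for illustration; this is not Mark Maker’s actual implementation.

```python
# Toy genetic algorithm: evolve "logos" (here just parameter vectors) toward
# the ones a user "likes". Representation and fitness are invented for
# illustration and are not Mark Maker's internals.
import random

def random_logo():
    # A "logo" as a vector of style parameters (hue, weight, spacing, ...).
    return [random.random() for _ in range(6)]

def mutate(logo, rate=0.1):
    return [min(1, max(0, g + random.gauss(0, rate))) for g in logo]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def user_likes(logo):
    # Stand-in for the "heart" click: pretend the user likes logos whose
    # parameters are close to a hidden target style.
    target = [0.7, 0.2, 0.9, 0.4, 0.5, 0.8]
    return sum(abs(g - t) for g, t in zip(logo, target)) < 1.5

population = [random_logo() for _ in range(20)]
for generation in range(10):
    liked = [l for l in population if user_likes(l)] or population[:2]
    # Breed the next generation from liked logos: selection, crossover, mutation.
    population = [mutate(crossover(random.choice(liked), random.choice(liked)))
                  for _ in range(20)]
print(f"liked in final generation: {sum(user_likes(l) for l in population)}/20")
```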

I’m not sure we can say this tool is actually applying design to create logos. Either way, it’s still a fun web toy. Give it a try!

How bias happens in Machine Learning

Interaction bias? Latent bias? Selection bias?

An insightful video by Google Creative Lab explains how intelligent machines perpetuate human biases.

Just because something is based on data doesn’t automatically make it neutral. Even with good intentions, it’s impossible to separate ourselves from our own human biases. So our human biases become part of the technology we create in many different ways.

“Human-augmented” design: how Adobe is using AI to automate designers’ tasks

According to a Fast Company article, Adobe is applying machine learning and image recognition to graphic and web design. Using Sensei, the company has created tools that automate designers’ tasks, like cropping photos and designing web pages.

Instead of a designer deciding on layout, colors, photos, and photo sizes, the software platform automatically analyzes all the input and recommends design elements to the user. Using image recognition techniques, basic photo editing like cropping is automated, and an AI makes design recommendations for the pages. Using photos already in the client’s database (and the metadata attached to those photos), the AI–which, again, is layered into Adobe’s CMS–makes recommendations on elements to include and customizations for the designer to make.

Should designers be worried? I guess not. Machine learning helps automate tedious and boring tasks. The vast majority of graphic designers don’t have to worry about algorithms stealing their jobs.

While machine learning is great for understanding large data sets and making recommendations, it’s awful at analyzing subjective things such as taste.

The problem of gender bias in the depiction of activities such as cooking and sports in images

The challenge is to teach machines to understand the world without reproducing prejudices. Researchers from the University of Virginia have identified that intelligent systems link the action of cooking in images much more strongly to women than to men.

Gender bias test with artificial intelligence for the action “cooking”: women are more strongly associated with it, even when there is a man in the image.

Just as search engines (Google being the prime example) do not operate under absolute neutrality, free of any bias or prejudice, machines equipped with artificial intelligence and trained to identify and categorize what they see in photos also do not work in a neutral way.
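The kind of bias the referenced paper measures can be illustrated with simple counts: the fraction of “cooking” images whose annotated agent is a woman, in the training data versus in a model’s predictions. A toy sketch with invented counts (not the paper’s data, nor its corpus-level constraint method):

```python
# Toy illustration of dataset gender bias and bias amplification for an
# activity such as "cooking". Counts are invented; the paper measures this on
# real annotations and then checks whether a trained model amplifies the ratio.
train_counts = {("cooking", "woman"): 66, ("cooking", "man"): 34}
pred_counts  = {("cooking", "woman"): 84, ("cooking", "man"): 16}

def bias(counts, activity):
    w = counts[(activity, "woman")]
    m = counts[(activity, "man")]
    return w / (w + m)

b_data = bias(train_counts, "cooking")
b_model = bias(pred_counts, "cooking")
print(f"dataset bias: {b_data:.2f}, model bias: {b_model:.2f}, "
      f"amplification: {b_model - b_data:+.2f}")
```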

Article on Wired.

Article on Nexo (Portuguese)

Reference: Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. “Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints.” arXiv:1707.09457 [Cs, Stat], July 28, 2017. http://arxiv.org/abs/1707.09457.

A Computer Vision and ML approach to understand urban changes

By comparing 1.6 million pairs of photos taken seven years apart, researchers from MIT’s Collective Learning Group used a new computer vision system to quantify the physical improvement or deterioration of neighborhoods in five American cities, in an attempt to identify factors that predict urban change.

A large positive Streetchange value is typically indicative of major new construction (top row). A large negative Streetchange value is typically indicative of abandoned or demolished housing (bottom row).

The project is called Streetchange. An article introducing the project can be found here.
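Conceptually, a Streetchange-style score is the difference between a model-predicted streetscape-quality score for the same location at two points in time. A minimal sketch, where score_image is a placeholder rather than the authors’ trained model:

```python
# Conceptual sketch of a Streetchange-style measure: the change in a
# model-predicted "streetscape quality" score for the same location across
# two points in time. score_image() is a placeholder, not the published model.
def score_image(image_path):
    """Hypothetical model scoring the physical quality of a street scene."""
    return 0.0  # placeholder

def streetchange(image_earlier, image_later):
    return score_image(image_later) - score_image(image_earlier)

# Large positive values -> improvement (e.g. new construction);
# large negative values -> deterioration (e.g. demolished housing).
print(streetchange("location42_2007.jpg", "location42_2014.jpg"))
```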

Reference: Naik, Nikhil, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, and César A. Hidalgo. “Computer Vision Uncovers Predictors of Physical Urban Change.” Proceedings of the National Academy of Sciences 114, no. 29 (July 18, 2017): 7571–76. doi:10.1073/pnas.1619003114.

DH2017 – Computer Vision in DH workshop (lightning talks part 1)

To facilitate the exchange of current ongoing work, projects or plans, the workshop allowed participants to give very short lightning talks and project pitches of max 5 minutes.

Part 1
Chair: Martijn Kleppe (National Library of the Netherlands)

1. How can Caffe be used to segment historical images into different categories?
Thomas Smits (Radboud University)

Number of images by identified categories.
  • Challenge: how to attack the “unknown” category and make data more discoverable?

2. The Analysis Of Colors By Means Of Contrasts In Movies 
Niels Walkowski (BBAW / KU Leuven)

  • Slides 
  • Cinemetrics, Colour Analysis & Digital Humanities:
    • Brodbeck (2011) “Cinemetrics”: the project is about measuring and visualizing movie data in order to reveal the characteristics of films and to create a visual “fingerprint” for them. Information such as the editing structure, color, speech or motion is extracted, analyzed and transformed into graphic representations, so that movies can be seen as a whole and easily interpreted or compared side by side (a minimal color-fingerprint sketch follows this list).

      Film Data Visualization
    • Burghardt (2016) “Movieanalyzer”
Movieanalyzer (2016)
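As a minimal illustration of a color fingerprint in this spirit, the sketch below samples frames from a video with OpenCV, averages their color, and lays the averages out as stripes. The file name is a placeholder, and this is not the Cinemetrics or Movieanalyzer code.

```python
# Minimal "movie barcode" sketch: average color of sampled frames, laid out
# as vertical stripes. Assumes OpenCV (pip install opencv-python);
# "film.mp4" is a placeholder path.
import cv2
import numpy as np

cap = cv2.VideoCapture("film.mp4")
stripes = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 24 == 0:                      # sample roughly once per second
        stripes.append(frame.mean(axis=(0, 1)))  # average BGR color of the frame
    frame_idx += 1
cap.release()

if stripes:
    barcode = np.tile(np.array(stripes, dtype=np.uint8)[None, :, :], (100, 1, 1))
    cv2.imwrite("fingerprint.png", barcode)      # one colored stripe per sample
```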

3. New project announcement INSIGHT: Intelligent Neural Networks as Integrated Heritage Tools
Mike Kestemont (Universiteit Antwerpen)

  • Slides
  • Data from two museums: the Royal Museums of Fine Arts of Belgium and the Royal Museums of Art and History;
  • Research opportunity: how can multimodal representation learning (NLP + vision) help to organize and explore this data?
  • Transfer knowledge approach:
    • Large players in the field have massive datasets;
    • How easily can we transfer knowledge from large to small collections? E.g. automatic dating or object description (see the fine-tuning sketch after this list);
  • Partner up: the Departments of Literature and Linguistics (Faculty of Arts and Philosophy) of the University of Antwerp and the Montefiore Institute (Faculty of Applied Sciences) of the University of Liège are seeking to fill two full-time (100%) vacancies for Doctoral Grants in the area of machine/deep learning, language technology, and/or computer vision for enriching heritage collections. More information.
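A minimal sketch of the transfer idea: take an ImageNet-pretrained network, freeze its backbone, and fine-tune only a new classification head on a small heritage collection. The number of classes and the data are placeholders; this is not the INSIGHT project’s code.

```python
# Transfer-learning sketch: reuse an ImageNet-pretrained ResNet and train only
# a new classification head on a small collection. Data and class count are
# placeholders standing in for a small museum dataset.
import torch
import torch.nn as nn
from torchvision import models

num_heritage_classes = 12                       # placeholder
model = models.resnet18(pretrained=True)        # knowledge from a large dataset
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_heritage_classes)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for real images/labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_heritage_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```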

4. Introduction of CODH computer vision and machine learning datasets such as old Japanese books and characters
Asanobu KITAMOTO (CODH – National Institute of Informatics)

  • Slides;
  • Center for Open Data in the Humanities (CODH);
  • It’s a research center in Tokyo, Japan, officially launched on April 1, 2017;
  • Scope: (1) humanities research using information technology and (2) other fields of research using humanities data.
  • Released datasets:
    • Dataset of Pre-Modern Japanese Text (PMJT): pre-modern Japanese texts owned by the National Institute of Japanese Literature, released as open data in both image and text form. In addition, some texts have description, transcription, and tagging data.

      Pre-Modern Japanese Text Dataset: currently 701 books
    • PMJT Character Shapes;
    • IIIF Curation Viewer

      Curation Viewer
  • CODH is looking for a project researcher who is interested in applying computer vision to humanities data. Contact: http://codh.rois.ac.jp/recruit/

5. Introduction to the new South African Centre for Digital Language Resources (SADiLaR )
Juan Steyn

  • Slides;
  • SADiLaR is a new research infrastructure set up by the Department of Science and Technology (DST) forming part of the new South African Research Infrastructure Roadmap (SARIR).
  • Officially launched in October 2016;
  • SADiLaR runs two programs:
    • A Digitisation program, which entails the systematic creation of relevant digital text, speech and multi-modal resources related to all official languages of South Africa, as well as the development of appropriate natural language processing software tools for research and development purposes;
    • A Digital Humanities program, which facilitates research capacity building by promoting and supporting the use of digital data and innovative methodological approaches within the Humanities and Social Sciences. (See http://www.digitalhumanities.org.za)

DH2017 – Computer Vision in DH workshop (Papers – Third Block)

Third block: Deep Learning
Chair: Thomas Smits (Radboud University)

6) Aligning Images and Text in a Digital Library (Jack Hessel & David Mimno)

Abstract
Slides
Website David Mimno
Website Jack Hessel

Problem: correspondence between text and images.
  • In this work, the researchers train machine learning algorithms to match images from book scans with the text in the pages surrounding those images.
  • Using 400K images collected from 65K volumes published between the 14th and 20th centuries and released to the public domain by the British Library, they build information retrieval systems capable of performing cross-modal retrieval, i.e., searching images using text, and vice-versa.
  • Previous multi-modal work:
    • Datasets: Microsoft Common Objects in Context (COCO) and Flickr (images with user-provided tags);
    • Tasks: Cross-modal information retrieval (ImageCLEF) and Caption search / generation
  • Project Goals:
    • Use text to provide context for the images we see in digital libraries, and as a noisy “label” for computer vision tasks
    • Use images to provide grounding for text.
  • Why is this hard? Most relationships between text and images are weakly aligned, that is, very vague. A caption is an example of strong alignment between text and images; an article is an example of weak alignment.
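Once images and text are embedded in a shared vector space (however that space is learned), cross-modal retrieval reduces to nearest-neighbor search. A toy sketch with random stand-in vectors, not the authors’ models:

```python
# Toy cross-modal retrieval: rank page images for a text query by cosine
# similarity in a shared embedding space. Vectors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
image_vecs = rng.normal(size=(1000, 64))        # one vector per page image
query_vec = rng.normal(size=64)                 # embedded text query

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sims = normalize(image_vecs) @ normalize(query_vec)
best = np.argsort(sims)[::-1][:5]               # top-5 most similar images
print("retrieved image ids:", best)
```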

7) Visual Trends in Dutch Newspaper Advertisements (Melvin Wevers & Juliette Lonij)

Abstract
Slides

Live Demo of SIAMESE: Similar advertisement search.
  • The context of advertisements for historical research:
    • “insight into the ideals and aspirations of past realities …”
    • “show the state of technology, the social functions of products, and provide information on the society in which a product was sold” (Marchand, 1985).
  • Research question: How can we combine non-textual information with textual information to study trends in advertisements?
  • Data: ~1.6M advertisements from two Dutch national newspapers, Algemeen Handelsblad and NRC Handelsblad, between 1948 and 1995;
  • Metadata: title, date, newspaper, size, position (x, y), OCR text, page number, total number of pages.
  • Approach: Visual Similarity:
    • Group images together based on visual cues;
    • Demo: SIAMESE: SImilar AdvertiseMEnt SEarch;
    • Approximate nearest neighbors in the penultimate layer of an ImageNet-trained Inception model (see the sketch after this list).
  • Final remarks:
    • Object detection and visual similarity approach offer trends on different layers, similar to close and distant reading;
    • Visual Similarity is not always Conceptual Similarity;
    • Combination of text/semantic and visual similarity as a way to find related advertisements.
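A minimal sketch of the approach described above: represent each advertisement by the penultimate-layer activations of an ImageNet-trained Inception network and search for similar ads by nearest neighbors. Brute-force search stands in for the approximate index and the image batch is random placeholder data, so this is not the SIAMESE implementation itself.

```python
# Sketch: penultimate-layer Inception features + nearest-neighbor search.
# Random tensors stand in for advertisement images; exact (brute-force) search
# stands in for an approximate nearest-neighbor index.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.neighbors import NearestNeighbors

inception = models.inception_v3(pretrained=True)
inception.fc = nn.Identity()                  # keep 2048-d penultimate features
inception.eval()

ads = torch.randn(16, 3, 299, 299)            # placeholder for ad images
with torch.no_grad():
    features = inception(ads).numpy()         # (16, 2048) feature vectors

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(features)
_, neighbors = index.kneighbors(features[:1])  # ads most similar to ad 0
print(neighbors)
```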

8) Deep Learning Tools for Foreground-Aware Analysis of Film Colors (Barbara Flueckiger, Noyan Evirgen, Enrique G. Paredes, Rafael Ballester-Ripoll, Renato Pajarola)

The research project FilmColors, funded by an Advanced Grant of the European Research Council, aims at a systematic investigation into the relationship between film color technologies and aesthetics.

Initially, the research team analyzed a large group of 400 films from 1895 to 1995 with a protocol that consists of about 600 items per segment to identify stylistic and aesthetic patterns of color in film.

This human-based approach is now being extended by advanced software that is able to detect the figure-ground configuration and to plot the results into corresponding color schemes based on a perceptually uniform color space (see Flueckiger 2011 and Flueckiger 2017, in press).
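As a minimal illustration of deriving a color scheme in a perceptually uniform space, the sketch below converts a frame to CIELAB and clusters its pixels into a small palette. The frame path is a placeholder, and this is not the FilmColors software, which additionally separates figure from ground.

```python
# Sketch: dominant-color palette of a film frame, computed in CIELAB
# (a perceptually uniform color space). Assumes scikit-image and scikit-learn;
# "frame.png" is a placeholder path for an RGB frame.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

frame = io.imread("frame.png")[:, :, :3] / 255.0    # drop alpha if present
lab = color.rgb2lab(frame).reshape(-1, 3)           # pixels in CIELAB space

palette_lab = KMeans(n_clusters=5, n_init=10).fit(lab).cluster_centers_
palette_rgb = color.lab2rgb(palette_lab[None, :, :])[0]  # back to RGB for display
print(np.round(palette_rgb, 2))                     # 5 dominant colors
```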

ERC Advanced Grant FilmColors

DH2017 – Computer Vision in DH workshop (Papers – Second Block)

Second block: Tools
Chair: Melvin Wevers (Utrecht University)

4) A Web-Based Interface for Art Historical Research (Sabine Lang & Bjorn Ommer)

Abstract
Slides
Computer Vision Group (University of Heidelberg)

  • Area: art history <-> computer vision
  • First experiment: Can computers propel the understanding and reconstruction of drawing processes?
  • Goal: Study production process. Understand the types and degrees of transformation between an original piece of art and its reproductions.
  • Experiment 2: Can computers help with the analysis of large image corpora, e.g. find gestures?
  • Goal: Find visual similarities and do formal analysis.
  • Central questions: which gestures can we identify? Do there exist varying types of one gesture?
  • Results: Visuelle Bildsuche (interface for art historical research)
Visuelle Bildsuche – Interface start screen. Data collection Sachsenspiegel (c1220)
  • Interesting and potential feature: in the image, you can mark up areas and find other images with visual similarities:
Search results with visual similarities based on selected bounding boxes
Bautista, Miguel A., Artsiom Sanakoyeu, Ekaterina Sutter, and Björn Ommer. “CliqueCNN: Deep Unsupervised Exemplar Learning.” arXiv:1608.08792 [Cs], August 31, 2016. http://arxiv.org/abs/1608.08792.

5) The Media Ecology Project’s Semantic Annotation Tool and Knight Prototype Grant (Mark Williams, John Bell, Dimitrios Latsis, Lorenzo Torresani)

Abstract
Slides
Media Ecology Project (Dartmouth)

The Semantic Annotation Tool (SAT)

SAT is a drop-in module that facilitates the creation and sharing of time-based media annotations on the Web.

Knight News Challenge Prototype Grant

Knight Foundation has awarded a Prototype Grant for Media Innovation to The Media Ecology Project (MEP) and Prof. Lorenzo Torresani’s Visual Learning Group at Dartmouth, in conjunction with The Internet Archive and the VEMI Lab at The University of Maine.

“Unlocking Film Libraries for Discovery and Search” will apply existing software for algorithmic object, action, and speech recognition to a varied collection of 100 educational films held by the Internet Archive and Dartmouth Library. We will evaluate the resulting data to plan future multimodal metadata generation tools that improve video discovery and accessibility in libraries.