ArtScope: a grid-based visualization for SFMOMA's cultural collection

Existing projects in visualization-based interfaces (interfaces that enable navigation through visualization) for cultural collections usually focus on making their content more accessible to both specialists and the public.

Possibly one of the first attempts to explore new forms of knowledge discovery in cultural collections was SFMOMA's ArtScope, developed by Stamen Design in 2007 (now decommissioned). The interface allows users to explore more than 6,000 artworks in a grid-based, zoomable visualization. Navigating the collection follows a visualization-first paradigm which is mainly exploratory (although the interface enables keyword search, the visualization canvas is clearly the protagonist). The artworks' thumbnails are visually organized by when they were purchased by the museum. The user can pan the canvas by dragging it, and a lens serves as a selection tool, magnifying the selected work and revealing detailed information about it.
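ArtScope's implementation was not published, so purely as an illustration (the artwork data and the `columns` parameter below are hypothetical, not SFMOMA's actual layout logic), a chronological grid arrangement like the one described can be derived by sorting works by acquisition date and mapping each rank to a row/column pair:

```python
# Illustrative sketch: arrange artworks in a grid ordered by acquisition date.
from datetime import date

def grid_positions(artworks, columns=4):
    """Sort works by acquisition date and map each to (row, col) in a grid."""
    ordered = sorted(artworks, key=lambda w: w["acquired"])
    return {
        w["title"]: divmod(i, columns)  # rank -> (row, column)
        for i, w in enumerate(ordered)
    }

artworks = [
    {"title": "Work A", "acquired": date(1998, 5, 1)},
    {"title": "Work B", "acquired": date(1972, 3, 9)},
    {"title": "Work C", "acquired": date(2005, 11, 20)},
]
print(grid_positions(artworks, columns=2))
# {'Work B': (0, 0), 'Work A': (0, 1), 'Work C': (1, 0)}
```

Organizing by a single sort key is what makes the canvas easy to render but, as noted below, also what limits it: any other dimension would require re-sorting the whole grid.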

ArtScope is an attractive interface that offers the user an overview of the size and content of SFMOMA's collection. However, the artworks in the canvas are organized only by time of acquisition, a feature that is not very informative for users (except perhaps for museum staff). Other dimensions (authorship, creation date, technique, subject, etc.) cannot be filtered or used to visually organize the structure of the canvas.

The video below illustrates the interface navigation:

Gugelmann Galaxy

Gugelmann Galaxy is an interactive demo by Mathias Bernhard exploring items from the Gugelmann Collection, a group of 2,336 works by the Schweizer Kleinmeister – Swiss 18th-century masters. Gugelmann Galaxy is built on Three.js, a lightweight JavaScript library for creating animated 3D visualizations in the browser using WebGL.

The images are grouped according to specific parameters that are automatically calculated by image analysis and text analysis from metadata. A high-dimensional space is then projected onto a 3D space, while preserving topological neighborhoods between images in the original space. More explanation about the dimensionality reduction can be read here.
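As a hedged sketch of the general idea only (the feature vectors below are synthetic, and PCA via SVD stands in for whatever reduction method the project actually uses), projecting high-dimensional image descriptors down to 3D positions looks like this:

```python
# Sketch of projecting high-dimensional image features to 3D coordinates.
# Features are synthetic; Gugelmann Galaxy's actual pipeline may differ.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))  # 200 items, 64-dim descriptors

# Centre the data and take the top-3 principal components (PCA via SVD).
centred = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords3d = centred @ vt[:3].T  # each row is an (x, y, z) position

print(coords3d.shape)  # (200, 3)
```

Neighborhood-preserving methods such as t-SNE pursue the same goal (nearby items in feature space stay nearby in 3D) but optimize for local structure rather than global variance.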

The user interface allows four types of image arrangement: by color distribution, by technique, by description, and by composition. As the mouse hovers over an item, an info box with some metadata is displayed on the left. The user can also rotate, zoom, and pan.

The author wrote on his site:

The project renounces coming up with a rigid ontology and forcing the items to fit into premade categories. It rather sees clusters emerge from attributes contained in the images and texts themselves. Groupings can be derived but are not dictated.
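A minimal illustration of "groupings derived, not dictated": a clustering step finds structure in feature vectors without any premade categories. The data and the choice of k = 2 below are hypothetical, not the project's actual pipeline:

```python
# Minimal k-means sketch: clusters emerge from the attribute vectors
# themselves rather than from a predefined ontology.
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic "attribute" clouds, e.g. colour or texture descriptors.
points = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),
    rng.normal(loc=3.0, scale=0.3, size=(50, 2)),
])

centroids = points[[0, 50]].copy()  # illustrative init: one seed per cloud
for _ in range(10):  # Lloyd's algorithm: assign points, recompute centres
    labels = np.argmin(
        np.linalg.norm(points[:, None] - centroids[None], axis=2), axis=1
    )
    centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(centroids)  # the two centres land near (0, 0) and (3, 3)
```

The two groupings fall out of the geometry of the data; nothing in the code names or predefines them.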


My presentation at HDRio2018

During the paper session "Social networks and visualizations", held on April 11 at the HDRio2018 congress, I presented the work "Perspectivas para integração do Design nas Humanidades Digitais frente ao desafio da análise de artefatos visuais" ("Perspectives for integrating Design in Digital Humanities in the face of the challenge of visual artifacts analysis").

In this work, I outline initial considerations of a broader, ongoing research effort that reflects on the contributions the field of Design can offer to the conception of a graphical user interface that, together with computer vision and machine learning technologies, supports browsing and exploring large collections of images.

I believe my contribution raises three main discussions for the field of Digital Humanities:

  1. The investigation of large collections of images (photographs, paintings, illustrations, videos, GIFs, etc.) using image recognition techniques through a Machine Learning approach;
  2. The valorization of texts and media produced on social networks as valid sources of cultural heritage for Digital Humanities studies;
  3. Integration of Design principles and methodologies (HCI and visualization techniques) in the development of tools to retrieve, explore and visualize large image collections.

Slides from this presentation can be accessed here (Portuguese only).

About machine capabilities versus human sensitivities

For Recognition, an artificial intelligence program that associates Tate's masterpieces with news photographs provided by Reuters, there are visual or thematic similarities between the photo of a woman whose face bears the phrase #foratemer ("Temer out"), taken during a protest against the constitutional amendment known as PEC 55, and the portrait of a seventeenth-century aristocrat in costumes that denote his sovereignty and authority. In times when intelligent, thinking machines such as chatbots are a widely discussed topic, I wonder whether the algorithms that created the dialogue between these two images were aware of the conflicting, but no less interesting, relationship between resistance and power established between them.

Visualizing time, texture and themes in historical drawings

Past Visions is a collection of historical drawings visualized in a thematic and temporal arrangement. The interface highlights general trends in the overall collection and gives access to rich details of individual items.

The case study examines the potential of visualization when applied to, and developed for, cultural heritage collections. It specifically explores how techniques aimed at visualizing the quantitative structure of a collection can be coupled with a more qualitative mode that allows for detailed examination of the artifacts and their contexts by displaying high-resolution views of digitized cultural objects with detailed art historical research findings.

Past Visions is a research project by the Urban Complexity Lab at the University of Applied Sciences Potsdam.

Reference: “Past Visions and Reconciling Views: Visualizing Time, Texture and Themes in Cultural Collections.” ResearchGate. Accessed March 8, 2018.

Visualizing cultural collections

Browsing the content from the Information Plus Conference (2016 edition), I came across a really interesting presentation on the use of graphical user interfaces and data visualization to support the exploration of large-scale digital cultural heritage.

One View is Not Enough: High-level Visualizations of Large Cultural Collections is a contribution by the Urban Complexity Lab, from the University of Applied Sciences Potsdam. Check the talk by Marian Dörk:

As many cultural heritage institutions, such as museums, archives, and libraries, digitize their assets, there is a pressing question: how can we give access to these large-scale and complex inventories? How can we present them in a way that lets people draw meaning from them, get inspired and entertained, and maybe even educated?

The Urban Complexity Lab tackles this open problem by investigating and developing graphical user interfaces and different kinds of data visualizations that explore cultural collections in ways that reveal high-level patterns and relationships.

In this specific talk, Marian presents two projects conducted at the Lab. The first, DDB visualized, is a project in partnership with the Deutsche Digitale Bibliothek. Four interactive visualizations make the vast extent of the German Digital Library visible and explorable. Periods, places and persons are three of the categories, while keywords provide links to browsable pages of the library itself.


The second, GEI-Digital, is a project in partnership with the Georg Eckert Institute. This data dossier provides multi-faceted perspectives on GEI-Digital, a digital library of historical schoolbooks created and maintained by the Georg Eckert Institute for International Textbook Research.