It’s stressful out there; maybe some of you could use a little levity right now. My old colleagues at the Digital Public Library of America, along with our international friends at other expansive digital libraries, including Europeana, Trove, and DigitalNZ, are running the fun GIF IT UP competition again this year. Contestants take open access digitized materials from libraries, archives, and museums, and turn them into whimsical GIFs.
This year Japan Search, a relatively new national aggregator of digitized library/museum content, joins in. Here’s the original, gorgeous “Snow at Shinkawabata, Handa, Bishu” by Kawase Hasui (with kudos to the Tokyo Fuji Art Museum for CC0-ing the digital image):
And a clever, peaceful GIF created from that artwork:
The recently launched Atlascope Boston provides a movable window into the past by combining over a hundred highly detailed atlases of the city, old and new, allowing you to see change over time:
Back when the New York Public Library had the creative NYPL Labs group, they were building toward something like this with their NYC Space/Time Directory.
And if you can imagine combining this with archival materials and other documents and data, you can see where we are headed with the Boston Research Center.
I’ve been writing Humane Ingenuity long enough that there have been developments on topics from earlier issues of the newsletter.
First, there’s a terrific paper out from the Carnegie Mellon University Libraries on the focus of HI3: AI in the archives. “CAMPI: Computer-Aided Metadata generation for Photo archives Initiative,” by Julia Corrin, Emily Davis, Matt Lincoln, and Scott Weingart, is brilliant and very promising. The approach is similar to the one I speculated about, that a combination of computer vision and human guidance could lead to a vast improvement in how we describe and search through large collections:
The ultimate goal of our prototype was to leverage these new visual similarity capabilities with the existing archival structure and description to rapidly streamline how editors created item-level metadata in the form of content tagging. Editors would select a tag to work on and then identify a starting seed photograph by searching through the existing metadata for a representative picture of, say, “Football players”, then use visual similarity results based on that photograph to identify other photos across the collection that needed the same tag.
The machine-aided clustering of similar photos creates a foundation for quick human-led processing—the best of both worlds.
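That seed-and-propagate workflow can be sketched in a few lines. The sketch below is purely illustrative, not CAMPI's actual code: it assumes each photo already has a feature vector from some computer-vision model, and simply ranks the collection by cosine similarity to a seed photo so a human editor can review the top candidates for a tag.

```python
import numpy as np

def rank_by_similarity(embeddings, seed_index, top_k=5):
    """Rank photos by cosine similarity to a seed photo's embedding.

    embeddings: (n_photos, dim) array of feature vectors (assumed to come
    from a computer-vision model; hypothetical here).
    Returns indices of the top_k most similar photos, seed excluded.
    """
    seed = embeddings[seed_index]
    # Normalize rows so dot products equal cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = unit @ (seed / np.linalg.norm(seed))
    # Sort by descending similarity and drop the seed itself.
    order = np.argsort(-scores)
    return [int(i) for i in order if i != seed_index][:top_k]

# Toy 2-D "embeddings": photos 0 and 1 point the same way, photo 2 differs.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(rank_by_similarity(emb, seed_index=0, top_k=2))  # → [1, 2]
```

In a real pipeline the editor would confirm or reject each candidate, and every confirmed photo could serve as a new seed, letting one tag fan out across the collection quickly while a human stays in the loop.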
In HI24, I explored the idea that using lower resolution digital environments might provide—surprisingly—a greater feeling of connection online than verisimilitude. My university adopted this idea with its 8-bit zone for student groups:
Andrew Hadro, brother of HIer Josh Hadro and a saxophonist, kindly recorded a demonstration of the Playasax after it was noted in HI27:
And finally, Brian Foo, 2020 Innovator in Residence at the Library of Congress (see HI10), has made significant progress on his software for inserting public domain sound samples into music tracks: