Open Audio Weekend Hackathon: Exploring Audio Accessibility for the Public Good
On June 25 and 26, the Library welcomed developers, designers, data scientists, audio producers, and others to a two-day hackathon designed to advance the cause of audio accessibility and explore future uses of media archives online. This event built upon and extended our work on Together We Listen, a project to crowdsource corrections to computer-generated transcripts, generously supported by a Knight Foundation prototype grant awarded to NYPL and our incredible partners The Moth.
Kicking Off the Day
Open Audio Weekend attracted a diverse group of participants, with backgrounds in libraries, oral history, software development and more.
All Open Audio Weekend participants received t-shirts designed by NYPL's Brian Foo. The design is five different representations of the spoken phrase "Open Audio Weekend" which Brian stitched together from audio clips in the NYPL Community Oral History project. The first representation is the words, the second is the phonetic transcription, the third is musical notation based on the pitches and rhythms of the speakers, the fourth is the pitch analysis based on raw frequency data of the speakers, and the last is the waveform or amplitude of the speech.
We shared an audio mashup (created by Tim Lou Ly) of Moth stories and excerpts from NYPL’s Community Oral History Project.
Open Audio Weekend Themes
Participants formed small groups around the following prompts, each designed to focus work on a broad theme:
- Discover: What new experiences can we make around discovery of audio?
- Find: How can we make it easier to search audio?
- Listen: What are some meaningful ways we can augment the experience of listening to audio?
- Share: How can we make it easier to share audio?
- Learn: What can we learn from audio? How can audio be used in an educational context?
- Engage: How can we engage a community through audio?
- Access: How can we make audio collections more usable for people with disabilities?
The projects presented during our shareout on Sunday afternoon represented a rich array of prototypes and creative imaginings of the future of audio accessibility. The full list of projects is available on the GitHub repo for the event, and here are the highlights!
Crowdscribe is a proof of concept for a Chrome extension that supports crowdsourced transcriptions. Users can request transcriptions of media using the extension, and users who are on the same webpage at the same time will get a notification to help transcribe media on the page. This prototype raises awareness around accessibility, allows for the crowdsourcing of transcription, and is designed with live events in mind. By targeting live events, the extension builds upon existing communities and audiences.
Have you ever wondered what an oral history would sound like overlaid on top of hip hop beats? Check out the BPL Sampler, which remixes voices excerpted from the Brooklyn Public Library Our Streets, Our Stories project.
P.I.T.C.H.Y. D.A.W.G. (Perfecting Interactive Technology for Content Heard by You Despite Awkward Word Groupings)
In addition to being an imaginatively named project, P.I.T.C.H.Y. D.A.W.G. combines the experience of listening to audio with exploring related media, offering a choice of three modalities: Audio Only, Highlights, and Full Experience.
Mapping place names from oral history collections. To build this prototype, the group ran transcripts through the Stanford Named Entity Recognizer to extract place names, which were then plotted on a map, creating a cartographic way of browsing audio and transcriptions.
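The team's own code isn't shown here, but the extraction step can be sketched. Stanford NER's default text output tags each token with a slash-joined label (e.g. `Harlem/LOCATION`); the snippet below is a minimal, assumed reconstruction that collects `LOCATION` entities from that format, merging consecutive tagged tokens into multi-word place names. (Plotting them would additionally require geocoding each name to coordinates, a step not covered here.)

```python
from itertools import groupby

def extract_locations(tagged_text):
    """Collect place names from Stanford NER's slash-tagged output,
    merging runs of consecutive LOCATION-tagged tokens into one name."""
    tokens = [t.rsplit("/", 1) for t in tagged_text.split()]
    places = []
    for tag, group in groupby(tokens, key=lambda pair: pair[1]):
        if tag == "LOCATION":
            places.append(" ".join(word for word, _ in group))
    return places

# Illustrative slash-tagged NER output (not a real transcript excerpt).
tagged = "I/O grew/O up/O in/O Harlem/LOCATION before/O moving/O to/O New/LOCATION York/LOCATION"
print(extract_locations(tagged))  # → ['Harlem', 'New York']
```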
InstaBurns is an experiment in auto-generating common terms and their frequency from transcripts in order to explore the relationship of terms within and across audio files. The InstaBurns platform also uses significant terms to automatically generate a slideshow of related images using the Google Image API.
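The term-frequency step InstaBurns relies on can be approximated in a few lines; this is a generic sketch of counting significant terms in a transcript, not the project's actual implementation, and the stopword list here is a deliberately tiny placeholder.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "was", "we", "i", "it"}

def common_terms(transcript, n=5):
    """Return the n most frequent non-stopword terms in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

transcript = ("We opened the bakery in 1972, and the bakery "
              "became the heart of the neighborhood.")
print(common_terms(transcript, 3))
# → [('bakery', 2), ('opened', 1), ('became', 1)]
```

The resulting terms could then seed an image search to build the slideshow the project describes.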
A-to-V is a one-stop central database where collectors of oral histories provide searchable information about their audio files and make those files directly available to users.
A project modeling potential engagement and reuse activities around oral history collections. Building on NYPL's Open Transcript Editor, this model would allow users to clip two minutes of an oral history and record their own complementary response to the clip, which would be ingested back into the larger collection.
Thanks to all the participants for making our weekend-long hackathon a great success. Thanks also go to our great Together We Listen partners PopUp Archive and The Moth; to contributing partners Gimlet Media, Buzzfeed Audio, Fresh Air, WNYC Archives, Library of Congress, Brooklyn Public Library, PRI's The World in Words, and the Columbia University Master of Arts in Oral History Program; and to the Knight Foundation, whose support made all of this possible.
More scenes from the event below!