
UC Berkeley Digital Humanities Fair

April 21, 2021 - 1:00pm

Join the UC Berkeley Digital Humanities Fair!

DH Fair 2021
Wednesday, April 21st, 2021
1:00-4:00pm

Online

The DH Fair is an annual event that offers the UC Berkeley community the opportunity to share projects at various stages of development, receive invaluable feedback from peers, and reflect on the field more broadly. This year's events include a keynote speech from Roopika Risam on Digital Humanities for Social Justice, a panel discussion with Tim Tangherlini and Lisa Wymore on computation for analyzing and choreographing dance in the K-pop and folk music genres, and lightning talks.

DH Fair Registration
http://ucblib.link/dh-fair-registration

More Information
https://digitalhumanities.berkeley.edu/dh-fair-2021

Propose a Lightning Talk
http://ucblib.link/dh-fair-proposal


Program

1pm: Keynote: Roopika Risam
Digital Humanities for Social Justice
Roopika Risam, Chair of Secondary and Higher Education and Associate Professor of Education and English at Salem State University

In this talk, Professor Roopika Risam will discuss trends in approaches to digital humanities that foreground social justice. She will consider the practices that are critical to using digital humanities to intervene in the gaps and omissions of the digital cultural archive born from the history of colonialism. Additionally, she will discuss the project Torn Apart/Separados as an example of what digital humanities makes possible for social justice.

Dr. Roopika Risam is Chair of Secondary and Higher Education and Associate Professor of English and Education at Salem State University. She is the author of New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy (Northwestern UP, 2018) and co-editor of Intersectionality in Digital Humanities (Arc Humanities Press, 2019), South Asian Digital Humanities (Routledge, 2020), and The Digital Black Atlantic (Debates in the Digital Humanities series, University of Minnesota Press, 2021).


2pm: Lightning Talks

Learn about current work in digital humanities at UC Berkeley.


3pm: Panel: Dancing with Computers: Layerings of Collective Embodied Knowing within our Machines
A panel discussion with Timothy R. Tangherlini and Lisa Wymore, moderated by Claudia von Vacano

Dancing in the Fire: Toward a Choreographic Search Engine
Timothy R. Tangherlini, Professor, Department of Scandinavian, University of California, Berkeley
Collaborative work with Peter M. Broadwell (Stanford Library)

Critics have long noted the strong visual aspects of K-pop, with the videos for newly released songs garnering millions of views within a very short time span. A key feature of many K-pop videos is the dancing. Although many official videos are not solely dance focused, incorporating aspects of visual storytelling, nearly all K-pop videos include some form of dance. In addition to the "main" video for a K-pop release, it has become common practice to release a dance video, or dance rehearsal video, focusing exclusively on the dances. These videos allow fans to learn and practice the dance, thereby strengthening the kinesthetic connection between fans and their idols. At the same time, they afford an opportunity to explore the "dance vocabulary" of K-pop. While well-known K-pop choreographers work with the idols to create their dances, there is little documentation of these dances beyond the dance videos themselves. In our work, we develop a series of methods that (a) identify dance sequences in K-pop videos, whether or not they are dedicated dance videos, (b) provide a series of classifiers for navigating a large-scale K-pop video corpus, and (c) apply deep learning methods to identify dancers and their body positions. Taken together, these approaches pave the way for the development of a macroscope for the study of K-pop videos, allowing researchers to identify patterns in the K-pop space, explore dynamic change in features such as color space, or interrogate differences in the visual representation of male and female performers at an aggregate scale. Importantly, as pose estimation has become more accurate, these methods allow us to begin inferring the dance vocabulary of K-pop and tracing transcultural choreographic flows.
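The abstract's first step, identifying dance sequences in videos that are not exclusively dance focused, can be sketched as a simple segmentation over per-frame pose-detector output. The sketch below is purely illustrative, not the authors' method: `find_dance_segments` and the threshold parameters are hypothetical names, and `pose_counts` stands in for the number of full-body skeletons a pose estimator detects in each frame.

```python
# Hypothetical sketch: flag likely dance segments by finding runs of frames
# in which a pose estimator detects multiple full-body skeletons at once.
# (Illustrative only; names and thresholds are assumptions, not the talk's.)

def find_dance_segments(pose_counts, min_dancers=2, min_len=3):
    """Return (start, end) frame ranges where at least `min_dancers`
    skeletons are detected for at least `min_len` consecutive frames."""
    segments, start = [], None
    for i, n in enumerate(pose_counts):
        if n >= min_dancers:
            if start is None:
                start = i  # a candidate segment begins here
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    # close out a segment that runs to the final frame
    if start is not None and len(pose_counts) - start >= min_len:
        segments.append((start, len(pose_counts)))
    return segments

# Synthetic per-frame detector counts: a group dance spans frames 2-7.
counts = [0, 1, 4, 4, 5, 4, 4, 4, 1, 0]
print(find_dance_segments(counts))  # [(2, 8)]
```

In practice the per-frame counts would come from a pose-estimation model run over decoded video frames, with smoothing to tolerate momentary detection dropouts.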

What Do Computers Know about Making Dances?
Lisa Wymore, Professor, Department of Theater, Dance, and Performance Studies, University of California, Berkeley

Dance makers can choose to imbue our machines with embodied knowledge through a variety of methods, from motion capture to voice detection, image recognition, and motion tracking. What happens when we ask our computers to co-create a piece of choreography with this embodied information? Can we find innovative and unexpected modes of expression that would not otherwise have occurred if the computer or the choreographer had worked alone? For this presentation I will show examples from my work Endless Gestures of Goodwill (March 2015), a dance film derived from a cache of over 250 video files of dance movements and gestures. The gestures were created specifically with a variety of compatible input and output poses. The video files were then coded and run through a random-generation algorithm to create an endless dance series that appears seamless, without any sudden or jerky transitions. Ideally, the piece can run indefinitely, as if the computer is creating an endless dance. The piece was designed to be viewed in a museum setting rather than a theater. To heighten the feeling of collaborating with the computer, a camera hanging from the ceiling of the museum captures the audience members' proximity to the screens; from this data, the film slows down or speeds up depending on the viewers' spatial positions. The exhibit thus has a lively interplay between the gestures projected in the film and the real-time movement of the audience members within the museum space. In thinking about this piece again, I wonder about the possibility of creating larger caches of recorded gestures using cloud-based technology, and of applying AI deep learning to speed up the detection of compatible dance gestures within very large data sets. When would this data become a dance, and would the computer know if it had created one?

The DH Fair is sponsored by the UC Berkeley Digital Humanities Working Group, The Townsend Center, the D-Lab, and the Library.