Recently, I decided to document some older, informal work from the past couple of years so that it might live more broadly than in my memories and those of my closest collaborators (as too much of this work does). I sat down to write a brief overview of a game Jim Mathews and I have been playing around with on and off for a while, Soundscapes. I did that, but I also ended up thinking a lot about the topic of audio as an element of AR.
Soundscapes is an ARIS game about sharing sounds. Players use the Notebook to record, annotate, and comment on the aural aspect of diverse environments. The idea is to share just a small bit of a world you find yourself in. It can be an everyday world that might be unique to you or a special occasion you’d like to share.
Wayne Albertson – Auctioneer in Green County. Farm auctions are common as farming changes and agricultural land is being converted.[audio https://dl.dropboxusercontent.com/u/121361/auctioneer.m4a]
As a listener, you have a portal into someone else’s life. Small but evocative. The result can be intimate and thought provoking.
When Jim came up with this idea, I was excited, but it took me a while to understand what would make for a good sound to share. The mellow sounds of rain, or of an open field in the Gila Wilderness, were quite relaxing, but on later listening they did not feel rooted to the place itself the way Jim’s sounds did. Once I realized the true joy of this idea was not becoming an aural tourist but sharing special moments, I found I had an ear out when I least expected it. For instance, one day I was riding home from work and heard a terrible cacophony: a large murder of crows, at least three dozen, all in a nearby tree. Instantly, I had Soundscapes out and recording.
A murder of crows[audio https://dl.dropboxusercontent.com/u/121361/crows-loud.m4a]
Augmenting Reality With Sound
Sound is an important dimension of augmenting reality, but we have a lot more to figure out about how to make use of it. Back in the ARGH days, there was one moment in Saving Lake Wingra that opened our ears to this truth.
You are playing the game, following the virtual map on your device as you walk from point to point along a boardwalk, surrounded for much of it on both sides by high cattails (at least during the seasons kids were playing). Suddenly your device notifies you, presenting the call and image of a red-winged blackbird. This has a profound effect. You start listening. You realize the wealth of sounds that had been around you all along, hearing them anew. The cattails block your vision, isolating the aural element.
Red-winged Blackbird clip from the ARGH game Saving Lake Wingra[audio https://dl.dropboxusercontent.com/u/121361/red-winged%20blackbird.m4a]
I’m not sure whose design this moment was (likely Ming Fong Jan). But it comes up often when Jim and I wax nostalgic about AR. Soundscapes is another small attempt to tap into that magic.
As AR has become mainstream, it is noteworthy that Zombies, Run!, one of the only commercially and critically successful titles thus far, makes audio a big part of its design, layering game content with your own music as an assembled “radio broadcast”.
Of course sound is of high relevance to games concerned with language learning. For Mentira, Julie Sykes and I had always hoped to work up to an audio-based version. As it is, there is only one audio element in the existing version of the game, a short news clip. Presenting actual spoken words can bring learners that much closer to the context of a living language while still retaining the benefits of a curated experience.
This insight struck me very strongly in my work on the game ‘Analy. As I’ve written before, Natalie Diaz’s lullaby has stuck with me. I think before I heard it, Mojave was an academic construct for me, a language I heard about. After, it felt real in a way. Of course it doesn’t matter much for me; I’m not learning Mojave, and my perception is doubtless colored by various romantic notions wrapped up in the master narrative. But I still believe this short clip presents the strongest effect in our design, a tighter relationship between what the game is and what it’s for.
Still, these ideas for designing aural gameplay are one-way. They are language that we, the authors of the game, are presenting to our players. Language learning is ultimately about language production. We are likely a ways off from machine speech recognition being a viable mechanism to take advantage of, but my gut tells me that’s a good thing. Our designs would get lazy if we had that. We need to find creative ways to incorporate verbal language production into what we’re doing.
One instance of this is what the TAs teaching Mentira did with the end of the game. Mentira is a murder mystery, meaning that the culmination is finding out who the killer is. Typically, this would be a payoff moment for an interactive system like a videogame: if you do everything well enough, you are rewarded in the storyline by uncovering the killer’s identity. For Mentira, however, the TAs developed an in-class trial where groups of students would call witnesses, present evidence, and make accusations, eventually voting to convict the killer. A trial doesn’t actually fit the story, and it may even ruin the story arc in terms of psychological payoff, but as a language learning activity of purposeful production it has promise and stays true to our original design intent: to create a fictional scenario situated in local culture whose participatory vector was the Spanish language.
NoTours and Background Audio
Another take on audio, an approach we’re hoping to integrate into ARIS someday, is provided by the open source AR design platform NoTours.
Authors place sound clips in the world, each with a radius over which it decays. The result, for a user walking through the augmented space, is an overlapping soundscape.
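To make the mechanism concrete, here is a minimal sketch of that kind of placed-audio mixing. Everything here is my own illustration, not NoTours’ actual API or falloff curve; I assume a simple linear decay from full volume at a clip’s center to silence at the edge of its radius.

```python
import math

def gain(listener, clip_pos, radius):
    """Volume of one clip for a listener: full (1.0) at the clip's
    center, fading linearly to silence at the edge of its radius."""
    d = math.dist(listener, clip_pos)
    return max(0.0, 1.0 - d / radius)

def soundscape(listener, clips):
    """The overlapping soundscape: every clip whose radius reaches
    the listener contributes at its own level."""
    levels = {}
    for name, (pos, radius) in clips.items():
        g = gain(listener, pos, radius)
        if g > 0:
            levels[name] = g
    return levels
```

Standing between two clips, a walker hears both at once, each at a level set by distance, which is exactly the overlapping quality that makes walking through such a space feel continuous rather than point-to-point.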
Fred Adam brought our attention to NoTours in 2012. He and his students in Murcia, Spain sometimes combine ARIS and NoTours in their designs, letting NoTours be the soundtrack to the cinema they create with ARIS (his and Veronica Perales’ chapter in our upcoming book details some of this).
Too often, designers of AR for learning get bogged down in the information we present our audiences. We risk falling into an attitude of presenting content instead of creating experiences. I believe a focus on overlapping audio—soundtracks—as in NoTours may help us focus on creating emotional states among our players, improving our designs by improving the connection we can help them make to the worlds we’re creating.
ARIS briefly had the ability to play background audio within conversations, something Fred and I both loved to use to deepen the experiences we could share with players. My favorite example was in Earl Shank and Anthony Thompson’s Los Duendes where little Maria’s ghostly cough follows you throughout the game.
These previous designs shed some light on ways ARIS could become a better tool for authoring AR experiences, and suggest how authors might craft those experiences with a more subtle hand (audio is often less in-your-face, yet hugely important to an experience nonetheless). They also highlight some areas of possible growth for our tool and how we design with it. Here are a couple:
Reaching out from ARIS. To be used for many-many sharing in an unconstrained timeframe (like Soundscapes), the ARIS Notebook needs to touch the rest of the world. Notifications, sharing on Facebook, something. I found that when I came across the crows, I knew Jim wouldn’t just open the game and see my submission. I texted him instead.
Deeper Notebook Interaction. Consumption in the Notebook should have designed interactions too. What happens once Jim has heard my crows and “liked” or “commented” on them? His Notebook should reflect that somehow, like reporting “read” notes differently than “new” ones. Certainly, ARIS has some hooks for this, like the requirement types “player has given # of likes/comments”, but the Notebook itself doesn’t change.
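To sketch what that deeper interaction might look like, here is a toy model of a Notebook that tracks read state. The names, fields, and grouping are all my own invention for illustration, not the actual ARIS data model.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """One shared recording, with the social state a richer
    Notebook could track. Illustrative only, not ARIS internals."""
    author: str
    clip: str                       # e.g. a recorded sound file
    likes: int = 0
    comments: list = field(default_factory=list)
    read: bool = False              # has this player heard it yet?

def notebook_view(notes):
    """Group notes the way a state-aware Notebook might display them,
    surfacing "new" submissions separately from already-"read" ones."""
    return {
        "new": [n.clip for n in notes if not n.read],
        "read": [n.clip for n in notes if n.read],
    }
```

With something like this, Jim opening his Notebook would immediately see my crows under “new”, and once he has listened and liked them, the note would move into “read” rather than sitting indistinguishably in one flat list.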