Plaques, conversations, and quests oh my!

ARIS is great, but not always straightforward. I forget this since we have grown together over the course of a decade.

In particular, ARIS authors face a bit of difficulty deciding which kinds of “game objects” to use when creating games and other experiences (plaques, conversations, items). A recent post to the forums raised questions I’m sure thousands of other authors have faced.

Although the manual and other places do go over technical differences between plaques and conversations, this isn’t helpful enough. So here I’d like to share a bit of practical advice about choosing game objects. 

First, there is no one right answer. ARIS is very flexible, an open-ended system for creating augmented reality experiences. Do what seems good to you. Chances are you’ll figure out something I haven’t.

But, especially for newcomers, it’s nice to have a good place to start.

Start with Plaques

I use plaques until I really need something else. They are the simplest objects in ARIS, the easiest to create and to use. The way they show up for players in the client is similar to what you see in the editor. And yet plaques are still powerful. Already with plaques you get a deeper interactive system than with other AR tools like Aurasma.

So what can a plaque do?

At minimum, a plaque presents the player with a title, a bit of media, and some words set below on the screen. 

Intro Plaque from our recent game, Zimm

So 99% of my plaques look like this one above, with an image on top and some text below (there is a bit of HTML to make the big, red font and paragraph spacing). When the player is ready to move on, they tap “continue”. It’s very simple for them and me. 

And within this format, an author can do a lot of storytelling. Plaques can represent turns in a conversation, quests being given, pure information, and so much more. The best advice I have for authors is not to let our vocabulary limit their imaginations when it comes to representational metaphors.

A plaque’s media need not be an image. A great and also simple use for plaques is to present video (or audio, but it’s clunkier). Putting videos in plaques may help you ditch text altogether. (Note too that video playback in plaques is full screen by default, making them more immersive than videos inserted as character media, which are constrained to vertical orientation within the existing conversation window.)

What the above plaque looks like in the ARIS editor – don’t worry about code until you’re looking to fancy things up

A third thing plaques can do, and where the simplicity of plaques lets authors quickly make use of advanced interaction in ARIS, is “edit events” (the blue button in the screenshot above). To a first approximation, “edit events” allows authors to give items to and take items from the player when the plaque is viewed. Interactions with hit points, keys, money, etc. are quite simple to put together.

There are far more advanced things you can do with events too, like change the whole game world for all its players (not just each one who encounters the plaque). There are also other places you can use events. But giving and taking items from players is already a lot to play with. And running events from plaques makes their execution simple too. 

Finally, the text field in plaques can accommodate HTML and JavaScript. You can use HTML to customize the presentation of text: bold, italics, and so on.
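Here is roughly what I mean. This is ordinary HTML, nothing ARIS-specific, and the particular tags, colors, and sizes below are placeholders for illustration, not what Zimm actually uses.

  <p style="color: red; font-size: 28px;"><strong>You made it!</strong></p>
  <p style="margin-top: 16px;">Regular plaque text goes here, with a bit of extra paragraph spacing.</p>
  <p><em>Italics</em> and <strong>bold</strong> work just as they would on any web page.</p>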

JavaScript can be used to do just about anything. In the example above, we use it to make the device vibrate upon opening the plaque. ARIS even has its own JavaScript library and documented examples to do things like present leaderboards or insert the player’s name into text.
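As a taste of what that can look like, here is a minimal sketch of script dropped into a plaque’s text field. The aris.vibrate() and aris.playerName() calls are placeholders standing in for whatever the ARIS JavaScript library actually exposes (check its documented examples for the real names); the DOM call at the end is standard JavaScript.

  <p id="greeting"></p>
  <script>
    // Placeholder calls -- substitute the real functions from the
    // documented ARIS JavaScript examples.
    aris.vibrate();               // buzz the device when the plaque opens
    var name = aris.playerName(); // fetch the current player's name

    // Standard DOM scripting: drop the player's name into the paragraph above.
    document.getElementById('greeting').textContent = 'Welcome, ' + name + '!';
  </script>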

Starting with plain text, then moving to HTML and JavaScript, all within the same part of ARIS, a lowly plaque, is an awesome way to start simply and ramp up into complexity. And these skills will transfer to other places too! Most text fields in ARIS can parse code in the same way, and these languages have plenty of other uses besides.

Again, my first advice to ARIS authors is to use plaques until you really need something else. They can do so much!

Plaques, triggers, and locks

Another thing authors gain by sticking to plaques early on is learning the other functions of ARIS, especially triggers and locks, in a simpler context.

Triggers – How the real world and game world meet

Plaques (and pretty much everything else) are accessed through triggers. The starting point is to assume that every plaque needs a trigger. The most common kind of trigger is a location. As an author, you would create a location in the real world for your plaque, a place where the player could find it, by creating a location trigger and pointing it at the plaque in question. Until you have a trigger (or some other connection), your plaque exists in the game world but cannot be reached by a player.

Plaque and location trigger – basic mechanic of ARIS

There are many other kinds of triggers too, from QR codes, Bluetooth beacons, and timers to AR triggers that work by the camera matching what it sees against an image the author sets. My previous post on the design of Zimm shows a few of these in action, and the manual lists them all.

Locks – creating sequence from content

The notion of sequence in ARIS is determined entirely by locks, and this is another area where your initial assumptions may not line up with how ARIS is set up. By default, any trigger (in a scene—until you really get into ARIS, just use a single scene; they are messy) is always available to every player, even if they’ve already seen what’s there. Without locks, the game world is static. This is similar to many tour-making apps, like Aurasma.

With locks, you go from tour to game. Real interaction becomes possible. Basically, you put a lock on a trigger by specifying the key that opens it, i.e. what the player needs to do or have in order to make the trigger accessible. You can lock triggers you’re done with, open up new triggers as the player progresses, and make triggers conditional on pretty much anything ARIS can keep track of. The manual and its tutorials have further details on their use.

Although there are other objects and mechanics for ARIS authors to explore and exploit, in many situations you can use plaques to accomplish the functions of these other, more complicated features. This is a great way to limit the complexity of both your learning and the design you are hoping to realize.

Below, I detail a couple of examples of this: why I often use plaques in lieu of quests or conversations, even though I know ARIS quite well.

Plaques instead of quests

The Quests feature in ARIS is quite powerful, and equally complicated. I typically remove that tab from my games. I can accomplish the job (notifying the player of their progress and goals) with plaques and appropriate locks, telling the player at certain times that they should be doing something in particular or relying on them to move forward if the available options give them little choice. Making a few plaques to communicate game state is way easier than figuring out this whole panel:

This takes longer to learn than it looks; quests are complicated. Just use plaques!

Plaques instead of conversations 

Authors often have a hard time deciding if/when to use a plaque or conversation. My general rule of thumb is:

Use a conversation if it needs to branch or page. Otherwise use a plaque.

The choices in conversations allow players to take different paths within them, and they also allow you to create a layered experience (paging), switching out text and media with each tap by the player. These are the main reasons I’ll go to the trouble of using a conversation instead of a plaque.

The one other consideration is appearance. Instead of text below an image, conversations fill most of the screen with their image and layer text over it semitransparently. Conversations need more specific image aspect ratios, and the bottom part of the image can’t be important because the player won’t always see it. But in blending image and text visually, they also look a bit fancier. The text players tap is also customizable in conversations, but not in plaques (well, maybe it is, but I haven’t gotten an answer as to how). In a plaque, the player moves forward by tapping “continue”, but in a conversation you control not only the number of ways to continue but also the text that is used. You can offer something like “tell me more…” as a more evocative option, or present the prompt in another language without localizing everything else, even when you don’t need a branching interaction.

Other ideas?

So that’s a basic rundown of what plaques really can be and why they are a good first choice for beginners and veterans of ARIS alike. If you’ve got other questions about them, I’d be happy to hear them. The same goes if you have a different set of default design pathways with ARIS.

In what other areas could I provide this sort of practical advice, or otherwise share the contexts of AR learning design rather than just technical details?

Zimm – AR in the Library (Part 1)

I spend a lot of time thinking about what AR is good for, and mostly this means not just choosing content, but places to augment, and audiences to inhabit that augmented place. Some places feel impossible to play in and others feel like giant piles of potential. Today I want to say a bit about the latter.

The ghostbusters of Zimm

The creators of Zimm – (left to right) Chris Holden, Vanessa Svihla, and Yang Liu. Not present: Cindy Pierard.

Libraries should be great places for AR games for a wide variety of purposes, and they are especially well suited to taking advantage of the new AR capabilities in ARIS and other platforms. Advances in the context awareness of mobile devices (AR vision, Bluetooth beacons) make indoor games suddenly easier to pull off and more fun to play. Last year, I started working with a new colleague, Vanessa Svihla, and together with one of her students, we spent the spring putting together a small game in our own university library. We learned a lot that I hope might help others.

On this blog, I’d like to share a few thoughts about

  • How libraries could be places for playing AR games and for constructive, creative work between departments, students, staff, and faculty: a new place for AR and a model for collaboration that can open doors.
  • New AR features in ARIS and implications for design of AR games, especially in indoor spaces, and some design considerations for their use.
  • How a diverse group of people can come together around game design and place, and some suggestions for doing this kind of work in a way that is approachable and hopefully sustainable.
  • The pervasive nature of place. How we can use game design and play to engage with the places we live and work.

I’ve split this conversation into a few parts. If you’re coming to AR from far away, or just Pokemon Go, these articles might help you see the depth of purpose I and others have found in this work. There’s also a lot more on this very blog.

If you’d like to know more about Zimm itself, here’s a bit of basic info. If you’re in the area and would like to give the game a go, get in touch.

Intro Plaque from Zimm

Here, in Part 1 – Why Libraries, I lay out the basic reasons why I think libraries and AR could be like chocolate and peanut butter.

Part 1 – Why libraries?

Libraries are great places for so many reasons. They will likely play new and important roles in education and communities in the near future, and if you haven’t looked in a while, they already do. Precisely because “a place to find dead trees and ink” seems anachronistic now, libraries are confronting the future in more direct and creative ways, on the whole, than schools. They are also very interesting to begin with, and they are some of the only remaining public spaces in America: places to go and be and meet without needing to have a shopping agenda. Libraries provide diverse and deep resources, not just books and quiet. They are jumping off points for many journeys, from wizarding worlds, to job applications, to organizing community action. And while libraries are not playgrounds, not generally places for yelling and jumping, they may be excellent places in which to structure other sorts of play. Each book is a world between covers, waiting to be discovered and shared. And even as the dead trees dwindle a bit, this basic ethos seems to pervade much of what a library might offer. I’ve felt it in every conversation I’ve had with a librarian over the last decade. They’re excited for the future and working hard to provide open doors that beckon in new ways.


A third of a controversial mural in Zimmerman Library at UNM. Its racist content set the stage for our design.

This setting is an excellent place for AR especially. The chief strength of AR is helping people explore worlds hidden in plain sight, and it could easily serve libraries’ role as centers of discovery.

And as we focus on libraries, let’s keep these goals lofty. Too often, we get sidetracked by solving logistical problems for patrons in the most basic sense. I’ve seen a host of AR projects set in libraries whose sole objective was to simplify finding the correct call number in the stacks. Sure, the sorting system (whether Dewey or LoC) is a barrier to outsiders finding what they need, but the real barriers to participation in a library space are largely psychic.

This mirrors a lot of problems with learning, where we focus on the mechanics of knowledge uptake in a very general way, not realizing that most problems really stem from a lack of care and familiarity, from not feeling that places of learning are places where you can become something great.

So as we look at libraries for AR and other games, let’s see our design challenge as a need to realize the potential for exploration there. If the imagination is there, navigating call numbers might be more of a quest to complete, or a puzzle mechanic, not just a simple navigation UI concern. We should begin to think in terms of long lost treasure maps as much as an efficient system for locating a desired title.

We should remember that there’s more to finding your way than being able to locate a spot in the stacks.

Libraries are partly specific and partly generic

One of the problems faced by place-based AR projects is that the design feels too parochial, hard to pass on to others in new places. Libraries offer some help. Libraries today are large, varied, mostly indoor spaces. Stacks are just the beginning. And unlike a lot of the unique settings for AR (say a specific neighborhood in Albuquerque), libraries exhibit both the universal and local. No other library is quite like yours, but there are a lot of similarities across most, both in what you’ll find and how the space is organized. AR design work done in a particular library has potential to be easier to localize to other places and settings.


This exact mural isn’t in your library, but incidental art is commonplace. Our use of it in Zimm is easy to pass on.

Libraries also have potential as a sort of neutral ground, a place not controlled entirely by a single interest, use, or age group. Such neutrality can be hard to find in schools, but it is badly needed now, as we realize that our divisions among job descriptions, disciplines, and functions prevent us from addressing the educational needs of this century. We are bound within our roles, timetables, and departments, and libraries may give us some of the space we need to collaborate in more meaningful ways to supercharge learning beyond the ordinary.

As usual, what should be a simple blog post about a cool idea to make games in the library turns—for me—into a polemic on structural issues in education. So rather than going too far down that road, I’ll just list a couple reasons this stuff matters.

  1. Agency for all. Staff have power here along with faculty. And students can easily be welcomed as co-owners of action and planning in a way that is just not possible in other areas. This neutrality is hugely important for empowering all stakeholders, a goal often missing from educational design and reform efforts even though its inclusion should be a basic humanistic principle of this work we call education.
  2. Scale. This has been maybe the chief difficulty of educational innovation: how do we spread ideas beyond their original contexts? Usually, and this is not a good approach, just a convenient one in a world of mass media, we put the idea in a box and send it off, excluding the needs and interests of the many stakeholders in the new spaces. All stakeholders’ interests and needs must be addressed, not just what those at the top demand of those below them. The internet affords new forms of growth in partnership—grassroots, non-hierarchical—that we hope to use as better routes to scale.
  3. Sustainability. If an important project depends on the drive of a single person and their incentives, what happens when that person takes another job or decides it is not worth the trouble to keep pushing that stone uphill? Including others meaningfully is the first step in creating something that is more like an aspect of the ecosystem instead of just a pet project.

Not every single experiment needs to tackle these beasts head on, but work done without a consideration of how it leads into their toothy jaws and back out again is doomed from the start. If you’re new to this topic of conversation, I’d recommend reading Seymour Papert’s The Children’s Machine. It was written in 1993, a decade after his (and others’) groundbreaking Logo software was introduced to schools, alongside computer hardware, with the idea that they would revolutionize education. They didn’t, and Papert has some cogent and timeless thoughts about why not.

Zimm – Parts 2-?

So that’s why libraries, structurally, might be good places to make AR games. Next, in Part 2, I’ll mention a few ways the affordances of AR match the constraints of the space, and where some apparent possibilities for design lie. We’ll look at some of the features of the game we made at Zimmerman (it’s called Zimm) and what we learned about the realities of making games in libraries. It’s also a good chance to look at some of the newer AR mechanics available in ARIS.

In Part 3, I’ll go through some of the practical logistics of this work: how a group of people can come together around the idea of making a game in a library, learn to work together, set appropriate expectations, and use this work as a way to grow closer and understand their common cause as well as the burgeoning art of AR design. Usually, we spend a lot of time with AR talking about the mechanics of the software, but not about the groupings of people who will make these games. The latter is hugely important if this work is going to go anywhere in the end and remain a vernacular rather than a license we buy into.

If I get to Part 4, there’s one other issue that came up with Zimm that feels central to me when it comes to making sure we do the best work we can with AR. Place. Place is always important and more than location. Even when you don’t plan for it, it intrudes. Instead of seeing it as an unwanted variance, we can listen to it to do educational work that is more vital and relevant to the lives of those who participate in our experiments.

I’d love it if these articles, or just their titles, got someone else excited about making games in libraries, and not just to find call numbers. Once we have a few people experimenting in the area, the really interesting conversations can begin.


Showing Off Seesign


Seesign in use

There’s a new project here at Local Games Lab ABQ: Seesign. It is Celestina Martinez’ tool to visually identify signs (as in sign language, not road signs). She built Seesign using Nomen, a new platform from Field Day Lab, my ARIS co-conspirators.

Martinez’ goal with Seesign is to replace the very user-unfriendly dictionaries sign language students have to rely on to look up unfamiliar signs with something obvious and easy to use. The usual approach is to search by the English word you think might correspond to what you saw, but the affordances of dictionaries are just not very useful for organizing visual as opposed to textual information, and it is not hard to see that this is terribly inefficient for learners who need to look up signs they don’t recognize. With Seesign, all you have to do is mark the features you do notice: hand shapes, movement type, etc.


Seesign’s Main Screen

You do this by tapping the relevant choices on Seesign’s main screen (above). The possible matches to your choices are numbered and linked at the top right of the screen, and are narrowed in scope with each additional feature you mark. At any point, you can tap on this number to see those potential matches (below; in the actual app these are animated GIFs).


Likely Matches to a Choice on Main Screen

Nomen makes it quite simple for novices without much technical background to produce tools for this and similar identification tasks. More on that later, but for now, note that Ms. Martinez does not have a technical background and put this app together, with only minor assistance from me, part-time over this last year. She began work on Seesign last fall as part of an assignment in my course, Things That Make Us Smart, and has continued this semester as an independent study. She also submitted it to the 2016 UNM App Contest, but did not win any of the big prizes (robbed IMO).

Why Seesign Is Cool

Ms. Martinez describes it a bit better than I do, but basically, her app was born of the frustration of learning sign language with common tools. The usual approach to looking up signs is to search by the English word you think might correspond to what you saw. It is not hard to see that this is terribly inefficient: it works fine if you are trying to say something you know in English, but not at all for identifying signs you see. The affordances of dictionaries are just not very useful for organizing visual as opposed to textual information. Reverse-lookup visual dictionaries do exist, but they are uncommon, expensive, and very limited in terms of

  • The words they provide – very few
  • How signs are represented – static images, very basic
  • The mechanics of looking up – tons of pages to rifle through.

Seesign improves on all of these greatly with its obvious, streamlined interface for marking features and looking at the matching results. Not only can the user mark as little or as much of what they recognize as they like—maybe you notice the type of movement, but not what hand shapes were used—but Nomen is also flexible in how it organizes and displays matches, showing both “likely” matches, signs that match all marked features, and “possible” matches, signs that match at least one marked feature. And since it is a web app, unlike my long-time favorite ARIS, Seesign is accessible on almost every internet-connected device with a screen: phones, tablets, laptops, and desktops.
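To make that matching rule concrete, here is a tiny sketch of the logic in plain JavaScript. It only illustrates the rule described above, not Nomen’s actual code, and the sign names and feature labels are made up for the example.

  // Illustration only: placeholder signs, each listing the features it exhibits.
  var signs = [
    { word: 'sign A', features: ['handshape:flat', 'movement:up',   'hands:one'] },
    { word: 'sign B', features: ['handshape:fist', 'movement:up',   'hands:two'] },
    { word: 'sign C', features: ['handshape:flat', 'movement:side', 'hands:one'] }
  ];

  function lookUp(marked) {
    // "Likely" matches exhibit every feature the user marked.
    var likely = signs.filter(function (s) {
      return marked.every(function (f) { return s.features.indexOf(f) !== -1; });
    });
    // "Possible" matches exhibit at least one marked feature.
    var possible = signs.filter(function (s) {
      return marked.some(function (f) { return s.features.indexOf(f) !== -1; });
    });
    return { likely: likely, possible: possible };
  }

  lookUp(['handshape:flat']);                 // likely: signs A and C
  lookUp(['handshape:flat', 'movement:up']);  // marking more features narrows it to A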

So far, Martinez has developed two iterations of her basic prototype (one each semester). She currently has only 10 words catalogued, but has worked extensively at

  1. Developing a useful categorization scheme that classifies words in a way that brings out the visually distinct features of signs and fitting them into a small number of readily apparent choices, and
  2. Producing a visual format for illustration and feedback on the users’ marked choices: animated GIFs.

I’ve been really impressed with Ms. Martinez’ work this last year, both in her general ability to start with an idea and really dig into developing it into something tangible (not just the app but the “soft” work too: finding a model to work with, entering the UNM app contest, etc.), and in the acuity with which she has developed, through thoughtful iteration, the experience of her app, especially in the two areas above. She has also displayed real attention to detail. Nomen is, strictly speaking, authorable without writing code, but it is still essentially a working prototype of the basic functionality Field Day hopes to turn into something far more polished in the next year. In particular, the formatting guidelines are strict and unforgiving. Ms. Martinez’ build compiled on her very first upload, a feat I doubt I will ever manage.

This work is worth looking at for several reasons, some of which go far beyond her specific innovations:

  • It is a good idea: the advantage and use of this tool to aid sign language learners is clear.
  • It begins to describe a model for extending learning outside the classroom based on interests developed within.
  • It is an example of how accessible tools can open new areas of development by subject matter enthusiasts instead of technologists.
  • It is an example of how a local games lab can encourage and support development in small but meaningful ways.
  • It is an opportunity to discuss how events like app contests might focus innovation and enhance collaboration, and having seen this contest play out, what might be learned from what apps are being tried, why, and how.
  • It is the first documented use of Nomen (there’s another one I’ve been working on I hope to post about soon), a new accessible and open tool that could see myriad uses by others across many fields of inquiry and contexts of interaction.

In follow-up posts, I’d like to discuss these more general ideas. If you beat me to it or have other resources to point to, then by all means…