Emotionally compelling AR Views

Despite being an author and promoter of “augmented reality” games for more than a decade now, a dedicated enthusiast if you will, I find that what most often gets shown off as augmented reality ends up seeming like a boring gimmick. This is disappointing in some ways (not all—experimenting in a medium has its own value). But even though this is what I see peddled around, I don’t think it has to be that way. What most tech enthusiasts and bystanders understand AR to be could become something more alive. And I have some ideas about how to get there.

Update – I added a section at the end clarifying where inspiring work in AR can be found. There’s lots out there, but you need to be looking for it.



Another Awesomenauts – Making a game with a close friend

Another Awesomenauts Game Title

I haven’t had much chance to make games lately. Lots of leading design jams and student work, but not a project where we make a game and play it. A game we spend some time and energy on, where we try to realize a singular vision. Not in a while. But this last year, in my spare time on weekends and such, I found myself lucky enough to work on a new game. A couple weeks ago, we finished a 1.0 and ran it up the flagpole. It was exhilarating. It felt great to see it come to fruition and put it through its paces. I’d like to tell you a little about this project and where we might start to look for learning and games to come together in ways that can lead far.

My design partner’s name is Alex, he’s 5, and he’s my oldest son. Our game is called Another Awesomenauts.


Showing Off Seesign

Seesign in use

There’s a new project here at Local Games Lab ABQ: Seesign. It is Celestina Martinez’ tool to visually identify signs (as in sign language, not road signs). She built it with Nomen, a new platform from Field Day Lab, my ARIS co-conspirators.

Martinez’ goal with Seesign is to replace the very user-unfriendly dictionaries sign language students need to use to look up unfamiliar signs with something obvious and easy to use. The usual approach to looking up signs is to search by the English word you think might correspond to what you saw, but the affordances of dictionaries are just not very useful for organizing visual as opposed to textual information. It is not hard to see that this is terribly inefficient for learners who need to look up signs they don’t recognize. With Seesign, all you have to do is mark the features you do notice: hand shapes, movement type, etc.

Seesign’s Main Screen

You do this by tapping the relevant choices on Seesign’s main screen (above). The possible matches to your choices are numbered and linked at the top right of the screen, and are narrowed in scope with each additional feature you mark. At any point, you can tap on this number to see those potential matches (below; in the actual app these are animated GIFs).

Likely Matches to a Choice on Main Screen

Nomen makes it quite simple for novices without much technical background to produce tools for this and similar identification tasks. More on that later, but for now, note that Ms. Martinez does not have a technical background and put this app together, with only minor assistance from me, part-time over this last year. She began work on Seesign last fall as part of an assignment in my course, Things That Make Us Smart, and has continued this semester as an independent study. She also submitted it to the 2016 UNM App Contest, but did not win any of the big prizes (robbed IMO).

Why Seesign Is Cool

Ms. Martinez describes it a bit better than I do, but basically, her app was born of the frustration of learning sign language using common tools. The usual approach to looking up signs is to search by the English word you think might correspond to what you saw. It is not hard to see that this is terribly inefficient. It works fine if you are trying to say something you know in English, but not at all for identifying signs you see. The affordances of dictionaries are just not very useful for organizing visual as opposed to textual information. Reverse-look-up visual dictionaries do exist, but they are not common and they are expensive and very limited in terms of

  • The words they provide – very few
  • How signs are represented – static images, very basic
  • The mechanics of looking up – tons of pages to rifle through.

Seesign improves on all of these greatly with its obvious, streamlined interface for marking features and looking at the matching results. Not only is the user able to mark as little or as much of what they recognize—maybe you notice the type of movement, but not what hand shapes were used—but Nomen is also flexible in how it organizes and displays matches: showing both “likely” matches, signs that match all marked features, and “possible” matches, signs that match at least one marked feature. And since it is a web app, unlike my long-time favorite ARIS, Seesign is accessible on almost every internet-connected device with a screen: phones, tablets, laptops, desktops.
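That likely/possible distinction is simple enough to sketch in a few lines. The code below is my own toy illustration of the idea, not Nomen’s actual data model or API—the sign entries and function names are hypothetical:

```javascript
// Toy catalog of signs, each tagged with visually distinct features.
// These entries and feature names are made up for illustration.
const signs = [
  { word: "hello",  features: ["flat-hand", "wave"] },
  { word: "thanks", features: ["flat-hand", "from-chin"] },
  { word: "where",  features: ["index-finger", "shake"] },
];

// "Likely" = has every marked feature; "possible" = shares at least one.
function matchSigns(marked, catalog) {
  const likely   = catalog.filter(s => marked.every(f => s.features.includes(f)));
  const possible = catalog.filter(s => marked.some(f => s.features.includes(f)));
  return { likely, possible };
}

// Marking "flat-hand" and "wave": only "hello" has both,
// but "thanks" shares a feature, so it stays a possible match.
const result = matchSigns(["flat-hand", "wave"], signs);
```

Each additional feature the user marks can only shrink the likely set, which is why the count at the top right of the screen narrows as you go.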

So far, Martinez has developed two iterations of her basic prototype (one each semester). She currently only has 10 words catalogued, but has worked extensively at

  1. Developing a useful categorization scheme that classifies words in a way that brings out the visually distinct features of signs and fitting them into a small number of readily apparent choices, and
  2. Producing a visual format for illustration and feedback on the users’ marked choices: animated GIFs.

I’ve been really impressed with Ms. Martinez’ work this last year, both in her general ability to start with an idea and really dig into developing it into something tangible (not just the app but the “soft” work too: finding a model to work with, entering the UNM app contest, etc.), and in the acuity with which she has developed, through thoughtful iteration, the experience of her app, especially in those two areas above. She has also displayed attention to detail. Nomen is, strictly speaking, authorable without writing code, but it is still essentially a working prototype of the basic functionality Field Day hopes to turn into something far more polished in the next year. In particular, the formatting guidelines are strict and unforgiving. Ms. Martinez’ build compiled on her very first upload, a feat I doubt I will ever manage.

This work is worth looking at for several reasons, some of which go far beyond her specific innovations:

  • It is a good idea: the advantage and use of this tool to aid sign language learners is clear.
  • It begins to describe a model for extending learning outside the classroom based on interests developed within.
  • It is an example of how accessible tools can open new areas of development by subject matter enthusiasts instead of technologists.
  • It is an example of how a local games lab can encourage and support development in small but meaningful ways.
  • It is an opportunity to discuss how events like app contests might focus innovation and enhance collaboration, and having seen this contest play out, what might be learned from what apps are being tried, why, and how.
  • It is the first documented use of Nomen (there’s another one I’ve been working on I hope to post about soon), a new accessible and open tool that could see myriad uses by others across many fields of inquiry and contexts of interaction.

In follow-up posts, I’d like to discuss these more general ideas. If you beat me to it or have other resources to point to, then by all means…

Second Annual Game Symposium


Last Friday (4/1/2016) was the second annual Game Symposium hosted by the Local Games Lab ABQ student group at the University of New Mexico. It was tons of fun and somewhat amazing. It’s essentially a mini GLS conference put on by local students. There were students, faculty, and local devs speaking and in the audience.

It is hard to get people’s time and attention at UNM and in Albuquerque. This is true for student clubs, political parties, and everything else. The fact that this club is going strong after two years, has hosted many events, and has once again put on this symposium is a testament to the hard work and leadership of Gianna, Zack, Diane, and Joey. Logistically, the event was great too. Simple and competent.


As an audience member, what strikes me the most is that many of the issues concerning games and their uses that have come up for me as an academic are important to people coming from other perspectives, and that we seem to be able to understand each other’s struggles. Also striking was the sense of optimism that there are a lot of nascent opportunities with games, opportunities that the big players are mostly blind to but that will be explored through the expanding democratization of game making. It was clear that everyone in the room was speaking and listening from a core positive experience: games had enriched their lives and given them meaning, connecting them to the worlds they live in and people they meet. There was a sense of shared purpose, that continued dedication to this craft would take the benefits from early chance encounters and find ways to expand and further realize and share them.

Below are a few additional notes from the event. If you’re thinking about making games, or trying to dig further into what we can learn from the learning that happens in playing and making games, the concrete experiences below might be a nice counterpoint to more academic treatments, and offer some clues about how these big ideas are woven into and emerge from people’s lived experiences with games. I haven’t gotten the official schedule yet, so forgive the missing names. I wrote down those I could catch.

The Button – New AR Paradigms Using World Items and ARISjs

Giving an item to the world and to the player in ARIS

Over the summer, ARIS gained some new features that vastly expand the kinds of games that can be built with it. The uses of HTML, Javascript, and ARISjs to extend ARIS are just beginning to be explored; my previous tutorial about Leaderboards just scratches the surface. Another addition to ARIS is world items.

World items are items possessed by the game world, not a player. They can be used to define the state of the game world and have it respond to players. This makes ARIS far more capable as a multiplayer engine.
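As a rough mental model—and this is my own toy sketch, not ARIS’s actual data model or the ARISjs API, which the tutorial itself covers—a world item is just a quantity keyed to the game rather than to any one player, so every player reads and writes the same shared state:

```javascript
// Toy model of world items vs. player items; all names here are
// illustrative only, not ARIS or ARISjs identifiers.
const world = { items: { button_presses: 0 } };

function makePlayer(name) {
  return { name, items: {} };
}

function pressButton(player) {
  // The press is recorded on the world, so all players see it...
  world.items.button_presses += 1;
  // ...and the player keeps a personal record of having pressed it.
  player.items.pressed = (player.items.pressed || 0) + 1;
}

const ada = makePlayer("Ada");
const ben = makePlayer("Ben");
pressButton(ada);
pressButton(ben);
// world.items.button_presses is now 2: shared state the game
// can branch on, regardless of which player asks.
```

Because the count lives on the world, a game can show different content once some collective threshold is reached—exactly the kind of multiplayer responsiveness that player-owned items alone can’t express.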

This post is an intro to world items: how they might be useful, how to use them, and the ARISjs you need along the way to get the most out of them. We will do this by looking at the design of a concrete example, The Button, an experimental game Jim Mathews put together for the recent ARIS Global Game Jam.


ARIS Design Challenge – Greenland is Melting

Screenshot from the article. Looks like ARIS, no?

Today, the NYT has a web article about a scientific mission to Greenland. This is very fancy web design, something only the most headlined of articles receives. About halfway through reading it, I thought, “What if this was an ARIS game?”

Many of the visual techniques and visual sources are a good match to what ARIS can do (overhead satellite maps, on-site videos and images) and the techniques try to pull the audience into the story by giving them some feeling of control (zooming the satellite shot into the basecamp as the viewer scrolls the page). The bulk of the article itself puts you inside the trials and tribulations faced by the team trying to conduct research in such a far-off, extreme place—again a good match to the strengths of ARIS and a bit different approach than communicating the underlying scientific ideas or the consequences of ice melt on this scale. There was even a portion of the article where the image of the ice from the top looked just about exactly the way it would if you had done it in ARIS, faded and transparent blue circles around points of interest.

So how about it? Would anyone take me up on the challenge of producing a version of this story in the medium of ARIS?

I think such an undertaking, and other similar translation style activities, could teach the author a few things about how storytelling in this medium might work and how it can be similar to and different from the fancy web format. I also wonder:

  • Is vicarious travel, tapping points on a map as opposed to more typical AR game design, worth undertaking? Is it compelling? Can it improve on handing someone a set of points of interest in Google Maps, and actually bring them into the story?
  • What are the possible effects of placing someone in the story as opposed to telling them about someone else’s?
  • What choices do we make about what to tell and what to show? What do we hope someone gets out of being in the audience?
  • If a few people do this, how different are the results? To what extent do either the software or our perceptions of it determine how we try to tell stories with it?
  • What other game or game-like formats would be a good or different match for this task? e.g. how does ARIS compare to RPG Maker as a possible vehicle?

I’d be happy to hear from you if you try this design challenge or if this idea brings up any other questions.

Algorithmic AR – Part 2

In Algorithmic AR Part 1, I outlined the concept of Algorithmic AR and gave a brief tutorial on how Factories work in ARIS to make games based on place-based algorithms. The tutorial used an existing game, Rupee Collector, to illustrate those features.

In this follow-up, I show some ideas for how to take Factories further through a couple of other examples, one of which is an early game design assignment I give my students each year. There’s also a brief discussion of Algorithmic AR in the broader world, and how, through existing designs and the ones we learn to create, we might begin to think of it as a general interactional paradigm.
