Today’s Meet – A Disposable Backchannel

This morning, I discovered a tool that may be of use to many of you out there: Today’s Meet. There are many ways to use this tool, but you can start out thinking about it as a disposable back channel. You can create a space for back-and-forth dialogue in short entries, using the same 140-character limit as Twitter. This space is not connected to anything else, but is accessible via URL, QR code, and embedding. Each participant creates their own nickname in each “room”, allowing for anonymous or pseudonymous contributions. And the room self-destructs after an amount of time you specify. You can use Today’s Meet without signing in at all, but if you create an account, you get additional controls. If you pay ($5/month), you get even more customization and features, sold under “Teacher Tools”.


I’ve heard of many teachers conducting class on Twitter, hoping to introduce students both to the format of short online conversations and microblogging, and to the idea of being part of a bigger world. But Twitter really is the deep end. Not only is its use idiosyncratic and hard to pick up, but there are real dangers there too. Most of the Twitter-linked pedagogy I’ve heard of really doesn’t intend to go this deep into what Twitter is, or into how your presence there is part of your public identity. Gamergate in particular revealed to me the dangers lurking there and really made me think twice about pushing students to work publicly by default.

I like the idea of introducing students to a modern writing and reading experience, but outside some real time and effort to devote to Twitter itself (and maybe some anti-litigation waivers to sign) I have wondered if I and my students might be better served by a mechanically similar but socially distinct tool. Today’s Meet seems like a good match to me.

Coming from another angle, as a teacher whose classes revolve around discussion, Today’s Meet has another appeal. In fact, this is how I came across it today. I’m reading a book about strategies for good discussions, The Discussion Book: 50 Great Ways to Get People Talking, by Brookfield and Preskill (2015). Using Today’s Meet is #5 (I’ll let you know a bit later if I’d recommend the book, so far so good). Upon seeing this tool, I had to take a break to try it out, and then write you.

One problem with discussions, or at least one problem I have in running a course with discussions at the center, is the sense that they are ephemeral and, since nothing concrete is produced, worthless. A lecture produces notes of facts, methods, etc. to be studied and recalled later, while our discussions are, in practice, not often used for reflective purposes later on. This shouldn’t be the case, but it runs against the status quo for high-achieving college students to take notes about things that actually interest them and that they may want to think about again later, especially if all the ideas and examples are being generated by the rabble instead of the one with the PhD. My hope is that using Today’s Meet can help me and my students produce something concrete to which we might return later, and spice up our discussions in other ways too.

Even though we have other tools for collaborative discussion and development, adoption and use to this end—strengthening discussions—have not been strong. Google docs, sheets, and Slack channels are the ones I use most, as they are essential to other parts of my classroom workflow. Students tend to see their participation in these spaces as entirely formal: part of their official record of student achievement. Spontaneity, tentativeness, and separating what you say from who you are supposed to be don’t come easily there. Because contributions are supposed to be saved, turned in, etc., these forms have not been as improvisational as I would have liked; they are too serious for making a small, offhand note. Perhaps I can present Today’s Meet as something a bit different. And that’s before we get to the fun of nicknames.

One more reason Today’s Meet excites me, and the reason I wanted to tell readers of this blog in particular about it, is that the simplicity and spareness of the tool make it ripe for inclusion in bricolages built from multiple tools, and for mutation to other ends, maybe even ones its creators never imagined.

You can hack with this tool.

For instance, maybe you could incorporate a room from Today’s Meet into an ARIS game as a “webpage” object. Since a user does not need to log in, and since the room’s UI is so basic, it would not be too clunky to combine the two. And the QR code access option, just as with ARIS, leads one to think about coordinating an online discussion with places in the physical world.

As you consider this tool for use in your circumstances, think especially about how you might want to interpret the “nickname” feature. The authors of the discussion book think of it as a way to make it possible to discuss controversial topics without fear of recrimination or the bias of knowing who is speaking, but it also has tremendous creative potential.

One of the more salient features that games and learning folks have latched onto this last decade and a half is the ability for a game to let you be someone else. That person can be someone you invent or someone who is given to you, a shell you inhabit. Considering your actions and choices from the perspective of the character you inhabit can provide insight into many aspects of the human condition. Perhaps, some of this can be done with something as simple as a handle and a room to write in.

I’m excited about Today’s Meet because it fills some of the same needs as the other tools I love, like ARIS.

  • It is simple to get started with.
  • It allows me to harness the power of connected computing.
  • It is underdetermined, so that I can think creatively about how to use it.
  • It is general enough to be broadly applicable.
  • It is small enough and not walled off: I can incorporate it with other work and tools.
  • Let’s not forget, it’s free for me and everyone else to get started with.
  • If I need more, and am able/willing to pay, it is not a big wall facing me. The premium features are friendly to an individual consumer, not just institutions.

Showing Off Seesign

There’s a new project here at Local Games Lab ABQ: Seesign. It is Celestina Martinez’ tool to visually identify signs (as in sign language, not road signs). She used Nomen to make Seesign, a new platform from Field Day Lab, my ARIS co-conspirators.

Martinez’ goal with Seesign is to replace the very user-unfriendly dictionaries sign language students need to use to look up unfamiliar signs with something obvious and easy to use. Instead of searching by the English word you think might correspond to what you saw, all you have to do is mark the features of a sign you do notice: hand shapes, movement type, etc.

Seesign’s Main Screen

You do this by tapping the relevant choices on Seesign’s main screen (above). The possible matches to your choices are numbered and linked at the top right of the screen, and are narrowed in scope with each additional feature you mark. At any point, you can tap on this number to see those potential matches (below; in the actual app these are animated GIFs).

Likely Matches to a Choice on Main Screen

Nomen makes it quite simple for novices without much technical background to produce tools for this and similar identification tasks. More on that later, but for now, note that Ms. Martinez does not have a technical background and put this app together, with only minor assistance from me, part-time over this last year. She began work on Seesign last fall as part of an assignment in my course, Things That Make Us Smart, and has continued this semester as an independent study. She also submitted it to the 2016 UNM App Contest, but did not win any of the big prizes (robbed IMO).

Why Seesign Is Cool

Ms. Martinez describes it a bit better than I do, but basically, her app was born of the frustration of learning sign language with common tools. The usual approach to looking up signs is to search by the English word you think might correspond to what you saw. It is not hard to see that this is terribly inefficient: it works fine if you are trying to say something you know in English, but not at all for identifying signs you see. The affordances of dictionaries are just not very useful for organizing visual as opposed to textual information. Reverse-lookup visual dictionaries do exist, but they are uncommon, expensive, and very limited in terms of

  • The words they provide – very few
  • How signs are represented – static images, very basic
  • The mechanics of looking up – tons of pages to rifle through.

Seesign improves on all of these greatly with its obvious, streamlined interface for marking features and viewing the matching results. Not only can the user mark as little or as much of what they recognize—maybe you notice the type of movement, but not what hand shapes were used—but Nomen is also flexible in how it organizes and displays matches, showing both “likely” matches, signs that match all marked features, and “possible” matches, signs that match at least one marked feature. And since it is a web app, unlike my long-time favorite ARIS, Seesign is accessible on almost every internet-connected device with a screen: phones, tablets, laptops, desktops.
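To make that “likely” vs. “possible” distinction concrete, here is a minimal sketch of the matching idea in Python. The function name, catalog, and feature labels are my own invention for illustration; this is not Nomen’s actual code or data.

```python
# Hypothetical sketch of Nomen-style matching; names and data are invented
# for illustration and are not Nomen's actual code.

def find_matches(signs, marked):
    """Split a catalog of signs into 'likely' matches (every marked
    feature is present) and 'possible' matches (at least one is)."""
    likely, possible = [], []
    for name, features in signs.items():
        hits = marked & features
        if hits == marked:          # all marked features present
            likely.append(name)
        elif hits:                  # at least one marked feature present
            possible.append(name)
    return likely, possible

# A tiny invented catalog: each sign maps to its visual features.
catalog = {
    "MOTHER": {"open-hand", "chin", "tap"},
    "FATHER": {"open-hand", "forehead", "tap"},
    "NAME":   {"h-shape", "tap"},
}

likely, possible = find_matches(catalog, {"open-hand", "tap"})
# likely → ["MOTHER", "FATHER"]; NAME matches only "tap", so it is possible
```

Marking another feature just shrinks `marked`’s match sets further, which is why the count of candidates narrows with each tap.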

So far, Martinez has developed two iterations of her basic prototype (one each semester). She currently only has 10 words catalogued, but has worked extensively at

  1. Developing a useful categorization scheme that classifies words in a way that brings out the visually distinct features of signs and fitting them into a small number of readily apparent choices, and
  2. Producing a visual format for illustration and feedback on the users’ marked choices: animated GIFs.

I’ve been really impressed with Ms. Martinez’ work this last year, both in her general ability to start with an idea and really dig into developing it into something tangible (not just the app but the “soft” work too: finding a model to work with, entering the UNM app contest, etc.), and in the acuity with which she has, through thoughtful iteration, developed the experience of her app, especially in those two areas above. She has also displayed real attention to detail. Nomen is, strictly speaking, authorable without writing code, but it is still essentially a working prototype of the basic functionality Field Day hopes to turn into something far more polished in the next year. In particular, the formatting guidelines are strict and unforgiving. Ms. Martinez’ build compiled on her very first upload, a feat I doubt I will ever manage.

This work is worth looking at for several reasons, some of which go far beyond her specific innovations:

  • It is a good idea: the advantage and use of this tool to aid sign language learners is clear.
  • It begins to describe a model for extending learning outside the classroom based on interests developed within.
  • It is an example of how accessible tools can open new areas of development by subject matter enthusiasts instead of technologists.
  • It is an example of how a local games lab can encourage and support development in small but meaningful ways.
  • It is an opportunity to discuss how events like app contests might focus innovation and enhance collaboration, and having seen this contest play out, what might be learned from what apps are being tried, why, and how.
  • It is the first documented use of Nomen (there’s another one I’ve been working on I hope to post about soon), a new accessible and open tool that could see myriad uses by others across many fields of inquiry and contexts of interaction.

In follow-up posts, I’d like to discuss these more general ideas. If you beat me to it or have other resources to point to, then by all means…

Minimize Clutter While Notebooking with ARIS

ARIS is most often used to author content for players to experience. But it also includes functionality for sending players out to experience the world and share what they find with each other and with you. This can be data collection, photo mapping, etc. The Notebook lets players record geolocated media (video, audio, photo, text) and together build a collaborative record of their explorations. This functionality has broad potential, and combining data collection features with the other affordances of ARIS (making games, telling stories, etc.) is a truly unique thing. Being able to richly establish a context for those you are sending out to do the collecting is a fantastic opportunity.

Buuuuut, if you’ve actually used the ARIS Notebook, if you really had people go out there and collect some pictures, etc. then you know that clutter is a problem, especially when there is a good deal of non-Notebook content you need players to see. After a bit, the map just looks like a mess.

In ChronoOps, by the 503 Design Collective, notes left by players obscure the map and authored content.

Clutter exists because every note is marked on the game map for all players. This can be useful for viewing notes later, but it can really get in the way too. ARIS will continue to evolve, so this clutter may eventually become less of a problem. But there are some things that you can do right now as an author to clean things up for your players.

Inspiration for data collection activities

It’s just about one project right now (the history of the Macintosh computer), but it has a really interesting take on collective historical storytelling. Their idea and platform might be good inspiration for what to do on the back end of data collection. Individuals tell stories, with a little bit of classification and media. Comments and ratings help direct readers, and it is searchable in a few different ways.

Analogies with SAGE

SAGE is open source math software. I spent some time becoming familiar with the software and the project before I jumped ship from the math world. Essentially, the idea is to be able to replace the closed, expensive, proprietary math software frequently in use (e.g. Matlab, Mathematica, Magma) with a free and open source alternative. Along with the usual arguments for this sort of software, there are additional reasons research mathematicians have for wanting something open like this. Truth in math is determined by people, and if people can’t see the source code, they can’t decide whether an argument holds water. Closed software is inherently divorced from the basic epistemology of mathematics.

Anyway, SAGE has been evolving for years, and even though it’s been around longer than ARIS, it looks like they’re heading for a watershed moment too. William Stein writes on the SAGE blog this week about some of the organizational possibilities ahead of them in terms of hosting and long term sustainability. It’s interesting to see some of the similarities between our two projects. I hope there’s a lot to learn from their work and struggles.

There are a few other things that Stein has written that might help those far away from math understand what this open source math project is about.

Explaining SAGE and the need for open source math software

William Stein’s personal history with math software

Thoughts about technology and math education

Tensions between research and education uses for SAGE (this one might also be relevant to our work, but mostly it makes me glad to be over here instead of in math)

Photo taking and simple collaborative collections

Jim Mathews and I have often discussed taking pictures as a way to get to know a place, comment on or investigate an issue there, or more basically just have a conversation about a place. It’s a kind of note taking that I personally enjoy and gravitate to more than breaking out pen and paper. It’s an activity we feel mobile devices could impact in a positive way. In particular, we think they could bring together a few things that are normally separate activities, each requiring its own time and place to establish and maintain.

  • Collaboration – Everyone taking pictures and the pictures become the basis for group discussion and commentary as well. A mobile device could facilitate taking pictures within a particular, ad-hoc context together as well as the ensuing discussions.
  • Curation – The categorization and relative relevance of the photos is important. The mobile could apply appropriate metadata that would assist in basic efforts of organization.
  • Extended participation – Though our discussions are typically centered around class participation, many topics naturally lend themselves to wider, informal participation. The mobile could mean that an investigation could be joined by others when and where appropriate without having to start over or scale additional walls.
  • Structure – Taking photos is only one step. A mobile could act as a more connected component than a mere camera and be common to the other forms of participation and activity desired.
  • Training – Poor word choice, but concise. The mobile could not only allow people to participate in a given activity but help show them the ropes too.
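One way to see how collaboration, curation, and structure could live in a single artifact is to sketch what one collaborative photo note might carry. This is purely my own illustration under those aims, not ARIS’s actual data model; every field name here is invented.

```python
# Purely illustrative sketch of a collaborative photo note; field names
# are my invention, not ARIS's actual data model.
from dataclasses import dataclass, field

@dataclass
class PhotoNote:
    author: str                    # collaboration: who contributed it
    image_path: str
    lat: float                     # curation: geolocation metadata
    lon: float
    tags: list[str] = field(default_factory=list)      # curation: categories
    album: str = ""                # structure: which investigation it joins
    comments: list[str] = field(default_factory=list)  # collaboration

note = PhotoNote(author="jim", image_path="corner.jpg",
                 lat=43.07, lon=-89.40, tags=["signage"], album="State St.")
note.comments.append("Compare with the 1995 photo of this corner.")
```

The point of the sketch is that the device, not the photographer, could fill in most of these fields automatically, which is what would let picture taking, discussion, and organization happen in one place.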

With these aims in mind, we’ve been working on making ARIS capable of supporting collaborative photo gathering (and other media collection as well). While that’s happening, and as inspiration for what would work best in ARIS, I try to keep an eye out for other apps that offer similar capabilities or show the way. Two recent apps in this category are the famously funded and ambitious Color, and Piictu (along with its upcoming doppelganger, Photovine from Google).

At a basic level, these photo apps are different from other photo sharing apps. They are not so much a place to put photos of your life so that family and friends can see them; rather, they are organized around the content of the photos in one way or another.


Color got a lot of press back in March because it received a boatload of VC funding and spent a good chunk of it on a domain name. The app itself is also apparently built to be even hungrier for collecting and sharing our private data than Facebook. Aside from that, and a somewhat strange UI, it almost gets this collaborative photo taking thing right.

The basic function of Color is to allow initially physically co-present people to contribute to a common photo album with very little configuration. Some other strengths:

  • Access to the photo album outside the app
  • Ability for participants to like and comment on the photos in the group album
  • Once players begin an album, they don’t need to remain near one another

However, the app has several limitations that impair this basic functionality:

  • Group albums have an automatic timing mechanism. Go back to an album a day later and you won’t be able to add to it any longer.
  • Group albums can only be begun when players are co-present. This prevents participation across places, and inhibits category-based, rather than event-based, photography.
  • Group albums are not nameable, nor do they carry any other metadata. Any context is carried by the pictures and between the players.
  • There is no mechanism for selective sharing. Any active, nearby group album is joinable by anyone.
  • There’s no way to combine, transfer, or otherwise alter album content once it’s in the app.

This seems more an artifact of the strange ambitions of the producers than a technical limitation. Their idea is that this app would be used to crowd-source concert photos or more likely, party photos. So I’m not especially optimistic that Color will be a better match as time goes on.


Piictu is, at a glance, more similar to typical photo sharing apps like Instagram, or the photo sharing that happens on Twitter. In fact, the follow mechanism and its centrality in the app tend to distract from the key innovation, which to me seems promising: Piictu’s main gimmick. In Piictu, your caption becomes a set of instructions to other Piictu users, and your photo becomes the first in a collaborative album. Google’s upcoming Photovine looks to be a pretty straight copy of Piictu, but their teaser video does a good job of quickly communicating this basic mechanism.

Piictu also has a like function, but no image comments. There is no way to remove a picture from an album, or to search albums by title; the only way to contribute is to happen upon an interesting one in the main interface. This gives the Piictu app an air of playfulness, but rules out going back at a later date or finding topics to which you may wish to contribute. Beyond a suggestive title, no other information can be added to an album, and there is no way to access an album specifically, or outside the app. Piictu can push out to Tumblr or Facebook, but this points more toward personal image collection than collaborative work. Like other image services, this app does not allow album membership to be defined; anyone may add.

The Future

I hope some of our aspirations can be contained in a version of ARIS soon, and that we think of good ways to integrate collaborative picture taking into our work producing local games. I’m also on the lookout for other apps like these above that help us to understand what we want and how that differs from the more typical photo apps out there.