What might mobile media afford education?

I’ve been doing some talks on campus and thought that it might be nice to start posting parts of the presentations to encourage conversation and design.

Below are 8 categories of the kinds of things I can imagine doing in the next few months with mobile learning.

#1: A new wrapper for existing media forms

Mobile is undeniably a new media form. Some even argue it is the most ubiquitous communication technology on the planet as of 2010.

What’s interesting is that the new breed of mobile devices not only offers new affordances such as location services and multi-touch interfaces, but is also capable of containing the things we are used to, such as web browsers, podcasts, text and animation.

I’m willing to bet that the first line of adoption is going to be (and already seems to be) a matter of creating mobile interfaces to our current learning assets.

  • Lecture Capture: I often roll my eyes when I think about recording hour-long videos of a talking head and distributing them as new media. But have you been out to iTunes U recently? It is freaking amazing. I’m learning about media from Jenkins at MIT and programming from Stanford. No matter how much of a media snob you may be, that is cool.
  • Electronic books bring the advantages of indexing and searching to traditional text. It’s also very cool to have all your books in the palm of your hand instead of 20 lbs of weight on your back while walking around campus.
  • Traditional “E-Learning” modules, with simple drawings and bulleted lists, have always sucked and will continue to suck, but at least you can click the next button while on the bus.

#2: Physically Contextualized Knowledge

Some knowledge is abstract, but some knowledge is deeply tied to space and place. For example, I’ve always wanted to make a mobile game about US history in Boston. If you have ever been to the site where an event took place, you will know how much richer the experience of the story can become. It’s like we get “schema for free”; the real world (in better than HD quality and bigger than an IMAX) lets us experience the context of our studies. Here are some other examples:

  • Chris Holden and Julie Sykes teaching a lesson in a Spanish 101 course at New Mexico State using iPod touches running ARIS in the physical context of a Spanish-speaking neighborhood. Real people can be intermixed with virtual characters, letting students practice the language in a guided but authentic environment.
  • Teaching horticulture in the UW Arboretum. I’m hoping we can find someone to build this with, but I can imagine learning about a rare plant when it is physically encountered, or being sent on “missions” to locate a species in the environment where it should be found.

#3: Place-Based Learning

We can also take the concept of content and place a bit further and engage learners actively in their local neighborhood with its unique history, literature and art. My favorite example of this was done by the Local Games Lab, where middle-schoolers engaged in a place-based narrative, played on Pocket PCs, built around the events that took place in a neighborhood here in Madison called “The Greenbush.”

This little part of the near-west side at one time had a rich Irish community, but due to the construction of a hospital and other city planning, the tight community was replaced with parking lots and random storefronts. Through playing the game, the kids saw the now vs. the then and ended up making a case to the city council to never do anything like this again. 13-year-olds involved in their city, learning to think about the responsibility of being a citizen – it’s a cool story and I’d like to see more.

#4: Mobile Data Collection

Our team has been talking with Mark Berres, a professor here in the Genome Center, about building a mobile app to complement some algorithms he has created for determining a bird species from an audio recording of its call. This is an exciting way to look at mobile devices because it allows the senses and expertise of the user to be magnified, giving them a very useful tool in the field.

After a bird has been identified by audio or by sight, the software can geo-tag the sighting with a time stamp and send this info to the server. Now we are doing distributed data collection and mobile delivery of the content.
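To make that concrete, here is a minimal sketch of what such a geo-tagged sighting record and upload might look like. The endpoint URL, field names and coordinates are hypothetical placeholders, not the actual project’s API.

```python
# Minimal sketch of a geo-tagged sighting report (endpoint and field names are
# invented for illustration; this is not the actual project's API).
import json
import time
import urllib.request

def report_sighting(species, lat, lon, server_url="https://example.edu/sightings"):
    """Package one identified bird as a geo-tagged, time-stamped record and upload it."""
    record = {
        "species": species,             # e.g. "Cardinalis cardinalis"
        "lat": lat,                     # latitude from the device's GPS
        "lon": lon,                     # longitude from the device's GPS
        "timestamp": int(time.time()),  # Unix time of the sighting
    }
    req = urllib.request.Request(
        server_url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 or 201 means the server accepted the sighting

# Example: report a cardinal heard near the UW Arboretum (coordinates approximate)
# report_sighting("Cardinalis cardinalis", 43.0460, -89.4264)
```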

On most of the modern devices we have accelerometers for measuring movement and orientation, GPS, full AV recording, a still camera, a touch screen and a light sensor. Any one of these can be used to capture anything from interviews to the physics of a roller-coaster ride.

In addition, I’d love to see a whole set of USB accessories made for it to measure things like pH, temperature, color, etc., like we had back with the old TI-85. Looks like I might have to break out my soldering gun again.

#5: Physically Embedded Information

We’ve all seen barcodes. They are a visual way to represent a number or some other data that a computer can read easily. Last summer we created a prototype that used this info to link to custom micro-blog software, so that any object you put a sticker on would have a communal discussion space attached. It’s like virtual graffiti but less pretty: Amazon’s comments and ratings in the physical world.
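A toy sketch of that idea, treating the scanned value as nothing more than a key into a table of discussion threads. The storage and the example sticker ID are invented for illustration; the actual prototype’s micro-blog software is not shown here.

```python
# Toy model of the sticker-to-discussion-space prototype: the scanned barcode value
# is just a key into a table of comment threads (storage and IDs invented here).
threads = {}  # barcode value -> list of comments

def discuss(barcode_value, comment=None):
    """Look up (or create) the discussion attached to a physical object's sticker."""
    thread = threads.setdefault(barcode_value, [])
    if comment is not None:
        thread.append(comment)
    return thread

# Scanning the sticker "object-0042" on a library book and leaving a note:
discuss("object-0042", "Chapter 4 pairs well with the lecture on affordances.")
print(discuss("object-0042"))
```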

Now what if we skipped the barcode and just used some image-matching software? Then any object that you can take a photo of can be linked to a whole digital universe. Every object becomes a query into the database, curating data assets and collaborative spaces around physical objects and real spaces.

#6: Augmented Reality

If we take the above observation one step further we arrive at augmented reality. With AR the link between the physical and virtual is made visible by superimposing the two in a single sensory experience, most often visual. Effectively, the virtual is laid over the real, using the conventions of physicality.

We have already done a good bit with this idea in ARIS. In an ARIS experience a virtual item, character or just information (we call them plaques) can be placed in physical space. Users can then interact with these objects by talking with characters, picking up and examining items and so forth. In the most recent prototypes on iPhones that have a compass, we can even superimpose the image of the virtual object on top of the camera image so the user “sees” the virtual projected into the real.
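Under the hood that overlay is mostly angle bookkeeping: compare the device’s compass heading with the bearing from the player to the virtual object, and map the difference to a horizontal position on screen. Here is a rough sketch of that math; the field of view and screen width are assumed values for illustration, not ARIS’s actual numbers.

```python
# Rough sketch of compass-based AR placement: where on screen should a virtual object
# be drawn, given the player's GPS position and the device's compass heading?
# (The 45-degree field of view and 320 px width are assumptions for illustration.)
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from north, from the player to the virtual object."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(device_heading, object_bearing, screen_width=320, fov=45.0):
    """Horizontal pixel where the object should appear, or None if it is off-camera."""
    delta = (object_bearing - device_heading + 180) % 360 - 180  # signed angle, -180..180
    if abs(delta) > fov / 2:
        return None  # outside the camera's field of view
    return int(screen_width / 2 + (delta / (fov / 2)) * (screen_width / 2))

# Example: a virtual plaque due east of the player, camera pointed just north of east.
plaque_bearing = bearing_to(43.075, -89.400, 43.075, -89.395)  # roughly 90 degrees
print(screen_x(device_heading=80.0, object_bearing=plaque_bearing))  # right of center
```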

#7: Apps

Apps are more than just little widgets: they add actual abilities to a user in a black-box sort of fashion. For example, with a math app, the student doesn’t need to understand how to calculate a log function, only how to interface with and read the output from software that does. Think Vygotsky and the Zone of Proximal Development; we could see little apps allowing students to operate in areas where they don’t have expertise as long as they have the technology literacy, augmenting not only knowledge but ability.

#8: Mobile Educational Gaming

Really, this is just repackaging, but because there is still so much to be learned I thought it deserved a separate mention. Bottom line: play, formative feedback, identity, narrative, activity, etc. are the same things that make for both a good game and a good learning environment. Mobiles are not only another platform but also come with a whole cool set of new interfaces and approaches.
