Thursday, April 30, 2015

Reflections

In my first blog post, I stated my belief that the most important division of digital literacies was between the sociocultural and the functional. That division—between “how it works” and “why does it work this way?”—has been a theme throughout the course. I already had some experience with the Adobe product suite. Since I didn’t face a steep functional learning curve, I tried to focus as much as possible on tying the readings to our projects. I spent hours making my collage in Photoshop only to decide that I had visualized the enframing of my subject rather than the standing reserve. As frustrating as it was, I scrapped the collage and started from scratch. I would have been much more reluctant to do that if I hadn’t had prior experience with Photoshop. I imagine it would have been much harder to translate ideas as dense as Heidegger’s into a medium I was still a novice with.

In this sense, my pre-existing literacy helped me to accomplish my coursework. The software we used is also, in general, not cheap. The versions we used in class were available to us through a corporate partnership the university had arranged. The link between access and literacy is not a trivial one. We read more than once about how the technological wonders of the digital era can be used to reinforce existing social barriers. In my last blog post, I questioned Selfe and Selfe’s assertion that the “cyborg” selves created by our attachment to technology could break down those barriers.

And when barriers are broken, it doesn’t always paint a pretty picture. I think Adobe Illustrator, which my group analyzed for our project, is a great product. It’s fun to use, and makes beautiful art. I’m sure many people would, and do, enjoy using it as part of their daily job. Unfortunately, as we learned from our research, Illustrator’s ease of use may be tied to how few people get to make a decent living as graphic designers. Digital innovation isn’t neutral; it has effects on society and the people in it.

I took the most pleasure in considering and demonstrating how important design is. I was intrigued by the examples Jones and Hafner used in the Multimodality chapter, and I took it as a challenge to find my own examples on the web and analyze how their aesthetics affect their interpretation. I intended the website I made for our final project to represent my ideal web magazine. I’m very aware of what size and type of fonts I prefer, how much whitespace I want, and what kind of images complement the text. I ended up replicating pretty closely the spatial logic that Jones and Hafner explained to be evident in most of the modern web. I can honestly say I did not intend this. I don’t think that’s a bad thing, either. It’s only on reflection that I thought about why I had made what I had: a good example of how important it is to have both sides of literacy.

Thursday, April 23, 2015

The Roots of our Cyborg Selves (Selfe & Selfe)

It’s fitting that our last reading takes us farther back than any of the others. Selfe and Selfe’s essay was published in 1996, around when I started using the internet. The authors comb through the chatter on listservs, a medium that seems quaint and outdated to us, to draw conclusions about computer use that resonate nearly 20 years later. They look back through history, from the public forums of ancient Athens, through the salons of Enlightenment Europe, and find that even when a new public sphere opens up to exchange ideas, it is still constrained by the dominant ideologies of its culture (338-9).

The authors repeatedly drive home the point that computer networks are firmly rooted in military research, and are “war machines” (343-4). The opening up of these networks to civilian use is, they say, a way for states to push their populations toward spaces that allow new and free expressions of ideas, but are also regulated. In this sense, computer networks are still fundamentally shaped by their roots in war, and broader state control, such as “computerized records of citizen criminals, computerized lists of welfare recipients, computer-supported census records” (342). I immediately think of contemporary revelations of government cyber-spying on its own citizens and across the globe. I imagine these aren’t revelations at all to Selfe and Selfe; such use was implicit from the birth of computer networks.

But the authors point out that, while powerful state and corporate interests may still have zones of power in computer networks, there are also zones of impotence (348). While these networks were designed for control, there are ways to subvert them. Selfe and Selfe refer to such actions as “tactics,” which are “small and—at some levels, partially invisible—ways of ‘making do’ within an oppressive system that reproduces its own power constantly and in extended ways” (349). They give the example of technical writers (who were probably among the earliest regular internet users who couldn’t program) going outside of their employers’ restrictions to seek digital information and connect with others who shared their professional and personal interests. This still happens today, though it’s not exactly revolutionary to turn to the web to quench curiosity. I think of contemporary “hacktivists” such as Anonymous as the current practitioners of tactics. Their use of computer networks to strike at businesses and governments they disagree with is a conscious display of subverting what they consider an oppressive system.

The authors ponder whether the novel approaches to computer use in the 90s are paving a road to humanity becoming “cyborgs.” I don’t think they use that word in a literal sense (although technologies to make that so aren’t far off), but rather in the sense that we treat our machines as extensions of ourselves. We are “makers of the machine, and this activity has made us partially machine ourselves” (352). As we’ve discussed several times this semester, our digital literacy doesn’t just add to our lives; it fundamentally changes how we live. The authors think that our cyborg-selves are able to break down barriers. We can “recode, rewrite, reconstitute not only the text of their own bodies, but also the larger cultural/economic/ideological narratives and mythologies of the male-dominated war State, the cultural body politic” (353). This strikes me as more triumphalist than reality has proven. While our ever-present machines have given us new and strange abilities, old barriers remain, and new barriers may still be raised.

Selfe, Cynthia L., and Richard J. Selfe. “Writing as Democratic Social Action in a Technological World: Politicizing and Inhabiting Virtual Landscapes.” Nonacademic Writing: Social Theory and Technology. Eds. Ann Hill Duin and Craig J. Hansen. Mahwah, NJ: Lawrence Erlbaum Associates, 1996. 325-358.

Wednesday, April 15, 2015

How We Interface Now (Manovich)

Computer interfaces have historically been based on metaphors of existing media. As Lev Manovich writes, the original computer metaphor was that of an office desktop. This made sense in the 1970s and 80s, when computers were almost entirely work machines. In the 1990s, with the advent of the World Wide Web, Manovich notes that the metaphor shifted to “media-access machines” such as VCRs or CD players (89). Manovich made these observations in 2001, just after the bursting of the dot-com bubble, and before the rise of “Web 2.0”, the current paradigm of human-computer interaction (HCI). So what’s the current metaphor?

I have a hard time answering that question. The desktop and media metaphors still exist, but they’ve been subsumed into an always-online world that I can’t easily fit into any older media form. The most prominent difference between HCI now and the turn of the millennium is increased interaction. While chat rooms and message boards existed before, these look like huddled conversations in a corner compared to the constant bullhorn pronouncements and responses that characterize a platform like Twitter. There’s also the strange decline in anonymity; whereas the web interactions of old were often between people adopting pseudonyms, the current interfaces encourage linking one’s online identity to the real world.

I’m tempted to say that contemporary HCI is based on no metaphor or all previous metaphors simultaneously. But maybe it’s that, stacked on top of the old desktop, and static media-viewing, there is now a metaphor for human interaction in general. Instead of simply performing tasks or passively viewing or listening to media, we are now encouraged by our interface to “comment” on something, or “tag” something as related to another thing.

Manovich observes that the computer had shifted from being simply a technology to being a “filter for all culture…the art gallery wall, library and book, all at once” (64). This observation fits with his understanding of the then-current metaphor of computer-as-media-machine. I think it’s fair to say the computer is now a filter not just for all culture, but for all interaction between people. What other conclusion could you draw when most people now carry powerful computers around in their pockets whose primary function is communication? In this sense, I don’t think that “the lines between the human and its technological creations” are as clearly drawn as Apple’s 1984 Macintosh represents to Manovich (63). The aesthetic is still relevant, but the lines are decidedly blurry.


Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2001.

Thursday, April 9, 2015

Limited Transparency (Jones & Hafner, Ch. 7)

I use email most days to communicate with work colleagues. Email is a form of media that is somewhat opaque: even though the protocols used to send it are open standards and fairly simple, it’s unlikely that most people—myself included—have the knowledge or desire to modify its inner workings. It’s a strange idea to think about. An email is just a blank digital canvas in which you can put pretty much anything and send it to someone else. This makes it seem very transparent, a very natural way to send a message between people.
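
To give a sense of how open and simple those protocols are, here is a minimal sketch in Python using the standard smtplib module. The addresses and the relay hostname are hypothetical; any reachable SMTP server would do.

```python
# A minimal sketch of handing a message to a mail server over SMTP.
# The addresses and relay hostname below are hypothetical.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "colleague@example.com"
msg["Subject"] = "Quick question"
msg.set_content("An email really is just a blank canvas for whatever you want to send.")

with smtplib.SMTP("mail.example.com", 587) as server:  # hypothetical relay
    server.starttls()          # upgrade the connection to an encrypted session
    server.send_message(msg)   # smtplib issues the underlying MAIL FROM / RCPT TO / DATA commands
```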

Snapchat is an application with a deliberately more limited ability to communicate. It allows its users to send images and video that will last for no more than 10 seconds before disappearing (more or less) permanently. Theoretically, the same functionality exists in email. You can send an image in an email that can be stored in perpetuity. You could send a video that lasts 10 minutes instead of 10 seconds. The appeal of Snapchat is in its opacity. For its users, these constraints are, strangely enough, a sort of freedom. Snapchat not only provides a simple interface for making and sending media (doing the same in an email would require more tools, skills, and time), but also gives its users the illusion of transparency with its messages’ time limits and impermanence.

A Snapchat message feels more like a moment in the real world that occurred and then vanished, as opposed to an email, which could be stored and dug back up indefinitely. The possible permanence of email can make life more difficult. A rude remark said out loud might be forgotten, ignored, or laughed off. A rude email could stick around forever, and, fairly or not, taint its sender’s reputation. Snapchat’s transient nature doesn’t necessarily make it a clearer analogue to in-person communication. What happened before the 10 seconds of video? What’s the context of the image? Does the caption actually reflect what was happening when the photo was taken? Snapchat, as its name implies, uses snapshots of life to encourage conversation. These chats aren’t likely to get out of the shallows, and might be falsely substituted for deeper conversations.


I’ve seen people use Snapchat to extend the conversation a bit. The application is designed to limit captions to a certain number of characters. The idea is likely that the image or video should mostly speak for itself. Lately, I’ve seen people use the application’s drawing functionality to complete thoughts that are cut off by the character limits. If a sentence doesn’t fit into the caption box, the last word or few will be scribbled onto the image before being sent. The crude shape and bright colors of the drawn words make them stand out from the rest of the words in the caption. This can give them a striking quality, like a kind of punctuation at the end of the sender’s thought. 

It’s like if a typed sentence ended with the last words not only colored, but suddenly in a larger and different font.

It’s a function that the makers of Snapchat probably didn’t plan for, but which grew out of users working around the application’s limits.

Thursday, April 2, 2015

Analyzing online language (Jones & Hafner, Activity 5.1)

There is a pretty solid hierarchy of formality in digital interactions.  What passes for acceptable language between close friends in a private chat may not pass the standards of an email between work colleagues.  I see a general trend of text messages at the most casual end of the spectrum, with emails at the most formal end.  Text messages are typically between close acquaintances, and are often one-on-one.  Since they take place on each participant’s phone, they are also very private.  This makes texts a common medium for abbreviations, creative spelling, and emoticons.  The assumed closeness of text participants makes them most likely to understand each other’s idiosyncratic digital speech patterns.

I would place instant messaging right next to texts in formality.  These have all the same constraints as texts, except that they can also take place on desktop computers.  This makes instant messages slightly less private.  They’re probably used more commonly than texts as a way for work colleagues to communicate.  Facebook wall posts are tricky to place.  They can include a lot of short forms that most readers may not understand, but I think the larger audience (all of a Facebook user’s Friends) enforces a standard of formality.  Facebook posts are not private conversations, but they are between social relations and are relatively short-form.

At the far end of formality are emails.  Emails could be between close friends who would understand abbreviations and emoticons, but there are better forms of communication to convey that language.  I generally understand emails to be reserved for long-form communication, regardless of the audience.  Emails can convey complicated information, so they assume a stricter standard than more-constrained forms like texts and instant messages.  In this sense, it is the potential media richness of an email that dictates its language.

Below is a snippet of a text conversation between a friend and me, with its abbreviations, non-standard spellings, and emoticons intact.

Me: Whatcha up to tonight?

Them: Makin pizza dough right now but I wanna hang fo sho! You??

Me: Heading to [a bar] for [a friend]’s roommate’s going away party.

Them: Sounds like fun

Me: Were in the basement if you wanna join.

Them: You sure??
Them: I won’t be imposing?

Me: Of course not.

Them: Kk ill be there in a few :D

Most of the non-standard spellings are just contractions (e.g. “whatcha” for “what are you”), while “fo sho” is a slang form of “for sure.”  Being followed by an exclamation point, it imparts not just informality, but an excitement and jauntiness that “for sure” may not convey.  I’ve never quite figured out a consistent logic behind double question marks, but I take them to mean a slightly more pleading question.  In this conversation, they may mean that if I were listening to my friend, their voice would rise slightly higher or be drawn out a little longer when asking those questions.  The emoticon at the end of the last text frames the message in a context of happy excitement, as opposed to a simple statement of intent to be somewhere in a few minutes.


Dissecting a conversation like this reminds me how I take for granted my ability to read this person’s intents in what are only a few dozen text characters.  I may misunderstand the same language coming from someone else, or my friend may communicate in a completely different way in an email.  The intended meanings arise out of nothing but the digital context and the relationship between the participants.

Thursday, March 12, 2015

Multiply-torn Attentions (Jones & Hafner, Ch. 6)

Perhaps a decade ago, “multitasking” was a hot word to put on resumes.  It showed that an applicant was hip to the digital world, and knew how to navigate its competing attention-takers.  This isn’t considered much of a skill anymore.  Being able to pay attention to multiple streams of information at once is just a condition of living in the modern world.  I think most people are now aware of the constraints on attention that Jones and Hafner detail in this chapter.

I hear from older colleagues that meetings used to be fewer and smaller when they were confined to physical rooms.  The combination of conference calls and live-streaming of computer desktops has altered what Jones and Hafner call attention structures.  The biggest change is in the communication tools.  The tools I mentioned make it possible to expand a meeting beyond a meeting room, or even a city.  This affects the other attention structures as well.  The number of people involved can multiply, bringing new perspectives to the meeting.  These people also have different social relationships, making the interactions in the meetings different.

The affordances of this technology are clear.  More people get to collaborate more easily on more tasks.  But the constraints are also obvious.  The easier and wider an invitation is to send, the more get sent.  This taps into what Jones and Hafner refer to when they speak of living in an “attention economy” (90).  It’s ironic that, the easier it is to get a hold of anyone, the harder it is to…well, get a hold of everyone.  And if someone has multiple invitations to tasks that may only require that they dial a phone number and watch a screen, they’re tempted to do as many of those tasks as possible at once.

As the authors explain with the computer classroom example, the key is to align our attention structures to the right context.  In the example of meetings, it seems wise to sparingly use technologies that physically remove participants.  Those who can meet in person should, while those who absolutely can’t can connect digitally.  In this sense, multitasking becomes less of a proactive killing-two-birds-with-one-stone activity, and more of an occasional convenience.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Wednesday, March 4, 2015

Contextualizing the Class Computer (Selber)

Until fifth grade, the bulk of my computer use in school consisted of playing Oregon Trail on some truly ancient Macs.  As informative as I’m sure repeatedly dying of dysentery was, it didn’t feel like an experience that grounded me in the realities of how computers would be used later in my life.  My fifth grade teacher was a little savvier.  He tried an experiment where we passed around a piece of paper, writing contributions to a conversation one student at a time.  The goal was to explain the concepts of internet chat rooms and forums.  The experiment’s success was mixed, but it forced me to think about the internet in a way I hadn’t: as a social medium that was both like and unlike the non-digital world.  The experiment contextualized the technology for me.

Stuart Selber thinks that understanding the social context of technology is vital to students’ use of it, and ultimately, how well they navigate the digital world.  He highlights how easy it is for technology to be decontextualized for students, however.  He quotes David Orr saying that conventional teaching of technology risks students having “no confrontation with the facts of life in the twenty-first century” (9).  Confrontation is the key word.  Students are at no risk of under-exposure to technology, but how well can they situate their use of technology? 

UNO has an agreement with Microsoft that gives all students access to its Office 365 suite, even on their personal devices.  It’s quite a deal, giving students free access to tools that are considered industry standards.  How the UNO announcement of the deal justifies it is telling: “As educators, everyone at UNO is united behind a single goal – help prepare our students to become the best they can be…According to IDC students with Office skills are better prepared for work in the professional world.”  The last sentence includes a link to a Microsoft article detailing the study in question.  Selber would likely see these statements as decontextualizing the technology.  It juxtaposes students “being the best they can be” with simply being good workers.  Is that the best they can be?

Selber thinks that teaching the social context of technology is too often an afterthought, and should instead be the core of instruction (21).  But economic realities seem to work against this notion.  If a company offers a university a killer deal for access to its software, university administrators aren’t likely to complain when that company wants to advertise the deal as training a new workforce.  Selber thinks that purely functional digital literacy need not be disempowering, but can serve as a gateway to more critical analysis through humanities education.  This class is a good example of his notion.  It shows that teaching students how to use software and use it well doesn’t have to have purely vocational goals.  It’s just as possible to ask students to actively question technology (“Ja,” says Herr Heidegger) as they learn it.

Selber, Stuart A. Multiliteracies for a Digital Age. Carbondale: Southern Illinois UP, 2004.

Wednesday, February 25, 2015

The Question Concerning (the Essence of) Technology (Heidegger)

By chance, I happen to be reading Brave New World, by Aldous Huxley.  The world in the title is a sterile one in which humanity has completely surrendered itself to technology, where Henry Ford, who popularized the assembly line, is the closest thing to a god.  If Martin Heidegger were asked, he wouldn’t say that it was technology itself that the ordered citizens of Huxley’s world were subjected to.  It was a certain context of viewing technology, one he considered dangerous to humanity’s free essence (32).

Context meant a great deal to Heidegger.  How we see the world.  How we see ourselves in the world.  How we speak of the world.  Etymology, and language in general, is so central to his work that his idiosyncratic uses of German phrases are often italicized alongside their English translations.  In his exploration—or should I say revealing—of the context in which we moderns view technology, he spends a great deal of time explaining how the Greeks referred to the manufacture, use, and context of tools, sculpture, and poetry.  He does so to eventually land on the revelation that they used the same terms to refer to all of those contexts (34).  It is we who have insisted on taking techne to be nothing but technique, and purging it of any context aside from use by humanity.  We call this technology.

Heidegger’s term for this way of thinking of technology is “enframing,” which “blocks the shining-forth and holding-sway of truth” (28).  Enframing nature makes us consider it a “standing reserve,” a utility.  He thought that truth was a more primal desire than mere utility.  I find his discussion of the aims of modern physicists interesting for its context.  This essay was developed between 1949 and 1953, after the novelties of quantum physics had lost their charms and segued into the fears of the atomic age.  He says that enframing “demands that nature be orderable” and that our investigations into causality (which I would call science) are shrinking into mere reporting of calculations (23).  This is a bleak view of science.  When I studied physics as an undergraduate, I thought I was searching for truth in the same way as Heidegger’s ancient Greeks.  His point seems to be that instead of just challenging nature, we should be questioning how we do so, and not accept it as the only way to reveal truth.

After finishing this essay a second time, I thought of Keats’s “Ode on a Grecian Urn.”  It’s not just because of Heidegger’s use of a silver chalice as an example of causality.  It’s also his discussion of poetry at the very end of the essay:
The poetical brings the true into the splendor of what Plato in the Phaedrus calls to ekphanestaton, that which shines forth most purely.  The poetical thoroughly pervades every art, every revealing of coming to presence into the beautiful. (34)
It took ancient Greek techne to bring forth the poem’s titular Urn, an object of both practical and artistic worth.  But it also required a kind of technique by Keats as the poet, a revealing of truth.  I wonder if Heidegger agreed that

        "Beauty is truth, truth beauty,"—that is all
         Ye know on earth, and all ye need to know

Heidegger, Martin. The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Garland, 1977.

Wednesday, February 18, 2015

Instant and persistent messaging (Jones & Hafner, Ch. 5)

I often hear the claim that most communication is visual and audible.  Without facial expressions, hand gestures, vocal tones, and interjections, we’re left with something invented solely by humans: language.  In the digital world, a great deal of communication is purely textual, and it can cause dramatic changes in behavior.  Jones and Hafner’s discussion of transaction costs made me think of how I use text communication in my daily life.  Most of the examples given in the chapter on online language have to do with social life, but I’d like to explore how digital communication affects the workplace.

When my desk phone rings, I get startled.  What could possibly require hearing my voice?  Doesn’t this person know they can instant-message me, or send an email?  Actually, it’s very understandable.  Both email and IMs provide the affordance of quickly communicating something without having to spend the time it takes to have a full verbal conversation.  But the lack of verbal cues can be a constraint.  An email or IM may not convey the urgency of a situation as well as a tone of voice.  Text-based messages are also much easier to ignore.  After all, they’re just sitting there in my inbox or in a corner of my screen.  My phone ringing tells me that something is urgent, and I respond with the same urgency.

With so many options to communicate with colleagues, there are clear affordances and constraints to each.  Email provides a long format for communicating complex information, but demands the least timely response.  This also means that emails can pile up quickly.  It's absurd how many emails I can get in a day.  I guarantee that most people would not be able to handle the equivalent of that much information verbally.  Instant messaging makes low-complexity conversations quick and simple, but can also be ignored, and lacks the same cues that a verbal conversation would carry.  The constraints of these forms give phone calls or in-person conversations greater importance.  Jones and Hafner explain this exact phenomenon with regard to romantic partners reserving phone calls for “special” importance (77).

It’s easy to adopt a contrarian and curmudgeonly attitude toward digital communication, and wonder why we don’t all just get up and talk to someone in person.  But the very availability of such forms of communication has changed what’s possible.  I work with people across the country and around the world.  That probably wouldn’t be true if the means didn’t exist to quickly communicate with them.  It’s inevitable that we all have to come to terms with the differences between the modes of expression available to us, and fit each one into its appropriate context.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Thursday, February 12, 2015

Multimodal and international layouts (Jones and Hafner, Ch. 4)

Our screens are not glowing pieces of paper.  The digital document represents a logic of understanding that is distinct from that of the printed page.  This is the central point of Jones and Hafner’s chapter on multimodality, the combination of different “modes” of expression to form the documents we regularly interact with in the digital world.  As the authors note, the web really did use to be little more than a collection of hyperlinked text files with the occasional visitor counter and dancing baby GIF.

Today’s web is much more refined, and yet, there is a spatial logic that seems to underlie most sites’ visual layout.  As Jones and Hafner explain, information on a web site tends to move from the given (what you knew you were getting into when you clicked the link) on the left to the new and yet-to-be-determined (fill in these fields, dear visitor) on the right.  There’s also a movement from the ideal (what will I get from this site?) at the top to the real (who copyrighted what?) at the bottom.  There may also be some central object to draw attention.

This arrangement seems to be governed by the Western tradition of reading from left to right, and top to bottom.  It’s interesting to note that Jones and Hafner are English professors at City University of Hong Kong, situated in a former British colony that’s now a semi-autonomous enclave of China.  This made me wonder how widespread their idea of a basic web layout really is.  The main web page of the South China Morning Post, Hong Kong’s largest English newspaper, certainly seems to conform.  The paper’s title is on the top left, with a search bar on the right.  Moving down the page, articles appear mostly on the left, with user-driven features like polls on the right.  At the bottom is the boring stuff (FAQ, terms and conditions, contact info).  What about the Oriental Daily News, the city’s largest Chinese paper?  It’s striking that, even without understanding the text, I can make informed guesses about what each section of the page is because it does indeed conform to the same basic layout.

I found it funny reading Jones and Hafner’s insights on digital visual design in blocks of text on a page accompanied by black-and-white images of web sites.  There is, in fact, a companion site to the book.  How is it laid out?  The front page conforms largely to the left-right, top-down aesthetic, though it’s a bit sparser than a Hong Kong newspaper.  The pages on individual chapters don’t look much different from Web 1.0 pages: bulleted text with occasional images.  They might link to an infographic to prompt a student activity, but the visual media doesn't seem to be integrated tightly into the pages any more than the usual hyperlink.

Maybe the publisher missed an opportunity by not fleshing out the visual design of the companion site to a book on digital literacy.  Maybe they just didn't think it worth investing that much effort.  It may also be that spatial layout takes on the most importance in the first page a viewer sees.  As one delves deeper into a site, the information (usually text) takes on a greater importance.  Complex visual design creates a hook that most book covers can’t, but it may be the content that determines how long a visitor (or reader) sticks around.

Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Drawing out the meaning in images (Kress and van Leeuwen)

In 1972, the Pioneer 10 spacecraft was sent on a course out of our solar system with a plaque showing simple scientific facts, including depictions of a man and woman, and a diagram of the sun and planets.  An arrow emerges from the third planet, and points to an icon of the spacecraft.  This was meant to represent Pioneer 10’s origin and trajectory to whomever (or whatever) discovered it.  But would an arrow have meaning to these hypothetical ETs?  What if their ancestors didn't spear other creatures for their flesh?  The arrow is very simple: one long line, with two angled short lines on one end.  It’s an example of a symbol that seems to have a shared meaning for all of us here on the pale blue dot.

[Image: The plaque from the Pioneer 10 spacecraft, meant to explain Earth to the friendly extraterrestrials who find it.]

Kress and van Leeuwen think that visual communication is a basic form of communication apart from speech and writing.  They also think it gets short shrift in a globalized (Western) culture where the written word is still paramount.  The authors go through the usual story of writing developing out of the need to record amounts and transactions (21).  But where did the symbols come from that came to represent how many heads of oxen are owed to which king?  From pictures.  Symbols for ox became letters that began the word for ox.  To Kress and van Leeuwen, this represents verbal language subsuming the visual (22). 

In 21st century Western culture, purely visual communication is often seen as childish, a form of expression below writing (17).  Why do teachers shift their evaluation from our pictures to just our writing as we move through school?  If writing originated as images, wouldn't this make visual literacy at least as important as its verbal and written equivalents?  Kress and van Leeuwen certainly think so.  They don’t say so explicitly, but I get the feeling they think visual communication takes precedence by being more primal.

[Image: An Aboriginal Australian drawing; the person who drew it had no written caption to leave.]

The authors buttress their conviction by explaining how visual literacy is central to children’s understanding of the world.  They relate the story of a child making a series of drawings, and only resorting to words when asked by an adult what the images meant (36).  They posit that “[i]t may well be that the complexities realized in the six images and their classification were initially beyond the child's capacity of spoken expression, conception and formulation” (39).  The implication is that there’s nothing wrong with this.  The child was able to fully express something meaningful to him without recourse to words until forced.  This can be as true for adults as children.  The dominant mode of literacy that happened to originate with Bronze Age Mesopotamian traders isn't necessarily the default for all cultures.  As Kress and van Leeuwen note, Australian Aboriginal drawings exist apart from verbal translations, as an independent form of communication (22).

In our own culture, the trend may be rolling back.  Digital technology is giving visual design greater importance to more people.  Whereas visual layout was mostly the concern of a small number of experts, desktop and web publishing have opened up the field to people who have a greater need to communicate visually.  Which is a more sensible way to convey a scientific concept: a textbook with occasional static diagrams that illustrate the writing, or an interactive animation with writing that anchors the images’ context?  Science education has, in fact, been heading in this direction, and the authors wonder whether it even constitutes a different understanding of science from the older writing-centric method (31).  I had an astronomy professor who has gradually built up a curriculum of Flash animations that, in his opinion, more richly convey the concepts he wants to explain.  While there is sometimes substantial text, the visual simulations are the centerpieces of his lessons.

It’s a strange revelation to think that we, as members of our culture, are constantly translating between modes of expression.  It may be that no form of literacy takes precedence over any others, but a recognition of how they interact is central to a nuanced interpretation of all symbols.  This includes recognizing that images can have meanings in and of themselves, no translation needed.

Kress, Gunther, and Theo van Leeuwen.  Reading Images: The Grammar of Visual Design.  London: Routledge.


Thursday, February 5, 2015

The not-yet-common creative commons (Jones & Hafner, Ch. 3)

As a teenager, I had notions of being a film editor.  My outlet for this ambition was to take video of movies and TV shows and cut the footage into themed montages set to music.  My goals in doing so were nothing more than to build my experience and showcase my talent.  I was aware that these videos probably couldn’t be freely distributed.  Platforms like YouTube hadn’t truly taken off yet, but if I’d posted my creations on such a site, they likely would’ve attracted copyright complaints.  Jones and Hafner would say that I was engaging in remix culture, by mashing up different media to create new content.

The practice of mashing up media is fed by, and feeds, the social aspect of the contemporary “read-write” (as opposed to “read-only”) web (42).  The ease with which images, video, and audio can be shared and edited has opened the creative process to a much wider audience than the pre-digital world allowed.  I’ve never physically cut film together in my life; I “cut” video clips that came in computer files.

The ease of creativity has not been matched by an ease of legal cover.  In fact, as both Jones and Hafner and Lawrence Lessig note, those who took creative liberties with previous works have denied the same privilege to others.  Lessig’s example is the Walt Disney Company, which was built on taking works that predated copyright, creating derivative works, and claiming them as its intellectual property.  I take Lessig’s point that this is hypocritical, but I also think it’s understandable.  From Disney’s point of view, it’s not their fault that the brothers Grimm didn’t have access to copyright laws.  Disney, in turn, is simply using the power it has to protect something that, if used without its consent, could compromise the company’s image.  I doubt you’d get anyone from the company to publicly make that point, but I think that’s what it comes down to: they don’t see a problem using a power they know they have.

Still, this is the beauty of Lessig’s idea of a creative commons.  Those who could exercise control over their creative works willingly forgo that power, as long as further use is done respectfully and credits the creator.  This does still require that right to be granted.  Lessig also brings up the example of George Lucas, who famously holds that his vision of his works stands above all others (until he sold the rights to those works to…oh, right).  Would Lucas have licensed Star Wars under the creative commons if it had existed in the 1970s?  I’m not sure.  The power to protect a creative work from all tampering is tempting, and if it’s available, it’s hard to turn down.  I don’t think the simple existence of a creative commons will change the status quo.  I think it’s more likely that the sheer volume of mashing-up and shareability will overwhelm companies’ attempts at rigid copyright enforcement.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Polishing a window out of existence (Bolter & Grusin)

In 1896, Auguste and Louis Lumiere screened a 50-second film of a train pulling into a station.  Popular legend holds that the Paris audience leapt to their feet in fear that the locomotive would rip through the screen and into the theatre.  The story is funny and poignant to us in the 21st century: how could those silly people be so naïve?  Bolter and Grusin explain that “[t]he audience members knew at one level that the film of a train was not really a train, and yet they marveled at the discrepancy between what they knew and what their eyes told them” (30-31).  The Parisians were connecting with an image in a way they never had.  This is an example of what the authors call transparent immediacy, the desire to remove the barrier between images and reality.  Our digital world of computer-generated images and virtual reality continues the trend the Lumiere brothers began a century earlier.

[Image: Masaccio's Holy Trinity, an early use of perspective.]

Bolter and Grusin contend that the quest for transparent immediacy began in earnest in the Renaissance, with the invention of linear perspective.  Before perspective, art appeared relatively flat.  People and objects seemed to sit on a single plane.  Some artists experimented with varying the sizes of objects in the background of a picture, but the effect was uneven.  Innovation came in considering a single vantage point for the artist.  Bolter and Grusin quote the 15th century architect Alberti, who explained that “[o]n the surface on which I am going to paint, I draw a rectangle of whatever size I want, which I regard as an open window through which the subject to be painted is seen” (24).  Alberti’s “window” onto his work became the viewer’s window into another world: their perspective.


To Bolter and Grusin, perspective represents the beginning of the mathematizing of art.  Alberti’s window was squeaky clean compared to what came before.  Art was more transparent, and its subjects more immediate to its viewers.  The photograph removed even more of the smudges from the window, and film put it in motion.  What’s the state of the windows through which we view the digital world?

As the authors state, although Renaissance artists used geometry, they distorted perspectives to suit their tastes.  A programmer who uses physics equations to make a digital model reflect natural laws is applying mathematics much more rigorously.  Contrasting these two extremes, I can see the appeal of abstract art, and why it emerged in the late 19th century, after the invention of photography.  Just because an image looks natural doesn’t mean it’s appealing.  Reflecting the natural world as perfectly as possible is the goal of what the authors call the “naïve” view of transparent immediacy (31).
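
To make that geometry concrete, here is a minimal sketch, in Python, of the idea behind Alberti’s window: a single vantage point projecting 3D points onto a 2D picture plane. The focal distance and the sample points are invented purely for illustration.

```python
# A toy perspective projection: the viewer sits at the origin and looks
# through a "window" (the picture plane) at distance focal_length.

def project(point, focal_length=1.0):
    """Project a 3D point (x, y, z) onto the picture plane z = focal_length."""
    x, y, z = point
    scale = focal_length / z      # farther objects (larger z) shrink on the plane
    return (x * scale, y * scale)

# Two posts of equal height, one twice as far from the viewer as the other.
near_post = (1.0, 2.0, 5.0)
far_post = (1.0, 2.0, 10.0)

print(project(near_post))  # (0.2, 0.4) -> larger on the "window"
print(project(far_post))   # (0.1, 0.2) -> same object, half the apparent size
```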

Strangely enough, the closer media moves toward replicating nature, the less natural the methods become.  Whereas a photograph or film can at least claim that the photons entering the lens are direct remnants of the natural world, it would be difficult to claim the electrons flowing through a computer chip represent the hand of nature (27).  The authors state that computer graphics’ claim to reflect reality “seems to be appealing to the…proposition that mathematics is appropriate for describing nature” (26).  I think they come off as a little condescending toward this notion, but I do take their point: the path toward bringing images closer to reality relies on progressively more rigid and complex methods.

Bolter and Grusin repeatedly refer to virtual reality as a kind of holy grail of immediacy.  A truly immersive virtual experience would mean that a viewer “has jumped through Alberti's window and is now inside the depicted space” (29).  The window is smashed, and the image supplants reality.  While seamless virtual reality hasn’t arrived yet, even the art of making two-dimensional images seemingly pop out of a screen can trigger a novel reaction in viewers.  There are digital maestros who are not only aware of how to perform these tricks, but actively strive to push their audience through the window.  At what point does art for its own sake get replaced with ideology?  I think this is what the authors mean when they talk about transparent immediacy being a naïve goal.

3D video and virtual reality are simply the next steps in the legacy of transparent immediacy.  The trend has been to gradually remove the presence of the artist or even a medium.  It’s easy to speak of Masaccio’s Holy Trinity: a single work by an artist whose methods we can dissect and whose inspirations we can debate.  The creation of digital art can seem so much more remote and sterile, the work of hundreds of programmers and animators pushing electrons around screens.  But this is what it takes to make the subject immediate.  I doubt many in 1896 thought a train was really going to roll over their seats.  I think they just marveled at what technology could show them.


Bolter, Jay David, and Richard Grusin.  Remediation: Understanding New Media.  Cambridge, MA: MIT Press, 1999.


Thursday, January 29, 2015

Really Sadistic Syndication (Jones and Hafner, Ch. 2)

I’m flattered when anyone thinks they know me well enough to recommend something.  It tells me that someone has considered my interests, and thinks that there’s something out there that I will enjoy.  Who doesn’t want an enjoyable distraction delivered to them without having to search for it?  What if it’s being delivered by an algorithm?  Is this similar to human behavior?  How useful is it?  Jones and Hafner’s exploration of how technology collects, organizes, and delivers data as (hopefully) useful information caused me to reflect on all of the media that’s being fed to me daily.

I think I’ve got better-than-average control of my digital diet, but there is some spam that gets in.  What’s interesting is that it’s mostly of my own doing.  The authors explain several different types of algorithms (29).  I don’t have much trouble with social algorithms, which trawl social media for my supposed interests.  I certainly see the ads that pop up in my Facebook feed, and I’m amused at the sudden appearance of ads for Adobe software that I happen to have just started using and seeking tutorials on.  Maybe some of it is getting through subconsciously, but I’m too cynical to click on practically any of these ads.  They seem like noise to me.

In this way, I’m using what Jones and Hafner refer to as a mental algorithm (30), letting only the data I consider relevant filter through.  The rest really does get treated like background noise.  But I’ve also set up my own personalized algorithms to aid my sense of digital discovery.  I love the podcast app on my smartphone.  I have a library of podcast feeds that I can refresh at my leisure and instantly have all of the latest episodes.
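
Under the hood, “refreshing a feed” is not much more than downloading and parsing an XML file. Here is a rough sketch using Python’s standard library; the feed URL is hypothetical.

```python
# Fetch a podcast's RSS feed and list its latest episodes.
# The feed URL is hypothetical; any RSS 2.0 feed has the same shape.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Every <item> inside <channel> is one episode.
for item in tree.findall("./channel/item")[:5]:
    title = item.findtext("title", default="(untitled)")
    published = item.findtext("pubDate", default="unknown date")
    print(f"{published}: {title}")
```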

I tend to collect podcasts by stumbling across one episode, deciding I enjoy it, and hitting “Subscribe.”  It’s simple to press that button.  But it can turn out to be kind of a commitment.  What if I don’t enjoy subsequent episodes quite as much?  What if the feed updates too frequently for me to keep up with the latest episodes?  Or maybe I’ve just got so many feeds updating that I fall behind on some.  Does that mean I don’t enjoy them enough to keep?  Is this the kind of anxiety that replaces the daily fight for survival that occupied most people in human history?

Yeah, probably.

I find I just have to occasionally cull my podcast feeds.  This is another use of mental algorithms on my part.  It’s not too different from what we do in our non-digital lives.  We even sort through personal relationships and decide whether to keep or discard them.  What’s different is where the recommendations are coming from, and how they’re being made.  It’s very easy for me to press a button and give an algorithm the power to push data to me.  Even though I can end that by just as easily pressing a button, it’s not that easy in practice.  My mental algorithm needs a lot of experience to decide when data that I chose to receive stops being useful information.  Hitting “Unsubscribe” isn’t quite like saying goodbye to a friend, but it is taking action to put an end to something.  Then again, I can always hit “Subscribe” again.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Thursday, January 22, 2015

Affordances and Constraints of Commerce (Jones & Hafner)

Civilization rests on tools to do things and mediate actions between people.  That’s probably a truism, but it’s a broad concept that lets us examine just about everything that people do and every way we do those things.  Jones and Hafner state that all cultural tools come with built-in affordances and constraints (3).  Fire affords us heat for our camp.  It can also burn our camp to the ground. 

Two cultural tools with possibly less-dramatic affordances and constraints are smart cards and cash registers.  They also happen to work quite well together in something called electronic commerce, which is of some importance to 21st century life.  A “smart card” resembles a typical credit card, but with a computer chip embedded in it.  This gives it more uses than the old-fashioned magnetic-stripe credit card.  In full disclosure, I should state that I work on the technology side of the credit card processing industry, and know more about the subject than I ever wanted to.

Credit cards in general mark a shift in mediation comparable to Jones and Hafner’s example of the wristwatch.  Instead of the physical weight of cash or change in your wallet or purse, your entire budget might be represented in one flimsy piece of plastic.  That plastic is easy to whip out and swipe.  By removing the physical awkwardness of counting out money, it takes away a psychological barrier as well, making it easier to spend money.  Smart cards chip away a little more at those barriers.  Some can be waved in front of a device that reads the card data over radio waves.  No swiping required, and usually not even a signature or a PIN.  This affords a great degree of convenience, but constrains our ability, and even our inclination, to consider how significant the bytes we just waved away were to our budget.

The devices that read smart cards will increasingly be found on cash registers.  These devices have gone from being mere repositories of cash and coins to computer terminals linked to inventory systems, banks, and even marketing software.  Souped-up cash registers, or point-of-sale (POS) devices, afford businesses the ability to instantly update their inventory as it changes, move customers quickly through purchases, and monitor their buying habits.
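
As a toy illustration of the mediation a POS device performs, here is a sketch in Python of the basic flow: authorize the card, then update inventory. Everything here is invented for illustration; real payment processing involves far more (EMV cryptograms, settlement, PCI rules, and so on).

```python
# A toy model of a point-of-sale checkout: authorize, then update stock.
from dataclasses import dataclass

@dataclass
class Sale:
    sku: str
    price_cents: int

inventory = {"coffee-12oz": 42}          # items currently on the shelf

def authorize(card_token: str, amount_cents: int) -> bool:
    """Stand-in for the round trip to the card network and issuing bank."""
    return amount_cents < 10_000         # pretend anything under $100 is approved

def checkout(card_token: str, sale: Sale) -> str:
    if not authorize(card_token, sale.price_cents):
        return "DECLINED"
    inventory[sale.sku] -= 1             # the same tap also updates the stock count
    return "APPROVED"

print(checkout("tok_abc123", Sale("coffee-12oz", 399)))  # APPROVED
print(inventory["coffee-12oz"])                          # 41
```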

The conveniences of electronic commerce also constrain us.  If a server goes down, and every POS device in the store loses its connection, what happens?  How many times has a customer heard a cashier lament “the system” being down or having problems?  It’s understandable: the conveniences that both the customer and the cashier are accustomed to have vanished, and both seem helpless.  On the other hand, the same transaction used to be handled with an exchange of cash or coins and some calculations.  How would our ancestors have felt if, after a half-day trek across the prairie to the general store, the clerk told them he couldn’t make a sale because he ran out of paper to record their purchases?

In the same way that we don’t really “know” the time before looking at a watch, we’re not really exchanging money with a card the same way we do with cash.  We’re using a different tool to mediate a similar action.  We’re also mediating different relationships with each other.  Customers and merchants don’t have to exchange many words—if any—to make a transaction.  By removing physical money and mental calculations, there’s hardly any time to converse.  Without falling too deeply into dystopian laments of budget-ignorant consumers silently shuffling through checkout lines, the affordances of speedy and accurate transactions are real.  I’ve been to countries where haggling is expected.  The rustic charm of playing psychological games every time you want to buy something wears off eventually.

Socrates was wrong when he worried that writing would turn our minds to mush (11).  But it did fundamentally change civilization; a less robust memory may have freed up a more creative mind.  Likewise, removing more and more thought from the exchange of money will leave lasting changes to society.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Thursday, January 15, 2015

How many Literacies can there be?

My grandma likes to treat the comment box on Facebook posts as an open forum.  A photo of my cousin riding a bike will elicit the comment “Did you get a job yet?”  A music video posted by my mom prompts the comment “Did you get those coupons I sent you?”  She doesn’t throw out such non sequiturs in normal conversation.  What’s going on here?  Colin Lankshear and Michele Knobel might say she’s lacking in a certain digital literacy, specifically for social media.  Or does she just have a completely different digital literacy from me?

In the introduction to Digital Literacies: Concepts, Policies and Practices, Lankshear and Knobel argue that there are many digital literacies.  The clearest dividing line is between functional and sociocultural literacies.  Grandma’s got the functionality down: she can boot up a computer, open an internet browser, log into Facebook, and type out a message.  The disconnect must be in the sociocultural realm.  As the editors argue, literacy is more than just encoding and decoding something.  That something must also be understood in its context.

The digital world provides many contexts.  Lankshear and Knobel give a non-exhaustive list of “blogs, video games, text messages, online social network pages, discussion forums, internet memes, FAQs, [and] online search results” (5).  Posting a plea for help to a veterinary advice forum about your cat’s sudden hair loss takes a different mindset from searching for funny photos of hairless cats.  The editors argue that there are not just different digital contexts, but different digital literacies.  They define literacies as “different ways of reading and writing and the ‘enculturations’ that lead to becoming proficient in them” and state that, because we are all “apprenticed” to more than one, we must speak of literacies, in the plural (7).

I can’t object to drawing a line between reading something and understanding it.  I’m also on board with the idea that we’re all interpreting media in wildly different contexts, with varying proficiencies.  I’m just not sure it necessarily follows that this constitutes different literacies.  Why isn’t it just as valid to say that there is only one concept of digital literacy and we all use it in different contexts to different degrees?  Perhaps anticipating this question, Lankshear and Knobel chose for the first chapter of their collection a work by David Bawden called “Origins and Concepts of Digital Literacy.”  That’s literacy, singular.

Bawden builds up a definition of digital literacy from the fundamental ideas of literacy itself.  He gives an overview of the concept of “information literacy” as opposed to “computer literacy.”  While the latter is almost purely functional, information literacy involves “the evaluation of information, and an appreciation of the nature of information resources” (21).  These are broad concepts, but the division mirrors Lankshear and Knobel’s separation of functional and sociocultural literacies, applied specifically to the digital world.

In collecting components of definitions from different authors, Bawden can’t help but repeatedly note the definition offered by Paul Gilster: “digital literacy is about mastering ideas, not keystrokes.”  It is, Bawden says, “the current form of the traditional idea of literacy per se—the ability to read, write and otherwise deal with information using the technologies and formats of the time—and an essential life skill” (18). 

I can get behind “ideas, not keystrokes.”  It’s broad, but if one division matters most, it’s between functional and sociocultural literacies.  Grandma might be a smidge less digitally-literate than others, but if it’s an essential life skill, she’s doing alright.



Lankshear, Colin, and Michele Knobel, eds. Digital Literacies: Concepts, Policies and Practices.  New York: Peter Lang, 2008.