Thursday, April 30, 2015


In my first blog post, I stated my belief that the most important division within digital literacies is between the sociocultural and the functional. That division—between “why does it work this way?” and “how does it work?”—has been a theme throughout the course. I already had some experience with the Adobe product suite, so without a steep functional learning curve to climb, I tried to focus as much as possible on tying the readings to our projects. I spent hours making my collage in Photoshop only to decide that I had visualized the enframing of my subject rather than the standing reserve. As frustrating as it was, I scrapped the collage and started from scratch. I would have been much more reluctant to do that if I hadn’t had prior experience with Photoshop. It would have been much more difficult to translate ideas as difficult as Heidegger’s into a medium I was a novice in.

In this sense, my pre-existing literacy helped me accomplish my coursework. The software we used is also, in general, not cheap. The versions we used in class were available to us through a corporate partnership the university had arranged. The link between access and literacy is not a trivial one. We read more than once about how the technological wonders of the digital era can be used to reinforce existing social barriers. In my last blog post, I questioned Selfe and Selfe’s assertion that the “cyborg” selves created by our attachment to technology could break down those barriers.

And when barriers are broken, it doesn’t always paint a pretty picture. I think Adobe Illustrator, which my group analyzed for our project, is a great product. It’s fun to use, and makes beautiful art. I’m sure many people would, and do, enjoy using it as part of their daily job. Unfortunately, as we learned from our research, Illustrator’s ease of use may be tied to how few people get to make a decent living as graphic designers. Digital innovation isn’t neutral; it has effects on society and the people in it.

I took the most pleasure in considering and demonstrating how important design is. I was intrigued by the examples Jones and Hafner used in the Multimodality chapter, and I took it as a challenge to find my own examples on the web and analyze how their aesthetics affect their interpretation. I intended the website I made for our final project to represent my ideal web magazine. I’m very aware of what sizes and types of fonts I prefer, how much whitespace I want, and what kind of images complement the text. I ended up replicating pretty closely the spatial logic that Jones and Hafner described as evident across most of the modern web. I can honestly say I did not intend this. I don’t think that’s a bad thing, either. It was only on reflection that I thought about why I had made what I had: a good example of how important it is to have both sides of literacy.

Thursday, April 23, 2015

The Roots of our Cyborg Selves (Selfe & Selfe)

It’s fitting that our last reading takes us farther back than any others. Selfe and Selfe’s essay was published in 1996, around when I started using the internet. The authors comb through the chattering on listservs, a medium that is quaint and outdated to us, to draw conclusions about computer use that resonate nearly 20 years later. They look back through history, from the public forums of ancient Athens, through the salons of Enlightenment Europe, and find that even when a new public sphere opens up to exchange ideas, it is still constrained by the dominant ideologies of its culture (338-9).

The authors repeatedly drive home the point that computer networks are firmly rooted in military research, and are “war machines” (343-4). The opening up of these networks to civilian use is, they say, a way for states to push their populations toward spaces that allow new and free expressions of ideas, but are also regulated. In this sense, computer networks are still fundamentally shaped by their roots in war, and broader state control, such as “computerized records of citizen criminals, computerized lists of welfare recipients, computer-supported census records” (342). I immediately think of contemporary revelations of government cyber-spying on its own citizens and across the globe. I imagine these aren’t revelations at all to Selfe and Selfe; such use was implicit from the birth of computer networks.

But the authors point out that, while powerful state and corporate interests may still have zones of power in computer networks, there are also zones of impotence (348). While these networks were designed for control, there are ways to subvert them. Selfe and Selfe refer to such actions as “tactics,” which are “small and—at some levels, partially invisible—ways of ‘making do’ within an oppressive system that reproduces its own power constantly and in extended ways” (349). They give the example of technical writers (who were probably among the earliest regular internet users who couldn’t program) going outside of their employers’ restrictions to seek digital information and connect with others who shared their professional and personal interests. This still happens today, though it’s not exactly revolutionary to turn to the web to quench curiosity. I think of contemporary “hacktivists” such as Anonymous as the current practitioners of tactics. Their use of computer networks to strike at businesses and governments they disagree with is a conscious display of subverting what they consider an oppressive system.

The authors ponder whether the novel approaches to computer use in the 90s were paving a road to humanity becoming “cyborgs.” I don’t think they use that word in a literal sense (although technologies to make it so aren’t far off), but rather in the sense that we treat our machines as extensions of ourselves. We are “makers of the machine, and this activity has made us partially machine ourselves” (352). As we’ve discussed several times this semester, our digital literacy doesn’t just add to our lives; it fundamentally changes how we live. The authors think that our cyborg selves are able to break down barriers. We can “recode, rewrite, reconstitute not only the text of their own bodies, but also the larger cultural/economic/ideological narratives and mythologies of the male-dominated war State, the cultural body politic” (353). This strikes me as more triumphalist than reality has proven. While our ever-present machines have given us new and strange abilities, old barriers remain, and new barriers may still be raised.

Selfe, Cynthia L., and Richard J. Selfe. “Writing as Democratic Social Action in a Technological World: Politicizing and Inhabiting Virtual Landscapes.” Nonacademic Writing: Social Theory and Technology. Eds. Ann Hill Duin and Craig J. Hansen. Mahwah, NJ: Lawrence Erlbaum Associates, 1996. 325-358.

Wednesday, April 15, 2015

How We Interface Now (Manovich)

Computer interfaces have historically been based on metaphors of existing media. As Lev Manovich writes, the original computer metaphor was that of an office desktop. This made sense in the 1970s and 80s, when computers were almost entirely work machines. In the 1990s, with the advent of the World Wide Web, Manovich notes that the metaphor shifted to “media-access machines” such as VCRs or CD players (89). Manovich made these observations in 2001, just after the bursting of the dot-com bubble, and before the rise of “Web 2.0”, the current paradigm of human-computer interaction (HCI). So what’s the current metaphor?

I have a hard time answering that question. The desktop and media metaphors still exist, but they’ve been subsumed into an always-online world that I can’t easily fit into any older media form. The most prominent difference between HCI now and the turn of the millennium is increased interaction. While chat rooms and message boards existed before, these look like huddled conversations in a corner compared to the constant bullhorn pronouncements and responses that characterize a platform like Twitter. There’s also the strange decline in anonymity; whereas the web interactions of old were often between people adopting pseudonyms, the current interfaces encourage linking one’s online identity to the real world.

I’m tempted to say that contemporary HCI is based on no metaphor or all previous metaphors simultaneously. But maybe it’s that, stacked on top of the old desktop, and static media-viewing, there is now a metaphor for human interaction in general. Instead of simply performing tasks or passively viewing or listening to media, we are now encouraged by our interface to “comment” on something, or “tag” something as related to another thing.

Manovich observes that the computer had shifted from being simply technology to being a “filter for all culture…the art gallery wall, library and book, all at once” (64). This observation fits with his understanding of the then-metaphor of computer-as-media-machine. I think it’s fair to say the computer is now a filter not just for all culture, but for all interaction between people. What other conclusion could you draw when most people now carry powerful computers around in their pockets whose primary function is communication? In this sense, I don’t think that “the lines between the human and its technological creations” are as clearly drawn as Apple’s 1984 Macintosh represents to Manovich (63). The aesthetic is still relevant, but the lines are decidedly blurry.

Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2001.

Thursday, April 9, 2015

Limited Transparency (Jones & Hafner, Ch. 7)

I use email most days to communicate with work colleagues. Email is a form of media that is somewhat transparent, in the sense that the protocols used to send it are open source and simple. However, it’s unlikely that most people—myself included—have the knowledge or desire to modify the inner workings of email. It’s a strange idea to think about. An email is just a blank digital canvas into which you can put pretty much anything and send it to someone else. This makes it seem very transparent: a very natural way to send a message between people.

Snapchat is an application with a deliberately more limited ability to communicate. It allows its users to send images and video that will last for no more than 10 seconds before disappearing (more or less) permanently. Theoretically, the same functionality exists in email. You can send an image in an email that can be stored in perpetuity. You could send a video that lasts 10 minutes instead of 10 seconds. The appeal of Snapchat is in its opacity. For its users, these constraints are, strangely enough, a sort of freedom. Snapchat not only provides a simple interface to make and send media (sending the same media in an email would require more tools, skills, and time), but also gives its users the illusion of transparency with its messages’ time limits and impermanence.

A Snapchat message feels more like a moment in the real world that occurred and then vanished, as opposed to an email, which could be stored and dug back up indefinitely. The possible permanence of email can make life more difficult. A rude remark said out loud might be forgotten, ignored, or laughed off. A rude email could stick around forever, and, fairly or not, taint its sender’s reputation. Snapchat’s transient nature doesn’t necessarily make it a clearer analogue to in-person communication. What happened before the 10 seconds of video? What’s the context of the image? Does the caption actually reflect what was happening when the photo was taken? Snapchat, as its name implies, uses snapshots of life to encourage conversation. These chats aren’t likely to get out of the shallows, and might be falsely substituted for deeper conversations.

I’ve seen people use Snapchat to extend the conversation a bit. The application is designed to limit captions to a certain number of characters. The idea is likely that the image or video should mostly speak for itself. Lately, I’ve seen people use the application’s drawing functionality to complete thoughts that are cut off by the character limits. If a sentence doesn’t fit into the caption box, the last word or few will be scribbled onto the image before being sent. The crude shape and bright colors of the drawn words make them stand out from the rest of the words in the caption. This can give them a striking quality, like a kind of punctuation at the end of the sender’s thought. 

It’s like if a typed sentence ended with the last words not only colored, 
but suddenly in a larger and different font. 

It’s a function that the makers of Snapchat probably didn’t plan for, but which grew out of users working around the application’s limits.

Thursday, April 2, 2015

Analyzing online language (Jones & Hafner, Activity 5.1)

There is a pretty solid hierarchy of formality in digital interactions.  What passes for acceptable language between close friends in a private chat may not pass the standards of an email between work colleagues.  I see a general trend of text messages at the most casual end of the spectrum, with emails at the most formal end.  Text messages are typically between close acquaintances, and are often one-on-one.  Since they take place on each participant’s phone, they are also very private.  This makes texts a common medium for abbreviations, creative spelling, and emoticons.  The assumed closeness of text participants makes them most likely to understand each other’s idiosyncratic digital speech patterns.

I would place instant messaging right next to texts in formality.  These have all the same constraints as texts, except that they can also take place on desktop computers.  This makes instant messages slightly less private.  They’re probably used more commonly than texts as a way for work colleagues to communicate.  Facebook wall posts are tricky to place.  They can include a lot of short forms that most readers may not understand, but I think the larger audience (all of a Facebook user’s Friends) enforces a standard of formality.  Facebook posts are not private conversations, but they are between social relations and are relatively short-form.

At the far end of formality are emails.  Emails could be between close friends who would understand abbreviations and emoticons, but there are better forms of communication to convey that language.  I generally understand emails to be reserved for long-form communication, regardless of the audience.  Emails can convey complicated information, so they assume a stricter standard than more-constrained forms like texts and instant messages.  In this sense, it is the potential media richness of an email that dictates its language.

Below is a snippet of text conversation between myself and a friend, where I’ve highlighted the abbreviations, non-standard spellings, and emoticons.

Me: Whatcha up to tonight?

Them: Makin pizza dough right now but I wanna hang fo sho! You??

Heading to [a bar] for [a friend]’s roommate’s going away party.

Sounds like fun

Were in the basement if you wanna join.

You sure??
I won’t be imposing?

Of course not.

Kk ill be there in a few :D

Most of the non-standard spellings are just contractions (e.g. “whatcha” for “what are you”), while “fo sho” is a slang form of “for sure.”  Being followed by an exclamation point, it imparts not just informality, but an excitement and jauntiness that “for sure” may not convey.  I’ve never quite figured out a consistent logic behind double question marks, but I take them to mean a slightly more pleading question.  In this conversation, they may mean that if I were listening to my friend, their voice would rise slightly higher or be drawn out a little longer when asking those questions.  The emoticon at the end of the last text frames the message in a context of happy excitement, as opposed to a simple statement of intent to be somewhere in a few minutes.

Dissecting a conversation like this reminds me how I take for granted my ability to read this person’s intents in what are only a few dozen text characters.  I may misunderstand the same language coming from someone else, or my friend may communicate in a completely different way in an email.  The intended meanings arise out of nothing but the digital context and the relationship between the participants.

Thursday, March 12, 2015

Multiply-torn Attentions (Jones & Hafner, Ch. 6)

Perhaps a decade ago, “multitasking” was a hot word to put on resumes.  It showed that an applicant was hip to the digital world, and knew how to navigate its competing attention-takers.  This isn’t considered much of a skill anymore.  Being able to pay attention to multiple streams of information at once is just a condition of living in the modern world.  I think most people are now aware of the constraints on attention that Jones and Hafner detail in this chapter.

I hear from older colleagues that meetings used to be fewer and smaller when they were confined to physical rooms.  The combination of conference calls and live-streaming of computer desktops has altered what Jones and Hafner call attention structures.  The biggest change is in the communication tools.  The tools I mentioned make it possible to expand a meeting beyond a meeting room, or even a city.  This affects the other attention structures as well.  The number of people involved can multiply, bringing new perspectives to the meeting.  These people also have different social relationships, making the interactions in the meetings different.

The affordances of this technology are clear.  More people get to collaborate more easily on more tasks.  But the constraints are also obvious.  The easier and wider an invitation is to send, the more get sent.  This taps into what Jones and Hafner refer to when they speak of living in an “attention economy” (90).  It’s ironic that, the easier it is to get a hold of anyone, the harder it is to…well, get a hold of everyone.  And if someone has multiple invitations to tasks that may only require that they dial a phone number and watch a screen, they’re tempted to do as many of those tasks as possible at once.

As the authors explain with the computer classroom example, the key is to align our attention structures to the right context.  In the example of meetings, it seems wise to sparingly use technologies that physically remove participants.  Those who can meet in person should, while those who absolutely can’t can connect digitally.  In this sense, multitasking becomes less of a proactive killing-two-birds-with-one-stone activity, and more of an occasional convenience.

Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Wednesday, March 4, 2015

Contextualizing the Class Computer (Selber)

Until fifth grade, the bulk of my computer use in school consisted of playing Oregon Trail on some truly ancient Macs.  As informative as I’m sure repeatedly dying of dysentery was, it didn’t feel like an experience that grounded me in the realities of how computers would be used later in my life.  My fifth grade teacher was a little savvier.  He tried an experiment where we passed around a piece of paper, writing contributions to a conversation one student at a time.  The goal was to explain the concepts of internet chat rooms and forums.  The experiment’s success was mixed, but it forced me to think about the internet in a way I hadn’t: as a social medium that was both like and unlike the non-digital world.  The experiment contextualized the technology for me.

Stuart Selber thinks that understanding the social context of technology is vital to students’ use of it, and ultimately, how well they navigate the digital world.  He highlights how easy it is for technology to be decontextualized for students, however.  He quotes David Orr saying that conventional teaching of technology risks students having “no confrontation with the facts of life in the twenty-first century” (9).  Confrontation is the key word.  Students are at no risk of under-exposure to technology, but how well can they situate their use of technology? 

UNO has an agreement with Microsoft that gives all students access to its Office 365 suite, even on their personal devices.  It’s quite a deal, giving students free access to tools that are considered industry standards.  How the UNO announcement of the deal justifies it is telling: “As educators, everyone at UNO is united behind a single goal – help prepare our students to become the best they can be…According to IDC students with Office skills are better prepared for work in the professional world.”  The last sentence includes a link to a Microsoft article detailing the study in question.  Selber would likely see these statements as decontextualizing the technology.  It juxtaposes students “being the best they can be” with simply being good workers.  Is that the best they can be?

Selber thinks that teaching the social context of technology is too often an afterthought, and should instead be the core of instruction (21).  But economic realities seem to work against this notion.  If a company offers a university a killer deal for access to its software, university administrators aren’t likely to complain when that company wants to advertise the deal as training a new workforce.  Selber thinks that purely functional digital literacy need not be disempowering, but can serve as a way into more critical analysis through humanities education.  This class is a good example of his notion.  It shows that teaching how to use software and use it well doesn’t have to have purely vocational goals.  It’s just as possible to ask students to actively question technology (“Ja,” says Herr Heidegger) as they learn it.

Selber, Stuart A. Multiliteracies for a Digital Age. Carbondale: Southern Illinois UP, 2004.