Wednesday, February 25, 2015

The Question Concerning (the Essence of) Technology (Heidegger)

By chance, I happen to be reading Brave New World, by Aldous Huxley.  The world of the title is a sterile one in which humanity has completely surrendered itself to technology, and in which Henry Ford, who popularized the assembly line, is the closest thing to a god.  If Martin Heidegger were asked, he wouldn’t say that the ordered citizens of Huxley’s world were subjected to technology itself, but to a certain context of viewing technology, a context that he viewed as dangerous to humanity’s free essence (32).

Context meant a great deal to Heidegger.  How we see the world.  How we see ourselves in the world.  How we speak of the world.  Etymology, and language in general, is so central to his work that his idiosyncratic uses of German phrases are often italicized alongside their English translations.  In his exploration—or should I say revealing—of the context in which we moderns view technology, he spends a great deal of time explaining how the Greeks referred to the manufacture, use, and context of tools, sculpture, and poetry.  He does so to eventually land on the revelation that they used the same terms to refer to all of those contexts (34).  It is we who have insisted on taking techne to be nothing but technique, and purging it of any context aside from use by humanity.  We call this technology.

Heidegger’s term for this way of thinking of technology is “enframing,” which “blocks the shining-forth and holding-sway of truth” (28).  Enframing nature makes us consider it a “standing reserve,” a utility.  He thought that truth was a more primal desire than mere utility.  I find his discussion of the aims of modern physicists interesting for its context.  This essay was developed between 1949 and 1953, after the novelties of quantum physics had lost their charms and segued into the fears of the atomic age.  He says that enframing “demands that nature be orderable” and that our investigations into causality (which I would call science) are shrinking into mere reporting of calculations (23).  This is a bleak view of science.  When I studied physics as an undergraduate, I thought I was searching for truth in the same way as Heidegger’s ancient Greeks.  His point seems to be that instead of just challenging nature, we should be questioning how we do so, and not accepting it as the only way to reveal truth.

After finishing this essay a second time, I thought of Keats’s “Ode on a Grecian Urn.”  It’s not just because of Heidegger’s use of a silver chalice as an example of causality.  It’s also his discussion of poetry at the very end of the essay:
The poetical brings the true into the splendor of what Plato in the Phaedrus calls to ekphanestaton, that which shines forth most purely.  The poetical thoroughly pervades every art, every revealing of coming to presence into the beautiful. (34)
It took ancient Greek techne to bring forth the poem’s titular urn, an object of both practical and artistic worth.  But it also required a kind of technique on Keats’s part as the poet, a revealing of truth.  I wonder whether Heidegger would have agreed that

        "Beauty is truth, truth beauty,"—that is all
         Ye know on earth, and all ye need to know

Heidegger, Martin.  The Question Concerning Technology and Other Essays.  Trans. William Lovitt.  New York: Garland, 1977.

Wednesday, February 18, 2015

Instant and persistent messaging (Jones & Hafner, Ch. 5)

I often hear the claim that most communication is visual and audible.  Without facial expressions, hand gestures, vocal tones, and interjections, we’re left with something invented solely by humans: language.  In the digital world, a great deal of communication is purely textual, and it can cause dramatic changes in behavior.  Jones and Hafner’s discussion of transaction costs made me think of how I use text communication in my daily life.  Most of the examples given in the chapter on online language have to do with social life, but I’d like to explore how digital communication affects the workplace.

When my desk phone rings, I get startled.  What could possibly require hearing my voice?  Doesn’t this person know they can instant-message me, or send an email?  Actually, it’s very understandable.  Both email and IMs provide the affordance of quickly communicating something without having to spend the time it takes to have a full verbal conversation.  But the lack of verbal cues can be a constraint.  An email or IM may not convey the urgency of a situation as well as a tone of voice.  Text-based messages are also much easier to ignore.  After all, they’re just sitting there in my inbox or in a corner of my screen.  My phone ringing tells me that something is urgent, and I respond with the same urgency.

With so many options for communicating with colleagues, each comes with clear affordances and constraints.  Email provides a long format for communicating complex information, but demands the least timely response.  This also means that emails can pile up quickly.  It’s absurd how many I can get in a day; I guarantee that most people could not handle the equivalent of that much information verbally.  Instant messaging makes low-complexity conversations quick and simple, but messages can also be ignored, and they lack the cues that a verbal conversation would carry.  The constraints of these forms give phone calls and in-person conversations greater importance.  Jones and Hafner describe this exact phenomenon in regard to romantic partners reserving phone calls for “special” importance (77).

It’s easy to adopt a contrarian and curmudgeonly attitude to digital communication, and wonder why we don’t all just get up and talk to someone in person.  But the very availability of such forms of communication has changed what’s possible.  I work with people across the country and around the world.  That probably wouldn’t be true if the means didn’t exist to quickly communicate with them.  It’s inevitable that we all have to come to terms with the differences between the modes of expression available to us, and fit each one into its appropriate context.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Thursday, February 12, 2015

Multimodal and international layouts (Jones and Hafner, Ch. 4)

Our screens are not glowing pieces of paper.  The digital document represents a logic of understanding distinct from that of the printed page.  This is the central point of Jones and Hafner’s chapter on multimodality, the combination of different “modes” of expression to form the documents we regularly interact with in the digital world.  As the authors note, the web did once consist of little more than a collection of hyperlinked text files with the occasional visitor counter and dancing baby GIF.

Today’s web is much more refined, and yet there is a spatial logic that seems to underlie most sites’ visual layouts.  As Jones and Hafner explain, information on a web site tends to move from the given (what you knew you were getting into when you clicked the link) on the left to the new, yet-to-be-determined (fill in these fields, dear visitor) on the right.  There’s also a movement from the ideal (what will I get from this site?) at the top to the real (who copyrighted what?) at the bottom.  There may also be some central object to draw attention.

This arrangement seems to be governed by the Western tradition of reading from left to right and top to bottom.  It’s interesting to note that Jones and Hafner are English professors at City University of Hong Kong, situated in a former British colony that’s now a semi-autonomous enclave of China.  This made me wonder how widespread their idea of a basic web layout really is.  The main web page of the South China Morning Post, Hong Kong’s largest English newspaper, certainly seems to conform.  The paper’s title is on the top left, with a search bar on the right.  Moving down the page, articles appear mostly on the left, with user-driven features like polls on the right.  At the bottom is the boring stuff (FAQ, terms and conditions, contact info).  What about the Oriental Daily News, the city’s largest Chinese paper?  It’s striking that, even without understanding the text, I can make informed guesses about what each section of the page is, because it does indeed conform to the same basic layout.

I found it funny reading Jones and Hafner’s insights on digital visual design as blocks of text on a page accompanied by black-and-white images of web sites.  There is, in fact, a companion site to the book.  How is it laid out?  The front page conforms largely to the left-right, top-down aesthetic, though it’s a bit sparser than a Hong Kong newspaper.  The pages on individual chapters don’t look much different from Web 1.0 pages: bulleted text with occasional images.  They might link to an infographic to prompt a student activity, but the visual media don’t seem to be integrated into the pages any more tightly than the usual hyperlink allows.

Maybe the publisher missed an opportunity by not fleshing out the visual design of the companion site to a book on digital literacy.  Maybe they just didn’t think of investing that much effort.  It may also be that spatial layout matters most on the first page a viewer sees.  As one delves deeper into a site, the information (usually text) takes on greater importance.  Complex visual design creates a hook that most book covers can’t, but it may be the content that determines how long a visitor (or reader) sticks around.

Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Drawing out the meaning in images (Kress and van Leeuwen)

In 1972, the Pioneer 10 spacecraft was sent on a course out of our solar system with a plaque showing simple scientific facts, including depictions of a man and woman, and a diagram of the sun and planets.  An arrow emerges from the third planet, and points to an icon of the spacecraft.  This was meant to represent Pioneer 10’s origin and trajectory to whomever (or whatever) discovered it.  But would an arrow have meaning to these hypothetical ETs?  What if their ancestors didn't spear other creatures for their flesh?  The arrow is very simple: one long line, with two angled short lines on one end.  It’s an example of a symbol that seems to have a shared meaning for all of us here on the pale blue dot.

The plaque from the Pioneer 10 spacecraft, meant to explain Earth to the friendly extraterrestrials who find it.

Kress and van Leeuwen think that visual communication is a basic form of communication apart from speech and writing.  They also think it gets short shrift in a globalized (Western) culture where the written word is still paramount.  The authors go through the usual story of writing developing out of the need to record amounts and transactions (21).  But where did the symbols that came to represent how many heads of oxen are owed to which king come from?  From pictures.  Symbols for ox became letters that began the word for ox.  To Kress and van Leeuwen, this represents verbal language subsuming the visual (22).

In 21st-century Western culture, purely visual communication is often seen as childish, a form of expression below writing (17).  Why do teachers shift their evaluation from our pictures to just our writing as we move through school?  If writing originated as images, wouldn’t this make visual literacy at least as important as its verbal and written equivalents?  Kress and van Leeuwen certainly think so.  They don’t say so explicitly, but I get the feeling they think visual communication, being more primal, is the more fundamental.

The Aboriginal Australian who drew this image had no written caption to leave.

The authors buttress their conviction by explaining how visual literacy is central to children’s understanding of the world.  They relate the story of a child making a series of drawings, and only resorting to words when asked by an adult what the images meant (36).  They posit that “[i]t may well be that the complexities realized in the six images and their classification were initially beyond the child's capacity of spoken expression, conception and formulation” (39).  The implication is that there’s nothing wrong with this.  The child was able to fully express something meaningful to him without recourse to words until forced.  This can be as true for adults as children.  The dominant mode of literacy that happened to originate with Bronze Age Mesopotamian traders isn't necessarily the default for all cultures.  As Kress and van Leeuwen note, Australian Aboriginal drawings exist apart from verbal translations, as an independent form of communication (22).

In our own culture, the trend may be rolling back.  Digital technology is giving visual design greater importance to more people.  Whereas visual layout was mostly the concern of a small number of experts, desktop and web publishing have opened up the field to people who have a greater need to communicate visually.  Which is a more sensible way to convey a scientific concept: a textbook with occasional static diagrams that illustrate the writing, or an interactive animation with writing that anchors the images’ context?  Science education has, in fact, been heading in this direction, and the authors wonder whether it even constitutes a different understanding of science from the older writing-centric method (31).  I had an astronomy professor who has gradually built up a curriculum of Flash animations that, in his opinion, more richly convey the concepts he wants to explain.  While there is sometimes substantial text, the visual simulations are the centerpieces of his lessons.

It’s a strange revelation to think that we, as members of our culture, are constantly translating between modes of expression.  It may be that no form of literacy takes precedence over any others, but a recognition of how they interact is central to a nuanced interpretation of all symbols.  This includes recognizing that images can have meanings in and of themselves, no translation needed.

Kress, Gunther, and Theo van Leeuwen.  Reading Images: The Grammar of Visual Design.  London: Routledge.


Thursday, February 5, 2015

The not-yet-common creative commons (Jones & Hafner, Ch. 3)

As a teenager, I had notions of being a film editor.  My outlet for this ambition was to take video of movies and TV shows and cut the footage into themed montages set to music.  My goals in doing so were nothing more than to build my experience and showcase my talent.  I was aware that these videos probably couldn’t be freely distributed.  Platforms like YouTube hadn’t truly taken off yet, but if I’d posted my creations on such a site, they likely would’ve attracted copyright complaints.  Jones and Hafner would say that I was engaging in remix culture, by mashing up different media to create new content.

The practice of mashing up media is fed by, and feeds, the social aspect of the contemporary “read-write” (as opposed to “read-only”) web (42).  The ease with which images, video, and audio can be shared and edited has opened the creative process to a much wider audience than the pre-digital world allowed.  I’ve never physically cut film together in my life; I “cut” video clips that came in computer files.

The ease of creativity has not been matched by an ease of legal cover.  In fact, as both Jones and Hafner and Lawrence Lessig note, those who took creative liberties with previous works have denied the same privilege to others.  Lessig’s example is the Walt Disney Company, which was built by taking works that predated copyright, creating derivative works, and claiming them as its intellectual property.  I take Lessig’s point that this is hypocritical, but I also think it’s understandable.  From Disney’s point of view, it’s not their fault that the brothers Grimm didn’t have access to copyright laws.  Disney, in turn, is simply using the power it has to protect something that, if used without their consent, could compromise the company’s image.  I doubt you’d get anyone from the company to publicly make that point, but I think that’s what it comes down to: they don’t see a problem using a power they know they have.

Still, this is the beauty of Lessig’s idea of a creative commons.  Those who could exercise control over their creative works willingly forgo that power, as long as further use is done respectfully and credits the creator.  This does still require that right to be granted.  Lessig also brings up the example of George Lucas, who famously held his own vision of his works above all others (until he sold the rights to those works to…oh, right).  Would Lucas have licensed Star Wars under the creative commons if it had existed in the 1970s?  I’m not sure.  The power to protect a creative work from all tampering is tempting, and if it’s available, it’s hard to turn down.  I don’t think the simple existence of a creative commons will change the status quo.  I think it’s more likely that the sheer volume of mashing-up and shareability will overwhelm companies’ attempts at rigid copyright enforcement.


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.

Polishing a window out of existence (Bolter & Grusin)

In 1896, Auguste and Louis Lumiere screened a 50-second film of a train pulling into a station.  Popular legend holds that the Paris audience leapt to their feet in fear that the locomotive would rip through the screen and into the theatre.  The story is funny and poignant to us in the 21st century; how could those silly people be so naïve?  Bolter and Grusin explain that “[t]he audience members knew at one level that the film of a train was not really a train, and yet they marveled at the discrepancy between what they knew and what their eyes told them” (30-31).  The Parisians were connecting with an image in a way they never had.  This is an example of what the authors call transparent immediacy, the desire to remove the barrier between images and reality.  Our digital world of computer-generated images and virtual reality has continued the trend the Lumiere brothers began a century earlier.

Masaccio's Holy Trinity, an early use of perspective.  See this video for an exploration of the work.

Bolter and Grusin contend that the quest for transparent immediacy began in earnest in the Renaissance, with the invention of linear perspective.  Before perspective, art appeared relatively flat.  People and objects seemed to sit on a single plane.  Some artists experimented with varying the sizes of objects in the background of a picture, but the effect was uneven.  Innovation came in considering a single vantage point for the artist.  Bolter and Grusin quote the 15th century architect Alberti, who explained that “[o]n the surface on which I am going to paint, I draw a rectangle of whatever size I want, which I regard as an open window through which the subject to be painted is seen” (24).  Alberti’s “window” onto his work became the viewer’s window into another world: their perspective.


To Bolter and Grusin, perspective represents the beginning of the mathematizing of art.  Alberti’s window was squeaky clean compared to what came before.  Art was more transparent, and its subjects more immediate to its viewers.  The photograph removed even more of the smudges from the window, and film put it in motion.  What’s the state of the windows through which we view the digital world?

As the authors state, although Renaissance artists used geometry, they distorted perspective to suit their tastes.  A programmer who uses physics equations to make a digital model obey natural laws is applying mathematics much more rigorously.  Contrasting these two extremes, I can see the appeal of abstract art, and why it emerged in the late 19th century, after the invention of photography.  An image’s being natural doesn’t necessarily make it appealing.  Reflecting the natural world as perfectly as possible is the goal of what the authors call the “naïve” view of transparent immediacy (31).

Strangely enough, the closer media move toward replicating nature, the less natural the methods become.  Whereas a photograph or film can at least claim that the photons entering the lens are direct remnants of the natural world, it would be difficult to claim the electrons flowing through a computer chip represent the hand of nature (27).  The authors state that computer graphics’ claim to reflect reality “seems to be appealing to the…proposition that mathematics is appropriate for describing nature” (26).  I think they come off as a little condescending toward this notion, but I do take their point: the path toward bringing images closer to reality relies on progressively more rigid and complex methods.

Bolter and Grusin repeatedly refer to virtual reality as a kind of holy grail of immediacy.  A truly immersive virtual experience would mean that a viewer “has jumped through Alberti's window and is now inside the depicted space” (29).  The window is smashed, and the image supplants reality.  While seamless virtual reality hasn’t arrived yet, even the art of making two-dimensional images seemingly pop out of a screen can trigger a novel reaction in viewers.  There are digital maestros who are not only aware of how to perform these tricks, but actively strive to push their audience through the window.  At what point does art for its own sake get replaced with ideology?  I think this is what the authors mean when they talk about transparent immediacy being a naïve goal.

3D video and virtual reality are simply the next steps in the legacy of transparent immediacy.  The trend has been to gradually remove the presence of the artist or even a medium.  It’s easy to speak of Masaccio’s Holy Trinity: a single work by an artist whose methods we can dissect and whose inspirations we can debate.  The creation of digital art can seem so much more remote and sterile, the work of hundreds of programmers and animators pushing electrons around screens.  But this is what it takes to make the subject immediate.  I doubt many in 1896 thought a train was really going to roll over their seats.  I think they just marveled at what technology could show them.


Bolter, Jay David, and Richard Grusin.  Remediation: Understanding New Media.  Cambridge: MIT Press, 1999.


Thursday, January 29, 2015

Really Sadistic Syndication (Jones and Hafner, Ch. 2)

I’m flattered when anyone thinks they know me well enough to recommend something.  It tells me that someone has considered my interests, and thinks that there’s something out there that I will enjoy.  Who doesn’t want an enjoyable distraction delivered to them without having to search for it?  What if it’s being delivered by an algorithm?  Is this similar to human behavior?  How useful is it?  Jones and Hafner’s exploration of how technology collects, organizes, and delivers data as (hopefully) useful information caused me to reflect on all of the media being fed to me daily.

I think I’ve got a better-than-average control of my digital diet, but there is some spam that gets in.  What’s interesting is that it’s mostly of my own doing.  The authors explain several different types of algorithms (29).  I don’t have much trouble with social algorithms, which trawl social media for my supposed interests.  I certainly see the ads that pop up in my Facebook feed, and I’m amused at the sudden appearance of ads for Adobe software that I happen to have just started using and seeking tutorials on.  Maybe some of it is getting through subconsciously, but I’m too cynical to click on practically any of these ads.  They seem like noise to me.

In this way, I’m using what Jones and Hafner refer to as a mental algorithm (30), letting only the data I consider relevant filter through.  The rest really does get treated like background noise.  But I’ve also set up my own personalized algorithms to aid my sense of digital discovery.  I love the podcast app on my smartphone.  I have a library of podcast feeds that I can refresh at my leisure and instantly have all of the latest episodes.

I tend to collect podcasts by stumbling across one episode, deciding I enjoy it, and hitting “Subscribe.”  It’s simple to press that button.  But it can turn out to be kind of a commitment.  What if I don’t enjoy subsequent episodes quite as much?  What if the feed updates too frequently for me to keep up with the latest episodes?  Or maybe I’ve just got so many feeds updating that I fall behind on some.  Does that mean I don’t enjoy them enough to keep?  Is this the kind of anxiety that replaces the daily fight for survival of most people in human history?

Yeah, probably.

I find I just have to occasionally cull my podcast feeds.  This is another use of mental algorithms on my part.  It’s not too different from what we do in our non-digital lives.  We even sort through personal relationships and decide whether to keep or discard them.  What’s different is where the recommendations are coming from, and how they’re being made.  It’s very easy for me to press a button and give an algorithm the power to push data to me.  Even though I can end that just as easily by pressing another button, it’s not that easy in practice.  My mental algorithm needs a lot of experience to decide when data that I chose to receive stops being useful information.  Hitting “Unsubscribe” isn’t quite like saying goodbye to a friend, but it is taking action to put an end to something.  Then again, I can always hit “Subscribe” again.
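Jones and Hafner’s “mental algorithm” is a metaphor, of course, but my culling routine is mechanical enough that it could be caricatured in a few lines of code.  This is purely an illustrative sketch; the feed names, backlog counts, and tolerance threshold are all invented, and my real decisions are far fuzzier:

```python
# A caricature of my podcast-culling "mental algorithm": any feed whose
# backlog of unplayed episodes grows past my tolerance gets unsubscribed.

def feeds_to_cull(feeds, max_backlog=10):
    """Return the names of feeds whose unplayed backlog exceeds tolerance."""
    return [name for name, unplayed in feeds.items() if unplayed > max_backlog]

# Three hypothetical subscriptions with varying backlogs of unplayed episodes.
subscriptions = {"history": 3, "astronomy": 15, "film": 22}
print(feeds_to_cull(subscriptions))  # the two feeds I've fallen behind on
```

The real version of this decision weighs far more than a single number, which is exactly why it takes a mental algorithm so much experience to run well.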


Jones, Rodney H., and Christoph A. Hafner.  Understanding Digital Literacies: A Practical Introduction.  London: Routledge, 2012.