What
happened during that time?
In the first half of the 90s, books such as George
Landow's Hypertext 2.0 or Jay Bolter's
Writing Space celebrated the coming of a new age for
a medium that is a metaphor of the mind: decentered,
fragmentary, associative. Symptomatic of these early
publications is a statement that artist Nicole Stenger made
in her essay "Mind is a leaking rainbow," which is
included in Michael Benedikt's 1991 book Cyberspace:
First Steps: "Cyberspace, though born of war
technology, opens up a space for collective restoration, and
for peace. As screens are dissolving, our future can only
take on a luminous dimension! Welcome to the new world"
(Stenger, 1991: 58). The company Eastgate Systems
actually built a whole business around this ideology with
its professional hypertext editing program Storyspace and
CD-ROM releases of major hypertext fiction such as
Moulthrop's Victory Garden (1995) or Michael
Joyce's Afternoon, a Story (1990), both written
in the Storyspace program. Since then, however,
hypertext (in the sense of an authoritative artwork) has
steadily been on the decline, along with the new
economy. Comparatively expensive hypertext works, shipped in
boxes that almost blow one CD-ROM up to the size of a small
paperback, did not overtake books in sales; after all, you
would hardly bring your Apple Powerbook to the beach for a
read. So the Digerati were as quick to turn away from
hypertext as they were to hype it before.
What you got now were remarks like "Hypertext? Oh yeah...
been there, done that." Stefan Porombka's 2002
book Hypertext nicely illustrates this turn:
Porombka's basic argument is that the liberatory hype
about hypertext constitutes a narrative in itself. What gets lost in this argument and in an all too
quick turning away from hypertext, however, is a critical
discussion of some of the reasons why hypertext
failed. Or, in my mind, the critical remarks about hypertext
hurried back to older conceptions of text instead of looking
at the structural, socio-political reasons for hypertext's
loss of coolness. The critics, it can be argued, celebrated
the downfall with the same rhetoric as hypertext's
appearance; a reflex which, even at a cursory glance, looks
like a market mechanism of hype. Viewed in that sense,
Porombka's book itself, in its reaction to the hype, becomes
just another narrative (in that it acknowledges what it sets
out to criticize).
To
get a somewhat different grip on the question of why
hypertext failed to bring about what has been called the
"freedom of information," in this paper I'll attempt to
argue that internet-based, authoritative hypertext or
hypermedia works (software that includes texts, images,
sounds, movies and so on) as new media objects have formal
limitations that hold true for the graphical user interface
(GUI) in general (for one thing, because the reception of
internet-based artwork obviously takes place within this
interface). To sum up my general hypothesis: The interface
can be seen as a site where absent cultural and social
contradictions clash in its form/design, and meaning is
being dialogically produced for a cultural community. Now, it is important to
highlight that this is not to "unmask" hypertext
works as not having the potential of being the tools for
resistance that they seemed to be: Instead, both the older
celebratory and the recent gloomy rhetoric about hypertext
are part of the same logic of capitalist hype.
On a formal level, then, I will try to describe some of the
structural limits of authoritative hypertext works and of
the cultural interface in which they are perceived by
looking at new media objects such as Afternoon and
Victory Garden, the Storyspace computer program, the
AOL interface, and the Netscape World Wide Web (WWW) browser
software. Within a post-Marxist political framework, I will
then associate the GUI in general with Ernesto Laclau's
concepts of hegemony, articulation, and antagonism, and the
hyperlink logic of hypertext with the Althusserian notion of
interpellation. If this makes the interface
laden with mainstream political ideology, it may
come as a surprise that I will refrain from calling all
resistance futile.
But
hypertext, when understood as the totality of computers that
are linked through the internet, on a formal level does
promote an authoritative shift in new media objects such as
Netscape: the software comes with an HTML (hypertext
markup language) editor; unlike with old media, reading
or manipulating a Website now becomes an equal choice in the
file menu. The ability to manipulate data (and to
redistribute the manipulated data) in computer programs such
as Netscape, finally, might constitute a socio-political
function of hypertext that contributed to the success of the
WWW (which is the totality of HTML pages on the internet);
the lack of these functions, on the other hand, might
explain why authoritative hypertext works
failed. Whereas
such works trap a user into a single reading experience,
simply reading an HTML Webpage source code can actually
constitute a shared experience that serves a user's
desire for intimacy, provided you have
access to the code.
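The link structure that "reading the source" of a Webpage exposes can be sketched in a few lines of Python. The page below is a hypothetical, minimal hypertext document (not any real site), and the standard library's html.parser is used to surface the hyperlinks that organize it:

```python
from html.parser import HTMLParser

# A hypothetical, minimal hypertext page; any real page's source
# can be read the same way (View > Page Source in the browser).
PAGE = """
<html><body>
<p>An <a href="afternoon.html">authoritative</a> story links inward;
the <a href="http://example.org/elsewhere">open Web</a> links outward.</p>
</body></html>
"""

class LinkReader(HTMLParser):
    """Collects the href of every <a> tag: the hyperlink skeleton."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

reader = LinkReader()
reader.feed(PAGE)
print(reader.links)  # the link structure, visible to anyone with the code
```

The point of the sketch is that the skeleton of a hypertext, unlike the sealed structure of an Eastgate CD-ROM, is legible and manipulable to any user with access to the source.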
2. The Politics of New Media Objects
"Form and content in discourse are one, once we understand
that verbal discourse is a social phenomenon."
M. M. Bakhtin, The Dialogic Imagination
Before
I go into a discussion of hypertext software and its
socio-political function, I'll attempt to describe what
a political interpretation of new media objects can look
like. European cultural critics Richard Barbrook and Andy
Cameron have taken some steps in that direction with their
essay "The Californian Ideology," but the result remains far
from being a coherent theoretical position. In their text,
Barbrook and Cameron state that "a loose alliance of
writers, hackers, capitalists, and artists from the West
Coast of the USA have succeeded in defining a heterogeneous
orthodoxy for the coming information age."
This "heterogeneous orthodoxy" is what the two critics call
the "Californian Ideology": The idea that new media will
make everybody both hip and rich, being able to express
themselves freely within cyberspace. Barbrook and Cameron
hold that this new media utopia is grounded in a willful
blindness towards "(...) racism, poverty, and environmental
degradation," so they see a need for European theorists to
step into the picture to develop "a more coherent analysis
of the impact of hypertext than can be found within the
ambiguities of the Californian Ideology." Although
I find this position somewhat overstated, I would like my
paper to be seen as part of the theoretical project to
ground new media theory more firmly in the social and
political sphere (the German term Kulturtechnik
nicely describes this) instead of the lofty U.S. West Coast
cybertopia that Barbrook and Cameron criticize.
In
a somewhat less polemical approach, then, I'll try and
make several key concepts from Fredric Jameson's
seminal book The Political Unconscious fruitful for
my approach to a politics of the graphical user interface.
Basically, remembering that "men represent their real
conditions of existence to themselves in an imaginary form"
(Althusser, 1971: 163), it is not hard to see how the
political could enter
the analysis of hypertext at all: The GUI is a cultural
object that is indexical to the dreams and hopes that we
have, as well as to the conflicts that are raging across the
socio-political sphere.
What's harder to see is the primacy of a political
reading over other readings from theoretical schools such as
psychoanalysis, feminism, or deconstructionism; this
primacy, however, is precisely what I need to establish in
order to make a reading of new media objects in purely
political terms sound plausible.
In
The Political Unconscious, Jameson asserts that he is
not calling for just another method of political
criticism. The
social and the political, for him, form the very backdrop of
cultural production, so he rather holds that Marxism
subsumes other interpretative modes or systems; or, "(...)
the limits of the latter can always be overcome, and their
more positive findings retained, by a radical historicizing
of their mental operations, such that not only the content
of the analysis, but the very method itself, along with the
analyst, then comes to be reckoned into the 'text'
or phenomenon to be explained" (Jameson, 1981: 47). In Jameson's view,
then, text, method, and analyst all become part of a larger
political configuration that can be uncovered by a
historical analysis of the method's mental structuring
of material; zooming into a code-only version of cultural
life, from this viewpoint, is too quick a move for an
understanding of the structural limitations (and
possibilities) that are at work in the culture the new media
object originates from.
When
applied to new media studies, this means that the feedback
loop from the new media object (such as the interface of
Netscape 7 or an authoritative hypertext CD-ROM) to
socio-political reality has to be scrutinized alongside
with the code in order to see how we present reality to
ourselves numerically encoded through the GUI.
This
takes us to Jameson's understanding of historical
reality. Generally, Jameson does away with the fashionable
notion that "everything is a text" (in a similar
way, Régis Debray does away with the sign
in favor of the structure in media studies in his notorious
Media Manifestos).
Without receding to an essentialist notion of history,
Jameson holds that history "is not a text, not a
narrative, master or otherwise, but that, as an absent
cause, it is inaccessible to us except in textual form, and
that our approach to it (...) necessarily passes through
its prior textualization, its narrativization in the
political unconscious" (35).
When the critic uncovers this narrativization in the process
of the interpretation of a cultural object, however,
historical reality never reveals its true meaning, but
rather remains the absent cause for the production of the
cultural object.
What's uncovered is not reality but the form of its
interpretation. Significantly, Jameson also points to the
necessity of reading history through cultural objects: We
are left with them as traces of the political unconscious,
or of our ideas of historical power configurations.
Jameson's move is thus twofold: While receding from
essentialist notions of history by calling history an
"absent cause," he also establishes a kind of
"formalist essentialism" with which struggles over
the interpretation of history can be discovered in the form
of cultural objects (books, CD-ROMs, etc.) and their
structural limitations (and possibilities).
Political criticism of any cultural object, then, will attempt
to extract structural antagonisms that are indexical of a
historical dialectic as absent cause.
Furthermore, when one understands form as "sedimented
content" (Jameson), the individual narrative, or
the individual formal structure, is to be grasped as "the
imaginary resolution of a real contradiction" (77).
The cultural object can now be interpreted as a strategy for
unification of differences which retains certain traces of
those differences in its form. How does an interface make
the world coherent? Are authoritative hypertexts simply a
strategy for cutting something coherent into pieces, only to
paste the parts into a mosaic whole again? What's the
function of an authoritative hypertext if, given the right
computer program, many people can author texts that enhance
or contradict the original version?
But for now, let's inquire a bit more into how hypermedia
can be said to be political.
Starting from a Jamesonian, formal approach to new media
studies, I think that Ernesto Laclau's post-Marxist
notions of hegemony, decision, antagonism, and articulation
will provide a few more interesting ideas for the discussion
of particular new media objects. Laclau's theoretical
framework starts from the understanding that
self-determination is "not the expression of what a
subject already is but the result of its lack of being"
instead (Laclau, 1996: 55).
This point nicely enhances Jameson's theory in that it
lays the foundation for a questioning of how new media
objects might be used to influence users economically and
politically: The pointing and clicking subject emerges
through interaction with the GUI; it does not meet with a
computer program on an equal level. Determined to constitute
herself, then, the user identifies herself with various
interface objects/designs, since self-determination
"can only proceed through processes of identification"
(55). The
critical point here, of course, is the decision taken with
whom or what to identify. For Laclau, this decision is
"undecidable" in the final instance, so the subject
(simulating its own completeness) emerges in the distance
between the undecidability of the structure and the
decision: I cannot really decide why I browse the Web with
Microsoft Internet Explorer or Netscape 7, but my decision
makes me (personally) a Netscape 7 user. The subject/user's
decision is further complicated due to the fact that she is
a part of a larger socio-political group and is therefore
necessarily represented by an individual that hegemonically
stands for this group (nobody can decide on all
issues all the time).
Now
if this theoretical outlook sounds like a gloomy perspective
for what some analytical philosophers call the free subject,
lets not forget that the antagonisms of the interface
contain possibilities for resistance as well: Hegemony is an
experience of the limit of all objectivity
(Laclau, 1996: 122) since the presence of the
Other prevents me from being totally
myself (125). The
impossibility to fully constitute oneself (lets say,
for the AT&T telephone company in the face of severe
hacker attacks in 1990) opens up a sphere for the critical
rewriting of the (...) text in such a way that the
latter may itself be seen as the rewriting or
restructuration of a prior historical or ideological
subtext (Jameson, 1981: 81).
To come back to the "Californian Ideology": It incorporates
into its world view the idea that politics has come to an
end and that resistance is merely a matter of culture
jamming.
This radical cultural turn, which, as Barbrook and Cameron
have pointed out, ironically comes from the very people that
participated in the countercultural movements of
the 60s, overlooks the ways in which political antagonisms
are inscribed into the limitations and possibilities of new
media objects as indexical strategies for the unification of
socio-political differences. Or, as Jameson argues in The
Political Unconscious, the "convenient working distinction
between cultural texts that are social and political and
those that are not" becomes "something worse than an error:
namely, a symptom and a reinforcement of the reification and
privatization of contemporary life" (Jameson, 1981: 20).
So again, it's important to keep in mind that the
political criticism that I have laid out in this chapter
will not lead to the "unmasking" of new media
objects as feedback loops into an economic system which they
were originally opposed to: The benefit of a formal,
political analysis is that it won't automatically lead
to the theoretical dead-end for new media or cultural
studies of seeing opposition as only preparing another
underground trend for the multinationals to recycle in their
next campaign. Jameson puts it this way: "The lesson of the
vision of a total system is for the short run one of the
structural limits imposed on praxis rather than the latter's
impossibility" (91).
3. Cultural Software
"Without that material anchorage, text is free to become
infinite, to assume magical, semi-divine powers. It is such
a theological concept of the infinite text that inhabits
cyberspace, and which a materialist account of reading must
expose."
Sean Cubitt, Digital Aesthetics
I have been mentioning terms such as graphical user
interface, new media object, or hypermedia in this paper
without really describing what's behind any of these
concepts. Also, I haven't said anything yet to differentiate
between authoritative hypermedia works such as
Afternoon and networked texts on the Web such as the
Connex I/O project. Lev Manovich's recent
The Language of New Media is one of the first books
that establishes a formal view of new media, so let's
now look at the concepts that Manovich employs to lay the
groundwork for a later, more detailed analysis. In its most
basic sense, Manovich's approach is somewhat similar to
Fredric Jameson's argument in that Manovich seeks to
establish a formal, "digital materialist" reading of new
media while at the same time deconstructing any "real"
meaning behind new media objects. He does this by tracing
new media back to a historical convergence between
photography and the computer in the first computers, which
executed whatever programs were fed into them in the form of
punch cards (just as film is fed into a movie projector).
While this approach prevents Manovich from engaging in
utopian cyberspeculations (recall Stenger's "our future
can only take on a luminous dimension"), I disagree
with the strong emphasis that he puts on the cinematic
character of new media. The main thesis of the book,
namely, that the visual culture of a computer age is
"cinematographic in its appearance and digital on
the level of its material" (Manovich, 2001: 180), is
perhaps best understood in the context of Manovich's
U.S. West Coast background in computer graphics,
programming, and game culture (Manovich now teaches at UC
San Diego). But, as Inke Arns has asked
in her review of The Language of New Media, what
about text-based electronic mail as the most widely used
service of the internet?
What about textual Web chats (http://www.chatcity.de)
and IRC (internet relay chat), internet applications that
more people use than 3D chat environments (http://www.thepalace.com)
since the latter require elite, high speed Web connections?
But let's leave this discussion aside for now, since with
the ecstasy about virtual reality (VR) of the early 90s
having subsided and access politics having stepped to the
foreground, the appearance of new media has in some areas
become simpler or more textual (hip, stripped-down code
editors such as Textpad as opposed to larger programming
environments), while it has become more cinematic in others
(the Mac OSX and Windows XP interfaces, for instance). The
relative dichotomy between Manovich's Californian
interpretation of visual culture (surface/cinematic) and a
European low-tech aesthetics (code/textual) actually does
not harm any of Manovich's underlying principles of new
media; cutting and pasting also works on the text-only
system of a Unix workstation.
So let me turn to the new media characteristics.
Manovich employs several terms that I'm also using in this
paper to describe the politics of cultural software.
First of all, his term "cultural object" needs some
explanation. "Object," for Manovich, reaches beyond new
media to the cultural sphere in that it suggests that
various kinds of cultural expressions share a similar formal
logic: books, CD-ROMs, hypertext, computer programs, video
games, or 3D-environments can all be regarded as cultural
objects. Furthermore, the term "object" invokes the computer
lingo of object-oriented programming (Java, C++, etc.) and
the Object Linking and Embedding (OLE) technology in
Microsoft Office (meaning, for instance, the possibility of
inserting an image into a Word document). Labeling
something, more specifically, "new media object" emphasizes
the principles of new media that hold true across all media
types, all forms of organization, and all scales; new media
objects are a subset of cultural objects in general (14).
With
this in mind, Manovich establishes five principles of new
media (as opposed to old media): Numerical representation,
modularity, automation, variability, and transcoding.
Numerical representation refers to the possibility of a
translation of all existing media into numerical data
accessible through computers (20). A film, an image, or a
sound can be manipulated on a computer without regard to its
original format (for example through cut-and-paste
operations), since it is stored in digital code; as soon as
an old media object (such as a photograph or a book page) is
scanned/coded in numerical form, it enters the logic of new
media.
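Numerical representation can be made concrete with a toy sketch. The "image" below is a hypothetical 3x3 grid of grayscale values rather than any real file format, but once a medium is numbers, a cut-and-paste is nothing more than list manipulation:

```python
# A hypothetical scanned image, reduced to a 3x3 grid of grayscale
# values (0 = black, 255 = white). Any digitized medium, whether
# sound samples or text characters, is ultimately such a list of
# numbers.
image = [
    [0,   128, 255],
    [64,  128, 192],
    [255, 128, 0],
]

# "Cut" the top row and "paste" it at the bottom: once encoded
# numerically, manipulation is format-agnostic data shuffling.
row = image.pop(0)
image.append(row)

print(image[-1])  # the pasted row: [0, 128, 255]
```

The same operation would work unchanged on a row of audio samples or a line of text codepoints, which is exactly the point of the principle.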
Modularity and automation point to the fact that when composed
into a new media object, data items retain their distinct,
original structure. Think of a website: Its
content is distributed over a database, with images, sounds,
and text usually being stored in different folders. A
Website is then assembled automatically by a programmed HTML
file that calls up the modules; in fact, if a page has
several frames and works with dynamic content (a Website
that requires a user log-in, for example), the content
modules are probably even stored on different
computers. So much for the
deconstruction of new media objects: Starting at a higher,
metaphorical level, all modules are equal; on a lower level,
all modules are hierarchical, since they are organised in a
system of hierarchical folders; on the lowest level, the
modules again become flattened out into a stream
of binary code.
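The modular assembly described above can be sketched as follows. The module names and page fragments are hypothetical, but the principle, distinct modules that retain their identity while a template composes them into one surface, is the same one a browser enacts with HTML files, images, and stylesheets:

```python
# Hypothetical content modules, stored separately (in practice:
# different folders, databases, or even different servers).
modules = {
    "header": "<h1>Victory Garden</h1>",
    "body":   "<p>One of many possible paths through the text.</p>",
    "footer": "<p>Eastgate Systems</p>",
}

# The template plays the role of the HTML file that "calls up"
# the modules; each module keeps its distinct structure.
template = "<html><body>{header}{body}{footer}</body></html>"
page = template.format(**modules)

# The assembled surface looks coherent, but each module remains
# separately addressable and replaceable:
modules["footer"] = "<p>Edition 2.0</p>"
page2 = template.format(**modules)

print(page != page2)  # True: swapping one module changes the whole
```

The coherence of the rendered page is thus produced at assembly time; on the storage level the modules stay discrete, which is what makes new media objects open to recombination.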
New
media, then, essentially remains open to changes.
Old media, of course, is not put together on user request
(except in a metaphorical way): all copies of a book
look the same, and an illustration cannot vanish, or be cut,
changed, and later inserted again. "The epic world is an
utterly finished thing, not only as an authentic event from
the distant past but also on its own terms and by its own
standards; it is impossible to change, to re-think, to
re-evaluate anything in it," says M. M. Bakhtin
(Bakhtin, 1981: 17). As a closed object, a book
structurally does not permit changes; annotations are always
discernible as such from the main text, and errors can only
be corrected in another edition; thus books as old media
objects can be read as a sedimented strategy for unification
and closure of a content that is divergent, or antagonistic,
whereas new media objects remain open and liquid.
Any political interpretation of new media will then have to
take into account the module codes and the form in
which they are remixed.
The most important principle of new media, for Manovich, is
transcoding. Fredric Jameson describes this aspect for
cultural criticism as "the invention of a set of terms, the
strategic choice of a particular code or language, such that
the same terminology can be used to analyze and articulate
two quite distinct types of objects" (Jameson, 1981: 40). In
computer culture, of course, transcoding is not a strategic
invention but rather the everyday operation to translate
something into another format (Manovich, 2001: 47). But
Manovich takes the concept of transcoding further,
suggesting that, in the last instance, the socio-political
sphere and computer culture are being transcoded when
cultural categories and concepts are "substituted (...) by
new ones that derive from the computer's ontology,
epistemology and pragmatics" (47). New media
logic transforms everyday culture in many ways: think of
the useless, interface-like forward/back-buttons that have
entered contemporary graphic design. On a higher level, we
are browsing through a cultural catalogue to choose modular
clothes, music, friends, or food and to copy and paste these
things into our lives: We start seeing the world around us
as a database (it is no wonder that Kittlerian, heavyweight
media theory has started to advance the concept of
Kulturtechnik). Furthermore, the principle of transcoding, as has
often been suggested throughout the last ten years or so,
holds some new implications for authorship.
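The "everyday operation" of transcoding, translating the same data from one format into another, can be sketched with Python's standard library; the catalogue entry below is a hypothetical example:

```python
import json
import csv
import io

# A hypothetical catalogue entry: one cultural object as
# structured data.
entry = {"title": "Afternoon, a Story", "author": "Michael Joyce", "year": 1990}

# Transcoding in the everyday sense: the same content, re-encoded
# into two different formats, each with its own affordances.
as_json = json.dumps(entry)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "author", "year"])
writer.writeheader()
writer.writerow(entry)
as_csv = buf.getvalue()

# Either encoding carries the same content back out again.
assert json.loads(as_json)["year"] == 1990
print(as_csv.splitlines()[1])  # "Afternoon, a Story",Michael Joyce,1990
```

Manovich's stronger claim is that this routine operation becomes a cultural one: the categories of everyday life are likewise re-encoded into the computer's formats.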
In
The Language of New Media, Manovich attempts
to grasp this by referring to the figure of the DJ:
Programming a new media object seems to be something like
the record mixing of the DJ in modern musical culture (many
DJs prove this logic when they stop working with analog
media altogether and employ notebooks for their sets). The
German Connex I/O project (http://www.c-io.de)
has taken this up and developed the concept of the "text
jockey" (TJ), but in my mind, such metaphors of the DJ/TJ
remain sketchy. I
would rather put forward an additional principle of new
media to understand the different role of authorship in new
media objects more clearly: instability.
As I have said above, I regard a book as a sedimented strategy
for closure of a divergent socio-political content, so what
happens if content and strategy are not sedimented but
modular and liquid? Doesn't the author then have to juggle
with unstable objects that can at best temporarily be forced
into a coherent form?
And what happens if there are multiple authors?
Authorship,
in my mind, then generally becomes a matter of coping with
unstable links and programs.
In a nutshell, the computer can be regarded as a desiring
machine, so authorship becomes charged with intimacy or a
closeness that can never be fully attained. One of the most obvious
illustrations of instability as a sixth principle of new
media are the characteristics of pornography on the
internet: Similar to the early stages of other cultural
technologies such as film, there is a fascination with the
indexical in so-called "adult entertainment"
chatrooms ("Are you masculine or feminine?" is the
first question asked in any conversation), but at the same
time the indexical is heavily disturbed by the instability
of technology: the images are grainy and Webcams deliver
a slow, flip-book-like picture quality, for
instance. This
is where the other five principles come in to oppose, if
combined with instability, the fascination with the
indexical in old media objects: Everyone seeks to be close
to everyone through the machine, even if that remains an
empty gesture in the last instance (this has been called the
desire for the real simulation). Furthermore, the teleactive
aspect of Web pornography can be a way to interpret new
media socio-politically: The discourse about intimacy or
closeness in the directing of another person via chat and
the fascination if the person did what one told her to do
(Webcam feedback) also highlights the impossibility of
attaining stable links and thus the impossibility of the
fullness of politico-social relationshipsand the
ongoing desire to nevertheless connect.
But to come full circle in my argument, even the communities
of hypertext authors can be read as imagined desiring
communities. New media authorship, then, is a kind of
authorship that takes place within an environment of
unstable technology.
4. The Innocence of the GUI
"The Macintosh interface is designed to provide a computer
environment that is understandable, familiar, and
predictable."
Apple Computer, Inc., Macintosh Human Interface Guidelines
Computer
programs and hypertext objects today typically
run on a computer workstation that consists of
the computer, a monitor, a mouse, and a keyboard. The graphical user
interfaces of today can thus actually be regarded as a
medium between numerical computer data and the user, with
software applications as the main and (necessarily)
mediating layer, so the formal aspects of interface design
should substantiate my thesis that the interface is a site
where absent cultural and social contradictions clash and
meaning is being dialogically produced for a cultural
community, in that the interface constructs out of an
unstable, messy, or liquid data world a surface that seems
coherent to the user. In
what follows, to set the general backdrop for my analysis of
the politics of hypertext in the next chapter, I'll try
to substantiate this thesis by looking at the interface
designs of two prominent operating systems (Mac OS and
Windows XP), along with the computer programs AOL
8.0/Internet Explorer and Mozilla/Netscape 7 that are
usually used to access internet-based hypertext works.
Most
graphical computer environments of today (such as Windows
XP, Gnome, or KDE) derive their interface design from the
first Macintosh computer that Apple introduced in January
1984. Arguably, we're now moving toward the invisible
computer that is integrated, for instance, into human
clothing, but Apple's initial desktop metaphor still serves
as the most widely used interface, since the company's idea
of a relatively inexpensive personal computer was apparently
years ahead of the computer development at the time of its
design (expensive multi-user systems prevailed that were
securely stored away in some computer lab on a university
campus). Interestingly,
although Apple had the possibility of designing a product
almost entirely from scratch, the strategy turned out to be
quite conservative. The
company puts its strategy this way: "The 80 percent
solution means that your design meets the needs of at least
80 percent of your users. If you try to design for the
20 percent of your target audience who are power users, your
design will not be usable by the majority of users"
(Apple Computers, 1992: 35). Out of marketing
considerations, then, Apple apparently reduced the relative
complexity of text-based computer interfaces from the start
to familiar, graphical metaphors of everyday U.S. life, thus
heeding the warning of Ben Shneiderman's Designing the User
Interface: "Surprising system actions, tedious sequences of
data entries, inability or difficulty in obtaining necessary
information, and inability to produce the action desired all
build anxiety and dissatisfaction" (Shneiderman, 1998:
75).
Generally, the Apple solution became "point and click"
rather than "type a command," and to design and communicate
an ease and simplicity in the use of a Macintosh computer
has ever since the design of the first Mac been Apple's
major promotional strategy. In June 2002, for instance,
Apple launched the famous "switcher" campaign (ads that
feature real-life people who switched from PC to Mac) that
again makes use of such terminology: "More people are
interested in switching from PCs to Macs than ever before.
See why they made the change and how easy it was. (...) And
understand how Macs can make your life easier and your
possibilities endless" (http://www.apple.com/switch).
In its official design handbook from 1992, the Apple Human
Interface Guidelines, Apple developed eleven principles
of interface design by which the company tried to
incorporate ease and simplicity into its products: Metaphors, direct
manipulation, see-and-point, consistency, WYSIWYG, user
control, feedback and dialog, forgiveness, perceived
stability, modelessness, and aesthetic integrity. The Apple Macintosh
interface thus ideally presents the user throughout with
nicely designed, familiar metaphors (such as the trash can
or the desktop itself) which one can interact with in
close-to-real time by using a point-and-click device while
immediately seeing results or getting a failure feedback and
a possibility to undo the action.
Since Microsoft, having copied the Mac interface and its
principles almost one to one, dominates the personal
computer market with its Windows operating system, the
socio-political purpose of virtually every user interface
today can be said to be to create "safety nets for
people" (Apple Computers, 1992: 10).
From the principles in the official Apple design handbook, I
think it's already clear that, at its inception in 1984, the
personal computer as we know it in 2002 was not arbitrarily
designed into the "friendly computer" (Microsoft on its XP
website). Rather, the eleven principles seem to rest on
specific socio-political assumptions that Apple expresses in
the Interface Guidelines discourse on the power user and
stability. First, the Guidelines strictly separates the
features that the so-called "power user" needs from those
that "the rest of us" play with.
The Guidelines explicitly advise the interface designer not to "hide features in your application by using abstract commands" (Apple Computers, 1992: 8) and not to "use technical jargon or computer science terminology" (307). Whereas the power user of an editor such as Emacs might have extensive keyboard shortcuts that call up the program's many functions, "the rest of us" are better left with a few directly visible choices in plain English, or so Apple says.
After all, we might even do harm to the computer, or, as the Guidelines nicely put it, the goal of today's "safety net" interfaces is to achieve a "balance between providing [the power] user with the capabilities they need to get their work done and preventing [the rest of us] from destroying data" (9).
Such expert politics are incorporated into today's interface design in many ways, most notably perhaps in Windows XP, Microsoft's most recent operating system, in the so-called principle of "gradual disclosure." Gradual disclosure means that, for instance, in Office applications such as Microsoft Word, the menus show only a small number of commands by default (such as "format paragraph") but hide more specific commands (such as "format styles") from the user; those commands are only available to the "power user" who moves his point-and-click device over a little arrow at the bottom of the opened menu.
Another, more general instance of gradual disclosure in the Windows XP interface is data visibility in the file manager Explorer. By default, the Explorer starts with a subfolder that contains only user data ("My files") and hides from view the content of subfolders that contain programs or system files, such as C:\Program Files or C:\Windows\System. On opening such a subfolder, the Windows XP interface warns: "This folder contains files that are important for your system's stability. You should not modify the content of this folder." For expert use, below the large-lettered warning is a small-font link that reads "Show folder content." Expert interface politics is closely intertwined with the discourse of stability that serves as the other bottom line of Apple's eleven design principles. Regardless of Apple's claim to sell the more stable operating system, both Mac OS and Microsoft Windows exhibit design features that shore up the user's perception of the stability and continuity of the GUI.
The most prominent example of stability design is the taskbar, located at the bottom of the screen in Windows XP and at the top in most versions of Mac OS. No matter how many programs you are running, in Windows XP the taskbar is always visible: by default it tells you the local date and time (thus locating your physical body), always shows you a pop-up start menu (giving the power to execute commands), and usually represents all running programs as small icons. The taskbar visually remains "always on top," meaning that you cannot move another window over it, so it is the most prominent stability feature in every interface (even alternative shells such as LiteStep have one). Other stability features include the feedback that the interface gives when the computer calculates for a longer time; usually this takes the form of a growing bar at the bottom of a window (as in Internet Explorer) or a "remaining time" pop-up. Familiar icons such as the Recycle Bin or Trash, which have a specific place in many different interfaces (the upper left corner in Windows XP), make novice users feel at home instantly. Also, the continuity of design throughout all applications is important for the stable look-and-feel: every program has a menu bar, and its first and second entries are usually "File" and "Edit."
Now, all these stability features are linked to the expert discourse in the following way: they actually cover up an unstable system from "the rest of us" (Apple has even carried this into the design of the iMac and the iBook themselves: closed, shining entities). If you run an alternative shell such as LiteStep, a little monitor will show you the data writing and deletion that constantly takes place in your computer's virtual and physical memory; thus, much of the data representation you see in, say, the Explorer file manager actually refers to a quite unstable heap of data. As even the Apple Guidelines remind the designer-expert, it is the "perception of stability" that you want to preserve, not stability in any strict physical sense (11).
Given the inherent instability of new media objects, it is perhaps little surprising that Jakob Nielsen and Don Gentner came up with their "Antimac" interface, which turns most of the Apple principles upside down (although the authors, for whatever reason, emphasize that they do not think the Mac is bad). The basic assumption of Nielsen and Gentner in their "The Antimac: Violating the Apple Human Interface Guidelines" is that while the Mac/Microsoft interface we are using today might be appropriate for teaching novice users what a computer can do, today's computer users are people with extensive computer experience who want to manipulate huge numbers of complex information objects while being connected to a network shared by immense numbers of other users and computers. In other words: Apple's "80 percent solution" doesn't work if your users are the "Post-Nintendo Generation" (Nielsen and Gentner, 1995). In the Antimac, Gentner and Nielsen detect a number of problematic aspects of what they call the "WIMP" model (windows, icons, menus, pointer) of the original Apple Macintosh GUI. Although the principle of metaphor usage, for instance, may help the novice user ("Oh, the trash can is where my deleted file went!"), metaphors usually hide those computer capabilities that go beyond the actual metaphor: the trash can, for instance, saves deleted files on every physical drive separately, but if you "empty" it, all data are gone from all drives; the possibility of deleting only the files on, let's say, the C: drive is undermined by the very use of the metaphor.
The problem with the principle of consistency, according to Gentner and Nielsen, is that although learning might be reduced if new media objects look the same, new possibilities are overlooked. As the Antimac puts it, for Apple and Microsoft "we're still children in the computer age, and children like stability. They want to hear the same bedtime story or watch the same video again and again. But as we grow more capable and are better able to cope with a changing world, we become more comfortable with changes and even seek novelty for its own sake."
Finally, WYSIWYG is inappropriate to computer usage in 2002, since the "What you see is what you get" principle "assumes there is only one useful representation of the information: that of the final printed report." The principle thus overlooks "that it may be useful to have a different representation when preparing the document. For example, we may want to see formatting symbols or margin outlines, or it may be useful to see index terms assembled in the margin while we are composing." Against this interface politics of the beginning new media age, Gentner and Nielsen set an interface that features the central role of language, a richer internal representation of objects, expert users, and shared control. Instead of a poor office imitation, then, they see the computer today as a ubiquitous tool to work, communicate, and play. The computer, so to say, introduced the new ethic of "You won't always have to work that hard" instead of giving you the early digital capitalist "Power to Be Your Best" (Nielsen).
Of course, when recalling Althusser's statement that "men represent their real conditions of existence to themselves in an imaginary form," even the Antimac becomes another imaginary, cultural representation of an absent socio-political reality. For instance, to invoke a gendered reading of both Mac and Antimac: one could say with Robert Milthorp that the Antimac expresses "men's fascination with technology [that] is linked to the masculine need to be in control of the material world, to know how to extend that control, to be able to act, and to be independent of reliance on others" (Milthorp, 1996: 137). However, I do not want to elaborate on this criticism here, but instead underscore that the very Antimac alternative points to the existence of that space of inherent instability of new media objects which is so central to my argument.
Now, in the coming age of the highly networked computer, Web browsers are the other important new media objects that frame and politically shape data within the GUI. I'll look at the AOL 8.0 software and Mozilla/Netscape 7 in order to show how this might be so. America Online, a subdivision of AOL Time Warner, claims to be the single largest access provider to the Internet, and the AOL member figures, set in relation to the U.S. population, do indeed suggest that many people (at least in the U.S.) access the Web through the company's software. According to the company's website, AOL has more than 35 million members of its flagship AOL service, along with more than 3 million CompuServe members, 120 million registered users of ICQ, and 48 million registered users of the Netscape.com service (www.aol.com). In addition to those 206 million users, AOL also operates popular services such as AOL Instant Messenger and the music player Winamp.
No doubt, then, America Online has played "a major role in creating the consumer online experience" worldwide, as the AOL website claims. Now, similar to Apple's "80 percent solution," at the heart of the AOL marketing strategy stands "providing convenient, easy-to-use services for mass-market consumers." But as we have seen above, the internet remains a new media object that is unstable and multilayered, and in reaction to this AOL has developed its own version of an Apple Macintosh desktop for internet access, namely the AOL software suite. This software contains the AOL Mosaic Web browser, an email client, chat programs, and other tools conveniently compiled into one package that can be downloaded from the company's website or found free in many stores or on the CD-ROMs that come with digital lifestyle magazines.
Similar to Apple's Macintosh, the installation instructions on AOL's website address a novice computer user: they highlight the ease and simplicity of using the software ("Follow the easy online instructions to install your FREE America Online Software!") and the speed with which the novice user will get accustomed to the new programs ("You'll be enjoying the benefits of America Online in no time!"). AOL is also quick to warn the novice user of the dangers that might lurk in her computer's file system: "Important: Be sure to write down the filename and path to the directory where you save the file." After the user has installed the program ("Look for a file that starts with setup!"), she is "ready to experience the Internet in an instant!" The mission statement page of the German AOL subdivision nicely summarizes the general AOL strategy: "In the infinite world of the internet, AOL offers you a home. An intimate place where you meet friends, where you feel cared-for and safe."
Similar to Apple, AOL is aware that the company is painting a neat surface/interface over something that is messy, fragile and unstable. And similar to Apple, AOL silences the space for difference on the internet, a strategy that can be deduced from what AOL takes its term "netwise" to mean. "Like the rest of the world, the Internet may contain some material that is inappropriate for young audiences" is one of the conclusions AOL draws from the instability factor (one might imagine "for any audience" as an alternative ending). To answer this problem, AOL has coined the term "netwise" for what it likes its members to be: informed about those countermeasures against messiness and instability which AOL gives you: "Working together, we can make the online environment a safe and rewarding experience." Not surprisingly, and in stark contradiction to what one might imagine a term like "netwise" to signify, AOL's safety strategies rely heavily on automatic filtering software ("Filtering software like CyberPatrol, NetNanny, and SurfWatch can help keep children from inappropriate online areas") and on AOL-controlled, on-the-surface security settings (similar to those of Microsoft Internet Explorer).
AOL devised software improvements for the "netwise" user, such as giving members "greater control over their incoming mail with a mail sorting feature that lets them choose to view only messages from people they know," along with "new self-expression features, including animated Buddy Icons, a choice of new Buddy Sounds, colorful backgrounds and stationery that let members tailor the appearance of their e-mail and instant messages to reflect their personalities and moods," and a choice of "(...) six different versions of the Welcome Screen offering distinct programming tailored to member interests and updated through the day," including "Headlines, Business News and Sports," "Headlines, Latest Music, Games and Homework Help," and "Headlines, Nightlife and Great Discoveries." The "netwise" person, then, uses MatchChat, Music Share, and Buddy Share technologies to let their computer find out their likings.
Generally, then, the templateverse of AOL seeks "to build a global medium as central to people's lives as the telephone or television," and this is where AOL fails to grasp the new logic of the net. German hacker Dragan Espenschied has discussed this in his "How AOL influences its users," observing that within the AOL software environment basic principles of new media no longer work: "there does not exist an option to save or edit the content of a viewed site" and "one cannot rely on the principle of copy and paste." His analysis culminates in the assertion that the totality of the Web turns into "one single page out of AOL's content" (Espenschied, 2001).
In contrast to the restrictive interface policies of America Online stand the capabilities of the Mozilla software suite, which derives from the Netscape source code. Since Microsoft coupled its fast-and-easy browser Internet Explorer to the Windows operating system in 1995, the percentage of users who surf the Web with the Netscape browser has been steadily declining. In fact, Netscape is used by only an estimated 10% of internet surfers, and as a result many websites such as amazon.com target their content solely at the capabilities of Internet Explorer (as a Netscape 7 user you can't click on "Buy this book," for example; there is no button there to click on). Notwithstanding its low usage, Netscape, now in version 7, still stands as an alternative to the politics of Web browsing of AOL or the Internet Explorer, possibly because at one point of the usage decline the company considered the browser battle against Internet Explorer lost and decided to lay open the source code of the Netscape browser, making it freely available for modification and redistribution. With one limitation: Netscape used a restricted public licence of its own (the Netscape Public License) that allowed the company to later re-integrate the freely modified code into its proprietary program (stricter free software licences such as the GPL only allow free, open distribution of the program).
The result of this act is the Mozilla 1.0 browser, which in turn serves as the code basis for Netscape 7. Again, this is not to suggest that Netscape 7 is the "better" choice; it is just to highlight formal differences that suggest a user politics of Netscape 7/Mozilla different from that of AOL or the Internet Explorer. Now, both Netscape 7 and Mozilla are freely available for download from their respective websites (www.netscape.com and www.mozilla.org). Since the programs are software suites, modules such as the web browser, the email client, or the usenet reader have a similar look and feel, and, in contrast to AOL, they allow copying and pasting between modules. In addition, the similar experience of, for example, writing an email and writing a usenet post suggests an equality between the two actions. Similarly, the equal look of an FTP address and an http address reminds the user that email and http are only the most prominent, but not the only, services of the internet.
As Dragan Espenschied has argued, Netscape/Mozilla's "view source" menu, which makes the HTML source code of a website accessible to the user, has contributed greatly to the prominence of HTML. Microsoft Internet Explorer integrated this feature only later, and it is still less prominently placed. The "view source" menu virtually gives the user power in that it shows how the website was made: if you copy the source code, you have an identical website. In addition, Netscape/Mozilla comes with an HTML composer, which allows you to copy and edit the text of any website you access or to program your own. Furthermore, in the browser's file menu, "edit page" becomes an entry just like "open page," so there is a whole emphasis on the creation of content that is absent from AOL or the Internet Explorer, which does not come with a composer program. Mozilla even goes beyond such features: the browser is able to suppress advertising pop-ups, and many script behaviors (most of which are potentially dangerous to your computer) can be individually configured.
Seemingly, data are "collections of individual items, with every item possessing the same significance as any other," as Lev Manovich has argued in his paper on the concept of the database (Manovich, 2001: 218). But although such data equality might hold true on the level of a code stream or on a strictly physical level, I have suggested in this chapter that the modules in a database have a specific, socio-political organization within the GUI. It seems that in today's interfaces, programmers and interface designers (while trying to get closer to the machine and actually admiring its instability) seek to sell the computer as a nice-looking, reliable work tool (which it is not) to "the rest of us." Apart from interface design strategies, this dichotomy also figures in the fact that many companies that engage in discourses of stability and constancy draw a large amount of revenue from so-called "second level" services such as bugfixing, anti-virus security, and installation support.
Now, recall that, for Ernesto Laclau, self-determination "can only proceed through processes of identification" (Laclau, 1996: 55), and that a subject/user emerges in the distance between the undecidability of the structure and the decision. Recall also that a subject/user is necessarily hegemonically represented as part of a larger socio-political group. If this is so, the programmer and designer of a user interface deal with several, in part contradictory, aspects: first, the decision about what to identify with is largely taken at the level of the interface designer, not at the user level; the subject/user is left only to decide among pre-selected items. Ironically, if subjectivation in Laclau's sense thus becomes limited in the context of user interface design (the undecidability of the structure is silenced through the interface design process), the very goal of interface design, to foster identification with a specific company or interface, is undermined: an identification with a Microsoft interface might as well turn into a passion for the Apple Macintosh desktop.
So here is a gap in new media that, in my mind, might open up a space for resignification (in a traditional Marxist framework, one could even say that someone gave away the means of production here), in the sense that interface logic then becomes essentially a logic of making something unstable repeatedly look nice (each time I start Windows XP, until it crashes). Notwithstanding coherent interface designs, however, I think that users feel that the computer is essentially unstable, and that (and how) designers try to paint over this fact; the presence of the Other (the crash) "prevents me from being totally myself," as Laclau says (125). Within such a framework, hypertext artists and everyday users become part of a group that tries to bridge unstable media; the imagined, shared experience of, for instance, the Apple Macintosh user community might serve the desire for intimacy of each user. So in the next chapter, I'll try to interpret hypermedia authoring as an attempt to attain (empty) intimacy in the space that instability opens up.
5. Hypertext and Hegemony
"Can you imagine what The Futurists would have done with an Information Superhighway?"
Mark Amerika, Hypertextual Consciousness
Although I find criticizing utopian dreams about hypertext a somewhat tiresome exercise, at this point of my paper I do have to look at a few such classic, apologetic positions and at hypertext works in order to highlight the way in which projections onto those objects might be more than a cultural reflex of the mediated society. Recall two of my general points in this paper: on the one hand, there seems to be a formal gap in new media that makes hypermedia authoring something like a longing for "closeness" (to other authors) in the space that new media instability opens up; on the other hand, the unstable states in digital technology and their link logic to some extent establish what could be called a poetry of the digital. As a prerequisite for the discussion of those aspects in the last chapter, it needs to be established how new media itself might foster the desire for closeness and the drive of the hacker; negative hypertext criticism cannot grasp either one, since it cannot account for the reasons why authoritative hypertexts failed to reach a wider audience, or for the turning of chatlogs, emails, or peer-to-peer (P2P) network structures into lifelike art. As we'll see in the next chapter, digital poetry has surprisingly little to do with the standard, authoritative authoring of artworks/hypermedia; on the contrary, it is in everyday internet culture that such poetical gaps emerge.
One of the classic Utopian projections onto hypertext within a U.S.-American context can be found in Jay Bolter's Writing Space, a book that came out in 1991. In a nutshell, Bolter argues in the book that the hypertextual writing space "becomes a metaphor (...) for the human mind" (Bolter, 1991: 5). The concept of the human mind that lies behind such a statement is, of course, that of a fragmentary and associative structure, which comes close to postmodernist visions, or to what Gilles Deleuze and Felix Guattari call the rhizome in their A Thousand Plateaus.
In fact, Bolter explicitly equates nature and technology when he says that "the book of nature is a hypertext"; his argument here is in line with the still fashionable conception of the human mind as a computer (106). Since I am concerned with the form of hypertext, I will not go into the discussion of the computer metaphor of the mind here, but rather criticize below what, for Bolter, formally derives from his initial standpoint: having linked digital technology to human thinking, Bolter argues that electronic writing "opposes standardization and unification as well as hierarchy. It offers as a paradigm the text that changes to suit the reader rather than expecting the reader to conform to its standards" (233).
In a later piece, "Degrees of Freedom," Bolter elaborates on this thesis by saying that "the author of a hypertext is less a commanding figure than the author of a printed work. For the author's work is not a product of his ego alone; instead, the author works in collaboration with the readers to create the text" (Bolter, 1995). George Landow's Hypertext 2.0, another classic work in the field of new media theory and hypertext, rehearses a similar argument: "The presence of multiple reading paths, which shift the balance between reader and writer, thereby creating Barthes's writerly text, also creates a text that exists far less independently of commentary, analogues, and traditions than does printed text. This kind of democratization not only reduces the hierarchical separation between the so-called main text and the annotation, (...) but it also blurs the boundaries of individual text" (Landow, 1992: 25). Starting from an initial equation of digital life with biological/political life, one could thus position the hypertext reader in a much more active role than the traditional book reader, even to the extent of seeing hypertext as the only democratic reading medium that exists. Or so Landow says when he sums up that hypertext provides "an infinitely recenterable system whose provisional point of focus depends upon the reader," who becomes "a truly active reader" (36). Again, I want to emphasize that I don't simply want to bash hypertext enthusiasts, so let's first look at one part of Mark Amerika's new media artwork Grammatron to see how Bolter and Landow might have arrived at their positivistic ideas, and then discuss the limits of their (and Mark Amerika's) outlook.
Hypertextual Consciousness 1.0 (http://www.grammatron.com/htc1.0) is the theoretical, fourth part of Amerika's early internet art project Grammatron, which has by now become something like an internet art classic: it was one of the first internet art projects to be exhibited at the Whitney Biennial of American Art. Amerika programmed Hypertextual Consciousness (HTC) around 1995, so it originally relied on the Netscape 3 program architecture to work correctly. Today the work still runs without compatibility problems under Netscape 7/Mozilla, since HTC is largely a text-based collection of HTML pages and seldom uses more complex new media objects such as sound or animated images. In fact, Amerika's reliance on the then-prominent Netscape browser and its (in contrast to Microsoft's Internet Explorer) ever since relatively consistent style of HTML tag usage (HTML tags are short commands within HTML code, such as the <b> tag, which tells a browser to print bold type) might even have been vital for the work's reputation as one of the few "net art classics" in the first place.
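To illustrate what such tag usage looks like, here is a minimal sketch of the kind of plain, text-based HTML page HTC consists of; the file name, title, and sentence are hypothetical, only the <b> tag is taken from the explanation above:

```html
<!-- minimal sketch of a text-based page; file name and text are hypothetical -->
<html>
  <head><title>Hypertextual Consciousness</title></head>
  <body>
    <!-- the <b> tag tells the browser to render its content in bold type -->
    <p>Narrative consciousness enters the <b>electrosphere</b>.</p>
    <a href="next.html">Next</a>
  </body>
</html>
```

Markup this simple renders virtually identically in every browser generation, which is one reason a 1995 work can still display without problems in Netscape 7/Mozilla.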
Hypertextual Consciousness 1.0 runs within a browser program on a standard graphical user interface. When accessing the work on a computer interface consisting of operating system and browser, the reader is told on the first page that HTC is "an exploration into cyborg-narrators, virtual reality and the teleportation of narrative consciousness into the electrosphere." Interestingly, HTC, being the theoretical part of Grammatron, does give the user the opportunity to select from different text links to follow, but it branches into topical units that (with a few exceptions) are easily identifiable by their file address in the location bar of Netscape 7: the files a-p.html to a-p13.html of Amerika's work, for instance, describe his concept of "Avant-Pop" in an ongoing slide show, with almost every text file containing just one link that leads to another file. The very file names are traces of Amerika acting as a sorting hand in a database that he then tries to link a little more randomly.
This makes the fact that Amerika links the text "Next slide please" in the section about books (book3.html linking at this point to book4.html) somewhat less ironic: a large amount of Amerika's site works in this way.
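The slide-show structure described above can be sketched as follows; the page text is a hypothetical reconstruction, only the file names a-p2.html and a-p3.html are taken from the work:

```html
<!-- hypothetical reconstruction of one "slide" in the a-p.html chain -->
<html>
  <body>
    <p>Text of the a-p2 slide: a passage on Avant-Pop...</p>
    <!-- the single link, hard-wired to the next slide in the sequence -->
    <a href="a-p3.html">continue</a>
  </body>
</html>
```

With only one hard-wired link per page, the "multiple reading paths" collapse into a linear sequence, which is precisely the sorting-hand trace left in the numbered file names.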
In the section on Avant-Pop, Amerika explains how Avant-Pop differs from postmodernism and what role the artist/author takes on in the new paradigm: "Despite its early insistence on remaining caught up in the academic and elitist art world's presuppositions of self-institutionalization and incestuality, Postmodernism found itself overtaken by the popular media engine that eventually killed it and from its remains Avant-Pop is now born," Amerika writes (http://www.altx.com/htc1.0/a-p2.html). Furthermore, Avant-Pop artists "have had to resist the avant-garde sensibility that stubbornly denies the existence of a popular media culture and its dominant influence over the way we use our imaginations to process experience. At the same time, A-P artists have had to work hard at not becoming so enamored of the false consciousness of the Mass Media itself that they lose sight of their creative directives" (http://www.altx.com/htc1.0/a-p3.html).
There are, in my mind, a number of problems with Amerika's statements in HTC that link directly to what Landow and Bolter propose in their work. As Amerika states, "consciousness (a hypertextual construct) is now compatible with more radical forms of random departure or instantaneous clickual realities than previously thought possible" (http://www.grammatron.com/htc1.0/narr.html). Amerika reasons in much the same way as Bolter here: a direct link between biology and new media is proposed, in the sense that new media finally brings the human mind's functioning into close-to-material existence. But the equation of "organic" life and digital life limits the mind and social interaction to the logic of the computer metaphor, in much the same way as the Macintosh desktop metaphors limit the actual computer capabilities. The whole misunderstanding about the "democratic" nature of hypertext might actually have its origin in this equation: the mind is free, and so is hypertext, because it is a model of the mind. Since, as Michel Foucault has taught us, the mind is not exempt from power relations but, on the contrary, deeply embedded in them through the concept of biopower (and resistance "makes use" of this very fact, or so Judith Butler has added), the equation becomes absurd.
But apart from this, the question arises whether the very idea that hypertext is essentially democratizing is itself an outcome of the formal hegemony of the interface and of computer semantics, an argument which I will take up later. Suffice it to point out here that Landow's use of "democratization" as a noun presupposes the active advancement of democracy and thus actually requires biopower. But to return to Mark Amerika: Amerika relativizes his own findings when he does away with any modernist avant-garde concept in his development of Avant-Pop: "Literary establishment? Art establishment? Forget it. Avant-Pop artists wear each other's experiential data like waves of chaotic energy colliding and mixing in the textual-blood while the ever-changing flow of creative projects that ripple from their collective work floods the electronic cult-terrain with a subtle anti-establishment energy that will forever change the way we disseminate and interact with writing" (http://www.altx.com/htc1.0/a-p9.html). Apart from overusing the future-tense "will" in this passage, Amerika himself can easily be said to be the establishment that he criticizes (a well-known problem for the 60s generation): having exhibited his work at most of the important new media shows all over the world, he is part of the first generation of new media artists whose work young hackers today might consider prominent and, in its technological aspects, boring. In fact, Amerika's understanding of the artist as the beacon of Avant-Pop is, in the final instance, not far removed from a conventional, modernist conception and its avant-garde claim (interestingly, the term "avant-garde" derives from military use and thus shows the working of biopower again).
Since I am concerned with the form and politics of new media objects in this paper, at this point I will discuss Amerika's understanding of reader- and authorship (and of books as old media) in more detail. Along with Bolter and Landow, Amerika advances the idea that "with a change of the role of author from distinct self to collective-self comes a series of other complementary changes that radically affect the way we interact with narrative environments. Instead of the author acting as a function of discourse, we will see the proliferation of cyborg-narrators who function as networkers who create publishing-nodes within cyberspace" (http://www.grammatron.com/htc1.0/hyper.html). The networking of the "cyborg-narrators," as we have seen above, is to a large extent socio-politically influenced by interface and browser design/usage (given that such narrators wish to exchange artwork source code at all, a practice much more heeded in electronic music production than in literary production).
Amerika's
Grammatron itself is an example of how the
shift to the "collective-self" seldom takes place
in conventional hypertext works.
Using Mozilla/Netscape 7, I can, for instance, save Amerika's work, open
and change it in the Netscape Composer HTML editor (which,
again, comes with the browser for free) and
disseminate those changes. However, there is no
programmed function in the actual work for collaborating
with Amerika on his text. The notion of the author as
"collective self" is thus actively limited by the
work itself, not by the formal capabilities of the software.
It must be concluded that Amerika's view of
reader-author collaboration is largely that the reader
"creates" a text by choosing from multiple reading
paths, a notion that misperceives the
formal nature of the Web in much the same way that early film
producers misunderstood their medium when inserting text
into silent film. Furthermore,
the process of collaboration in the perception of an
authoritative artwork is nothing new in the world of art:
all art perception rests on it, even to the extent that
the practice of iconoclasm (the destruction of artworks)
presupposes collaboration and, similar to Mark
Amerika's work, relies in the final instance on the logic
of the conventional art institution.
The
(largely U.S. American) misunderstanding of authoritative
artworks as "liberating" or "democratizing" (sic) rests
on false assumptions about the nature of the book (which
usually serves as the "closed" counter-example to the
"open" hypertext) and about the logic of the link
on which the whole hypertext concept rests; both, as I
will show, in the last instance overestimate the power of
the free, liberal subject.
Mark Amerika describes the book as a prison for the thoughts
of the writer when he says that the cyborg-narrator,
"whose language investigations will create fluid narrative
worlds for other cyborg-narrators to immerse themselves in,
no longer has to feel bound by the self-contained artefact
of book media" (http://www.grammatron.com/htc1.0/cyborg-narr.html). Similar to Bolter and
Landow, Amerika imagines the writer as a freely acting subject
in a pre-Foucauldian fashion when he says that every book
writer is "being held hostage by the page metaphor and
its self-limiting texture as a landscape with distinct
borders." But similar to the
perception of art, the practice of reading wouldn't be
rewarding if power relations were taken out of the picture.
In fact, it could be argued that part of the pleasure of
reading is the very giving-away of control to the
narrator/author, a state that has been called "looking
through the page, not at it."
The "democratization" of new media that ties its
user to her seat in front of a computer (at least with the
current interfaces) equals that of the book, which ties a reader
to an armchair (though you can at least take the book anywhere
you want).
So
it seems more interesting to discuss why
writers such as Mark Amerika think that they are not
"bound by the self-contained artefact of book
media" when programming hypertextual works than to assert
(as most hypertext arguments do) that, after all, they are
bound by the form. With such a way of posing the question, of
course, interactivity becomes a useless criterion for
distinguishing books from hypertext, since books require
interactive behavior on the part of the reader
as well. Now,
on the discourse level, "liberation" theories
about hypertext, apart from overemphasizing "choice" and
reader "freedom" in a programmed menu, also
undervalue what Mikhail Bakhtin has called
heteroglossia in the book medium; that is, they
do not see the book as a contradiction-ridden,
"tension-filled unity" (Bakhtin, 1981: 272). In a misreading of the
outward form, the content of the book is seen to perfectly
mirror the outward, monolithic form in that it contains only
a single voice, a concept that, if phrased in this way,
sounds strangely out-of-date, since language, with Bakhtin,
can be understood to be diverse and contradictory:
"Every utterance participates in the 'unity of
language' (its centripetal forces and tendencies) and
at the same time partakes of social and historical
heteroglossia (the centrifugal, stratifying forces)"
(Bakhtin, 1981: 272).
As
we have seen throughout this chapter, central to the liberatory hypertext argument is
the "freedom" of the link as opposed to the
"closed" form of the book. Whereas Steven Johnson reminds
us in his Interface Culture that the link
should usually be understood as a synthetic device, "a tool
that brings multifarious elements together into some kind of
orderly unit" (Johnson, 1997: 111), the principle of
linking itself for Bolter and Landow seemingly points to the
liberty of the individual. However, as we can already
see in contemporary consumer culture, selection is usually
limited to prefabricated items that contain only a few,
easily digestible choices (the colors of Nike sneakers, for instance, not the
overall design). Arguably,
modern society itself, while dreaming of free choices, is in
fact limited to a few options, especially if we take into
account what Lev Manovich says about cultural transcoding:
"The modern subject proceeds through life by selecting
from numerous menus and catalogs of
items" (Manovich, 2001: 126).
Theoretically, all of this can be grasped within the Laclauian
political framework that I outlined above: the link
necessarily partakes in a hegemonic framework that actually
highlights the limits of "choice" rather than its
possibilities (which is not to say that such
limits are bad).
One could also frame the logic of the link in terms of the
Althusserian notion of "interpellation": instead
of the famous example of the policeman hailing a subject,
the link is calling us to click on it, and the
military-industrial complex (MIC) now becomes the
Ideological State Apparatus (ISA).
If the internet really is an extension of the MIC, we accept
our place within it with each click (even if we can choose
between two or more options).
This accords with Althusser's insight that "the vast majority of (good)
subjects work all right 'all by themselves', i.e.
by ideology (whose concrete forms are realized in the
Ideological State Apparatuses)" (Althusser, 1971: 181). Heath Bunting's internet artwork _readme.html is
a good example of the limits of the link in the new media
sphere: the piece is simply a Website that contains a
biographical text about the Britain-based artist, but
Bunting has linked each word to its .com extension
(and to www.and.com and so on).
At its inception in 1998, hardly any of the words led to
an existing company; as of 2002, almost all of them do.
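The mechanics of Bunting's piece are simple enough to sketch: every word of the biography becomes a link to that word's .com domain. A minimal illustration in Python (not Bunting's actual code, which was plain hand-written HTML) might look like this:

```python
import html
import re

def linkify(text):
    """Link every run of letters to the corresponding .com domain,
    in the manner of _readme.html; punctuation and whitespace
    are left untouched."""
    def to_anchor(match):
        word = match.group(0)
        # The word itself (lowercased) becomes the domain name.
        return '<a href="http://www.%s.com">%s</a>' % (word.lower(), html.escape(word))
    return re.sub(r"[A-Za-z]+", to_anchor, text)

print(linkify("Heath Bunting is an artist"))
```

Whether any given www.<word>.com actually resolves is, of course, exactly the point of the work: the answer changed between 1998 and 2002 as the namespace filled up.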
Fredric Jameson traces the whole discourse of open/good and
closed/bad that also figures in the discussion about
open source software to the paradigm of American
pluralism, "with its unexamined valorization of the open
('freedom') versus its inevitable binary
opposition, the closed ('totalitarianism')"
(Jameson, 1981: 31). Furthermore, a liberatory
link concept is based on a view that forgets that language
"is not a neutral medium that passes freely and easily
into the private property of the speaker's intentions;
it is populated - overpopulated - with the intentions
of others" (Bakhtin, 1981: 294). As Bakhtin goes on to argue,
in language generally, "in the makeup of almost every
utterance spoken by a social person (...) a significant
number of words can be identified that are implicitly or
explicitly admitted as someone else's" (354).
Thus, "democratizing" hypertext (recall the active
nature of the concept), apart from ignoring that the
referential nature of language extends into its sedimented
forms (books and hypertext), actually ties down a
word's meaning by linking it to only one specific other
location.
Finally,
a work that wonderfully illustrates the workings of power
and hegemony in hypertext linking is Stuart Moulthrop's
Victory Garden (http://www.eastgate.com/VG/
VGStart.html).
While considered a hypertext classic, Victory
Garden has (to my knowledge) never been read in terms of
the restrictions and possibilities of its form.
For this purpose, it should be sufficient to point out that
the content of the work largely deals with the Gulf War, rather
than to give a detailed content analysis, for the form will
be seen to entail the possibilities and limitations of the
subject. In other words, the
structure of Victory Garden is deeply embedded within
what has been called the military-industrial complex (MIC)
of the United States and can actually be read as a
reflection on this very fact in form (here I'm
taking up an argument that was suggested to me by
Michael Joyce). The negative take of the
implicit narrator on the Gulf War figures in a quotation
from Donna Haraway's Cyborg Manifesto, which is
linked to from a text section entitled "The Big
Game": "I argue
for a politics rooted in claims about fundamental changes in
the nature of class, race, and gender in an emerging system
of world order analogous in its novelty and scope to that
created by industrial capitalism; we are living through a
movement from an organic, industrial society to a
polymorphous, information system - from all work to all
play, a deadly game" (Haraway). This "deadly game"
is, of course, the Gulf War and, by extension of the
military-industrial complex, can also be read to be the
internet. The
figure Thea is quite explicit about this connection between
war and play: "Call it a big game," Thea
went on, "Call it the end of history, call it your New
World Order, whatever name you hang on it, it's got a
very elegant logic. Build all these tanks and planes and
guns for a war with the Russians but the Russians go broke,
oops, you just have to find somebody else to drop the old
bombs on so you can go out and buy some new bombs. This
isn't a war, it's a fucking clearance
sale."
In contrast
to this war-as-game, however, a real war seems to exist
in Victory Garden that takes place elsewhere:
"History is not about return or repetition. You
can't get back to the future.
History, the big story, is about the possibility of rapid and
fundamental change. The
kind of thing people fight wars about."
Moulthrop has linked Victory Garden to a section of his
personal Website entitled "What is a well-formed text
on the Web", in which he equates a link with real
history in that "You can never link to the same
space twice. You only think you can."
Thus, while his work is an extension of the MIC (it was
written in Storyspace, a commercial software program for
hypertext creation), Moulthrop also uses the very structure
of the internet to highlight his point.
As Moulthrop has written in an essay for the collection
Hyper/Text/Theory: "Hypertext - and its as yet
more distant cousins - will not produce anarchist enclaves or
pirate utopias" (Moulthrop, 1994:
316).
6. Shifting Paradigms
"Of all the ways of acquiring books,
writing them oneself is regarded as the most praiseworthy
method."
Walter Benjamin,
"Unpacking my Library"
Now
that I have talked at length about the limits of the
interface and hypertext, it is time to spend the rest of
this paper on speculations about the possibilities of the
hypertext principle. To
pose the question differently: In what way does new
media form actually offer spaces for resignification and
power play? What is the socio-political
function of new media?
Do peer-to-peer network structures (P2P) perhaps come close to
what has been called "lifelike art"? As Jonathan Crary points out in
the beginning chapters of his Techniques of the
Observer, a technological paradigm shift is always
useful in that it helps to think about the preceding as
well as the new medium.
So let's go back to the book to get a clearer view on
hypertext.
As I have argued in the
preceding chapters, the surface form of the interface
heavily restricts any "liberatory" use of computer
technology (apart from the fact that such a concept of
liberation is highly simplified) through, for instance,
metaphorical desktop design and the limited capabilities of the
most widely used WWW browser, Microsoft's Internet
Explorer, and of the AOL client software. However, as I have described
in chapter 4, a user necessarily has to interface with
numerical data through software, in much the same way as
hegemony is a prerequisite for political representation in a
non-technological political paradigm. While the current turning away from hypertext is,
in the last instance, an extension of the
military-industrial complex and its emphasis on
surface and form, the existence of "direct
action" in the digital world points to a formal
possibility that takes Butler's concept of
"resignification" from the symbolic level down to
the level of a serious, technological play with
possibilities. Since
the end of this paper cannot be the place to muse on
peer-to-peer network structures (and I'm only just
beginning to read about the subject), I'll have to close
by way of an analogy.
Arguably, the logic of the
digerati's dismissal of the written page is repeated in
their dismissal of authoritative hypertext, in a
misunderstanding of what a new medium actually brings about.
Sean Cubitt has reminded us that "the book, that fortress
of words, was not the sole invention of printing, which
broadcast a riot of cheap dissemination" (Cubitt, 1998:
7). The novel, which is actually
the main target of attack for many hypertext authors and
critics, is only a minor thing in the world of
printing; socio-technologically, flyers, posters and so
on were the much more important (even revolutionary) application of
printing as a Kulturtechnik. In much the same way,
authoritative hypertext works seem to be misunderstood by
literary critics to be the premier function of the internet,
while online shopping malls are the premier function for
global capitalism. If
the book is a strategy for the unification of antagonistic,
diverging content that is sedimented in the monolithic book
form, and the GUI works in much the same way, since data are
never accessed "innocently", then peer-to-peer might
disturb this logic through its decentralized,
center-at-the-margin aspects. Instead of multinational capital, maybe it is now
time for the "Other" to finally stare back at us
through our computer screens?
7. References
Althusser, Louis. Lenin and Philosophy and Other Essays. New York: Monthly Review Press, 1971.
Amerika, Mark. Hypertextual Consciousness 1.0. http://www.grammatron.com/htc1.0, 1995.
Apple Computer, Inc. Macintosh Human Interface Guidelines. Cupertino, CA: Addison-Wesley, 1992.
Arns, Inke. "Metonymical Mov(i)es. Review of Lev Manovich's The Language of New Media." http://www.artmargins.com/content/review/inke.html, 2002.
Bakhtin, Mikhail M. The Dialogic Imagination. Austin, Tx.: University of Texas Press, 1981.
Barbrook, Richard, and Andy Cameron. "The California Ideology." http://www.wmin.ac.uk/media/HRC/ci/calif.html, 1989.
Benjamin, Walter. Illuminations. New York: Schocken, 1969.
Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Erlbaum, 1991.
Bolter, Jay David. "Degrees of Freedom." http://www.lcc.gatech.edu/~bolter/degrees.html, 1995.
Bunting, Heath. _readme.html. http://www.irational.org/heath/_readme.html, 1998.
Crary, Jonathan. Techniques of the Observer. Cambridge, Mass.: MIT Press, 1990.
Cubitt, Sean. Digital Aesthetics. London: Sage, 1998.
Debray, Régis. Media Manifestos: On the Technological Transmission of Cultural Forms. London & New York: Verso, 1996.
Espenschied, Dragan. "Man muss nur klicken können: Wie AOL seine Nutzer bevormundet." http://www.odem.org/insert_coin/mythen/aol.html, 2001.
Ess, Charles. "The Political Computer: Hypertext, Democracy, and Habermas." Hyper/Text/Theory. Ed. George P. Landow. Baltimore: Johns Hopkins University Press, 1994. 225-267.
Jameson, Fredric. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca, N.Y.: Cornell University Press, 1981.
Johnson, Steven. Interface Culture: How Technology Transforms the Way We Create and Communicate. New York: Basic Books, 1997.
Laclau, Ernesto, and Chantal Mouffe. Hegemony and Socialist Strategy. New York: Verso, 1985.
Laclau, Ernesto. "Deconstruction, Pragmatism, Hegemony." Deconstruction and Pragmatism. Ed. Chantal Mouffe. London: Routledge, 1996. 47-67.
Landow, George P. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press, 1992.
Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.
Milthorp, Rob. "Fascination, Masculinity, and Cyberspace." Immersed in Technology. Cambridge, Mass.: MIT Press, 1996. 129-150.
Moulthrop, Stuart. "Rhizome and Resistance: Hypertext and the Dreams of a New Culture." Hyper/Text/Theory. Ed. George P. Landow. Baltimore: Johns Hopkins University Press, 1994. 299-322.
Moulthrop, Stuart. Victory Garden. Watertown, Mass.: Eastgate Systems, 1995.
Nielsen, Jakob, and Don Gentner. "The Anti-Mac: Violating the Macintosh Human Interface Guidelines." http://www.acm.org/sigchi/chi95/proceedings/panels/gen_bdy.htm, 1995.
Shneiderman, Ben. Designing the User Interface. New York: Addison-Wesley, 1998.
Stenger, Nicole. "Mind is a Leaking Rainbow." Cyberspace: First Steps. Ed. Michael Benedikt. Cambridge, Mass.: MIT Press, 1991. 49-58.
posted: January 31,
2003
dichtung-digital