icosilune

Stuart Moulthrop: From Work to Play

[Readings] (08.29.08, 4:49 pm)

Overview

Moulthrop comes from a general background of ludology, and spends much of his time critiquing Janet Murray's Hamlet on the Holodeck. The general idea of the essay seems to be an attempt to find other means of relevance for game studies, especially as a means of comprehending culture. Moulthrop strongly references Pierre Levy on collective intelligence, and tries to derive an understanding of games and game studies as critical to the realization of a molecular society.

Notes

Moulthrop opens his essay by noting the contemporary setting and peculiar position of game studies in the (particularly American) political climate. He references Donna Haraway on the current state of the cultural system, as polymorphous and informational rather than industrial. Haraway sees society as transforming to one oriented around play and deadly games instead of work. The idea of using play to express the cultural/political condition in the sort of post-industrial “neo-Taylorist” setting seems, as Moulthrop notes, inherently troublesome.

This idea is the first step towards realizing the molecular society, though. The essence of Moulthrop's connection is that games are emblematic of a culture of configuration, rather than consumption (or "mere" interpretation). Configuration enables a new response to media, and a new kind of awareness hitherto unavailable.

A connection is made between these ideas and the restrictive role perceived for literary criticism as applied to games. The objection here is the classic one raised by ludologists, but it is posed in the context of this larger cultural dilemma. A division is posed between drama and narrative on one side, and play, simulation, and game on the other. The idea is that the so-called narrativists expect these media to serve the function of telling stories.

The change in focus that Moulthrop desires is to turn attention to the configurative power that users/players might have over works, as opposed to the interpretive power. One of the fundamental appeals of games is participatory freedom, at least in the sense of being able to engage with the game in multiple ways: for example, seeing what happens when the puzzle fails rather than succeeds.

However, it is important to note that even in the “most interactive” of games, there are severe limits to this configuration. In GTA, I might have the freedom to commit many kinds of crime, to customize my character, to run free about the city, but there is a significant limitation in that this is all it is ever possible to do. I cannot make the game about anything else. This may be a shallow criticism, but it is exactly the same constraint posed by the practice of interpretation. The only difference is that with configuration, the openness of the work to the player is inscribed and represented within the world of the work itself, but with interpretation it is only in the mind of the reader. This difference too is not severe, since ultimately, nothing is actually *changed* in the game, at least not in the way that others might play it. From the perspective of networked games or communities, this could easily mirror the status of interpretive communities that share and communally shape interpretations.

Further difference is claimed in Alternate Reality Games (such as The Beast, made for the film A.I., or I Love Bees, made for Halo 2), with emphasis on their procedural appeal. At the same time, narratives do have their procedural mechanics, and furthermore the appeal of these, the mystery, is rarely the ineffability of the "puzzleness" alone. I would argue that the puzzle solving and the fiction of these sorts of games are inseparable, or at least, that it is their intricate and deep connection that makes the game so appealing.

Moulthrop finds that one of his main differences with Janet Murray is over the role of transparency in games. This perspective of Murray's seems to originate with Don Norman, who views transparency as ideal. All media eventually find some transparency (i.e., over time the television ceases to be a box and turns into a window). However, part of the power of computers (as found by Sherry Turkle) is their immersiveness and holding power, which derive from their inherent opacity. Transparency is furthermore not even the ideal in literature, since open works are deliberately ambiguous and inexact, and even sometimes seemingly contradictory.

A conflict is posed here, between the consumptive values of transparent media and the values of participatory media. The example given is Citizens Band radio, which fell out of use because people supposedly found nothing to say. The trend toward passivity seems to change dramatically after the introduction of online participatory culture, but the suddenness of that change is debatable. From de Certeau, we might find that individuals do more with the things they consume than merely absorb them.

Moulthrop’s conclusion nonetheless ties together the values of configuration (what one might even call openness) to the ideas of participation and procedural literacy and criticism.

Reading Info:
Author/Editor: Moulthrop, Stuart
Title: From Work to Play
Type: book
Context:
Tags: dms, cybertext, ludology, games
Lookup: Google Scholar, Google Books, Amazon

Bruno Latour: We Have Never Been Modern

[Readings] (08.29.08, 4:48 pm)

Overview

The overarching thesis is that the divide between nature and man, as proclaimed by modernism, is artificial and never existed. This divide enabled both the sciences and the humanities to make claims to absolute truth. Concealed by this divide was the emergence of hybrids, "quasi-objects", that blurred the lines between human and nature.

The intention of this is to re-examine the role and past of hybrids, and to find a new relationship between science and culture: essentially, to humanize science and make the humanities more scientific.

Notes

Chapter 1: Crisis

Latour opens by looking at the explosion of meaning systems that can be exposed by looking at a magazine. The magazine mixes science and politics and crosses many domains. “All of culture and all of nature get churned up again every day.” This issue stems from an enormous density of connections, where the myriad ways in which science and culture affect each other are exposed, leading a curious observer down the rabbit hole of connections.

Latour seems to be claiming that while this confluence of factors is densely connected, knowledge is nonetheless treated as separate from power and politics. There is some sort of perceived divide between the two. Latour mentions a number of writers who have expounded on how technology shapes society (MacKenzie on guidance systems, Michel Callon on fuel cells, Thomas Hughes on the incandescent lamp, etc.). Latour is trying to show that these relations are more than merely science or politics. The claim seems to be that a culture is very different before and after a technology. And that, despite the connections provided by magazines, the larger connections remain invisible.

Latour is concerned with the feasibility of this sort of technological-cultural criticism. He wants to determine whether it is possible to perform this analysis effectively at all. His claim is that networks are elusive with respect to the deeper anthropological problems.

Latour describes the fall of the Berlin Wall in 1989, and compares it with the first conferences on global ecology. Both socialism and capitalism began with the best intentions (to abolish man’s exploitation of man, and to reorient man’s exploitation of man to an exploitation of nature, respectively), but both paths have gone to such extremes that they turned in on themselves. It then becomes a question over what path to take next. The “antimodern” approach is to no longer dominate nature. The postmodern approach is indecisive and incomplete, while others aim to continue and push towards the modern anyway.

The idea of “modern” is hazy and difficult to define, but it is evocative. Latour’s idea is to define modern by identifying two characteristics. One is the practice of translation, which creates hybrids of nature and culture. The second practice is purification, which aims to isolate meanings and create distinct ontological zones. Translation corresponds to the development of networks, while purification enables criticism.

Reading Info:
Author/Editor: Latour, Bruno
Title: We Have Never Been Modern
Type: book
Context:
Tags: dms, postmodernism
Lookup: Google Scholar, Google Books, Amazon

WJT Mitchell: The Reconfigured Eye

[Readings] (08.29.08, 4:47 pm)

Notes

Chapter 3: Intention and Artifice

This particular text is on photography and its role in representing the truth. The credibility of photographs, after a long period of claiming to represent the whole objective truth, or at least to prove that something happened, has come into question. This criticism runs deeper than a rejection of photographs as merely objective. The idea that the camera is a viewpoint, must take a physical position, and is thus inherently subjective has long been acknowledged. However, the photograph still stood as a proof of existence.

Mitchell is concerned with the constructability of photographs and the fact that they are subject to the same kind of artifice that any other form of evidence is.

Photographs are fossilized light: momentary interpretations that have been made permanent by their exposure on film. They can be seen as records, but like any record, they construct a system and view of reality.

Photographs have a strong connection with matters of convention. For a long while, painting was rooted in convention (ancient Egyptian figures are a good example), but photography adheres to its own conventions as well. The characteristics of shading and lighting and all of these aspects are firmly grounded in the conventions of the photographic tradition. In turn, these conventions connect with our means of reading and perceiving information from signs encoded in this convention. Mitchell cites Roman Jakobson, who explains that over the course of tradition, images become ideograms which are immediately mapped to ideas, and we no longer see pictures. This idea relates again to the traditions of media gradually becoming more transparent over time.

Photographs themselves tend to be deeply connected to their subjects, though: “since photographs are very strongly linked by contiguity to the objects they portray, we have come to regard them not as pictures but as formulae that metonymically evoke fragments of reality.”

This line of reasoning also seems to connect to the ideas of simulation and sublimation. Images are seen as semiotic systems, and are internalized, or are, at least, taken to denote the truth when cast as such (which is how we have approached photography culturally). It is only recently that we have begun to exhibit the kind of awareness of the artificiality of the image that lets us criticize photographs as non-representative of the truth. I imagine that similar arguments can be made toward simulation, but its artificiality is generally much more apparent.

The part of the camera that lends it its authority is its mechanical nature. Because it is a machine, seemingly untouched by other values or human intervention, it must, it seems, mechanically capture the truth without the factor of human error. The ramifications of the "superiority" of machine reproduction and capture are documented (for instance by Walter Benjamin), but the near-autonomous power of the machine also emerges in a great deal of AI criticism.

Cameras impose the existence of a subject, though. It is impossible to take a photograph of “no particular” horse or person, though it might be possible to make a picture of them. The necessity of this particular subject is the essence of the camera’s intentionality. This intentionality is also where the camera’s subjectivity comes into play, and it introduces values based on the question of framing, perspective, and choice. Mitchell explains that this forms a spectrum: “However, Scruton’s distinction between intentional and causal components in image production is helpful, particularly if we do not insist on a clear cut dividing line between paintings and photographs but think rather of a spectrum running from nonalgorithmic to algorithmic conditions–with ideal paintings at one end and ideal photographs at the other.” (p. 29)

The problem that arises in comparing this to computation is that even algorithms have biases and values.

Digital photography and photo editing blur the lines between causal and intentional images. Something which is intentional can be hidden and made to seem a natural causal element. This perverts the authority of the mechanical process from which photographs tend to derive their legitimacy.

Reading images requires a certain interpretive labor to be undertaken by the observer, and this process is where the influence of convention comes into play. “In forming interpretations of images, then, we use evidence of the parts to suggest possible interpretations of the whole, and we use the context of the whole to suggest possible interpretations of the parts.” (p. 34)

The problem of altering images involves balancing the alteration against the requirements for consistency and coherence in the relation of parts to the whole. Beyond that, we can check for implausibility by comparing the subjects to that which we already know to be true (or might be able to verify externally). This sort of issue pulls back and seems to reflect ideas being worked through in AI and cognition. When faced with a black box that explains something we have no means of verifying or checking, but that is grounded in a photograph-like mechanic, we have no choice but to accept the result. The example Mitchell gives is the photographs of the astronauts on the moon, but that idea can be extended to other domains, such as simulation.

Mitchell writes of the general acceptance of images: “In general, if an image follows the conventions of photography and seems internally coherent, if the visual evidence that it presents supports the caption, and if we can confirm that this visual evidence is consistent with other things that we accept as knowledge within the framework of the relevant discourse, then we feel justified in the attitude that seeing is believing.”

Reading Info:
Author/Editor: Mitchell, W.J.T.
Title: The Reconfigured Eye
Type: book
Context:
Tags: dms, visual culture
Lookup: Google Scholar, Google Books, Amazon

WJT Mitchell: Picture Theory

[Readings] (08.29.08, 4:46 pm)

Notes

Chapter 2: Metapictures

This essay is about images that refer to images. It further relates to art that refers to art (partially itself, and partially other art). Going into it, Mitchell starts with Clement Greenberg and Michael Fried, who discuss how modern art has turned toward its own essence and become self-referential. This partly seems representative of postmodernism and its eternal self-reference and analysis. It also connects to self-reference in language and literature, metalanguages that reflect on languages.

Mitchell’s focus is still on images, pictures, though, but this is mentioned upfront, so we keep it in mind.

The first example given is of a man drawing a world and then finding himself inside of it. This is more of a casual and tired bourgeois type of art (it's a New Yorker illustration) than a sublime and frightening perspective on the enveloping nature of images. The fear seems to derive from images' lack of boundary and their enveloping nature: we create images, and in turn find ourselves reflected back in them.

Looking at metapictures requires a second-order discourse. The first order of an image is the "object-language", which I suppose is the manner of direct representation. Even images which self-reference (a frame within a frame, or a man painting a picture of a man painting a picture…) can be posed hierarchically by a first-order representation. This works (at least on paper) because images are necessarily finite. Extending this to a second order requires a blurring of the inside and outside.

In mathematics, first-order logic can make generalizations about classes of things, but a second order of logic is required to make generalizations about functions and relationships. With this in mind, second-order thinking involves thinking beyond direct interpretation and considering external analogies and references.

Mitchell looks at multistable images, which are ones that can be interpreted in two ways. These have different thresholds, and confuse the role of pictorial representation, but they do not formally confound representation in the sense that the New Yorker cartoon does. The second-order boundary is ambiguous, but not overtly flaunted. Images in this sense have a supposed mutability. That form of multistability is also observed in various literary forms (The Lady, or the Tiger?).

The dilemma with multistable forms is an essential question (Wittgenstein found the rabbit-duck image to be extremely troubling). The question ultimately seems to reside in where we do the identification in our minds. Is it an internal model, an external model (with respect to vision), or is it determined by some sort of grammar?

Pictures generally have some sort of presence and role in life. They can seem to have lives of their own. Thus, metapictures, which throw the reference and metaphor into ambiguity, call into question the role and self-understanding of the observer.

This is understood by looking at Foucault's writing on Las Meninas (a classical painting by Velazquez which employs some degree of reflection and ambiguity). Foucault's attention to Las Meninas is similar to Wittgenstein's dwelling on the duck-rabbit: both complicate the images and make them all the less penetrable or comprehensible. They encourage us to think not of the images directly, but of how the images relate to their culture, our culture, and our thought.

Matisse’s “Les trahison des images” serves a similar role, but instead of exploring our relationship with images (or images with images), it explores the relationship of images and words. This, in turn, is an infinite relation.

The conclusion of the chapter offers some insights: "The metapicture is not a subgenre within the fine arts but a fundamental potentiality inherent in pictorial representations as such: it is the place where pictures reveal and "know" themselves, where they reflect on the intersections of visuality, language, and similitude, where they engage in speculation and theorizing on their own nature and history."

Reading Info:
Author/Editor: Mitchell, W.J.T.
Title: Picture Theory
Type: book
Context:
Tags: dms, visual culture
Lookup: Google Scholar, Google Books, Amazon

Clement Greenberg: Avant-Garde and Kitsch

[Readings] (08.29.08, 4:44 pm)

Avant-Garde and Kitsch

Greenberg is writing to discern and define two relatively recent antipodes of art and culture. The avant-garde and kitsch are both products of modernism, and both have emerged concurrently, affecting each other through their opposition. Greenberg begins his essay by noting that the answer to their difference lies deeper than mere aesthetics.

avant-garde

The avant-garde is a product of Western bourgeois culture. Outside of this moment of culture and the abundance produced by capitalist society, the avant-garde would not have been possible, and bohemia would never have been established. The avant-garde came into being by emigrating from the old model of aristocratic patronage to instead pursue art for its own sake (as opposed to for funding). The avant-garde never fully detached from bourgeois society, still depending on its money.

As an academic and intellectual movement, the avant-garde is compared with "Alexandrianism", a certain reluctance to interfere with or comment on culture. The essence of Alexandrianism is a devotion to the repetition of themes and old masters. A consequence of this is a non-involvement in political matters. Where the avant-garde differs is in its devotion to pushing the boundaries of art. Instead of a reverence for the tradition and history of art, the avant-garde is about its future and potential. Content is irrelevant; only the form of art itself is of importance.

This devotion to art for its own sake is coupled with a sort of desire for the recreation of the world. Greenberg notes on artistic values: "And so he turns out to be imitating, not God–and here I use 'imitate' in its Aristotelian sense–but the disciplines and processes of art and literature themselves." The abstract or non-representational stems from this notion. The avant-garde artist is not imitating or re-creating the world, but imitating and re-creating the form of art itself.

This focus on form and values exposes a certain systematicity in the ideas of art. Where traditional representation evokes or creates a system of nature, the abstract evokes or creates the underlying ideas that ground representation in the first place. Instead of art that imitates life, art that imitates art. In simulation and modeling, this line of construction is prefixed with "meta".

While the avant-garde rejects content and the bourgeois society from which it arose, it still depends on the wealth of the bourgeois to survive, since the only audience with the culture and education to appreciate the strange avant-garde perspective is the audience with the wealth to afford education and culture.

kitsch

Kitsch is hard to explain, but easy to give examples of. Greenberg explains that "Kitsch is a product of the industrial revolution which urbanized the masses of Western Europe and America and established what is called universal literacy." Universal literacy certainly sounds nice enough, but it has some ominous undertones. Greenberg explains that kitsch arose to fill a demand for culture coming from the proletariat of industrialized countries. The proletariat consisted of peasants from the country who were settling in cities, simultaneously losing their taste for country culture and developing some degree of leisure time that required fulfillment.

The culture that arose from this is composed of artifacts of the existing cultural tradition. It borrows and synthesizes devices and forms from culture and manufactures products which are easily understandable and palatable for the prole audience. Kitsch is the ultimate synthesis of culture and media, and is also the ultimate recycler and disposer: it will take bits of artifacts that "work" and re-use them until exhausted. As a result (in the sense of Walter Benjamin), it destroys culture and then profits immensely.

Greenberg also explains some properties of kitsch: it is profitable, it is mechanical, it operates by formulas, it is vicarious, it is sensational, it adopts varying styles, it is the epitome of all that is spurious, it pretends to demand nothing from its customers but money. While not a formal definition, this helps clarify what kitsch might be, but it only explains, in contrarian terms, what kitsch is not: not-kitsch is not profitable, not popular, not spurious. All of these qualifiers are exceedingly vague and subjective.

Greenberg claims that artistic values are relative, but that among cultured society, there is a general consensus over what is good art and what is bad. This may work qualitatively for the classics, whose values many agree on, but there is very little agreement about contemporary works. Greenberg continues by explaining that to acknowledge the relative nature of values, artistic values must be distinguished from other values; kitsch, however, is able to erase this distinction through its mechanical and industrial construction.

A key theme in this is the notion of value, and a relative situation of values. It is a sort of intellectual and educational background that defines and establishes these values for the educated audience, and lacking these, the proles miss the value inherent in abstract works. This education supplies a history and context, which is totally missing from the world of kitsch.

wrapping up

The avant-garde imitates the process (and system) of art, and kitsch imitates its effect. The history of art had not enabled the interconnection of the artist with form, because of the nature of patronage as it had supported artists since the middle ages (which seems a little puzzling). Later, following the Renaissance, a history of artists preoccupied and lonely in their attachment to their art begins to appear.

It is capitalism and authoritarianism that turn to kitsch as being tremendously effective at profiting from and placating the masses. Greenberg explains that socialism is the only movement that can support the avant-garde, or new culture.

critical notes

Greenberg’s primary concern seems to be that only the avant-garde is producing new cultural value, through the pushing of its limits. But, this attitude leaves something to be desired. Surely, cultural value must be seen as more than a scalar quantity?

There are many subtle assumptions underlying the criticism of kitsch. Understood formally, as a synthetic product that seeks to make a profit, nearly all forms of art could be called kitsch. Ancient cultures were constantly referencing and alluding to the legitimacy of previous cultural products: Roman gods were borrowed from Greece and used to satisfy a cultural demand and need for legitimacy, yet this borrowing is not really seen as kitschy. Even kitsch is disposed to find new ideas in itself from time to time.

Many contemporary works must create something new, arguably have some artistic value, reference and synthesize, and some even have the misfortune of being popular. The label of kitsch seems to be escaped only when popularity and profit are absent. Clearly, there is a spectrum of gradations of a work in terms of its accessibility, but this is not necessarily equivalent to its artistic value. The danger of the existence of kitsch is to blur and erase this distinction, but that seems to afford kitsch much more authority than it ought to deserve.

Other contemporary works derive from forms that might be considered kitsch; while not avant-garde, they can embrace the values of abstraction and, having emerged from a popular medium, form bubbles of artistic experimentation and radical difference and creativity. For example, within the popular medium of newspaper-style comics emerge highly experimental and complex works. These cannot be said to be kitsch in their emergence, but rather wholly new products.

In this sense, it seems that the qualitative distinction between kitsch and avant-garde, while an effective border, is little more than an arbitrary line over superficial ideas of value and imitation. The image drawn here (in 1939) is evocative of an intellectual stagnancy that began in the industrial revolution, but contemporary culture has certainly changed and contains new value since Greenberg was writing. That value certainly did not all stem from avant-garde artists, nor is all of that value purely capitalist, so it must have come from elsewhere. But where?

Reading Info:
Author/Editor: Greenberg, Clement
Title: Avant-Garde and Kitsch
Type: book
Context:
Tags: dms, postmodernism
Lookup: Google Scholar, Google Books, Amazon

Roland Barthes: The Death of the Author

[Readings] (08.29.08, 4:43 pm)

Notes

Barthes opens his essay by looking at a quote from Balzac's Sarrasine, and digging into the methods of understanding the quote's author. The quote remarks on a castrato impersonating a woman, describing the fluid evocation of the idea of "Woman" given off by the impersonator. Barthes is trying to discern who is behind the quote, who is saying it. It could be the story's hero, it could be Balzac the author, Balzac the philosopher, it could be universal wisdom or romantic psychology. Barthes explains that due to the nature of writing itself, it is impossible to know. Writing is a voice with an indeterminate speaker, whose identity and body are lost through writing.

The idea of the author is a construction that derives from rationalism and the Reformation, which were concerned with elevating and unearthing the individual in all things. There is a fascination in modern society with connecting the author to their work, and with understanding the human behind the work, through the work, perhaps instead of the work itself. Criticism sees a work as the product of the author, or a product of the author's character traits.

Barthes looks into Mallarme (a subject of great interest to Umberto Eco), and explains that Mallarme's work was intended as a removal of the author so that pure language remained.

Other writers seek to expound on the relationship between themselves, their works, and their characters, blending them to some degree. The author's relationship to the work can be seen as somewhat incidental and residing in chance. Writers may challenge the position that they stand on in relation to their work. Surrealism pushes this further by playing with the system of language. This playing is supposed to expose the limits of authorship (or authorial intent, I suppose?) and exhaust the idea of the person in writing, as opposed to the subject, the one who writes.

As an aside, many popular contemporary authors see their writing as being very systematic. They do not control or master the writing from the top down, but rather develop characters and let the characters act on their own. In this sense, the writing is a run of a simulation.

With this in mind, modern works may be approached with the knowledge of the author’s absence. If we “believe in” the author, it is as the parent of the book, the precursor to the work. In the modern text, the writer is born along with the text.

Barthes explains a perception of the text which is lacking the absolute message of the author (in an almost theological sense). The text is a “space of many dimensions”, it is a “tissue of citations”. Expression is merely translation from a dictionary. “Succeeding the Author, the writer no longer contains within himself passions, humors, sentiments, impressions, but that enormous dictionary, from which he derives a writing which can know no end or halt: life can only imitate the book, and the book itself is only a tissue of signs, a lost, infinitely remote imitation.”

In a post-author text, deciphering becomes impossible or useless. Imposing an author onto a text forces the text to adopt an ultimate signification, which destroys the writing. Modern writing instead can be disentangled and traversed.

Written language is inherently ambiguous, and when we remove the author, written language can be perfectly understood. Barthes mentions Greek tragedies, which use ambiguity and duplicity to convey meanings. It is the reader who is able to interpret, connect, and weave these together.

Barthes is not trying to criticize the meaning or unity of texts, but rather the idea that unity or meaning descend from an external author who precedes and begets the work. Rather, meaning and the unity of a work coalesce in the reader, who connects and strings together meanings from all places. The reader lacks history or psychology or identity in the sense that the author does. The reader’s meaning can be considered a liberation or popularization from the idea that meaning is from and for the author.

Thoughts

Authorship is interesting in a modern society, especially in terms of commercial products. In a culture where corporations are extended rights and status granted to individuals, commercial products tend to stand with the company or the corporation as their author. Some examples of this are computer software, pharmaceuticals, fast food, etcetera. Despite the fact that many individuals are responsible for creation, and these creations have evolved and changed significantly over time, the products themselves are, even legally, authored by the corporation.

Authorship is important in simulation as well. If one subscribes to the belief that all creative expressions are systematic (that is, they are embedded with models of meaning), then these systems could be said to be authored. The systems are open works, in the Umberto Eco sense, as they are free to some interpretation, but are still constrained by their original authorship.

Reading Info:
Author/Editor: Barthes, Roland
Title: The Death of the Author
Type: book
Context:
Tags: dms, postmodernism, narrative
Lookup: Google Scholar, Google Books, Amazon

Espen Aarseth: Narrativism and Genre Trouble

[Readings] (08.29.08, 4:42 pm)

Overview

Aarseth presents another critique of narrative theory as applied to games. He is challenging the idea that narrativism can be used to analyze anything, or more specifically, is challenging the interpretation of anything as a text. Aarseth defines a game as consisting of three elements: rules, a semiotic system (the game world), and the resulting gameplay. The semiotic system is incidental, and may be exchanged. Knowledge of the semiotic space of the game world, or of the skin that has been applied to it, is unnecessary for skill at the game itself. However, it may be necessary to better *appreciate* the game.

Aarseth’s claim about the relevance of semiotic systems to games is tricky, though. It may be possible to interchange skins on existing games, but there are further connections that are made between the world of the game and its rules and resulting gameplay. It wouldn’t make sense to place a skin on a game of chess that randomized the types of pieces. We have certain associations with the order of chess and the order of the skins that we apply to it. What we do when skinning something is drawing a metaphor. The mechanics are necessarily unchanged, but the associative meanings are different.

Aarseth is also looking to demonstrate the disconnection between games and stories. It doesn't make a lot of sense to equate games with narratives, especially when exploring abstract games. However, an inescapable fact is that many contemporary (especially successful commercial) games are grounded in stories. This is what Jesper Juul might call the "fiction" of the game. Aarseth proceeds to wonder what the relationship is between games and stories: whether it is a dichotomy, a continuum, or a rivalry. He notes that games may be translated among game forms (Rogue and Diablo, for instance), much like narratives may be translated between narrative forms. These are structural equivalences, though: game adaptations preserve rules, narrative adaptations preserve key events and relationships. Realistically, though, many successful narrative adaptations use much more creative approaches.

The key problem with adapting games to narratives and vice versa can be found in genre theory (John Cawelti): underlying form cannot be translated, but style and convention may be adapted with relative ease. Aarseth gives the specific example of the Star Wars films and the various games associated with them. A genre that does try to mix the two is the adventure game, which Aarseth derides as uninteresting, unsatisfying from a gameplay perspective, and limiting in terms of freedom.

Another domain, the simulation game, also employs story to a strong degree, but is flexible where the adventure game is not. Aarseth makes the significant claim that simulation is the core essence of games, and that games are the art of simulation. Aarseth extends this by saying that simulation is much more satisfying and allows games to handle unusual situations that are not permitted in narratives. Adventure games have a conflict between player volition and the mechanics of the game. Aarseth claims that within simulation games, the player is afforded opportunities to counter authorial intention, that the authors of simulations are essentially removed from the work, and that players will have the last word.

Aarseth’s stance here is ludicrous. Simulation authors are capable of imposing very severe restrictions on players, and the simulation itself may be biased in its very model that defines it. Civilization used (and still uses) a very expansionist, colonialist model of history, and it is not possible for the player to thwart that ideology in any way. The only road to success is to ascribe to the ideology and act in accordance with it. The most recent release of Civilization opens up the model, so that advanced users can write new ideologies into the rules, or rip out the existing ones, but these users are not the average player. It also bears noting that in the discussion of simulation, Aarseth does not mention the Sims (though he had mentioned it earlier). Not sure what to make of that, though…

The important thing to note about comparing games and narratives, though, is to follow Aarseth's initial focus of looking at translatability. If we explore how narratives have been translated, adapted, and (especially) extended, it might be possible to make a not-too-revolutionary claim that many successful adaptations break many of the rules of narrative structure. A good example is Jane Austen adaptations, or, extending Aarseth's examples, the Star Wars novels and the extended universe developed around the world defined by the films. The resulting products might be narratives, but relationships might be changed, settings might be changed, characters might be changed. What has been translated might not be the narrative at all, but rather the world, or the underlying value system of the story. In this sense, we can make the claim that narratives themselves are artifacts of systems. We may not be able to adapt the narrative directly, but the elements of the system may be procedural in nature.

Reading Info:
Author/Editor: Aarseth, Espen
Title: Narrativism and Genre Trouble
Type: book
Context:
Tags: dms, ludology, simulation, games
Lookup: Google Scholar, Google Books, Amazon

Stephen Wolfram: A New Kind of Science

[Readings] (08.29.08, 4:41 pm)

Overview

Wolfram’s book, A New Kind of Science is chiefly concerned with the implications of his deep study of cellular automata, originally triggered by his MacArthur grant. The main principles and findings of this research seem to be that simple rules can lead to computational complexity and very interesting results.

The notion that simple rules can lead to powerful results is not new, especially in the sense that science and mathematics have long striven to find simple and elegant ways of describing the laws and theories governing nature. Wolfram's pursuit, however, is toward computation, relating the simplicity of cellular automata to emergent natural phenomena. Wolfram aims for CAs to lead toward a new type of science and mathematics that uses the (simulative?) power of CAs to make useful claims about nature.
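As a concrete reference point, here is a minimal sketch of an elementary cellular automaton in Python (my own illustration; Wolfram's experiments were done in Mathematica). Rule 30 is one of his standard examples of complex, seemingly random behavior arising from a trivial rule, and its relative, rule 110, was later proven computationally universal.

    # Elementary CA: each cell updates from its 3-cell neighborhood, and the
    # 8 possible neighborhoods index into the bits of the rule number.

    def step(cells, rule=30):
        """One update of an elementary CA with cyclic boundaries."""
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Running this prints the familiar triangular, chaotic pattern from the book's opening chapters; the entire rule fits in one byte.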

The visual appeal of the automata is certainly very compelling, but there is an equally disconcerting lack of mathematical reasoning backing up his arguments. Unfortunately, he also does not give evidence as to what mathematical justification might look like, instead choosing to demonstrate problems through visual analogy.

Wolfram’s use of CAs is also an exemplary demonstration of Baudrillard’s simulation, in that when viewed through the lens of cellular automata, everything seems to become one. CAs become the universal tool which may be used to represent recreate everything.

Notes

The Notion of Computation

The idea of computational universality becomes something of great significance here. A function is universal if it can compute anything that is computable. The Turing machine is the fundamental example of this. A consequence of universality is that universal functions may simulate or compute one another. A consequence of Wolfram's research has been to find that certain classes of CAs are universal and may be used as computing machines.

Wolfram has additionally discovered a number of CAs which are reversible; that is, their inputs may be determined uniquely from their outputs. Computationally, this represents an interesting class of functions, but it also references issues of information and disorder that are important in signal systems and in thermodynamics.
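Concretely, "reversible" means the global update map is injective: every configuration has exactly one predecessor. Here is a brute-force sketch of the definition (my construction, not Wolfram's method) that checks which of the 256 elementary rules act bijectively on a small cyclic lattice. Bijectivity on one finite lattice does not establish reversibility in general, so this only illustrates the idea.

    from itertools import product

    def step(cells, rule):
        n = len(cells)
        return tuple((rule >> (cells[(i - 1) % n] * 4
                               + cells[i] * 2
                               + cells[(i + 1) % n])) & 1
                     for i in range(n))

    N = 8  # lattice size; small enough to enumerate all 2**N states
    reversible_here = [rule for rule in range(256)
                       if len({step(c, rule)
                               for c in product((0, 1), repeat=N)}) == 2 ** N]
    print(reversible_here)  # includes the trivially reversible shift/negation rules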

The Principle of Computational Equivalence

Wolfram’s thesis is essentially this: “all processes, whether they are produced by human effort, or occur spontaneously in nature, can be viewed as computations.”

Wolfram extends this idea to the point of declaring it a new law of nature: “The Principle of Computational Equivalence introduces a new law of nature to the effect that no system can ever carry out explicit computations that are more sophisticated than those carried out by systems like cellular automata and Turing machines.” And thus, when viewing nature as a system of computation, this principle is naturally very relevant.

An issue with representing things as computations is that it disregards the idea that not everything requires brute computation; instead, certain things may be proven rather than computed. This distinction is tricky and important. Some information may be proven only with difficulty, and other facts may be much more easily computed than proven. However, there is generally a difference between that which is computed and that which is proven; the advantage of the latter is eliminating the need for the former. Wolfram's argument hinges on the notion of raw computation, which may pale in the face of abstract proof. One may set out to compute that there are infinitely many primes congruent to 3 mod 4, which is an indefinite exercise, or one may instead aim to prove this in a finite and short number of steps.
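For the record, the short proof in question (the standard textbook argument, not anything of Wolfram's) runs as follows:

    Suppose $p_1, \dots, p_k$ were all the primes congruent to $3 \pmod 4$,
    and let $N = 4 p_1 p_2 \cdots p_k - 1$, so that $N \equiv 3 \pmod 4$.
    A product of primes that are all $\equiv 1 \pmod 4$ is itself
    $\equiv 1 \pmod 4$, so $N$ must have at least one prime factor
    $q \equiv 3 \pmod 4$. But no $p_i$ divides $N$, since each divides
    $N + 1$; so $q$ is a new prime $\equiv 3 \pmod 4$, a contradiction.
    Hence there are infinitely many such primes.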

This point is important, but flawed. Later on, Wolfram examines the rules and theorems of mathematics, and uses their countable, enumerable nature to represent them as computable. In this view, theorem proving is a process of computation, rather than some mysterious or magical intellectual exercise. This fact has been used in the past, notably by Kurt Gödel to prove the incompleteness of mathematics. This means that proofs are indeed finite and computable, but that is still not a good way of approaching them.

Computation is still computation and must obey the law that computations may not be "outdone", as it is not possible to out-compute something in the same number of logical steps. On the other hand, proof and ideal computation differ from raw computation in that they might be more efficient and save time (or computational steps). The essence of these, the way that proofs are made and solved in practice, is not computation but rather "intuition" and experience. The two of these may seem magical in the abstract, but actually echo more strongly the ideas of pattern matching. Instead of applying rules by brute force, pattern matching relies on analogy, recognition, and application of known rules to new inputs. This approach is still computable, but not easily by CAs.

Reading Info:
Author/Editor: Wolfram, Stephen
Title: A New Kind of Science
Type: book
Context:
Tags: dms, simulation, emergence
Lookup: Google Scholar, Google Books, Amazon

Norbert Wiener: Cybernetics

[Readings] (08.29.08, 4:40 pm)

Notes

Introduction

Wiener begins Cybernetics by posing some of the problems encountered by the growing field of modern science. Specifically, and this echoes Vannevar Bush, he is concerned about the massive specialization in science. He argues, though, that scientists need to be versed in each others' disciplines. He too is interested in developing some sort of calculating machine, but is proposing an electronic model that seems to more closely resemble what we use today. What is interesting about Wiener's model is that it is inspired by the human nervous system.

The essential problem set out to be solved is anti-aircraft artillery. This is the essence of the idea, and it segues cleanly into the notion of feedback loops that will be explored in detail later. The idea involves a certain forecasting of the future, and relates closely to how human action ordinarily works. Human actions, such as picking up a pencil, involve certain kinaesthetic or proprioceptive senses. This correlates in some fashion to the intentionality described by phenomenologists.
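The feedback idea can be sketched in a few lines (an illustrative toy of mine, not Wiener's mathematics): a controller repeatedly measures the error between its aim and the target, and feeds a fraction of that error back as a correction.

    def track(target_positions, gain=0.5):
        """Negative feedback: correct the aim by a fraction of the error."""
        aim = 0.0
        trace = []
        for target in target_positions:
            error = target - aim   # measured deviation
            aim += gain * error    # feedback correction
            trace.append((target, round(aim, 3)))
        return trace

    # Against a target moving at constant speed, the aim settles into a
    # constant lag of (1 - gain) / gain units rather than diverging.
    print(track([float(t) for t in range(10)]))

The anti-aircraft problem layers prediction on top of this loop, forecasting where the target will be when the shell arrives, but the circular flow of measurement and correction is the core of the book's argument.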

Furthermore, the kinds of dilemmas arising from the problems Wiener describes are generally solved by pattern recognition: distinguishing signal from noise, guidance, switching, control, etc. It is interesting to note that the type of discipline proposed by Wiener more closely resembles the analytic patterns that seem to be suggested by Dreyfus.

Some of Weiner’s application seems grounded in Gestalt psychology, which is the psychology of the coordination of senses. The sum idea is that the whole amounts to more than the sum of its parts. Generally it is a psychology of perception. One of the ideas that Weiner is approaching with this and toward the end of the introduction, is the idea of developing a fully functional prosthetic limb. The limb would not only need to fill the space and function as the lost limb, but also register the immediate senses, and furthermore the proprioceptive senses. The combination of these seems to unite the goals of cybernetics. Also notably, the idea here is the replacement/extension of a limb, not the mind.

A further concern with the potential of this prosthetic power of computation is its complicating moral significance. One moral dilemma posed is the notion of machine slave labor, which has the potential to reduce all labor to slave labor. While robots have not replaced human labor, this concern is insightful in terms of the economic changes due to computers (divisions of companies being replaced by silicon chips, etc).

Chapter 5: Computing Machines and the Nervous System

Early on, Wiener gives a somewhat hand-waving proof that the best way to encode information (when each storage element carries a constant cost) is to use binary for storage. The logic of some operators is described, as well as ways of implementing binary logic in several engineering approaches. After that, he mentions their potential grounding in the neurological system.
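A standard way of making this sort of argument precise (the "radix economy" calculation; I am not claiming it is Wiener's exact derivation): if a storage element that can hold one of $b$ states costs proportionally to $b$, then representing numbers up to $N$ takes about $\log_b N$ elements, for a total cost of

    $C(b) = k \, b \log_b N = k \, \frac{b}{\ln b} \, \ln N,$

which, over real bases, is minimized at $b = e \approx 2.72$. Among integers, base 3 is actually marginally cheaper ($3/\ln 3 \approx 2.73$) than base 2 ($2/\ln 2 \approx 2.89$); the decisive case for binary is the engineering one, that two-state elements (a relay open or closed, a tube conducting or not) are by far the easiest to build reliably.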

Wiener next attempts to address some of the tricky details of mathematical logic (such as the paradoxes of naive set theory) with corresponding analogues that could apply to a computational system modeled after the nervous system.

Reading Info:
Author/Editor: Wiener, Norbert
Title: Cybernetics
Type: book
Context:
Tags: dms, ai, digital media
Lookup: Google Scholar, Google Books, Amazon

Katherine Hayles: Writing Machines

[Readings] (08.29.08, 4:38 pm)

Notes

Introduction

Institutions are not run by specified rules, but rather by complex networks of individuals, among whom the real causes and reasons for things become apparent. It is individuals and networks of them that cause things to happen. Hayles wishes to look at the digital age and writing environments, but to do that must focus on the networks of forces and individuals that surround the discipline and culture.

To do this, she starts from a somewhat autobiographical perspective: Hayles began pursuing a track in science (specifically chemistry), but later found the cutting-edge research to be tedious and unengaging. She took some courses in literature and, in her new track, found herself puzzling out the inconsistencies with her former scientific discipline (ambiguity over clarity, investigating rather than solving problems, etc.).

Electronic Literature

Hayles opens this chapter by noting how the Turing machine (and by extension the computer) was originally theorized as a calculating machine, but had a hitherto unexpected power for simulation. Hayles poses simulation as applying to environments, and this makes it seem a much more tactile and somatic experience than a conceptual one. She connects simulation to literature, and the result is this sort of electronic literature.

Hayles specifically is looking at Talan Memmott’s hypertext work, “Lexia to Perplexia”, which is a jumble of jargon and wordplay, intended to confuse the idea of subjectivity. Hayles describes this language as a creole of English and computer code.

The admitted illegibility is an indication of electronic processes that the reader does not understand, or cannot grasp. “Illegibility is not simply a lack of meaning, then, but a signifier of distributed cognitive processes that construct reading as an active production of a cybernetic circuit and not merely an internal activity of the human mind.” I think this is supposed to mean that interpretation is transcendent of human thought.

The goal in this transformation is to raise awareness and weave together the human body with electronic materiality. This idea seems to be looking in the direction of Donna Haraway, but going more in the direction of a semiotic system. The goal is not to challenge human nature, but challenge subjectivity and language.

Prevalent in the work are allusions to Narcissus and Echo, and these mythological references are intended to highlight the collapse of the original into simulation. Following Baudrillard, there is no longer an ontological distinction between the real and the simulated.

The work is intended to be a "later generation" multimedia or hypertext work, very active and confusing with respect to user interaction. The work goes beyond general hypertext; instead of moving from lexia to lexia, it acts nervously, seemingly of its own accord.

Reading Info:
Author/Editor: Hayles, Katherine
Title: Writing Machines
Type: book
Context:
Tags: dms, cybertext, digital media
Lookup: Google Scholar, Google Books, Amazon