icosilune

Category: ‘Readings’

Ted Friedman: The Semiotics of Sim City

[Readings] (02.10.09, 12:03 pm)

This is a summary of an article that Ted Friedman wrote for First Monday in 1999. The article is ostensibly about simulation and semiotics, but relates simulation to subjectivity and identification in an interesting way. His argument is that simulation becomes an extension of consciousness, and the player identifies with the simulation as a component of him or herself. This would have strong support in the space of cognitive science, especially in terms of cognitive extensions. It also provides a way of connecting a model-based view of the world to an embodied and experiential view of the world.

Friedman initially compares the experience of playing a game to the experience of reading a book. Books are non-reactive, though there is exchange between reader and book. Games are artifacts with reactive feedback loops, enabling a tighter sense of identification with the artifact’s contents. Reading affords a variety of interpretive freedoms, but simulation does not leave the player’s perspective similarly free: any simulation is rooted in the assumptions of its model. Sim City has received criticism for its model and economic assumptions, but Friedman explains that these are not flaws but principles. “Computer programs, like all texts, will always be ideological constructions.”

It is frequently argued that simulation games have an aura of mystification, in that they appear to be realistic. Friedman argues to the contrary that the player succeeds by learning a game’s model and understanding how it works, which is a process of demystification. I would challenge this, though. The level of mystification depends on the self-consciousness of the player. Many players learn the system of a game but do not reflect on its values. Mastery and understanding are different things.

Simulation in Sim City is constant; it does not stop. It is easy to reach a trance-like state where the simulation is an organic extension of the player’s consciousness (referencing Haraway). The actual experience of playing puts the player in a variety of roles, according to what the player actually controls. The player is much more than just the mayor and urban designer (the ostensible roles given to the player). The player has control over details unavailable to those real life roles, and is able to manage and micromanage different parts of the game with relative fluidity. Thus, the player has shifting identifications. This seems like it ought to be jarring, but it is not. Friedman argues that experience is a form of identification, but with the simulation. Losing oneself in a game is identifying with its simulation.

From the perspective of a god-game (like Sim City, The Sims, etc), which gives the player significant control over the entire system, or a major part of it, a simulation is engrossing. The entire simulation becomes an extension of the player’s cognitive processes, which are both visual and visceral. This suggests that the experience is in some sense embodied. I think it is possible to look back on this, though, and realize that most digital games have simulation elements but restrict the player’s freedom within them, constraining the player not only by the rules, but also by giving the player control over a more limited part of the system. Civilization, for instance, places the player in control of only one civilization. It can be argued that the player still experiences extension and identification, but only with the substance that the player can control. So the player will identify with the entire city in Sim City, the household in The Sims, the civilization in Civilization, or the avatar in a platforming game.

Friedman concludes the essay suggesting that simulations are a kind of postmodern quasi-narrative: systems of interwoven strands of subjectivity.

Reading Info:
Author/Editor: Friedman, Ted
Title: Semiotics of Sim City
Type: article
Context:
Source: source
Tags: games, semiotics, simulation, specials
Lookup: Google Scholar

Marie-Laure Ryan: Possible Worlds

[Readings] (02.08.09, 5:02 pm)

Possible Worlds is an intersection between narrative theory and AI. In this book, narrative is deeply tied to fiction, and it is in fiction that the idea of story worlds most clearly emerges. In order to understand the ways that fiction can work, Ryan turns to the theory of possible worlds. This is motivated by a need to turn to new ideas in the scope of formalist narrative theory, as other formalist approaches have begun to run dry, specifically the semiotic square and generative grammars.

The theory of possible worlds is introduced as a logical model. This depends on 1) the semantic domain of a text, and 2) the modal operators that define states. In narratology, the theory of text as a world is relatively familiar. Specifically mentioned are Alvin Plantinga and Robert Merrihew Adams. In this dimension, the theory of possible worlds is preoccupied with structure and also the matter of truth in fiction, namely what statements can be classified as true in a world. This sort of reasoning echoes the domain of formal logic, which connects the theory to AI.

The motive behind the actual use of the grammars is partly shared with AI, the other influence on this book. AI has several influences on the discussion of narratives. The use of AI as an approach treats the text and plot as something comparatively definable. Ryan rejects the idea that meaning is some sort of ethereal and inscrutable phenomenon, and instead argues that it resembles something more of a definable system. “The fundamental belief is that the creation of meaning is not a mysterious brainstorm caused by a random meeting of circumstances–a unique individual in an ephemeral state of mind, nurtured to some immeasurable extent by a culture whose boundaries remain fuzzy, and bringing to the text a deeply private experience of the world–but the predictable output of definable processes operating on a variable input.” (p. 9-10)

This is somewhat disconcerting, as I agree with the basic premise of possible worlds, but I do not agree with this statement. Or, rather, the way that meaning is made can be procedural, but all AI projects that have sought to accommodate the sort of commonsense and everyday processes that go into interpretation or even comprehension of a text have met with severe handicaps. Most of them have failed to produce anything of value or substance, and it is thus hard to imagine that fruit can be gained by turning to the same assumptions that AI imposes on the world and applying those to narrative theory.

Part of the problem is that Ryan’s motivation to use AI comes from an interest in the generation of stories, an aim that I do not share within my research. My approach has sought to look at story worlds as having meaning within a particular context, and simulating that space of meaning, leaving the problem of interpretation on the side of the human participant, rather than attempting to include it in the system itself.

Fictional Recentering

One of the roots of possible worlds comes from Leibniz, who considered that there were an infinite number of possible worlds, but only one that was actual: the best of them all, chosen and instantiated by God. Here, Ryan introduces the logical roots of possible worlds, considering them as systems of propositions, which may or may not be true. When these are arranged in large sets of all possible truth conditions, this creates a semantic universe, a term introduced by Kripke.

Fictional (possible) worlds are mental constructions. Ryan compares the theories of Rescher and David Lewis. Rescher considers that all possible worlds exist, because they can be imagined. In this theory, the possible worlds must be considered as totally knowable and factual (in the sense that all the facts of the world are known). Lewis, whose outlook Ryan seems to favor, considers there to be an indexical theory of how we relate to possible worlds. In Lewis’s view, the worlds are not fully known, and are determined by experience.

Ryan makes an interesting comparison between possible worlds and games. Make-believe games are characterized by many rules of substitution (where one object represents another). Fiction emphasizes only one substitution, that the narrated world is really the actual world. Both of these make use of the magic circle: outside of the circle, the rules for understanding occurrences and experience are normal, but inside, the reader or player employs rules to develop meaning and understand the world. Textual worlds make use of several axioms: (p. 24-25)

  1. There is only one AW.
  2. The sender (author) of a text is always located in AW.
  3. Every text projects a universe. At the center of this universe is TAW.
  4. TAW is offered as the accurate image of a world TRW, which is assumed (really or in make-believe) to exist independently of TAW.
  5. Every text has an implied speaker (defined as the individual who fulfills the felicity conditions of the textual speech acts.) The implied speaker of the text is always located in TRW.

These terms come from the glossary of terms before the introduction of the book: (p. vii)

  • System of reality: A set of distinct worlds. The system has a modal structure, and forms a modal system, if it comprises a central world surrounded by satellite worlds. The center of a modal system is its actual world, the satellites are alternative possible worlds.
  • Textual universe: The image of a system of reality projected by a text. The textual universe is a modal system if one of its worlds is designated as actual and opposed to the other worlds of the system.
  • Semantic domain: A concept slightly more general than textual universe. The set of concepts evoked by the text, whether or not those concepts form a system of reality (i.e., whether or not the text asserts facts and makes existential claims).
  • AW: The actual world, center of our system of reality. AW is the world where I am located. Absolutely speaking, there is only one AW.
  • APW: Alternative possible world in a modal system of reality.
  • TRW: Textual reference world. The world for which the text claims facts; the world in which the propositions asserted by the text are to be valued. TRW is the center of a system of reality comprising APWs.
  • TAW: Textual actual world. The image of TRW proposed by the text. The authority that determines the facts of TAW is the actual sender (author).
  • TAPW: Textual alternative possible world. An alternative possible world in a textual universe structured as a modal system. TAPWs are textually presented as mental constructs formed by the inhabitants of TAW.
  • NAW: Narratorial actual world. What the narrator presents as fact of TRW.

Possible Worlds and Accessibility Relations

This section describes the logical relations of possible worlds. Each of these worlds describes an entire universe, and may abide by some assortment of logical properties. These lay out the formal logic of the textual worlds. We could also consider these logical relations as rules for building possible worlds. The rules for considering the logical properties can vary depending on whether the world operates according to concrete logic, dream logic, nonsense logic, and so on. The different properties are given below: (p. 32-33)

  1. (A) Identity of properties. TAW is accessible from AW if the objects common to TAW and AW have the same properties.
  2. (B) Identity of inventory. TAW is accessible from AW if TAW and AW are furnished by the same objects.
  3. (C) Compatibility of inventory. TAW is accessible from AW if TAW’s inventory includes all the members of AW, as well as some native members.
  4. (D) Chronological compatibility. TAW is accessible from AW if it takes no temporal relocation for a member of AW to contemplate the entire history of TAW. (This means TAW is not in the future of AW.)
  5. (E) Physical compatibility. TAW is accessible from AW if they share natural laws.
  6. (F) Taxonomic compatibility. TAW is accessible from AW if both worlds contain the same species, and the species are characterized by the same properties. Within F, it may be useful to distinguish a narrower version F’ stipulating that TAW must contain not only the same inventory of natural species, but also the same types of manufactured objects as found in AW up to the present.
  7. (G) Logical compatibility. TAW is accessible from AW if both worlds respect the principles of noncontradiction and of excluded middle.
  8. (H) Analytical compatibility. TAW is accessible from AW if they share analytical truths, i.e., if objects designated by the same words have the same essential properties.
  9. (I) Linguistic compatibility. TAW is accessible from AW if the language in which TAW is described can be understood in AW.

Reconstructing the Textual Universe

This section derives from David Lewis, coming from his investigation of counterfactuals. The notion of what is “true” in fiction is ambiguous. Complications arise in terms of what is known as fact, versus what is told as fact, or what is assumed or expected. The section introduces what is called the “principle of minimum departure”, which explains that a possible world is more likely if it has a sort of minimum distance from the actual world of the fiction. Ryan gives an algorithm for considering counterfactuals in TRW:

There is a set of modal universes A, which are constructed on the basis of a fictional text f, and in whose actual world the nontextual statement p is true.

There is a set of modal universes B, which are constructed on the basis of a fictional text f, and in whose actual world the nontextual statement p is false.

Of all these universes, take the one which differs the least, on balance, from our own system of reality. If it belongs to set A, then p is true in TRW, and the statement “in TRW, p” is true in AW. Otherwise, p is false in TRW, and “in TRW, p” is false in AW. (p. 50)
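The decision procedure above can be made concrete if “differs the least, on balance” is reduced to a toy count of disagreeing propositions. This is a sketch under that admittedly crude assumption; the scoring function is my own stand-in for Lewis’s and Ryan’s notion of comparative similarity:

```python
# Minimum departure, sketched: among universes compatible with the text,
# pick the one closest to our system of reality, and evaluate p there.
# "Closeness" is reduced here to counting disagreeing propositions.

def departure(universe, reality):
    # symmetric difference: propositions on which the two worlds disagree
    return len(universe ^ reality)

def true_in_trw(p, candidate_universes, reality):
    # p is true in TRW iff p holds in the candidate universe that
    # differs the least, on balance, from reality.
    closest = min(candidate_universes, key=lambda u: departure(u, reality))
    return p in closest

reality = {"horses pull carriages", "England has a navy"}
# Two universes consistent with a fictional text f:
a = {"horses pull carriages", "England has a navy", "Mr. Darcy exists"}
b = {"dragons pull carriages", "Mr. Darcy exists"}
```

Here universe `a` wins, so horses (not dragons) pull carriages in TRW, even though the text never says so; that is the principle of minimum departure doing the reader’s gap-filling.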

An important part of minimum departure is interpretation, which often involves actual world construction. How does the reader build up the substance of the TRW based on limited knowledge? The textual world is necessarily an incomplete picture of not only the world’s semantic universe, but also the world of the fiction itself. This is complicated by the way in which facts may be interdependent in the TRW.

Considering TRWs as subjects for possible worlds (including counterfactuals) leads to the development of a textual universe. This is what I am attempting to simulate in the Pride and Prejudice game.

The Modal Structure of Narrative

The actual content of a narrative is some system of events in a sequence, but the nature of these is much more than mere propositions. In addition to conveying actual events, narratives are concerned with events that are non-actual, not-yet actual, may-have-been actual, and so on. These are communicated by being given in modes. This discussion is influenced heavily by Todorov and Doležel. Todorov gives four modal operators: (p. 110)

  1. Obligatory mode: events dictated by the laws of a society.
  2. Optative mode: states and actions desired by characters.
  3. Conditional mode: actions that characters will perform if other events happen.
  4. Predictive mode: anticipated events.

Doležel describes three systems of modes: (p. 111)

  1. The deontic system, formed by the concepts of permission, prohibition, and obligation.
  2. The axiological system, which is assumed to be constituted by the concepts of goodness, badness, and indifference.
  3. The epistemic system, represented by concepts of knowledge, ignorance, and belief.

Ryan also describes how the substance of propositions is held together in a world: “To form the image of a world, propositions must be held together by a modal operator acting as common denominator. In the literal sense of the term, a possible world is a set of propositions modalized by the operator of the so-called alethic system: possible, impossible, necessary.” (p. 111)

It is worthwhile to note that these modalities explain how the content of the world might be represented procedurally. For instance, constructing models of what is true, versus what is desired by characters, or what is prohibited, and so on. Ryan goes on to explain that each of these modes defines kinds of worlds that may be private to characters: for instance, knowledge worlds (K-worlds), wish worlds (W-worlds), obligation worlds (O-worlds).
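To see how these modalities might be represented procedurally, here is a hypothetical sketch in which each character carries private K-, W-, and O-worlds. The characters and propositions are my own illustrative inventions, loosely in the Pride and Prejudice vein; only the K/W/O structure comes from Ryan.

```python
# Illustrative sketch: private worlds keyed by modal system, following Ryan's
# K- (knowledge), W- (wish), and O- (obligation) worlds. The characters and
# facts are my own inventions, not examples from the book.

characters = {
    "Elizabeth": {
        "K": {"Wickham charms easily"},     # knowledge world
        "W": {"marry for affection"},       # wish world
        "O": {"obey propriety"},            # obligation world
    },
    "Mrs. Bennet": {
        "K": {"Bingley is rich"},
        "W": {"a daughter marries Bingley"},
        "O": {"obey propriety"},
    },
}

def conflicts(private_world, actual_facts):
    # A private world generates narrative tension where it diverges from
    # what is actually the case in TAW.
    return private_world - actual_facts

taw_facts = {"Bingley is rich", "obey propriety"}
```

The appeal for simulation is that “conflict” becomes a computable relation between a private world and TAW, rather than something read off the surface of the text.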

The Dynamics of Plot

This section examines narrative as a sequence of states. Narrative sequence is how states and state changes are revealed to the reader. This is dependent on a depiction of narrative time. The first major issue in considering a narrative as composed of states is the determination of what information is pertinent to the state versus what is purely descriptive. This delineation is frequently less than clear, and so the separation requires some degree of interpretive creativity. Ryan explains that there is a criterion for determining relevance depending on the narrative, but does not explain how this criterion ought to be defined.

The logic of states is relevant to considering narrative as a single run through or trace of the simulation of a story world. However, the analysis of states does not comprise the entirety of the story world, but just its plot. The story itself includes more than plot, and narrative includes discourse in addition to story.

Also relevant is a consideration of actual and potential states. These are negotiated by character decisions and moves. Many significant character moves are passive. States and events are linked together by a graph structure of goals, prerequisites, side effects, and blocking relationships. (p. 140)
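This graph structure of goals, prerequisites, side effects, and blocking relations can be sketched directly. The encoding below is my own; the content uses the fable of “The Fox and the Crow,” which Ryan analyzes elsewhere in the book.

```python
# Sketch: events linked by goal, prerequisite, side-effect, and blocking
# relations, as an adjacency structure. The encoding is my own; the fable
# content follows "The Fox and the Crow".

plot = {
    "fox wants cheese":  {"goal_of": "fox", "enables": ["fox flatters crow"]},
    "fox flatters crow": {"prerequisite": "crow holds cheese",
                          "enables": ["crow sings"]},
    "crow sings":        {"side_effect": "crow drops cheese",
                          "blocks": "crow keeps cheese"},
    "crow drops cheese": {"enables": ["fox gets cheese"]},
}

def reachable(plot, start, target):
    # Follow 'enables' and 'side_effect' links to see whether a state is
    # attainable from a starting event -- a crude test of plan viability.
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        if node in seen or node not in plot:
            continue
        seen.add(node)
        frontier += plot[node].get("enables", [])
        if "side_effect" in plot[node]:
            frontier.append(plot[node]["side_effect"])
    return False
```

Whether the fox’s goal is attainable reduces to graph reachability, which is exactly the sense in which this representation fits traditional AI planning.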

Given a state and action system, plan structures arise naturally. This mode of planning fits the traditional AI models. The examples that Ryan gives are primarily from moral tales and fables; maybe this is an indication that moral tales fit the planning model of behavior better than other narratives do. The state system used encourages a global and objective sense of state, which is oppositional to the situated view, though exactly how is not yet clear.

The Formal Representation of Plot

The motive of this section is to formalize story plots as a system of nodes and connections. This is essentially a formal and structural plan. This approach presents the world as a top down analysis. Ryan gives several criteria for describing the properties that such a formal representation should have: It must be able to convey the same representation for multiple stories that have the same plot, it must convey information readily, and it must contain representations of important functional units. The story analyzed in this section is “The Fox and the Crow”. It is important to remember that this close focus on plot is a very different agenda from simulation.

An early diagram has a tree structure (p. 206), but this is revealed to be insufficient. A different model comes from Lehnert, with a system of +/- states, as well as goals, beliefs, and plans. This model is very similar to the traditional AI approach. Various actions and complications are described as nodes linked by arrows with + or – signs indicating their favorability for characters, and ‘G’ indicating that the state is a goal of a character. Ryan’s favored model builds from this, and gives a recursive graph structure (p. 223), indicating goals, plans, and beliefs on the part of both characters.

Reading Info:
Author/Editor: Ryan, Marie-Laure
Title: Possible Worlds: Artificial Intelligence and Narrative Theory
Type: book
Context:
Tags: digital media, ai, narrative, games, specials
Lookup: Google Scholar, Google Books, Amazon

Sims, BattleBots, Cellular Automata God and Go

[Readings] (02.06.09, 1:04 pm)

These are notes from an interview that Celia Pearce did with Will Wright in 2001. The notes are my impressions from the interview and how the design principles and ideas can be carried over to my work in the adaptation of fiction through simulation. The interview has a great deal to do with the principles of mental models and how those relate to play and the way that players can both consume and produce content.

Wright’s design philosophy: Wright’s original ideas were most shaped by his childhood practice of building models. Making things is about creating models, which are at first static, then dynamic, then about giving others tools to build their own things. This is an approach that is about creation, and is continually outward moving: from creating objects, to creating tools to make objects, to creating tools for others to be able to create tools to make objects. At a distance, this philosophy resonates strongly with the mental model theory that is later explicitly adopted.

The reason for creating tools is to enable players to solve a problem from within the space of a game. This supposes that players have goals and problems concerning what to do within the game, though. If this is the case, then building empathy is about the size of the solution space – the player will have more empathy with the game if they are able to do something personal within the game.

Wright’s influences were forms of simulations, which set up rule-governed worlds that were open to playful exploration. The engagement and enjoyment in these comes from exploring their boundaries. When such experimentation is possible, it enables the practice of experiments and the scientific method. This sort of boundary play is common to simulation genres, and is often used in attempting to exploit and disrupt the game; it is thus generally oppositional to immersion.

The origin of the experimentation with rules goes back to war games, which have elaborate rule systems, and where the enjoyment is partially the negotiation, application, analysis, and mastery of the rules. These are described as laid back, in opposition to intense twitch arcade games (which conversely produce an experience more like flow/immersion). The pleasure is thus of the putting-together and figuring-out sort, rather than of the being-in-a-world sort. The two seem to be at odds, but I think they are not necessarily contradictory.

An interesting detail is that the model of the system in war games is more than can be contained or simulated in one’s head, so the game must be played in order for the full rules to become apparent. This also yields a mechanic of experimentation, which becomes prevalent in Maxis games. In these, the model really exists in the game programming, or in the designers’ minds. The actual computer running the game is an intermediary layer between the rules and the player. The core element of this is thus that the play is a process of learning the designers’ model.

Much has to do with use of metaphors. Players of SimCity initially think of it as a train set that comes to life, or think of The Sims as a dollhouse that comes to life. Gradually, through play, the players come to adopt new metaphors. The interaction with SimCity metaphorically resembles gardening more than a train set. With The Sims, the metaphor depends on play style.

Regarding how to advance and make improvements for sequels and next versions, it is necessary to analyze how players use the different parts of the game, and add material to the exchange (in The Sims). Wright explains how using data mining and observing this information is exploring the landscape of how people play the game. In the abstract, this is building a model of the model (Pearce’s terms). Wright compares the process to cultural anthropology. This idea is relevant in comparison to building a game off of something, like Austen, whose work is well established in fan culture.

Wright is interested in an extended and automated system around this data mining process, which analyzes player behavior, preferences, and the things they create, and then responds to those, and can share those with other players. The ideal format of this is automated and invisible.

Regarding abstraction, Wright describes how elements of gameplay are abstracted. The parts that are not simulated must be moved to the player’s head. These are the elements that I commonly refer to as the representative elements. This is described as a kind of offloading. Games are abstracted in the sense that what the player may select or manipulate is simplified to some degree. What is missing represents gaps. It is the player’s role to fill in these gaps, to make the resulting system seem consistent. This is analogous to gap filling in the sense of narratives.

In competitive games like Go, play is about bringing the players’ models together. Each player has a sense of regions and territory, but this may be in disagreement with the other player. The conflict is not in terms of what is physically present on the board, but in terms of the models and plans in the players’ heads. This is strongly connected to the theory of mental models. It is important to note that in comparison with other perspectives on mental model theory, this is about models and deception, and involves a lot of work with inducing beliefs and illusions. It also ties to the physical board, but involves overlays. Experience and practice are critical.

Simulation can be used as a communication tool, for people to model their community or environment, reflect their world using the language of simulation, and share and communicate this model. Disagreement is generally, at root, a disagreement over a model (or, it could be argued, over a metaphor), so communication helps explore these disagreements.

When playing a game like The Sims, the player fluidly shifts between thinking of the character as an extension of the self versus a separate agent. The player may move between identification and alternation, thinking of the character as an avatar or extension of self (using first person to describe the character’s actions, “I am going to make dinner”), or as an autonomous agent (generally described in third person, “he is not doing what I want him to do”). This is a type of jumping in and out, which is very fluid, and is surprising to Wright. Roots of this might be found in Goffman’s account of performance, or in Mead’s sense that the self can be considered as an object.

An issue at stake in the matter of gameplay is comparing the possibility space of games. Wright’s goal is to enable variability and flexibility in the space, so that the player has the greatest control. This is contrary to the sense of controlled models, where the designer has supreme control over the experience; for Wright, aesthetic value lies in giving the player the maximum return on experience. The metaphor of the game as a landscape of space continues.

However, play is also a process of navigating this space, and it may have a convoluted terrain. Wright gives an example of hill climbing, where the player might want to get to certain places and navigate there. This involves an understanding of the model and creative discovery, and requires the topography to be consistent, but also relies on the ability of the player to create goals within the space in the first place. The significance of this is not really addressed. The question that I want to ask is, “why does the player have goals within the game”, especially as Maxis games do not have explicit objectives. There are clearly things that are valuable and meaningful to some players, but what are they, and what value and meaning are players getting from them?

The result of engagement is a situation where the player is both consumer and producer (Ken Perlin calls this hybrid a “conducer”), where the player pays for the right to produce content. This content is shared. The model described echoes the emergence and popularity of blogging and YouTube (which emerged after the date of this interview), where people may share their own creations. An important issue I am interested in is why they share, what they share, and what meaning others get from it. The process is similar to fan culture, which thrives on building from some cultural base that already means something to a group.

Reading Info:
Author/Editor: Pearce, Celia
Title: Sims, Battle Bots, Cellular Automata, and Go
Type: article
Context:
Source: source
Tags: specials, games, simulation, emergence
Lookup: Google Scholar

Channels of Discourse, Reassembled

[Readings] (02.04.09, 11:35 pm)

Robert Allen: Introduction

The subject of this book is television. The book was published originally in 1987, and then a second edition was published in 1992. The need for the book is seen as the massive cultural penetration of television and a lack of critical discourse surrounding it. To study television, it must be defamiliarized, or, as Alfred Schutz has said, made “anthropologically strange.” A problem, similar to that with games, is that television is seen as merely entertainment. As such, it is not taken seriously as a medium. It is interesting to examine the approach to television, as it is substantially more legitimate now, and with the popularization of DVD box sets, the production of quality programs is now motivated by a product-oriented approach in addition to the matter of the constantly streaming television signals. I would also argue that the common use of television is beyond mere entertainment, and has a significant role in cultural communication, establishing a cultural object around which people may engage socially, sharing common values.

All of the approaches in this book use semiotics in one form or another. An implicit question is: how are meanings and pleasures produced in our engagement with television? This question is naturally relevant to other things, games among them. Television is pervasive, and as such it is apparently natural. We seem to have an ability to “read” television, even though the way in which we do so is not natural. The practice of television viewing is intertwined with production, and both have developed the language by which it is read over time, and this has become culturally encoded. Television produces a sense of transparency, since it resembles a window, but this transparency is illusory.

The role of authorship is convoluted in television, especially in programs that are not explicitly fictional. Contemporary criticism is interested in how television constructs representations of the world, rather than asking whether it tells the truth. Allen compares contemporary criticism with the traditional: “Whereas traditional criticism emphasizes the autonomy of the artwork, contemporary criticism foregrounds the relationships between texts and the conventions underlying specific textual practices. Traditional criticism is artist centered; contemporary criticism stresses the contexts within which the production of cultural products occurs and the forces that act upon and channel that production. Traditional criticism conceives of meaning as the property of an artwork; contemporary criticism views meaning as the product of the engagement of a text by a reader or groups of readers.” (p. 11)

Ellen Seiter: Semiotics

Television is made from iconic and indexical signs. Indexical signs rely on a material connection between the signifier and the signified. Icons are signs where the signifier structurally resembles the signified, but there may not be any material connection. A set of tracks in the snow is an indexical sign of the animal who walked through it, and a child’s drawing is an iconic sign of that same animal. Neither of these is free from tampering. Peirce’s model of signs does not require the signs to be intentional, and there does not even necessarily need to be a receiver.

There are two means of extracting meaning from signs: reading denotation and connotation. Denotation is an actual “picture” that conveys the substance of the sign, but connotation is about the mood or message. Connotation requires a context to understand, while with denotation that is not necessarily the case. Reading connotations is strongly guided by conventions. Non-representational elements may have no denotation, but they may have connotative meaning, for instance, the sound of a minor chord in a suspenseful scene.

An open question is what the smallest unit of television might be. Film studies uses the shot, following from Metz, who argued that there is no smallest linguistic unit, but that the shot is the largest minimum segment. An analogous question arises in games. Because games involve so many elements and factors, identifying a unit is very difficult. Because games are interactive and not passively experiential, the languages of narrative, film, television, and theatre must be mixed with the languages of architecture, the performing arts, board or tabletop games, and sports, among others. To describe television, Seiter poses the unit of the flow, which derives from Raymond Williams. A minimum segment should have paradigmatic and syntagmatic dimensions.

Seiter discusses Robert Hodge and David Tripp’s analysis of children’s television, which follows the format of structuralism. Extensive analysis is devoted to 1) single images, 2) narrative (voice-overs), and 3) personification. This analysis totally ignores the long tradition of personification in children’s stories. The anthropomorphization of characters comes across as a surprise in the analysis, which seems totally out of place. Meaning is also considered as existing in individual shows, not within the series of shows which the children would presumably watch. The analysis also looks at the show in isolation from the cultural context of the children watching it.

Additionally, the analysis shown here does not examine themes, plots, or the actual content of the shows, just the openings. The shows examined have several themes, the prevailing one being a hybridization of nature and culture, of animals and humans. The question I have is whether the other structures, the mechanics (the means by which things actually happen), also support these nature-culture and human-animal dyads. I also want to ask how these fit with the larger tradition of animal characters in children’s books, folktales, stories, etcetera.

Sarah Kozloff: Narrative Theory

This chapter discusses television as a narrative form, specifically borrowing from Chatman, on story and discourse. The observation here (from Robert Allen) is that interest in television is generally on the paradigmatic axis, rather than the syntagmatic one. Instead of the viewer being concerned with what comes next, which is the syntagmatic question, the viewer is interested in “what could happen instead?” This is especially relevant with the character-oriented focus of the situation comedy, where strong characters are put in many diverse situations, and the pleasure of the audience is in how those characters react. This approach is precisely the opposite of the formalist narrative tradition, which is focused on the syntagmatic axis. In the formalist tradition, characters are weak and reduced to the degree to which they satisfy the functional needs of the story. The emphasis on what could happen is also emblematic of fan culture and fan fiction.

Robert C. Allen: Audience Oriented Criticism

The focus in this section is on the readers (watchers) of television, and how they create meaning. Viewers are addressed more directly in television than in other narrative forms. With studio audiences, commercials, and several formats (especially the news), the speakers directly address the viewer. To accommodate this focus on the viewer, Allen proposes audience-oriented criticism, which places the viewer at the center of the study.

Television is analyzed phenomenologically as a performance. It is concretized when watched (as a text is brought to life when read). The novel and the written word are occupied with gap filling, but there are fewer gaps in television. Gap filling is the process by which the reader constructs the world of the text, supplying missing details and constructing causal relationships where the gaps exist.

However, gaps do exist in television; their location and function have simply changed. Gaps exist between the serial occurrences of the shows. Between episodes, as was the case with serial novels (for instance, Dickens), readers and viewers are left to contemplate what has happened and what is going to happen, and to share and reflect on their beliefs and opinions within a community. Dorothy Hobson found that the value of television is not in the watching experience itself, but in the social life apart from the television. This is now commonly understood as watercooler discussion, where people gather around the watercooler at their workplace to talk about what happened on television. This has been found useful (I don’t know the sources, but I heard Henry Jenkins talk about it) as a means for discussing ethical beliefs and values through the projection of those beliefs onto characters.

There are watercooler games, or, at least, there are games that intend to capture the dimension of social discussion, but these are primarily news games. These miss the periodic and mystery elements found in the gaps used by serial novels and television. To get the watercooler phenomenon, games must have consecutive gaps.

Reading Info:
Author/Editor: Allen, Robert
Title: Channels of Discourse, Reassembled: Television and Contemporary Criticism
Type: collection
Context:
Tags: media traditions, media theory, specials
Lookup: Google Scholar, Google Books, Amazon

Mark Wolf and Bernard Perron: The Video Game Theory Reader

[Readings] (02.04.09, 8:30 pm)

Originally published in 2003, The Video Game Theory Reader chiefly aims to examine games as a medium. Each chapter contains the work of scholars examining games in the light of criticism, looking at what games have the potential to do and how they can be understood as artistic works. The book opens with Warren Robinett describing his role in the creation of Atari’s game Adventure. To set the tone for the rest of the reader, Robinett compares the practice of creating games to other artistic practices, and goes on to describe the difficulty with which he managed to write his name in the game. Though it is not a major feature of his introduction, the idea of authorship is central to the understanding of games as artifacts. In the Atari days, games were consumer products, and the actual creators of the games were not given credit as a matter of policy. Their names were not in the credits, on the box, in the manual, or anywhere else. One of Robinett’s proud achievements was sneaking his signature into a secret room in the game.

There have been many turning points in the development of games, but this marks one of the early ones, where the game is considered an authored work. This is a long way off from being an artistic work, but it is the first of many steps in that direction. I do not want to stress the role of authorship as the be-all and end-all of artistry. The idea of the single hand of the artist shaping a work is markedly false, especially as pertains to film, television, and games, which are the result of so much collaborative creative energy. However, in considering authorship, whether of a whole work or a small piece of it, the hand of any author implies intention, which is the first step in artistic expression of any kind.

Walter Holland, Henry Jenkins, and Kurt Squire: Theory by Design

The opening questions in this chapter are: What is theory for video games? What should it do? Who conducts theory? Theory and practice feed into one another, forming a braid whereby each is improved by the other’s influence. The authors explain that theory is inescapable in any given practice: no matter what activity is practiced, theory can be built around it. An argument to this effect necessarily raises the questions of what theory is and what it does. The authors borrow from Thomas McLaughlin and explain that theory is a sense of the premises and ideals that go into any given practice.

To examine the ideals and premises of games, the authors apply their discussion to the “Games-to-Teach Project” at MIT. The project is pedagogically oriented and intended to occur at the intersection of games, education, and the subjects being taught, namely math, science, and engineering. The role of games in education is well supported: students learn by manipulating things with rules, playing rather than being explicitly instructed. The process of designing these games is explained as a way of looking at and approaching theory. Design considers conceptual questions but addresses them through concrete solutions. The process of design is thus an approach that exists in between theory and practice.

Mark J. P. Wolf: Abstraction in the Video Game

This chapter explores the role of games in the conflict between abstraction and representation. Abstraction is the opposite of representation: its goal is to simplify rather than reproduce. Traditional artistic media (for instance, painting and photography) aim to reproduce their subjects in detail, representing them. Early games tended to do the opposite, simplifying instead of reproducing. The early games generally described come from the Atari era, when abstraction was a necessity and a constraint of the medium. Later games, bolstered by technology, have worked more and more toward reproduction, gradually moving in the direction of photorealism. This movement is sometimes justified as necessary to build the credibility of games as artistic artifacts, because they are capable of producing aesthetic images.

Wolf’s essay focuses on the visual elements in games, but abstraction has heavy value within the space of interaction, especially in simulation. Abstraction focuses on action and enables interaction, providing clear means for grappling with the material. A player’s engagement with a game is necessarily abstracted by the channels and limitations of whatever device is used (joystick, controller, mouse and keyboard, etc.), which restrict the range of the player’s interactive choices. This is a severe form of abstraction compared to normal human engagement with objects, where we can make use of many senses, all embodied. Abstraction and representation are key words to consider in terms of simulations, because a simulation can be intended to be highly representative and numerically accurate (as in scientific simulations), where thousands of variables are under consideration, or it can be a severe abstraction where only a few variables are considered.

History has shown that neither photorealism nor a complex simulation make a game better on their own (though realistic games do often, but not invariably, enjoy financial success). Instead, a reasonable takeaway from this is that abstraction and representation are tools to be used to create meaning in the development of games.

Gonzalo Frasca: Simulation versus Narrative: an Introduction to Ludology

In this provocatively titled essay, Gonzalo Frasca aims to compare two ways of looking at games. The first is the “traditional” approach, the usual straw man, embodied by Brenda Laurel and Janet Murray, who are said to consider games as extensions of drama and narrative. The alternative is the ludological perspective, a formalist approach that focuses on the structure and elements of games, specifically their rules. Often this approach comes across as combative, primarily oriented toward dethroning the narrativist occupation of game studies, but Frasca considers this to be missing the point.

Simulation is posed as an alternative to representation (which is embodied by film and narrative). Frasca gives a definition of simulation from Videogames of the Oppressed: “to simulate is to model a (source) system through a different system which maintains (for somebody) some of the behaviors of the original system.” (p. 223) The emphasis of this definition is meant to be on behavior, but I find myself focusing on the parenthetical “for somebody”, which asserts that the simulation means something to someone. Frasca argues that traditional media are representational, depicting and reproducing something. The example he gives is “A photograph of a plane will tell us information about its shape and color, but it will not fly or crash when manipulated.” (p. 223) I find this problematic. It suggests that representations dwell only on surface information, such as color, or maybe sound and movement (on film), but nothing more. However, the process of interpretation involves much more: the reader understands cause and effect, and reads patterns, motivations, and many other configurative elements. An image may be static, but narratives necessarily convey worlds as systems, beyond depictions. Frasca’s examples of the affordances of simulation dwell on manipulation and interactivity, a feature obviously missing from traditional media. However, there is nothing in Frasca’s definition that suggests a simulation must be interactive. Indeed, scientific simulations are usually noninteractive. Simulations may be configured, by giving them initial conditions which affect their outcomes, but this markedly weakens the argument that Frasca is making.
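Frasca’s distinction can be made concrete with a minimal sketch (my own illustration, not Frasca’s; all class and method names here are hypothetical). A “representation” of a plane records only surface qualities, while a “simulation” maintains some of the source system’s behaviors: manipulating it changes whether the plane flies or crashes.

```python
from dataclasses import dataclass

# Representation: surface information only. It cannot be "flown".
@dataclass(frozen=True)
class PlanePhotograph:
    shape: str
    color: str

# Simulation: maintains some behaviors of the source system.
# Manipulation (setting the throttle) changes what happens on the next step.
@dataclass
class PlaneSimulation:
    altitude: float = 0.0
    speed: float = 0.0
    STALL_SPEED = 100.0  # arbitrary threshold chosen for this sketch

    def set_throttle(self, speed: float) -> None:
        """Manipulate the model: Frasca's second ideological level."""
        self.speed = speed

    def step(self) -> str:
        """Advance one tick; the outcome depends on prior manipulation."""
        if self.speed >= self.STALL_SPEED:
            self.altitude += 10.0
            return "flying"
        elif self.altitude > 0:
            self.altitude = 0.0
            return "crashed"
        return "grounded"

photo = PlanePhotograph(shape="delta wing", color="silver")
sim = PlaneSimulation()
sim.set_throttle(150.0)
print(sim.step())  # flying
sim.set_throttle(50.0)
print(sim.step())  # crashed
```

Note that nothing in the sketch is interactive: the same `step()` loop could be driven by initial conditions alone, which illustrates the point above that Frasca’s definition does not actually require interactivity.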

I do not mean to claim that games and narrative are the same things, of course, but I want to emphasize that games and traditional media are not distinguished by clear-cut differences along the lines of simulation and representation. Games may be highly representative and have very little simulation, or the simulated elements of a game may have little to do with the represented elements. Texts (usually of the modern variety) may be labyrinthine and unconventionally posed, and, while static, the meaning derived from them will differ depending on the point at which the reader entered the text. What all of these share is a systemic dimension, where the artifact abides by certain rules, and the audience/reader/player has certain codes for understanding the way in which those rules are used. Systemic does not imply simulation, but neither does the word game.

The challenge posed by simulation to narrative (in the traditional sense) is volatility. In writing, fate and outcome must be fixed; simulation threatens the authorial role of fate. Introducing elements of simulation into a work increases the freedom of the player or audience while limiting the author’s range of control. The author still has the final say, but must necessarily give up some control. The example given is Emile Zola’s Germinal, which is about striking workers in Northern France. The work is about the conditions of labor, and is meant to communicate something about social justice, among other things. Its narrative rhetoric relies on the ending to help convey the message of the work. In writing, though, Zola was faced with two options: the workers must either win or lose. To convey the delicate balance at stake, an author could write several different stories which play out differently and contain different endings. Frasca’s alternative is to develop a simulation where the ending depends on the player’s actions. This can be contested on the grounds that maybe the ending is not supposed to be different, but even that option is supported by Frasca’s idea: a simulation could allow the player a variety of choices while the ultimate conclusion remains the same despite those choices. Such a game would still be powerful and meaningful, indicative of social problems and suggesting that the only way to change the outcome would be to change the rules, a powerful and perfectly worthwhile rhetorical message.

Frasca compares Aristotelian drama with Boal’s Theatre of the Oppressed. Simulation takes away the narrative power of causality. Frasca claims that narrative authors must train their stories so that they will “perform in an almost predictable way”, whereas simulations follow rules and may operate flexibly outside the rigid path to which narrative is constrained. I find this argument difficult for two reasons: interpretations are unpredictable (though they are outside the domain of the author, like other forms of interaction), but also games and some simulations may be totally predictable. This does not explicitly challenge Frasca’s argument, but it reveals that authorship of a simulation is a skill, and meaning does not come for free.

Finally, Frasca gives four levels for thinking about the ideology of games and simulation. These derive from the way that rules are used and what kind of play is enabled. The types of play are given as paidia and ludus (from Caillois), where the former represents unstructured play and the latter represents play with explicit goals. The four levels are useful for considering the meaning and values of simulation.

  1. Representation. This is given as a kind of concession, because games and simulations must necessarily represent something, referencing something in order for the simulated world to be meaningful in its context.
  2. Manipulation. This is how the player affects and manipulates the simulation, which may express what kinds of possibilities there are in the simulation, or the shape of its state space.
  3. Goals. This is particular to games, and not simulations. Goals are a means for the author to explicitly embed an objective for the player, who may manipulate and play so much as he or she chooses, but will not “win” until the goals are met. It is important to remember that the author of a simulation may encode different, contradictory goals, ironic goals, or simply no goals whatsoever.
  4. Meta-rules. This is the means by which the simulation can be extended or modified. This encodes not only what is possible to do within the game, but allows users to modify the rules of the game or simulation itself, such as by publishing source code or APIs. This is bound by rules in the sense that only certain parts may be “opened up”, while others would remain closed.

This last list of elements is extremely helpful for thinking about authorial values within simulations.

Reading Info:
Author/Editor: Wolf, Mark and Perron, Bernard
Title: The Video Game Theory Reader
Type: collection
Context:
Tags: simulation, games, narrative, specials
Lookup: Google Scholar, Google Books, Amazon

Rick Altman: Film/Genre

[Readings] (02.03.09, 3:29 pm)

What’s at stake in the history of literary genre theory?

This book is about film and genre, but early on Altman focuses on genre as a literary phenomenon, looking at how it has been understood historically. He starts with the classic treatment of genre, whose purpose is to classify literary types. Among classical theorists, Altman compares Aristotle and Horace. Aristotle’s Poetics is arguably one of the most influential and best known studies of literary types. Altman closely examines Aristotle’s introduction of poetry, and finds that the Poetics relies on many “unspoken and apparently incontrovertible assumptions.” Chief among these is that Aristotle treats poetry as a solid and well-defined object of study, without considering it as a culturally variable thing. These assumptions lead naturally to his conclusions and ultimate judgments.

In an interesting comparison, Aristotle judges poetry as imitating life, whereas Horace treats poetry as imitating other poetry. To Horace, poetics are a matter of how works fit into a tradition of models and techniques, and works should be judged based on the adherence to those models, rather than to life.

Altman reviews the historical approaches to genre studies, but comes to no clear conclusion regarding what genre is or what basic principles theorists accept. The reason is that there are no agreed-upon principles shared by theorists. However, Altman does find that they tend to rely on several common assumptions, and gives a listing of these: (pp. 11-12)

  1. It is generally taken for granted that genres actually exist, that they have distinct borders, and that they can be firmly identified. Indeed, these facts have seemed so obvious to theoreticians that they have rarely seemed worthy of discussion, let alone questioning.
  2. Because genres are taken to be ‘out there’, existing independently of observers, genre theorists have generally sought to describe and define what they believe to be already existing genres rather than create their own interpretive categories, however applicable or useful.
  3. Most genre theory has attended either to the process of creating generic texts in imitation of a sanctioned predefined original, or to internal structure attributed to those texts, in part because the internal functioning of genre texts is considered entirely observable and objectively describable.
  4. Genre theorists have typically assumed that texts with similar characteristics systematically generate similar readings, similar meanings, and similar uses.
  5. In the language of theoreticians, proper genre production is regularly allied with decorum, nature, science, and other standards produced and defended by the sponsoring society. Few genre theorists have shown interest in analysing this relationship.
  6. It is regularly assumed that producers, readers, and critics all share the same interests in genre, and that genres serve those interests equally.
  7. Reader expectation and audience reaction have thus received little independent attention. The uses of generic texts have also largely been neglected.
  8. Genre history holds a shifting and uncertain place in relation to genre theory. Most often simply disregarded by its synchronically oriented partner, genre history nevertheless cries out for increased attention by virtue of its ability to scramble generic codes, to blur established genre tableaux and to muddy accepted generic ideas. At times, genre history has been used creatively in the support of specific institutional goals, for example by creating a new canon of works supportive of a revised genre theory.
  9. Most genre theorists prefer to style themselves as somehow radically separate from the objects of their study, thus justifying their use of meliorative terms like ‘objective’, ‘scientific’, or ‘theoretical’, to describe their activity, yet the application of scientific assumptions to generic questions usually obscures as many problems as it solves.
  10. Genre theoreticians and other practitioners are generally loath to recognize (and build into their theories) the institutional character of their own generic practice. Though regularly touting ‘proper’ approaches to genre, theorists rarely analyze the cultural stakes involved in identifying certain approaches as ‘improper’. Yet genres are never entirely neutral categories. They — and their critics and theorists — always participate in and further the work of various institutions.

This list of bullets is cited wholly from Altman. The listing is very useful because it lays out exactly the foundations of the current study of genres, and what Altman considers to be wrong with it. Each of these bullets represents a critique that Altman develops later on, often with his own solutions. In the next chapter, Altman explores some of these assumptions as they pertain to genre in film specifically. It is also worth comparing these principles to the study of game genres. The lamentably rigid genre system of mainstream games is supported by the institutional power hinted at in Altman’s points.

Where do genres come from?

This chapter discusses the process by which film genres are formed. Altman develops and hypothesizes another set of bullets about how genres form and emerge. This is coupled with an exploration of how films have historically been marketed. Early film was diverse, and genres were gradually imposed. Again, this resembles the emergence of game genres. In the early history of both games and film, the types of artifacts that were made were strongly constrained by the affordances of the media. In this discussion, it is useful to think about genres as systems of patterns and models, which interact with both the affordances of the medium and the content of the work itself.

  1. Films often gain generic identity from similar defects and failures rather than from shared qualities and triumphs. (p. 33)
  2. The early history of film genres is characterized, it would seem, not by purposeful borrowing from a single pre-existing non-film parent genre, but by apparently incidental borrowing from several unrelated genres. (p. 34)
  3. Even when a genre already exists in other media, the film genre of the same name cannot simply be borrowed from non-film sources, it must be recreated. (p. 35)
  4. Before they are fully constituted through the junction of persistent material and consistent use of that material, nascent genres traverse a period when their only unity derives from shared surface characteristics deployed within generic contexts perceived as dominant. (p. 36)
  5. Films are always available for redefinition — and thus genres for realignment — because the very process of staying in the black involves reconfiguring films. (p. 43)
  6. Genres begin as reading positions established by studio personnel acting as critics, and expressed through film-making conceived as an act of applied criticism. (p. 44)
  7. The first step in genre production is the creation of a reading position through critical dissection, and the second is reinforcement of that position through film production, the required third step is broad industry acceptance of the proposed reading position and genre. (p. 46)
  8. The generic terminology we have inherited is primarily retrospective in nature; though it may provide tools corresponding to our needs, it fails to capture the variety of needs evinced by previous producers, exhibitors, spectators and other generic users. (p. 48)

As pertains to adaptation, the most important of these is (3), which suggests that in order for an adaptation to be made in the first place, the genre must be adapted and recreated, or borrowed from similar adaptations. This is doubly important in the space of game adaptations, where the issue of mechanics and content is so variable.

Are genres stable?

Genres are frequently understood in adjective-noun pairs, for example “musical comedy”, or “western romance”. These evolve in cycles and blend and fold back in on themselves. The adjective-noun pairs can also be thought of in terms of content and mechanics. Genre explains what sort of content is present and according to what sorts of rules they operate. These may also be seen as intersections of models, where the resulting artifact is constrained by both sets of expectations.

Where are genres located?

The goal defined by the question of this chapter is to find where genres are located. In the traditional sense of genre studies, genres are located apart from both the audience and the work, as external and objective categories. Altman reformulates the matter of finding location as discerning how one genre differs from or resembles its neighbors. This echoes Wittgenstein’s assertion that games are alike in the sense of family resemblances. A genre is a complex situation, rather than a structure. There are different ways of looking at genres, from the perspectives of the authors, the readers, or the text itself.

To clarify the genre location issue, Altman asks: where is the location of America? Is it in the Constitution, the Declaration of Independence, the Bill of Rights? Is it in the geographic landmass, in the flag, in shared values, or in the hearts and minds of Americans? The question is really purely epistemological, but it reveals fundamental ambiguities in something supposedly so straightforward.

These problems are similar to the problem of finding models, especially as pertains to fictional works. Ultimately, models and genres are interpreted, but the process of interpreting is an authoritative act. Whoever interprets and then conveys their own take on a model is exerting authority over the artifact in question.

Genres often emerge because of the constraints imposed by institutions (such as the half-hour television block), which usually come from economic or technical concerns. Gradually, as these constraints are turned from limitations into affordances, they become institutionalized themselves.

A final example compares genre classifications to ways of looking. Borrowing from Wittgenstein, who exhorts the reader to, in discussing similarities, look rather than think, Altman does an experiment looking at, not thinking about, nuts. He first looks in the supermarket, where nuts are grouped together along with other things, such as mixes, chocolate morsels, and oils. The products are near each other based on surface qualities. At home, nearness relates to function, not form. This comparison is useful in consideration of genres, classifications, as well as literature and games.

What communication model is appropriate for genres?

The traditional communications model relies on a sender, a receiver, and some sort of medium that exists between them. In the case of literature or film, the thing in the middle is both the medium and the text itself. Altman explains that in mass culture, such as film, films are dispersed to many recipients, but those recipients in turn interact with each other, and ultimately change their impressions of the actual artifact and the medium themselves. The codes of interpretation and understanding are written by the audience. Thus, borrowing from Eric Rothenbuhler, Altman argues that film is twice written: “To count as generic communication, however, something must be read as if it were twice written, first by the original authors and then again by the constellated community that ‘rewrites’ the genre.” (p. 172)

Borrowing from Jameson, I might make a few observations. This approach can also be used to consider a text as existing in time, having meaning not just to one audience but to many. Texts have lives of their own, and are invigorated by translations, adaptations, and reinterpretations. These change the original, going so far as to tear it up and reincorporate it in new products where the form of the original bears little resemblance to the final product. While living, a text is also a shell of the ideas of the original author. The original model is gone, and the text now relies on its readership to make meaning from it. In this context, genre is like a surface that attempts to mediate between the artifact and the audience; but the genre is pliable and, like the audience, will change over time.

Reading Info:
Author/Editor: Altman, Rick
Title: Film/Genre
Type: book
Context:
Tags: media traditions, film, specials
Lookup: Google Scholar, Google Books, Amazon

Jay Bolter: Writing Space

[Readings] (02.01.09, 2:36 pm)

An early issue is the history of print. Bolter gives the example of Victor Hugo’s Notre-Dame de Paris, in which a priest laments that the printed book will destroy the cathedral. This has not exactly been the case; cathedrals arguably remain standing, well appreciated, and well attended. Nonetheless, the printed word has changed the relationship between people and text, and thus their textual engagement with the cathedral. How people look toward the cathedral, and indeed anything else, was fundamentally altered after the development of the printing press. Bolter suggests that we are living in the “late age of print”, in the sense that what print is has changed. With digital media and the internet, the nature of print and its meaning are changing. The printed form has lost primacy as a medium, and the distance between reader and author has contracted.

Cultural evolution has gradually moved text in a more participatory direction. Medieval texts were paragons of authority, and their virtues were aesthetics and precision. Printing gave texts fixity and permanence, and in the modern era it created a form of mass distribution, so that texts could be bought by anyone and distributed everywhere. The digital treats texts as fluid and multiple. The roles of readership and authorship have become blended and fuzzy. These changes affect the voice of the text to its readers.

Writing Space looks at writing using a spatial metaphor. “Each writing space is a material and visual field, whose properties are determined by a writing technology and the uses to which that technology is put by a culture of readers and writers. A writing space is generated by the interaction of material properties and cultural choices and practices.” (p. 12) Writing is thus a fusion of both texts and culture. Space is given as the environment in which the text and its conjugates reside. This is a very different space than the fictional world. It nonetheless has similar qualities, and one could argue that both are environments for performance.

Bolter’s aim in this text is to explore how digital texts, and hypertext especially, are remediations of print. These remediations have in turn affected how print is used and approached, from levels both technical and compositional.

Early writing, while not mechanical, was still technology (techne), in that considerable skill was needed to create a parchment and make it a written surface that could be read. Even oral poetry requires techne, in the sense of speech, memory, and composition. In the world of the digital, in terms of hypertext (and simulation as well), the role of techne once again comes to prominence. Both art and skill are required to create writing, and writers must master whole new technologies in order to make use of these digital forms.

Since the early history of writing, there has been a conflict between oral and written communication. Reading is linear, following a path according to textual codes; oral dialogues are more participatory, though both are bound by codes and expectations. Plato's dialogues were nostalgically backward-looking toward oral culture. His writing occurred in a time when writing was gradually subsuming Greek oral culture. Dialogic interaction sees a resurgence in the form of hypertext and web pages.

In interactive fiction, the spatial metaphor becomes more prevalent. Bolter examines this in the context of Michael Joyce’s Afternoon, where the spatial metaphor is especially apt. “Reading afternoon several times is like exploring a vast house or castle. Although the reader may proceed often down the same corridors and through familiar rooms, she may also come upon a new hallway not previously explored or find a previously locked door suddenly giving way to the touch.” (p. 126) The analogy to exploration refers back to the spaces of textual dungeons and games. The interesting thing is that these spaces are textual: they are spaces of writing, not worlds. Sterne, Joyce, and Borges are all predecessors to hypertext, and are arguably hypertext authors themselves. Their texts present fragmented and exploratory spaces, not meant to be understood as ordinary linear narratives.

The conflict between text as a space versus a world is a subtle and important difference. The spatial metaphor applies to the lexia of the text, where what is being explored is the events, scenes, and descriptions. The act of reading a space is a matter of assembling a coherent picture out of these figments. This exploration operates on the discourse layer of the text. The world metaphor applies to the content of the world, the story instead of the discourse (in the Chatman sense). A reader exploring a world is interested not just in what happens in the context of the narrative, but what might happen beyond and outside of the narrative. The world operates on the diegetic level, and exploring it can seek out the space of what could happen, rather than what does happen.

Reading Info:
Author/Editor: Bolter, J. David
Title: Writing Space: The Computer, Hypertext, and the History of Writing
Type: book
Context:
Tags: media traditions, narrative, cybertext, specials
Lookup: Google Scholar, Google Books, Amazon

Marie-Laure Ryan: Narrative as Virtual Reality

[Readings] (01.31.09, 10:55 pm)

The focus of Ryan’s discussion is to use the language and metaphors of virtual reality to look at texts and narrative. Virtual reality as a genuine endeavor has largely been discredited, or has at least come to be understood as suffering from major flaws, and was already so regarded in 2003 when this book was published. The important elements of virtual reality are ways of thinking about experience. Ryan picks out two dimensions that follow from VR: immersion and interactivity. These dimensions do not belong to VR alone, but it is in that context that they take on their most evocative meaning. VR is important not because of what has been achieved in terms of hardware and input devices, but because of the ideas and metaphors it has created in culture. The idea of a virtual world is a powerful thing, and it has roots in ritual, narrative, and performance.

What is perhaps the most startling about Ryan’s analysis is that she is not attempting to analyze VR in context of narrative theory, but instead analyze narrative theory in the context of VR. Within the scope of this analysis, Ryan brings forth several ways of thinking about texts as worlds.

There is a useful review of the history of literature with respect to immersion and interactivity. The immersive ideals are tied to the aesthetics of illusion, relating to the idea of transparency in the medium. A narrative is transparent when it easily enables immersion. Nonfictional modes tip away from immersion, with a focus on form and language. The 19th century realist novel tips back and focuses on immersion again. Subsequently, modern literature often returns to examining the use of language, again distancing from the experience of the storyworld. In postmodern literature, the idea of meaning becomes ambiguous and dependent on interpretation. Interpretation begets a kind of co-creation of meaning, which shifts to an aesthetic of interactivity. The postmodern form is most prevalent in hypertext.

The Two (and Thousand) Faces of the Virtual

This chapter compares two perspectives on the virtual: Ryan sets the interpretations of Baudrillard against those of Lévy. The conflict is over whether the virtual is purely fake or a matter of potential. The Baudrillardian sense of the virtual is the simulacrum, that which designates an absence of the real and gradually is mistaken for and interchanged with it. Lévy’s understanding (taken from Becoming Virtual) is more positive: the virtual is a state of potentiality. In this sense, there are two processes, virtualization and actualization, which complement each other. Virtualization introduces ambiguities and possibilities, while actualization moves the virtual closer to a state of concreteness. There are four points about these processes: (p. 36)

  1. The relation of the virtual to the actual is one-to-many. There is no limit on the number of possible actualizations of a virtual entity.
  2. The passage from the virtual to the actual involves transformation and is therefore irreversible. As Lévy writes, “Actualization is an event, in the strongest sense of the term.”
  3. The virtual is not anchored in space and time. Actualization is the passage from a state of timelessness and deterritorialization to an existence rooted in a here and now. It is an event of contextualization.
  4. The virtual is an inexhaustible resource. Using it does not lead to its depletion.

The role of virtualization and actualization are significant within narrative. Ryan clearly prefers Lévy’s understanding of the virtual, and rightly so. A text may be considered something that employs both of these processes, and the reading of a text is a means of moving from a more virtual world into a more actual one.

Virtual Reality as Dream and as Technology

This chapter discusses the ideas and hopes for VR as compared to its reality. Much of the concept comes from the metaphor of the holodeck, which became a common idealization. The intensity of speculation about the potential of VR (in its early days) was rampant, and suggests the pervasiveness of the desire to be within a world.

The idea of VR is a dynamic object, a simulation. Ryan compares the idea of simulacra to simulation, as it pertains to VR and computer simulations. Baudrillard’s conception of simulacra are objects which embody deception. “Computer simulations differ from this conception of the simulacrum on several essential points: they are processes and not objects; they possess a function, and this function has nothing to do with deception; they are not supposed to re-present what is but to explore what could be; and they are usually produced for the sake of their heuristic value with respect to what they simulate. To simulate, in this case, is to test a model of the world.” (p. 63)

When input is given to a simulation, it becomes the life story of its user. Simulation is a space of possible stories.

The Text as World

The metaphor of text as world is used by VR theorists, but it has roots deep in literature. Ryan gives the particular examples of Brontë and Conrad, who present worlds in an effort to get the reader to see and experience. A text, as a semantic domain, is a cosmos of meanings. In order to be a world, it must be seen as a whole. Reading turns literary codes into content. There are four approaches to understanding texts as worlds:

  1. Cognitive psychology (metaphors of transportation, and being lost in the book). This is the classical sense of immersion, explored by Richard Gerrig and Victor Nell.
  2. Analytical philosophy (possible worlds). This model is the subject of another of Ryan’s books. This explores the combinatorics of what worlds could possibly exist.
  3. Phenomenology (make-believe). This model is connected to the sense of play, where the story world exists within a kind of magic circle. This approach is studied by Kendall Walton. In this sense, representation and fiction are equivalent, and worlds take on a phenomenological character.
  4. Psychology (mental simulation). Simulation is the subject of most interest to me. Simulation is studied by Walton, as well as by Stephen Stich and Shaun Nichols, and Gregory Currie. Stich and Nichols argue that simulation is a kind of counterfactual reasoning, where the simulator suspends their own decision making and takes on the beliefs and desires of the simulated to determine their course of action. The act of reading moves the story world forward in time (and also along the causal chain). Simulation is thus the reader’s performance of the narrative. It hearkens back to Aristotle’s Poetics, which recommends envisaging things vividly. Simulation is perhaps best endorsed by the process of writing: “There cannot be a more eloquent tribute to the heuristic value of mental simulation than the feeling voiced by many authors that their characters live a life of their own.” (p. 114)
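
The Stich and Nichols account of simulation as counterfactual reasoning can be sketched computationally. The sketch below is a toy illustration of the idea rather than anything from Ryan's text; the belief and desire structures, and the scoring scheme, are all hypothetical:

```python
# Toy sketch of simulation theory (Stich & Nichols): the simulator
# suspends its own beliefs and desires, feeds the *simulated* agent's
# beliefs and desires into its ordinary decision machinery, and runs
# that machinery "offline" to predict the other's action.

def decide(beliefs, desires, options):
    """An ordinary decision procedure: pick the option whose believed
    outcome best satisfies the given desires."""
    def score(option):
        outcome = beliefs.get(option)      # what the agent thinks will happen
        return desires.get(outcome, 0)     # how much the agent wants that
    return max(options, key=score)

def simulate_other(other_beliefs, other_desires, options):
    """Mental simulation: run one's own decision procedure on the
    other agent's beliefs and desires instead of one's own."""
    return decide(other_beliefs, other_desires, options)

# A reader predicting a character's next move:
beliefs = {"flee": "safety", "fight": "injury"}
desires = {"safety": 2, "injury": -1}
predicted = simulate_other(beliefs, desires, ["flee", "fight"])  # "flee"
```

The point of the sketch is that the predictor has no separate theory of the character: the same `decide` function serves for acting and for simulating, with only the inputs swapped.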

Immersive Paradoxes

Temporal immersion is similar to what might be called suspense. This has to do with dramatic tension and reader investment. Ryan makes an analogy of suspense to the engagement of sports fans. Even though sports fans have no agency over the outcomes, they are very engaged and immersed in the game, psychologically invested in its outcome. There are three bullets describing the immersion of the sports fans, and then another three describing narrative suspense. (p. 141-142)

  • The enjoyment of the spectator is due to the fact that he roots for one of the teams and sees one outcome as vastly preferable to another.
  • Spectators participate in the action through the activity known as “armchair quarterbacking”: they imagine scenarios for the action to come and make strategic decisions for the participants. This activity is made possible by the rigidity of the rules that determine the range of the possible.
  • Suspense increases as the range of possibilities decreases. It is never greater than in the ninth inning or the last two minutes of the game, when the teams are running out of resources and options are reduced to a sharply profiled alternative: score now and stay alive, or fail to do so and lose the game. At the height of suspense, the ticking of the clock (if the game is limited by time) becomes strategically as important as the actions of the players. When this happens, the spectator reaches a state of complete temporal immersion.

Narrative suspense operates according to similar rules:

  • Dramatic tension is usually correlated to the reader’s interest in the fate of the hero. The prototypical suspense situation occurs when a character is in danger and the reader hopes for a favorable alternative.
  • Suspense is dependent on the construction of virtual scripts and events. Though it is tied to uncertainty, it must present what Noel Carroll has called “a structured horizon of anticipation” (“Paradox,” 75). This horizon is given shape by potentialities that trace visible roads into the future, such as the processes currently underway, the desires of characters, the goals to which they are committed, and the plans under execution. The reader’s ability to project these paths is facilitated by narrative devices that constrain the horizon of possibilities in the same way the rules of a game determine what can happen….
  • The intensity of suspense is inversely proportional to the range of possibilities. At the beginning of a story, everything can happen, and the forking paths into the future are too numerous to contemplate. The future begins to take shape when a problem arises and confronts the hero with a limited number of possible lines of action. When a line is chosen, the spectrum of possible developments is reduced to the dichotomy of one branch leading to success and another ending in failure, a polarization that marks the beginning of the climax in the action.

There are four kinds of suspense. These differ in terms of intensity, subject matter, and the role of the reader.

  1. What suspense. This is the most intense of the varieties, and is a matter of wanting to know what will happen next. Usually this is met with some crucial narrative question: will the hero triumph over the villain, will the two lovers marry, etc. The involvement of the reader is in a state of not knowing the outcome.
  2. How (why) suspense. At the next level is the question of how an event or situation came to take place. The reader knows the outcome, but not the process leading to that outcome, and the mode of engagement is thus a matter of the reader moving forward and backward in time.
  3. Who suspense. Ryan explains that this is the suspense of murder mysteries, finding out who did it. The reader's involvement is epistemic rather than emotional; the reader is not attached to the outcome. This suspense is a matter of considering several finite options instead of paths.
  4. Metasuspense. This is a matter of suspense regarding how the author is going to weave the narrative together. This is the least immersed variety, in that it involves meta speculation on the part of the reader.

The paradox of suspense is that a written text is necessarily certain, while suspense requires uncertainty (from Carroll). Kendall Walton poses a resolution using make-believe theory, which is a theory of play: the reader constructs a game wherein certainty is stricken from the game rules.

From Immersion to Interactivity

This section compares texts as games and texts as worlds. Ryan gives the examples of Balzac, Dickens, Tolstoy, Dostoevsky, and Proust, whose works were maximally dense as narrative worlds and pinnacles of the realist movement. Subsequent works could not maintain the density of narrative realism, and so narrative worlds needed to break apart and become dissociated. Play becomes a dominant theme and operation within these. Ryan compares games and language, where the terms of games do not fit easily into the literal sense of narrative.

Ryan compares each of Caillois’ types of play to texts. This works in terms of reader expectations. In modern and postmodern narratives, narrative rules shift from ludus to paidia, as systems move to subvert rules.

Reading Info:
Author/Editor: Ryan, Marie-Laure
Title: Narrative as Virtual Reality
Type: book
Context:
Tags: narrative, simulation, specials
Lookup: Google Scholar, Google Books, Amazon

Rodney Brooks: Intelligence Without Reason

[Readings] (01.28.09, 9:52 pm)

Brooks argues that instead of AI having an influence on computer architectures, the converse is true: architectures have had a strong influence on our models of thought and intelligence, particularly the von Neumann model of computation.

Brooks avoids a formal definition of intelligence explaining that it can lead to philosophical regress. Instead, “therefore I prefer to stay with a more informal notion of intelligence being the sort of stuff that humans do, pretty much all the time.” This is nice, but unfortunately can lead to wildly varying interpretations.

The classical account of AI is built from the top down, starting with thought and reason. This naturally leads into abstract approaches to cognition such as knowledge representation, planning, and problem solving. Brooks’ approach aims to start at the bottom level, with physical systems that are situated in physical environments: robots. This approach is meant to reflect the evolutionary path of human development. The running comparison is between artificial intelligence on one hand and robots and biological systems on the other.

Early approaches to robotics made use of robots with onboard computers that would form models of their environments and then form plans to act in relation to them. This set of models is referred to as the Sense, Model, Plan, Act framework, or SMPA. This approach was influenced by the traditional AI models, and was not very successful. Brooks explains that the assumption behind SMPA was that once the problem of performing tasks in a static environment had been solved, then the more difficult problem of acting in a dynamic environment would fall into place. This echoes the claims of Newell and Simon in their assertion that once reasoning was solved, then issues of emotion, human interaction, and the like would naturally follow.

Around 1984, roboticists realized several things of importance:

  • Most of what people do ordinarily is neither problem solving nor planning, but activity in a benign but dynamic world. Objects are not defined by symbols, but by interactions. (Agre and Chapman)
  • An observer may be able to describe an agent’s beliefs and goals, but those do not need to be reflected in the agent itself. (Kaelbling and Rosenschein)
  • In order to test ideas of intelligence, it is necessary to build agents in the real world. Agents can exhibit behavior that appears intelligent even without having internal data structures. (Brooks)

This new approach to robotics marks a significant departure from traditional AI. It also contains several new values for how to think about intelligence and robots. The approach values both situatedness and embodiment. Agents are present in the world and interact with real situations as opposed to abstractions. The intelligence of robots arises from their rapport with the world, as opposed to abstract reasoning. Robotics is also characterized by behavior that emerges from the component elements.
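
Brooks's contrast between SMPA planning and situated, reactive control can be illustrated with a toy sketch. Everything here (the sensor readings, behavior names, and priority ordering) is a hypothetical illustration of a subsumption-style controller, not code from the essay:

```python
# Toy subsumption-style controller: layered behaviors react directly
# to sensed input, and higher-priority layers subsume (override)
# lower ones. There is no internal world model and no plan; the
# world itself serves as the model, sensed anew on every cycle.

def avoid(sensed):
    """Higher-priority layer: steer away from obstacles."""
    if sensed.get("obstacle_ahead"):
        return "turn"
    return None  # defer to lower layers

def wander(sensed):
    """Default layer: keep moving."""
    return "forward"

LAYERS = [avoid, wander]  # ordered from highest to lowest priority

def act(sensed):
    """Pick the first action produced by the layered behaviors.
    No deliberation: the action is a direct function of sensing."""
    for behavior in LAYERS:
        action = behavior(sensed)
        if action is not None:
            return action

act({"obstacle_ahead": True})   # "turn"
act({"obstacle_ahead": False})  # "forward"
```

The design point is that apparently purposeful behavior (avoiding obstacles while exploring) emerges from the interaction of simple reactive layers with the environment, without any internal data structure representing the world.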

Brooks compares the systematic behavior of computers and biological systems. Biology operates in parallel, at low speed. By comparison, AI systems run on von Neumann machines, which perform serial calculations over large spaces of memory, with narrow channels by which to access that memory. In AI research, there is a strong trend of treating the current models of computation as the pinnacle of computational technology. This is perhaps a straw-man argument, but it is reflected in the way that human problem solving has been associated with computational models. Brooks argues that this is extremely foolhardy.

Particularly, Brooks is critical of Turing’s position that cognition and computation are independent of embodiment. Turing’s examples of computational intelligence also encouraged disembodied activities (especially chess), the prevalence of which has continued in AI research. Brooks goes on to describe several AI movements, each of which has met with little success.

Brooks gives an overview of some biological perspectives on how cognition and intelligence work. Early approaches to biology, notably ethology, were heavily influential in early AI. These approaches propose hierarchical models of behavior selection, but have largely been discredited by modern evidence. Particularly, modern approaches to psychology, especially neurophysiology, suggest a flexibility in the workings of human brains which is dramatically contrary to ideas used by AI, notably the notion of the brain as a knowledge storage system, and hierarchical models of cognition. Brooks argues that the models used by traditional AI are not reflective of how human brains are built.

Brooks’ critique of AI is severe: he asserts that it is necessary to focus on robots that are situated within physical space. The robots should not hold internal representations, but instead use the world as their model. Systems would also need to work within the constraints of their physical components. Robots are not only present in physical space but, like real bodies, are also imperfect and subject to limitations, deficiencies, and drifting calibrations.

While these arguments are not particularly helpful for the purpose of developing a simulation of a cultural world, they are important for considering ways to think of agents within a world. Brooks’ focus is on intelligence, not in the sense of the metaphysical, or the ability to solve complex abstract problems, but rather in the empirical sense. Intelligence is determined by interactions within the world, and by the eye of the observer. In light of this, his essay provides an anchoring to the empirical and demonstrable. Even in the case of a simulated world, agents are still situated (though in a significantly reduced sense), and thus their intelligence is still determined by their engagement with that world.

Reading Info:
Author/Editor: Brooks, Rodney
Title: Intelligence Without Reason
Type: book
Context:
Tags: specials, ai, embodiment
Lookup: Google Scholar, Google Books, Amazon

Sherry Turkle: Seeing Through Computers

[Readings] (01.28.09, 1:47 pm)

This article elucidates some material which later appears in Turkle’s book, The Second Self. The subject of this essay is the culture of simulation and its effect on pedagogy. The article is torn between the competing ideas of the computer as a creative tool versus an appliance, and between the role of education as teaching mastery versus mere usage.

Turkle’s article was published in 1997, which gives it some historical distance from the current trends in education, but the state of affairs in 1997 seems to strongly resemble the state of affairs now, at least as pertains to simulation. I think that modern education has become overtaken by the cultural effects of the internet and mass information.

Computer education in the 1980s relied on teaching students programming, using the metaphor of the computer as a machine or calculator. Educators aimed to portray the internals of the computer as something to be understood and manipulated. This moment was seminally influenced by Seymour Papert’s Mindstorms. This style of thinking culminated in an exhibit at the Boston Computer Museum: a computer visually blown up, so that children could see its insides.

Gradually, there is a shift in the way that people think about computers, which is heralded by the desktop metaphor of the Macintosh user interface. Instead of seeing transparency as looking into the lower operations of the computer, the role of transparency changes to the immediacy of metaphors and interfaces. Transparency becomes the value of being able to look at a computer screen and immediately see a document, a spreadsheet, or a desktop.

This shift brings an educational change, where educators become motivated to teach the computer as an appliance, and instruct children in the operation of programs. Instead of being taught how to build machines, children are taught to use them. This is partially motivated by the prevalence of computers in the workplace and the goal of training students for future employment. Along the way, students get used to thinking of the computer as a black box, with internals hidden and unknowable, rather than something to be learned and mastered. The pinnacle of this new moment is simulation, which is all about black boxes.

Turkle gives an example of a child playing SimLife, who does not attempt to ask what the meaning of the terms in simulation are. Instead, he understands things functionally. This is depicted with some degree of terror. Turkle fears that simulation shuts down questions rather than answering them, but I disagree, and say that simulation instead demonstrates answers by playing them out, by exposing procedural and functional relationships. Instead of telling, simulation shows.

Another element is that simulation culture produces in kids a comfort with working with partial and incomplete information. This is also a gender issue: in Western culture, girls are traditionally less comfortable working with partially understood systems, and prefer having a more complete understanding.

Turkle presses this question about simulation culture further: why should kids use virtual magnets to pick up virtual pins? However, I think the reason is exactly the same as for using real magnets to pick up real pins. Interaction is playful, but it is illustrative of relationships.

Turkle also describes concerns over simulation in college education. Simulation results in students’ detachment from their work, and educators are concerned that students do not understand the importance and effects of the subjects they are learning. It does, however, enable students to do work and experiments they would not have been able to do before. This is still a subject of some controversy in education today. There is a heretical/blasphemous element to simulation in science, where educators fear that students will mistake the world of simulation for the real world. This fear goes back to Baudrillard.

This article discusses Turkle’s ideas of simulation resignation and denial, and poses a third mode of criticism, one which examines and challenges a simulation’s internal assumptions. She argues that it would be possible to develop a readership for the culture of simulation, one that emphasizes a way of distinguishing between the world of the simulation and the real world.

The way to build this would be to have children create their own simulations, developing the authorial skill needed to critique and read simulations. We have a centuries-long history of readership for written text; a similar tradition must be built for understanding simulations.

Reading Info:
Author/Editor: Turkle, Sherry
Title: Seeing Through Computers: Education in a Culture of Simulation
Type: article
Context:
Journal: American Prospect 8, no. 31 (March 1997)
Tags: cyberculture, specials
Lookup: Google Scholar