Lucy Suchman: Plans and Situated Action

[Readings] (08.29.08, 4:51 pm)

Overview

This excerpt is from NMR. The editors comment that Suchman made a deep critique of AI: planning and symbolic manipulation are flawed as models of human intelligence. Suchman argues that human reasoning is not based on planning; rather, action is grounded in embodiment. Plans, she claims, are stories used to justify and explain those actions. This critique influenced Philip Agre and David Chapman, who explored AI in a manner consistent with Suchman's argument.

Notes

The preface, "Navigation," opens with a discussion of Trukese and European navigators. The European navigator operates in accordance with a plan at all times, while the Trukese navigator operates only with an objective in mind, using circumstance and conditions to alter his course. The metaphor suggests three possibilities. One: how to act is purposefully learned, and differs across cultures. Suchman counters that all activity, even the most analytic, is fundamentally embodied. Two: planning is used instrumentally, depending on experience or expertise; but this seems to imply that the Trukese navigator would never get anywhere. Three, Suchman's own position: all purposeful action is situated in its circumstances. We act like the Trukese but talk like the Europeans; plans are an ad-hoc resource for action. This metaphor is an excellent representation of Suchman's critique, but it also exposes some other qualities that may easily slip by. One is that it was European thinking, heavily based in Cartesian dualism, that led to the development of computers and AI. The absence of non-Western thinking pushes other forms of reasoning and philosophy into the background in common electronic models of cognition and interaction.

Suchman begins by discussing Turkle's research on computers as collaborative objects. Computers are reactive, linguistic, and internally opaque, which leads to design challenges, especially around accountability. Computers seem to reason, but the manner of that reasoning is concealed. On automata: cognitive science has pushed a symbolic metaphor of cognition, with AI and computers as its logical receptors. Cognitive science emphasizes the detachment of rationality from embodiment, and supports the abstract symbolic reasoning pervasive in AI today. Suchman then discusses the linguistic metaphor for interaction in HCI, which frames interaction as a dialogue, compared with dialogue between two people. In such a case, as Dennett argues, mutual opacity makes intentional explanations much more powerful; thus the opacity of the computer invites an intentional stance. From the design perspective, artifacts do and should try to explain themselves, but this is muddy water when it comes to intentionality. Opacity, especially in untrusted situations, can place a user at odds with a treacherous world. Obviously, this is not the dominant perception of computers or computation; rather, they are seen as extensions or objects, and interaction is not a dialogue so much as commands and filtering. From an AI perspective, though, the concern is natural.

Suchman finishes the chapter with a comparison of the computer as an "artifact designed for a purpose" versus "an artifact having purposes". The former, the instrumental approach, evokes embodied and situated reasoning, with the computer as an adaptable tool. The latter is the approach of AI, which treats the computer as an intelligent device that engages with the user reflexively and is not *usable* in the literal sense. However, this form of reasoning gets convoluted when applied to games, which, having no extradiegetic goals, have no purpose. Games are most engaging when reflective and automated, but simultaneously evoke an intense state of situatedness via their immersive character.

Reading Info:
Author/Editor: Suchman, Lucy
Title: Plans and Situated Actions
Type: book
Tags: dms, embodiment, ai

Paul Dourish: Where the Action Is

[Readings] (08.29.08, 4:50 pm)

Overview

Dourish writes of "embodied interaction," an idea meant to connect the realms of HCI, interfaces, and design with continental phenomenology. Dourish's premise is that HCI has learned a great deal from phenomenology (specifically in the development of tangible and social computing), but stands to gain a great deal more from applying the principles of embodiment and being to computational artifacts, rather than persisting with conventional procedural metaphors. Specifically, Dourish claims that the sense of humans as subservient to computers (originating in the early history of computer science) remains strong today. While Dourish's discussion is meant to give a new perspective on HCI, his book also gives some insight into how simulated agents might experience embodiment and be "phenomenologically sympathetic." Embodied interaction may be seen as occurring between human and computer, but potentially also between simulated agents and their artificial worlds.

Notes

A History of Interaction

Dourish opens by stating that "Embodied Interaction is interaction with computer systems that occupy our world, a world of physical and social reality, and that exploit this fact in how they interact with us." (p. 3) Embodied interaction shifts emphasis from the procedural model to a more process-based model, based specifically in things like Milner's pi calculus and Rodney Brooks's robotics. (p. 4)

Dourish also notes that the original "computers" were real people whose job was to perform calculations. (p. 6) This is interesting for comparing computer-based simulation to rule-based simulation to human imagination. Removing the digital element from computing leaves a lot of room for error and creativity within human processes (mechanization necessarily tries to stamp this out), and it is ironic to imagine a digital story simulation being carried out by live individuals. The procedural aspect seems necessarily very restrictive in the creative role. Both stem from the idea of a rule-based system, but where humans are challenged and made more creative by the constraints implied by rules, digital systems seem to be limited by them.

One of the means of interaction that Dourish discusses is the textual form. Here he is primarily referring to command-line interfaces, such as those seen in a command shell. These are textual in a sense, but not in the sense that a letter to a friend, an IM conversation, or verbal instructions are textual or linguistic. Dourish notes that conversation and dialogue are now integral to our understanding of interactivity. The purpose of such conversation is implicitly assumed to be giving instructions to run computer tasks, not actual social engagement. "Textual interaction drew upon language much more explicitly than before, and at the same time it was accompanied by a transition to a new model of computing, in which a user would actually sit in front of a computer terminal, entering commands and reading responses. With this combination of language use and direct interaction, it was natural to look on the result as a 'conversation' or 'dialogue'." (p. 10)

In the discussion of visual metaphor, Dourish describes visual interaction as a more direct form of engagement, wherein the user interacts with abstract objects in a direct and concrete manner. These objects and interactions are represented symbolically and visually, creating a world of metaphors in which the system of concepts is complete and consistent. "From these separate elements, the designer builds an inhabited world in which users act. Direct manipulation interfaces exploit and extend the benefits of graphical interaction." (p. 13) While this approach is interesting from an interface perspective, it also renders the computer world as a Baudrillardian simulation. The representative power of simulation becomes most clear at this level, when metaphors and direct interaction become present in interfaces.

Social and tangible interfaces are grounded in embodied interaction, which is at odds with a positivist, Cartesian "naive cognitivism" that gives a very dualist take on interaction, with a heavy emphasis on symbolic representation. Embodiment claims that cognition without a body is a fallacy, and embodied interaction exploits the corporeality of its interactors. Embodiment also implies a presence, and hence participation: "Embodiment, instead, denotes a form of participative status. Embodiment is about the fact that things are embedded in the world, and the ways in which their reality depends on being embedded." (p. 18) We can extend this to thinking about agents and their participative nature.

Being in the World: Embodied Interaction

In the next section, Dourish discusses various phenomenologists (specifically Husserl, Heidegger, Schutz, and Merleau-Ponty) and their potential application to HCI. He starts with two definitions of embodiment, the second of which is: "Embodied phenomena are those that by their very nature occur in real time and real space." (p. 101) Looking at people interacting with computers, Dourish asserts that people respond physically, directly, and kinetically to the world around them in a tangible manner, and that operating through a computer abstracts this; even in immersive environments, users operate in opposition to interfaces. We do not need to plan when engaging with the world as we do with interfaces. However, this seems a dubious claim: many people have great trouble engaging with the world (it does not have interfaces, but it does have protocols and social conventions). Many people with WoW or SL addictions interact more seamlessly in those worlds than they do in reality. Dourish specifically examines 3D interfaces, wherein a user navigates a world with keyboard and mouse, and notes that our operation in the world is not of this nature: we do not have a homunculus sitting inside our head, observing through our eyes and indirectly controlling our actions (which sounds like Searle and AI). This is the difference between player and avatar, and this relationship, I think, is much more complex and nuanced, with the capacity to be much tighter than presumed here. (p. 102)

On Husserl: Husserl wanted to carry Cartesian dualism forward to address the phenomena of experience. Specifically, he was dissatisfied with the abstraction of mathematics and science, and wanted to "develop the philosophy of experience as a rigorous science". Moreover, he drew lines between objects of consciousness and objects of intentionality (the Cartesian duals of objects). These sound like they might relate to the socially enacted objects of Mead. Objects of intentionality are "noema," and our mental consciousness of these objects is "noesis." Underlying the concept of noema is a Platonic conception of essence. (p. 105-106)

On Heidegger: Heidegger rejected Husserl’s dualism, and emphasized that the mental and physical spaces are deeply connected. “Essentially, Heidegger transformed the problem of phenomenology from an epistemological question, a question about knowledge, to an ontological question, a question about forms and categories of existence. Instead of asking, ‘How can we know about the world?’ Heidegger asked, ‘How does the world reveal itself to us through our encounters with it?'”. This change in question is a focal point for HCI and interaction. (p. 107)

On Schutz: Schutz looked on phenomenology as applied to social action. Social enaction is rooted in shared experience, which is phenomenological in nature. Collective action depends on intersubjective understandings of the world. “Schutz argued that the meaningfulness of social action had to emerge within the context of the actor’s own experience with the world.” (p. 111)

On Merleau-Ponty: The focus here is the phenomenology of perception. Merleau-Ponty wished to reconcile Husserl's philosophy of essences with Heidegger's philosophy of being. This involved a change in perspective on the role of the body in experience. "For Merleau-Ponty, the body is neither subject nor object, but an ambiguous third party." To understand the body, one must understand perception. This is an interesting approach toward embodiment, since this treatment of the body sounds very applicable to the approach necessary for simulated agents. Simulated agents do not *have* bodies, but they must be embodied within their world, and to address this problem, one must turn to the matter of perception. Merleau-Ponty goes on to emphasize a reversibility in perception (that others may perceive us; this can get very Lacanian), which means that we can apprehend "perceptions of ourselves that we engender in others". This work was done by Robertson in 1997, and sounds very similar to Goffman's performance of the self. (p. 114-115)

Ultimately, the phenomenologists have explored the relationship between embodied action and meaning. Meaning, to them, is found in the world with which we are in constant contact and engagement. Meaning can be found via the world revealing itself to us and affording us actions to perform upon it. (p. 116) This sounds very similar to the system of affordances developed by J.J. Gibson and later Don Norman. More on this: "In other words, an affordance is a three-way relationship between the environment, the organism, and an activity. This three-way relationship is at the heart of ecological psychology, and the challenge of ecological psychology lies in just how it is centered on the notion of an organism acting in an environment: being in the world." (p. 118) Again, this three-way relationship sounds like a model for simulated behavior. Michael Polanyi makes a significant investigation of embodied skills, which require a "knowing how" versus a "knowing what". (p. 119)

On Wittgenstein: he used embodiment in relation to language. In the linguistic tradition, "He argues that language and meaning are inseparable from the practices of language users. Meaning resides not in disembodied representations, but in practical occasions of language use." (p. 124) The return to language is very interesting here, after the departure to visual and tangible interfaces.

Reading Info:
Author/Editor: Dourish, Paul
Title: Where the Action Is
Type: book
Tags: dms, embodiment, hci

Stuart Moulthrop: From Work to Play

[Readings] (08.29.08, 4:49 pm)

Overview

Moulthrop comes from a general background of Ludology, and spends much of his time critiquing Janet Murray’s Hamlet on the Holodeck. The general idea of the essay seems to be an attempt to find other means of relevance for game studies, especially as a means of comprehending culture. Moulthrop strongly references Pierre Levy on collective intelligence, and is trying to derive some form of understanding of games and game studies as being critical to the realization of a molecular society.

Notes

Moulthrop opens his essay by noting the contemporary setting and the peculiar position of game studies in the (particularly American) political climate. He references Donna Haraway on the current state of the cultural system as polymorphous and informational rather than industrial. Haraway sees society transforming into one oriented around play and deadly games instead of work. The idea of using play to express the cultural and political condition in this post-industrial, "neo-Taylorist" setting seems, as Moulthrop notes, inherently troublesome.

This idea is the first step toward realizing the molecular society, though. The essence of Moulthrop's connection is that games are emblematic of a culture of configuration, rather than consumption (or "mere" interpretation). Configuration enables a new response to media, and a new kind of awareness hitherto unavailable.

A connection is made between these ideas and the restrictive role perceived in literary criticism as applied to games. The objection here is the classic cry of attack made by ludologists, but it is posed in the context of this larger cultural dilemma. A division is posed between drama and narrative on one side and play, simulation, and game on the other. The idea is that the so-called narrativists expect these media to serve the function of telling stories.

The change in focus that Moulthrop desires is to turn attention to the configurative power that users and players might have over works, as opposed to interpretive power. One of the fundamental appeals of games is participatory freedom, at least in the ability to engage with the game in multiple ways: for example, seeing what happens when the puzzle fails rather than succeeds.

However, it is important to note that even in the “most interactive” of games, there are severe limits to this configuration. In GTA, I might have the freedom to commit many kinds of crime, to customize my character, to run free about the city, but there is a significant limitation in that this is all it is ever possible to do. I cannot make the game about anything else. This may be a shallow criticism, but it is exactly the same constraint posed by the practice of interpretation. The only difference is that with configuration, the openness of the work to the player is inscribed and represented within the world of the work itself, but with interpretation it is only in the mind of the reader. This difference too is not severe, since ultimately, nothing is actually *changed* in the game, at least not in the way that others might play it. From the perspective of networked games or communities, this could easily mirror the status of interpretive communities that share and communally shape interpretations.

Further difference is claimed in alternate reality games (such as The Beast for the film A.I. and I Love Bees for Halo 2), with emphasis on their procedural appeal. At the same time, narratives do have their procedural mechanics, and furthermore the appeal of these, the mystery, is rarely the ineffability of the "puzzleness" alone. I would argue that the puzzle solving and the fiction of these sorts of games are inseparable, or at least, it is their intricate and deep connection that makes the game so appealing.

Moulthrop finds that one of his main differences with Janet Murray concerns the role of transparency in games. This perspective of Murray's seems to originate with Don Norman, who views transparency as the ideal. All media eventually find some transparency (i.e., over time the television ceases to be a box and turns into a window). However, part of the power of computers (as found by Sherry Turkle) is their immersiveness and holding power, which derive from their inherent opacity. Transparency is furthermore not even the ideal in literature, since open works are deliberately ambiguous and inexact, and sometimes even seemingly contradictory.

A conflict is posed here between the consumptive values of transparent media and the values of participatory media. The example given is Citizens Band radio, which fell because people supposedly found nothing to say. The trend toward passivity seems to change dramatically after the introduction of online participatory culture, but the suddenness of that change is debatable. From de Certeau, we might find that individuals do more with the things they consume than merely absorb them.

Moulthrop’s conclusion nonetheless ties together the values of configuration (what one might even call openness) to the ideas of participation and procedural literacy and criticism.

Reading Info:
Author/Editor: Moulthrop, Stuart
Title: From Work to Play
Type: book
Tags: dms, cybertext, ludology, games

Bruno Latour: We Have Never Been Modern

[Readings] (08.29.08, 4:48 pm)

Overview

The overarching thesis is that the divide between nature and man, as proclaimed by modernism, is artificial and never existed. This divide enabled both the sciences and the humanities to make claims to absolute truth. Concealed by the divide was the emergence of hybrids, "quasi-objects," that blurred the lines between the human and the natural.

The intention of this is to re-examine the role and past of hybrids, and find a new relationship between science and culture. Essentially, to humanize science, and make humanities more scientific.

Notes

Chapter 1: Crisis

Latour opens with the explosion of meaning systems that can be exposed by reading a magazine. The magazine mixes science and politics and crosses many domains. "All of culture and all of nature get churned up again every day." This issue stems from an enormous density of connections, where the myriad ways in which science and culture affect each other are exposed, leading a curious observer down the rabbit hole of connections.

Latour seems to be claiming that while this confluence of factors is densely connected, knowledge is separated from power and politics; there is some perceived divide between the two. Latour mentions a number of writers who have expounded on how technology shapes society (MacKenzie on guidance systems, Michel Callon on fuel cells, Thomas Hughes on the incandescent lamp, etc.). Latour is trying to show that these relations are more than merely science or politics. The claim seems to be that a culture before a technology and after it are very different, and that, despite the connections exposed by magazines, the larger connections remain invisible.

Latour is concerned with the feasibility of this sort of technological-cultural criticism. He wants to determine whether it is possible to perform this analysis effectively at all. His claim is that networks are elusive for deeper anthropological problems.

Latour describes the fall of the Berlin Wall in 1989, and compares it with the first conferences on global ecology. Both socialism and capitalism began with the best intentions (to abolish man’s exploitation of man, and to reorient man’s exploitation of man to an exploitation of nature, respectively), but both paths have gone to such extremes that they turned in on themselves. It then becomes a question over what path to take next. The “antimodern” approach is to no longer dominate nature. The postmodern approach is indecisive and incomplete, while others aim to continue and push towards the modern anyway.

The idea of “modern” is hazy and difficult to define, but it is evocative. Latour’s idea is to define modern by identifying two characteristics. One is the practice of translation, which creates hybrids of nature and culture. The second practice is purification, which aims to isolate meanings and create distinct ontological zones. Translation corresponds to the development of networks, while purification enables criticism.

Reading Info:
Author/Editor: Latour, Bruno
Title: We Have Never Been Modern
Type: book
Tags: dms, postmodernism

William J. Mitchell: The Reconfigured Eye

[Readings] (08.29.08, 4:47 pm)

Notes

Chapter 3: Intention and Artifice

This particular text is on photography and its role in representing truth. The credibility of photographs has eroded, after a long period of claiming to represent the whole objective truth, or at least to prove that something happened. This criticism runs deeper than a rejection of photographs as merely objective. The idea that the camera is a viewpoint, must take a physical position, and is thus inherently subjective has long been acknowledged. However, the photograph still stood as a proof of existence.

Mitchell is concerned with the constructability of photographs and the fact that they are subject to the same kind of artifice that any other form of evidence is.

Photographs are fossilized light: momentary interpretations made permanent by their exposure on film. They can be seen as records, but like any record, they construct a system and view of reality.

Photographs have a strong connection with matters of convention. For a long while, painting was rooted in convention (ancient Egyptian figures are a good example), and photography adheres to conventions of its own. The characteristics of shading and lighting are firmly grounded in the conventions of the photographic tradition. In turn, these conventions connect with our means of reading and perceiving information from signs encoded in them. Mitchell cites Roman Jakobson, who explains that over the course of a tradition, images become ideograms that are immediately mapped to ideas, and we no longer see pictures. This idea relates again to the tendency of media to gradually become more transparent over time.

Photographs themselves tend to be deeply connected to their subjects, though: “since photographs are very strongly linked by contiguity to the objects they portray, we have come to regard them not as pictures but as formulae that metonymically evoke fragments of reality.”

This line of reasoning also seems to connect to the ideas of simulation sublimation. Images are seen as semiotic systems, and are internalized, or are, at least, taken to denote the truth when cast as such (which is how we have approached photography culturally). It is only recently that we have begun to exhibit a kind of awareness of the artificiality of the image that we can criticize photographs as non-representative of the truth. I imagine that similar arguments can be made toward simulation, but its artificiality is generally much more apparent.

The part of the camera that lends it its authority is its mechanical nature. Because it is a machine, seemingly untouched by other values or human intervention, it mechanically must capture the truth without the factor of human error. The ramifications of the “superiority” of machine reproduction and capture are documented (for instance with Walter Benjamin), but the near autonomous power of the machine emerges in a great deal of AI criticism.

Cameras impose the existence of a subject, though. It is impossible to take a photograph of “no particular” horse or person, though it might be possible to make a picture of them. The necessity of this particular subject is the essence of the camera’s intentionality. This intentionality is also where the camera’s subjectivity comes into play, and it introduces values based on the question of framing, perspective, and choice. Mitchell explains that this forms a spectrum: “However, Scruton’s distinction between intentional and causal components in image production is helpful, particularly if we do not insist on a clear cut dividing line between paintings and photographs but think rather of a spectrum running from nonalgorithmic to algorithmic conditions–with ideal paintings at one end and ideal photographs at the other.” (p. 29)

The problem that arises in comparing this to computation is that even algorithms have biases and values.

Digital photography and photo editing blur the lines between causal and intentional images. Something which is intentional can be hidden and be made to seem a natural causal element. This perverts the authority of the mechanical process by which photographs tend to derive their legitimacy.

Reading images requires a certain interpretive labor to be undertaken by the observer, and this process is where the influence of convention comes into play. “In forming interpretations of images, then, we use evidence of the parts to suggest possible interpretations of the whole, and we use the context of the whole to suggest possible interpretations of the parts.” (p. 34)

The problem of altering images involves balancing the alteration against the requirements for consistency and coherence in the relation of parts to the whole. Beyond that, we can check for implausibility by comparing the subjects to what we already know to be true (or might be able to verify externally). This issue seems to reflect ideas being worked through in AI and cognition: when faced with a black box whose claims we have no means of verifying or checking, but which is grounded in a photograph-like mechanism, we have no choice but to accept the result. The example Mitchell gives is the photographs of the astronauts on the moon, but the idea can be extended to other regions, such as simulation.

Mitchell writes of the general acceptance of images: “In general, if an image follows the conventions of photography and seems internally coherent, if the visual evidence that it presents supports the caption, and if we can confirm that this visual evidence is consistent with other things that we accept as knowledge within the framework of the relevant discourse, then we feel justified in the attitude that seeing is believing.”

Reading Info:
Author/Editor: Mitchell, William J.
Title: The Reconfigured Eye
Type: book
Tags: dms, visual culture

W.J.T. Mitchell: Picture Theory

[Readings] (08.29.08, 4:46 pm)

Notes

Chapter 2: Metapictures

This essay is about images that refer to images. It further relates to art that refers to art (partially itself, and partially other art). Mitchell starts with Clement Greenberg and Michael Fried, who discuss how modern art has become essential and self-referential. This seems partly representative of postmodernism and its eternal self-reference and analysis. It also connects to self-reference in language and literature: metalanguages that reflect on languages.

Mitchell's focus is still on images and pictures, though; this is mentioned up front, so we keep it in mind.

The first example given is of a man drawing a world and then finding himself inside of it. This is a casual, tired, bourgeois type of art (it's a New Yorker illustration) rather than a sublime and frightening perspective on the enveloping nature of images. The fear seems to derive from images' lack of boundary and their enveloping nature: we create images, and in turn find ourselves reflected back in them.

Looking at metapictures requires a second-order discourse. The first order of an image is the "object-language," which I suppose is the manner of direct representation. Even images that self-reference (a frame within a frame, or a man painting a picture of a man painting a picture...) can be posed hierarchically by a first-order representation. This works (at least on paper) because images are necessarily finite. Extending this to a second order requires blurring the inside and the outside.

In mathematics, first-order logic can make generalizations about classes of things, but a second order of logic is required to make generalizations about functions and relationships. With this in mind, second-order thinking involves going beyond direct interpretation and considering external analogies and references.

Mitchell looks at multistable images, which can be interpreted in two ways. These have different thresholds, and they confuse the role of pictorial representation, but they do not formally confound representation in the sense that the New Yorker cartoon does. The second-order boundary is ambiguous, but not overtly flaunted. Images in this sense have a supposed mutability. That form of multistability is also observed in various literary forms ("The Lady, or the Tiger?").

The dilemma of multistable forms is an essential question (Wittgenstein found the rabbit-duck image extremely troubling). The question ultimately seems to reside in where we do the identification in our minds. Is it an internal model, an external model (with respect to vision), or is it determined by some sort of grammar?

Pictures generally have some sort of presence and role in life. They can seem to have lives of their own. Thus, metapictures, which throw the reference and metaphor into ambiguity, call into question the role and self-understanding of the observer.

This is understood by looking at Foucault's writing on Las Meninas (a classical painting by Velázquez that employs some degree of reflection and ambiguity). Foucault's attention to Las Meninas is similar to Wittgenstein's dwelling on the duck-rabbit: both complicate the images and make them all the less penetrable or comprehensible. They encourage us to think not of the images directly, but of how the images relate to their culture, our culture, and our thought.

Magritte's "La trahison des images" serves a similar role, but instead of exploring our relationship with images (or images with images), it explores the relationship of images and words. This, in turn, is an infinite relation.

The conclusion of the chapter offers some insights: "The metapicture is not a subgenre within the fine arts but a fundamental potentiality inherent in pictorial representations as such: it is the place where pictures reveal and 'know' themselves, where they reflect on the intersections of visuality, language, and similitude, where they engage in speculation and theorizing on their own nature and history."

Reading Info:
Author/Editor: Mitchell, W.J.T.
Title: Picture Theory
Type: book
Tags: dms, visual culture

Clement Greenberg: Avant-Garde and Kitsch

[Readings] (08.29.08, 4:44 pm)

Avant-Garde and Kitsch

Greenberg is writing to discern and define two relatively recent antipodes of art and culture. The avant-garde and kitsch are both products of modernism; both emerged concurrently, affecting each other through their opposition. Greenberg begins his essay by noting that the answer to their difference lies deeper than mere aesthetics.

avant-garde

The avant-garde is a product of Western bourgeois culture. Outside of this moment of culture, and the abundance produced by capitalist society, the avant-garde would not have been possible, and bohemia would never have been established. The avant-garde came into being by emigrating from the old model of aristocratic patronage to pursue art for its own sake (as opposed to for funding). The avant-garde never fully detached from bourgeois society, still depending on its money.

As an academic and intellectual movement, the avant-garde is compared with "Alexandrianism," a certain reluctance to interfere with or comment on culture. The essence of Alexandrianism is a devotion to the repetition of themes and old masters; a consequence is non-involvement in political matters. Where the avant-garde differs is in its devotion to pushing the boundaries of art. Instead of a reverence for the tradition and history of art, the avant-garde is about its future and potential. Content is irrelevant; only the form of art itself is of importance.

This devotion to art for its own sake is coupled with a desire for the recreation of the world. Greenberg notes, on artistic values: "And so he turns out to be imitating, not God–and here I use 'imitate' in its Aristotelian sense–but the disciplines and processes of art and literature themselves." The abstract or non-representational stems from this notion. The avant-garde artist is not imitating or re-creating the world, but imitating and re-creating the form of art itself.

This focus on form and values exposes a certain systematicity in the ideas of art. Where traditional representation evokes or creates a system of nature, the abstract evokes or creates the underlying ideas that ground representation in the first place: instead of art that imitates life, art that imitates art. In simulation and modeling, this line of construction is prefixed with "meta."

While the avant-garde rejects content and the bourgeois society from which it arose, it still depends on the wealth of the bourgeois to survive, since the only audience with the culture and education to appreciate the strange avant-garde perspective is the audience with the wealth to afford education and culture.

kitsch

Kitsch is hard to explain but easy to exemplify. Greenberg explains that "Kitsch is a product of the industrial revolution which urbanized the masses of Western Europe and America and established what is called universal literacy." While universal literacy certainly sounds nice enough, it has some ominous undertones. Greenberg explains that kitsch arose to fill a demand for culture coming from the proletariat of industrialized countries. The proletariat consisted of peasants from the country who were settling in cities, simultaneously losing their taste for country culture and developing a degree of leisure time that required fulfillment.

The culture that arose from this is composed of artifacts of the existing cultural tradition. It borrows and synthesizes devices and forms from culture and manufactures products which are easily understandable and palatable for the prole audience. Kitsch is the ultimate synthesis of culture and media, and is also the ultimate recycler and disposal: it will use bits of artifacts that “work” and re-use them until exhausted. As a result (in the sense of Walter Benjamin), it destroys culture and then profits immensely.

Greenberg also lists some properties of kitsch: it is profitable, it is mechanical, it operates by formulas, it is vicarious, it is sensational, it adopts varying styles, it is the epitome of all that is spurious, it pretends to demand nothing from its customers but money. While not a formal definition, this helps clarify what kitsch might be, but it does not exactly explain, except in contrarian terms, what kitsch is not: not-kitsch is not profitable, not popular, not spurious. All of these qualifiers are exceedingly vague and subjective.

Greenberg claims that artistic values are relative, but that among cultured society there is a general consensus over what is good art and what is bad. This may work qualitatively for the classics, whose values many agree on, but there is very little agreement on contemporary works. Greenberg continues, explaining that to acknowledge the relative nature of values, artistic values must be distinguished from other values; kitsch, though, is able to erase this distinction through its mechanical and industrial construction.

A key theme in this is the notion of value, and a relative situation of values. It is a sort of intellectual and educational background that defines and establishes these values for the educated audience, and lacking these, the proles miss the value inherent in abstract works. This education supplies a history and context, which is totally missing from the world of kitsch.

wrapping up

The avant-garde imitates the process (and system) of art, and kitsch imitates its effect. The history of art had not enabled the interconnection of the artist with form, because of the nature of patronage as it had supported artists since the Middle Ages (which seems a little puzzling). Later, following the Renaissance, a history of artists preoccupied and lonely in their attachment to their art begins to appear.

It is capitalism and authoritarianism that turn to kitsch as being tremendously effective at profiting from and placating the masses. Greenberg explains that socialism is the only movement that can support the avant-garde, or new culture.

critical notes

Greenberg’s primary concern seems to be that only the avant-garde is producing new cultural value, through the pushing of its limits. But, this attitude leaves something to be desired. Surely, cultural value must be seen as more than a scalar quantity?

There are many subtle assumptions underlying the criticism of kitsch: understood formally, as a synthetic product that seeks to make a profit, nearly all forms of art could be called kitsch. Ancient cultures were constantly referencing and alluding to the legitimacy of previous cultural products. Roman gods were borrowed from Greece and used to satisfy a cultural demand and a need for legitimacy, yet this borrowing is not really seen as kitschy. Even kitsch is disposed to find new ideas in itself from time to time.

Many contemporary works create something new, arguably have some artistic value, reference and synthesize, and some even have the misfortune of being popular. The qualifier of kitsch seems to occur only when the popularity and profit are absent. Clearly, there is a spectrum of gradations in a work's accessibility, but accessibility is not necessarily equivalent to artistic value. The danger of kitsch is to blur and erase this distinction, but that seems to afford kitsch much more authority than it ought to deserve.

Other contemporary works derive from forms that might be considered kitsch; while not avant-garde, they can embrace the values of abstraction and, having emerged from a popular medium, form bubbles of artistic experimentation and radical difference and creativity. For example, highly experimental and complex works emerge from within the popular medium of newspaper-style comics. These cannot be said to be kitsch in their emergence, but are rather wholly new products.

In this sense, it seems that the qualitative distinction between kitsch and avant-garde, while an effective border, is little more than an arbitrary line over superficial ideas of value and imitation. The image drawn here (in 1939) is evocative of an intellectual stagnancy, one that began in the industrial revolution, but contemporary culture is certainly changed and contains new value from when Greenberg was writing. That value certainly did not all stem from avant-garde artists, nor is all of that value purely capitalist, so it must have come from elsewhere. But where?

Reading Info:
Author/Editor: Greenberg, Clement
Title: Avant-Garde and Kitsch
Type: book
Tags: dms, postmodernism

Roland Barthes: The Death of the Author

[Readings] (08.29.08, 4:43 pm)

Notes

Barthes opens his essay by looking at a quote from Balzac’s Sarrasine, and digging into the methods of understanding the quote’s author. The quote is remarking on a castrato impersonating a woman, describing the fluid evocation of the idea of “Woman” given off by the impersonator. Barthes is trying to discern who is behind the quote, though, who is saying it. It could be the story’s hero, it could be Balzac the author, Balzac the philosopher, it could be universal wisdom or romantic psychology. Barthes explains that due to the nature of writing itself, it is impossible to know. Writing is a voice with an indeterminate speaker, whose identity and body is lost through writing.

The idea of the author is a construction deriving from rationalism and the Reformation, which were concerned with elevating and unearthing the individual in all things. Modern society is fascinated with connecting the author to their work, and with understanding the human behind the work, through the work, perhaps instead of the work itself. Criticism sees a work as the product of the author, or a product of the author's character traits.

Barthes looks into Mallarmé (a subject of great interest to Umberto Eco), and explains that Mallarmé's work was intended as a removal of the author so that pure language remained.

Other writers seek to expound on the relationship between themselves, their works, and their characters, blending them to some degree. The author's relationship to the work can be seen as somewhat incidental and residing in chance. Writers may challenge the position they stand in relative to their work. Surrealism pushes this further by playing with the system of language. This playing is supposed to expose the limits of authorship (or authorial intent, I suppose) and exhaust the idea of the person in writing, as opposed to the subject, the one who writes.

As an aside, many popular contemporary authors see their writing as very systematic. They do not control or master the writing from the top down; rather, they develop characters and let the characters act on their own. In this sense, the writing is a run of a simulation.

With this in mind, modern works may be approached with the knowledge of the author’s absence. If we “believe in” the author, it is as the parent of the book, the precursor to the work. In the modern text, the writer is born along with the text.

Barthes explains a perception of the text which is lacking the absolute message of the author (in an almost theological sense). The text is a “space of many dimensions”, it is a “tissue of citations”. Expression is merely translation from a dictionary. “Succeeding the Author, the writer no longer contains within himself passions, humors, sentiments, impressions, but that enormous dictionary, from which he derives a writing which can know no end or halt: life can only imitate the book, and the book itself is only a tissue of signs, a lost, infinitely remote imitation.”

In a post-author text, deciphering becomes impossible or useless. Imposing an author onto a text forces the text to adopt an ultimate signification, which destroys the writing. Modern writing instead can be distinguished and traversed.

Written language is inherently ambiguous, and when we remove the author, written language can be perfectly understood. Barthes mentions Greek tragedies, which use ambiguity and duplicity to convey meanings. It is the reader who is able to interpret, connect, and weave these together.

Barthes is not trying to criticize the meaning or unity of texts, but rather the idea that unity or meaning descend from an external author who precedes and begets the work. Rather, meaning and the unity of a work coalesce in the reader, who connects and strings together meanings from all places. The reader lacks history or psychology or identity in the sense that the author does. The reader’s meaning can be considered a liberation or popularization from the idea that meaning is from and for the author.

Thoughts

Authorship is interesting in a modern society, especially in terms of commercial products. In a culture where corporations are extended the rights and status granted to individuals, commercial products tend to stand with the company or corporation as their author. Some examples of this are computer software, pharmaceuticals, fast food, etcetera. Despite the fact that many individuals are responsible for their creation, and these creations have evolved and changed significantly over time, the products themselves are, even legally, authored by the corporation.

Authorship is important in simulation as well. If one subscribes to the belief that all creative expressions are systematic (that is, embedded with models of meaning), then these systems could be said to be authored. The systems are open works in the Umberto Eco sense: they are free to some interpretation, but still constrained by their original authorship.

Reading Info:
Author/Editor: Barthes, Roland
Title: The Death of the Author
Type: book
Tags: dms, postmodernism, narrative

Espen Aarseth: Narrativism and Genre Trouble

[Readings] (08.29.08, 4:42 pm)

Overview

Aarseth presents another critique of narrative theory as applied to games. He is challenging the idea that narrativism can be used to analyze anything, or more specifically, challenging the interpretation of anything as a text. Aarseth defines a game as consisting of three elements: rules, a semiotic system (the game world), and the resulting gameplay. The semiotic system is incidental and may be exchanged. Knowledge of the semiotic space of the game world, or of the skin that has been applied to it, is unnecessary for skill at the game itself. However, it may be necessary to better *appreciate* the game.

Aarseth’s claim about the relevance of semiotic systems to games is tricky, though. It may be possible to interchange skins on existing games, but there are further connections that are made between the world of the game and its rules and resulting gameplay. It wouldn’t make sense to place a skin on a game of chess that randomized the types of pieces. We have certain associations with the order of chess and the order of the skins that we apply to it. What we do when skinning something is drawing a metaphor. The mechanics are necessarily unchanged, but the associative meanings are different.
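
To make the separation concrete, here is a minimal sketch (my illustration, not Aarseth's; the pieces, skins, and function names are all hypothetical) in which the rule layer operates on abstract tokens while interchangeable skins supply the semiotic layer. Swapping skins changes the metaphor but never the legality of a move:

```python
# Hypothetical sketch of the rules/skin separation: the rule layer knows
# only abstract tokens; skins map those tokens onto meanings.

RULES = {"pawn": 1, "queen": 8}  # maximum movement range per abstract piece

SKINS = {
    "medieval": {"pawn": "peasant", "queen": "queen"},
    "sci-fi": {"pawn": "drone", "queen": "flagship"},
}

def legal_move(piece: str, distance: int) -> bool:
    """Rule check: depends only on the abstract token, never on the skin."""
    return distance <= RULES[piece]

def describe(piece: str, skin: str) -> str:
    """Semiotic layer: the same token renders differently under each skin."""
    return SKINS[skin][piece]

for skin in SKINS:
    # The description changes with the skin; the rule verdict does not.
    print(describe("queen", skin), "moving 8:", legal_move("queen", 8))
```

Randomizing which token a skin maps to, as in the chess example above, would break the associations the metaphor depends on, even though the rule layer would run unchanged.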

Aarseth also looks to demonstrate the disconnection between games and stories. It doesn't make a lot of sense to equate games with narratives, especially when exploring abstract games. However, an inescapable fact is that many contemporary (especially commercially successful) games are grounded in stories; this is what Jesper Juul might call the "fiction" of the game. Aarseth proceeds to wonder what the relationship is between games and stories: a dichotomy, a continuum, a rivalry? He notes that games may be translated among game forms (Rogue and Diablo, for instance), much as narratives may be translated between narrative forms. These are structural equivalences, though: game adaptations preserve rules, narrative adaptations preserve key events and relationships. Realistically, though, many successful narrative adaptations use much more creative approaches.

The key problem with adapting games to narratives and vice versa can be found in genre theory (John Cawelti): underlying form cannot be translated, but style and convention may be adapted with relative ease. Aarseth gives the specific example of the Star Wars films and the various games associated with them. A genre that does try to mix the two is the adventure game, which Aarseth derides as uninteresting, unsatisfying from a gameplay perspective, and limiting in terms of freedom.

Another domain, the simulation game, also employs story to a strong degree, but is flexible where the adventure game is not. Aarseth makes the significant claim that simulation is the core essence of games, and that games are the art of simulation. He extends this by claiming that simulation is much more satisfying and allows games to handle unusual situations that are not permitted in narratives. Adventure games have a conflict between player volition and the mechanics of the game. Aarseth claims that within simulation games the player is afforded opportunities to counter authorial intention, that the authors of simulations are essentially removed from the work, and that players have the last word.

Aarseth's stance here is ludicrous. Simulation authors are capable of imposing very severe restrictions on players, and the simulation itself may be biased in the very model that defines it. Civilization used (and still uses) a very expansionist, colonialist model of history, and it is not possible for the player to thwart that ideology in any way. The only road to success is to subscribe to the ideology and act in accordance with it. The most recent release of Civilization opens up the model, so that advanced users can write new ideologies into the rules, or rip out the existing ones, but these users are not the average player. It also bears noting that in the discussion of simulation, Aarseth does not mention The Sims (though he had mentioned it earlier). Not sure what to make of that, though…

The important thing to note about comparing games and narratives, though, is to follow Aarseth’s initial focus, of looking at translatability. If we explore how narratives have been translated, adapted, and (especially) extended, it might be possible to make a not-too-revolutionary claim that many successful adaptations break many of the rules of narrative structure. A good example is Jane Austen adaptations, or extending Aarseth’s examples, one could look at Star Wars novels, and the extended universe developed around the world defined by the films. The resulting products might be narratives, but relationships might be changed, settings might be changed, characters might be changed. What has been translated might not be the narrative at all, but rather the world, or the underlying value system of the story. In this sense, we can make the claim that narratives themselves are artifacts of systems. We may not be able to adapt the narrative directly, but the elements of the system may be procedural in nature.

Reading Info:
Author/Editor: Aarseth, Espen
Title: Narrativism and Genre Trouble
Type: book
Tags: dms, ludology, simulation, games

Stephen Wolfram: A New Kind of Science

[Readings] (08.29.08, 4:41 pm)

Overview

Wolfram's book, A New Kind of Science, is chiefly concerned with the implications of his deep study of cellular automata, originally triggered by his MacArthur grant. The main finding of this research seems to be that simple rules can lead to computational complexity and very interesting results.

The notion that simple rules can lead to powerful results is not new; science and mathematics have long striven to find simple and elegant ways of describing the laws and theories governing nature. Wolfram's pursuit, though, is toward computation, relating the simplicity of cellular automata to emergent natural phenomena. Wolfram aims for CAs to lead to a new type of science and mathematics that uses their (simulative?) power to make useful claims about nature.

The visual appeal of the automata is certainly compelling, but there is an equally disconcerting lack of mathematical reasoning backing up his arguments. Unfortunately, he also does not indicate what mathematical justification might look like, instead choosing to demonstrate problems through visual analogy.

Wolfram's use of CAs is also an exemplary demonstration of Baudrillard's simulation, in that when viewed through the lens of cellular automata, everything seems to become one. CAs become the universal tool which may be used to represent and recreate everything.

Notes

The Notion of Computation

The idea of computational universality becomes something of great significance here. A function is universal if it can compute anything that is computable; the Turing machine is the fundamental example. A consequence of universality is that universal functions may simulate one another. One outcome of Wolfram's research has been the finding that certain classes of CAs are universal and may be used as computing machines.
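
As a concrete illustration, here is a minimal sketch (mine, not Wolfram's code) of an elementary cellular automaton. Rule 110, used here, is the one-dimensional rule that has been proven computationally universal:

```python
# Minimal elementary cellular automaton: each cell's next state is a
# function of its three-cell neighborhood. The rule number's bits encode
# the truth table: the neighborhood (left, center, right), read as a
# 3-bit integer, selects a bit of the rule number.

def step(cells, rule):
    """Advance one generation; the lattice wraps around at the edges."""
    n = len(cells)
    return [
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 64
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(32):
    print("".join("#" if c else "." for c in row))
    row = step(row, rule=110)
```

Despite the few-line update rule, Rule 110 produces the interacting particle-like structures that Matthew Cook's universality proof exploits, which is exactly the simple-rules-to-complexity point Wolfram is making.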

Wolfram has additionally identified a number of CAs which are reversible: their inputs may be determined uniquely from their outputs. Computationally, this represents an interesting class of functions, but it also touches on issues of information and disorder that are important in signal systems and in thermodynamics.
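
Reversibility can be probed by brute force on small periodic lattices: a rule whose global map fails to be injective there cannot be reversible. A sketch (again my illustration, not Wolfram's method; for elementary CAs this filter should leave only the trivial shift, complement, and identity rules):

```python
from itertools import product

def step(cells, rule):
    """One generation of an elementary CA on a cyclic lattice."""
    n = len(cells)
    return tuple(
        (rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def injective_on(rule, n):
    """Necessary condition for reversibility: distinct size-n
    configurations must map to distinct successors."""
    images = {step(c, rule) for c in product((0, 1), repeat=n)}
    return len(images) == 2 ** n

candidates = [r for r in range(256) if all(injective_on(r, n) for n in range(4, 9))]
# Expected survivors: the shift/complement/identity rules,
# i.e. 15, 51, 85, 170, 204, 240.
print(candidates)
```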

The Principle of Computational Equivalence

Wolfram’s thesis is essentially this: “all processes, whether they are produced by human effort, or occur spontaneously in nature, can be viewed as computations.”

Wolfram extends this idea to the point of declaring it a new law of nature: “The Principle of Computational Equivalence introduces a new law of nature to the effect that no system can ever carry out explicit computations that are more sophisticated than those carried out by systems like cellular automata and Turing machines.” And thus, when viewing nature as a system of computation, this principle is naturally very relevant.

An issue with representing things as computations is that it disregards the fact that not everything requires brute computation; certain things may be proven rather than computed. This distinction is tricky and important. Some facts may be proven only with difficulty, and others may be much more easily computed than proven. However, there is generally a difference between what is computed and what is proven, and the advantage of the latter is eliminating the need for the former. Wolfram's argument hinges on the notion of raw computation, which may pale in the face of abstract proof. One may set out to compute that there are infinitely many primes congruent to 3 mod 4, which is an exercise that never terminates, or one may instead prove it in a short, finite number of steps.
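
For instance, the classical Euclid-style argument settles in a few lines what enumeration never could; a sketch:

```latex
\textbf{Claim.} There are infinitely many primes $p \equiv 3 \pmod 4$.

\textbf{Proof sketch.} Suppose $p_1, \dots, p_k$ were all of them, and let
\[
  N = 4\,p_1 p_2 \cdots p_k - 1 .
\]
Then $N \equiv 3 \pmod 4$. Since a product of primes all congruent to
$1 \pmod 4$ is itself $\equiv 1 \pmod 4$, $N$ must have some prime factor
$q \equiv 3 \pmod 4$. But no $p_i$ divides $N$ (each divides $N + 1$), so
$q$ is a prime congruent to $3 \pmod 4$ missing from the list, a
contradiction.
```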

This point is important, but flawed. Later on, Wolfram examines the rules and theorems of mathematics, and uses their countable, enumerable nature to represent them as computable. In this view, theorem proving is a process of computation rather than some mysterious or magical intellectual exercise. This fact has been used in the past, notably by Kurt Gödel to prove the incompleteness of mathematics. Proofs are thus indeed finite and computable, but that is still not a good way of approaching them.

Computation is still computation, and must obey the law that computations may not be "outdone": it is not possible to out-compute something in the same number of logical steps. On the other hand, proof and ideal computation differ from raw computation in that they might be more efficient and save time (or computational steps). The way proofs are made and solved in practice is not by computation, but by "intuition" and experience. These may seem magical in the abstract, but actually echo the ideas of pattern matching: instead of applying rules by brute force, pattern matching relies on analogy, recognition, and the application of known rules to new inputs. This approach is still computable, but not easily by CAs.

Reading Info:
Author/Editor: Wolfram, Stephen
Title: A New Kind of Science
Type: book
Tags: dms, simulation, emergence