icosilune

Category: ‘Research’

John Wiltshire: Recreating Jane Austen

[Readings] (08.29.08, 4:23 pm)

Overview

Wiltshire examines adaptation and related work as fitting under the general umbrella of “recreation”. He focuses on Jane Austen explicitly, but also aims to generalize toward a broader theory of adaptation.

Notes

Introduction

Wiltshire mentions early on the referencing of Pride and Prejudice in Bridget Jones’s Diary. The connection can be considered both referential and adaptive. The comparison is more of a transcoding, finding different ways to meet similar ends. One interesting challenge raised by this is that an adaptation may attempt to transcode different components of the source story. In Bridget Jones, the plot is structurally the same as in Pride and Prejudice, but the characters and social context are different. Other extensions (e.g., Mr. Darcy’s Daughters) continue the characters, but are necessarily different in plot.

There are a number of processes described here: remaking, rewriting, adaptation, reworking, appropriation. All of these are loose terms for the same category. This is the general process of “recreation” that Wiltshire is trying to derive.

There is some discussion on the cultural capital and artistic value and worth of artifacts. Adaptation may be seen as trying to “take” value, or alternately, attempt to “extend” value. Contemporary works live in the shadow of capitalism and cannot be created without at least some awareness of “marketability” (referenced from W.J.T. Mitchell). Wiltshire critiques the idea of “auteur theory” and poses the remarkable idea that adaptors (specifically scriptwriters and filmmakers) should be considered readers, because they are making interpretive choices. If we push that argument a little bit more, we can go so far as to say that any author is partly a reader of systems of their own design. Referenced here is a line from Helen Fielding’s Bridget Jones’s Diary, in which Natasha is complaining of the “arrogance with which a new generation imagines that it can somehow create the world afresh.” Mark Darcy’s reply is “But that’s exactly what they do, do.” This connects to Jenkins again. “One might add that indeed each generation produces its own works of art, but not entirely out of its own materials.” (p. 5) Wiltshire connects to Donald Winnicott, who is referenced persistently later.

Some notes on Jane Austen’s image in culture: conservative, stuffy, anti-contemporary. In context, she is progressive, transgressive, challenging, etcetera.

Imagining Jane Austen’s Life

On the psychology of the biographical impulse: from it comes the duality of intimacy and remoteness, of construction and terrifying unknowability. Part of this is to construct Austen’s life in the romantic terms of her novels, a drive to make her inner life familiar and knowable in the terms we have to remember her by. It also underlies the competing forces of nostalgia and progressiveness in reconstruction. These forces are opposed: nostalgia is a desire to idealize and evoke familiarity, and consequently reflects the stuffiness that tends to appear in portrayals of Austen. Austen’s progressiveness is unusual because it both separates her from this idealization and casts a shadow of separation between herself and her works.

Jane Austen in Manhattan, Metropolitan, Clueless

On some of the dogma of fidelity: Extends to differentiation of subject and object in the sense of developmental psychology (Winnicott reference). To see an “unfaithful” adaptation is to see an objectified text, which has some degree of reverence and authority. Unfaithfulness implies that the text is being changed or taken advantage of by the filmmaker, abducting it from its state of purity and compromising it. The issue of faithfulness denies the subjectivity of the text itself, not just in terms of it being interpreted subjectively, but rather its capacity to cause meaning independently. The broader scope of adaptation sees more of “borrowing” or “influence” or “persuasion” by the original material, rather than something that sounds like wedlock.

In the adaptation of Emma in Clueless: The influence is not that of a mother text, but rather an inner presence. Instead of being idealized, the text of Emma is loved, destroyed, and remade (or reborn). Clueless not only adapts, but also parodies and recontextualizes. Not bound so intimately to the original, it is free to stand as an independent work. Other adaptations have an undercurrent of anxiety, a certain nervousness in their relationship to the original. Blind faithfulness requires a reverence for the past and an unwillingness to embrace newness. With Clueless, the process of adaptation is not carrying cultural capital, but instead the essence of art. (p. 56-57)

From drama, to novel, to film

Looking at how Austen borrows conventions from drama: emotions are reflected on the surface of the characters. Austen is notable for the inner life of her characters, but the conventional way of exposing this is to represent it externally, through soliloquy or indirect speech. Working around the lack of this is deeply problematic in adaptations: many forms of media are not able to carry the same degree of inner life, especially not the effect of indirect speech. Adaptations may fall back on stage conventions, making expression direct (for example, through repetition or melodrama), which lacks the novel’s subtlety. Notably, simulation sounds very promising in this respect. If emotional state is to be expressed, there might be multiple ways of doing so; Sims-like games in particular might represent thoughts visually with bubbles, following comic conventions.
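
As a minimal illustration of that last point, here is a hypothetical sketch (in Python) of how a Sims-like simulation might surface an internal emotional state as a comic-style thought bubble; the emotion names, icons, and threshold are my own invention, not anything from Wiltshire:

    # Hypothetical sketch: pick a comic-convention "thought bubble" icon for a
    # character from its strongest internal emotion. The emotion names, icon
    # table, and visibility threshold are all invented for illustration.

    EMOTION_ICONS = {"affection": "heart", "anger": "storm cloud", "anxiety": "scribble"}

    def thought_bubble(emotions: dict[str, float], threshold: float = 0.5) -> str | None:
        """Return the icon for the dominant emotion, or None if nothing is strong enough."""
        name, strength = max(emotions.items(), key=lambda kv: kv[1])
        return EMOTION_ICONS.get(name) if strength >= threshold else None

    print(thought_bubble({"affection": 0.7, "anger": 0.1, "anxiety": 0.3}))  # -> heart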

Wiltshire gives an example of a filmic adaptation of Persuasion, which uses imagery and dramatic cuts to establish a complex depth through referential and metaphorical analogy. Instead of using verbal language or melodrama to convey meaning, it resorts to filmic and visual language. This is still a form of communication, and in context it carries both the same meaning, and the same tone, in that both are suitably subtle.

Pride and Prejudice, Love and Recognition

Wiltshire gives a nice overview of the philosophy and ideology of Pride and Prejudice specifically. The “great subjects” of the book are “class, love, money and marriage”, while it is “principally about sex, and it’s about money: those are the driving motives of the plot.” (as described by Lilian Robinson, who produced the BBC television adaptation) However, beyond this is a deeper, epistemological issue, which is visible in the book’s title. The issue is judging and re-judging, recognition and re-cognition, “that act by which the mind can look again at a thing and if necessary make revision and amendments until it sees the thing as it really is” (as described by Tony Tanner). The deeper issue here is one of knowing another. Interestingly, the idea of impressions and knowing relates back to Goffman, who describes this interaction in slightly more mechanical terms.

Another issue at stake is the idea of “projection” as developed by David Hume. On one extreme are characters such as Mr. Collins, Lydia, and Mrs. Bennet, who project their ideals and desires onto anyone, versus Darcy, for whom no one is worthy of the projection of his ideals. The issue here is that with projection, the subject of the projection does not matter. For Mr. Collins, the identity and substance of the girl he wishes to marry do not matter, as long as she suits his plans. The girl has no existence for him; she is merely a frame around whom he may construct his desires.

Described here is what it means for characters to be identifiable as individuals, rather than caricatures. Many games have this problem: because they exist as rule-based systems, agents (including the player) necessarily take on the roles of caricatures. This problem descends from a failure to find an inner life and psychology, as opposed to partial repetitions, gestures, tropes, etc. This idea relates again back to Goffman in terms of establishing identity and selfhood. “Such people (politicians, celebrities, for example, but also acquaintances) occupy a space in the inner theatre that is like that of a caricature, for in the economy of our psychological lives we cannot spare the energy to lend them an inner being. Instead they serve as objects: objects onto which we may project, or into which we may invest, atavistic propensities of our own. We may think of them as wholly bad, or as buffoons, or admire them as heroes and heroines. We make do, in other words, with partial and stereotyped notions of others.” (p. 103) This idea is especially strong in Pride and Prejudice, as the protagonists are faced with the realization of their false projections of their own impressions onto each other. It is this objectification (described in terms of Winnicott’s psychoanalysis, but it may also echo that of George Herbert Mead) that enables individuals to interact with their environments, by treating objects as partial reflections of one’s own ideas (and self).

Pulling in more philosophy, Wiltshire references Hegel. The phenomenon of projection may be seen as extending Hegel’s master-slave relationship. In interacting with others, our relationships are dominations, where we make psychological use/abuse of the other. The example described here is Darcy’s first proposal scene, where, in the “bonds of love”, he is oblivious to the person whom he is addressing. The idea of this use and objectification hearkens to the postmodern vision of others as unknowable and totally alien. However, Pride and Prejudice does seem to conclude with the notion that understanding and acceptance are ultimately possible.

In the early scene in the book where Elizabeth and Caroline walk around the room, Darcy admits to his flaw of a resentful temper. A more careful analysis is taking place here, but ultimately the exchange is difficult to wholly understand. As readers, we are given facial expressions, but do not see Darcy’s inner character in his responses in this scene, leading us to make our own interpretations. While the effects of language are procedural, they are also deep and subtle; however, we are faced with the same dilemmas as when faced with any black-box system. The reader is alone in comprehending Darcy, and as such, to us, Darcy is an interpretation and construction, much as he is to Elizabeth.

On the gradual development and recognition of emotion: Austen represents “falling in love” as an explicit sequence of thoughts and emotions, which, while slow and subtle, denotes a clear development of emotion. The process is vague and obscure, but at the same time conscious and rational. If we were to use this as grounds for representing love as a concept in a game or simulation, this sort of pacing and peculiar clarity may be very useful.
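
If one wanted to mirror that pacing in a simulation, the gradual but legible progression could be modeled as small increments across named stages. A purely hypothetical sketch, with stage names, events, and weights invented for illustration:

    # Hypothetical sketch: regard accumulates in small increments from discrete
    # events, and crossing named thresholds mirrors the explicit sequence of
    # recognitions in the novel. Stage names, events, and weights are invented.

    STAGES = [(0.0, "indifference"), (0.3, "esteem"), (0.6, "gratitude"), (0.9, "love")]
    EVENT_WEIGHTS = {"witty exchange": 0.05, "act of kindness": 0.15, "revealed misjudgment": 0.25}

    def stage(regard: float) -> str:
        """Name the highest stage whose threshold the current regard has passed."""
        current = STAGES[0][1]
        for threshold, name in STAGES:
            if regard >= threshold:
                current = name
        return current

    regard = 0.0
    for event in ["witty exchange", "act of kindness", "witty exchange",
                  "revealed misjudgment", "act of kindness"]:
        regard = min(1.0, regard + EVENT_WEIGHTS[event])
        print(f"{event:21s} -> regard {regard:.2f} ({stage(regard)})")

The point of the sketch is only the shape of the curve: the state advances visibly but never jumps straight to the final stage, which is the peculiar clarity the note describes.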

The love that does develop takes the form of a shared reality, rather than a jumble of projections. At some point, the matter of projection breaks down and in its place is a simultaneous reconstruction of something new and different. The change also takes place in the characters of the protagonists themselves, not merely in their perception of each other.

A final change and difference between the BBC adaptation and the novel is a more modern issue of responsibility. In the novel, a revealing point occurs when Elizabeth visits Pemberley and sees the portrait of Darcy, with the smile that she failed to recognize before; the revelation is in the discovery of knowledge that was previously missed. In the television series, however, the same portrait is somber, but intercut with the scene is Darcy riding towards Pemberley and taking a spontaneous swim in a lake, seemingly seeking escape from the pressures of responsibility, as well as a release of pent-up emotion. These scenes are very divergent, and represent a shift in perspective, as well as probably a modern viewpoint (the trope of escape from responsibility). The change also reflects Darcy in a more central role as an agent in the story (as opposed to the emphasis being on Elizabeth’s recognition). But at the same time, it attempts to convey the same recognition to the viewer, who can identify and empathize with the desire for freedom.

Reading Info:
Author/Editor: Wiltshire, John
Title: Recreating Jane Austen
Type: book
Context:
Tags: specials, media traditions, narrative, fiction, adaptation
Lookup: Google Scholar, Google Books, Amazon

Philip Auslander: Liveness

[Readings] (08.29.08, 4:21 pm)

Notes

Chapter 1: introduction

Auslander opens by comparing theatre and media, referencing Herbert Blau and Karl Marx. Theatre and media are rivals, and, much like industrial production, media (specifically television) has filled and saturated the cultural economy. Television has essentially formed its own culture, its own environment in and of itself. Television is “the” cultural context, not just one of many. Arguably, the same could be said today of the internet.

On live performance, Auslander is trying to challenge the ideas of the traditional value of liveness. Performers in theatre cling to “the magic of live theatre”. These ideas attempt to place a binary opposition between live performance and the media. Generally, this style of thought puts liveness as the defined opposite of the recorded. “In other words, the common assumption is that the live event is ‘real’, and that mediatized events are secondary and somehow artificial reproductions of the real.” (p. 3)

The term “mediatized” derives from Baudrillard, and is concerned with the idea of “mass media” and, in Baudrillard’s definition, a system of bringing all discourses under a single code (sort of a universal formulation of everything as equivalent under a particular semiotic system as a substrate, much like poured concrete). Mediatization can be applied to live performances, by which they become mediatized performances, for example, a play or event broadcast on TV.

Ultimately, Auslander emphasizes that there are no ontological differences between live performance and media. Live performances are just as capable of incorporating media elements as any other form.

Chapter 2: Live Performance in a Mediatized Culture

Initially, media events were modeled on live ones. Now that media is culturally dominant, live events are modeled on mediatized ones. Auslander treats this change as one dependent on the historical situation, rather than as dependent on intrinsic properties. Note: This could be just an instance of remediation, where disciplines borrow, reference, and support each other. That approach treats the relationship as non-antagonistic, though.

When it emerged, television strove to emulate theatre rather than film. Initially it had the capacity to “go live”, which emphasized its quality of liveness and immediacy, even though that is not generally how it is used today.

Television’s essential properties are immediacy and intimacy. It can look on events exactly when and as they happen. Film, by contrast, is characterized by memory, repetition, and temporal displacement. Television is intimate in the sense of bringing the external to the home, without needing to travel to it. TV was seen as a cultural sanitizer, bringing only appropriate, legitimate content and values into the home. Similar properties can be ascribed to digital media, but in that case it uses the hypermediate in addition to the immediate, and the illusion of the sanitized vanished much more quickly.

Live performance is now heavily influenced by (and tends to emulate) the rhetoric and practices of mediatization, with screens becoming prominent in many situations and venues.

Mediatization is reflected in the production of performance by the apparatus of reproduction. Auslander references Jacques Attali on representation as a method compared to repetition. Representation developed initially with capitalism but was gradually replaced by repetition as a result of mass production. This seems to echo Greenberg’s “Avant-Garde and Kitsch”.

Referencing Benjamin: Masses have desire for proximity, and at the same time, have desire for reproduced objects. Benjamin’s claim that reproduction devalues the original can be seen as evidenced by the decay of the value of “real” liveness and intimacy, as eroded and replaced by the emulated synthetic on the screen.

Auslander concludes this section with the claim that the system of the virtual has incorporated liveness into its substance. This could be read as saying that live elements may be understood as tools and media for a larger system of meaning-making. I don’t think that is what Auslander is getting at, but it seems like a productive line of enquiry.

On simulation and live performance: Ronald McDonald performing in restaurants (p. 49-50). Performances occur in numerous locations, and are all live and separate, but are designed to evoke one single character, which is the template that generates each performance. “All performances of Ronald McDonald are generated from a single interpretation of the character, which functions as a template. I have chosen this example in part to make the point that a template is not the same as a script: improvisational performances, too, can be generated from a template.” (p. 50) Here, live performance aspires to the conditions of mass art.

A condition of this, related to Benjamin’s aura, is the illusion of authenticity. No occurrence of mass or scripted art can be considered authentic because of its reproducibility. Performances that derive from templates instead reference an ideal template, and attempt to borrow its aura or authority.

Using Baudrillard’s definition of the real as “that which it is possible to give an equivalent reproduction”, the live must be defined as “that which can be recorded”.

Loose notes:

Television has become its own culture.

live performances are “more real”
but mediatized are more artificial

live performance can function as mass media?
or, mass media better enables narrative adaptation??

book is controversial because of challenging liveness of theatre?

new media for Auslander is TV

Recent web technologies,
web 2.0, instant messaging, mobile computing
aid confusion of liveness and mediation

liveness and canon?
variation of performance is acceptable in theatre
but in TV and others, liveness leads to a conflict in the establishment of canon.
some particular instance must be elevated to some degree of authority
this derives from the idea of having the perfect performance

for example, music videos which replicate live performance
but use studio track
so the idea of liveness and the reality of it is very
confused and challenged

some performances have a celebration of the virtual
for example, Gorillaz, which hides the liveness of performers
television yields a combination of immediacy and intimacy
filming live television is a SIMULATION not a REPLICATION

Reading Info:
Author/Editor: Auslander, Philip
Title: Liveness
Type: book
Context:
Tags: dms, performance, media theory
Lookup: Google Scholar, Google Books, Amazon

Bolter and Grusin: Remediation

[Readings] (08.29.08, 4:19 pm)

Overview

The introduction begins with a discussion of the sci-fi film “Strange Days”, which partly revolves around a new technology called “The Wire”, a sensory recording/playback device.

Like digital media today, it threatens to render other forms of media obsolete, but at the same time it is bound by restrictions and constraints similar to those of film and other media.

Remediation is about how media strive to achieve immediacy in spite of their mediation. Newer media attempt to do exactly what their predecessors have done, billing themselves as improved versions of other media. The book is an attempt to challenge the idea that media exist in isolation.

Transparency is an effort to make the presence of the medium disappear. The rhetoric of transparency is introduced through the discovery and use of perspective in the Renaissance. Perspective is a technological means of controlling space from a single location, and is also a technology in the sense of its mathematical formulation. As a technology, though, it is necessarily about representation and reconstruction of the real world and the human eye. This is an example of immediacy, but one that is dependent on the subject of the immediate.
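
One standard way to write that mathematical formulation (a textbook linear-perspective projection, not a formula Bolter and Grusin themselves give): a scene point $(X, Y, Z)$, viewed from an eye at the origin through a picture plane at distance $d$, is drawn at

\[ x = \frac{d\,X}{Z}, \qquad y = \frac{d\,Y}{Z}, \]

so the whole visible space is organized from a single viewpoint, which is the sense in which perspective “controls space from a single location”.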

If accurate reproduction is a manifestation of immediacy, then that would imply that Umberto Eco’s openness, and by extension, much of the avant garde, is “latent”.

The automation of computer graphics follows in the same trends as the automation of photography. And in doing so it gauges its accuracy via comparison to photographs. However, the aesthetic that is being pursued is the automated nature of reproduction: the effacement of the programmer, the removal or hiding of the subjective influence in the technology itself.

The introduction of interactivity complicates the issue of immediacy and transparency. Interactivity requires certain elements (which may be transparent in the sense that they might be evocative of other forms, e.g., the desktop, the paintbox, etc.), but to provide immediacy they must be manifest and visible, counter to transparency.

Hypermediation is a situation that involves a coming together of many kinds of media, and often highlights the mediated nature of its construction. This trend can be found in medieval illumination, baroque cabinets, and of course many contemporary forms, of which new media is a prime example.

adaptation and hypermediacy: adaptations do not draw attention to their nature as mediating the original. This is also called “repurposing”. Bolter notes that McLuhan once said that the content of any medium is always another medium. The representation of one medium in another is remediation. Does that mean that adaptation is a subset of this?

The remediation of an encyclopedia in digital form bills itself as, not an encyclopedia, but an improved encyclopedia. The transition, involving hyperlinking and providing digital affordances to the traditional form, draws attention to the electronic medium. Thus such adaptations are translucent, not transparent.

For a game adaptation of a narrative form, the discussion of transparency and mediation becomes an interesting point of comparison. A goal is not to be transparent, but to evoke the original medium in a way that allows it to be read in a new and open manner.

Remediation as a figurative representation, rather than conceptual. What tends to get “taken out” (of one medium and then put into another) is not content, but representational practices. For example: film techniques as adopted by games.

So, the issue is representational practices.
so, for abstract things, like fractal/generative art
these have not really had a foundation in practice, so they forged their own style,
which, admittedly, looks pretty awful, but has informed graphical representation for a long time.

Attempting to challenge the idea that the computer is totally independent of other practices and disciplines. Convergence is serving to undermine that idea, continuing to challenge the notion of its utter newness.

“A medium is only dead when it’s not being remediated anymore.”

Reading Info:
Author/Editor: Bolter, Jay and Grusin, Richard
Title: Remediation: Understanding New Media
Type: book
Context:
Tags: dms, media theory
Lookup: Google Scholar, Google Books, Amazon

Jon McKenzie: Perform or Else!

[Readings] (08.29.08, 4:18 pm)

Notes

Introduction

Performance is seen as something applied to business and industry, and to workers, as well as to art and culture. Everything can be seen as performance.

Specifically, McKenzie begins by looking at a cover of Forbes magazine whose caption is used for the title of this book. It refers to a set of firings carried out by corporate boards of directors against chief executives. The phenomenon is not limited to executives, though: performance reviews crop up in all lines of business, at all levels, with vague, ominous threats of “–or else” if performance is unsatisfactory. “Thus, the Forbes challenge and its hold upon throats around the world: Perform–or else: be fired, redeployed, institutionally marginalized.”

McKenzie proceeds to make his comparison more subtle and complicated: the threat of retribution for performance also echoes images of Vaudeville, popular theatrical performance, and cultural performance. Cultural performance strikes a vein with performance art and other richly controversial topics, such as demonstrations, drag, etcetera. The goal of performance in these cases is a certain liminality and subversion. The threat for failure in this case is social normalization.

Alongside these is a subtle and often ignored dimension: technological performance. Technological performance is ascribed generally to electronic technologies, as well as to consumer products. Failure in this case means becoming obsolete, being defunded, and being discarded.

Related to this is the understanding of knowledge and education. Postmodernism tends toward a significant rethinking of the role of knowledge.

One of the main points seems to be that performance is an emergent phenomenon in a system of power and knowledge. McKenzie makes a significant claim that performance will be seen as defining the current era, the 20th and 21st centuries, much like discipline defined the previous two centuries.

A term is introduced, “the lecture machine”, whose icon is the lectern, which seems to denote the system of performance where one is empowered to know and to speak, and separates the speaker from the audience. Lecture machines are systems that enable performances which separate knower from those who do not know. The metaphor can be extended to other boundaries, such as the television and the computer screen.

Chapter 3: Technological performance

McKenzie opens this chapter by looking at some very technical perspectives on performance. Technical articles don’t generally need to define performance as they are embedded within the discipline. McKenzie tries to relate performance of these varying engineering sciences together, and notes that there isn’t anything that seems to coherently tie them together. Specifically, there is a “lack of an explicit and general definition of technological performance.”

Turning again to a scientific paper, McKenzie finds that technological performance is “effectiveness in a given task”. At this point, performance of the technological variety does not seem too far from that of the business or cultural variety. The task for business is profit, and the task for culture is a certain cultural efficacy.

McKenzie goes on to explain that the idea of effectiveness at a task is highly context dependent and contingent on external values imposed on the system. He turns to another definition, which poses performance as a “function of effectiveness, reliability, and cost”.

Technical performance might be defined as the rate of change of effectiveness with respect to cost as opposed to just effectiveness.
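
Rendered symbolically, purely as my own shorthand for these two verbal definitions (the symbols are not McKenzie’s; E is effectiveness, R reliability, C cost):

\[ P = f(E, R, C) \qquad \text{versus} \qquad P \approx \frac{\Delta E}{\Delta C}, \]

that is, performance as a joint function of effectiveness, reliability, and cost, versus performance as the gain in effectiveness per unit of additional cost.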

McKenzie looks at the social aspect of performance, which is about projects. … “projected technologies are more social than technological, more fantastic than objective”. These projects occupy a curious pre-performance state, in that they live in an imaginary dimension before they are built and realized.

Projects become relevant and developed through the effect of social influences. They are carried through by various stakeholders, and the process of development involves the project being born from an abstract world of concepts and ideas into a concrete world deprived of interpretation and ambiguity.

Referenced here is Donald MacKenzie’s work “Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance”. This text looks at the idea of accuracy as being affected by social context, as well as affecting social context.

The military-industrial-academic complex is referenced again. Once more, the Cold War influenced and spurred academic growth and the development of science and technology. This paranoia both comes from culture and comes to affect culture in return.

McKenzie comes to reference Laurel and her understanding of computers as theatre– that designing computer interfaces is really the art of designing experience. (Experience which is created by performance of the software.) McKenzie raises the idea of extending other cultural performance models to apply to HCI (instead of just Aristotelian poetics, as Laurel uses).

Also cited is Robert Crease, who has studied experimentation in science as a sort of performance, where the laboratory is a special stage for the enactment of material and learning of special knowledge. Science teeters between presentation (of experiments) and representation (where theory is applied to the world, or interpreted from data?).

Ultimately the point here seems to be that ideas of technological performance (as effectiveness) are still rooted in models of cultural performance, especially as defined by stakeholders in the evaluation of the technology.

Finally, the tripartite collective of performance (studies, management, and technology) is united under the category of its emergence in Cold War America, and is collectively symbolized by feedback loops and the missile.

A question: What is McKenzie trying to do? And what are we supposed to get out of this?

Reading Info:
Author/Editor: McKenzie, Jon
Title: Perform Or Else
Type: book
Context:
Tags: dms, performance, media theory
Lookup: Google Scholar, Google Books, Amazon

Friedrich Kittler: There Is No Software

[Readings] (08.29.08, 4:17 pm)

Overview

In this essay Kittler frames the argument that software ultimately serves to conceal what is important in a computer system. The clearest manifestation of this seems to be the manner in which software restrains and restricts the capacities of the user. Programmability is seen as a force that enables this concealment, and is, instead of an advantage, considered an indictment. While software is limiting and restrictive, so too is hardware, so it is difficult to tell exactly where and what the problem is.

I would say that ultimately the restrictions of software are the same as the restrictions of using any system of abstracting models.

Prevalent through this essay is a theme of disgust over the notion of design, and a desire to appeal to bare mathematical concepts. Unfortunately, Kittler wildly misuses his reference to fractal theory, and his treatment of Turing’s computability too seems dubious.

A useful summary is located on mediamatic.net.

Notes

Kittler opens by noting the pervasiveness of computers, and that writing is increasingly (he says the bulk of writing is) stored in computer memory, where it is no longer perceivable by humans. In fact, we do not even write anymore; instead we use tools that are able to write by themselves.

Kittler is concerned with this technology driven evolution, and the technology that has enabled it. He specifically looks at the relations between Turing machines and microprocessors. The Turing machine can imitate any other Turing machine and compute any computable function. This fact means that computation is independent of hardware and that nature itself may be considered a Turing machine.
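
As a concrete reminder of what that universality amounts to, here is a minimal Turing machine simulator in Python; the binary-increment machine is my own toy example, not one Kittler discusses. The same few lines of interpreter will run any machine supplied as a transition table, which is the hardware-independence at stake:

    def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
        """Run a Turing machine given as {(state, symbol): (new_state, write, move)}."""
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Binary increment: scan to the rightmost digit, then propagate the carry leftward.
    increment = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt", "1", "L"),
        ("carry", "_"): ("halt", "1", "L"),
    }

    print(run_tm(increment, "1011"))  # -> 1100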

This is relevant from a perspective of languages, and Kittler suggests that programming languages have transcended ordinary language and have formed something of a tower of Babel founded on computational equivalence. He uses an analogy referencing fractals and the idea of self-similarity, but that does not seem to bear much resemblance to the idea in a real mathematical sense. What he is describing is a similarity among models and languages. Beyond this, the question arises of what language *does* when it has reached this universal state.

What follows is a close look at the process of executing WordPerfect on DOS. The language issues that seem to be problematic are the syntactic qualities stemming from DOS as a platform: the “exe” and “com” extensions, as well as the pervasiveness of acronyms. One of the problematic points is where the OS ends and the program begins. This relates again to the BIOS of the computer, which is another layer operating underneath even the operating system. In turn, underneath these too is additional hardware, in which information is only represented as differences in voltage. In turn, Kittler argues, the formalization of these is mathematical theory, which is composed of sentences with words and letters.

From this sequence of observations, we are to find that, really, there is no software. It is difficult to tell how this argument is supposed to coalesce. If we are to apply this sort of reduction to everything (our bodies are cells, governed by biology, then by chemistry, then by physics), then we wind up with reductions that are of very limited use.

The current trend in design is the concealment of the technicality of the underlying machine elements: the BIOS conceals the hardware, the OS conceals the BIOS, and software conceals the OS. This concealment is “completed” by GUI design and hardware-level security. HCI has taught us that GUI design certainly does not always conceal the machine nature of the computer; in fact it reveals it in different ways. The fallacy of this is best exposed when programs catastrophically fail and reveal their inner workings in a most jarring manner (the Windows blue screens are most notorious in this regard).

Kittler sees the effect of the prevalence of software as pushing this trend of concealment. He invokes the idea of software as similar to one-way cryptographic functions, which as a result cannot be easily reproduced or computed. Again, history has shown that modern programming languages, instead of being progressively harder to decompile, are much easier than in the past (especially languages such as Java or C#). Similarly, all software is still accessible on one level or another; the communities that crack software or hack electronic devices such as game systems and portable music players are ample evidence of this.

Software is seen as being able to conceal itself on every front, and to have that concealment further enforced by patents and copyrights. The quality of undecidability and complexity has also led to the legal status of software as material, despite its immateriality. At this point in time, due to the internet and the rapid ability for software to be copied, this nature has become much more complex, with the material elements being held onto very tightly by publishers.

The capacity for software to function is limited by and dependent on the power of the hardware, whose available memory ultimately limits software’s capacity to emulate other systems.

Ultimately, Kittler concludes that software is ill fitted to take on the increasing complexity of problem solving that will be demanded in the world. Programmability limits the effective potential of any platform due to its openness and lack of focus. Hardware has the strength and power to simulate real-world systems much more effectively.

Reading Info:
Author/Editor: Kittler, Friedrich
Title: There is No Software
Type: book
Context:
Tags: dms, media theory, postmodernism
Lookup: Google Scholar, Google Books, Amazon

Walter Benjamin: The Work of Art in the Age of Mechanical Reproduction

[Readings] (08.29.08, 4:16 pm)

Notes

Benjamin is attempting to derive an approach to the understanding of art that is useless for Fascism (in the sense of discourse). Instead, the theory of art should be useful for developing revolutionary aesthetics and values.

1

The work of art is always reproducible, at least in principle. Mechanical reproduction enables this to a dramatically increased extent. Reproduction itself has a history originating in stamping and extending through woodcuts, to lithography, and finally to photography and film. The emergence of these technologies enabled two things: 1) the ability to “reproduce all transmitted works of art”, and 2) the capacity of these technologies to become legitimate as artistic processes in their own right.

That first point is made very quickly and is a very dangerous statement. Benjamin might be meaning something more limited than it sounds, but it makes the ground a little shaky.

2

Reproductions are different from the originals in that the original has a presence in time and space. It also contains a history (one might say a cultural capital), which cannot be copied. (Although it could be emulated or referenced…)

A curious bit here: Manual reproductions were considered forgeries, and these allowed the original work to preserve its authority. But the process and culture of forgery seem more curious than that; further, a really *good* forgery might have the quality of confusing experts, and this blurs and confuses the attribution of authenticity.

Technical reproductions are different in that they tend to enable more to be derived from the original work, for example through photographic enlargement or slow-motion film. “…technical reproduction can put the copy of the original into situations which would be out of reach for the original itself.” Mechanical reproduction also enables distribution, allowing the beholder to access the work easily, without requiring labor or effort (travel, or, in the case of kitsch, education).

The next point is the highly controversial one: “The situations into which the product of mechanical reproduction can be brought may not touch the actual work of art, yet the quality of its presence is always depreciated.” This establishes the idea that the work of art is diminished by its reproduction, but there is a bit more subtlety going on. (And this ties strongly into Greenberg, later.) Namely, the value of the original is perceived as being less when confronted with the superiority of the mechanical reproduction. This point is still contestable, but follows directly from the previous.

The effect of reproduction is to detach the work of art from its tradition, and destroy the value of cultural heritage.

3

On the aura of natural objects: this is determined by proximity, how close one is to the object of attention. But (in the case of the masses) there is a desire to get closer to the object, and failing direct access, that may be done via its reproduction. However, that destroys the uniqueness of the object.

Criticism can be made at this point of the role of uniqueness. After Barthes, one might say that uniqueness and meaning derive from the beholder, not from the object or its image.

4

Before the era of reproduction, art was dependent on a sort of ritual function. Later, with the rise of reproduction, art reacted with the idea of “art for the sake of art”. This idea was to deny the social function of art and ascribe to it a purely ritual or theological one.

The function of mechanical reproduction is to liberate art from its dependence on ritual. Instead, art becomes art designed for reproducibility, which undermines the notion that authenticity might even exist. Thus, art begins to serve the function of politics.

5

Art operates in the service of functions. Originally, in the neolithic era, it was a function of magic and ritual, only later being termed art. The emphasis of the value of art leads to new functions, but these functions may be kept concealed.

6

Photography takes on a new set of functions and through a change in approach, takes on a new political significance. Specifically, Benjamin is looking at Atget who took photographs of deserted streets in Paris such that they looked like scenes of crime.

The fact that photography can take up political functions, though, does not imply that other works cannot serve functions different from those traditionally ascribed to them. The significant change seems to be in referencing the work to another system of meaning (crime photography), rather than being an intrinsic property of photographs as reproducible.

7

In the early days of film, critics attempted to view film using the logic of ritual.

Reading Info:
Author/Editor: Benjamin, Walter
Title: The Work of Art in the Age of Reproduction
Type: book
Context:
Tags: dms, media theory, postmodernism
Lookup: Google Scholar, Google Books, Amazon

Nicholas Negroponte: Being Digital

[Readings] (08.29.08, 4:15 pm)

Overview

This is intended to be read as a futurist look ahead. Negroponte gets many things wrong, but a few of his predictions have been realized remarkably: the popularity of email, the emergence of netiquette, and self-published video.

Negroponte’s focus is on individual users, but he does not really acknowledge other units of social organization. He seems to dislike demographics, and wants to get rid of classifications that are more general than the individual. He strips culture down to the individual level, but disregards the prevalent strength of other emergent social structures. If there is no society (just users and technology), then there is nothing in the way of technology.

The book works in perfect opposition to Duguid and Brown, who claim that social structures are resolute and hard to budge. They argue, accurately, that technology that does not account for society will fail. Their failure is to suggest that society will not change at all.

Assorted Notes:

Things are changing from atoms to bits. This might be read as saying that important things, or the important parts of things, are being understood in terms of bits instead of atoms, as information rather than matter.

A lot of this is brought about by (or at least reflected in) the rapid growth and saturation of the world and everyday life with microprocessors. The change strikes Negroponte as inevitable, and as having the potential to drastically change things. In the introduction, the argument seems to lovingly reflect a sort of futurism of the infinitely possible. In particular, Negroponte sees cultural systems as being drastically changed by digital communication.

The intense anti-futurist focus of Duguid and Brown makes some sense when compared here.

Chapter 13

Post-information age: Evidently (during 1995, when this was written) we were passing into a post-information age. This seems hard to swallow. The internet, the chief vehicle of the information age, had barely spread its wings by then.

What seems to be meant by post-information, as it relates to consumer culture, is an idea of individual knowability: that individuals are composed of information (notably as opposed to statistical rows; this is a more declarative model). Negroponte describes machines being able to understand people with a significant degree of subtlety.

Granted, some of this has been realized, notably in advertising and search technologies, but even so the degree to which distributed software systems know their users is minute, and far from Negroponte’s example of the helpful liquor store agent. (p. 165) Negroponte’s point is that “All of these are based on a model of you as an individual, not as part of a group who might buy a certain brand of soapsuds or toothpaste.” The problem is that this is nearly EXACTLY what the new models do! They are based not on committee-decided consumer models, but on statistical and correlational ones. Their only virtue is their emergent nature, but they are just as demeaning and reductive as any other.
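
For contrast, a minimal sketch of the kind of statistical/correlational model the note is pointing at: users reduced to vectors of past purchases, with recommendations drawn from correlations across the population. The data, names, and choice of similarity measure are hypothetical illustrations, not a description of any particular system:

    from math import sqrt

    # user -> {item: purchase count}; the data and names are invented
    purchases = {
        "ann": {"soap": 2, "toothpaste": 1, "wine": 0},
        "bob": {"soap": 1, "toothpaste": 1, "wine": 3},
        "cat": {"soap": 0, "toothpaste": 0, "wine": 2},
    }

    def cosine(u, v):
        """Cosine similarity between two users' purchase vectors."""
        items = set(u) | set(v)
        dot = sum(u.get(i, 0) * v.get(i, 0) for i in items)
        norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    def recommend(user):
        """Suggest items bought by the most similar other user that `user` has not bought."""
        others = [name for name in purchases if name != user]
        nearest = max(others, key=lambda name: cosine(purchases[user], purchases[name]))
        return [i for i, n in purchases[nearest].items() if n and not purchases[user].get(i)]

    print(recommend("cat"))  # -> ['soap', 'toothpaste']

The sketch “knows” nothing about the individual; it only exploits resemblance to other rows of data, which is exactly the reductiveness the note objects to.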

Furthermore, these examples seem extremely invasive of privacy. For every bit of helpfulness that a system might generate, there is ten times that in potential for abuse and exploitation. Negroponte claims that “advertising will be so personalized that it is indistinguishable from news. It is news.” (p. 170) This is a terrifying prospect, since an advertiser is always investing in potential return, which, by nature, is not in the consumer’s best interest.

Chapter 14

Media consumption is leaning towards a pay per view model of content. This does make sense and is consistent with contemporary models.

The abundance of meta-information is also forecast here, which has played out as well. Negroponte’s preoccupation with television seems to foreshadow phenomena such as YouTube.

Chapter 15

On interpretation via decoding (looking at decoding a page as a “fax”). Sounds like it could reference classical information theory.

Describes MIDI as a potential replacement format for storing music effectively. Essentially, Negroponte is trying to make the distinction between underlying structural information and rendered “image” information. Taken to the extreme, a musical recording is nothing more than the score or a MIDI file.

This misses the huge stumbling block that MIDI is horrible. And the matter of representation via schematic is prone to variability in representation, rendering power, consistency, and many more difficulties. What is troubling about this argument is not that it claims that schematic transformation is possible, even if lossy, but that it claims that the image is nothing more than a rendered schematic.

Modern compression (or even a relatively old format such as mp3) has significant compression power, but its approach is totally different.
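
A rough back-of-envelope comparison makes the structural-versus-rendered distinction concrete; the note count and the mp3 bitrate below are my own illustrative assumptions, not figures from Negroponte:

    seconds = 60

    pcm_bytes  = 44_100 * 2 * 2 * seconds   # CD audio: 44.1 kHz, 16-bit samples, stereo
    mp3_bytes  = 128_000 / 8 * seconds      # a typical 128 kbps mp3 stream
    midi_bytes = 600 * 6                    # ~600 note events per minute, ~6 bytes each (assumed)

    print(f"PCM : {pcm_bytes / 1e6:6.2f} MB")   # ~10.58 MB
    print(f"mp3 : {mp3_bytes / 1e6:6.2f} MB")   # ~ 0.96 MB
    print(f"MIDI: {midi_bytes / 1e3:6.2f} kB")  # ~ 3.60 kB

The schematic (MIDI) representation is thousands of times smaller precisely because it discards the rendered “image”, whereas mp3 keeps the image and discards only what the ear is unlikely to miss.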

Negroponte evidently really doesn’t like fax. Then again, who can blame him? And yes, email is really so awesome, and will be once users learn decorum. This makes sense and has happened. It’s hard to understand what is being argued, though.

Chapter 16

Negroponte looks at kids doing things with Lego/Logo systems as a way of reaching children with varying learning styles. This references the evocative capacities of computers discussed by Turkle. He strongly references Papert on this issue. Notably, computers enable learning by doing, as opposed to learning by drill and practice.

Via representational power, computers can provide context in learning.

Example of the wild goose chase separates “street smarts” from “classroom skills”. The internet enables an abundance of information available for children to learn and assemble. An odd critique of this is (again) the matter of abuse and safety. While the internet may enable a sort of collective intelligence, Negroponte assumes that all content will necessarily be positive and complimentary.

Concluding, he discusses Sheik Yamani and learning and education. Primitive people are ones who have a different means of conveying knowledge (from the modern perspective), and have a supportive social fabric to sustain them. Uneducated people are products of a modern society whose social system cannot support them.

The connection here is that children who learn with games and the internet will learn skills usable later in life, and (evidently) will be supported by the supportive embrace of technology. This implicitly assumes that other educational approaches have failed in the modern age. There are still dangers here: games that support learning also have biases, they may have agendas, and the ones that are fun tend to be ones that do not require critical thinking. Games such as Civilization are fun and require thinking, but thinking within the model of civilization put forth by the developers. Such a game does not truly teach critical thought because, like rote drill and practice, it promotes adoption of one model, and one model only.

How is the simulation internalization model different from the “games and violence” or “TV and violence” models of social systems? Kids learn via doing (e.g., with Lego/Logo, etc.), and they know to distinguish the real from make-believe. This requires a kind of literacy and awareness, though.

Reading Info:
Author/Editor: Negroponte, Nicholas
Title: Being Digital
Type: book
Context:
Tags: media theory, cyberculture, dms
Lookup: Google Scholar, Google Books, Amazon

Lev Manovich: The Language of New Media

[Readings] (08.29.08, 4:13 pm)

Notes, Chapter 5

In the office of Razorfish, the design of the space reflects the themes of computer culture: interactivity, lack of hierarchy, modularity. Outside, design of physical objects has shifted to evoke the idea of the computer (reversing the original principles of GUI design). Manovich returns and emphasizes that the first form of digital media is the database (which includes structure and logic). The second form is “a virtual interactive 3-D space”.

The database is the metaphor used to conceptualize both individual and collective memory. In this sense, social rules and values would be encoded as points of data within databases. Even appreciating Manovich’s extended interpretation and concept of the database, this seems like quite a stretch. Furthermore, computer culture supposedly uses 3D to represent every kind of information imaginable. The examples cited describe a few visualizations, but these visualizations are probably not in widespread use currently.

Using metaphorical terms: “…increasingly the same metaphors and interfaces are used at work and at home, for business and entertainment. For instance, the user navigates through a virtual space to work and to play, whether analyzing scientific data or killing enemies in Quake.” This captures a very strange semantic moment, where, as a culture, we are approaching vastly different subjects using the same ideas and metaphorical tools. But, surely, everything isn’t reducible to these core essences? Even as a mathematician, I don’t try to apply my mathematical tools (logic being the “core of reason”) to everything. Is the effect of digital media to universalize everything under the same vocabulary?

Manovich uses the metaphor of narrative and description to tie an interesting point around the dilemmas of the information age. Traditional culture provided well defined narratives for handling information, and now we have too much information and not enough coherent narratives to tie it together. As such, information access (in the raw sense of accessing bits of unrelated, unconnected data) is much more important. Manovich says that we need an aesthetics of information to guide information design, and that all design has turned to information design. A key subtlety of this is that traditional narratives unite information in the form of knowledge, and generally provide a sense of a knower. The new aesthetics would treat information as raw and without perspective.

The Database

Manovich’s database is significantly more complicated than the database of traditional computing. It may be subject to multiple and complex methods of arrangement and sorting, as well as having ontological classification and some degree of procedural interaction. However, all of these characteristics are minute and sit outside the central point, which is the data.

Games are characterized by algorithms, what others might call rules. The rules Manovich dwells on are remarkably high-level (i.e., the directions that enemies come from in Quake), glossing over the many deep steps needed to get any visual effects at all.
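
To make that contrast concrete, a toy example of such a high-level rule is sketched below; the spawn logic is invented for illustration and not taken from Quake, and everything beneath it (rendering, collision, input, timing) is precisely what such a rule glosses over:

    import random

    # Toy "high-level rule": where should the next enemy appear? (invented example)
    def choose_spawn_side(player_facing: str) -> str:
        """Spawn enemies mostly behind the player, occasionally to either side."""
        behind = {"north": "south", "south": "north", "east": "west", "west": "east"}
        sides = [s for s in behind if s not in (player_facing, behind[player_facing])]
        return random.choices([behind[player_facing]] + sides, weights=[0.6, 0.2, 0.2])[0]

    print(choose_spawn_side("north"))  # usually "south"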

A database represents the world as a disorganized list, while narrative orders events via cause and effect. By this account, the two means of representation are in conflict. Manovich explains, in clunky language, that an individual working through a text (either reading a narrative or playing a game) must uncover the underlying logic that governs that text. Manovich calls this logic an algorithm.

Later, new media is distilled into an amalgamation of multimedia: “The new media object consists of one or more interfaces to a database of multimedia material.” (p. 227)

The differing pair, database vs. narrative, is examined in the sense of semiotic linguistics. In traditional narrative, the database is interpreted as a paradigm, versus the narrative, which is a syntagm. The database is implicit (as a set of elements which might comprise something), where the narrative is explicit in its actual construction.

In new media, the database is made to be the explicit paradigm (as the dominant feature of new media) whereas the narrative is dematerialized as the syntagm.

Databases and narratives produce endless hybrids. Examples are epic poetry and encyclopedic texts. This issue is reminiscent of Aarseth’s Cybertext. Manovich looks at video art next, and examines some of the intense variations produced by experimental mathematical visualization. While this would seem to be more evocative of the procedural nature of media, it is the capacity for variation that is interpreted as tying back into the database. Rather than procedural variants, they are understood as data variants.

Navigable Space

Manovich brings in and compares the games Doom and Myst, as elements of spatiality in digital media. He maps narrative elements onto these, but in a slightly skewed manner: “Instead of narration and description, we may be better off thinking about games in terms of narrative actions and exploration.” With this, the player is responsible for moving the narrative forward. Exploration is an aspect of progressing through that narrative space, not just the dimensional space of the game world.

Manovich leaves these reasonable conclusions to assert something strange: that navigable space is common to all areas of new media, and that the motion simulator is the “new genre” of entertainment. This can be seen especially with games, though not with most software or other digital media. Games too use space, but are not always about the space.

One odd characteristic of space in computers is that its representation is austere, utterly blank, and empty. 3D modeling programs present the user with a void, with only the coordinate system to situate construction. Furthermore, construction is out of nothingness; computer space lacks a medium (and this can be said for most computer modeling or art programs). It lends itself to disembodied work, without ties to anything else. (p. 225)

Manovich claims that the trend is to progress towards an aggregate or systematic space. Such space ties together others in a rhizomatic manner. Manovich seems to be claiming something very literal: that the internet will transform into some massive heterogeneous chimera of 3D space, à la Second Life crossed with every other science fiction portrayal of the internet. This seems to be nonsense, but there is ground to be held on this argument at a symbolic level.

Further, in discussing the navigation of space and the narratives and philosophies thereof, Manovich looks again at computer games: “The dominance of spatial exploration in games exemplifies the classical American mythology in which the individual discovers his identity and builds character by moving through space.” (p. 271) Beyond games, information access is seen in terms of navigation and traversal (which can be applied regardless of 3D).

Reading Info:
Author/Editor: Manovich, Lev
Title: The Language of New Media
Type: book
Context:
Tags: dms, digital media, media theory
Lookup: Google Scholar, Google Books, Amazon

Barbara Stafford: Visual Analogy

[Readings] (08.29.08, 4:12 pm)

Overview

Stafford explores the concept of analogy as a kind of foundation for cognition. Analogy here is the notion of connecting, or finding similarity, which is opposed to the process of finding difference. Stafford is specifically exploring visual analogy, using visual analysis and language in her investigation, but the concepts of analogy and its visual method can be extended to a general approach to connecting and cognition. Platonism, gnosticism, and other forms of classical philosophy bear heavily on her discussion, and she draws on classical and modern philosophy to approach consciousness and media, both new and old.

Notes

Plato’s take on analogy: Desire for union with the unpossessed. Theological, philosophical, rhetorical aesthetic middle term: delayed not-yet, allusive not-quite. Analogy relates to mathematical ratio. Visual arts are uniquely suited to analogy. Compare with 19th century disanalogy or allegory. Balance between opposed duals, etc. (p. 3) About knowledge: a heuristic system in pursuit of equivalences, exposing ties, concretizing abstractions. “By raising a periscope, so to speak, over the social, biological, technological, and disciplinary landscape, I shall argue that we need both to retrieve and to construct a more nuanced picture of resemblance and connectedness.” (p. 8-9) On Hegel and Marx: forming a new theory of subsumption of dichotomous concepts, allegory. This loses something: instead of drawing connections between two things, this approach lumps them together into an ungainly whole. (p. 9)

Contemporary reasoning focuses on difference and unlikeness. (p. 14) Analogy in science: social analogy of forces (consider Freud’s drives as Newtonian forces, social systems as particles in fields, etc.). This seems to occur across the sciences. (p. 19) Types of similarity in law: analogizing morality across society. Law poses that mathematical formulae are isomorphic to behavior, and generalizes this throughout society. This has a tendency to dehumanize, as it likens people to bits in equations. (p. 31)

Stafford cites D’Arcy Thompson! The principle of similarities as functional. This sounds like computer code: things have similarity across morphology and functional characteristics. (p. 46) How to coordinate a mosaic out of the dissociated elements in the digital age? The fragmentary nature of data tends toward replication and solipsism. Computers and search engines do not know how to reconstruct the mosaic from fragments, or perceive resemblances. (p. 53)

Analogy vs Allegory: Dichotomous structures, binary, obverse/reverse of the same coin. Compare: Analytic allegory vs synthetic allegory. (p. 77) Transcendental culmination here: The contemplated object passes from the reach of will or representation. Consider and compare the aesthetic sublime. (p. 95)

Analogy as viewed by Aristotle: Translation (!) [as compared with mysticism previously]. Mimesis is at odds with hermeneutics. Metaphor translates words from one order of reality to another. “Aristotelian mimesis, or the activity of visibly converting and reconverting words in order to see phenomena in a new or better light, is fundamentally at odds with a negative, decoding hermeneutics.” (p. 116) On Leibniz: mathematics and world view, and Gestalt psychology. “Leibniz is also not far afield from the schema theory of Gestalt psychology, attempting to relate universals to particulars in accessible ways.” (p. 127)

Discussing AI: We lack a deep understanding of the nonverbal “inner-life” of the self: AI and neurobiology should look at visual connection/analogy. (p. 139) Perspectival knowing: Charles Sanders Peirce and Leibniz: reality is the end result of imaginative creating of categories that we stretch out to grasp. (p. 146) Discussion of the relation of cognition and analogy (similarity and connectedness) to how the brain works. The Cartesian legacy leads to the natural conclusion that the mind is a general computer learning program. (p. 158) Embodied analogy is an argument against AI (Strong AI?). We concentrate the universe in ourselves and radiate it outward. “In sum: It seems that the crux of the problem of consciousness lies in the flagrant contrast or clash between organ and awareness. How does one satisfactorily reconcile the paradox of a disembodied brain as a scientific conglomerate of dissected processes with the gut feelings, flickers of emotion, moral struggles, and secret attractions we intuitively feel? I have been arguing that the solution to this dilemma requires the full participation of humanistic imaging in that supposedly ‘interdisciplinary discipline’, cognitive science.” (p. 179)

Reading Info:
Author/Editor: Stafford, Barbara
Title: Visual Analogy: Consciousness as the Art of Connecting
Type: book
Context:
Tags: media theory, visual culture, specials
Lookup: Google Scholar, Google Books, Amazon

Frederic Jameson: Postmodernism

[Readings] (08.29.08, 4:10 pm)

Notes

Jameson opens his chapter with some general and vague articulation of contemporary culture. The general trend is that the understanding of the future has changed from a catastrophic or redemptive vision to a declarative finality: the end of ideology, art, social class, etc. This attitude is what Jameson calls postmodernism. Postmodernism is characterized, at least in its etymology, by being the state after the movement of modernism. The postmodern is a sort of declarative end of modernism and all that it stands for.

Jameson gives some discussion of the modern, generally through examples. Many recent works are characteristic of “high modernism”. The defining point of modernism seems to be an elitism, destroying and reconstructing traditions in pursuit of a certain ideal utopia. Postmodernism, by comparison, is characterized by the popular. The postmodern seeks to erase the distinction between high and low culture or art as defined by modernism. Instead, the postmodern adopts and incorporates popular and low culture into its substance.

Seemingly, Jameson is attempting to analyze this through a “periodizing hypothesis”, as opposed to an account of postmodernism as a single movement among others. This method of investigation is intended to give credence to the nature of postmodernism as, first, a new period separate from modernism, and, second, a “cultural dominant” that incorporates and subsumes other features.

On footwear: Jameson gives an extended example of Van Gogh’s “A Pair of Boots” as a high modern work, because of its evocative reference to a larger system of a world and values (peasants, the suffering of work life). Van Gogh uses color to romanticize and make utopian common objects. This approach makes the artwork a form of powerful representation, in the sense that it is heavily symbolic and stands for ideas and a system of meaning.

The postmodern example is Andy Warhol’s “Diamond Dust Shoes”, which is dead and lifeless, detached from any sense that the objects might have a life or history; they are instead random and inert. This work still conveys meaning, but it is not conveyed representationally: the work does not represent meaning, but rather evokes meaning from the viewer’s association with the commodity.

Postmodernism is also characterized by a “waning of affect”, where feeling and emotion are left lacking in newer images. Instead, images are commodified and are referential to the surface only, suggesting that there is only the image. This regression of images ties back to Baudrillard and simulacra. Common themes of the modern, anxiety and alienation, are missing or are inappropriate in the postmodern.

Instead of representation and explicit signification, the postmodern uses simulacra in the sense of Baudrillard. Instead of representing, the postmodern connotes, parodies, or uses pastiche. Contemporary works can no longer represent the past; instead they can only represent our ideas and stereotypes of the past. The postmodern in this sense is able to destroy history as a concrete thing.

The understanding of history in the postmodern era is schizophrenic. Instead of having a concrete chain of signification that composes a coherent meaning, postmodern memory has a breakdown of meaning, in the form of “a rubble of distinct and unrelated signifiers.”

Jameson spends time exploring some final notes on mapping and architecture. Postmodern architecture and mapping are characterized by a lack of signification. The postmodern building is a totally self-contained and nearly imaginary structure, without traditional ideas of reference and space. This leads to an alienation that can be seen as a lack of mapping. Traditional mapping connects the imaginary and the real, or the space with the map (or the model). A postmodern work/map/building must be dissociated and global. What that means, or how it would play out, is left to be determined.

Reading Info:
Author/Editor: Jameson, Frederic
Title: Postmodernism, or, the Cultural Logic of Late Capitalism
Type: book
Context:
Tags: media theory, dms, postmodernism
Lookup: Google Scholar, Google Books, Amazon