Archive for the ‘ACCELERATION’ Category

Metaphysical HD

Tuesday, June 7th, 2011

In December last year Ambika P3 mounted an exhibition of Terry Flaxton’s work as AHRC Creative Research Fellow at the University of Bristol, concerned with high resolution imaging. The works were mostly large-scale moving image projections of group portraits and landscapes: precisely crafted cinematographic vignettes, posed and composed with clear attention to detail, captured with high-end, high definition digital cinema technology.

To complement the exhibition Flaxton invited a number of people, over a number of lunchtimes, to talk around the question of the ‘aura’ of the work of art, famously described by Walter Benjamin in The Work of Art in the Age of Mechanical Reproduction (1936) as being absent from the mechanically reproduced work, and what the further implications of this might be in the age of digital high resolution video reproduction. My response was to return to glitch as an index of digital materiality and to speculate on what this might mean now that Standard Definition (SD) has given way to High Definition (HD). My position, delivered as a kind of polemical rant, was an appeal to an imperative to glitch-up the medium, a delinquent reaction to the new hegemony of high definition, in short a call to “fuck shit up”.

We are, it seems, past the cusp of a transition: as the critical mass of video media shifts from SD, HD has become dominant at consumer and professional levels, in the cinema, on television, and in gallery-based artists’ work. Is it possible any more to purchase a new TV that isn’t HD-ready, or to buy a camcorder that isn’t HD? It would seem, anecdotally at least, increasingly not.

While HD moving image media has indeed become commonplace in gallery-based artists’ work, Flaxton’s attempt to highlight the effects of its specificity is rare in this context. Ed Atkins, in a recent essay, discusses HD in a way that seems, on the face of it, to accept the promise of its verisimilitude while critiquing its effect, writing that “High Definition (HD) has surpassed what we tamely imagined to be the zenith of representational affectivity within the moving image, presenting us with lucid, liquid images that are at once both preposterously life-like and utterly dead.” The problem with HD for Atkins is that it is “…a ‘hollow’ representation, eternally distanced from life, from Being.” This is a paradox that rests, according to Atkins, on the ontological contradiction that HD is “essentially immaterial” and that this is “…concomitant to its promise of hyperreality – of previously unimaginable levels of sharpness, lucidity, believability, etc., transcending the material world to present some sort of divine insight. Though of course, HD’s occasion is entirely based upon the fantastic representation of the material and only the material.”

For Atkins the ostensible success of the medium – its realism, its astonishing representational ability – paradoxically renders what it represents as dead: dead in the sense that the very theatricality of what is represented (in his essay, the image of Johnny Depp in Michael Mann’s Public Enemies (2009)) is revealed for what it is. Not being an image of a ‘real’ life, it is an image of death, and with that a second twist of the paradox comes into play: as an image of death it reveals the mortality of the ‘real’ Johnny Depp.

In identifying these representational phenomena as qualities of HD, Atkins characterises it as existing within the Zeitgeist, as it “both apprehends the progress [of the drive towards ever improved realism] and helps it on its way… It’s ambiguous yet minted enough to be understood as both transitory (how high is ‘High’?) and specific (‘Definition’).” However this perfect and perfectly dead image is in reality the result not of a digitally immaterial medium but of the very material application of software, codecs and faster processing, as the Zeitgeist reflects the demands of the industrial media complex for ever higher resolutions, working hard to ensure an illusion of immateriality through ever improved realism. This is the reproduction of a photo-realistic vision of the world that has for centuries been locked in by grids, planes and lenses, enshrined in the conventions of Euclidean perspective as the default condition for representational realism.

Bitmapping, colour, codec, layers, grading, sprites, perspective, projection and vectors: in Making Space (Senses of Cinema, Issue 57) Sean Cubitt demonstrates how developments in compression software work to maintain illusions of spatial movement in high resolution cinema. While he admits that “…we are still trying to understand what it is that we are looking at now…”, it is clear that how whatever it is becomes visible is the result of some highly sophisticated processes and processing. He describes how this is forged through complex matrices of raster grids, bitmap displays and hardwired pixel addresses; how, as digital images are compressed – crushed, some more than others depending on the delivery platform (from YouTube to BluRay and beyond) – the illusion of movement relies on a dizzying array of operations of vector prediction, keyframing and tweening in Groups of Blocks; and how layers of images have become key components of digital imaging, creating representational space through the parallax effect, whereby relative speed stands in for relative distance and the fastest layer appears to be closest to the viewer.
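The parallax principle described here – relative speed standing in for relative distance – is simple enough to sketch. The following Python fragment is a toy illustration, not any codec’s or compositor’s actual implementation: each layer’s scroll offset is scaled inversely to a notional depth, so the layer that traverses the frame fastest reads as nearest the viewer.

```python
# Toy parallax scrolling: a layer's apparent speed is inversely
# proportional to its notional depth, so the fastest-moving layer
# reads as closest to the viewer (illustrative only).

def parallax_offsets(camera_x, depths):
    """Horizontal offset of each image layer for a camera position.

    A layer at depth 1.0 tracks the camera 1:1; a layer at depth 8.0
    moves an eighth as far, which the eye interprets as distance.
    """
    return [camera_x / depth for depth in depths]

depths = [1.0, 2.0, 8.0]  # foreground, midground, far background
for frame in range(3):
    print(frame, parallax_offsets(camera_x=frame * 10, depths=depths))
```

Run over successive frames, the foreground layer sweeps across the frame while the background barely moves – relative speed doing the work of relative distance.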

So, digital moving images are not simply the product of an invisible and vaguely immaterial technology; due to physical limits on storage space and bandwidth, the digital moving image is very much dependent on software and hardware to construct the illusion of high resolution. Far from leaving media specificity behind – once the dust of apparent verisimilitude has settled or been stirred up, once the seductive veneer of the image has become commonplace or been surpassed by ever higher definition – there may be much for a digital materialist to find in post-media medium specificity.

But digital moving image media’s apparent detachment from a physical base or specific material apparatus has been accelerated with HD. Cameras record directly to drive or card, and exhibition is less likely to be through playback from dedicated physical media like tape and optical disc than from a hard drive of some description. Whether this is dedicated moving image equipment or the ubiquitous disk found on a local computer or network, physically and technologically it will be indistinguishable, and could equally be used to store and play back sound, display text, images, the internet, a spreadsheet, an eBook, or any given combination of those things and countless others. But media forms have historically been tethered to physical material, and in reality this is no less the case with the migration of media onto digital technology. As N. Katherine Hayles points out in Writing Machines, “materiality is as vibrant as ever, for the computational engines and artificial intelligence that produce simulations require sophisticated bases in the real world”. However, we have seen that in the digital domain materiality no longer demands physical specificity, so it is more productive to conceive of media specificity as having taken something of a metaphysical turn. While they may rely on the same physical support of hard drive machinery, specific digital media can now best be thought of as discrete ‘metaphysical objects’ – things that we still call ‘films’, ‘photographs’, ‘sounds’, ‘poems’, ‘recipes’ – but objects nonetheless; some we might call invoices, others we will call artworks. How much less of a real object is a virtual cat, chair or banana than its physical equivalent? They all exist in the world as entities with their own essence and ontology. Medium specificity simply distinguishes media objects of a different nature, determining the medium’s essential qualities as an object separate from other objects in the world.

The essence of the medium or format, like the essence of any object, is never fully approached or appreciable; the whole of the object is never apprehended all at once. Traces of its essence are, however, on occasion visible: the grain of the film betrays its photo-chemical nature, the scratch its physical material, and so on. Remediation has ensured that the material tropes of physical analogue moving image media have become thoroughly subsumed into HD, but as effect rather than as material essence. Essences and questions of materiality can also be applied to electronic and digital media; as the ‘whole’ of the object is never appreciated and, like indexicality and hapticity, is unconstrained by notions of physicality, media can be considered as objects, or mega-objects, with qualities of ontologically equal, or at least non-competing, status with material objects. HD as a medium isn’t some kind of dematerialised digital state of immanence ready to emulate and then better pre-existing analogue media forms; it has its own visual representational qualities made possible by a material base, manifested as an object, a specific thing in itself.

In the development of his object-oriented philosophy, Graham Harman takes up Heidegger’s formulation of tool-being, starting with the broken tool analogy: a piece of equipment reveals itself as a discrete object once it stops being useful. HD can also be described in this way; its existence as a medium is not noticed until it no longer functions in the invisible mediation of information. Glitch effects break the medium, revealing something of its essence through artifacts – intended or otherwise – of malfunctioning code, compression or hardware. As HD becomes commonplace, weird artifacts with exotic names like macroblocking and mosquito noise are becoming everyday experiences.
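The sort of deliberate breakage invoked here can be sketched in a few lines of Python. This is a hedged illustration rather than a recipe – the file names are hypothetical, and which bytes can be corrupted without making a file unreadable varies by container – but overwriting a handful of payload bytes in a compressed stream is one classic way of provoking macroblocking-style decoder artifacts:

```python
import random

def glitch_bytes(data, n_glitches=20, skip_header=1024, seed=0):
    """Overwrite a few random bytes in a compressed stream.

    Leaving the first `skip_header` bytes alone preserves container
    metadata so the file still opens; corrupted payload bytes then
    surface as blocky decoder artifacts rather than outright failure.
    """
    rng = random.Random(seed)  # fixed seed: repeatable glitches
    buf = bytearray(data)
    for _ in range(n_glitches):
        pos = rng.randrange(skip_header, len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

# Hypothetical usage on a video file:
# with open("clip.mp4", "rb") as f:
#     glitched = glitch_bytes(f.read())
# with open("clip_glitched.mp4", "wb") as f:
#     f.write(glitched)
```

How spectacularly (or whether) the result decodes depends entirely on codec and container; the point of the sketch is that the artifact is the broken working of code and compression made visible.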

Harman extends tool analysis to all objects, and in this sense the tool isn’t an object that is “used”, it simply ‘is’: “…to refer to an object as a “tool-being” is not to say that it is brutally exploited as means to an end, but only that it is torn apart by the universal duel between the silent execution of an object’s reality and the glittering aura of its tangible surface.” (Graham Harman, ‘Object-Oriented Philosophy’, Towards Speculative Realism, 2009). Returning to Walter Benjamin’s assertion about the dubious status of the auratic in mechanical, electronic and digital reproduction, as investigated in Terry Flaxton’s discussions above, we can propose that in Harman’s terms a digital medium conforms to the conditions of being an object: its visible manifestation has a tangible audio-visual surface, it has aura, but it is also an object which draws attention to itself as such when it is broken; glitch artifacts, which is to say the broken workings of the code, compression and hardware, attest to its essence and its materiality.

Critical examinations of moving image medium specificity in art and cinema have been predicated on a critique of the use of media by cultural practitioners – Rosalind Krauss in A Voyage on the North Sea: Art in the Age of the Post-Medium Condition (2000), for example, or Noël Carroll in Theorizing the Moving Image (1996) – which, while diverging from Greenbergian Modernism, occupy more or less the same predominantly humanist critical ground, where art objects and media are framed solely in relation to the human producer and reception. However, while human art practice moves away from specificity, inventing a world of relationism, process and the ongoing project, the medium and the object have not simply ceased to exist.

Graham Harman offers the tantalising assertion that “…the dualism between tool and broken tool actually has no need of human beings, and would hold perfectly well of a world filled with inanimate entities alone.” Where could this take us as a speculative object-oriented metaphysical materiality, conceiving of a post-medium specificity which attends to the materiality of the media object – an object with an essence, ontology and contingency at least equal in value and status to that of the artwork, the artist, or any other object in the world?

Views From an Accelerated Reality # 1: Vernor Vinge's Technological Singularity

Sunday, May 22nd, 2011

In preparation for the 1993 Vision-21 symposium held in Cleveland, Ohio, USA, NASA’s Lewis Research Center issued a small press release. In it they explained:

Cyberspace, a metaphorical universe that people enter when they use computers, is the centrepiece for the symposium entitled “the Vision 21 Symposium on Interdisciplinary Science and Engineering in the Era of Cyberspace.” The Symposium will feature some remarkable visions of the future.[1]

Looking back it’s probably difficult to imagine the sort of excitement that surrounded symposiums built around this theme. Today our contemporary notions of a digitised reality centre on ideas of the social network and connectedness, in which one is either online or off. The Internet augments and points back to a reality we may or may not be engaged in, but it doesn’t offer an alternative reality that isn’t governed by the same rules as our own. The concept of cyberspace, an immersive “virtual” reality in which the physical laws of our own do not apply, has all but disappeared from the popular consciousness. These days any talk of immersive digital worlds conjures up visions of social misfits playing non-stop sessions of World of Warcraft, or living out fantastic realities in Second Life. In 1993 cyber-hysteria was probably at its peak. The previous year the virtual reality nightmare The Lawnmower Man (loosely based on a Stephen King short story) was released in cinemas and grossed one hundred and fifty million dollars worldwide[2]. Virtual reality gaming systems, complete with VR helmet and gloves, were appearing in arcades everywhere (though they never seemed to work), the Cyberdog clothing franchise was growing exponentially, and even the cartoon punk-rocker Billy Idol jumped on the bandwagon with his 1993 album Cyberpunk. Vision-21 was probably right on the money, dangling cyberspace as a carrot to draw big-name academics dying to share their research on ‘speculative concepts and advanced thinking in science and technology’[3].

Amongst the collection of scientists and academics, who I imagine paid their participation fees to deliver papers with titles like Artificial Realities: The Benefits of a Cybersensory Realm, one participant sat quietly waiting to drop a theoretical bomb. Vernor Vinge (pronounced vin-jee), science fiction writer, computer scientist and former professor of mathematics at San Diego State University, was there to read from his paper entitled The Coming Technological Singularity: How to Survive in the Post-Human Era. You can almost picture the audience’s discomfort as Vinge read out:

 

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.[4]

 

The crux of Vinge’s argument, summarised for sensational effect in the two sentences above, was that the rapid progress of computer technology and information processing ran parallel to the decline of a dominant human sapience. Technologies built to augment and increase humanity’s intellectual and physical capabilities would eventually develop a consciousness of their own and an awareness that our presence on earth was negligible. This series of events and the resulting set of consequences are what Vinge referred to as the Technological Singularity.

This dystopic future narrative, foretelling a kind of sinister digital sentience, had already been played out on the big screen in Stanley Kubrick’s 2001: A Space Odyssey and James Cameron’s The Terminator (featuring Arnold Schwarzenegger’s career-defining role as the ‘Micro-processor controlled, hyper-alloy combat chassis’[5], or cyborg for short). What rescued Vinge’s thesis from the familiar terrain of dystopic cyber-plotlines, and a hail of academic derision, was the insertion of a second and more plausible path towards a post-human era. The traditional sci-fi route to the post-human condition has the sudden self-consciousness of superhumanly intelligent machines as its root cause. This formed part of Vinge’s initial argument.

 

If the technological singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of a “machine in the likeness of the human mind”. In fact, the competitive advantage – economic, military, even artistic – of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get there first.[6]

 

Still, Vinge must have known that the creation of a superhumanly intelligent, sentient computer was a bit of a long shot. Artificial intelligence machines had still not managed to pass Alan Turing’s test, introduced in 1950, and Japanese electronics seemed primarily concerned with teaching robots to dance. So in order to shore up this rather shaky portion of his post-human hypothesis, Vinge introduced another pathway to the technological singularity: Intelligence Amplification (IA), a process in which normal human intelligence is boosted by information processing apparatus. Vinge explains:

 

IA is something that is proceeding very naturally, in most cases not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence. Even now, the team of a PHD human and good computer workstation could probably max any written intelligence test in existence.[7]

 

What Vinge sketches out above is the kind of hypothetical example in which chess grandmaster Garry Kasparov and Deep Blue, the computer that beat him at his own game in 1997, would have joined forces to become a superhumanly intelligent, post-human chess player. It’s the clunky combination of a desktop computer and a PhD student that makes the prospect of a superhuman chess-god so unthreatening. Even in 1993, nobody at the Vision-21 symposium would have possessed a computer small and unobtrusive enough to amplify their own intelligence levels without everyone else in the room knowing about it. Today that’s a different story. What Vinge knew then was that at the accelerated speed with which reductions in computer hardware size (and concomitant increases in processing power) were taking place, it would only be a matter of years before powerful information processing engines could fit in the palms of our hands, or even, further down the line, become interlaced with our brains’ axons and dendrites. He knew that the scientists and academics sitting in that room knew it too.

At its most basic, IA takes place when you check a digital watch or solve a difficult mathematical problem with a calculator. Today the amplification of intelligence is happening on nearly every street corner in every major city in the world, courtesy of smartphones and instant portable access to the Internet. The speed with which developments in computer technology led to this newfound portability is unprecedented and shows no signs of abating. If anything, developments are probably getting faster. Viewing social, political and cultural life through the lens of IA, there’s a pretty strong case for Vinge’s technological singularity and the idea that we are living through its latter stages.

But what’s so bad about progress? Wouldn’t it be cool if everyone was walking around with superhumanly amplified intelligence levels? Maybe so, but implicit in Vinge’s theory is an existence many of us would struggle to define as human:

 

The post-singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of self-awareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the singularity. Thinking about them, one begins to feel how essentially strange and different the post-human era will be, no matter how cleverly or benignly it is brought to be.[8]

 

The question of access to this superhuman capacity is also a cause for concern. As the possession of advanced technological apparatus is reserved for those who can afford it, will we begin to see the emergence of an underclass of sub-humans, stuck at average levels of intelligence? And what happens when the first instance of computer/human symbiosis takes place? Will the first fully awakened, integrated superhuman man/machine see his or her own flesh as the negligible half of that pairing? We’re heading dangerously into Terminator territory again, but as fantastic as these questions sound, they are entirely plausible. Whatever the case may be, as humankind hurtles towards its own obsolescence, accelerated reality is a disorienting place to be.


[1] http://www.nasa.gov/centers/glenn/news/pressrel/1993/93_17.html

[2] http://www.imdb.com/title/tt0104692/

[3] http://www.nasa.gov/centers/glenn/news/pressrel/1993/93_17.html

[4] VINGE, Vernor, The Coming Technological Singularity: How to Survive in the Post-Human Era, 1993

[5] CAMERON, James and HURD, Gale Anne, The Terminator, Screenplay, 1983

[6] VINGE, Vernor, (as above), 1993

[7] ibid

[8] ibid

 

The future of materialist video nostalgia

Wednesday, April 27th, 2011

MiniDV logo

During the 1990s tape-based digital video (DV), in the form of the miniDV cassette format, replaced existing analogue video, particularly camcorder formats such as Video 8, Hi8 and VHS. Promoted as a new, small, portable medium, miniDV owed its success to its high quality compared with portable domestic analogue tape formats: it was capable of much higher resolution image recording than its predecessors, comfortably considered to be ‘broadcast quality’. MiniDV was embraced by both the domestic market and ‘professional’ production, and while arguments about its resolution relative to that of 16mm film raged (and possibly still do) on experimental film discussion lists such as Frameworks, for a while it became the format of choice for many low-budget feature film and documentary productions. Some of the high-end consumer (so-called ‘prosumer’) and professional model camcorders boasted high-quality lenses and other industry-standard technologies – CCDs, chipsets and so on – which made the quality of the images produced comparable to professional analogue formats like Betacam SP, effectively cancelling the equation domestic video format = low resolution.

Firewire cables

Naturally enough, miniDV was also embraced by artists. Concurrent with domestic-level video becoming capable of these higher resolutions, the means of production became accessible as never before as computers, particularly Apple Macintoshes, shipped with ever faster processors, more RAM and increased hard drive capacity, while Firewire technology made it easy to capture digital video from the camera tape to the computer for editing. MiniDV and desktop video made higher resolution videomaking affordable, simple and domestic, just as super-8mm filmmaking had been in decades past, and artists’ video facilities started to go out of business.

My memory of the way digital video was first received is that there was an acknowledgement of the increase in resolution and all the presumed benefits that offered, alongside an often expressed opinion that digital images lack depth, looking somehow flatter than their analogue predecessors. These latter comments seem to have diminished fairly quickly as viewers have become accustomed to viewing digital images. The wider accessibility of the means of digital video production was also the condition that made a materialist glitch practice possible. Technological developments often provide a context for new artistic ones, and access to digital video afforded practitioners ample time to play with the medium, to explore its essential qualities, to discover and exploit its mutability. In some ways there is a parallel between this and the way access to 16mm film technology at film co-ops in the ’60s/’70s made structuralism and materialism in film possible, or how the arrival of domestic VHS in the ’80s made crash-edited scratch video possible.

Transformations Stephen Sutcliffe

Transformations (2005) by Stephen Sutcliffe

However my concern here is not so much with a simplistic causal technological determinism as with the speculation that there are qualities dependent upon these media technologies that become perceptible in retrospect, and that this is as much a function of the relationship of media-based objects within culturally determined networks. When a format is displaced in the culture by another, the materiality of the old media becomes more noticeable. Consider how the quality of home-recorded VHS broadcast images carries a complex nostalgia in the work of an artist like Stephen Sutcliffe. In his earlier works the soft electronic milkiness of the VHS format becomes a visual signifier for memory when coupled with soundtracks evoking the recently unconscious, much in the way that the grain of super 8 film is often used to evoke nostalgia, memory and other perceptual states in mythopoeic and narrative film. The essential qualities of these media become more visible in hindsight as they become intentional objects – in particular their non-mutable qualities, not those revealed or exaggerated by glitch and other materialist techniques. The assertion here is that the material qualities or essences of a medium are always present, even when hidden or not visible, but that perception of the inherent qualities of these objects changes in time.

When will the essential visual qualities of miniDV begin to become visible? In the other instances mentioned above, non-mutable media materiality has become visible some time after the point at which a ‘higher resolution’ format displaced the earlier one in terms of currency – after a kind of perceptual interval during which the essential qualities of that format become more recognisable through a process of nostalgic unmediation, as the ghosts of the media format are exorcised. That moment may not quite be imminent for miniDV, but the High Definition turn in TV, cinema and domestic video, as well as in artists’ work, suggests that it is a distinct possibility for the not-too-distant future.

In the meantime, in spite of the increase in resolution of digital moving image media with the development of HD, and talk in various contexts – in the cinema, the gallery, on the internet and elsewhere – about ‘post-media’, the materialist ontology of moving image media is, as we shall see, just as contingent as it ever was.

Digging up the Future: on the imaginary archaeology in art and other sciences by Maarten Vanden Eynde

Monday, April 4th, 2011

‘The present returns the past to the future’ – Jorge Luis Borges

Besides prediction models based upon recovered data from the past and the present, there is nothing but imagination at hand to envision the future.

The specific interests and intents of art and all the existing sciences seem to flock together whenever a distinctive humanistic evolution is inevitable, creating an épistème of knowledge [1]. In the Middle Ages we struggled to find similarities and resemblances between micro and macro, humans and God, earth and heaven – we are all alike, mirrored in the image of God, was the prevailing dictum. It took until the 17th century before we started to look for differences, classifying species in separate models (the taxonomy of Linnaeus) and paving the way for individual existence. In the 19th century Darwin and Lamarck opened the door to the past and instigated the origin of history. We discovered where we came from and started to reconstruct the string of our evolution. Marx introduced the theory of historical materialism and added why to the questions of when, where and how. Photography was invented and gave us the first artificial tool to catch a moment. Slowly but surely we became grounded in the reality of the present.

These new certainties – knowing where we come from and being able to define the distinctiveness of being Homo sapiens sapiens – created an outburst of self-confidence during the 20th century in art and all the other sciences, opening up endless possibilities to act within the present. The result was there, immediately visible, and the responsibility was all ours. This conviction in our own abilities stimulated the industrial evolution, which changed the world beyond recognition and gave way to the largest population explosion in human history. We learned to genetically manipulate life, we unravelled the mysteries of most DNA strings (including our own), we figured out a way to recreate almost anything out of almost nothing using nanotechnology, and we found ways to be everywhere at the same time (radio, television, internet). We mastered the épistème of the present, leaving only the future to be destined.

The notion of consequence is the first manifestation of futurism; concern slowly replaced the initial euphoria about endless growth and infinite possibilities. The speed of new inventions, and of the subsequently growing knowledge, is accelerating just like the expansion of the universe, and might bring us to what is currently known as the Singularity [2]. At that moment, predicted to occur around 2035, knowledge would double every minute, making it impossible for ‘normal’ humans to comprehend.

Andy Warhol, "Campbell’s Soup Cans", 1962

The Club of Rome was the first to use computer models to predict the future [3]. Some predictions proved to be far-fetched, since evolutions in general behave more chaotically than anticipated, but many of its future scenarios have by now become reality. Its first report, The Limits to Growth (1972), caused a permanent interest in what is to come and is still the best-selling environmental book in world history. The second report, from 1974, revised the predictions and gave a more optimistic prognosis for the future of the environment, noting that many of the factors were within human control and that environmental and economic catastrophe were therefore preventable.
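The Club of Rome’s models (World3 and its successors) were system-dynamics simulations of coupled stocks such as population, resources and pollution. A drastically simplified two-variable sketch in Python conveys the flavour of the overshoot behaviour those models made famous; the coefficients here are invented for illustration and are not taken from the reports:

```python
def toy_limits_model(steps=300, pop=1.0, resources=100.0,
                     growth=0.03, decline=0.01, consumption=0.1):
    """Two-stock caricature of a system-dynamics world model.

    Population growth is throttled by the remaining resource stock;
    once resources fall low enough the net rate turns negative, so
    the population overshoots and then falls back from its peak.
    """
    history = []
    for _ in range(steps):
        net_rate = growth * (resources / 100.0) - decline
        pop = max(0.0, pop * (1.0 + net_rate))
        resources = max(0.0, resources - pop * consumption)
        history.append((pop, resources))
    return history

history = toy_limits_model()
peak = max(p for p, _ in history)
print(f"peak population: {peak:.2f}, final: {history[-1][0]:.2f}")
```

Even this caricature reproduces the qualitative shape of the 1972 ‘standard run’ – growth, overshoot, decline – while the real models coupled many more feedback loops.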

This notion of self-control in relation to making history by interfering in the present became the most important theorem of the 20th century. In the art world too, this feeling of being able to transcend one’s own existence by imagining what might, what could and what should be became predominant. Although a great many artists working with history are digging up old stories, forgotten facts and undisclosed objects of the past to reinvent and reinterpret history, a much greater number are involved in writing current history, looking at what might be relevant for future generations to remember us by. Prefigured by Marcel Duchamp, Andy Warhol was probably the first artist to fully realise the potential of freezing and claiming history by randomly choosing an insignificant object, like a can of Campbell’s soup or a box of Brillo soap, and lifting it above oblivion. This self-proclaimed deus ex machina, or act of vanguardism, was copied by many other artists – Haim Steinbach, Jeff Koons and Damien Hirst among them – who, with varying luck, tried through object fetishisation to declare or even force history to happen.

A similar strategy is the combination of elements from the past with the present, already cashing in on the idea that the present is also the future’s past, and that future historians could unwittingly mingle the two, creating a stimulus for an altered state of remembering or, stronger, rewriting history altogether. These combined traces of different pasts create an endless chain of possible futures, visualised by artists like Simon Starling, Ai Weiwei, Wim Delvoye and Brian Jungen.

Ai Weiwei, "Han Dynasty Urn with Coca-Cola Logo", 1994

Too many critics and curators focus on the past to make sense of, or give value to, archives, artistic research or current art production in general. By doing so, they enforce a self-fulfilling prophecy upon the work and do no justice to the imagination and sheer curiosity of the creator towards the representation of the present in the future. What will remain? What is our heritage for the future? Even artists like Gerhard Richter, Roy Arden, Peter Piller, Batia Suter and Lois Jacobs, who at first glance seem to work with the past, are rather formulating different answers to what could or should remain of the present.

Roy Arden’s Versace, for instance, is not looking at the past in the historical sense but imagining how, in the future, we might look back at the present. It questions the relevance or value of anything in our contemporary society to represent that same society in the future. Many other artists, like Cornelia Parker, Mark Dion, Damien Hirst and Guillaume Bijl, are doing the same thing: they lay the foundation of future history. They are telling a story, our story. Cornelia Parker uses remnants of (self-)destroyed parts of reality and tries to put them back together again. Mark Dion shows the leftovers of our society in a more ‘classic’ archaeological context, while Damien Hirst and Guillaume Bijl subtract a certain object or an entire space from our present world, like a slice of cake, and preserve it directly for future generations. Although using different modes of working, they all work with possible remnants of our current civilisation, imagining different pieces of the puzzle that could be used in the future to piece back together the history we are currently creating. They work within the future, not the past.

Roy Arden, "Versace", 2006

This interest, or calling upon, is visible not only in the current art world but across most branches of the science tree. In the field of biology, animals are duplicated, cloned, crossbred and pimped in all imaginable ways to become stronger, smaller, longer-lasting, fluorescent [4], faster-running… in general, better equipped for eternity. Humans have not only discovered how to eradicate life, destroying, willingly or not, several entire species and ecosystems in the past; by now we also know how to manipulate and maintain life. The promise of being able to cure almost any disease in the near future by using nanobots to do the dirty work has caused a real run on life-extension programs like Alcor, the world leader in cryonics [5]. More than one hundred people have been cryopreserved since the first case in 1967. More than one thousand people have made legal and financial arrangements for cryonics with one of several organizations, usually by means of affordable life insurance. The majority chose to preserve only their head, assuming that the body could easily be regenerated in the future, using the same technique lizards use to grow back a limb.

The current emphasis on preservation seems to overrule the act of excavation even in archaeology, a science traditionally grounded in the past. Prophesying an imminent crisis or apocalyptic disaster has inspired us to bury time capsules deep underground, containing samples of current societies including their historical highlights. In 2008 the Svalbard Global Seed Vault opened its doors to all 1,300 gene banks throughout the world. The Seed Vault functions like a safety deposit box in a bank: the Government of Norway owns the facility and the depositing gene banks own the seeds they send. The vault now contains over 20 million seeds, samples from one-third of the world’s most important food crop varieties. In 1974 Ant Farm constructed Cadillac Ranch: ten Cadillacs, ranging from a 1949 Club Coupe to a 1963 Sedan, buried fin-up in a wheat field in Texas. Much later, in 2006, during a performance work called Burial, Paul McCarthy and Raivo Puusemp buried one of McCarthy’s own sculptures in the garden of Naturalis, the Natural History Museum in Leiden in The Netherlands. The buried sculpture resides underground as an artefact for future discovery.

Ant Farm, "Cadillac Ranch", 1974

Currently, four time capsules are “buried” in space. The two Pioneer plaques and the two Voyager Golden Records have been attached to spacecraft for the possible benefit of spacefarers in the distant future. A fifth time capsule, the KEO satellite, will be launched around 2010, carrying individual messages from Earth’s inhabitants addressed to earthlings around the year 52,000, when KEO will return to Earth [6]. In cosmology as well, the focus is on the future. Experiments are conducted to create black holes, possible portals to travel through time. Terraforming attempts might create an atmosphere around a distant planet or moon, a possible escape for humankind if planet Earth is no longer viable.

In 1971 the first artwork was placed on the moon. Fallen Astronaut, created by Belgian artist Paul Van Hoeydonck, is an aluminium sculpture of 8.5 cm representing a sexless abstraction of a human. It was left on the moon by the Apollo 15 crew next to a memorial plaque listing the names of the astronauts who died on their way to the moon. In 2003 a work of art by Damien Hirst, consisting of 16 multi-coloured spots on a 5 cm by 5 cm aluminium plate, was sent to Mars. The colours would be used to calibrate the camera, while a specially composed song by the British pop band Blur would be played to check the sound and accompany the arrival of the Mars lander, the Beagle 2. The namesake of Darwin’s exploration vessel was last seen heading for the red planet after separating from its European Space Agency mother ship Mars Express on December 19, 2003. Part of a mission estimated to cost $85 million, the probe was supposed to land on Mars a few days later, on Christmas Day, and search for signs of life, but it vanished without a trace…

Closer to earth itself, many artists have made works that can be seen from outer space. The biggest one, Reflections from Earth, was made by Tom Van Sant in 1980: a series of mirrors over a 1.5-mile stretch of the Mojave Desert in the shape of an eye. In 1989 Pierre Comte did something similar with Signature Terre: sixteen squares of black plastic fabric with sides measuring 60 m, creating the “Planet Earth” symbol. Two noble attempts to leave a trace and write history, but as works of art not surpassing the early Land Art of Robert Smithson (Asphalt Rundown, 1969, and Spiral Jetty, 1970) or even smaller interventions by Richard Long (A Line Made by Walking, 1967) or Christo and Jeanne-Claude’s Surrounded Islands of 1980–83. No single work of art, however, can compete with the collaborative global effort to create a new geological layer over the earth, consisting of asphalt, concrete and plastic, contemporary materials representing our current civilisation. No matter what happens, we will all be remembered, that is for sure. We just don’t know how. Will we arrive at a moment of sufficient self-alienation where we can contemplate our own destruction as a static spectacle? [7] I don’t think so. We will be too busy with self-preservation, looking back to figure out what lies ahead. Like the speakers of Aymara, an Indian language of the high Andes, who think of time differently from just about everyone else in the world, we should position the future behind us, because we cannot see it, and the past ahead of us, since that is the only thing we can see. This is precisely what so many artists are doing today: looking backwards to discover the future. Whatever lies in front of you and can be seen is used as a source of inspiration to imagine the unknown.

(A reaction to Dieter Roelstraete’s The Way of the Shovel: On the Archeological Imaginary in Art /e-flux journal by Maarten Vanden Eynde, April 2009)



[1] Michel Foucault used the term épistème in his work The Order of Things (Les Mots et les choses. Une archéologie des sciences humaines, 1966) to mean the historical a priori that grounds knowledge and its discourses and thus represents the condition of their possibility within a particular epoch. “I would define the episteme retrospectively as the strategic apparatus which permits of separating out from among all the statements which are possible those that will be acceptable within, I won’t say a scientific theory, but a field of scientificity, and which it is possible to say are true or false. The episteme is the ‘apparatus’ which makes possible the separation, not of the true from the false, but of what may from what may not be characterised as scientific”.
[2] Ray Kurzweil, ‘The Law of Accelerating Returns’, 2001
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to the Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
[3] The Club of Rome is a global think tank that deals with a variety of international political issues. It was founded in April 1968 and raised considerable public attention in 1972 with its report Limits to Growth. In 1993, it published a follow-up called The First Global Revolution. According to this book, “It would seem that humans need a common motivation, namely a common adversary, to organize and act together in the vacuum; such a motivation must be found to bring the divided nations together to face an outside enemy, either a real one or else one invented for the purpose….The common enemy of humanity is man….democracy is no longer well suited for the tasks ahead.”, and “In searching for a new enemy to unite us we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like, would fit the bill.” This statement makes it clear that the current common adversary is the future itself.
[4] Alba, the first green fluorescent bunny, made by artist Eduardo Kac in 2000, is an albino rabbit. This means that, since she has no skin pigment, under ordinary environmental conditions she is completely white with pink eyes. Alba is not green all the time. She glows only when illuminated with the correct light: when (and only when) illuminated with blue light (maximum excitation at 488 nm), she glows with a bright green light (maximum emission at 509 nm). She was created with EGFP, an enhanced version (i.e., a synthetic mutation) of the original wild-type green fluorescent gene found in the jellyfish Aequorea victoria. EGFP gives about two orders of magnitude greater fluorescence in mammalian cells (including human cells) than the original jellyfish gene.
[5] Cryonics is the speculative practice of using cold to preserve the life of a person who can no longer be supported by ordinary medicine. The goal is to carry the person forward through time, for however many decades or centuries might be necessary, until the preservation process can be reversed, and the person restored to full health. While cryonics sounds like science fiction, there is a basis for it in real science. (www.alcor.org)
[6] KEO, the satellite that carries the hopes of the world. What reflections, what revelations do your future great-grandchildren evoke in you? What would you wish to tell them about your life, your expectations, your doubts, your desires, your values, your emotions, your dreams? (www.keo.org)
[7] Walter Benjamin (Technocalyps – Frank Theys, 2006)

Click to glitch

Tuesday, March 22nd, 2011

What was a glitch 10 years ago is not a glitch anymore. This ambiguous contingency of glitch depends on its constantly mutating materiality; the glitch exists as an unstable assemblage in which the materiality is influenced on the one hand by the construction, operation and content of the apparatus (the medium) and on the other hand by the work, the writer, and the interpretation by the reader and/or user (the meaning). Thus, the materiality of glitch art is not (just) the machine the work appears on, but a constantly changing construct that depends on the interactions between text, social, aesthetical and economic dynamics and of course the point of view from which the different actors are involved and create meaning.

Rosa Menkman – Glitch Studies Manifesto

The Glitch Studies Manifesto is both timely and anachronistic; while it’s tempting to think that we’ve been here before, the Manifesto simultaneously represents a return to and a development of the glitch phenomenon, bringing it new relevance. As Rosa Menkman suggests, what a glitch is now is not what it was then; glitch as practice has begotten glitch as genre, and genre relies on practice in context.

In the Manifesto Menkman declares that the “beautiful creation of a glitch is uncanny and sublime”, which she implies is an accident, the result of machine failure, contrasting this with the process of “the creation of a formally new design, either by creating a final product or by developing a new way to re-create or simulate the latest glitch-archetype”, which she characterizes as a domesticated “conservative glitch art”.

While the glitch aesthetic has been mutating and hybridizing, as a genre it has traveled some way from its origins. The name of the Soundcloud glitch group, for example, seems anomalous; the music tends to be variations on drum n bass or dubstep, with little of the dynamic abrasion one might associate with glitch. I recently saw a performance by ‘pioneer’ glitch musician Markus Popp/Oval, and while he still employs clicks and whirs, his music has become slick, sophisticated and rhythmically complex; the glitchy rawness of the sound, which once gave his music its striated melodic tentativeness, has been smoothed and controlled. The direct effect of the broken technological tool that reveals its own materiality through malfunction, made visible as glitch artefact, seems to have undergone a kind of aesthetic remediation.

Has the glitch phenomenon become a nostalgic aesthetic materialism, renowned as much for the distinction of introducing the aesthetics of digital materiality to a Kanye West video as for a post-digital dystopian apocalypse effect? Perhaps some future image software will have a button to click to glitch (perhaps it already exists, let me know in the comments if it does), the sort of remediative emulation that once drove the design of Adobe After Effects filters that reproduce ‘realistic’ film scratches and the grain of legacy film stocks, or the Hipstamatic iPhone app, which creates digital photographic images that look like seventies snapshots.

If glitch has to some extent become redefined as an effect, does it matter? Must glitch be solely conceived of as the result of the specificity and mutability of digital media? In analogue technology, the sound of the scratched record, whether produced by an accidental nudge, or as a trope in recorded music, an innovative rhythmic force transforming the recording into a sampling instrument in the hands of Grandmaster Flash, or as post-analogue nostalgia for the surface noise of recorded music, is still emblematic of an indexical and media-specific materiality: the stylus in the groove, the materiality of the sound object in itself, retains its agency, intentionally or otherwise. Does glitch-as-effect, glitch-producing software, maintain an aesthetic symbolic link to the materiality of the hardware, retaining the trace of mutability and digital materialism? Which is to say that if the glitch effect is not physical, then effect as the index of digital malfunction can just as validly be considered symbolic and significatory.

However, if glitch as practice or genre is not to be totally pensioned off as retro-kitsch remediation, where is its renewed critical currency and efficacy to be found? If we are thinking in terms of the materiality of digital media, then what of the materiality of the digital post-medium? Post-medium in that, as is well known, in the past ten or so years widely accessible increased network bandwidth, coupled with more powerful domestic computing, has made the internet a viable context for social and media-based activity. After years of promise, convergence has become a reality as text, moving and still images, and sound increasingly circulate on the same global network of computers, on a number of complementary platforms and applications, each dedicated to variations in mode and reception of dissemination across a range of forms.

Critical Artware formed from a collaborative group of artist-programmer-hackers based in Chicago, interested in the connections, ruptures and dislocations between early moments of Artware or Software Art and other instruction-set-oriented approaches to conceptual and code-based practices such as Fluxus, Conceptualism, and early Video Art. As can be seen from the video on their website, which partly documents their activities, including their participation in Blockparty, those activities are both critically subversive and productive. The online video processes its documentation through glitch techniques, echoing the fragmentary logic of glitch aesthetics, documenting both real-time and real-space events, shot through with a fast montage of projections of material on Vimeo, flickr, YouTube, Facebook, Ustream, as well as hacked software, media and signal manipulation, games, networks, etc. The activity is social, but crucially it takes place through both real-world and online meet-ups, each permeating the other to the point where they become indistinguishable. The website itself becomes part of the expanded milieu as it becomes increasingly difficult to distinguish the video documentation itself from the images of the meet-ups and events from the web within the documentation, from the browser window, from the website itself, perhaps ultimately from the very desktop and screen of your computer.

This undifferentiated mash-up of objects produces a kind of synergistic entropy, in which the glitch is not simply a reified materiality but also fragments and disrupts communication, accelerating the fragmentary logic of multitasking social situations on and off line: both a glitch transmission and a real-world symbolic representation of the glitch logic of fragmentation, as anarchic mischief becomes a mobilising force.

While the work of Critical Artware uses social networks as both a platform and an object of critique, the potential of online video means that Rosa Menkman’s Noise Artifacts Vimeo group can approach a critical mass of its own. At the time of writing the group numbered 339 members, who had posted 512 videos. The materiality of discrete media objects operates within the complex materiality of the hyperobject of the world wide web; the glitch operates within the ontology of both.