24
The Nuts and Bolts of Handling Digital Art

Ben Fino-Radin

Introduction

During the short timeframe in which there has been a discourse on the conservation of digital art, ethical, philosophical, economic, and institutional issues have been extensively discussed. It is surprisingly rare, though, to find detailed technical studies of actual on-the-ground, hands-on conservation of digital works of art, either as artwork-centric case studies or as a general technical overview of day-to-day practices. This belies the fact that such conservation work is in fact being conducted in institutions collecting digital art. The problem is due partially to the fact that specialists from outside the conservation field are quite commonly contracted on a temporary basis to assist with the conservation of complex digital artworks. The technical knowledge and skills of such specialists, however, rarely penetrate the scholarly discourse of such practices, and thus such discourse is rarely grounded in hard technical fact. A truly deep material understanding of digital art has yet to permeate the conservation field in the way it has in the older and more canonized media of analog video and film.

The intent of this chapter is to serve as a thorough introduction and guide to the fundamental goals, concepts, and theories of the conservation of digital works of art, and then to delve fully into a survey of tools, methods, and practices used in the day-to-day care of these works—drawing from fields such as digital preservation, digital forensics, retrocomputing, and video game preservation. To illustrate these concepts and practices, real-world examples from media conservation at New York’s Museum of Modern Art (MoMA) will be employed. These are offered not in a purely case-study format, but rather as simple and practical examples that illustrate a broader and holistic practice. This chapter should not be interpreted as a linear guide—the various phases of the conservation lifecycle of artworks presented here can, in practice, occur in virtually any order. Indeed, certain aspects covered in this chapter may never be applied to a work at all due to limitations of time and funding. Such is the reality of conservation practice in the real world, where time, budgets, and people are finite.

Fundamental Concepts

The fundamental issue addressed by this chapter is the fact that any work of art that employs technology of any kind is inherently tied to a market that is opposed to long-term preservation and stability—obsolescence has come to be essential to the economics of technology. This problem of obsolescence and reliance on the market is certainly a prevalent force in the conservation of all contemporary art;1 however, it is especially fundamental in media conservation. Obsolescence is an essential inherent vice that defines the unique responsibilities of a media conservator. The unique roles of conservation of technology-based art could be summarized as the following: mitigating obsolescence, navigating the minefield of variability, and protecting the integrity of the artwork. Pip Laurenson (2006) has formulated the following as the fundamental roles and responsibilities of contemporary conservation and media conservation:

  • Conservation is the means by which the work-defining properties are documented, understood, and maintained.
  • Conservation as a practice aims to preserve the identity of the work of art.
  • Conservation aims to be able to display the work in the future.
  • Conservation enables different possible authentic installations of the work to be realized in the future.

Technologies and materials in consumer products age, stop working, and become difficult or impossible to replace. This problem will never end. That such products become part of artworks means that conservators are fighting a battle that cannot be won through opposition to change and strict notions of authenticity, but only through elegantly coping with and managing acceptable degrees of change. This approach is best supported by a deep understanding of the fundamental nature of these technologies and materials, their inherent vices, and their unique properties and characteristics.

Learning the Work—Initial Conservation Assessment and Interview

The responsibilities of conservation begin even before a work of any sort enters a collection. Conservators play an essential role in the pre-acquisition and acquisition process, helping curators to understand latent preservation risks inherent in a given work, what materials must actually be collected, and long-term costs. When dealing with digital art the very first phase of the acquisition process is an initial assessment of the work conducted either through direct interaction with and examination of the work itself, or research conducted solely through documentation of the work. This initial assessment by a conservator is critical. Were the collecting institution to rely solely on the artist’s discretion to define what digital objects and constituent parts of the work were to be collected, in many cases, this material alone would be insufficient for long-term preservation. Often it may be the case that the artist’s first impulse is to readily provide collectors and institutions with exhibition-ready materials. While best archival practices have permeated the analog video world—owing in part to the artist’s reliance on production houses for the creation of master tapes—in the digital world, where the artist is no longer reliant on professionals for production and distribution, all bets are off. The purpose of the initial conservation assessment is to understand specific and broad technical features of the work: What is the work? Is it software? Is it web-based? Is it digital video? What tools did the artist use to create the work? What external dependencies are required to properly render the work for exhibition? Does the work require an Internet connection, and if so, what is it supporting? Asking such questions of the work forms an initial set of facts—or in many cases, unanswered questions—that can inform an initial pre-acquisition conversation between the artist and the collecting institution.

Once a basic and fundamental understanding of the artwork’s material form has been reached, it is time to engage the artist in a series of questions, in order to further inform and guide the acquisition decision-making process. The desire of the institution, of course, is to acquire any and all materials and information required for the long-term stewardship of the artwork, regardless of what any future conservation strategy for the work at hand may be. After one has assessed the form and boundaries of the work, it is central to accomplishing this goal to learn exactly how the artist created the end result—understanding the production environment, tools, and decision-making process. Rather than requiring artists to meet a strict format policy based on what the institution deems may be an archival format, the intent in this phase of the acquisition is to understand what might constitute a master or archival format given the specific and particular production environment of the work at hand. The form this dialogue takes can vary greatly, depending largely on the time the conservator and artist have available. However, a surprising amount of ground can be covered through a brief e-mail exchange comprising a handful of very basic questions. There is no magic list of essential questions that covers all bases in all situations. Instead, questions should be tailored to the work, with the goal of first understanding the artist’s process. The second goal should be to understand any recreation or reformatting of the work that has occurred prior to acquisition. It is often the case that by the time a work is being acquired by an institution, the artist has had to recreate, revisit, or produce new exhibition files for a given artwork. If this is the case, it is critical to understand the process used by the artist: what files serve as the “masters,” what tools were used, what criteria were used for quality assurance, and what form the work took in any subsequent exhibition contexts.

This pre-acquisition dialogue is not to be mistaken for a formal artist’s interview. An artist interview is a critical tool for delving deeper into specific conservation issues latent in the work, usually post-acquisition. Much has been written on the methodology of artist’s interviews (Beerkens 2012), and the interview is well established as a tool in contemporary conservation. It is in no way specific to the conservation of artworks that employ media or technology. In the mid-2000s, the Variable Media Network attempted to simplify and standardize the process of the artist’s interview through the development of a questionnaire. This methodology, however, had severe inherent biases—namely its scripted question-based format, and its rather dichotomous framing of “storage, emulation, migration, and reinterpretation” as mutually exclusive strategies. As these flaws came to be recognized, the Variable Media Questionnaire2 eventually evolved away from being a static list of questions to be posed to artists, and into a more general tool for building custom questionnaires for any constituents. Despite the value that interviews with artists can offer, it is critical to integrate such documentation and evidence as simply one factor among many—not as factual guidance that should dictate the life of an artwork, but as qualitative evidence. Media conservators, of course, hold the perspective of the artist as crucial to the balance of factors that inform the understanding of an artwork, and are chiefly concerned with documenting this—however, the artist’s interview should not put the artist in the position of making “life or death” decisions, such as whether or not the artwork should be discarded in the event that a fundamental technology ceases to function. To pose such drastic scenarios to the artist in the cold format of a scripted questionnaire disregards and denies the sociological complexities involved in the very situation of the interview. Glenn Wharton and Fernando Domínguez Rubio write:

As conservators of contemporary art expand their practice to include artist interviews, they have a lot to learn from allied professions with years of experience in qualitative research. Oral historians, anthropologists, and sociologists know the advantages but also the risks involved with the use of interview research […] interviews are research tools with potentially problematic assumptions and unintended consequences. […] The questions we ask, and the ones we don’t, as well as how we ask them, shapes the kind of responses and information we obtain. It is for this reason that interviews are better understood as guided conversations.

(Wharton and Rubio 2013)

Wharton and Rubio go on to discuss the interview situation as a scenario wherein artists “stage” themselves—presenting an image and opinions, wittingly or not, that may not be consistent with the reality of the artwork. An interview is simply a recording of a specific snapshot of the artist’s evolving self in a highly contextual and loaded situation. Considering the setting of the interview, the interviewer, and the context of the institution that is collecting the work, all information produced in an interview setting is far from objective fact. To take the responses to such an interview as the canonical guide for the future conservation treatment of the work would, somewhat ironically, accomplish the opposite of what the conservator sets out to do—the act would freeze the work in time, according to the parameters and variables that were present in the particular interview. A more effective use of the artist’s interview recognizes conservation’s own subjectivity, and views the interview as merely one factor, albeit a very important one, in the media conservator’s holistic consideration of the work, rather than as an immutable checklist of questions.

Collection and Capture

Regardless of the broader form of an artwork—be it a complex installation that includes digital video, an executable piece of software, or a single-channel digital video—all digital components of an artwork are generally delivered to collecting institutions on some sort of tangible carrier of digital information—a hard drive, thumb drive, or optical disc (such as a CD or DVD). It is today understood that such tangible media carriers are not acceptable forms of storage for long-term preservation, and that the digital objects they contain must be captured and migrated to a centralized form of digital storage that is properly monitored and maintained by IT professionals. This point of capture is a critical moment in that there are essential facts of provenance that must be documented: where the digital object originally came from, what process was undertaken to capture the digital objects, by whom they were captured, and when. The digital archives field has similar concerns, and in order to meet these needs has adopted many tools and methods originating in the field of “digital forensics”:

The same forensics software that indexes a criminal suspect’s hard drive allows the archivist to prepare a comprehensive manifest of the electronic files a donor has turned over for accession; the same hardware that allows the forensics investigator to create an algorithmically authenticated “image” of a file system allows the archivist to ensure the integrity of digital content once captured from its source media; the same data-recovery procedures that allow the specialist to discover, recover, and present as trial evidence an “erased” file may allow a scholar to reconstruct a lost or inadvertently deleted version of an electronic manuscript—and do so with enough confidence to stake reputation and career.

(Kirschenbaum, Ovenden, and Redwine 2010)

Tools that provide such detailed and standards-based documentation of the original carrier media and the process by which the digital objects were extracted are not only useful in preserving whole computer environments, but are helpful even when the tangible carrier is simply a delivery device with no inherent worth as a physical artifact. If the delivery device comes from the artist, it may contain contextual evidence that would be of great interest and potential use to researchers interested in technical art history. Capturing detailed metadata about the original order of files, the file system(s) present on the original hard drive, and its partition map preserves technical details of the artist’s working environment—this information may prove invaluable in future conservation scenarios, just as an X-ray revealing the characteristics of a painting’s canvas weave may provide critical evidence for identification or authentication.

There are ample free and open source tools for aiding in this acquisition process. A project led by the School of Information and Library Science at the University of North Carolina, Chapel Hill (SILS) and the Maryland Institute for Technology in the Humanities (MITH) called BitCurator has endeavored to gather the best free and open source digital forensics tools in one portable environment. This offers those working in digital archives, museum conservation, and generally any cultural heritage collection tasked with the acquisition of physical media carriers, a soup-to-nuts system for managing all phases of this process while employing standards-based metadata for the documentation of process and material. There are, however, some basic needs that can be met even in the absence of the adoption of a full suite of tools such as BitCurator. In the Museum of Modern Art’s conservation department, we have developed a simple and basic tool3 to act as a stopgap until a more developed digital forensics workflow is put into practice. This tool provides the basic assurance that files are transferred from the tangible media carrier to centralized storage flawlessly, ensuring bit-for-bit authenticity.

The following sections provide examples of some solutions for the capture of specific tangible media carrier types. These carriers have been divided into two sections: contemporary carriers that can easily be connected to contemporary acquisition workstations, and legacy media carriers that are more challenging to work with. For all examples of disk imaging and capture workflows a Lenovo X230 ThinkPad with two bootable operating systems—Ubuntu 12.04 LTS, and Windows 7 (64 bit)—was used.

Mountable Contemporary Materials: 3.5" High Density Floppy Disks

Why, you may ask, is a section covering contemporary media carriers leading off with something so antiquated as a floppy disk? First, while certainly not contemporary, these disks are quite commonly found in artists’ personal archives if the artist was working with computers during the 1990s. Second, 3.5" floppy disks that are marked “High Density” can today be accessed very easily with contemporary USB floppy disk drives, which are at the time of writing both affordable and abundantly available. These drives, unfortunately, are only able to read 1.44 MB high density (HD) floppies, leaving 3.5" double-sided double-density (DSDD) disks and 5.25" floppy disks out in the cold. Identifying the difference between HD and DSDD 3.5" floppy disks can be easily accomplished by counting the number of holes present in the two back corners of the disk (the side held when inserting a disk). HD floppies have two holes, while DSDD disks have only one. Identifying this difference is absolutely critical, as the two formats require completely different recovery strategies. The ability to mount 3.5" HD floppies natively on a host system is significant—it dictates an incredibly simple capture process. Furthermore, there is a fundamental best practice that must be followed when dealing with tangible media carriers in a conservation context: write blocking. The 3.5" floppy disk happens to offer this as a built-in feature of the format, and therefore serves as a fitting introduction to the concept.

Write blocking is the practice of employing some method of preventing one’s acquisition workstation from in any way writing data to media attached for acquisition. Nearly all contemporary operating systems (Linux being the major exception) write hidden files and metadata to removable storage media the instant it is connected and accessed. If a conservator were to inadvertently write such data to a hard drive belonging to an artist, this would be a fundamental and undocumented compromise of the authenticity and provenance of the tangible media carrier, one that would certainly be the cause of questions in future contexts, such as, “What are these files from 2014 doing among these files that the artist made in 1994?” When mounting any sort of media in conservation, archival, or forensic settings, it is a best practice to implement some form of write blocking. This ensures that the artifact may be read, but not written to. In this, our first example of 3.5" high density floppy disks, the carrier itself possesses built-in write-blocking capabilities. If one inspects the underside of a 3.5" HD floppy, one can observe that one of the two holes in its form factor has a small plastic switch that alternately renders the hole open or closed. When this write-blocking tab is in the “open” position, such that one can see through the hole, the disk is write-protected or “safe.” If the tab is in the “closed” position, the disk is write-enabled. For our purposes, when handling floppy disks containing artists’ materials we always want this switch to be in the write-protected position. Once this is ensured, one can insert the disk into an external USB drive, and begin the actual process of capturing the disk.

In order to accomplish the capture and documentation of tangible media carriers from the previously discussed perspective of low-level capture for purposes of archival provenance, and enabling future scholarship of technical art history, we produce what is called a “disk image,” a recording of every bit read from the tangible media carrier. There are many tools for producing disk images, and many formats of disk images. We will begin with the most basic and oldest of tools and formats—producing “raw” disk images with the “dd” program. On Linux and Macintosh systems, the “dd” or “direct duplicate” program is a command line-based utility packaged as part of GNU coreutils.4 To produce a disk image with dd, the most basic invocation possible is:

dd if=foo of=bar

where “foo” represents the path to the “device file” of the disk we seek to image, and “bar” represents the file path and name of the image we wish to create. A device file is essentially a directory or file in a Unix or Linux file system, which points to a peripheral device such as our external USB floppy drive. Let us apply this methodology to a specific use case. We have a 3.5" HD floppy disk with the words “Drawings 1994” written on its label. We know that it is a high-density floppy disk, as it has two holes in its form factor, in addition to having the “HD” logo stamped in one corner. We ensure that the write-blocking tab is set, but before we insert this disk into our external USB drive, we need to become familiar with what volumes and devices are already mounted on our host machine, so that once we insert the floppy disk, we can identify it as a new device listed among the previously identified devices. By typing the “mount” command into our Linux terminal, we are offered a listing of all currently mounted volumes (be they physical disks or disk images). This listing includes the device file, as well as the volume name, which will be useful in determining which device file has been assigned to the floppy disk drive. After inserting the floppy disk and invoking the “mount” command once more, we can see that there is a new line, listing a device file path of /dev/disk01s2 with a volume titled “Drawings 1994”. This device file /dev/disk01s2 is precisely what we need to pass to our “dd” command as the “input file.” However, before we proceed, we must unmount the attached volume. If we do not do this, dd will throw an error, reporting that the device is busy or in use. “Drawings 1994” can be unmounted by invoking “umount Drawings\ 1994”. Note that the backslash (“\”) character is employed to indicate to the terminal that there is a space in the volume name. Invoking the “mount” command once more, we now see that the entry for “Drawings 1994” is gone. Now, we can safely run our final dd command:

dd if=/dev/disk01s2 of=~/Drawings_1994

The above line specifies the input file as our previously discovered device file. The output file simply specifies what to name the output, and where to put it—and in our case we used the tilde (~) character as a shortcut for our user’s home directory, and named the output “Drawings_1994”. While running, dd does not provide any output to the user. Upon completion we are offered a message that lists the number of bytes read and written. This is instructive of how dd functions—its most basic function is the duplication of bytes. It is only our specific use of the device file as input that sets its role as the creation of a disk image. By default, dd reads and writes data in chunks of 512 bytes. This is perfectly acceptable for something so small in capacity as a floppy disk. For larger storage devices, however, a larger block size may be specified with the “bs” option (e.g., bs=16M).

The above process can be seen as the most base-level strategy for the production of disk images. dd is a tried and true tool that has withstood the test of time—and as part of GNU coreutils it is available by default on standard Linux distributions and on Mac OS X. It is, however, extremely limited on several counts: first, the absence of user feedback presents a major usability problem. Second, dd does not provide any built-in, user-auditable, or human-readable means of ensuring that the disk image is in fact a bit-for-bit representation of the source disk. While it can be assumed that dd employs some kind of error checking during the copy process, the user’s inability to audit this, or to retain any record of it for later audits, leaves much to be desired. There are, however, several tools that do just that. Guymager is an open source application for Linux that allows users to produce disk images in raw (dd) format as well as in two formats used in the world of digital forensics: the Expert Witness Format and the Advanced Forensic Format. The latter two formats allow one to include metadata about the original source media and imaging process. FTK Imager is another free tool that provides a graphical user interface for Windows users, and a command-line interface for Linux and Macintosh users.
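As a stopgap for dd’s lack of built-in verification, one can simply hash the source device and the resulting image and compare the results. The following is a minimal, illustrative sketch in Python—not a tool described in this chapter—reusing the device and image names from the example above; reading a raw device file will generally require elevated privileges.

import hashlib
import os


def sha512_of(path, chunk_size=8 * 1024 * 1024):
    """Compute the sha512 checksum of a file or device, reading in chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


source_device = "/dev/disk01s2"                     # the unmounted floppy from the example above
image_file = os.path.expanduser("~/Drawings_1994")  # the raw image produced by dd

if sha512_of(source_device) == sha512_of(image_file):
    print("Verified: the image matches the source device bit for bit")
else:
    print("WARNING: the image does not match the source device")

Dedicated imaging tools such as Guymager can perform this kind of hash-based verification as part of the acquisition process itself and record the result alongside the image.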

Mountable Contemporary Materials: Hard Drives

In the mid-1980s, as personal computers began to include internal hard disk drives, two main connection interfaces came into use: Small Computer System Interface (SCSI) and Integrated Drive Electronics (IDE). The IDE connection standard is also referred to as Parallel AT Attachment (PATA). The significance of the personal computer’s transition from complete reliance on removable storage media (the floppy disk) to the introduction of internal hard disk storage cannot be overstated. In his seminal text Mechanisms: New Media and the Forensic Imagination, Matthew Kirschenbaum writes,

my work was suddenly somehow part of the computer itself, not shunted back out to peripheral media. The computer was no longer just a processing engine […] but something more like an individualized entity, with its own unique memory. In a roomful of otherwise identical-looking terminals I could point to one in particular and say, “that’s my computer.”

(Kirschenbaum 2008)

Unlike 3.5" HD floppy disks, hard drives provide no built-in write-blocking capability. Not only does this introduce the need for a dedicated hardware write-blocking device to act as an intermediary between the drive and the capture workstation, but, as described above, these drives can employ one of a variety of physical interfaces, or connections—each connection standard requiring a different kind of write blocker. The Forensics Wiki5 provides a good guide to various models of commercially available write blockers. An essential limitation, though, is that since these devices come from the law enforcement world, they are concerned with contemporary applications. Thus it is already increasingly difficult to find write blockers for hard drives with SCSI connections. After connecting a hard drive to a write blocker, connecting the write blocker to the workstation, and powering on all devices, the workflow for the production of disk images, or acquisition of files, is precisely the same as the workflow described above for 3.5" HD floppy disks.

Unmountable Media: 3.5" DD and 5.25" Disks

Unlike 3.5" HD floppy disks, and hard disk drives compatible with contemporary forensic bridges, 3.5" DSDD (double-sided double-density) disks, and all variety of 5.25" floppy disks, present a much more challenging process of capture. The methods suggested previously for the capture of the 3.5" HD floppy disks are only possible due to the ability to connect these devices to one’s workstation using physical hardware that is currently compatible with contemporary computers. When working with 3.5" DSDD floppy disks, and 5.25" disks, we must employ much more advanced tools, due to the fact that the drives capable of reading these formats are not readily compatible with contemporary computers. 3.5" DSDD floppy disks are not readable by the type of USB 3.5" floppy drives that can still be found today, and in the case of 5.25" floppy disks there is essentially no form of ordinary contemporary consumer hardware for reading these disks. In both cases we must turn to vintage hardware that would have been originally used for reading and writing such media, and rely on an intermediary device that will allow us to connect it to our contemporary workstation. In some ways this quite parallels the digital capture of legacy analog videotape. Just as one must use a U-matic video cassette deck for the playback of U-matic tapes, one must use a 3.5" DSDD drive or 5.25" floppy drive, respectively, for the reading of such disks. We then must employ some means of allowing our contemporary capture workstation to interact with this device. The key in this case is called a “floppy controller.” This is a small device that acts as an intermediary between the vintage floppy drive and the contemporary workstation. One such device is the Kryoflux, which provides a hardware device for controlling 3.5" DSDD and 5.25" drives, as well as software for the production of disk images. This device, created by the Software Preservation Society,6 also allows for the creation of incredibly low-level disk images, which record not the bits as interpreted by the workstation, but rather the actual voltage fluctuations produced by the floppy drive’s reading of the magnetic flux reversals present on the disk. These are recorded in a proprietary format, but are a useful artifact to retain in addition to a standard “sector level” disk image (i.e., the fluctuations as interpreted into bits readable by a computer) in a raw format. The Kryoflux web site offers a free download of the Kryoflux software, as well as a detailed manual.7

When a Disk Image is Overkill

In the day-to-day operations of a collecting institution working with contemporary born-digital materials delivered by artists, there are times when a disk image is overkill. For instance, if an artist purchases a small portable hard drive simply so that they can deliver four video files—and that is all that they place on this newly purchased hard drive—it could be argued that creating and retaining an image of this hard drive is privileging the carrier over the content. In such a case, is the lowest-level capture possible of the disk drive itself what is worthy of preservation, or is a verifiably bit-perfect copy of the individual files contained on that drive what is of primary interest? If the artist happened to use a 1 TB hard drive, but the files only occupied 100 GB of storage, a raw disk image would in fact be 1 TB, retaining a recording of the empty space on the disk. This is desirable, of course, when a bit-perfect digital surrogate of the hard disk is critical, such as in the case of a complex software-based artwork that is acquired with a dedicated computer, containing dependencies and a specific operating system; or in the case of an archive of artists’ materials, where materials may not be cataloged at the file level and it is desirable to take a “more product, less process” approach, getting bit-perfect digital surrogates of the physical artifacts (disks). However, in our hypothetical scenario where the artist has simply purchased a brand-new hard drive, placed four files on it, and delivered it to the collecting institution, the disk image is arguably unnecessary. To retain a disk image in such cases would be akin to digitizing an hour-long Digital Betacam tape that only contained fifteen minutes of content. The materials of use and interest are the files themselves.

At the Museum of Modern Art we have devised a small tool for assisting in the acquisition of materials in cases where we want to simply extract specific files from a disk, but would also like to maintain the “original order” of these files, to have verifiable proof that they were copied from disk flawlessly, and in the end store the materials in a standards-based format that will allow us to ensure a seamless chain of custody. This tool, called pre-ingest.py, is written in Python, and can be found on GitHub.8 What it actually does is relatively trivial, and in most cases it leverages other modules for accomplishing its work, but the end result is the assurance of a perfect chain of custody. Instructions on the tool’s use can be found in its readme and help file, but here we will review its essential processes:

  1. The user invokes the script, providing a source volume or directory, as well as a destination volume or directory. As this tool was developed for use at MoMA, there is a flag for including the MoMA accession number of the artwork to which the materials being transferred belong. The purpose of this is to automatically (via our collections management system’s API9) name the destination directory according to the following format: ArtistLastName_ArtistFirstName---Title_Of_Work---AcessionNumber---PersistentID.
  2. Using the hashlib module, a list of sha512 checksums is produced for the files in the specified source volume or directory. This is a recursive process, meaning that all sub-directories are included, at any depth.
  3. A directory is created at the specified destination, with the naming format described in step 1, and the files in the specified source are then copied to this directory, using rsync in a manner that preserves the original order of files, and all metadata inherent to the files, such as file permissions, and created, modified and last opened dates.
  4. Upon completion of the rsync transfer, the destination directory is converted to what is called a “Bag”—a standard sometimes referred to as BagIt. “BagIt is a hierarchical file packaging format designed to support disk-based or network-based storage and transfer of arbitrary digital content.”10 The types of bags that this tool creates (using the python-bagit module) have four essential parts: (1) the payload—or the files that we transferred; (2) the manifest, which is a text file that lists all files contained in the payload, and a sha512 checksum for each; (3) a file called bagit.txt, which contains information about how the bag was created; and finally (4) the tag-manifest, which is a text file that is similar to the bag manifest, except that it lists the metadata files (bag manifest, and bagit.txt), and checksums for them. The idea of the bagit standard is that with this structure and these files, one can check and validate the “fixity” of the payload—in other words, one can ensure that the files one is stewarding have not become corrupt, have not been altered, and are present and accounted for. That this metadata is stored in a flat-file format is important, as it ensures that this critical information travels with the files themselves, and does not live in some external document or application. In fact, the bagit standard was very much designed for interoperability, for easy sharing of materials between institutions. The standard has seen wide adoption in the digital preservation community, and more recently in museums of contemporary art stewarding digital collections.
  5. The final action taken by the pre-ingest tool is validation. The script reads the bag manifest, and compares the checksums of the transferred files with the list (stored in memory) of checksums it created in step 2 from the files on the mounted source media. If any checksum does not match, this means that one of the files was altered or corrupted during the transfer, and the tool notifies the user. (A simplified sketch of this workflow follows the list.)
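The following is a minimal sketch, in Python, of the workflow just described. It is illustrative only and is not MoMA’s actual pre-ingest.py: the collections-management API lookup and the directory-naming convention are omitted, and it assumes that rsync is installed and that the python bagit module is available.

import hashlib
import pathlib
import subprocess

import bagit  # the python bagit module (pip install bagit)


def sha512_of(path, chunk_size=8 * 1024 * 1024):
    """Compute the sha512 checksum of a single file, reading in chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def checksum_tree(source):
    """Step 2: recursively checksum every file under the source directory."""
    source = pathlib.Path(source)
    return {
        str(p.relative_to(source)): sha512_of(p)
        for p in source.rglob("*")
        if p.is_file()
    }


def transfer_and_bag(source, destination):
    """Steps 2-5: checksum the source, rsync it, bag the result, validate."""
    source_sums = checksum_tree(source)

    # Step 3: copy with rsync in archive mode (-a), preserving permissions
    # and timestamps so file metadata travels with the files.
    src = str(source).rstrip("/") + "/"
    subprocess.run(["rsync", "-a", src, str(destination)], check=True)

    # Step 4: convert the destination directory into a BagIt bag with
    # sha512 manifests; make_bag moves the payload into data/.
    bag = bagit.make_bag(str(destination), checksums=["sha512"])

    # Step 5: compare the bag's manifest against the checksums taken from
    # the mounted source media before the transfer.
    for relpath, expected in source_sums.items():
        found = bag.entries.get(f"data/{relpath}", {}).get("sha512")
        if found != expected:
            raise RuntimeError(f"Checksum mismatch for {relpath}")
    bag.validate()  # also verifies payload completeness and tag files
    return bag

The real tool adds logging, the MoMA-specific naming convention, and friendlier error reporting, but the essential chain of custody (hash, copy, bag, validate) is the same.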

Post-Capture Preparation for Long-term Storage

The capture process is only the first step in the lifecycle of stewarding digital artwork. Once this process is complete, there are further actions that must be taken on the digital objects, and institutional resources that must already be in place. First and foremost is storage infrastructure—the entire purpose of extracting these digital objects from their tangible media carriers is so that they can be stored in a centralized, managed, and monitored storage environment. There are three essential requirements for any preservation-oriented digital storage system: lots of copies, lots of locations, and the ability to manage the integrity of these copies. The generally accepted recommendation is that three copies of collections materials be maintained, each in a different geographic location (Phillips et al. 2013). There are numerous ways to achieve these three basic commandments of digital preservation storage, and exactly how any given institution meets these requirements will vary greatly from one institution to another. Factors such as the size of the institution, the storage capacity required for the digital collections, the anticipated growth rate of the digital collection, and the budget for IT infrastructure and staffing all need to be taken into consideration.

The Matters in Media Art consortium, comprising MoMA, Tate, and SFMOMA, has worked to develop recommendations for digital collections storage that take these varying factors into account,11 providing three different tiers of solutions. Ultimately, it is quite impossible to provide specific digital preservation storage recommendations without knowledge of all of the contextual parameters outlined above. For some institutions, cloud services will make sense—for instance, if the capacity needed for the collection is quite small, and if internal IT support is already taxed to the limit. Meanwhile, for a massive collection that requires ample storage capacity and happens to have robust IT support and existing storage infrastructure, including offsite locations, cloud storage would not make sense, as it would come at great cost when existing in-house resources and expertise could instead be leveraged.

In addition to having geographically diverse data stores, the collections data must be monitored for integrity. In the world of enterprise-grade storage systems, the storage appliances themselves conduct some measure of integrity checking, both for ensuring that data between online mirrored data stores is the same, and for ensuring that the data within one site has not become corrupt. This also exists in the consumer realm: for example, a desktop RAID (Redundant Array of Independent/Inexpensive Disks) drive employs methods for knowing when a block of data has been corrupted and must be restored from a redundantly stored block. This sort of integrity checking is not, however, sufficient for preservation purposes—the reason being that these checks occur at the block12 level, which is beneath the file level (individual files are composed of many blocks). Therefore, these sorts of integrity checks that occur at the storage appliance are completely ignorant of the unit of information we care about in our use case—we are concerned with the integrity, safety, and authenticity of the files. Furthermore, one cannot audit a typical storage appliance for a log of proof that, for example, a given digital video file is an authentic bit-for-bit copy of the file that was received at acquisition. To achieve this goal, we produce, store, and audit checksums at the file level. The BagIt standard, introduced above, is a perfect example of one means by which file-level fixity metadata can be produced at acquisition, stored (in this case alongside the actual collections materials themselves), and checked periodically. There are ample tools for creating, managing, and checking the validity of Bags. Part of the convenience of the Bag format’s design is that the fixity metadata for digital objects inherently travels with the objects—it is independent of whatever storage appliance the materials live on, and can travel with the digital objects even for loans between institutions. There are some cases, though, where one may wish to monitor file-level fixity on a set of materials that are not stored in the BagIt format, and where it may be inconvenient to store the materials in Bags. Recently, the New York-based consulting firm AVPreserve has released an open source tool for doing just that—easily maintaining and monitoring fixity of any digital materials, whether or not they are stored in the BagIt format.13
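As an illustration of what a periodic, file-level fixity check can look like in practice, the following minimal Python sketch walks a storage directory, finds every Bag, and re-validates it against its own manifests. It assumes the python bagit module and a hypothetical storage path; it is not the AVPreserve tool mentioned above.

import pathlib

import bagit  # the python bagit module


def audit_bags(storage_root):
    """Validate every Bag found under storage_root and collect failures."""
    failures = []
    for bagit_txt in pathlib.Path(storage_root).rglob("bagit.txt"):
        bag_dir = bagit_txt.parent
        try:
            # validate() re-hashes the payload and compares the results
            # against the manifests stored inside the bag itself.
            bagit.Bag(str(bag_dir)).validate()
        except bagit.BagValidationError as err:
            failures.append((str(bag_dir), str(err)))
    return failures


if __name__ == "__main__":
    # Hypothetical storage location; adjust to the local environment.
    for path, error in audit_bags("/mnt/digital-collections"):
        print(f"Fixity failure in {path}: {error}")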

Beyond the File System: Digital Repositories

Most storage systems offer no more than a file system. This can be an effective first step for many institutions implementing digital collections storage: simply maintaining a series of directories that are carefully organized, to which access is limited for collections security concerns, and which follow best practices in terms of number of copies and geographic diversity. There is, however, a next step that is absolutely critical for properly and effectively preserving and managing digital collections over the long term, and that is the implementation of a digital repository. The term “digital repository” carries different meanings in different contexts—in the academic library and archives world, “digital repository” can refer to a system that simply houses the publications of faculty and students. Generally speaking, though, in the digital preservation world, “digital repository” refers to a system that houses digital materials in a preservation-oriented manner. Standards have been developed for the fundamental design model of digital preservation repositories,14 as well as for assessing the overall merit of the design, implementation, and management of such repositories.15 Within the context of collecting art institutions, the best metaphor is to think of a digital repository as the digital equivalent of art storage—a place where conditions are monitored and carefully controlled, access is tightly controlled and documented, the location of materials is carefully tracked, and any movement of collections materials in or out is robustly documented. This is precisely the purpose of a digital repository in the museum setting.

Historically speaking, such systems have long existed for digital libraries and archives,16 yet up until recent efforts by the Museum of Modern Art no such system had been designed for the particular needs of institutions collecting digital art. After years of working to define the functional requirements and use cases of such a system, MoMA began developing the first digital repository for museum collections—known as the DRMC—in 2013. Early on, it was found that the open source digital preservation processing system Archivematica17 fulfilled a great many of the DRMC’s functional requirements. Archivematica is a microservices-based18 system that processes digital objects according to the OAIS model, conducting tasks such as virus checking, filename sanitization, file format identification, characterization,19 and policy-based normalization,20 and it generates incredibly verbose, standards-based metadata as a record of all of these activities. Archivematica then packages these digital objects and metadata in the BagIt format. These bags are called Archival Information Packages (AIPs), a term from the OAIS model. Again, this fulfilled a great many of the DRMC’s requirements, but the missing piece of the repository was a system that correlated AIPs with their respective artworks in MoMA’s existing collections management system, recorded the complex relationships between digital materials (i.e., x file requires y software for exhibition), managed fixity checks, and provided the capability to conduct essential collections management activities such as monitoring the growth of the collection and identifying trends and anomalies with respect to digital file formats and characteristics. MoMA worked with Artefactual Systems (the makers of Archivematica) to develop a new system for managing digital repositories, and specifically for accomplishing the aforementioned aspects of ongoing stewardship and preservation. This tool, called Binder, has been released as free and open-source software, and is available for download at github.com/artefactual/binder.

Intervention and Exhibition: Fundamental Treatment Concepts

At this point we have surveyed the absolutely critical, but rather rudimentary topics of pre-acquisition analysis, acquisition and capture procedure, storage solutions, checking and maintaining integrity, and digital repositories. These topics are less frequently discussed in conservation literature than the theoretical and conceptual aspects of time-based media conservation; however, they are absolutely critical in forming the foundation in support of conservation activities. This section will introduce, from a practical standpoint, the various concepts that inform treatment strategies employed in time-based media conservation.

Emulation

Chances are that you have used an emulator, whether you know it or not. Emulation is commonly used in commercial technology to mitigate obsolescence—for instance, the ability to play classic Nintendo Entertainment System games on contemporary Nintendo gaming platforms. Simply put, an emulator is a piece of software that simulates the precise conditions and behaviors of a hardware-based computer environment other than the one on which the emulator itself is running. In a sense, emulation can be thought of as “virtual reality” from the perspective of the software running inside of it. For example, if one runs a piece of software written for the Apple //e computer inside of an emulation of the Apple //e on a contemporary computer (even a Windows-based machine), the software has no idea that it is not actually running on an Apple //e. Emulation is an incredibly economical and effective strategy for the execution and exhibition of software-based works, as one emulation can potentially provide access to many artworks (any that require the emulated environment), without any modification of the artwork’s source materials. There is an important distinction to draw here between a true emulator, and what is, within the retrocomputing world, affectionately referred to as a “hackulator.” Emulation means that the software of the emulator is designed to simulate the hardware of a specific machine and its peripherals—for instance, the Multiple Emulator Super System (MESS) offers a Macintosh IIci emulator, which simulates specifically the hardware of the Macintosh IIci. Hackulators, on the other hand, while they purport to be emulators—and are intended for the execution of obsolete software—are a mish-mash of various systems, implemented in a far less rigorous manner. The goal of a hackulator is simply to get the emulation close enough so that software that would have run on a range of similar systems will function in the hackulator. The Sheepshaver emulator is a good example of this. Sheepshaver is incredibly popular within the vintage Macintosh software community, and is a useful tool, but it is most certainly a hackulator. It is not designed to simulate the hardware of any specific Macintosh, but rather simulates a generic PowerPC Macintosh processing architecture. The upside is that Sheepshaver is very easy to use, and quick to configure. Within the context of conservation, however, it can be said that true emulation is the only viable option. In order to rely on emulation as an access and display strategy in the museum setting for software-based artworks, one must be able to compare an emulation qualitatively with the artwork running on its original platform—so as to analyze the fidelity of the emulation to the properties, behavior, look, and feel of the original environment. If an emulator that one compares to an original environment is in fact a hackulator, the analysis is essentially useless, since the hackulator is not in fact attempting to simulate the precise properties of the machine to which one is comparing it.

Virtualization

Virtualization is similar to a hackulator in the sense that it does not provide the emulation of a specific hardware model—this is not its purpose or intent. Virtualization is simply the act of simulating a generic processor platform for the execution of operating systems and software intended for that architecture. Virtualization occurs within the framework of a given virtualization platform—for instance, VirtualBox or VMware—and allows one to build a library of virtual machines that are managed by the platform, allowing one to save machine states, export disk images, and perform other such management tasks. Virtualization is today ubiquitous for web servers—rather than maintaining racks of dedicated servers, a sysadmin can now host numerous virtual servers, each with a different purpose and software environment, on a single rack appliance. This can be incredibly useful for the long-term preservation of, and access to, artworks that are web-based and require very specific server environments, as virtualization removes a device-specific dependency that is not tenable over the long term. Such a solution was devised at SFMOMA for the treatment of Lynn Hershman Leeson’s Agent Ruby (1999–2002). At the outset of the conservation treatment of this work, it had been running on a woefully vintage dedicated server, one that of course would not be sustainable in the long term. The solution that was devised was to create a virtual server on SFMOMA’s existing infrastructure, and to migrate the work’s environment to this new, contained virtual machine.21 Yet again, this act is in a sense analogous to the digitization of analog videotape, in that it is a process of taking an unstable physical asset and producing a digital surrogate that, if properly stewarded, can (at least in theory) be maintained indefinitely.

Recreation, Reinterpretation, and Replacement

The most involved of all time-based media conservation strategies is the act of recreation or reinterpretation. This method entails rebuilding an artwork based on technical documentation, and qualitative documentation of the original. This sort of recreation is not always a herculean undertaking that requires complete recreation from the ground up—in some cases it could be replacement of one technical component with another (for example, control software for projectors and motors in an installation-based artwork), with fine tuning based on direct observation, study of documentation, and qualitative analysis. Such cases are certainly non-trivial in that they introduce the potential for drastic change in the look and feel of the work, and so must be engaged in with rigorous analysis of results, and weighing of acceptable levels of change. Such is the fundamentally unique nature of time-based media and digital artworks—their distinctively allographic nature (Goodman 1972; Laurenson 2006).

Intervention and Exhibition: The Magnavox Odyssey

In 2014, MoMA acquired the very first home gaming console22: the Magnavox Odyssey (1972). Its inventor, Ralph Baer, was unquestionably a visionary—not only inventing the very concept of a home gaming console, but also inventing the light gun, which went on to become a ubiquitous accessory in both home and arcade gaming. The Magnavox Odyssey came with many different games in the form of what would look, to the layperson, much like cartridges. These cartridges would be inserted into the Odyssey to change games. Interestingly, though, these cartridges did not contain the games at all—they contained no software or logic. Rather, all game logic for the various games played on the Odyssey was present internally in the console. When inserted, a cartridge would complete a specific circuit in the Odyssey, configuring the device to play a specific game. Therefore, in the case of the Magnavox Odyssey, there is no software: no floppy disks, hard drives, or game cartridge ROMs to stabilize. While documentation of the Odyssey was delivered on a CD-R, requiring the capture workflows we have explored, there are no digital software materials present in the Odyssey itself that require stabilization. While it does contain digital components, its logic and design can be completely documented through schematics. There is no source code. Nonetheless, the Odyssey presents immense challenges for display and exhibition in a way that allows museum visitors to play and interact with the system.

The Odyssey is entirely monochrome—black and white—and was intended to be played on the consumer televisions of the early 1970s, which of course had cathode ray tube monitors. When one steps back to observe the anatomy and mechanics of the various games available on the Odyssey, they appear incredibly similar due to the primitive graphics of the system. There are usually up to three points of light on screen: the two players and a ball of some kind—sometimes with a stripe down the center of the screen. As a way of circumventing the limitations of this technology, and realizing a richer gaming experience, the Odyssey came with color overlays for each game. These overlays were printed on a type of acetate, and would adhere to the cathode ray tube monitor by way of static electricity. The overlays served essentially to set the scene for the game, for example, tennis (Figure 24.1). The interactive video elements produced by the Magnavox Odyssey console were so primitive that these overlays were needed to make one game more visually distinct from another—as well as indicating to the player the active areas of play.


Figure 24.1 Overlay for Magnavox Odyssey Tennis.

Photo courtesy of Ben Fino-Radin.

Senior Curator of Architecture and Design Paola Antonelli, who spearheaded MoMA’s collection of video games as examples of interaction design, wanted to include the Magnavox Odyssey game Tennis in the 2014 exhibition A Collection of Ideas.23 The exhibition was to be staged in the Architecture and Design department’s third-floor gallery space devoted to rotating exhibitions of the permanent collection. It was in this gallery that MoMA exhibited the first group of video games collected, in the 2013 exhibition Applied Design. The general display strategy and visitor experience that Paola and her curatorial team designed for the video games was one that exhibited the games stripped of their original dedicated hardware. Instead of bulky arcade consoles, and in this case, the delicate vintage plastic controls of the Magnavox Odyssey, flat LCD screens were embedded in the wall, and custom shelves (devised by MoMA’s exhibition design team, carpenter, and media conservators) were mounted below the screens to host the controls of the games. The aim was to limit the viewer’s attention to the flow of interaction between the haptic experience of the controls and the on-screen graphics of the game. For this particular exhibition, there was a desire to steer away from a consideration of the arcade cabinet or game console itself as a design object. This presented massive challenges to MoMA’s media conservation team, who are responsible for ensuring that these collections are exhibited in an authentic manner respectful of the work’s material and conceptual integrity—managing acceptable degrees of change.

The essential constraint in exhibiting the Magnavox Odyssey in this context was that we simply could not use the original vintage hardware—both due to curatorial intent and to the fact that interactive displays at MoMA see massive amounts of use, and would experience significant wear and tear. Any parts that might wear out or break due to heavy use had to be able to be replaced quickly. This alone ruled out the use of expensive and rare vintage components from a practicality standpoint. After consulting with the Odyssey’s designer Ralph Baer, we found a potential solution that would involve a bit of smoke and mirrors, and careful design, to effectively simulate the properties of the Odyssey’s look and feel. Mr. Baer produced, from time to time, contemporary replicas of his prototype for the Magnavox Odyssey—a device called the Brown Box. The Brown Box (named for the humble wooden box in which it was housed) offered all of the same games as the Odyssey, though rather than having cartridges for each game, the Brown Box simply offered a bank of switches that allowed the player to change games. We were able to access and test the Brown Box replica during a visit with Mr. Baer, and found that the interaction, game mechanics, and behavior of Tennis on the Brown Box were acceptably similar to those of Tennis on the Magnavox Odyssey. As the Brown Box replica was composed of contemporary components, it would be much more feasible to service during exhibition. Because of the Brown Box replica’s fidelity to the Odyssey, as well as the ability to affordably and quickly replace parts, the clear solution was to employ the Brown Box replica as a stand-in for the Odyssey.

In order to maintain the curatorial vision of stripping away the accoutrements of vintage hardware, the decision was made to rehouse the Brown Box replica, and build new controller enclosures for it. Not only would this allow us to eliminate the faux-vintage wood grain of the Brown Box replica, thus keeping within the parameters of the exhibition’s design, but this act would also allow us to reconfigure the physical layout of the controls to match the physical arrangement of the Odyssey’s controls—thus framing the Brown Box replica not as a recreation of the Brown Box, but as a recreation of the Odyssey. Before engaging in the rehousing process, an assessment was made as to the reversibility of the process of complete rehousing—reversibility being a requirement of any conservation treatment. It was found that this would be entirely executable in a reversible manner.

Aside from the design and haptic experience of the Odyssey’s controls, the most critical consideration was the on-screen experience of the tennis game. As stated, the Odyssey’s video output was entirely black and white. The tennis game overlay, however, placed a green cast over the screen and depicted the white lines of the tennis court and two tennis players. The Odyssey’s overlays are undeniably a critical aspect of the device’s aura and its character as a design object. While at-home players did not always use the overlays, presenting this artifact without them to a public likely seeing the Odyssey for the first time would do a disservice to the subtle details of the Odyssey’s very specific gaming experience. The Brown Box replica offers a switch that turns the black background of the game screen a bright green—and in fact many museums that have exhibited the Brown Box replica have chosen to display it with this option selected. As our mission was not to exhibit the Brown Box replica, but to use it as a behind-the-scenes engine for reproducing the experience of the Odyssey, use of this feature was out of the question. Furthermore, the original overlays could not be used, since the brightness of the LCD panels used for the exhibition was significantly lower than a CRT would have provided and would not properly illuminate the overlay. MoMA therefore created a reproduction of the overlay with a level of translucency appropriate for the brightness of the LCD. Finally, as the Odyssey would have been played on a 4:3 aspect ratio CRT, the wall-embedded LCD was masked at the left and right sides so that it bore 4:3 proportions, without the blemish of black pillar-boxes visible on screen (Figure 24.2); the arithmetic behind this masking is sketched below.


Figure 24.2 Installation shot of Magnavox Odyssey at MoMA, New York.

Photo courtesy of Ben Fino-Radin.
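For readers who wish to reproduce this masking approach, the calculation can be sketched in a few lines of code. This is a minimal illustration only; the panel dimensions used below are hypothetical assumptions, not the specifications of the display actually used at MoMA.

```python
# Hypothetical sketch: computing the left/right masks needed to give a
# widescreen LCD 4:3 proportions. The panel dimensions in the example are
# assumptions for illustration, not the dimensions used at MoMA.

def side_mask_width(panel_width: float, panel_height: float,
                    target_ratio: float = 4 / 3) -> float:
    """Return the width of each left/right mask so that the unmasked
    area of the panel has the target aspect ratio."""
    visible_width = panel_height * target_ratio  # width of the 4:3 image area
    if visible_width > panel_width:
        raise ValueError("Panel is too narrow for the target aspect ratio.")
    return (panel_width - visible_width) / 2     # split the excess evenly


# Example: a 16:9 panel measuring roughly 88.6 x 49.8 cm (about a 40-inch display)
print(round(side_mask_width(88.6, 49.8), 1))  # -> 11.1 cm masked on each side
```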

The Magnavox Odyssey as exhibited at MoMA is but one example of the hybrid of technical expertise and media-archaeological knowledge that media conservators must offer when exhibiting such digital objects in the museum setting: a careful balance of serving curatorial intent, honoring the aura and significant properties of the original object, and accommodating the logistical realities of interactive exhibitions.

Documentation Practices

The solution we have just explored for the display of the Magnavox Odyssey at MoMA is highly specific, and was carried out with very particular technical knowledge. Were the Odyssey to be exhibited in fifty years’ time, in the absence of any documentation of the process that led to this solution, re-staging it would prove an immense challenge. Not only would the staff involved in the initial staging likely be gone, but it is possible that no functioning cathode ray tube monitors would remain. There would then be no way to assess the fidelity of recreations and emulations against the properties of the original gaming experience. It is therefore critical to have sufficient documentation of the artwork’s materials in what has been identified as an ideal state. As previously discussed, any post-treatment instantiation of a work (i.e., emulation) must be qualitatively compared directly with the work in its original state, and such side-by-side comparison is only possible as long as the vintage, dedicated hardware of the work still functions. It is for this reason that visual documentation of such ideal states, involving original and dedicated hardware, is of the utmost importance. Relying on a CRT to study the visual properties of a CRT will only be possible for so long; relying on demonstrably accurate photographic and video documentation, however, is certainly sustainable.

Arguably the most challenging aspect of the long-term stewardship of time-based media art and digital art is that the work does not truly exist until it is installed. For this reason, documentation of exhibitions is central to the stewardship of these works. When an artwork—be it software-based, web-based, single-channel digital video, or a variable installation with digital components—is exhibited, there are critical forms of documentation that must be gathered. At MoMA, the institution’s Media Working Group—comprising all museum stakeholders involved in the lifecycle of digital works, from media conservation, AV, and IT to curatorial, registrar, and exhibitions—has devised policy and procedures for just this purpose. It is critical to gather all relevant documentation, and when possible interviews with stakeholders, as soon as possible after the staging of an exhibition. This can include interviews and walkthroughs of the exhibition with the artist, the artist’s technicians, the curator, art handlers, and other technical support staff who are familiar with maintaining the work during exhibition. Floor plans, technical diagrams, and technical interviews can provide additional critical evidence for the future re-instantiation of the work. Decades can pass between a work’s first and second exhibition at the same institution, and during that passage of time staff with expertise in exhibiting the work will leave, or their memories will inevitably fade. As a supplement to documenting one’s own exhibitions, it is also central to long-term stewardship to seek out documentation of the work as previously installed, staged, and exhibited by the artist and by other institutions. A work with variable parameters may have been exhibited—or instantiated—several times before entering an institution’s collection and coming under the stewardship of conservation. Such evidence is immensely useful, as it provides the conservator with documentation of alternate instantiations of the work.
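To make this concrete, the following is a minimal sketch of how such exhibition documentation might be captured as a structured record. The record type and its field names are illustrative assumptions, not MoMA’s actual schema or the Media Working Group’s procedures.

```python
# Hypothetical sketch of a structured exhibition-documentation record.
# Field names are illustrative assumptions, not an actual institutional schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExhibitionRecord:
    artwork_title: str
    exhibition_title: str
    dates: str                   # e.g., "2014-03-02 to 2015-01-18"
    instantiation_notes: str     # equipment used, spatial layout, parameters chosen
    stakeholder_interviews: List[str] = field(default_factory=list)  # paths to recordings/transcripts
    floor_plans: List[str] = field(default_factory=list)             # paths to plans and diagrams
    av_documentation: List[str] = field(default_factory=list)        # photo and video documentation

# Example record drawn from the case described above
record = ExhibitionRecord(
    artwork_title="Magnavox Odyssey (Tennis)",
    exhibition_title="A Collection of Ideas",
    dates="2014",
    instantiation_notes="Brown Box replica rehoused; LCD masked to 4:3; reproduction overlay.",
)
```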

Conclusion

We have reviewed many of the practical aspects—the nuts and bolts—of handling, storing, and caring for digital art: conversations with the artist prior to acquisition, capture of media, storage, digital repositories, intervention and treatment fundamentals, and documentation practices. This chapter marks a moment when the conservation field has begun to engage with the technical underpinnings of digital materials as employed by artists—truly a turning point in the field’s evolution. This evolution has come decades after artists began working with digital materials, and many years after institutions began to collect such material. Considering the thousands of years of artistic production that preceded the emergence of contemporary conservation as we know it today (an evidence-based, scientific, analytical, and inherently humanistic and sociological practice), the outlook for the field’s ability to meet the challenges of stewarding digital materials is in fact rather positive. As collecting institutions with the capacity for deep material and technical research increasingly commit to the curation, collection, and stewardship of digital art, the future does certainly look bright.

References

  1. Beerkens, Lydia, ed. 2012. The Artist Interview. For Conservation and Presentation of Contemporary Art. Guidelines and Practice. Heijningen, Netherlands: Jap Sam Books.
  2. Goodman, Nelson. 1972. Languages of Art: An Approach to a Theory of Symbols. Indianapolis, IN: Hackett Publishing.
  3. Kirschenbaum, Matthew G. 2008. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: The MIT Press.
  4. Kirschenbaum, Matthew G., Richard Ovenden, and Gabriela Redwine. 2010. Digital Forensics and Born-Digital Content in Cultural Heritage Collections. Washington, DC: Council on Library and Information Resources. http://www.clir.org/pubs/reports/pub149 (accessed September 15, 2014).
  5. Laurenson, Pip. 2006. “Authenticity, Change and Loss in the Conservation of Time-Based Media Installations.” Tate Papers: Tate’s Online Research Journal (Autumn). http://www.tate.org.uk/download/file/fid/7401 (accessed January 15, 2015).
  6. Phillips, Megan, Jefferson Bailey, Andrea Goethals, and Trevor Owens. 2013. “The NDSA Levels of Digital Preservation: An Explanation and Uses.” Washington, DC: Library of Congress, National Digital Stewardship Alliance. http://www.digitalpreservation.gov/ndsa/working_groups/documents/NDSA_Levels_Archiving_2013.pdf (accessed September 15, 2014).
  7. Wharton, Glenn, and Fernando Domínguez Rubio. 2013. “Conservation Interviews: Problematic Assumptions and Unintended Consequences.” INCCA Conservation Interviews. Posted May 23. http://incca-na.org/conservation-interviews/ (accessed January 15, 2015).

Notes
