The discussion about 3D printing reminds me of an old essay by Italian semiologist Umberto Eco. In his 1986 essay “Faith in Fakes” (included in Travels in Hyperreality), Eco states that “the American imagination demands the real thing and, to attain it, must fabricate the absolute fake.” His examples include the Lyndon B. Johnson Library, where ‘the past must be preserved and celebrated in full-scale authentic copy,’ heritage villages, the Madonna Inn, seven wax versions of Leonardo’s Last Supper, William Randolph Hearst’s museum-castle (the Xanadu of Orson Welles’ film Citizen Kane) and Disneyland, the home of the ‘total fake.’
Perhaps 3D printing technology makes “seeing and knowing through making” (the transformative principle of contemporary digital culture) literal to the point that the fake, the copy we can manipulate, may acquire more functional, practical, or epistemological value than the original artifact we cannot touch. Because of its inaccessibility, the original artifact might preserve at least a trace (or a shadow) of its “aura,” to use Walter Benjamin’s term. Yet my question is whether, in the digital mode of reproduction, 3D copies too are charged with an emotional and cognitive investment that creates a sort of substitutive or surrogate ‘aura’ around them (as Daniel rightly points out in his post, “Most of the value [of 3D printing] is explained in psychological terms and the emotional impact of physical objects. It comes up over and over again”).
Are we entering the realm of hyperrealist knowledge (the triumph of the “absolute fake,” according to Eco, or of “total simulation,” according to French theorist Jean Baudrillard)? For Eco this is the ‘quintessence’ and triumph of ‘consumer ideology,’ but perhaps there is indeed value to be found in the process of total reproduction. Daniel invites us to look at different scenarios in which “making fakes” might have some value as a cognitive process. 3D modeling seems to provide the best value for scaling and mash-ups, for example (see the trivial scaling sketch below). A sort of engineering mentality (with an aesthetic flavor) seems to be taking root within the humanities…
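Just to make the “scaling” point tangible, here is a minimal sketch (purely illustrative, with made-up vertex coordinates rather than a real scan) of how easily a digital 3D copy can be resized, an operation the original artifact obviously does not permit:

```python
# A trivial sketch of scaling a 3D model: a handful of made-up mesh
# vertices (x, y, z) uniformly enlarged by a factor of 2. A real pipeline
# would load an STL/OBJ scan, but the core operation is the same.
import numpy as np

vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

scaled = vertices * 2.0  # double the copy's size; try 0.5 for a miniature
print(scaled)
```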
You may find this podcast on Educause interesting (Frischer is a pioneer in the use of 3D technology applied to Roman archeology – his most famous project is Rome Reborn, which we briefly discussed in class a few weeks ago).
“We believe that the humanities are unlikely to remain relevant, unless significant changes are made in how professional humanists are trained. Our review of relevant literature and data, both from Stanford and from outside, has given us a good sense of what these changes are. We believe that Stanford — with its educational prominence, culture of innovation and its great human and material resources — should be a leader in driving those changes.”
Why do I quote this passage from a white paper on “The Future of the Humanities Ph.D. at Stanford,” mentioned in a very interesting article in today’s NYTimes, “The Repurposed Ph.D.”? Not just because Franco Moretti teaches at Stanford, but because I believe that Moretti’s book provides some useful indications – on a theoretical level – about a “repurposing of the humanities,” or at least of their “literary studies” branch.
Indeed, Graphs, Maps, Trees points not only to a different (or divergent) type of textual data mining and modeling, but also to a different way of training doctoral students in literary studies. The kind of brilliant analyses that Moretti performs in his book in order to demonstrate his theoretical point (“a more rational literary history,” in reaction to close reading as “secularized theology” emanating from New Haven) imply a different (and broader) set of interdisciplinary competences, drawing from quantitative history, geography, and evolutionary theory. They also potentially envision broader professional applications beyond the promised (but increasingly elusive) destination of tenure-track professorships. Perhaps, as the NYTimes article also suggests, such a repurposing of the (digital) humanities can help form a new generation of cultural analysts with a new set of skills, one that better prepares them for “jobs within universities but outside the professoriate, like administrator or librarian, as well as nonacademic roles like government-employed historian and museum curator,” and that can even help them move across the job market, from higher education to industry, governmental institutions, foundations, etc.
I will now comment on a few points in Moretti’s book that seem particularly relevant to this week’s topic, the intersection of texts and images, but that also refer to things we have discussed in previous weeks. Questioning the narrowness of the canon (about 200 novels for 19th-century Britain, even fewer in the Italian case), Moretti provides some methodological and theoretical guidelines that are useful for working with digital tools and large-scale textual data sets in literary studies:
1. Quantitative research provides a type of data which is ideally independent of interpretations…its limit is that it provides data, not interpretation…(italics mine)
2. Quantitative data demand an interpretation that transcends the quantitative realm.
3. Quantitative data can falsify existing theoretical explanations (for example, interpretive assumptions about the history of the novel).
Moretti distinguishes between graphs, which are not really models, and maps and (especially) evolutionary trees, which are models in the proper sense, that is, “simplified, intuitive versions of a theoretical structure” (p. 8). We can discuss in class the difference between these forms of visualization and their particular usefulness for our specific research goals. Graphs allow Moretti to formulate and/or falsify hypotheses about the system of novelistic genres as a whole (the life cycles of genres and subgenres); a minimal sketch of such a graph follows below. Maps are a good way to prepare an individual text (from the “village stories” series, for example) for analysis: by placing a story in space, the map offers a model of the narrative universe which, compared to other maps in the series, can bring some hidden textual patterns to the surface (“the road from birth to death of a specific chronotope”). The map becomes a diagram. Diagrams look like maps, yet they represent relations rather than distributions in space, “a matrix of relations, not a cluster of individual locations” (GMT, p. 54). Diagramming the use of free indirect discourse or the use of “clues” in detective fiction allows Moretti to make macro-morphological divergences appear: “this system of differences at the microscopic level [the sentence] adds up to something that is much larger than any individual text, and which in our case is of course the genre – or the tree – of detective fiction” (GMT, p. 76).
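To make the “graph” idea a bit more concrete, here is a minimal sketch, in the spirit of Moretti’s genre graphs, of how one might plot the life cycles of a few genres per decade. The genre names and the counts below are invented placeholders, not actual bibliographical data:

```python
# A minimal, Moretti-style "graph": new titles per decade for a few
# hypothetical novelistic genres. All counts are invented for illustration.
import matplotlib.pyplot as plt

decades = [1790, 1800, 1810, 1820, 1830, 1840]
genres = {
    "gothic novel":      [18, 25, 14, 6, 3, 1],
    "historical novel":  [2, 4, 10, 22, 17, 9],
    "silver-fork novel": [0, 0, 1, 8, 15, 5],
}

for genre, counts in genres.items():
    plt.plot(decades, counts, marker="o", label=genre)

plt.xlabel("decade")
plt.ylabel("new titles (hypothetical)")
plt.title("Genre life cycles (illustrative sketch)")
plt.legend()
plt.show()
```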
As Matt Kirschenbaum writes in his contribution to Reading Graphs, Maps, Trees, a volume that collects critical responses to Moretti’s book along with Moretti’s own replies to his critics, “the goal of data-mining (including text-mining) is to produce new knowledge by exposing unanticipated similarities or differences, clustering or dispersal, co-occurrence and trends.” The key word here is “unanticipated.” Visualization tools allow us to see/make this new knowledge emerge (graphs, maps and trees “place the literary field literally in front of our eyes – and show how little we still know about it…,” as Moretti effectively puts it).
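As a toy illustration of what “unanticipated” clustering might look like in practice, here is a minimal sketch that groups a few invented snippets by vocabulary alone, without telling the algorithm anything about genre (the snippets, like the choice of two clusters, are assumptions made only for the example):

```python
# A minimal text-clustering sketch: represent each (toy) text by its
# TF-IDF vocabulary profile and let k-means group them, with no prior
# labels. The snippets below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = [
    "the detective examined the clue and questioned the suspect",
    "the inspector followed the clue to the scene of the crime",
    "the village fair filled the square with music and dancing",
    "the harvest festival brought the whole village together",
]

X = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, text in zip(labels, texts):
    print(label, text)
```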
I think the ambiguity of “seeing/making” touches upon one of the crucial questions we have debated in relation to most of the tools we have considered: visualizations are (computational) artifacts that may give us the illusion of “discovering” what we are actually “configuring” through our tools. Conversely, they can help us find unanticipated patterns and formulate interpretive hypotheses in controlled exploratory experiments with our data models. This, by the way, is what Moretti does best.
In conclusion, distant reading seems several steps removed from close reading: it deals with data models (and forms of visualization, or interfaces) that extract and represent abstract patterns from textual data, “translating the traditional way of formulating critical problems in the humanities into reasoning that can be tested, algorithms that can be run” (Matt Kirschenbaum). Yet Moretti seems to suggest that a successful and meaningful application of his theory should allow the critic to make discoveries that are valid at the micro-textual level as well. Ultimately, if “theories are nets” (as Moretti states, quoting Novalis), we must “evaluate them not as ends in themselves, but for how they concretely change the way we work,” and perhaps also for how they help us humanists find new purposes for our research.
I am interested in discussing the implicit (or explicit) tension between simplicity and complexity that I’ve detected in this week’s readings. An example from Weingart: “As it stands now, network science is ill-equipped to deal with multimodal networks [networks that are intuitively more interesting or relevant for humanists]. 2-mode networks are difficult enough to work with, but once you get to three or more varieties of nodes, most algorithms used in network analysis simply do not work.” Further on in his article, Weingart adds: “Besides dealing with the single mode / multimodal issue, humanists also must struggle with fitting square pegs in round holes.” Again, humanists care more, at least in principle or out of habit, about differences (distinguishing factors) than about similarities or regularities, i.e. what makes a book, a painting, an idea new, original, or unique… (idiographic versus nomothetic, again): “that is the very information they are likely to lose by defining their objects as nodes.” A minimal sketch of a 2-mode network and its one-mode projection follows below.
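Here is a minimal sketch of the 2-mode problem and of the most common workaround, the one-mode projection. The person-to-salon data is invented for the example, and the point is precisely what the projection throws away:

```python
# A bipartite (2-mode) person-to-salon network, then projected down to a
# one-mode person-to-person network so that standard algorithms can run.
# The names and attendances below are invented placeholders.
import networkx as nx
from networkx.algorithms import bipartite

people = ["Voltaire", "Du Châtelet", "Diderot"]
salons = ["Salon A", "Salon B"]

B = nx.Graph()
B.add_nodes_from(people, bipartite=0)
B.add_nodes_from(salons, bipartite=1)
B.add_edges_from([
    ("Voltaire", "Salon A"),
    ("Du Châtelet", "Salon A"),
    ("Du Châtelet", "Salon B"),
    ("Diderot", "Salon B"),
])

# Two people are linked if they attended at least one salon in common.
# Which salon, when, and how often is flattened away -- exactly the kind
# of distinguishing information Weingart says humanists risk losing.
P = bipartite.weighted_projected_graph(B, people)
print(list(P.edges(data=True)))
```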
On the one hand, we are invited to simplify the range of questions we, as humanists, are interested in, in order to make them more manageable for network science (algorithms); on the other, we are told (by Paula Findlen, in the Republic of Letters video, for example) that visualization “adds complexity” to a relatively monotonous task such as “reading letters.” The good news is that when we build a network in the humanities, albeit a simple one, we already have some (in some cases, a lot of) discursive, non-formalized or semi-formalized information to play with: metadata, existing critical literature, etc. Read against this rich and dense backdrop, even relatively simple networks can produce complex questions and interpretive issues…
On the Republic of Letters site there is the image of a narrative panorama illustrating the project: it is the work of a Milan-based group we have already encountered, the DensityDesign Lab (http://www.densitydesign.org/research/network/). What I find interesting in this and similar work by this group is the attempt to use rich graphic representations of networks rather than the standard abstract graphs automatically produced with the tools at our disposal… Here’s another example of how humanists can add visual complexity to relatively simple, self-generated graphs, by using them as the skeleton for visually richer representations…
In response to various posts, including Alessandro’s: an interesting alternative to the timeline, or a dynamic fusion of timeline and mapping tools for literary studies, could be based on the concept of chronotope, thus defined by Russian scholar Mikhail Bakhtin:
“We will give the name chronotope (literally, ‘time space’) to the intrinsic connectedness of temporal and spatial relationships that are artistically expressed in literature. This term [space-time] is employed in mathematics, and was introduced as part of Einstein’s Theory of Relativity. The special meaning it has in relativity theory is not important for our purposes; we are borrowing it for literary criticism almost as a metaphor (almost, but not entirely). What counts for us is the fact that it expresses the inseparability of space and time (time as the fourth dimension of space). We understand the chronotope as a formally constitutive category of literature; we will not deal with the chronotope in other areas of culture. In the literary artistic chronotope, spatial and temporal indicators are fused into one carefully thought-out, concrete whole. Time, as it were, thickens, takes on flesh, becomes artistically visible; likewise, space becomes charged and responsive to the movements of time, plot and history. This intersection of axes and fusion of indicators characterizes the artistic chronotope. The chronotope in literature has an intrinsic generic significance. It can even be said that it is precisely the chronotope that defines genre and generic distinctions, for in literature the primary category in the chronotope is time. The chronotope as a formally constitutive category determines to a significant degree the image of man in literature as well. The image of man is always intrinsically chronotopic.”
From: M. M. Bakhtin, The Dialogic Imagination: Four Essays, translated by Caryl Emerson and Michael Holquist, University of Texas Press, 1981.
One could represent the whole history of literature in a non-linear way, as a series of overlapping and intertwining chronotopes…
Another definition, this one by anthropologist James Clifford: “The chronotope is a fictional setting where historically specific relations of power become visible and certain stories can ‘take place’ (the bourgeois salon in nineteenth-century social novels, the merchant ship in Conrad’s tales of adventure and empire).” Examples of contemporary chronotopes: the border, the green zone (gated community), the strip mall, the detention center.
I’d like to link Steven’s post about proof and intuition to our discussion of timelines as hermeneutic tools. In Steven’s essay and in other readings assigned for this week we find critical assessments of timelines as “falsely objective” rhetorical devices that establish a deterministic genealogy among artifacts, events, peoples, styles, etc. Post hoc ergo propter hoc, Latin for “after this, therefore because of this,” is an old logical fallacy (of the questionable-cause variety)…
To assign chronologies and timelines an “explanatory” value is a fallacy, if we take “explanation” in the nomothetic sense, as the expression of a rule or law. In other words, timelines do not prove anything and do not explain the phenomena they represent or order (according to a chosen narrative thread or interpretive principle, as “spatial instantiations of history”). Or do they? Can we say, for example, without falling prey to a “deterministic” prejudice, that chronologies and timelines are intuitive tools that allow us in the humanities to visually grasp the elusive temporal nature of (human) life and culture – what we call “history”? By translating time into a spatial representation, visualizations help us understand, or better intuit, time as a perceivable and measurable entity. As a work of historiography, Cartographies of Time shows how chronologies and timelines are based on specific cultural assumptions and categories, and how they are often conditioned by the technical means of reproduction and visualization at our disposal.
Space and Time are of course inseparable in modern thought, from Kant to Heidegger and Einstein… There are various ways of representing a new intuition of time:
“Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.” (T. S. Eliot, “Burnt Norton”)
Can we say that the difference between an equation and a poem is that the first tries to prove the intuition while the second simply puts it into words… or images? And yet we continue to use (mostly linear or sequential) chronologies as “explanatory” tools (if only in a hermeneutically lighter sense, as “weak” but useful “explanations”). Of course, not all timelines are alike: a geological periodization of Earth’s history is not the same as a paleontological, anthropological, art-historical, or literary-historical one. The very notion of “time” varies greatly across disciplines… Perhaps multiple overlapping timelines, by showing the differential relationships and gaps between temporal series, can help us put our periodization tools in perspective; a minimal sketch of such overlapping timelines follows below.
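Here is a minimal sketch of what such overlapping timelines could look like: a few coarse disciplinary periodizations drawn on the same horizontal axis. The period names and boundary dates are rough placeholders, chosen only to show the gaps and overlaps, not authoritative periodizations:

```python
# Several (illustrative) disciplinary timelines on one shared time axis,
# so their overlaps and gaps become visible. Dates are rough placeholders.
import matplotlib.pyplot as plt

timelines = {
    "literary history":  [("Romanticism", 1798, 1837), ("Victorian novel", 1837, 1901)],
    "art history":       [("Neoclassicism", 1760, 1830), ("Impressionism", 1860, 1890)],
    "political history": [("French Revolution", 1789, 1799), ("Napoleonic era", 1799, 1815)],
}

fig, ax = plt.subplots()
for row, (discipline, periods) in enumerate(timelines.items()):
    for name, start, end in periods:
        ax.broken_barh([(start, end - start)], (row - 0.3, 0.6))
        ax.text(start, row + 0.35, name, fontsize=8)

ax.set_yticks(range(len(timelines)))
ax.set_yticklabels(list(timelines.keys()))
ax.set_xlabel("year")
ax.set_title("Overlapping disciplinary timelines (illustrative sketch)")
plt.show()
```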
Take a look, for example, at this “big history” project at Berkeley, based on the application of a tool developed by Microsoft (“Deep Zoom” – we used a version of it for the Garibaldi on the Surface project here at Brown):
What I find interesting here is that a new vision technology helps us visualize and intuit the multidimensional, multi-scale, “deep,” relativistic nature of our scientific representations of “time.” (Although a critic could easily note that certain assumptions about the linearity of time persist and are embedded in this tool as well.) Nevertheless, visualizing multiple timelines at once (or zooming in and out from one to the other) can allow us to see how time is conceptualized (as a function or a variable) across the disciplinary spectrum… Thus the timeline can become a sort of self-reflective interdisciplinary tool: it doesn’t necessarily prove anything, but it definitely helps our intuition(s). Or does it?
It may be worth asking whether we consider the DH a set of non-traditional, computational tools that are useful to humanists in the pursuit of their traditional scholarly goals, in both research and pedagogy, or an entirely new paradigm for knowledge work in the humanities, one that transforms not just the methods and techniques of inquiry and interpretation but the very nature of humanistic inquiry and interpretation. To put it in a slightly different way: does DH simply apply computational techniques to traditional data sets, or does it transform the very nature of the data we humanists consider in our investigative and interpretive practices? This transformation is often seen in parallel with the advent of a “fourth paradigm” in human knowledge: data-intensive scientific discovery (http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_complete_lr.pdf).
These two views of DH can be characterized respectively as the “translational” and the “transformative” view. Is DH just a translation onto the digital platform of values, methods and procedures largely shaped in a pre-digital world? Or is DH an entirely new form of generative knowing, which actively shapes new cognitive values (perhaps more compatible with those of the social and biological sciences) along with new methods of inquiry?
According to the translational view, digital tools must translate onto the digital platform proven, pre-digital philological and historical techniques and methodologies, and integrate them with computational procedures. Data curation is first and foremost data preservation, which also includes the preservation of the pre-digital cognitive and interpretive modes and frames embedded in pre-digital, analog documents and artifacts. Example: text encoding (see the toy sketch below).
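As a toy illustration of the text-encoding example (TEI-flavored but not actual TEI, and with invented content), even a few lines of markup show how encoding embeds editorial and interpretive decisions – about what counts as a date, a sender, a paragraph – before any computation happens:

```python
# A toy, TEI-flavored encoding of an (invented) letter. The tags and
# attributes are illustrative, not a valid TEI schema; the point is that
# every tag records an editorial decision made prior to any analysis.
import xml.etree.ElementTree as ET

letter = ET.Element("letter")
ET.SubElement(letter, "date", when="1764-05-02").text = "2 May 1764"
ET.SubElement(letter, "sender").text = "Voltaire"
body = ET.SubElement(letter, "body")
ET.SubElement(body, "p").text = "I have received your packet of books..."

print(ET.tostring(letter, encoding="unicode"))
```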
According to the transformative view, by contrast, “the advent of Digital Humanities implies a reinterpretation of the humanities as a generative enterprise: one in which students and faculty alike are making things as they study and perform research, generating not just texts (in the form of analysis, commentary, narration, critique) but also images, interactions, cross-media corpora, software, and platforms” (Burdick et al., Digital_Humanities). This translates into newly conceived data sets and models for the humanities.
It is worth noting that the generative conception of the DH adopts a scientific-technological model that is rapidly becoming dominant, or hegemonic, in our culture. As Evelyn Fox Keller (Professor Emerita of the History and Philosophy of Science and a historian of biology at MIT) wrote ten years ago in her book Making Sense of Life (Cambridge: Harvard University Press, 2002, p. 203), we “live and work in a world in which what counts as an explanation has become more and more difficult to distinguish from what counts as a recipe for construction.” The gulf between understanding (as the primary cognitive mode of the idiographic humanities) and explaining (as the primary cognitive mode of the nomothetic sciences) is widening: or is it? Fox Keller argues that the nature of explanation, too, is changing in the age of data-intensive scientific discovery.
This also seems to be the cognitive attitude reflected in this quote from Trevor Owens’ “Defining Data for Humanists”: “as constructed things, data are a species of artifact” (Zoe also picked up on this quote in her post). A similar point is made by Tara Zepel, who sees visualization as a self-standing (sub)discipline within the DH: “Visualization is an entire framework for building, communicating, and most importantly experiencing knowledge.” This point of view is advocated even more radically by the Manifesto: “The theory after Theory is anchored in MAKING.” A constructivist ethos is clearly pervading the DH.
Interestingly enough, Owens goes back to textual practices as templates for data modeling conceived as an interpretive practice. Conceived or constructed as artifacts, data (notice the plural), according to Owens, “can be interpreted as texts, and can be computed in a whole host of ways to generate novel artifacts and texts which are then open to subsequent interpretation and analysis… In short, data as text, artifact, and processable information… [is] a multifaceted object which can be mobilized as evidence in support of an argument.” “The production of a data set requires choices about what and how to collect and how to encode the information” – and these choices can be interpreted just as we interpret the argument made in a text (one may ask whether this interpretive or critical attitude toward data sets is peculiar to the humanities); a small illustration of such encoding choices follows below. “Humanists can, and should interpret data as an authored work… while a reader-response theory approach to data would require attention to how a given set of data is actually used, understood, and interpreted by various audiences”; and, finally, “data is not a kind of evidence; it is a potential source of information that can hold evidentiary value.”
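A small, hypothetical illustration of Owens’ point that producing a data set “requires choices about what and how to collect and how to encode the information”: the same three invented letters, encoded in two different ways, each of which silently discards something the other keeps:

```python
# The same (invented) correspondence encoded two different ways.
letters = [
    ("Voltaire", "Du Châtelet", 1738, "on Newton"),
    ("Voltaire", "Diderot", 1749, "on the Encyclopédie"),
    ("Diderot", "Voltaire", 1749, "reply"),
]

# Encoding 1: letters per year (topics and directionality disappear).
per_year = {}
for sender, recipient, year, topic in letters:
    per_year[year] = per_year.get(year, 0) + 1
print(per_year)

# Encoding 2: directed sender-recipient pairs (chronology and topics disappear).
pairs = {(sender, recipient) for sender, recipient, year, topic in letters}
print(pairs)
```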
In conclusion, data modeling in the DH is indeed “transformative” – new types of data and data sets are assembled and analyzed thanks to computational techniques; but it must also be “translational” – adapting interpretive practices typical of the humanities and their various disciplinary fields to the new objects (artifacts) of inquiry. A perfect compromise?
Further examples for discussing this (false or true) dichotomy
Translational or Transformative?
Projects of the LitLab at Stanford
Mapping Galileo (Lit Lab at Stanford)
The Salons Project
The DensityDesign sets; see in particular Brain Houses as an example of a transformative panoramic view (based on a visual genre or genres, but innovative and “transformative” in their interpretive application).