When models are shared on 3D-sharing sites like Thingiverse, they become “social objects.” “Social objects are the engines of socially networked experiences, the content around which conversation happens.”
I feel that the openness with which 3-D replicas are spoken about in the Turkel, Elliot, and Neely readings is indicative of a general change in perspective among researchers, scholars, and museum educators, exhibition designers, archivists, and curators. It is important to engage a visitor with the physical space in which he or she stands. In terms of whether or not a 3-D object can aid learning, I feel I must introduce the idea of disability accessibility. Most museums that I have been in have accessibility programs for people with low vision, blindness, or communication impairments. Communication can be aided through touch. If you cannot see a screen, how would you interact with it? Similarly, every person, regardless of whether he or she has a diagnosis of any kind, learns differently.
The idea that engagement with 3-D modeling is possible at a relatively affordable price, and that art objects and artifacts may be reproduced and touched, turned over and held, opens possibilities for discussion on how to guide humanities education. I am often surprised when, after learning about an artwork in class, I find it again in a museum or gallery and discover significant differences in size and quality. Projections do not cut it.
Walter Benjamin’s essay “The Work of Art in the Age of Mechanical Reproduction” was not written in an epoch when reproduction could herald an exact twin of the original.
I could not help but notice that the visualizations presented by Diane Favro and John Bonnett seemed dated; they looked old in a way that made it uncomfortable to trust the visualizations of their research, even though they are less than a decade old. The aesthetics of data presentation could adopt the design principles employed by architects or graphic designers. I think that focusing on the aesthetics of a 3-D visualization is important, since the “timelessness” of an image may affect the reception of the data in future years and encourage open-source collaboration.
John Bonnett mentioned the problems of archiving data visualizations. In searching for people and projects concerned with archiving both the data and the hardware necessary to access this information, I found the kopal project, which uses “data format migration” and “emulation environments”: older data formats are translated into current ones, while emulation ensures that material conceived for the original hardware will still work on future systems. There are also considerations such as settling on a preferred format when the information is first collected. Kopal uses a “Universal Object Format” that can store and convert TIFF, PDF, XML, and ISO images of CD-ROMs.
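To make “data format migration” concrete, here is a minimal sketch, assuming a simple registry that maps an endangered format to a designated successor format. The format names and the pass-through converter are placeholders for illustration only, not kopal’s actual code.

```python
import hashlib

# Hypothetical migration registry: each at-risk format maps to a
# successor format plus a converter. The converter here is a
# placeholder that passes the bytes through unchanged; a real one
# would re-encode the payload.
MIGRATIONS = {
    "image/tiff": ("image/png", lambda data: data),
}

def migrate(data: bytes, source_format: str) -> dict:
    """Migrate a byte stream to its successor format, recording
    checksums so the provenance chain remains verifiable."""
    target_format, convert = MIGRATIONS[source_format]
    migrated = convert(data)
    return {
        "source_format": source_format,
        "target_format": target_format,
        "source_sha256": hashlib.sha256(data).hexdigest(),
        "migrated_sha256": hashlib.sha256(migrated).hexdigest(),
        "payload": migrated,
    }

record = migrate(b"fake tiff bytes", "image/tiff")
print(record["target_format"])  # image/png
```

The checksum pair is the important part: an archive can later prove that what it holds is the file it migrated, which is exactly the kind of guarantee long-term preservation requires.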
This will, I am almost positive, lead to discussions of image rights and ownership rights that will need to be addressed in collaborative projects. Scholarly journals are peer reviewed, and, although I am a little reluctant to say this, I think that perhaps research data visualizations should be peer reviewed as well.
This is the link to the digital storytelling project I mentioned in class.
I just came across this and thought it might be helpful in regards to the spatialization of indexes, and thinking about problems associated with the prioritizing of some information over others. The site is run through Computational Culture: a Journal of Software Studies at the Center for Computational Studies at Goldsmiths.
To move forward with the Digital Humanities it is important to look back at history. With regard to collections and their presentation in a digital space, it might be useful to think of the seventeenth-century discourse on epistemology while also keeping in mind the impetus for describing and organizing data. That impetus seems tied to a sense of responsibility: those who hold a large amount of knowledge ought to share it with others. The mode by which this knowledge is shared, however, has been a point of contention throughout history. Even today, the codes of law that denote rights of use, public property, intellectual property, and open source must be thought of as part of the creation of visualizations, not as a structure that should prevent or precede the development of technologies.
In The Order of Things, Foucault introduces the Cartesian origin of epistemology as a field of study. Epistemology is the search for a guiding principle on which to base the other sciences. Descartes looked to the fulcrum of Archimedes as a metaphor for his own explorations: if he could find a single proposition that was absolutely irrefutable, he could thereafter comfortably base all of his thought on this new perspective. Are the Public Humanities searching for that same lever on which they can comfortably rotate the world, and should our tools be the epistemes used to interpret it?
Foucault also introduces Francis Bacon and the Novum Organum, in which Bacon presented inductive reasoning as a method for interpreting nature. Bacon added nuance to Descartes’s system of making irrefutable claims: one cannot get from point A to point B without intermediary syllogistic stopping points. In Bacon’s case, getting to point B requires the accumulation of empirical observation and the use of inductive reasoning to infer, in ascending order, axioms that would eventually lead to a general claim. (“Axiom” finds its origin in the late fifteenth-century French axiome, which can be traced to the Greek axiōma, “what is thought fitting,” and axios, “worthy.”)
The axios, or worthiness, of an index in a taxonomical system is exactly the thing we must try to measure when presented with a large set of data. Which markers are important here? What qualities of objects make them most similar, most disparate? Carl Linnaeus, whose system of plant classification Foucault mentions, conceded that the nuances of ‘data points’ are subservient to the structure of the taxonomical system itself. Careful consideration and effort must go into selecting what the system’s structure should include as its matrices. This can be a laborious process, and, as we have seen in previous readings, there is a moment at which the historian or researcher must trust himself.
Once a decision has been made about which genus to use, the question of how to represent the system comes to light. Anthony Grafton, in his essay on the men of the Republic of Letters, points us to Athanasius Kircher, whose collaboration with artists allowed his work on Egypt and his linguistic studies to be shared with the public.
(from Athanasius Kircher’s Prodromus Coptus <http://en.wikipedia.org/wiki/File:Athanasius_Kircher_Koptisches_Alphabet.jpg>)
Kircher and Lipsius… both collaborated, to spectacular effect, with artists who gave their books a radically new visual form and, in Kircher’s case, realized his vision of ancient Egypt in the piazzas of modern Rome. They still, I now believe, have much to teach us about the forgotten premodern intellectual worlds that they inhabited and explored, and also, perhaps, about how modern intellectuals could and should serve the public good in our own poisoned public sphere.
Grafton’s pessimism could be taken as a call to action for the public humanities. Responsibility to the public is key.
In Lev Manovich’s “Database as Symbolic Form,” the case is made that the web as a medium is a collection and not a narrative, because information can always be added on in the form of new code or HTML pages. But is this statement necessarily in opposition to the idea of narrative?
Lev cites Fredric Jameson to illustrate his ideas on narrative versus database:
Radical breaks between periods do not generally involve complete changes but rather the restructuration of a certain number of elements already given: features that in an earlier period or system were subordinate become dominant, and features that had been dominant again become secondary.
A shuffling of priorities happens with each and every new technology. The nuanced syllogism of Francis Bacon seems to mark the role of the humanist time and time again. Can we understand narrative linguistically? Saussure could be helpful: there are the syntagmatic parts of utterances that string together in a linear sequence. These parts of utterances, if they have enough in common, become paradigmatic: “those units which have something in common are associated in theory and thus form groups within which various relationships can be found.” Lev takes that last quote directly from Saussure. I would like to know why Saussure said they are associated, in the present tense, as opposed to they became associated. There was a decision made by someone that this sentence looks past.
For Lev, it is cinema that marks a radical shift in the presentation of information, and he focuses his essay on the way cinema affects linearity and narrative structure.
One image must follow the other, whereas previously all information could be found placed together, as in the illuminated manuscript. John Whitney’s film Catalog placed together the visual effects he was able to produce with his modified computer, made from a WWII anti-aircraft gun sight. (<http://www.youtube.com/watch?v=TbV7loKp69s>)
Lev mentions Vertov’s Man with a Movie Camera, which is similarly ambiguous in narrative structure, but Lev argues that “Vertov is able to achieve something which new media designers still have to learn– how to merge database and narrative into a new form.”
(Convergence: The International Journal of Research into New Media Technologies, 1999, Volume 5, Number 2, SAGE Publications. <http://con.sagepub.com/content/5/2/80.full.pdf+html>)
I would like to argue that the precedent of narrative should not hold us back from exploring new ways to present information, and that cinema without an apparent narrative can in itself, much like the illuminated manuscript with its many things happening at once, produce meaning.
In her blog post for the Cooper-Hewitt Labs, Mia Ridge gives a report on the current status of the Cooper-Hewitt’s collection records. She goes over many of the problems that we have discussed in class with regard to labeling inconsistencies and object retrieval. She mentions that she herself has used programs like Google Refine to clean up the data. The Cooper-Hewitt museum has made 60% of its collection available online. A visit to the museum’s webpage reveals the self-consciousness with which they discuss how they have presented information. In many ways the following quote seems to reflect the same concerns as Francis Bacon’s: that syllogisms should be nuanced and that small steps should be taken before large claims are made.
“Is it complete?
No. The data is only tombstone information. Tombstone information is the raw data that is created by museum staff at the time of acquisition for recording the basic ‘facts’ about an object. As such, it is unedited. Historically, museum staff have used this data only for identifying the object, tracking its whereabouts in storage or exhibition, and for internal report and label creation. Like most museums, Cooper-Hewitt had never predicted that the public might use technologies, such as the web, to explore museum collections in the way that they do now. As such, this data has not been created with a “public audience” in mind. Not every field is complete for each record, nor is there any consistency in the way in which data has been entered over the many years of its accumulation. Considerable additional information is available in research files that have not yet been digitized and, as the research work of the museum is ongoing, the records will continue to be updated and change over time.”
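The cleanup Ridge describes with Google Refine (now OpenRefine) rests largely on “key collision” clustering: labels that normalize to the same fingerprint are flagged as probable variants of one another. A minimal sketch of that fingerprinting idea, with invented sample labels:

```python
import re
import unicodedata
from collections import defaultdict

def fingerprint(label: str) -> str:
    """Key-collision fingerprint in the style of OpenRefine's default
    clustering: strip accents and case, remove punctuation, then sort
    and de-duplicate the tokens."""
    s = unicodedata.normalize("NFKD", label).encode("ascii", "ignore").decode()
    tokens = re.sub(r"[^\w\s]", "", s.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(labels):
    """Group labels whose fingerprints collide; only groups with more
    than one member are candidate inconsistencies to reconcile."""
    groups = defaultdict(list)
    for label in labels:
        groups[fingerprint(label)].append(label)
    return [g for g in groups.values() if len(g) > 1]

print(cluster(["Printed paper", "paper, printed", "Paper (printed)", "silk"]))
# → [['Printed paper', 'paper, printed', 'Paper (printed)']]
```

Decades of accumulated tombstone data entered by many hands is exactly the case this method was built for: the fingerprint is indifferent to word order, capitalization, and punctuation, the three places where cataloguers most often disagree.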
There is a note on the Cooper-Hewitt site about legal use. It states that the collection information presented via GitHub can be used in accordance with the Creative Commons Zero (CC0) “No Rights Reserved” dedication, which allows anyone to take the data from the available collection and create their own visualizations and studies. However, the Cooper-Hewitt suggests some actions of “politesse” to the user, à la Europeana’s model: give attribution to the Cooper-Hewitt; contribute back modifications or improvements to the data; do not mislead others or misrepresent the Metadata or its sources; be responsible; understand that you use the data at your own risk. (http://www.cooperhewitt.org/collections/data)
Certainly, the colloquial manner of these suggestions is almost deliberately ambiguous. “Do not mislead others or misrepresent the Metadata or its sources” names something that would be a fear for any humanist sharing data freely. I would like to ask whether it is at all possible to escape the system of thinking that there is only one right path into examining data, whether we can escape narrative if only to see what happens when information is presented to us this way again (a return to the pre-cinematic with the understanding of a contemporary mind). Then we might be able to go back to Archimedes and find a new lever, spin the world around again, and understand it once more anew.
I find the emphasis Lev Manovich places on the cinematic to be telling of a possible new revolution in epistemology. Could it not be that this facet of philosophy–the theory of knowledge–has regained its strength with questions of visual representation for the Public Humanities?
Tim Wray, in describing his project “A Place for Art,” mentions Janet Murray’s concept of the container, a media-theory term meaning the aggregation of objects in a category. The physicality of a container is a particularly apt metaphor for data visualization: the HTML pages that open out from other pages, a cabinet with drawers. However, again I sense that the digital humanist has found him- or herself in the rut of having to see everything in terms of collections or containers.
Do you remember Russell’s Paradox? Call the set of all sets that are not members of themselves “R.” If R is a member of itself, then by definition it must not be a member of itself. Similarly, if R is not a member of itself, then by definition it must be a member of itself. Why should this be important to the humanist who wishes to address collections? As Russell explained in 1910:
“An analysis of the paradoxes to be avoided shows that they all result from a kind of vicious circle. The vicious circles in question arise from supposing that a collection of objects may contain members which can only be defined by means of the collection as a whole.”
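The vicious circle Russell describes can be stated compactly: define R by unrestricted comprehension, and the contradiction follows at once.

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\Longleftrightarrow\; R \notin R
```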
In light of this, is it possible to avoid Russell’s paradox of collections? To avoid falling into the pit of questioning, and then immediately doubting, the taxonomies that we think can assist in the complete comprehension of a set of data? Wray says, “Collection websites are purported to provide opportunities for browsing and exploring: links and buttons entice us to ‘Explore Archives’ and ‘Browse Collections’, yet more often than not, we’re presented with a list of categories – a series of stuffy filing cabinets, the objects locked away in containers.”
Is there a way to escape the seeming arbitrariness of taxonomical categories? Does the user-friendly data-modeling interface in some way ensure that the careful user will do justice to Francis Bacon’s syllogism? Or will it be the opposite: will ease of use lead to a grave “misuse” of this data? On this last point, to refer back to the essay by Anthony Grafton: who will become the arbiters of how data is presented, who can claim positions as the new members of our contemporary Republic of Letters? And should such a thing exist?
1. There is a fairly semiotic conception of space brought up in Bodenhamer’s essay, “The Potential of Spatial Humanities.” He seems to be saying that conceiving of “near and far” in the humanities has political resonances. These resonances, Bodenhamer says, were stabilized by American geography, which created subsets of identity as the population grew and moved west and south. Presently, it is not as easy to contextualize difference, as the humanities have grown in terminology to include ever more specific fragments of the population. With this in mind, Geographic Information Systems can use a familiar visualization (a map) to show groupings of the data that the humanities have gathered on population differences. Although this approach is nuanced, Bodenhamer asks if it is still uncritical. Here I would like to insert a comment that was brought up earlier in our seminar, on the affective reading of data. With a GIS, the information is still going to be interpreted subjectively. However, this is a rut that I do not think the humanities want to keep finding themselves in, and again I find it uncomfortable to have questions of epistemological value brought up repetitively. There is a point made in the essay about humanists relying on language and finding it difficult to use a visualization for information gathering. However, reading a map is still reading. If the visualized data were just thought of as a text, would it not be easier to jump into an analysis and to allow oneself to be the critic or parser of the data? This would mean, however, that the visualization would be primarily the text to be deciphered and not the statement itself, almost like an essay without a thesis. Can there be such a thing?
2. In “Sapping Attention,” I think a productive interpretation is put forth: could not the visualization of information be a good way to limit the interpretations that a historian might make, and could this limitation not be a positive instead of a negative? The self-described medium size of the data seems to have allowed the data to feel comprehensively expressed and the interpretation to feel fair.
3. Another note I would like to put forth: I was struck by the idea that aurally transmitted information is inferior to a visualization. When I think of the aural transmission of information, I think of conversation, dialogue, and interlocution. These seem cooperative and therefore an inclusive way into data. Why is the aural ostracized from visualization?
I appreciated the range of media discussed in Visualizing Temporal Patterns, and although I am not as familiar with film or video games as I am with art history, I could not help but think that using time as a way of standardizing information could only be effective as a tool for analysis if the data were more constrained. The example of the 35 canonical paintings is intended to cross genres to find commonalities in development, but I cannot help feeling that the individual pieces were selected somewhat arbitrarily and that, although the program developed is useful for sorting categories, the data should be more limited. The author states that the visualizations created by separating out objective qualities of the works support the thesis that all of the artists represented were moving toward modernism, a claim that I think ignores the trajectory of each individual artist’s oeuvre. Although modern technology has been employed for this visualization, I find that it too closely parallels the now antiquated model of a timeline exhibit in a museum space. I suppose my feeling toward this kind of visualization comes from my belief that ordering art in a timeline limits the kind of engagement a viewer feels comfortable with.
The reading Cartographies of Time pointed to the prolific reliance on data that can be graphed on an XY coordinate plane. Even though the reading also mentioned three-dimensional visualizations, the input for these would still rely on concrete numerical values. I do not think this is valuable for the general user on its own. What is the difference between Baroque, Classical, and Renaissance to someone who has only just begun thinking about art? As a counterpoint to moving away from XY, Cartographies of Time presents the line in graphical work as a “metaphor [as] ubiquitous in everyday visual representations,” raising the possibility that the line is the most cross-culturally pervasive symbol for the passing of time and is read so intuitively that it is valuable. I just wonder how the XY plane can be made more inclusive.
As a reaction to Professor Riva’s post, I tend to side with the notion that the DH can effectively provide a new way to generate data: that the way we construct knowledge can grow to encompass data that could not be expressed without digitization. As a practical application, I was struck by the Omeka Exhibit Builder, which allows a curator to present digital objects to a diverse group of people who will then contribute their own metadata to each object. (http://omeka.org/codex/Plugins/ExhibitBuilder) I see this type of experiment as a good opportunity to introduce a cross-disciplinary approach to what this metadata can look like. In a past blog post I believe I started to mention my interest in haptic technology, the easiest example perhaps being an iPad screen. I think it is very effective to react to an object through touch, and we can learn a lot about what an object means to a person via the responses of their other senses. In looking at a painting, where do a person’s eyes move? Whose eyes move in the same pattern, and are there statistically significant similarities across ages, cultures, and genders? I think that the measurement of such “non-linguistic” data can perhaps be thought of as a new paradigm for the DH.
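The eye-movement question could be prototyped with very little machinery. A hypothetical sketch, assuming each viewer’s gaze is recorded as a sequence of normalized (x, y) fixation points sampled at the same moments; the data format and sample values are invented, and real eye-tracking pipelines are far more involved:

```python
import math

def path_distance(path_a, path_b):
    """Mean Euclidean distance between corresponding fixation points;
    smaller values mean two viewers scanned the painting more alike."""
    pairs = list(zip(path_a, path_b))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

# Invented gaze paths for two viewers of the same painting,
# coordinates normalized to the canvas (0..1 on each axis).
viewer_1 = [(0.1, 0.20), (0.50, 0.5), (0.8, 0.30)]
viewer_2 = [(0.1, 0.25), (0.55, 0.5), (0.8, 0.35)]

print(round(path_distance(viewer_1, viewer_2), 3))  # → 0.05
```

Computed pairwise over a whole audience, a distance like this would give exactly the kind of non-linguistic dataset suggested above: clusters of viewers whose eyes move in the same pattern, ready to be compared across ages, cultures, and genders.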
That said, I do think that a large portion of the time dedicated to the advancement of DH must look back in time at the fuzzy data. “Humanities to Digital Humanities” clearly lays out the facets of what the DH must come to terms with to clean up data. Enhanced critical curation, where exponentially fast-growing collections mean that many things are left unprocessed and therefore certain things are valued above others. Augmented editions and fluid textuality, where standards for mark-up must be created within editions but also for the distribution of materials. Visualization and data design, where an effort must be made not to mislead the viewer/participant. The animated archive, where history should be thought of as a living organism ready to become participatory. Reading and authoring, where the invitation is extended to the civilian curator. I see this last aspect as a way to gather data that institutions can use to better understand their public and thereby be better able to gear funding and programs toward them. This last point should not be a patronizing one, however. The data should also become valuable to the institution as a necessary evaluation of its own curatorial scholarship.
In clicking through the links provided in the texts, I came away with a sense of play and the power of play. When something is fun it is easy to create data. For this reason I am pasting a screenshot of this text run through an n-gram viewer.
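For readers who want to reproduce the experiment, the counting behind an n-gram view is easy to sketch; the sample sentence below is invented:

```python
from collections import Counter

def ngrams(text: str, n: int = 2) -> Counter:
    """Count word n-grams in a text: the same counting that underlies
    an n-gram viewer, here over a single document instead of a corpus."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

counts = ngrams("the data and the visualization of the data", 2)
print(counts[("the", "data")])  # 2
```

Even at this toy scale the playful quality comes through: change n, re-run, and a different texture of the text surfaces.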