Change of Paradigm: From Individual to Community-Based Scholarship

Massimo Riva

Abstract: The change of paradigm we are witnessing involves not only the transformation of the procedures of scholarly research and learning but also, and more importantly, the transformation of its goals. Scholarly work is increasingly being remade into a collective, collaborative, large-scale enterprise characterized by the move from discursive to graphic-algorithmic forms of data presentation and interpretation. This change has momentous consequences for critical thinking: as humanists engaged with a potentially radical transformation of the community we belong to, our most daunting task is to transpose traditional scholarly practices onto the new platform, envisioning new goals and outputs for our traditional tasks.

Let me clarify, first of all, that my title does not refer to the application of knowledge through faculty engagement in community-based research, teaching, and service -- something that is usually understood as "community-engaged scholarship." The change of paradigm I refer to, instead, is of a cognitive kind and should be understood within a broader framework: namely, the general transformation of our participatory or convergence culture in the age of social and "spreadable" media (to use a terminology made current by Henry Jenkins). Of course, as Web 2.0 evolves toward Web 3.0, or the semantic web, community-engaged scholarship will be one of the most crucial components of this larger and more pervasive phenomenon.

What is at stake is not only the transformation of the procedures of scholarly research but also, and more importantly, the transformation of its goals: traditionally based (at least in the humanities) on the individual researcher and author, knowledge work, or the scholarly mode of production, is increasingly being transformed into a collective, collaborative enterprise. However, this collectivization or socialization of research rests on different models of what we mean by "collaboration." And this is what I'd like to address here, very briefly. Digital technologies do promote the implementation of collaborative, distributed, or networked research practices and outputs; however, these differ dramatically from each other in methodologies and goals. To give only one example: data mining in the humanities can be defined as a collective enterprise not only because it relies upon the input of cohorts, or even generations, of scholars, but also because it shifts the traditional goals of humanistic research toward a "collective" output, embodied in data aggregations or disaggregations which often defy more traditional, individualistic methods of analysis. The same could be said (and I would have developed this aspect in my presentation in the other panel) of geo-parsing methodologies which connect (or superimpose) geographical and historical, cultural, or textual data, as exemplified by geo-archaeological studies or even by literary studies which make use of geographic information retrieval techniques (mapping literary corpora -- the history of the novel, for example, or of other literary genres -- according to their geographical distribution over time).

Of course, individual scholars can still contribute individually to corpus linguistics, or to the geoparsing and/or timelining of data, by running if not composing an algorithm; and they can base their individual analysis on the collective, automatized input. In either case, however, the individual contribution is neither the determining factor nor the most representative output. It is not only the focus but also the ethos of scholarship that shifts with the widespread application of these new techniques. The general move from discursive to graphic-algorithmic forms of data visualization and interpretation (including of textual data), for example, has momentous consequences for critical thinking. In a sense, we are witnessing today a return to the age of positivism. In short, what is changing is the way we make sense (in the literal sense of the words). Raw data can only make sense to us if we make them...

I will give only one example, which has to do with the scale of things: in the age of big data, the humanities, too, feel compelled to increase the scale of their objects of study. Both Massimo Lollini and I have focused our projects on individual works (Boccaccio's Decameron and Petrarca's Canzoniere) as well as on their legacy or afterlife in translation, etc. Of course, these are strategic works, so to speak, as they occupy a crucial position in the history of literature (not limited to the Italian tradition) and in the history of their respective genres. Yet our approach is becoming increasingly counterintuitive as the tools at our disposal grow ever more powerful. Powerful computation applied to small-scale phenomena seems, and perhaps is, a waste. Mathematical or algorithmic patterns applied to a singularity are absurd.

This change of scale is even more daunting when we move from textual phenomena to visualization in general: according to the International Data Corporation (IDC), already by 2010 we were producing approximately 500 billion digital images through approximately a billion devices, and this number does not include video data generated by surveillance or for scientific use. Faced with this mind-boggling capacity to produce data about everything, everywhere, and always, the fundamental question is clearly: which data are important and worth analyzing? On the one hand, our sensors (data-gathering mechanisms) will have to become more intelligent and selective; on the other hand, we will have to develop powerful analytical tools -- analytics on a scale equivalent to the exponential increase of raw data. Now, can we entrust this crucial task only to automatic, pre-programmed devices? A somewhat alternative model is offered by the new collaborative forms made possible by interconnectivity on a massive scale: for example, mass-tagging, the collective production of hyper-glosses or hyper-annotations capable of creating a metadata compact which combines individual opinions and computational procedures and can be mined by either humans or programs.

Now, are this and other forms of scholarly crowdsourcing a powerful drive toward a socialization and democratization of research? Not necessarily, although they could be. For instance, instead of entrusting to an algorithm the parsing of all the rhetorical figures used by Dickens in his novels, or by Shakespeare, Dante, Petrarch, and Boccaccio in their works, a different approach could leverage the contribution of readers willing to tag and annotate texts through a specifically designed interface.... This method seems better suited to humanistic data mining because it is a form of community-based research; indeed, it may contribute to expanding the community of proactive readers of literary works, and the scholarly and learning community as well. Think, for example, of the possibility of comparing multi-tagging results from different geographic, cultural, or linguistic areas over time... Reader-response criticism could indeed be revolutionized.

The question is: which of these two methods (the one based on algorithms and the one based on more or less massive crowdsourcing and folksonomies) is more reliable? Perhaps the answer is: neither, taken alone; but matters could become interesting if we find a way of intelligently combining Web 2.0-type participatory and collaborative practices with Web 3.0 semantic tools -- within a scholarly framework.

In conclusion, I have given only one example of the kind of critical thinking we will increasingly have to apply as humanists engaged with a potentially radical transformation of the community we belong to: our most daunting task is to transpose traditional scholarly practices onto the new platform. One thing is certain: knowledge is no longer structured as a tree or a list, but as a graph. And isn't mass linking what scholars have always done, with different means? Perhaps it is time to start building what Paolo D'Iorio and Michele Barbera envision: a scholarly resource (they call it Scholarsource) aimed at reproducing the old game of humanities research on the digital platform, one which would allow us to navigate the information tsunami by following the relations of meaning which scholars and researchers themselves have introduced into the flux through a collaborative metadata apparatus (primary and secondary sources, annotations, etc.), instead of trusting our luck every day to the anonymous albeit creative engineers of Google & Co....

Works cited

D'Iorio, Paolo, and Michele Barbera. "Scholarsource: A Digital Infrastructure for the Humanities." In Thomas Bartscherer (ed.), Switching Codes. Chicago: University of Chicago Press, 2011. 61-88.


This article is a transcription of a video presentation for the 2013 American Association of Italian Studies annual meeting held in Eugene, Oregon.