Reward and Tenure

Whenever the subject of digital scholarship is raised, it almost inevitably results in the discussion of reward and tenure. This is understandable, not just because individuals wish to progress in their career, but because it exposes what is really valued in academia. If it isn't valued, goes the argument, then it isn't recognised when it comes to getting promotion.

What I want to explore in this chapter are the barriers (perceived and actual) to the recognition of digital scholarship and the ways in which some institutions are attempting to address these.

The tenure process

Promotion and tenure are usually judged on a combination of three factors: research, teaching and service or management. Some universities expand on these to include factors such as contribution to society and academic esteem, but these three represent the main categories. These are supposedly weighted equally, often with candidates required to demonstrate outstanding achievement in at least two of the three. It is often rumoured that there is an unspoken rule that research is regarded as more significant. As Harley et al. (2010) summarise it, ‘advancement in research universities is often described as a “three-legged stool,” with a “research” leg that is far more important’.

In putting together a case for promotion, academics then need to provide evidence to support their case in these three areas (although not all three may be represented equally). For teaching this is usually straightforward – a list of courses that have been taught (perhaps with student ratings). Service can equate to work on committees, or management responsibility, but can also be a little more nebulous, for example, making the case for external work such as work with a professional body. Research is the most difficult to accurately represent, particularly to committee members who are unlikely to be experts in the subject area of the individual and thus will require explanation and clarification on the nature of that individual's contribution to the field.

One can appreciate the complexity of this task across a university with many different niche subject areas, which people in the same discipline may be unfamiliar with, to say nothing of a general university panel. Whereas teaching will usually be to an understood and agreed curriculum and service is predominantly represented by university committees, which are broadly understood and appreciated, research is precisely the area of a scholar's activity that is most specialised. It is the area that is thus most difficult for a general committee to assess. There is thus something of a conundrum around research in the promotion process – it is the most highly regarded of the three strands and yet the most difficult to judge. It is this complexity in quantifying research combined with its significance that sits at the heart of many of the issues relating to digital scholarship and tenure.

Recognising and rewarding digital scholarship is significant for two reasons. The first is the message it sends to individuals within the university. Because they operate in an open, digital, networked manner, digital scholars are often well known in their institution (e.g. many of their colleagues will read their blogs). If a well-known digital scholar struggles to get their work recognised, then it sends a message to the rest of the university that this is not the type of activity that is likely to be rewarded, with a subsequent decline in its uptake. The reverse happens if that digital scholar is rewarded; it sends the positive message that academics should engage in this type of activity.

The second reason to recognise digital scholarship is to encourage institutional innovation. For example, universities are beginning to explore the use of Facebook to support students, or the use of blogs to disseminate research findings to the public, or new models of course development based on third-party content and crowdsourcing. There are very real benefits to the institution from these approaches, for instance reaching new audiences, increasing the university's profile without advertising, increasing student retention through improved peer support, lowering the costs of course production, developing new research methodology and so on. But it is difficult to realise any of these institutional approaches to new media if there is not a solid base of digital scholarship experience to draw upon.

The digital scholarship barriers

Before examining some of the approaches institutions have taken to recognising and rewarding digital scholarship, it is worth considering the barriers and obstacles that many perceive in its recognition. We have already touched upon some of these in Chapter 5, where we saw that a reluctance to engage with new technology or new methods of dissemination was often rooted in fears that this work was not recognised. This is reinforced by advice from senior academics to new researchers to concentrate on traditional publishing routes, as these are the recognised paths to reward. It is worth noting that there is nothing in this argument about the actual benefits or efficacy of traditional publishing over other methods; it is based purely on a pragmatic approach to ‘playing the promotion game’.

In a comprehensive study on scholarly communication, Harley et al. (2010) found that the strong lock-in with the published journal article and monograph was the overriding factor in consideration for promotion, commenting that

enthusiasm for the development and adoption of technology should not be conflated with the hard reality of tenure and promotion requirements in highly competitive and complex professional environments. Experiments in new genres of scholarship and dissemination are occurring in every field, but they are taking place within the context of relatively conservative value and reward systems that have the practice of peer review at their core.

Chapter 12 will look at academic publishing in more detail, as it is a practice that runs through scholarship and exerts an enormous influence. It is probably the single most significant influencing factor in recognising digital scholarship.

The first, and fundamental, barrier is the recognition of digital scholarship as an activity that is worthy of appreciation. This is distinct from concerns about how best to represent and measure it. There is, undoubtedly, an element of snobbery in this. Like most bloggers, I have experienced (and still experience) sniggers at the suggestion that blogging is a serious academic practice (to say nothing of the use of social networks). This is based partly on a perception, often perpetuated by traditional media, that the use of such tools is frivolous, egotistical and unprofessional. For instance, when the BBC political broadcaster Andrew Marr dismissed bloggers as ‘socially inadequate, pimpled, single, slightly seedy, bald, cauliflower-nosed young men sitting in their mother's basements and ranting. They are very angry people’ (Plunkett 2010), there was a degree of sage nodding amongst many academics. Such responses are predictable when a new form of communication presents itself, particularly from entrenched industries, which have the most to lose. We saw similar reactions to radio, television, computers and mobile phones. Cheverie, Boettcher and Buschman (2009) argue that there is a strong bias towards print, or traditional, publication: ‘While this community talks about “publication”, the language used implies that digital scholarship is of significantly lesser value, and word of mouth to younger colleagues discourages digital scholarship in the hiring, tenure and promotion process.’

More significantly, though, the resistance to recognising digital scholarship reflects a more intractable problem – one has to experience the use of these technologies over a prolonged period to appreciate their value and the nature of the interactions involved. In short, you have to do social media to get social media. Given that many senior managers and professors in universities are not disposed towards using these tools, there is a lack of understanding about them at the level required to implement significant change in the institution. The membership of promotion committees is most likely to be drawn from senior academics, who have largely been successful with the traditional model of scholarship. Although these academics will have a wealth of experience, they come from a background that may have a limited understanding of the new forms of scholarly practice that utilise different media and technologies.

But there does seem to be a move in many universities to recognise digital scholarship to some extent. This starts with the reasonably uncontroversial recognition that online journals have a similar standing to print ones, particularly when many major publishers are converting existing titles to online only. Schonfeld and Housewright (2010) report a general move to online journals, with most academics now content to see this shift away from print.

In the arts there has been a tradition of recognising a portfolio of work when considering promotion, and this has inevitably led to the inclusion of digital artefacts. In the sciences other components have been recognised prior to more recent developments, including software and data. In an instance of Bellow's law, the strictures on what constitutes evidence have now loosened sufficiently to permit a wider range to be considered.

A willingness to recognise new types of output and activity brings into focus the next significant barrier, which is how to measure or recognise quality in these widely varied formats. The problem highlighted above of dealing with complexity in research has essentially been outsourced by universities to publishers. The peer-review process that leads to publication, combined with a journal's impact factor, acts as a quality filter, thus removing the necessity for promotion committees to assess the quality of the outputs themselves. Journals have quality rankings, and therefore publication in any journal of sufficient standing is an indication of quality. As Waters (2000) puts it, ‘to a considerable degree people in departments stopped assessing for themselves the value of a candidate as a scholar and started waiting for the presses to decide’.

Peer review is at the core of this practice and is seen as fundamental. Harley et al. (2010) stress that ‘[t]he degree to which peer review, despite its perceived shortcomings, is considered to be an important filter of academic quality, cannot be overstated.’ This highlights the problem with recognising new types of output and activity. The power of many of the new forms of communication lies in their democratisation of the publishing process. They have removed the filter which the tenure process has come to rely on so heavily. Without this filter in place, promotion committees are back in the position of having to find a means of assessing the quality of an individual's research activity in a field they know little about. This is now compounded, as the work may be in a format they know little about too.

Assessing quality in a reliable and transparent manner is a significant problem in the recognition of digital scholarship, and its intangibility and complexity are enough to make many give up and fall back on the practices they know and trust. However, for the reasons I suggested above, it is a problem worth grappling with, and in the next section we will look at some of the ways in which this is being attempted.

Recognising digital scholarship

The response to recognition of digital scholarship can take a variety of forms, some more radical than others. The approaches can be summarised as follows:

  • recreating the existing model

  • finding digital equivalents

  • generating guidelines that include digital scholarship

  • using metrics

  • peer review

  • micro-credit

  • developing alternative methods

Recreating the existing model

If we take these in order, recreating existing models is a reasonable first step. Methods of recreating the existing model in digital scholarship terms include adding in a layer of peer review to blog-like practices or making conventional journals more open. For instance, a number of journals now operate a model where the author (or more likely, the author's institution) pays to have an article made open access. Publishers charge between $500 and $3,000 for this model, and, as Waltham (2009) reports, take-up has been limited, with 73 per cent of publishers reporting adoption of 5 per cent or less. This is hardly surprising and highlights one of the problems with attempting to recreate current practice. We will look at the economics of the academic publishing industry in more detail later, but given that scholars have provided the writing, editing and reviewing time free of charge, it seems somewhat unlikely that they will then pay to have the article published online when it can be done freely by their own means. An attempt to graft the open, digital, networked approach onto existing practice and then continue as normal fails to address many of the more fundamental issues, as well as the possibilities afforded by the new technologies.

Digital equivalents

An improvement on this is to seek digital equivalents for the types of evidence currently accepted in promotion cases. In making a case for excellence in one of the three main promotion criteria, the scholar is required to provide evidence. We have become so accustomed to many of these forms of evidence that we have ceased to view them as evidence but rather as an endpoint in themselves. For example, a good track record in peer-review publication should not be the ultimate goal, but rather it is indicative of other more significant contributions, including effective research as judged by your peers, impact upon your subject area and scholarly communication. Thus if we examine what each of the accepted pieces of evidence are seen to represent, and assuming these are scholarly values we wish to perpetuate, then it may be possible to find equivalents in an open, digital, networked context which demonstrate the same qualities. For example, the keynote talk at a conference is often cited as one valid piece of evidence of esteem for an individual seeking promotion. The reasons are twofold: reputation – it demonstrates that the individual has gained sufficient standing in their field to be asked to give a keynote talk at a conference; and impact – if they are giving the keynote then everyone at the conference hears it, and they can therefore claim a significant impact on their subject.

The important element then is not the keynote itself but what it signifies. What might a digital equivalent of this be which meets the two criteria above? For example, if someone gives a talk and converts this to a slidecast of that presentation, a certain number of views might equate to impact (how many people would hear a live presentation?). If the presentation is retweeted, linked to, embedded, then this might give an indication of reputation.

It would be overly simplistic to provide straightforward translations along the lines of 500 views + 5 embeds = 1 keynote, but by focusing on the existing criteria and considering what it is they are meant to demonstrate, it is then possible to consider online equivalents.

The New Media Department at the University of Maine has taken a similar approach in suggesting a number of ‘alternative recognition measures’ (Blais, Ippolito and Smith 2007):

  • Invited/edited publications – if an individual is invited to publish in an online journal, that is an indication of reputation.

  • Live conferences – they suggest raising the profile of the conference (both face to face and virtual) to a par with peer-review publication, particularly in fast-moving subjects.

  • Citations – they suggest using Google and databases to find a better measure of citations and impact.

  • Download/visitor counts – downloads of articles or visits to an academic site can be seen as equivalent to citations.

  • Impact in online discussions – forums, discussion lists and blogs are ‘the proving grounds of new media discourse’ with significant impact and a high degree of scrutiny and peer evaluation.

  • Impact in the real world – this might be in the form of newspaper references but they also argue that Google search returns can be a measure of real-world impact.

  • Net-native recognition metrics – online communities can have their own measures of value, and these represent a more appropriate measure than one imposed upon the contributor from outside.

  • Reference letters – they suggest reference letters which may counteract some of the difficulty with traditional recognition systems.

The faculty of the Humanities at the University of Nebraska-Lincoln have similarly developed a set of specific equivalents for recognition, including links to the scholar's research, peer review of digital research sites and technical innovation (http://cdrh.unl.edu/articles/promotion_and_tenure.php).

Digital scholarship guidelines

The recommendations above specify a number of approaches to recognising digital scholarship activity. A more common approach is to produce more general guidelines which set out broader criteria for assessing the quality of scholarly activity. These can include a catch-all term to accommodate new forms of outputs, for example, the Open University promotion guidelines state that ‘other appropriate outputs from scholarship can be taken into account including a demonstrable influence upon academic communication mediated through online and related web mediated technologies that influences the discipline’.

The Committee on Information Technology of the Modern Language Association (MLA) has developed its own guidelines for promotion committees to consider when dealing with digital media in the modern languages (http://www.mla.org/guidelines_evaluation_digital):

  • Delineate and communicate responsibilities. When candidates wish to have work with digital media considered, the expectations and responsibilities connected with such work and the recognition given to it should be clearly delineated and communicated to them at the point of employment.

  • Engage qualified reviewers. Faculty members who work with digital media should have their work evaluated by persons knowledgeable about the use of these media in the candidate's field. At times this may be possible only by engaging qualified reviewers from other institutions.

  • Review work in the medium in which it was produced. Since scholarly work is sometimes designed for presentation in a specific medium, evaluative bodies should review faculty members’ work in the medium in which it was produced. For example, web-based projects should be viewed online, not in printed form.

  • Seek interdisciplinary advice. If faculty members have used technology to collaborate with colleagues from other disciplines on the same campus or on different campuses, departments and institutions should seek the assistance of experts in those other disciplines to assess and evaluate such interdisciplinary work.

  • Stay informed about accessibility issues. Search, reappointment, promotion and tenure committees have a responsibility to comply with federal regulations and to become and remain informed of technological innovations that permit persons with disabilities to conduct research and carry out other professional responsibilities effectively.

Some of these will seem like common sense, for example, reviewing work in the medium in which it was produced, but even such a small step may come up against opposition when there is a strictly regulated promotion process which has been designed to suit the needs of print outputs.

Metrics

One approach to overcoming, or at least easing, the complexity of judging individual cases is the use of metrics or statistical calculations to measure impact or influence. This has been an area of increasing interest even with traditional publications. This measure of impact is often represented by a statistical measure such as the ‘h-index’, which is based upon bibliometric calculations of citations using a specific set of publisher databases. This measure seeks to identify references to one publication within another, giving ‘an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions’ (Hirsch 2005). Promising though this may sound, it is a system that can be cheated, or gamed (Falagas and Alexiou 2008) – for instance, through authors citing their own previous papers or through reciprocal citation between groups – and so a continual cycle of detecting and then eliminating such behaviours is entered into, rather akin to the battle fought between computer-virus makers and antivirus software. The Research Excellence Framework (REF) examined the potential of using such measures as a part of the assessment process and found that currently available systems and data were ‘not sufficiently robust to be used formulaically or as a primary indicator of quality; but there is considerable scope for it to inform and enhance the process of expert review’ (HEFCE 2010).
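To make the calculation behind such a metric concrete: Hirsch's h-index is the largest number h such that an author has h papers with at least h citations each. A minimal sketch in Python, with invented citation counts for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical citation counts for one author's five papers
print(h_index([25, 8, 5, 3, 3]))  # → 3: three papers have at least 3 citations each
```

The simplicity of the definition is part of its appeal to committees, but, as the passage above notes, the input data (which citations are counted, and from which databases) determines the result far more than the formula does.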

There are at least three further degrees of separation from this walled garden approach to citations. The first is to use data outside of a proprietary database as a measure of an article's impact. This ‘webometrics’ approach was identified early on as offering potential to get richer information about the use of an article, by analysing the links to an article, downloads from a server and citations across the web (e.g. Marek and Valauskas 2002). Cronin et al. (1998) argue that this data could ‘give substance to modes of influence which have historically been backgrounded in narratives of science’.

The next step is to broaden this webometrics approach to include the more social, Web 2.0 tools. This covers references to articles in social networks such as Twitter and blogs, social bookmarking tools such as CiteULike and recommendation tools such as Digg (Patterson 2009). This recognises that a good deal of academic discourse now takes place outside of the formal journal, and there is a wealth of data that can add to the overall representation of an article's influence.

The ease of participation, which is a key characteristic of these tools, also makes them even more subject to potential gaming. As Priem and Hemminger (2010) report, there are services which can attempt to increase the references from services such as Digg to a site (or article) for a fee. But they are reasonably optimistic that gaming can be controlled, proposing that ‘one particular virtue of an approach examining multiple social media ecosystems is that data from different sources could be cross-calibrated, exposing suspicious patterns invisible in single source’.

A more radical move away from the citation work that has been conducted so far is to extend metrics to outputs beyond the academic article. A digital scholar is likely to have a distributed online identity, all of which can be seen to represent factors such as reputation, impact, influence and productivity. Establishing a digital scholar footprint across these services is problematic because people will use different tools, so the standard unit of the scholarly article is lacking. Nevertheless one could begin to establish a representation of scholarly activity by analysing data from a number of sites, such as the individual's blog, Twitter, Slideshare and YouTube accounts, and then also using the webometrics approach to analyse the references to these outputs from elsewhere. A number of existing tools seek to perform this function for blogs; for example, PostRank tracks the conversation around blog posts, including comments, Twitter links and delicious bookmarks. These metrics are not without their problems and achieving a robust measure is still some way off, but there is a wealth of data now available which can add to the overall case an individual makes.
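A footprint of this kind could be sketched as a weighted aggregation of counts drawn from an individual's various services. Everything in the sketch below – the sources, the figures and the weights – is a hypothetical illustration of the approach, not a proposed standard:

```python
# Hypothetical counts gathered from one scholar's online presence.
# Both the figures and the relative weights are invented for illustration;
# in practice each would need justification and cross-calibration.
footprint = {
    "blog_posts":    {"count": 120,  "weight": 1.0},
    "blog_comments": {"count": 640,  "weight": 0.2},
    "slide_views":   {"count": 9000, "weight": 0.01},
    "video_views":   {"count": 3000, "weight": 0.01},
    "inbound_links": {"count": 85,   "weight": 0.5},
}

def footprint_score(data):
    """Crude weighted sum across sources - a composite indicator, not a robust metric."""
    return sum(source["count"] * source["weight"] for source in data.values())

print(footprint_score(footprint))  # → 410.5
```

The point of such a sketch is not the number it produces but what it exposes: every weight embodies a judgement about what counts as scholarly impact, which is precisely the question promotion committees would need to debate openly.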

Peer review

The issue of gaming is even more prevalent with metrics, and this is compounded by the mix of personal and professional outputs evident in many of these tools. This brings us on to the next approach to recognising digital scholarship, which is the use of peer assessment. When the filter of peer-review publication is removed, or lowered in significance, then arguably the significance of peer review in the tenure process increases. It will be necessary to determine that the output and activity are indeed scholarly (after all, one could have a popular blog on bee-keeping that had no relevance to one's position as professor of English Literature). It is also a response to the increased complexity of judging digital scholarship cases. The MLA guidelines above recommend using external experts to perform this peer review for tenure committees who may be unfamiliar with both the subject matter and the format.

Others have taken this approach further, soliciting commendations from their wider online network (e.g. Becker 2009). There is obviously an issue around objectivity with this approach, but as promotion committees seek to deal with a wider range of activity and outputs, judging their impact will need to involve feedback from the community itself.

Micro-credit

In Chapter 5 on research, I suggested that new methods of communication have allowed a finer granularity of research, that in effect the dissemination route had an influence on what could be deemed research. This finer granularity, or shift to process away from outputs, is another difficulty for recognising digital scholarship. One approach may be to shift to awarding ‘micro-credit’ for activity – so, for example, a blog post which attracts a number of comments and links can be recognised but to a lesser degree than a fully peer-reviewed article. Finer granularity in the types of evidence produced would allow recognition of not just outputs but also the type of network behaviour which is crucial to effective digital scholarship. Smith Rumsey (2010) suggests that ‘perhaps there should be different units of micro-credit depending on the type of contribution, from curating content to sustaining the social network to editing and managing the entire communication enterprise of a collaborative scholarly blogging operation’.

Alternative methods

The last of the approaches to recognising digital scholarship is really a call to encourage new practices which seek to reimagine scholarship. The seven approaches suggested above can be viewed as a continuum of departure from the conventional model. Many of the attempts to gain recognition for digital scholarship seem to be focused on making it behave like traditional scholarship; for example, permitting webometric data for journal article analysis is interesting, but it still foregrounds the peer-reviewed article as the main form of evidence.

Bending new technology to fit existing practice is a common reaction, partly because we are unaware of its potential. Stephen Heppell (2001) declares that ‘we continually make the error of subjugating technology to our present practice rather than allowing it to free us from the tyranny of past mistakes’. There is something of this in the approach to recognising digital scholarship – it is often a case of trying to make everything fit into the pre-existing shaped containers, rather than exploring new possibilities.

Promotion committees can play a significant role in this not only by recognising new forms of scholarship but also by positively encouraging them, either through guidelines or through specific projects. For example, a committee might seek to develop the sort of Web 2.0 metrics mentioned above or to encourage alternatives to the peer-review model. In analysing the peer-review process Fitzpatrick (2009) makes a strong case that we need to move beyond merely seeking equivalence measures:

What I am absolutely not arguing is that we need to ensure that peer-reviewed journals online are considered of equivalent value to peer-reviewed journals in print; in fact, I believe that such an equation is instead part of the problem I am addressing. Imposing traditional methods of peer review on digital publishing might help a transition to digital publishing in the short term, enabling more traditionally minded scholars to see electronic and print scholarship as equivalent in value; but it will hobble us in the long term, as we employ outdated methods in a public space that operates under radically different systems of authorization.

Conclusion

The already difficult task of assessing research and scholarly activity in highly specialised fields is only going to be made more difficult by introducing digital scholarship. Previously there has been an agreed set of evidence which could be seen as acting as a proxy for excellence in research. Not only does this list need to be expanded to include digital scholarship outputs but it may also be that no such definitive list can be provided any more.

In recognising digital scholarship activity in the tenure process, the initial barrier to overcome is establishing that such activity constitutes valid scholarship and is not merely an adjunct to the traditional approaches. If this obstacle is overcome in an institution, then the next issue is finding ways of representing it accurately which don't immediately remove the benefits of these approaches or place inappropriate constrictions on them. For instance, judging a blog on the same criteria as one might review a journal article fails to recognise the type of discourse which is effective in the blogging community. Many of the characteristics which would be frowned upon in scholarly articles, such as subjectivity, humour and personal opinion, are vital elements in developing a dialogue in blogs.

There are a number of ways in which promotion committees can begin to address digital scholarship. What they may be leading to is a more portfolio-based approach, perhaps more akin to that found in the arts. Anderson (2009) suggests that the sciences have an advantage in recognising digital scholarship because they are more ready to adopt new technology, but it may be that the arts, with their more individual assessment models, are well disposed towards incorporating different forms of output. Such a portfolio-based approach is likely to draw on a range of tools and pieces of evidence. These may include a range of digital outputs, metrics demonstrating impact, commendations from the community and recognised experts, and an overarching narrative making the case for the work as a whole.

Although the thrust of this chapter has been the ways in which the tenure process inhibits digital scholarship, and the approaches it is beginning to take in recognising it, the tenure process is not solely to blame for the reluctance of many scholars to engage with the open, digital, networked approach. About a third of faculty thought that tenure practice unnecessarily constrained publishing choice, and in assessing the importance of the various functions of a scholarly society it was the publication of peer-reviewed journals that was deemed most significant (Schonfeld and Housewright 2010). It would be inaccurate then to portray the situation as one of a reservoir of digital scholarship activity being held back by the dam of the tenure process. Peer review in particular is a process held dear by the faculty themselves and not an outside imposition by the tenure process. This is for good reason, as it is the method by which researchers gain their authority. But we should consider peer review as a method of achieving other goals such as reliability and authority, not an end in itself, and there may be other means of achieving this which the new technologies allow. Peer review in itself should not be the sole determinant of authority or an obstacle to experimentation. Fitzpatrick (2009) rather colourfully suggests that ‘peer review threatens to become the axle around which the whole issue of electronic scholarly publishing gets wrapped, like Isadora Duncan's scarf, choking the life out of many innovative systems before they are fully able to establish themselves’.

Even if much of the resistance comes from faculty themselves, the role of the tenure process is still highly significant. If a third of faculty see it as a constraint, that still represents a sufficiently large group that would be encouraged to engage in digital scholarship more if the tenure process were more sympathetic. In addition, there is the message it sends and the positive reinforcement it provides. If digital scholarship activity is a route to tenure, then those seeking it will engage in the types of activity that are recognised.

What is perhaps most interesting in examining the tenure and reward process is that it is yet another example of the unintended consequences of technology adoption. This is what is really fascinating about the open, digital, networked technologies – not the technologies themselves but rather what occurs as a consequence of their uptake by individuals. As existing practices are unpicked, it forces us to ask fundamental questions about practices which have hitherto been assumed. For example, it may seem a small step to start recognising some of the webometric measures of a journal article's influence, but this leads to questions about what constitutes impact and why a blog with a higher readership is regarded as less influential than a journal article. This in turn leads to an examination of what promotion committees recognise and, more fundamentally, what these are deemed to represent. From a fairly modest and uncontroversial starting position, institutions and individuals can soon find themselves examining some very deep issues about the nature of scholarship and universities themselves. This is another instance of Bellow's law, and it perhaps suggests why many institutions are reluctant to begin the process of recognising digital scholarship – it quickly unravels established practice and raises difficult questions.

Answering difficult questions is the essence of scholarship, and one of the most difficult in this area concerns the relationship between scholarship and publishing. This is what Chapter 12 will seek to address.

It is worth emphasising that monetary reward and promotion are not the sole, or even main, drivers for most scholarly activity. The reasons why scholars engage in research, disseminate their findings and teach on courses are varied, but they are primarily driven by intellectual curiosity. It is not, therefore, the suggestion of this chapter that digital scholars should pursue any of the digital, networked and open approaches because they can lead to tenure. Rather, my purpose is to argue that if these approaches achieve scholarly functions via different means, they should be recognised as such, with the tenure process acting as something of a proxy for this recognition. To ignore the context within which scholars operate within their institutions would be to disadvantage new practices compared with established ones.
