Posted by gazjjohnson on 14 February, 2011
You can read about Day 1 or Day 2 here.
The third and final day dawned a little grey, but there was little time to admire the scenery as we had to kick off before 9am in order to fit everything in. The first session was from Ruth Murray-Webster of Lucidus Consulting. Ruth used to work for Intute, which was very noticeable, as about half her opening section seemed to be an advert for the late and somewhat lamented service. Thankfully the real meat of the session was a workshop, in small groups again, looking at metrics/KPIs and repositories. As someone who keeps a fair amount of these (and who will be working on them a lot this week) I was quite interested to see what other people are doing in this area. In the workshop we looked at the metrics we had been asked to keep by our stakeholders, those we felt offered a genuinely representative view of repository activity, and the challenges that prevent us from gathering some of them.
I suggested I would love to know how far people read through items in my repository; that something has been downloaded 500 times is one thing – but how far into it did they actually read? This is a statistic that YouTube provides for videos on its site, and it is an excellent way to discover just how many of your viewers have really engaged with the material. In the same way, the base metric of downloads tells me nothing about the interaction with the scholarly research itself; although short of locking the PDFs down to a view-only mode or the like on the LRA, I'm unaware of how we'd measure this one.
I had a very interesting side discussion with Paul Stainthorp and Theo Andrews about our own use of Google Analytics, and just how deep we each delved (or didn't) into the smorgasbord of data it provides. Interestingly, in many respects each of our repositories seems to score similar values, although the devil is very much in the details. Our group agreed that many of the metrics demanded of us (last year's SCONUL audit came in for particular criticism as somewhat poorly thought out) are not especially representative of the level of impact or activity with respect to repositories; no doubt because most of them are requested by people unfamiliar with the workings of the repository world. There is a definite need for those of us managing these resources to engage with these people more, or perhaps a lobbying/information role here for both the RSP and UKCoRR.
After a break (and an advert for UKCoRR) we had the final two sessions of the morning. Personally I would have reversed the order of these sessions, as the final one from Amanda Hodgson on the Research Communications Strategy work from the CRC offered little content I'd not already gleaned from their website. Perhaps when their work is more advanced this session might have more to offer. However, the preceding session from Miggie Pickton (Northampton) on her project researching researchers through their data was more engaging. Miggie even involved us in a small workshop element as we looked at our own experiences, and the session tied in nicely to the previous day's sessions from Max and Mark. It also tied into elements of digital preservation and curation, a topic no single talk had tackled but a recurrent theme in many.
And so the Winter School came to a close. It had been a highly valuable three days, in what can only be described as a first class venue (squeaking door aside), and a credit to Jackie and her team for putting it on. My thanks to all the speakers and organisers! At the very least I've taken away the thought that my team and I face a lot of the same challenges as other repository teams, even where their exact circumstances and working environments differ. That alone brings a certain level of comfort.
What’s next? Well I’m hoping to read through the slides from the various speakers over the coming days again and perhaps pick up on one or two elements that I only half caught at the time, or that perhaps might spur me and my team on in our work in the coming year.
Posted in Leicester Research Archive, Open Access | Tagged: 2011, balancing, DAF, kpi, measurement, metrics, repositories, research communications, rsp, winter school
Posted by gazjjohnson on 11 May, 2009
Spotted over on Gerry McKiernan's blog: a website that ranks arXiv papers by their popularity on Twitter. I think this is a really interesting idea, and one I'd love to use for the LRA; but I suspect that a) not that many of our papers are being discussed, and b) not many of the people who are using our papers are on Twitter anyway.
It’s really an order of magnitude thing: the LRA has 4,000-ish items, while arXiv has over 536,000 – we’re not even 1% of their size, and doubtless of their traffic either. That said, this kind of qualitative real-time metric is rather different to the usual quantitative ones we seem to rely on for most repository measurement.
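As a quick sanity check on that back-of-the-envelope comparison (using the rounded item counts quoted above):

```python
# Rough size comparison using the figures quoted in the post:
# the LRA at roughly 4,000 items versus arXiv's 536,000+.
lra_items = 4000
arxiv_items = 536000

share = lra_items / arxiv_items * 100
print(f"LRA is about {share:.2f}% of arXiv's size")  # about 0.75%, i.e. under 1%
```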
Of course, we have to remember that not everyone who reads these papers is talking about them, and taken on their own these metrics hold only a certain value. But then, isn’t that the case with every metric?
Posted in Leicester Research Archive, Web 2.0 & Emerging Technologies | Tagged: bibliometrics, metrics, twitter
Posted by gazjjohnson on 6 February, 2009
The h-index (Hirsch number) is a metric of increasing interest to researchers, especially in the light of the REF. An h-index is “a number that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist”. You can work it out manually, but to be honest you’d need to be mad or a bibliometrics fiend to want to.
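For the curious, the calculation the tools are doing is simple enough to sketch: an author's h-index is the largest h such that they have h papers with at least h citations each. A minimal illustration (the citation counts here are entirely made up):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```

So here the author has four papers with at least four citations each, but not five with at least five.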
I’ve been asked by a few people how to find it, and each time I totally forget how! So in the light of this, here’s my step by step guide to discovering an author’s h-index automatically using that wonderful Web of Knowledge tool!
- Go to Web of Knowledge and click on the big green button
- Click the Web of Science tab at the top of the screen
- Enter the author’s name in the format surname initial* (e.g. raven e*)
- Change the search option from the drop down menu to Author
- Click Search
- At the top right of the results is the option to Create Citation Report. Click this.
- The analysis appears, along with the person’s relative h-index.
It seems simple, but I was scratching my head using WoK until I discovered that I needed to use just Web of Science, not the whole of WoK, to get the value. And so, now you know! It is worth noting that you do have to be fairly exact in your author naming conventions, as the citation report will not run for more than 10,000 result records.
I did wonder whether selecting individual papers from the list of results between steps 6 and 7 would make a difference, but it appears this has no effect on the citation analysis; for example, selecting 5 papers from a list of 120,000 doesn’t enable me to run the citation report – it appears to run in an all-or-nothing manner. Or maybe there’s a trick here I’m missing?
Posted in Research Support | Tagged: bibliometrics, calculating, h-index, hirsch number, impact, metrics, rae, ref, research, web of knowledge, web of science, wok
Posted by gazjjohnson on 27 January, 2009
Well according to this site, the LRA ranks 148th in the world of institutional repositories overall. It gives stats on size (222nd), visibility (186th), rich files (125th) and Scholar (125th). We’re 15th in the UK. Digging into the back files for the site I see they’ve calculated these figures as follows:
- Size (S). Number of pages recovered from the four largest engines: Google, Yahoo, Live Search and Exalead.
- Visibility (V). The total number of unique external links received (inlinks) by a site, which can only be confidently obtained from Yahoo Search and Exalead.
- Rich Files (R). Only the number of text files in Acrobat format (.pdf) extracted from Google and Yahoo are considered.
- Scholar (Sc). Using Google Scholar database we calculate the mean of the normalised total number of papers and those (recent papers) published between 2001 and 2008.
I’d argue with how they calculate their metrics – while only 20% of the overall figure comes from size, repositories that are stuffed full of metadata still get a particular boost towards the top. Nor does it account for how useful the rich files are – a repository filled with images isn’t as rich as one storing research articles, books and data. Quality over quantity, if you ask me!
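To make the weighting concern concrete, here is a sketch of how such a composite ranking might be combined. Note the assumptions: only the 20% weight for size is mentioned above; the other three weights, and the function name, are purely illustrative.

```python
# Illustrative weighted composite of the four Webometrics-style indicator
# ranks. Only the 20% weight for Size is taken from the post; the
# remaining weights are assumptions for the sake of the example.
WEIGHTS = {"size": 0.20, "visibility": 0.50, "rich_files": 0.15, "scholar": 0.15}

def composite_score(ranks):
    """Combine per-indicator ranks into one weighted score
    (lower is better, as with the published ranks)."""
    return sum(WEIGHTS[k] * ranks[k] for k in WEIGHTS)

# The LRA's indicator ranks as quoted above:
lra = {"size": 222, "visibility": 186, "rich_files": 125, "scholar": 125}
print(composite_score(lra))
```

The point the sketch makes is that even with size at only 20%, a repository bulked out with thousands of thin records improves its size rank enough to move its composite score noticeably.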
But it’s another site for the doubtless many metrics fans across the UK HEI scene. In case you’re wondering the site is based in Spain at the Cybermetrics Lab (research group) at the Consejo Superior de Investigaciones Científicas (CSIC).
Posted in Leicester Research Archive, Open Access | Tagged: metrics, ranking, repositories, stats