UoL Library Blog

Develop, debate, innovate.

Posts Tagged ‘bibliometrics’

EMALINK – Bibliometrics & Research Visibility

Posted by selinalock on 12 July, 2012

A rather belated report on this May Emalink event:

What are Bibliometrics and why should I care?  Ian Rowlands (University of Leicester)

  • Bibliometrics can be very sterile & specialist so they must be used in a context that makes sense.
  • Citation data – indicates relationships of influence, deference & usage – a bit like social media networks.
  • Bibliometrics have to help the institution or individual in the research process.
  • BUT bibliometrics are just one small part of the puzzle and of the tools available.
  • How much information is there really out there about research inputs & outputs?
  • Data can be variable e.g. to pick up on University of Leicester citations, authors need to put University of Leicester in their address.
  • Currently it is difficult to deal with the variety of research outputs e.g. data, software, plays…
  • New tools emerging e.g. ReaderMeter, which uses Mendeley data to see if your papers have been socially bookmarked.
  • IMPACT of research – very important for REF but citations do not always translate to real world impact – need to go beyond bibliometrics.
  • Some types of citations have greater ‘weight’ in terms of impact e.g. citation in a NICE guideline directly impacts how healthcare is provided.

Enhancing research visibility at Loughborough (Lizzie Gadd)

  • In 2011 Loughborough found it had slid down the THE World Rankings and needed to improve its citation count.
  • The plan to improve citations: library sessions on publishing & promoting research; a VC-commissioned Academic Champion for bibliometrics; promoting the visibility of good research in high-impact journals; recruiting & retaining good researchers; taking citations into account when promoting staff; using ResearcherID and Google Scholar profiles to improve citations & impact; and using the research repository.
  • Training implementation: publish or perish sessions for new academics, lunchtime bibliometrics seminars in Depts/Research groups, 1-to-1 appointments on request and online tutorials on citation tools and impact tools.
  • Plus provide bibliometric data to support staff and promote bibliometrics training through staff conferences, webpages, blogs & newsletters.
  • The Vision for the future = joined-up thinking (work with research office, IT service etc), research visibility focus (databases of research kit, data and publications).
  • Already seeing improved citations.

Some good ideas that could be implemented elsewhere.

Research training will be high on our agenda once we get our Library Research Services team fully in place, headed up by our own bibliometrician Ian Rowlands. I’ll be moving over into that team later this year.

Posted in Meetings, Research Support, Service Delivery, Staff training | Tagged: , , , , , | Leave a Comment »

Mendeley Institutional Edition

Posted by selinalock on 25 April, 2012

Mendeley (the academic reference manager and social network site) have partnered with library suppliers Swets to produce the Mendeley Institutional edition, and I had a webex meeting with product manager Simon Litt to find out more.

Mendeley End User Edition

The end user edition is basically what is already available for free from Mendeley:

  • Desktop reference management software, which allows you to organise and cite a wide range of reference types.
  • Desktop software also allows you to upload, read and annotate PDFs.
  • Desktop links to a web-based system which allows you to synch and share your references.
  • Web system also works as an academic social network with groups etc.
  • 1 GB web space (500 MB personal, 500 MB shared), 5 private groups, 10 users per group

Mendeley Institutional Edition

  • Upgrades the end user edition (normally £4.99 per month) to:
    • 7 GB web space (3.5 GB personal, 3.5 GB shared), 10 private groups, 15 users per private group
  • Upload a list of library holdings (journals) to allow fulltext access for institutional members.
  • Turn on institutional OpenURL.
  • Institutional groups – any Mendeley users signed up with an institutional email will automatically be added to the institutional group & can add further members.
  • Analytics – who’s publishing and reading what.
  • Reading tab – See what your users are reading (adding to Mendeley) by journal title and compare with library holdings.
  • See most read/popular articles.
  • Publishing tab – where your members are publishing.
  • Impact tab – worldwide usage of your members' published articles e.g. most read.
  • Compare your institution with other Mendeley institutions with regard to impact/how widely read your institution's articles are.
  • Social tab – what groups your users are in.
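Turning on institutional OpenURL (one of the features above) means references in Mendeley can deep-link out to the library's own link resolver. As a rough sketch of how such a link is built (the resolver address below is hypothetical; the key names follow the OpenURL 1.0 / Z39.88-2004 journal format), an OpenURL is just a base resolver URL plus standard key-value pairs describing the article:

```python
from urllib.parse import urlencode

# Hypothetical institutional link resolver base URL; each institution has its own.
RESOLVER_BASE = "https://resolver.example.ac.uk/openurl"

def journal_openurl(jtitle, atitle, issn, volume, spage, date):
    """Build an OpenURL 1.0 (Z39.88-2004) link for a journal article."""
    params = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.jtitle": jtitle,   # journal title
        "rft.atitle": atitle,   # article title
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,     # start page
        "rft.date": date,
    }
    return RESOLVER_BASE + "?" + urlencode(params)

print(journal_openurl("Journal of Documentation", "An example article",
                      "0022-0418", "66", "214", "2010"))
```

The resolver then matches those pairs against the uploaded holdings list to decide whether full text is available.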

The main thrust of the institutional edition is the analytics functions that Swets have worked with Mendeley to add. The pricing models are still being worked on, so there is no indication yet of what the price would be.

When I previously reviewed Mendeley (alongside RefWorks, EndNote, CiteULike & Zotero) in 2010/11, the main issue with using it as an institutionally recommended product was that the desktop software needed admin access to be installed and updated regularly on user machines. As far as I can tell this issue hasn't been addressed in the institutional edition, as users would still download the free desktop software from the Mendeley site or just use the web interface.

My questions surrounding the institutional edition would be…

  • Would it be able to act as (and be accepted as) a replacement for EndNote and/or RefWorks? There seems little point in getting the institutional edition for the analytics if our users were not using the desktop/web reference software.
  • Do the analytics give us enough “added-value”?
  • How does the analytical information compare with other types of bibliometrics from IRIS or InCites?
  • Are the analytics only going to be useful to certain disciplines as they currently only look at journal articles and titles?

Posted in Referencing, Research Support, Web 2.0 & Emerging Technologies | Tagged: , , , , | Leave a Comment »

USTLG Winter Meeting 2

Posted by selinalock on 8 December, 2010

This follows on from my report of the USTLG Winter Meeting.

Finding the known unknowns and the unknown knowns, Yvonne Nobis, University of Cambridge.

  • Talked about their development of Science@Cambridge, aimed specifically at researchers (which I know some of our researchers rather like the look of!)
  • Researchers often don’t know what they’re looking for: unknown unknowns, as their research skills might need updating, they’re looking for something outside their field or they don’t know where to begin.
  • Scientists don’t tend to use the Cambridge libraries (over 100 of them so confusing system) and they want everything electronically so looking for a way to meet their needs.
  • Found most visitors to the science library are those looking for historical (print) information, or students wanting a place to study.
  • ~95% of journals are online and ~95% of monographs are still print only.
  • In response to this they will now scan on demand from their own collections for Cambridge researchers (currently a free service as charging would have copyright law implications).
  • As the staff would often need to retrieve these items from storage the scanning has not added too much extra effort.
  • Science librarians at Cambridge do a lot of training of early career researchers.
  • Science@Cambridge contextualises information within a subject area to help researchers start their searching.
  • Includes a federated search option where relevant databases have been chosen (to steer researchers away from just using Google Scholar as they don’t realise what scholar doesn’t index: unknown unknowns).
  • Trying to make resource discovery as easy as possible.
  • Have problems with making eBooks easy to access, especially individual titles on catalogue records.
  • Trialled using chat with subject librarians but it hasn’t really worked, so they’re looking at centralising enquiries more.
  • Training branded through College or Computing Services gets a better turn out than library branded training.

We use a similar idea to Science@Cambridge in our subject rooms, but could we learn more from them when redeveloping our rooms as part of our digital library overhaul? Hopefully using Summon in future will make resource discovery easier at Leicester.


Obviously the most important part of any conference is the lunch provided. This time it was a good spread sponsored by Wiley Publishers, and in a very unexpected place…

Lunch in the Divinity School

Citations Count! Experience of providing researcher training on bibliometrics, citations and Open Access publishing. Kate Bradbury,  Cardiff University.

  • Training in citation data in response to REF raising interest in bibliometrics, funders requesting bibliometric data, help deciding where to publish and to promote work. 
  • Training covers: WoS/Scopus/Google Scholar, looking for data in other sources (e.g. book citations, full text resources which include references), what each database provides e.g. impact factors, increasing citations, using open access publishing and repositories.
  • Format of training: 30 min talk and 1 hr hands-on using workbooks – activities such as finding impact factors, setting up citation alerts, looking at OA resources and using ResearcherID.
  • Also do shorter, tailored talks for Departmental meetings etc.
  • Sessions done for subject librarians, the staff development programme, specific schools/depts (e.g. Comp Sci, Engin, Psychology) and within seminar series.
  • Lessons learnt: avoid too much detail, stay up to date with new database features and REF, emphasise benefits to researchers, it takes time to build interest in training, targeted sessions work best, be flexible & adapt sessions to suit the audience, and be prepared for discussions about the validity and use of bibliometrics!
  • Stance taken: explain how to find information but leave it up to the researchers to decide if it is useful to them, including discussion of pros/cons of bibliometrics.
  • Types of questions asked:
    • How to pay for OA publishing?
    • Shouldn’t we just write controversial articles to up our citations?
    • What about highly cited, poor research?
    • My journal’s not indexed in WoS, how do I get citation info?
    • How to find book citation info?
    • What about self-citations? Will they be excluded from REF?
    • A BMJ article said there’s no observable citation advantage from OA publishing…
    • Can I import articles in WoS into ResearcherID? (can do, but tricky)
    • What is a good H-index to have?
    • Doesn’t the H-index just reflect length of career?
    • Where’s the best place to put an OA article?
    • I use a subject repository, so why also use the institutional repository?
    • I don’t want an early version of my work available…
  • What next in terms of training? – Continue with sessions, support subject librarians to run their own sessions, introduce Bristol Online Survey to collect feedback from attendees, respond to individual follow-up questions and do a separate presentation on OA publishing.


Wiley Publishers: WIREs, Alexa Dugan.

Next up was our sponsor for the day, Wiley, talking about their new product:

  • WIREs = Wiley Interdisciplinary Reviews.
  • Reference work meets journal review article –  a new concept in publishing.
  • Have been finding it difficult to find authors/researchers with enough time to devote to writing traditional reference works, especially as those works do not gain professional recognition, i.e. they are not indexed or cited.
  • WIREs is Wiley’s answer to this: invited content with high quality editorship, drawing on their research journal community ties (so like a reference work), but also managed to get them indexed in major databases and WoS so the authors can get recognition.
  • Each Review has a carefully thought out structure, which is kept up to date with a range of article types e.g. focus (news) articles, opinion pieces, basic reviews, advanced reviews etc.
  • Content is added every two months (so serial like a journal) & articles retain their title and DOIs for citation purposes.
  • One of their flagship titles: Climate Change Review has won several awards already.
  • FREE for the first two years.

Researcher@Library – becoming part of the research cycle, Katy Sidwell, University of Leeds.

  • Leeds, like many of us, have managed to get a certain amount of library training embedded or offered to PhD students, but what about Academics and other Researchers?
  • Started to think about how to support researchers so thought about the life cycle of a research project:
  • Ideas (pre-funding) – Planning (funding application) – Action (research/life of grant) – Dissemination – Application (of research knowledge/transfer) – back to the beginning of the cycle.
  • They got us to think about how we all support these stages of the cycle & feedback (using post it notes – a good bit of interactivity to wake us all up!).
  • What they (and from the feedback, others might do) are:
  • Ideas = library collections, current awareness & literature search training.
  • Planning = identify funding sources & support research bids (though in Leeds this only happens in particular areas as it’s labour intensive and unscalable).
  • Action = PhD workshops, bibliographic management, lit search support, data management advice, user behaviour research, friendly space for researchers.
  • Dissemination = RAE/REF support, etheses online, institutional repository, publications database.
  • Application = intellectual property advice (Business officer), market research for knowledge transfer e.g. patents.
  • Hard for researchers to know about training – where/how to promote?
  • Created a website for researchers to bring together the various things available to them (need user needs analysis to find out what to put there).
  • Researchers wanted a website that was not solely library resources/focused, not tutorial but advice that could be dipped into at appropriate time, simple navigation, no login but not really basic advice – appropriate to their level.
  • Work in progress – need to clarify purpose, look at navigation issues, obtain feedback and roll out across other faculties.
  • Where now? – created Library Researcher Support Group to continue the work and look at how it fits in with the new Vitae researcher development framework.

A good day all round. The presentations from the day can now be viewed at the USTLG site.

Posted in Meetings, Open Access, Research Support, Service Delivery, Subject Support, Wider profession | Tagged: , , , , , , , , , | Leave a Comment »

USTLG Winter Meeting 1

Posted by selinalock on 30 November, 2010

On 25th November 2010 I attended the University Science & Technology Librarians’ Group Meeting at Keble College, University of Oxford. The theme for the day was “The role of libraries in the research process.” I nearly walked straight past the little wooden entrance to Keble College, but was greeted with a magnificent vista on entering…

Keble College

Keble College

An academic perspective on libraries supporting research. Professor Darton, Dept Engineering, Oxford University.

Professor Darton expressed his love of books, talked about his ancestors being publishers of children’s books and having founded the Darton Juvenile Library. He also talked about how he had fought to keep the Engineering Library at Oxford under the control of the Department as he felt it played an important part in their culture.

He had brought in a couple of classic engineering texts and said it was difficult these days for academics to find time to write “classic” types of textbooks, and they were having to find other ways of conveying information to students.

In his time as an engineer he thought that libraries/librarians had moved from being a status symbol (the bigger the library the more knowledge) that was protected and guarded by the librarians for their specific patrons, on to being providers of information which encouraged access for all and finally, these days, being more of an online gateway with librarians as web managers.

He then went on to argue that for science and engineering researchers the library is no longer needed – they rarely use physical texts, there is a huge amount of good quality information accessible via Google (as long as you have the skills to judge quality) and more movement towards open access materials online (e.g. in his area of sustainability). He argued that he would be happy, as a researcher, for there to be a subscription team who oversaw journal subscriptions on behalf of the University, a storage/retrieval service for older print items, and for the sciences to stop funding the expensive physical libraries needed by the arts. Or even a move to a model where all researchers are given a portion of the library funding to “buy individual articles on demand” instead of having a central library service! As you can imagine this was a controversial point of view…

The audience asked if he thought the same applied to undergraduates and he thought that up until their 3rd-year projects they might have different needs, but by project time they might still need a budget to buy relevant articles.

When asked if he saw any role for librarians he thought there was still an important role in training people to be critical of information, and he recommends library training to his students. Also, journal subscriptions would be more cost effective than buying individual articles, so perhaps librarians should become/be seen as skilled negotiators. Librarians need to show how they can help researchers.

Professor Darton was also critical of the current peer-review system; as an editor of a journal he was finding it very hard to find good reviewers. He suggested that publishing the names of the reviewers might improve the quality of the reviewing. He was also surprised to find younger researchers don’t have a concept of what a journal is, as they have never held a print copy in their hands.




Update on REF, Kimberley Hacket (REF Team)

Main points of interest:

  • REF will be a process of critical review and some will include bibliometric information.
  • 3 elements: Outputs (research) ~60%, Impact of research ~ 25% and Environment ~15%.
  • 4 outputs per researcher (fewer if early career).
  • 36 sub-panels looking at different subject areas.
  • Outputs selected by HEI
  • All types of outputs can be selected as long as they conform to REF definition of an output, including open access outputs.
  • Citation information can be used by a sub-panel if they wish. However, it will be used to inform expert review and not on its own.
  • If panels request bibliometric information then it will be supplied by REF (not by the institution) and will conform to agreed simple metric methods.
  • Panels being selected and will be announced early 2011.
  • Impact is not just economic but also social, quality of life etc.
  • Do not want to discourage curiosity-driven research.
  • Data collection will be built on the RAE system – pilot in late 2012, live in 2013.
  • Assessment in 2014 – results by end of 2014.
  • Any bibliometric data used will come from a single supplier appointed by REF.

Old Bodleian Library

Research Metrics, Anne Costigan, University of Bradford.

Anne talked about looking at metrics with researchers and the issues around metrics:

  • Metrics can be used at author, article, journal or institution level – journal level most known.
  • Citation metrics available from Web of Knowledge, Scopus & Google Scholar.
  • Journal Citation Reports (WoK) – impact factors most famous – attempts to measure importance and quality of journal.
  • Citation Reports usually ignore books, conferences and non-journal research information/citations.
  • Researchers tend to get hung up on the journal impact factor – seen as a “league table of journals”. However, be wary: different subjects have different numbers of journals listed, and the impact factor can change over time, so look at the trend and encourage people to also look at ranking.
  • Often asked “what is a good impact factor?” = how long is a piece of string? It varies tremendously by subject e.g. in a specialist area many citations might be missing as journals are not indexed, or papers are in conferences etc.
  • Self-citation can skew figures.
  • Review journals tend to be very highly cited.
  • Editors have been known to insist that articles always cited articles from within the same journal to inflate impact factor.
  • Controversial papers are usually highly cited and can skew figures (could be a “bad” paper).
  • Other options to look at: Eigenfactor (WoK) – a complex algorithm where citations from highly ranked journals hold more weight. H-index e.g. 34 papers which have at least 34 citations = H-index of 34. The H-index does favour those with a longer career.
  • Article metrics – times cited (WoK, Scopus, Google Scholar) – different results from each. Scopus & Google Scholar tend to include more non-journal citations.
  • Author metrics – WoK can create citation report & remove self-citations. Problems with identifying papers belonging to certain authors (e.g. similar name to someone else.)
  • Can use ResearcherID (free service via WoK) to register articles under your author name.
  • Scimago – uses Scopus data for free.
  • What about repositories?
  • MESUR – combines citation & usage data.
  • Rise of Web2.0 – vote for your favourite article?
  • Researchers like easy to understand metrics e.g. H-index.
  • Uses of metrics – where to publish, what to subscribe to, in recruiting researchers, at Dept or Institutional level for marketing…
  • No measure is perfect – always look at a combination of things.
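Two of the metrics above are concrete enough to pin down in a few lines. Here is a minimal sketch of the H-index and the classic two-year journal impact factor, with invented citation figures purely for illustration:

```python
def h_index(citations):
    """H-index: the largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still has >= rank citations
        else:
            break
    return h

def two_year_impact_factor(cites_to_prev_two_years, items_published_prev_two_years):
    """Impact factor for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / items_published_prev_two_years

# The example from the talk: 34 papers with at least 34 citations each = H-index 34.
print(h_index([40] * 34 + [5, 3, 1]))  # 34
print(two_year_impact_factor(200, 80))  # 2.5
```

The H-index’s bias towards longer careers falls straight out of the definition: it can never exceed the total number of papers published, however well cited they are.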

Posted in Meetings, Research Support, Service Delivery, Wider profession | Tagged: , , , , , , | 1 Comment »

Understanding Open Access and Bibliometrics – RSO/Library event

Posted by gazjjohnson on 11 February, 2010

As I mentioned previously the Research Support Office and the Library are running an event for academics and postgraduates here at Leicester on these hot topics. Here’s a more detailed run-down of the event programme.

Thursday 25th February 2010 2.00p.m. to 4.00p.m.
Lecture Theatre A, Physics Building

Overview of OA and OA at Leicester: Gareth J Johnson, Library
This session examines current developments in scholarly publication and open access, focussing on its impact on the Leicester academic community.

Research Funders’ OA mandates: Juliet Bailey, RSO
This session examines the policies of the major research funders on open access and outlines the implications for all researchers.

Q&A Session

An Introduction to Bibliometrics: Nancy Bayers, Library
This session introduces the concept of bibliometrics and its applications in evaluating research and planning for the Research Excellence Framework.

Bibliometrics in the REF: Juliet Bailey, RSO
This session examines how HEFCE proposes to use bibliometrics in the Research Excellence Framework and the changes proposed to scoring of outputs.

Q&A Session & Discussion

All sessions are open to all staff and research students but they are particularly suitable for academic staff and researchers. No registration is required.

Posted in Leicester Research Archive, Open Access, Research Support | Tagged: , , , , , | 1 Comment »

Ranking open access with twitter

Posted by gazjjohnson on 11 May, 2009

Spotted over on Gerry McKiernan’s blog, a website that ranks arXiv papers by their popularity on Twitter. I think this is a really interesting idea, and one I’d love to use for the LRA; but I suspect that a) not that many of our papers are being discussed and b) not many people who are using our papers are on Twitter anyway.

It’s really an order of magnitude thing: the LRA has 4,000-ish items, arXiv has over 536,000 – we’re not even 1% of their size, and doubtless of their traffic either. That said, this kind of qualitative real-time metric is a bit different to the usual quantitative ones that we seem to be relying on for most repository measurement.

That said, we’ve got to consider that not everyone who reads these papers is talking about them, and taken on their own these metrics hold only a certain value. But then, isn’t that the case with every metric?
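The ranking idea itself is simple: count how often each paper is mentioned and sort. A toy sketch, with made-up item IDs and tweets standing in for harvested mention data:

```python
from collections import Counter

# Hypothetical sample of (repository item ID, tweet text) mentions; in practice
# these would be harvested via a Twitter search for links to the repository.
mentions = [
    ("lra/123", "Interesting paper on open access"),
    ("lra/456", "Great dataset here"),
    ("lra/123", "Worth a read"),
    ("lra/123", "RT: Worth a read"),
]

def rank_by_popularity(mentions, top_n=10):
    """Rank repository items by how often they are mentioned on Twitter."""
    counts = Counter(item_id for item_id, _ in mentions)
    return counts.most_common(top_n)

print(rank_by_popularity(mentions))  # [('lra/123', 3), ('lra/456', 1)]
```

A raw mention count like this makes no distinction between original discussion and retweets, which is part of why such metrics hold only a certain value on their own.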

Posted in Leicester Research Archive, Web 2.0 & Emerging Technologies | Tagged: , , | 2 Comments »

Thomson Reuters InCites Bibliometrics Meeting (part 3)

Posted by gazjjohnson on 25 March, 2009

[Continued from parts 1 and 2]

Emma Swann (Target Account Manager) started to talk more about InCites. Previously TR was able to produce these kinds of reports as on-demand CD outputs. InCites is the Web based version of this, in basic and premium versions.

Basic offers citation reports and indicators (benchmarking contrasts with other institutions). Premium is a more customised service, which is set up working with an institution. She demonstrated the information that the system would pull out, which was rather impressive – stuff that would take me several days to generate could be produced much more readily. I was glad to have worked at the coal face generating this data, as it enabled me to readily see the importance of the information produced. That said, there were some metrics I’d produced [link] that weren’t evidently produced here. Possibly InCites could still produce them, but I’d need some hands-on time before I could say that. All the same, the time the system would save in generating this information would allow the manual discovery of the rest if push came to shove.

Emma showed how it was possible to generate custom benchmark reports for a range of institutions at an author, discipline or institutional level. It was even possible to rank all of an institution’s researchers easily.

Basic InCites package includes:

• 1 standard citation report with all current metrics back to 1981
• One standard indicators report
• Quarterly data updates
• Internal distribution only

Premium package includes all this plus:

• Allows posting of data on external websites
• Can use ResearcherID (RID)
• PubMed ID match
• Match retrieval service with WoS

In terms of cost (depending on institutional JISC band A-E):

• Basic
  • US$10-25k
• Premium
  • US$16-40k
• API Citations
  • US$6-15k

Wellcome talked about their funder’s point of view, and how identification of authors was only the start. What they wanted was a system that would interact with their own databases, allowing them to call up extensive data on researchers they fund – in essence answering the questions “Is this person worth what we are funding?” along with “Are there areas of research excellence that we are not funding but should be?”.

Comments from the HEFCE representative continued the discussions about unique RIDs and carry-forward between institutions. UCL commented that with 1,100 address variants it was a real problem IDing researchers. Some researchers identify more readily with a unit or division than with an institution, and thus ensuring all are covered can be a problem.

The HEFCE rep suggested that bibliometrics probably won’t be used for Arts/Humanities and many social sciences reviews, noting that the finer detail of bibliometrics has a long way to go in being resolved. For example some subject areas publish in low level journals, because the whole field is publishing in these journals – it’s all relative.

HEFCE also commented that the 2010 bibliometrics exercise may be developmental rather than a full review; while this is not a certainty, the time taken for the pilot was far in excess of what they expected. They have a commitment to run something, but it might not be what everyone initially expected. It should be possible for institutions to see where they stand with the first real funding-related REF taking place in 2013. Autumn consultation workshops will be run in 2009 to inform REF, and then later workshops to inform submission guidelines. A comment was made that provided your publications management system is robust and embedded in custom and practice then you may have less to worry about w.r.t. REF.

On the subject of competing products that do a similar job to InCites, the representative from UCL suggested that there were other resources they were looking at, but declined to name them.

Finally HEFCE talked about the CERIF metadata schema, which may well be a REF requirement. It has high use on the continent, but much lower in the UK; Scandinavian countries have been using it a lot, for example. EuroCRIS and JISC are involved and advocates of it. It was noted that a number of Scottish universities are piloting it.

As you’ve seen in this and the previous parts of this post, this was an especially information-rich day with a lot of useful, and sometimes surprising, information coming out. What does this mean for the REF and bibliometrics at Leicester? I think it’s too soon to say, but it certainly means I’ll be having a lot more conversations about them in the near future, I suspect!

Posted in Research Support, Wider profession | Tagged: , , , , , | Leave a Comment »

    Thompson Reuters InCites Bibliometrics Meeting (part 2)

    Posted by gazjjohnson on 25 March, 2009

    [Continued from Part 1]

    Jon continued mentioning that an API is available to extract the data from InCites– although it does need to be enabled by Jon or Emma as a request. Another ability of the software is the creation of create researcher ID (RID) profiles (unique). Using the WoS article matching tools it is possible to find records and create links to them. From this you can create local database of publishing output.

    A comment from the UCL representative was that academics haven’t been that keen to sign up for these IDs and noted that there are other unique identifiers such as HEFCE. Jon countered that with these IDs it is much easier to ID researchers and works. A short debate followed discussing the practicality of academics keeping these unique IDs between institutions. It was felt that for simplicities sake most institutions would issue new unique IDs to new staff, which rather made the usefulness of this aspect of the service somewhat diminished.

    For repositories that use the UT tag on various records, it is possible for InCites to make use of local data (if your research IDs and data are clean/clear enough).

    Can evaluate citations counts from institutional repository contents possible to purchase other institutions data, but can’t expose the information. There was a discussion noting that what the REF wants is driving this as a central process, but unclear what HEFCE wants – hence everyone is adopting a wait and see until the June results of the REF pilot are presented. It was noted that whilst the word bibliometrics is much muttered, but in terms how, what and why remains very unclear within most institutions.

    The question of citations from patents was raised (e.g. Derwents Citations Index) – would these be of value? The answer wasn’t currently clear. The question of staffing challenges was raised, which whilst the Thompson tools would help wouldn’t alleviate all the challenges.

    UCL spoke about their experience as a pilot for the REF – the experience has been a valuable one and a bit of a shock too. Went beyond where just REF wants them to go, and where the data would be of see and found that the systems they had weren’t good enough and neither was the data. More pressingly there was a need for a cultural overhaul in how researchers record their research output and usage. As a result they have a separate a project to address this.
    The HEFCE rep present commented that academics still keep sourcing their own research data from different sources for different purposes, and that this was not necessarily a good thing.

    [Continued in part 3]


    Thomson Reuters InCites Bibliometrics Meeting (part 1)

    Posted by gazjjohnson on 24 March, 2009

    On Monday I travelled down to London to a meeting organised by Thomson Reuters (TR), providers of the ISI Web of Knowledge and Journal Citation Reports, to name but a few key resources. Principally the day was to introduce and give an overview of their new InCites bibliometrics product (to be launched in May 2009), but thanks to the mix of people there from universities (myself, Kingston, UCL) and funders (Wellcome, HEFCE) some very interesting discussions arose around the subjects demonstrated. All of this naturally relates to the REF.  As this is a long report on a very full discussion, I'm going to break it up into multiple posts.

    The day and the background to the product were introduced by a senior manager, who talked about the breadth of coverage of TR's products, noting the importance of everyone having access to the same quality data for evaluation. He mentioned that the acquisition of Evidence (based in the UK) has allowed the provision of services and tools alongside access to the data itself, even the generation of customised reports. Finally he talked about TR's development of grant application systems as one of their next major launches.

    Jon Stroll (Key Account Manager) presented on aspects of research analysis, including data integration within institutional workflows such as repositories. He focussed first on the Web of Science (WoS), mentioning that the conference database is now also a citations database, although Jon was unsure whether the REF will take account of this data. In terms of quality, TR uses manual and machine harvesting of each article's data. InCites is essentially a benchmarking citation analysis tool. He noted that the HEFCE pilot, the US National Science Foundation and the EU are all currently using data from Thomson Reuters.

    Key questions that InCites can help answer include:

    • Overall published output over 10/20 years
    • Impact – how frequently has that output been cited?
    • Which papers are the most influential, and how do they compare to the benchmarks?
    • Who are the top (h-indexed) authors, and therefore where should research funding be focussed?
    • What research do our researchers draw on – are they citing the right material?

    The extended licence allows the use of ISI bibliographic data within IRs (populating the metadata using ISI WoS data). The current policy is that you can use the data and expose it externally at no additional charge; the only mandatory requirement is a WoS subscription. Susan Miles commented that the repository community is currently unaware of this and would really benefit from knowing.

    [To be continued in Part 2]


    Papers you might like to read

    Posted by gazjjohnson on 20 March, 2009

    Read the last one with an awareness that it was prepared by the publishing sector, so there is a discernible bias in some of the considerations (perhaps no more than in those from the other side of the fence, but you're forewarned!).


    In the Shadow of Bibliometric Analysis

    Posted by gazjjohnson on 19 March, 2009

    As anyone who’s following me on Twitter knows, the last week or so has been rather dominated by my work on bibliometrics.  Let me state up front: I’m not a bibliometrician (sounds worryingly close to mathemagician to me) nor a statistician; rather, I’m a former scientist who spent a lot of time working with stats in another life.  I sat in on a meeting about statistical teaching last week which served to rather pointedly remind me of all the things I used to know how to do (linear regression, chi-squared, two-tailed t-tests etc.).

    On the other hand I’ve always quite enjoyed data collection and simple analysis; when I was a library researcher at the University of Warwick I spent quite a bit of time doing just this.  So while any outputs I produce aren’t going to be stunningly complex, they should help people to get a picture based on fact.  This, and my role as the LRA personage involved in the Research Excellence Framework (REF) preparations, are doubtless why I was tapped by Louise to run a bibliometric profile of the Chemistry department.

    Bibliometrics, in case you didn’t know, is the quantitative analysis of publications and other texts.  In this case I was asked by the department to run a sample profile of their publication outputs, in an attempt to establish where they stand in relation to the rest of the academic world.  In practice this meant taking a sizeable sample (half the departmental academics) and looking at which journals they’ve published in over the last 9 years (2001 to date).  This is a key range for a number of reasons: firstly, the suggestion that the REF will take account of publications back to this date; secondly, the Journal Citation Reports (JCR) only go back to 2000 online, so it would be harder work to analyse publications before then.

    Now, whilst the results are naturally confidential at this point, I can briefly tell you what I sampled:

    • Article outputs – the number of articles produced and indexed within the time frame.
    • Citation counts – the number of references to the articles produced.
    • h-index – the Hirsch index quantifies both the actual scientific productivity and the apparent scientific impact of a scientist. It is increasingly viewed as a major indicator of academic esteem; anything over 100-120 and you’re into Nobel laureate territory.
    • Journal Impact Factors – a measure of how often all articles in a specific journal have been cited; usually the most common “how important is this journal?” value.
    • Cited half-life – the number of years, going back from the current year, that account for half the total citations received by the cited journal in the current year.
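    The cited half-life is perhaps the least intuitive of these, so here is a minimal sketch of my own (not an official JCR calculation – JCR interpolates to fractional years, whereas this counts whole years): take the current year’s citations to a journal, broken down by the age of the cited articles, and count back until at least half the citations are covered.

    ```python
    def cited_half_life(citations_by_age):
        """citations_by_age[0] = citations received this year to articles
        published this year, citations_by_age[1] = to last year's articles,
        and so on. Returns the smallest number of years, counting back from
        the current year, that accounts for at least half of all citations."""
        total = sum(citations_by_age)
        running = 0
        for years_back, cites in enumerate(citations_by_age, start=1):
            running += cites
            if 2 * running >= total:
                return years_back
        return len(citations_by_age)

    # A journal whose citations skew towards older articles has a longer half-life:
    print(cited_half_life([10, 20, 30, 20, 20]))  # -> 3
    ```

    A long cited half-life suggests a journal whose articles stay useful for years; a short one suggests a fast-moving field.
    
    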

    From these I’ve been able to profile the departmental research publications as a whole, as well as getting an idea of the impact of the individual contributions to it.  I’m quite looking forward to discussing the results with Chemistry in the near future.

    The biggest challenge (data collection aside, which is currently very long-winded) is knowing when to stop.  I’m still very new to bibliometrics, and my inner scientist kept suggesting other ways to contrast or analyse the data; essentially I could have been at this for weeks.  And since we’re still not quite sure what metrics the REF will be using, there didn’t seem much point in going too far with the first attempt.

    There’s also the question of benchmarking.  Raw stats on our departments are all well and good – but where do they stand contrasted with the rest of the world?  That’s something I might need to follow up on, but it would likely be a far more time-consuming operation than a week’s work.  For now the chemists might just need to trade notes on the h-indices held by comparator academics, in contrast with their own.


    Finding an author’s h-index – a step by step guide

    Posted by gazjjohnson on 6 February, 2009

    The h-index (Hirsch number) is a metric that is increasingly becoming of interest to researchers, especially in the light of the REF.  An h-index is “a number that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist”.  You can work it out manually, but to be honest you’d need to be mad or a bibliometrics fiend to want to.
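    For the fiends among you, the manual calculation is at least simple to state (a quick sketch of my own, not a Web of Knowledge feature): sort the papers by citation count, highest first, and find the largest h such that the h-th paper has at least h citations.

    ```python
    def h_index(citations):
        """Return the h-index: the largest h such that the author has
        h papers each cited at least h times."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # the first `rank` papers all have >= rank citations
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times: four papers have at
    # least 4 citations each, but not five with at least 5.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4
    ```

    Doing this by hand for a prolific author means ranking hundreds of papers, which is exactly why the automatic route below is preferable.
    
    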

    I’ve been asked by a few people how to find it, and each time I totally forget how!  So in the light of this, here’s my step by step guide to discovering an author’s h-index automatically using that wonderful Web of Knowledge tool!

    1. Go to Web of Knowledge  and click on the big green button
    2. Click the Web of Science tab at the top of the screen
    3. Enter the author’s name in the format surname initial* (e.g. raven e*)
    4. Change the search option from the drop down menu to Author
    5. Click Search
    6. At the top right of the results is the option to Create Citation Report. Click this.
    7. The analysis appears, along with the person’s relative h-index.

    It seems simple, but I was scratching my head using WoK until I discovered that I needed to use just Web of Science, not the whole of WoK, to get the value.  And so, now you know!  It is worth noting that you do have to be fairly exact in your author naming conventions, as the citation report will not run for more than 10,000 result records.

    I did wonder whether, between steps 6 and 7, selecting individual papers from the list of results would change anything, but it appears that this has no effect on the citation analysis; for example, selecting 5 papers from a list of 120,000 doesn’t enable me to run the citation report on just those papers – it appears to run in an all-or-nothing manner.  Or maybe there’s a trick here I’m missing?
