UoL Library Blog

Develop, debate, innovate.

Archive for the ‘Technology & Devices’ Category

EMLIP Meeting (22 January 2016)

Posted by JackieHanes on 1 February, 2016

I attended a meeting of the East Midlands Legal Information Professionals group on Friday 22 January. The meeting was held in the offices of Browne Jacobson, in a great location by the canal and courthouse in Nottingham. They also provide visitors with wonderful coffee and cookies on arrival – very welcome given the inclement weather. Tom Laidlaw and Simon Gaunt of LexisNexis were in attendance to give a demonstration of their new product, NewsDesk.

NewsDesk is a media analysis tool from Moreover Technologies (recently acquired by LexisNexis). The NewsDesk service enables users to search news (free and subscription content) and social media, and to set up email and RSS current awareness alerts. The technology powers the local news sections on the BBC website, among other major clients. In legal practice, searches could include news about the firm, clients, competitor firms, and practice areas. This is a key service development area for many law librarians in firms and industry at the moment. The search interface was modern and intuitive – a world away from the Nexis interface. The target audience was obviously law firms, but it would be useful to any library offering research intelligence services to academics or departments.

After the meeting, LexisNexis treated us to a great pub lunch at the CanalHouse, which I imagine would be a lovely venue in the Summer …

Posted in Law, Service Delivery, Technology & Devices | Tagged: , | Leave a Comment »

Visit to the University of Virginia and North Carolina State University libraries

Posted by benwynne2 on 28 June, 2013

Thanks to the generosity of my employer, I had the opportunity to visit the libraries of the University of Virginia (UVa) and North Carolina State University (NCSU) during the week of 17 June 2013.

Both libraries have a strong track record of digital library innovation of different kinds. The University of Virginia is a leader in digital humanities, while NCSU has gained a reputation, in particular, for creating user-friendly web interfaces to library services and resources.

University of Virginia

Scholars’ Lab

UVa Libraries is home to the Scholars’ Lab – a service which supports and enables the use of technology in humanities scholarship by the postgraduate and research community at the University. There are three strands to the service:

  • a ‘walk in’ facility where students can use high end computers and applications (GIS and statistical applications, for example) with access to specialist help (provided by fellow, experienced students employed by the Library)
  • a programme of workshops and training opportunities. In particular, the Scholars’ Lab runs a graduate fellowship programme where about 6 lucky students each year are trained and supported to work together on a particular project – developing valuable technical and ‘soft’ skills (including project management)  in the process.
  • a research and development team of Web developers – from a research background – who work with academic staff on development of specific projects.
Scholars’ Lab

This all adds up to an impressive service. The Lab benefits from some endowment funding and, unusually, the research and development team is funded from the core Library budget – not from short term, grant funding.

Recent work includes the creation of Neatline – a platform for creating digital exhibits as overlays on maps with timelines. This is just the sort of thing we wanted to try out as part of our Jisc-funded Manufacturing Pasts project – but we ran out of time and didn’t have a suitable platform. Neatline is built on Omeka – a content management system created at George Mason University. Both are open source and if you have – or can have – access to a LAMP server (which I eventually did for Manufacturing Pasts) – it doesn’t sound too difficult to try them out …

SHANTI

UVa is also home to SHANTI – the Sciences, Humanities and Arts Network of Technological Initiatives. SHANTI isn’t part of the Library but is based in the ‘main’ library – the Alderman Library – as is the Scholars’ Lab.

SHANTI provides practical support and guidance for researchers who want and need to use information technology to carry out research – but who aren’t ‘techies’. The resources it has created include a Knowledge Base – which includes a suite of software tools – many of which anyone can use.

Digital Media Lab

The Digital Media Lab is part of the Library and provides an impressive range of resources and support for use of multimedia by the University community – including creating videos, large scale data visualisation, a ‘telepresence’ lab and use of video clips for teaching. A lot of the technology is Mac based.

The Lab is based on a newly refurbished floor of one of the site libraries which is gradually being redeveloped as the ‘learning and teaching’ library (this redevelopment also includes the provision of social learning spaces).

Digital Media Lab at the University of Virginia Library

The Lab has its origins back in the University’s audiovisual service which became part of the Library many years ago. It has evolved to become more about the creative use of technology in learning and teaching – than the simple provision of hardware and software as such – although, clearly the two need to go together.

While many libraries provide high end, ‘self service’ multimedia facilities – providing an expert, staffed service of this kind is unusual.

Staffing

Compared to many UK university libraries – and certainly compared to us – UVa Library is big – with 220 staff, 11 libraries and a complex structure.

That much is predictable for a major, American university library.  But some aspects of how the Library is organised are less obvious.

The people I met were from a diverse range of backgrounds – the outcome of a decision over ten years ago to seek applications for vacancies from both formally qualified librarians and other relevant professions.

Learning commons floor of the undergraduate library at the University of Virginia

The head of the Scholars’ Lab is an ‘academic’.  The Deputy University Librarian comes from an IT background (she joined the Library to head up its technical services, originally).  The head of the learning and teaching focussed library is a learning technologist.  A recent appointee to the University’s very impressive Special Collections Library has a background – amongst other things – in the rare books business.

Some recent senior retirements and resignations have led to a decision to ‘flatten’ the structure – removing some second tier posts and bringing the managers of some of these specialist, newer services into the management team.

NCSU

NCSU Libraries won an award from the ALA for its Web site in 2011.  In 2010, it won an award for its Library Course Tools project.  Back in 2000, it won the ACRL Excellence In Academic Libraries Award.

So, what has enabled NCSU to sustain this consistent record of service development and success?

Staffing and culture

Like UVa, NCSU Library is a big department – with about 220 staff and a budget of about $20m.  The University has 35,000 students.

Again, like UVa, the Library has quite a complex staffing structure.

One thing which is notable about this structure, is that there is both a Library IT and a Digital Library Initiatives (DLI) team.  Unusually, also, the Library IT team runs both servers and storage for the Library – not just Library specific applications (this wasn’t the case at UVa Library where – like us – they see servers and storage as clearly being part of central IT infrastructure).

I met some members of the DLI team.  As the team name suggests, their focus is on developing and implementing new services – such as the Library Course Tools service noted above.

The team has existed for about 12 years – and grew out of a small service which the Library had created to support use of GIS and geospatial data.

Banner promoting the mobile app tour of the Hunt Library – one of the DLI team’s mobile apps

Most members of the team are librarians – who have become skilled Web Developers during the course of their careers.  As librarians, they understand the context within which they are working and the services that are being provided – and this understanding combined with the technical skills clearly makes for a powerful combination.  This is also true of some of the members of the Library IT team – with the person responsible for the specification and installation of the very extensive IT facilities in the new Hunt Library (below) being a qualified librarian with an Arts background (who then developed a specialism in IT).

NCSU has a ‘Library fellowship’ programme.  This means a number of two year, fixed term posts which are open to newly qualified library professionals.  Postholders are based in a ‘home’ department and also work on a project.  Some of these projects are very significant.  For example, one Library Fellow is developing a Web based application for browsing the contents of items in the Hunt Library’s new ‘bookBot’ (see below).

About 50 people have been through this programme since it started.  Interestingly, many members of the DLI team originally joined the Library through this route – so, it clearly seems to have worked as a way of attracting capable, highly motivated people who – crucially – are looking for on-going opportunities to learn and develop on the job.

I was interested to find out how the DLI team communicates with other teams. The picture that was painted was of lots of horizontal communication, i.e. between teams. Ideas for service development are as likely to emerge this way as from the ‘top down’. They said this works because individuals take responsibility for making communication with their colleagues work – they don’t wait for a ‘manager’ to do it for them. Later on I spoke to a member of staff in a public services, student-facing role – who sang the praises of the DLI team – so she clearly saw them as student-focussed and helping her to do her job.

There is still structured, organised decision making, because there needs to be. But they have a pragmatic, straightforward process for specifying and agreeing projects that are going to be resourced – taking a ‘two sides of A4’ approach to make sure objectives, timescales, responsibilities etc. are clear (something we have tried to do consistently in recent years).

Hunt Library

The Hunt Library is a major development for the Library, the University and the evolving concept of what a ‘university library’ is and what it is for.

Hunt Library entrance area, North Carolina State University

The Hunt Library opened in January 2013.  It cost $110m and, so, represents a huge investment by the University (and its primary funder the state of North Carolina).

It joins the University’s other primary library – the Hill Library – which dates from the 1970s and is a ‘traditional’ ‘book tower’ library – lots of shelves, lots of floors, lots of single study spaces (although in 2011 the entrance floor of the Hill Library was totally redesigned in ‘learning commons’ mode).

The Hunt Library is on the university’s technology park – which is also the home of its large Engineering and Textiles teaching programmes and research (NCSU is – largely – a science and technology institution).

There are a lot of things about the Hunt Library that you would expect in a modern library.

  • lots of natural daylight
  • lots of social learning space of different kinds (including 100 group study rooms!)
  • high quality interior design and fit out
  • single integrated service point and staff ‘roving’ to provide help at point of need
Some objects created using a 3D printer

Where the Hunt Library is really different is in the scale of the IT facilities it provides.  These go way beyond access to desktop PCs/Macs and wireless networking to include:

  • lending of a huge range of equipment – and accessories – including laptops (of different kinds), high end filming and photography equipment, storage devices etc.
  • data visualisation lab with very high resolution screens
  • 3D printing
  • ‘creative’, multimedia lab which includes creation of virtual environments
  • a gaming lab

The technical facilities aren’t just used by the engineering students etc. but also by their Arts and Social Sciences departments (they do exist).

The bookBot at the Hunt Library, North Carolina State University

The Hunt Library is also about books – but most of these – 1.5m – are stored away in an automated, high capacity, racked storage system (the bookBot!).  Users request items through the Catalogue and they are delivered to the Hunt Library service point within about 5 minutes (some staff intervention is required).  This system cost about $4m to install.

What about staffing such a facility?

No new money was available to staff this library – so existing staff have been allocated between the Hunt Library and the Hill Library. Students are employed to help at the Hunt service point and with the bookBot. There are at most 4 people on duty ‘front of house’ – between 10am and 4pm. So, a lean front of house – which reminds me of the Information Commons at Sheffield. The Library is open 24 hours – with staffed services continuing overnight (two Library staff employed for the purpose and a student helper – a model they already had at the Hill Library).

While use of some of the high end facilities is by appointment with specialist staff, most of the facilities can be used directly by students and they have found that students have needed very little ‘training’ to use them.

So what?

So, you visited UVa and NCSU – so what?

This clearly was a great opportunity for me personally as I have long wanted to see something of the large, North American university libraries in action (because, one way or another, what happens in the North American academic world has a huge influence on us and we are almost entirely dependent on library systems and resources provided largely for the North American market).

Data visualisation lab at the Hunt Library, North Carolina State University

But there are also some specific questions which I think we could realistically ask ourselves based on the experience of these libraries – despite the fact that they are clearly much larger and much better resourced than we are (although they don’t necessarily support many more students than we do).

  • How do we provide the Web development expertise – focussed on library services and context – which we are going to need to develop our Web services further? (not everyone may agree with me on this – but I see this as absolutely essential and I don’t think that the advent of cloud based services reduces the need.  We are still going to need to integrate services and build services which draw on disparate, underlying services.  That is a large part of the ‘added value’ that we can offer our users);
  • What opportunities do we have/can we create to attract technically able, highly motivated, early career professionals and then develop them on the job?
  • How do we improve access to generic software tools/solutions for digital scholarship/humanities projects at Leicester – including exploiting the tools identified/created by SHANTI, George Mason University and others? (there is a Web developer need here as well – currently the subject of a bid to the University’s Research Infrastructure Fund which Simon Dixon and Dan Porter-Brown have put together).

I’d be interested to hear your views.

Posted in Digital Strategy & Website, Research Support, Technology & Devices | Leave a Comment »

Lecture capture

Posted by Andrew Dunn on 24 April, 2012

Tony Churchill gave a presentation at DL Forum on Tuesday 24/4/12 on lecture capture.  He talked about a project funded by Echo 360 – a supplier of lecture capture software.  The project looked at uses of lecture capture software beyond simply recording and posting lectures for students to revisit.

The project looked at taking recorded lectures and cutting them up into 15 minute snapshots which can then be used in a subsequent year to support students’ learning. The snapshots could be posted in VLEs before face-to-face lectures to provide students with background knowledge and free up time in lectures for more interaction and discussion. Recordings of face-to-face lectures can be used to support DLs.
Short snapshots of lectures can be made publicly available and used as effective recruitment tools.

Denise Sweeny reported on a lecture capture project going on at the University of Leicester at the moment.  Using Adobe Connect and/or open source software OpenEyA (see www.openeya.org for more information) lecturers from Media and Communication and from Chemistry have captured 5 hours of UG lectures and 12 hours of PGT lectures and have posted them in Bb with no guidance or instructions on how students should use them.  This term they will measure use of the captured lectures using Bb Analytics, focus groups, an online questionnaire and extended interviews.  They want to measure how often the lectures are accessed and how students use them.  They will also gather data on student demographics and their preferred modes of study.

If you want help and advice on capturing your own teaching sessions, contact Simon Kear spk7@le.ac.uk in BDRA.

Posted in Projects, Research Support, Service Delivery, Subject Support, Technology & Devices, Training, Web 2.0 & Emerging Technologies, Wider profession | Tagged: , , , | Leave a Comment »

JISCrte End of Projects Event Feb 2012

Posted by gazjjohnson on 10 February, 2012

Friday 10th Feb saw me attending this end of project event at the rather nice Nottingham Trent Conference centre.  What follows are my notes from the day (typed whilst at the event) so apologies for any typos!  My thanks to the RSP for facilitating the day.

Balviar Notay gave an overview of the JISCrte programme to start the day. There are a fair number of projects in this programme, and while I had heard of some of them I’d certainly not heard of all of them. Is that a flaw in the projects themselves, or perhaps promotion and awareness wasn’t a core part of their agenda? Certainly, looking around the room today there are very few people present who are not involved directly in these projects – a bit of an echo chamber/silo problem – or should they all be working more closely with UKCoRR? Balviar did flag up the work of the UK Repository Net+ project and its innovation zone, something that I think everyone in the UK repository community will be working with increasingly over the next two years. RIO Extension – mapping the repository metadata requirements – was also flagged up; a project about which I went to a very interesting meeting on Wednesday with the RCUK, JISC and other people.

Marie-Therese Gramstadt was up next, talking about eNova, which worked on enhancing the MePrints tool. Interestingly this is an EPrints tool; once again, in the UK, DSpace repositories feel a bit outside the room. DSpace is the most popular repository platform in the world, but in the UK the Southampton-based EPrints dominates the community. That is not to say that there are not lessons to take away from this, but they aren’t products that we can directly apply at Leicester.

Interestingly, MePrints appears to offer the functionality for individual researchers (a dashboard of sorts) that I would dearly love to introduce on LRA – essentially staff profile pages.

Next Beth Lunt from DMU talked about the EXPLORER project – starting off by talking to their academics and discovering that many of them were unaware of the repository (something I’ve found sadly familiar). The project then went on to bring about a number of developments for their DSpace repository – although adapting EPrints code isn’t possible, as the two systems are not compatible at all. Part of the upgrade is to KULTURise the repository. DORA now has a UI that is much nicer than out-of-the-box DSpace. Bitstreams in DORA also now have thumbnails of the objects within them, so you can even see the front page of a PDF.

Interestingly they have improved name authorities, but in a way that sounds like it wouldn’t work with a CRIS like ours. This is a shame, as standardising name authorities has long been a holy grail for the LRA. Indeed, one thing that is clear is that being linked to a CRIS brings new advantages in terms of population, but it also introduces considerable limitations in terms of how much development and customisation you can do with the repository. Given that a lot of the projects I’ve heard about today treat repositories as single objects, not as part of an integrated institutional information infrastructure, this is a bit of a concern.

After tea Jackie Wickham spoke about the RSP Embedding Repositories guide and self-assessment tool, stressing the importance of sharing research with the world and raising the university’s profile globally. They looked at three main ways of embedding repositories: the first is where the repository acts as a publication database (e.g. where you don’t have a CRIS like IRIS); the second is like Leicester, where a link with the CRIS exists; and the third is where the repository is embedded as part of the CRIS (not a satellite system).

Richard Green spoke next about Hydra in Hull, a spin-off from the Hydrangea demonstrator project. The plan was to use this to develop a successor to their Fedora-based eDocs repository, one able to interact with other systems. It was launched in September 2011 and other universities are taking up the use of the code. The codebase allows them to restrict access across multiple levels (students, local users, academics, or open access) – if you are unable to access an item, you can’t see it.

William Nixon from Glasgow closed the morning off with an exemplar of embedding repositories with the Enlighten experience.  Noted there’s always a gap between funding the projects and getting the outputs of projects embedded and taken up within repositories workflows.  He stressed getting embedded is about getting stitched into the fabric of the institution culturally, technically and holistically. Embedding seems to be very much about working with administrators, academics, marketing, HR and researchers as a regular activity, not a one off.  Having these relationships is crucial, because it means you are “in the room” when important decisions are made.

Once again William demonstrated a repository that has the author at its heart, with their own pages and the ability to retrieve information on their available publications and usage. Looking at Enlighten’s journey to being embedded, it is easy to pick out the things we’ve done with LRA, but also the things we’re still missing – funding information, feeding profile pages and author disambiguation being key among them, IMHO. William commented that no repository can be supremely successful with only library staff involved on a daily basis; and I can well appreciate that – though there is the daily challenge of getting, and keeping, other members of the institution engaged and on board.

After lunch Robin Burgess was sadly not appearing so no sing-a-long a presentation, but Laurian Williamson filled in talking about RADAR. No, not that radar but the project at the Glasgow School of Art.

She was followed by Xiaohong Gao talking about MIRAGE, which focussed on the archiving of 3D medical images in two phases – from creation to archiving, and then from archiving back to creation. This looks like a very interesting project, especially when you consider the potential not just for storing but for locating and retrieving three-dimensional data constructs from medicine and other disciplines – I’m thinking especially of physics and genetics.

Finally Miggie Pickton from NECTAR came on to talk about her repository and embedding activity. She noted she’d made great strides in making the repository the definitive location for research outputs. One of the highlights of the improvements is the KULTURised version of the repository’s front page. Another key point was that policy is driven by the research committee, not the library – for advocacy and academic buy-in this is essential. Interestingly, the VC for Northampton has offered the use of his university residence as a venue for the next Open Access Week event – something I was awed by; such engagement from such a senior level is simply incredible.

The day finished with a breakout discussion session on embedding where we all exchanged our ideas and reflected on some of the points of the day.

Posted in Leicester Research Archive, Open Access, Technology & Devices | Tagged: , , | 1 Comment »

EPUB vs PDF

Posted by gazjjohnson on 8 July, 2011

An interesting question from my boss this morning, asking about the EPUB format, especially as it contrasts with PDF – which I confess I know little about. This is on the back of one of our departments increasingly looking towards making material available on eReaders rather than our VLE (Blackboard). My thanks to the folks on Twitter who have chipped in the following bits of insight.

  • EPUB is basically a zipped bag of xml and css with slightly improved DC metadata in it. Best for reflowable text, unlike PDF.
  • PDF is written in stone so doesn’t flow well on ereader devices. Best ereader for PDF is iPad. EPUB flows.
  • Calibre makes EPUB
  • EPUB will work better on e-readers like the Kindle – PDFs work but are difficult to read
  • Think there is linked data potential in the metadata.
  • http://bit.ly/g7CzSe v.3 is particularly interesting from a metadata perspective
  • Not just for ereaders IMO. Range of advantages Inc. Reusability & accessibility

So there you are – all the wiser now.  The link above is actually well worth following as it does give quite a clear view.  Is it enough information for the boss?  I don’t know, but I’ll pass it along and see what else she’d like to know.
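The “zipped bag of xml and css” description can be made concrete. Below is a minimal sketch (Python standard library only, file names and content invented for illustration) of building a bare-bones EPUB container by hand. The structural rules shown – an uncompressed `mimetype` entry first, then a `META-INF/container.xml` pointing at the package file – are what distinguish an EPUB from an ordinary zip:

```python
import zipfile

# Illustrative container.xml: it tells a reader where to find the
# package (OPF) file inside the archive.
CONTAINER_XML = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

def make_minimal_epub(path):
    """Write a skeletal EPUB container (not a complete, valid book)."""
    with zipfile.ZipFile(path, "w") as z:
        # The 'mimetype' entry must be the first file in the archive
        # and must be stored without compression.
        z.writestr(zipfile.ZipInfo("mimetype"), "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", CONTAINER_XML)
        # The actual content: plain XHTML and CSS, reflowable by design.
        z.writestr("chapter1.xhtml",
                   "<html><body><p>Reflowable text.</p></body></html>")
        z.writestr("styles.css", "p { font-family: serif; }")
    return path
```

Anything that can read a zip archive can unpack the result – which is much of EPUB’s appeal over PDF for e-readers: the XHTML and CSS inside can be reflowed and restyled for whatever screen is displaying them.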

Posted in Service Delivery, Technology & Devices | Tagged: , , , , | 13 Comments »

JISC Information Environment Event April 2011

Posted by gazjjohnson on 8 April, 2011

Aston University Lakeside Conference venue

Here are my notes and comments on the event I attended at the University of Aston as an invited speaker by the JISC on Thursday 7th April – resources from the event can be found here

Neil Jacobs from the JISC opened the day and gave it some context – taking us from the HE environment of 2009 and the days of the Digital Britain Report to 2011 and the current circumstances.  He detailed the various strands of the programme: Repositories, Preservation, Geospatial Data and infrastructure, Library Management Systems, Activity Data, Developer Community, Infrastructure for Resource Discovery, scholarly Communications, Rapid Innovation and Linked Data.

HE today is beginning to look to bibliometrics for research excellence and impact, which are fairly significant drivers.   Moves towards starting/supporting innovation and entrepreneurship need to be watched closely.  The event as a whole was aimed to share the highlights of learning from the various strands of the programme.

Session 1: Learning from Other Institutions

David Millard (University of Southampton) spoke first, focussing on lessons learned from how educational repositories were not working. They spoke to teachers – real teachers didn’t understand the terminology or file formats from OERs, let alone working with digital resources themselves. Research repositories, on the other hand, give the researchers a real service in return (I might question that for some academics!). They looked to sharing sites (YouTube/SlideShare etc.), which give teaching resources a home, community and organisation – though it’s not through altruism for many people. They developed software called EdShare, a post-learning-object repository, that offered various advantages – not trying to force people to model their courses or materials in one particular way. It also had light, non-restrictive metadata. They tried to make the educational repository part of the living cycle of teaching, and want Blackboard to feed EdShare, which feeds iTunes U as well.

Kamalsudhan Achuthan was up next (filling in at short notice)  talking about improving research information management, something close to my heart with the current local work towards implementing and integrating a CRIS.  The final report from the project can be found here.

William Nixon gave the next talk, about embedding repositories into practice. One of the outcomes of the project has been building the relationships between repository and research office staff. He noted that the future is embedding the repository within institutional systems, although interoperability is not automatically easy. The aim might well be to reach an invisible repository moment, when it is seamlessly integrated into the whole. The repository was used to gather a lot of the information for the mini-REF that Glasgow ran, including impact and other metrics. Embedding and integrating is about adding value, enabling reuse, reducing duplication and exploiting new opportunities. Advocacy has evolved (as at Leicester) to be about working with the Research Office and other people across the campus, which I would say is a very good thing. At the same time, the project showed that different disciplines have different needs. He finished by suggesting that the job of a repository manager is moving into new, and exciting, territory.

Damian Steer closed the morning through talking about information architecture.  Interestingly he touched on data sources such as blogs and newspaper reports on the work; which would contribute towards demonstrating an impact for the REF.  Behind the scenes at Bristol they use linked data from the Semantic Web.

After lunch I, along with Ben Showers from JISC and Nick Woolley (King’s College), talked about various resource- and time-saving activities. I was presenting the highlights from my recent survey (my thanks to all those who responded) rather than talking from personal experience! You can access my slides here. Ben’s talk (Why you shouldn’t bother with advanced search) is also online. While the session (which was repeated) was not exactly well attended, there was a spirited debate following the talks on both occasions.

Finally Margaret Coutts from the JISC Infrastructure and Resources Committee came on to deliver the keynote. Among the comments she made: it is important to remember that research repositories are not solely for archiving for the REF, nor are teaching repositories solely for exploiting content – both should work across these areas. There is a need to develop life-cycle management for the documents within them as well. Academics are now more ready to come forward, expose all the unpaid extra effort they put into preparing journals, and ask the question: just what are publishers doing for us? Will they challenge the publishers? Uncertain, as there is a desire not to damage peer review in the process.

The change in scholarly communications is a long game, and not one that will conclude in the next few years, although there will be work in the right direction. Work on LMS indicates that shared systems may well generate shared efficiencies and reduce costs.

One of the big growth areas in the coming years was suggested to be teaching and OERs, where platform rather than standard will be more important. Likely there will be pressure for more sharing of these both within and beyond institutions, although there will be some items for local access as well as those for fully open access. Digital preservation is something that keeps falling off the edge. We know what digital preservation is, but it keeps being postponed because there are other, more pressing things – yet this is a time bomb. We need to address it as a community sooner rather than later.

Urgency for solutions is going to increase.  Are there quick wins we can gain from the JISC projects that can be put out to the sector?

Rachel Bruce then capped the day off by looking at the way ahead for JISC, which, even though it has reduced funding, is still charged with enabling innovation while ensuring that lessons learned and applications developed can be taken up by the LIS community.

Posted in Service Delivery, Technology & Devices, Wider profession | Tagged: , , , , , , , , , , , | Leave a Comment »

eBook Reader

Posted by selinalock on 4 February, 2011

Sony eReader

Sony Pocket eReader

I’d asked for a Kindle as a Xmas present, judging that would be the cheapest to get. So, I was very surprised to get a Sony Pocket eReader (350 model in pink!) instead.

Have become a convert very quickly, as it’s great for carrying around in my bag, reading on the bus, or in cafés, and for taking on weekends away.

The plan is to use it to read lots of the freely available, out-of-copyright classics I’ve never read. I’ve loaded it up with titles from authors such as Dickens, H.P. Lovecraft, M.R. James, Edgar Allan Poe, E.M. Forster, Oscar Wilde, Rudyard Kipling, Arthur Machen, Conan Doyle, Bram Stoker, Jules Verne, and Mark Twain. Plus some Cory Doctorow titles and some short stories by friends. It’s also good for reading drafts of novels/scripts that friends have sent me to critique.

Features I like:

  • Really nice size (a little smaller than the Kindle), which means I can hold it with one hand, while drinking a cuppa, and it doesn’t put any strain on my wrist.
  • The page turning buttons on the bottom can easily be pressed while still holding it in one hand, so no need to put my cuppa down. You can also turn the pages via the touch screen, but that’s a little more fiddly.
  • Touch screen is really easy to use.
  • Has a stylus for use in the trickier screens, like the touch screen keyboard.
  • Can type text memos.
  • Can handwrite notes on the book’s pages and highlight text (and delete).
  • Can add bookmarks (and delete).
  • Can draw pictures on it!
  • Double clicking on a word will bring up the OED definition.
  • Remembers what page you got to on any book/document you go into, so you can have several titles on the go at once.
  • Can read PDFs and, if they are text only, will re-size/word-wrap them in the same way it does for the native ePub format – though you do get the odd formatting issue with PDFs.
  • Being able to sort books into self-titled collections.
  • and I haven’t even used all the functions yet!

Not so good stuff…

  • That you have to hook it up to your computer to add/delete books and recharge it. (No wifi).
  • The software provided for your computer (Reader Library) has a tendency to crash.  Though it’s fairly easy to skip the software and just copy files across as if the reader were an external drive.
  • You can’t do anything with the drawings you’ve made because they’re SVG (scalable vector graphics) & I haven’t been able to figure out how to convert them into jpgs.
  • Not really usable for comics. We’ve put a copy of an issue of one of our small press comics on in PDF and the pictures show up pretty well in b/w or greyscale (as the screen isn’t colour), but obviously they’re too small to read and it can’t resize them. You can zoom, but it’s really fiddly, so any comics would have to be done a panel at a time, as they are for other small-screen devices.

I still love printed books, but this is certainly much, much easier to use on the move.

I did a training session on eBooks and eReaders for some of our library staff yesterday, and they found it really useful to see the difference between our online library subscribed ebooks, and the type of ebooks you would download on to an eReader.

Posted in Technology & Devices | Tagged: , , , | 3 Comments »

On the Road to IRIS: Modules & Testing

Posted by gazjjohnson on 3 February, 2011

IRIS is a name you’ll be hearing me talk a lot about this year, on here and in the flesh.  It’s the name we’ve given to the prospective new research information management system that our Research Office, ITS and library teams are working towards implementing.  My involvement is naturally on the repository side of things, considering how the LRA will integrate with the new system. It’s early days as yet, and the ink’s not quite dry on the supplier contract, so I can’t say too much about that.

What I did want to blog about was the related LRA work we’re currently doing.  One of the long-standing requirements for the IRIS project is to upgrade the DSpace version we currently use (1.4.2, fact fans) to something… a little more this decade (1.7).  An upgrade to the software is something I’ve been trying to move towards for the past couple of years, and now that we’re moving towards it at speed I couldn’t be happier.

It looks like we’re going to have a test instance of the platform up and running in the next few days, so I’m starting to think about two critical things for the live system: the modules that are essential for the way the modern repository needs to run, and the kind of testing we need to put the test instance through so we can be sure it’s running sweet and dandy and fine as candy.  I’ve some ideas already, some from my repository wishlist, others from conversations with the other members of the IRIS team.

But naturally I’ll welcome suggestions from any readers of the blog or pointers to resources that I clearly should already know about testing DSpace…but clearly don’t!

Posted in Leicester Research Archive, Open Access, Technology & Devices | Tagged: , , , , , | Leave a Comment »

eBooks, eBooks everywhere

Posted by selinalock on 22 October, 2010

eBook - Cybook

eBook by PPL 2A on Flickr

eBook discussions are popping up in all areas of my life at the moment: print vs e on the British Fantasy Society forum, the new Doctor Who book by Michael Moorcock being available on the Kindle, creating comics for the iPhone/iPad, students asking about them in inductions, and many friends having just bought Kindles or iPads… so a very hot topic, particularly since the Kindle came down in price recently.

I was kindly allowed to gatecrash the CULN eBook & eReader session run by the BDRA last week. So, here are a few thoughts from that session and other things I’ve been reading:

  • What is an eBook? A document that can be read on an eReader?
  • How do you read an eBook?
  • Via computers, laptops, dedicated readers (Kindle, Sony eReader), iPad, iPhone, iTouch? Many different routes, some of which require the eBooks in certain formats.
  • There is now a Kindle app for non-Kindle devices to allow people to buy ebooks from Amazon.
  • eBooks formats: libraries still bound by publishers to use password/IP restricted sites, especially for textbooks, which only allow students to read the texts online rather than download them to their own devices.  The students are generally not impressed with this, nor the copyright restrictions that mean they can’t print much off either…
  • PDF – the favourite of academic journal publishers and still very popular with other publishers as an easy format for them to provide, but not a format that works well on dedicated eReaders.
  • Doc (word docs), txt (plain text), html.
  • Mobi (Mobipocket) format – used by the Kindle.
  • ePub format – used by Sony.
  • Why use an eReader instead of a laptop/iPad etc? eReaders like the Kindle and Sony use electronic paper technology, which mimics what ink looks like on paper. The theory is that this makes the text much easier to read and easier on the eyes. (Friends with a Kindle have commented that they find it much easier to read than a computer screen.)
  • Computer screens are backlit making them much brighter, and possibly causing more eye strain. Are younger readers more used to this technology?
  • Formats like Mobi and ePub are also designed to reflow more easily to the size of the device and the reader’s requirements than traditional formats.
  • It is very easy to convert a Word document to various eBook formats using free software like calibre. (We had a go, it really is easy!) Calibre can also act as an eBook file organiser, e.g. in place of iTunes for the Sony eReader.
  • Public libraries in the USA and Hampshire Libraries in the UK have started experimenting with loaning eBooks using the Overdrive system. However, the Publishers Association have just announced new restrictions that look set to put a stop to a lot of eBook lending options!
  • Lots of free (mainly out of copyright or creative commons) eBooks out there on services such as Project Gutenberg, Feedbooks and Manybooks.
  • Amazon have a new feature on all their book pages that allows you to “Tell the publisher, I’d like this book on the Kindle” – is this where the pressure for eBooks will come from in future?
  • Also a very interesting piece by SF&F writer Charles Stross on why eBooks don’t cost much less to produce than printed books.

I’m sure there’s been lots more stuff out there that I’ve forgotten, anyone?

Posted in Mobile technologies, Service Delivery, Technology & Devices, Web 2.0 & Emerging Technologies | Tagged: , , | 5 Comments »

Mashed Library Liverpool

Posted by katiefraser on 12 July, 2010

Liver and Mash, Parr Street Studios, Liverpool

Photograph of Liver and Mash, used under Creative Commons licence, courtesy of http://www.flickr.com/photos/rbainfo

I went to my first Mashed Library event, Mash Oop North, in Huddersfield in July 2009, had a fantastic time, and was pleased to go back to Liver and Mash in Liverpool in May this year. The Mashed Library events unfold in a relatively informal unconference format, with lots of discussion of ideas and ways of quickly and easily implementing mash-ups in library and information services.

This post won’t be so much a reflection on the event as a collection of tools and ideas which I found inspiring, and hope to come back to over time. Hopefully there’ll be something to inspire others too.

Liver and Mash started with an OCLC Mashathon, a workshop that OCLC have run around the world looking at how OCLC services can be used in mash-ups to create new uses for data. Karen Coombs from OCLC has blogged a little about the Mashathon here. OCLC offer a wide range of services and resources, here are a couple which caught my eye:

  • The Worldcat Basic API is available free for up to 1,000 queries a day (assuming non-commercial use) and can return a list of books held in OCLC’s comprehensive Worldcat Catalogue from a query. The list is returned in RSS or Atom format, and can be formatted by a number of standard citation guidelines. I’d be wary of using it long-term on an academic library site with the query limit, but there are further options available to those subscribing to OCLC services.
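Because the results come back as a standard Atom feed, consuming them needs nothing beyond Python’s standard library. A minimal sketch – the feed below is a made-up sample in roughly the right shape, not a real Worldcat response, which carries much richer metadata per entry:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

# Hypothetical sample in the general shape of an Atom result list.
SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Search results</title>
  <entry><title>Book One</title><author><name>A. Author</name></author></entry>
  <entry><title>Book Two</title><author><name>B. Author</name></author></entry>
</feed>"""

def entry_titles(atom_xml):
    """Pull the title of each entry out of an Atom feed."""
    root = ET.fromstring(atom_xml)
    return [entry.findtext(ATOM + "title") for entry in root.findall(ATOM + "entry")]

print(entry_titles(SAMPLE_FEED))  # ['Book One', 'Book Two']
```

The same few lines would do for any Atom-producing service, which is part of the appeal of mash-ups built on standard feed formats.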

Unfortunately, we were lacking a reliable wireless signal on the day, so weren’t able to develop much on site. The second day, however, moved on to a wider variety of applications, so I was able to take notes and experiment later. Again, here’s a selected few:

  • Tony Hirst from the Open University spoke about gathering data on use of library websites (e.g. via Google Analytics), and segmenting users into groups by types of behaviour. Gathering behavioural data definitely sounds like something I’ll need to think about in our forthcoming redesign of the library website as part of the team moving the site to the University’s new content management system, Plone.
  • Julian Cheal from the University of Bath, demonstrated some ways of using RFID. I’ve long had a bee in my bonnet about the limited uses (issue and return) we have for RFID in libraries considering we’re one of the biggest users of the technology, and it was interesting to see demonstrations of library cards generating prompts and information as users entered the library or carried out library-related activities.
  • Lastly, John McKerrell talked about using maps in mash-ups. Maps are something I’ve seen used quite a lot on library websites, but only occasionally do these services go far beyond embedding Google Maps. Services which particularly stood out were Mapstraction – which allows web developers to switch quickly and easily between different map services, Get Lat Lon – which is a quick and easy way of finding latitude and longitude values for a given location, and OpenStreetMap – a free, collaboratively-edited map.
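As a small taste of the map-hackery on show: the standard formula (documented on the OpenStreetMap wiki) for turning a latitude/longitude pair into OSM “slippy map” tile coordinates fits in a few lines of Python:

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 lat/lon to OSM slippy-map tile numbers (x, y) at a zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles along each axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# e.g. central London at zoom 10 falls on tile (511, 340)
print(deg2tile(51.5074, -0.1278, 10))
```

Paired with something like Get Lat Lon to find the coordinates in the first place, this is all you need to fetch or link to the right map tile.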

While I’ve not jumped in and used any of these services straight away, both the Mashed Library events I’ve been to have really opened my eyes to the wide variety of options available to me for using and integrating data on the web. You may see a few of these services turning up on the library website as we get further down the line with the Plone rollout! To finish the post, here’s a video of Liver and Mash, which I think catches the atmosphere and creativity of the event pretty well.

Posted in Digital Strategy & Website, Service Delivery, Technology & Devices, Web 2.0 & Emerging Technologies | Tagged: , | Leave a Comment »

USTLG Spring Meeting Redux (Afternoon)

Posted by selinalock on 17 May, 2010

Following on from my post USTLG Spring Meeting Redux (Morning), here are a few notes on the afternoon.

Theme for afternoon: social networking.

Advocating professional social networking to academics. Paula Anne Beasley and Linda Norbury, University of Birmingham.

  • The subject librarians are well placed to advocate Web2.0 tech for gathering information via social networks.
  • Found a knowledge gap for those not using Web2.0 or not of the generation to ‘just have a go’ at things, who prefer some training instead.
  • Surveyed staff in College of Physical Sciences & Engineering about their use/knowledge of Web2.0 using a free text survey.
  • Responses variable, but enough interest to offer training session.
  • Major issues from survey were whether Web2.0 tools were secure/stable, whether there was a University policy on using them and a lack of knowledge.
  • Anne & Linda managed to get the College Academic Enhancement Group interested in the session, and all invites went out from that group rather than from the Library.
  • The training session that was offered was originally going to cover blogging and twitter. However, as Linda got stuck abroad due to the ash cloud it became focused only on blogging on the day.
  • 31 attendees for session: academics, admin staff, researchers & Emeritus Professors.
  • Got very good feedback and the attendees were enthusiastic about blogging on the day.
  • They hope to follow-up with seminars on social networking and social bookmarking, plus a support course in Blackboard.
  • No-one else in their University is currently offering training in this area.

“Do Librarians Dream of Electric Tweets?”, Gareth Johnson, University of Leicester.

The next presentation was from our very own Gareth, who gave a very enthusiastic talk on using Web2.0 technology for networking, and in library services.  Main points were:

  • Why use things like twitter & Blogs?
  • For professional networking, self-reflection, sharing experiences, staff development, answering enquiries, motivating staff etc.
  • Can be very powerful tools.
  • Like Gareth, I pick up lots of useful information and links to new reports via twitter now rather than by other routes.
  • When using these technologies it is important to be human: respond to people, don’t just broadcast, share things.
  • The best use of Web2.0 comes when you allow it to overlap your personal, workplace and professional lives, but if you’re not comfortable with that level of engagement it can still be useful when used only in work hours.
  • Important to “find the right tools for you”.

Gareth’s full presentation:

Posted in Digital Strategy & Website, Meetings, Mobile technologies, Research Support, Service Delivery, Staff training, Subject Support, Technology & Devices, Training, Web 2.0 & Emerging Technologies | Tagged: , , , , , , , | 2 Comments »

Ready for REF CERIF Workshop (King’s March 2010)

Posted by gazjjohnson on 24 March, 2010

Waterloo Campus, King's College LondonThis Tuesday I travelled down with Steve Loddington of the Research Support Office, to King’s College London’s Waterloo Campus to attend a Ready4Ref (R4R) CERIF workshop. Following an overview of the day from Mary Davies (Kings) the day proper began. What follows are my notes and comments on the sessions, hopefully slides from the event will be available online shortly.

CERIF4REF, Richard Gartner, Kings College
The CERIF standard is very complex, almost too complex for most users to fully understand. RAE2008 was used to shape CERIF4REF, as the REF2012 standards remain as yet unannounced. Repository and CRIS outputs on systems that adhere to the standard can be fed through CERIF4REF to provide a single output to the REF assessors (in principle). There is a data dictionary that defines the standard and the elements within it. They are going to take RAE data from King’s and process it as part of a trial to check that this works well.

CERIF, CRISes & Research Databases Marc Cox
Marc talked about King’s CRIS, developed in house in 2004–7, originally as a research management tool, although the RAE overtook it and drove it towards administrators rather than academics, which was not the original intent. It took data from HR, student, awards and finance systems, with publications from Thomson Reuters Web of Science (author identification being a problem) – they now use the WoK API to pull the data in, although that was quite a challenge. At the moment administrators (mostly) and academics (a few) are keeping the publications up to date.

CERIF is a standard data model that describes research entities and their inter-relationships, originally developed with the support of the EU. It is architecture independent. Four main data fields from RAE2008 were taken for CERIF4REF, and a number of system and data tweaks were needed to make these four research data fields compatible with CERIF. RA1 data was relatively easy to map, RA2 more difficult. RA3a/b and RA4 couldn’t be mapped without the base data which created them.

Benefits from the approach however included RAE forms generated from style sheets that can be cross compared with php scripts to check accuracy. Next steps are to generate real King’s data in CERIF xml format and exchange data with other CERIF compliant systems.
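To make the exchange idea a little more concrete, here is a rough sketch of reading a CERIF-style publication record with Python’s standard library. The element names echo the cfResPubl pattern from the CERIF model but are simplified and illustrative only – real CERIF XML is namespaced and considerably richer:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified CERIF-style record for illustration.
SAMPLE = """<CERIF>
  <cfResPubl>
    <cfResPublId>pub-001</cfResPublId>
    <cfResPublDate>2008-06-01</cfResPublDate>
    <cfTitle>Example journal article</cfTitle>
  </cfResPubl>
</CERIF>"""

def publications(cerif_xml):
    """Extract a list of publication dicts from a simplified CERIF-style document."""
    root = ET.fromstring(cerif_xml)
    return [
        {
            "id": pub.findtext("cfResPublId"),
            "date": pub.findtext("cfResPublDate"),
            "title": pub.findtext("cfTitle"),
        }
        for pub in root.findall("cfResPubl")
    ]
```

The attraction of a common model is exactly this: once two systems agree on the record shape, either side can generate or consume the other’s output with a thin translation layer like the one sketched here.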

Using ISI Web of Science Data in Repositories, Les Carr
EPrints has had plug-ins that do this on an individual basis for a while, but thanks to a change in licences there is now access to an API for direct deposit. A SWORD-based ISI deposit for EPrints was examined, although as Les noted the technology wasn’t at the heart of the issue, as all repositories work in a similar fashion in the big picture. There is a need for a repository editorial step – a manual step, so it can be like drinking from a fire hose: too much data flooding in, and how can you deal with it within established workflows? The data download may not be a straightforward exercise either; e.g. student papers and non-peer-reviewed items are listed on WoS alongside academic papers. Les showed an example of selecting one academic and the process of weeding out the non-relevant items (a manual process) – ingesting 38 items took about a minute, with roughly 10 minutes of manual processing to remove duplicates and irrelevant items. Questions remain over how to use this – a monthly update? On a per-user basis?

Les moved on to look at repositories as a CRIS – since repositories manage research, teaching and academic outputs and are broader in description and purpose. But what about the other databases and information resources across campus (finance, HR, the grants database)? CRISes pull all this disparate data together and present a unified view of it, which includes the repository. EPrints has attempted to accommodate the CERIF data model – not just publications but projects and organisations.

E.g. previously a project was added in the metadata – now projects are objects in their own right, linking from the metadata record to a page about the project itself, with contributors rather than authors. Data can be exported and imported in CERIF format. This joined-up, integrated resource can help develop research case studies for demonstrating impact and output. I imagine that, useful though this is, it adds yet another load to the already busy repository administrator’s workflow. However, I can see a significant advantage to the repository that offers this kind of joined-up service. I doubt Leicester will go this route, given our interest in a separate CRIS system at the heart of the research management agenda.

Discussions
After a brief Q&A session we moved on to lunch. After lunch we broke into two discussion groups, one looking at the perceived benefits or flaws of CERIF, along with the practicality of auditing and standardising institutional systems against it. These sessions then reported back on the points that had been raised. Notably, for most of those in attendance, having information systems that could be audited and made compatible with the CERIF standard was a reasonably attractive opportunity, though there were mixed concerns about whether the technical expertise would be available in house at short notice to participate. When it came to the staff resource available to take part in such an audit, virtually the entire group felt this was the biggest obstacle to overcome.

Overall this was an interesting day, and while it was more about the CERIF data standard than the REF itself that I had hoped to hear about, I was still able to take away some points for further thought.

[Edit: Slides from the event are now available here: http://www.kcl.ac.uk/iss/cerch/projects/portfolio/r4r.html]

Posted in Open Access, Research Support, Technology & Devices | Tagged: , , , , , | 2 Comments »