Purpose
To identify and compare existing metadata evaluation tools and mechanisms for connecting the results of those evaluations to clear, cross-community guidance.
Project plan
The following project plan is expected to encompass work between May and October 2018.
- Populate Schemas, Tools and Guidance Information
- Review existing evaluation tools and identify what they are evaluating (completeness, consistency, linking, quality, other)
- Identify advantages/disadvantages of current evaluation tools and make recommendations for use
- Together with Project 5, create a guidance document on evaluation tool use for multiple communities, and/or a white paper or statement addressed to the wider scholarly communications community
- Consider where there might be gaps in evaluation resources and, where relevant, scope out the requirements for additional tools
- Complete Project 5’s best practices resources survey
Challenges
- Currently it is difficult to evaluate the completeness, consistency and accuracy of metadata deposited across multiple systems and, thus, to demonstrate improvement over time
- Researchers in particular do not have a good way of assessing the completeness of their metadata, and publishers must then spend time assessing it themselves, leading to increased costs and delays
- Publishers do not have a clear picture of how well metadata complies with multiple standards, and checking that compliance is often manual and time-consuming
Possible solutions to explore
- Creation of a spirals-like metadata evaluation system for integration with submission systems, assessing the quality and completeness of metadata against different metadata standards
- Identify simple quantitative metrics that can be used to measure and monitor the completeness, consistency and accuracy of metadata (a minimal sketch of one such metric follows this list)
- The Crossref Member metadata evaluation tool could be backed by Metadata 2020, with use cases showing how publishers have used the information to improve their own metadata. A similar approach could be explored for other communities, e.g., DataCite member metadata
- The creation of a catalog of metadata quality tools and further resources could be very useful to the community
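To illustrate the quantitative-metric idea above, the following Python sketch scores a single metadata record for completeness as the fraction of required (or recommended) fields that are present and non-empty. The field lists and record contents are assumptions chosen for illustration; a real metric would be defined against a specific standard, such as the Crossref or DataCite schema.

```python
# Minimal sketch of a completeness metric for one metadata record.
# The field lists below are illustrative assumptions, not an agreed standard.

REQUIRED_FIELDS = ["title", "creators", "publication_date", "identifier"]   # hypothetical
RECOMMENDED_FIELDS = ["abstract", "license", "funder", "orcid"]             # hypothetical


def completeness(record: dict, fields: list) -> float:
    """Return the fraction of the given fields that are present and non-empty."""
    if not fields:
        return 1.0
    filled = sum(1 for f in fields if record.get(f) not in (None, "", [], {}))
    return filled / len(fields)


if __name__ == "__main__":
    # Hypothetical record as it might arrive from a submission system.
    record = {
        "title": "Example article",
        "creators": ["Doe, J."],
        "identifier": "10.1234/example",
        "abstract": "",  # present but empty, so it does not count as filled
    }
    print(f"Required-field completeness:    {completeness(record, REQUIRED_FIELDS):.2f}")
    print(f"Recommended-field completeness: {completeness(record, RECOMMENDED_FIELDS):.2f}")
```

Scores of this kind could be computed at deposit time and tracked across systems, giving a simple, comparable way to monitor completeness over time; consistency and accuracy checks would need additional rules defined per standard.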
Group participants
- Ted Habermann, HDF Group (Group Lead)
- Cyrill Martin, Karger Publishers
- David Schott, Copyright Clearance Center
- Despoina Evangelakou, SAGE Publishing
- Ed Moore, SAGE Publishing
- Eva Mendez, Universidad Carlos III de Madrid
- Fiona Murphy, Scholarly Commons
- Helen King, BMJ
- Howard Ratner, CHORUS
- Jennifer Lin, Crossref
- John Horodyski, Optimity Advisors
- Julie Zhu, IEEE
- Kaci Resau, Washington & Lee University
- Kathryn Kaiser, UAB School of Public Health
- Kathryn Sullivan, University of Manchester
- Keri Swenson
- Kristi Holmes, Northwestern University
- Lola Estelle, SPIE
- Mahdi Moqri, RePEc
- Mark Donoghue, IEEE
- Melissa Jones, Silverchair
- Melissa Harrison, eLife
- Michelle B. Arispe, Philippine Nuclear Research Institute
- Mike Taylor, Digital Science
- Nathan Putnam, OCLC
- Neil Jefferies, Data2paper, SWORD
- Patricia Feeney, Crossref
- Peter Kraker, Open Knowledge Maps
- Ravit David, OCUL, University of Toronto Library
- Ross Mounce, Arcadia Fund
- Stephen Howe, Copyright Clearance Center
- Tyler Ruse, Digital Science
- Vladimir Alexiev, Ontotext