
Recommendations, Challenges and Opportunities for the Future of Crowdsourcing in Cultural Heritage: a White Paper

Published on Jul 10, 2023

Executive Summary

This White Paper draws on the collaboratively-written Collective Wisdom Handbook: Perspectives on crowdsourcing in cultural heritage (the Handbook), two follow-up workshops on ‘advanced questions for crowdsourcing in cultural heritage and the digital humanities’, and the research and experience of its three authors. The Collective Wisdom project aspired to synthesise expert knowledge that had until then been unpublished or only available via grey literature.

This White Paper is intended to supplement the Handbook. Its intended audience is funders and institutional leaders. The practices we recommend will support the practitioners and volunteers we spend so much time advocating for in the Handbook. We make five interrelated recommendations for funders and organisational leaders, and outline still-intractable challenges. The concept of values is interwoven with these themes, reflecting our collective belief that establishing shared values is at the core of any successful, mutually beneficial crowdsourcing project in cultural heritage.

As with previous outputs from the Collective Wisdom project, the White Paper will undergo a period of community review. You’re reading our ‘version 0’ shared for community feedback right now! Please read our earlier post on 'community review' for a sense of the feedback we're after - in short, what resonates, what needs tweaking, what examples could we include?

Not only do we expect this document to change in response to comments from the community, but our recommendations are not static in the face of changes in technology, societies and institutions. They reflect a moment in time, and are intended to shift with the needs of the field.

Recommendations, challenges and opportunities, summarised

Infrastructure: Platforms need sustainability. Funding should not always be tied to novel development, but should also support the maintenance, uptake and reuse of existing tools.

Evidencing and Evaluation: Support the creation of an evaluation toolkit for cultural heritage crowdsourcing projects. This will also help shift thinking about value from simple output/scale/product to impact and wider benefits.

Skills and Competencies: Create a self-guided skills inventory assessment tool to help projects understand which skills might be missing. This might be a worksheet with a related workshop to help practitioners assess skills gaps and create an upskilling roadmap.

Communities of Practice: Fund informal meetups, low-cost conferences, peer review panels, and other opportunities for creating communities where knowledge can be shared. They should have an international reach, e.g. beyond the UK-US limitations of the initial Collective Wisdom project funding.

Incorporating Emergent Technologies and Methods: Fund educational resources and workshops that help the field understand opportunities and anticipate the consequences of proposed technologies.


Preamble, July 2023

The recommendations, opportunities and challenges identified in this white paper are grounded in both our activities and discussions around the AHRC-funded Collective Wisdom project, and in developments in the field. Since we wrote our initial funding application in 2019, the COVID-19 pandemic has triggered dramatic changes in platforms and projects, as well as changes in society including a global social justice movement. More recently, the collapse of social media networks and systems - which previously supported some knowledge transfer and insights into projects and practices - has broken up the networks of informal dialogue and collaboration that sustained the field for a decade or so.

Our experience in the field, and that of our Collective Wisdom Handbook co-authors from a range of projects and disciplines, suggests that, more than ever, funders and organisations now have opportunities - and arguably a responsibility - to deliberately cultivate the practices and projects of the future. This includes working with organisations to collaboratively define requirements that allow for the right mix of sustainable and exploratory projects, and the careful integration (rather than wholesale adoption) of emerging technologies such as machine learning and ‘AI’.

The advances in crowdsourcing in cultural heritage discussed since the Collective Wisdom project's inception in 2019 and its activities in 2021 have become even more evident around us - not least in the widespread interest in ChatGPT and other newly accessible generative 'AI' tools.

These developments prompted us to focus our two workshops on finding transferable models and values-based ways of working, drawing out impactful methods, practices and approaches from fields related to crowdsourcing in cultural heritage, particularly online volunteering, human computation and integration with machine learning. Our first workshop focused on community organising, communication and participant experience; the second focused on data and technologies.

We asked participants to help us find the sweet spot between those fields, and to productively address the tension between models that regard crowdsourcing as a variant of volunteer work, as digital public engagement or as outsourced data work.

Funded by the UK's Arts and Humanities Research Council (AHRC) through a grant that focused on UK-US networks, our activities and outputs reflect both the historical moment and the demographics of the individuals and organisations participating.

This White Paper presents recommendations synthesized from these workshops and from our work in the field since then. We also outline emerging, intractable and unsolved challenges that could be addressed by further funding for collaborative work.

While descriptions of technologies and practices are inherently time-specific, and our recommendations are linked to the experiences and research of our collaborators, we hope that the principles and values discussed stand the test of time.

Background to the Collective Wisdom project: goals and activities

In early 2021, we convened a group of 16 authors to write a book that captured our collective practical and theoretical knowledge of crowdsourcing in two week-long 'book sprints'. That process took us from ideas to a first public edition of The Collective Wisdom Handbook: Perspectives on Crowdsourcing in Cultural Heritage. We ran surveys that asked questions of crowdsourcing volunteers and stakeholders in advance of the book sprints, to allow us to include wider perspectives in our writing. We also held an 'open peer review' process and actively sought comments and feedback on our first edition of the Handbook for several months over 2021.

Building on the collaborative process of writing the Handbook, and anticipating that the book sprint would raise questions we couldn't address within that process, we convened two workshops in October 2021. The workshops (also discussed above) aimed to ‘interrogate, refine and advance' questions raised during the book sprint, and 'identify high priority gaps and emerging challenges in the field that could be addressed by future research collaborations’. Invited participants included representatives from: Aotearoa New Zealand Wikimedia, Art UK, Arts and Humanities Research Council (AHRC), AVP, BBC R&D, British Library, Black Cultural Archives, DigVentures/University of Leicester, Flickr Commons, Florida State University, FromThePage, Library of Congress, Library of Virginia, National Endowment for the Humanities (NEH), National Science and Media Museum, Newberry Library, Smithsonian Transcription Center, University College London (UCL), University of Michigan Library, University of Oxford, University of Portsmouth, UK ('Railway Work, Life & Death' project), University of Texas at Austin, University of Washington, Virginia Tech, Wikimedia UK and Zooniverse.

Further information about the project's funding, collaborators and activities is available in the Appendix and on our website, https://collectivewisdomproject.org.uk.

Why this paper, and why now

The authors of this white paper, drawing on shared knowledge and experience from the co-authors of the Handbook, workshop participants and wider discussions, are uniquely positioned to make these recommendations. Through our professional experience and research across the cultural heritage and technology sectors, we have a clear vision of how shifts in neighbouring research communities are poised to impact our ongoing efforts and to provide potential opportunities for collaboration.

The rapidly-growing popularity of machine learning and 'AI' in cultural heritage has increased the push from the commercial and academic sectors to ruthlessly optimise interactions for efficiency and accuracy. However, GLAMs (galleries, libraries, archives, museums and other cultural heritage organisations) may also want to optimise for engagement, especially after pandemic-related institutional closures publicised the opportunities offered by digital projects as tools for public engagement.


Scope

We have organised our recommendations into five tightly-interrelated themes: Infrastructure, Evidencing and Evaluation, Skills and Competencies, Communities of Practice, and Incorporating Emergent Technologies and Methods. Throughout these recommendations, the concept of values is interwoven with the themes, reflecting our collective belief that establishing shared values is at the core of any successful, mutually beneficial crowdsourcing project in cultural heritage. We recommend establishing values at the start of a project to ensure that all subsequent decisions are in line with, and provide support for, the stated values.

Key concepts

The following concepts featured throughout the Collective Wisdom project and we highlight them here to more specifically situate the recommendations that follow. 

In the Handbook we defined crowdsourcing as ‘a form of digitally-enabled participation that promises deeper, more engaged relationships with the public via meaningful tasks with cultural heritage collections’. Implicit in this definition (and stated explicitly elsewhere in the Handbook) is the understanding that the concept of mutual benefit from crowdsourcing is more likely to be achieved through establishing shared, significant goals.

Crowdsourcing tasks in GLAMs (and citizen science) can be broadly divided into three groups: collecting, analysing, and reviewing. Collecting tasks invite project participants to generate new or share existing items or information with a project. Analysing requires participants to interpret data in various ways, including transcription, translation and annotation. Reviewing asks participants to validate or improve data that has already been input to the project.
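
As an illustration only, a crowdsourcing platform might represent this broad taxonomy along the following lines. This is a minimal Python sketch with hypothetical class and field names, not the data model of any particular platform.

```python
# Illustrative sketch only: hypothetical names, not the data model of any real platform.
from dataclasses import dataclass
from enum import Enum


class TaskType(Enum):
    """The three broad groups of GLAM crowdsourcing tasks described above."""
    COLLECTING = "collecting"   # participants contribute new items or information
    ANALYSING = "analysing"     # interpretation: transcription, translation, annotation
    REVIEWING = "reviewing"     # validating or improving data already in the project


@dataclass
class Task:
    task_type: TaskType
    item_id: str        # the collection item the task refers to
    instructions: str   # what participants are asked to do


# A transcription task sits in the 'analysing' group under this taxonomy.
example = Task(
    task_type=TaskType.ANALYSING,
    item_id="ms-1234-page-07",
    instructions="Transcribe the handwritten text on this page.",
)
print(example.task_type.value)  # -> "analysing"
```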

These tasks differ from commercial crowdsourcing and 'microwork' in that the act of participation is not compensated financially, and efficiency in data production is not always the main goal. Instead, efficiency is balanced with public engagement to ensure mutual benefit between participants, the teams creating the project, and the collections (whether institutional or personal) and related data that are the subject of the tasks. Benefits of participation are closely related to motivations for participation, and may be intrinsic, altruistic or extrinsic.

The language used to talk about people involved in projects can be nuanced. Terms such as 'participant', 'project team member' and 'volunteer' often reflect an individual's position inside or outside an institution, rather than their skills or those required for a particular role.

Values are discussed throughout the Handbook. For us, values are not only ethical but also aspirational. They create opportunities for joy, serendipitous discovery, human connection, curiosity and learning.

We often use 'technology' as a shorthand for tools and platforms, considering also their affordances and functions, and the processes, resources and institutional context in which they operate.

Recommendations, challenges and opportunities

[Note to the reader, July 2023: we want to strengthen these with links to projects and publications to help explain our points more concisely and provide positive examples for the changes we're calling for. If you have suggestions, please post a comment.]

Recommendation: Infrastructure

What does this mean? Platforms need long-term sustainability to ensure that new projects are not required to continually ‘reinvent the wheel’ for public interfaces and backend data integrations.

Why does this matter? (What is the impact of not doing this, what opportunities are lost?) Lack of investment in existing, effective infrastructure can lead to a proliferation of bespoke infrastructure. Custom projects can be prohibitively expensive and are difficult to sustain without continued funding. They are also vulnerable to changes in staff or institutional priorities. Supporting reusable infrastructure reduces the barriers to under-resourced institutions implementing crowdsourcing. Using existing platforms frees resources for extending their capabilities and for other activities, which is essential to developing networks of practitioners and ensuring responsible crowdsourcing practices can continue to flourish. Investment in existing resources - from updating code packages to enhancing tools to adding new features and updating the user experience as behaviours and expectations change - is crucial to sustain/maintain useful infrastructure.

What do we propose and for whom?
Rather than tie funding to bespoke or novel development for a specific research case, support the following options:

  • Researchers applying for funding can contribute grant funds towards maintenance of open-source digital infrastructure, including creating and/or updating documentation of platform features 

  • Researchers collaborating with platform maintainers can include maintenance costs in funding proposals (including bringing code up to date, implementing feature requests, expanding existing infrastructure, creating/updating documentation etc.)

  • Invest in developing common denominator data models and formats that reduce the overhead of integrating and moving data between collections management systems and crowdsourcing platforms (a minimal sketch of what such a shared record might look like follows this list)
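
The sketch below, in Python, illustrates the kind of 'common denominator' record that could travel between a collections management system and a crowdsourcing platform. The class and field names are hypothetical, chosen for illustration rather than drawn from any existing standard or platform.

```python
# Minimal sketch of a 'common denominator' record that could travel between a
# collections management system and a crowdsourcing platform. All class and
# field names here are hypothetical, for illustration only.
from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json


@dataclass
class CrowdsourcedRecord:
    source_system: str        # e.g. the collections management system of origin
    item_identifier: str      # stable identifier for the item in the source system
    task_type: str            # 'collecting', 'analysing' or 'reviewing'
    contributions: List[str] = field(default_factory=list)  # raw participant inputs
    consensus_value: Optional[str] = None  # aggregated or validated result, if any
    licence: str = "CC0-1.0"  # reuse terms agreed with the institution


record = CrowdsourcedRecord(
    source_system="example-cms",
    item_identifier="object-0042",
    task_type="analysing",
    contributions=["Jan 1 1898", "1 January 1898"],
    consensus_value="1898-01-01",
)

# Serialising to a plain, documented format means either system can ingest the
# data without bespoke integration code being written for every new project.
print(json.dumps(asdict(record), indent=2))
```

However such a format is ultimately specified - as JSON Schema, CSV profiles or linked data - the aim is the same: reducing the per-project integration overhead described above.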

Investment in resources with demonstrable interest from researchers and usable outputs will help to sustain those resources in the long term. This will benefit researchers and their wider communities, project participants and platform maintainers.

Researchers/research communities:

This funding model will benefit researchers by ensuring the long-term provision of services, and by allowing tools to build on best practices over time, with much less risk of technical deprecation or of issues caused by out-of-date code. Furthermore, researchers can commit more time in the grant cycle to their research, communication, and administrative strengths rather than the complex process of software development. It will benefit research communities by perpetuating the useful and (re)usable outputs of crowdsourced research projects.

Project participants:

This funding model will benefit project participants by ensuring that their contributions are not wasted and by helping to promote a positive user experience. When infrastructure cannot be maintained regularly over time, the risk of technical error increases; for long-term projects, this can leave tasks incomplete, which affects a team’s ability to analyse and publish their results. It can also degrade the user experience over time. Well-maintained codebases prevent errors in existing infrastructure and also make it easier to implement new feature requests. This means that feedback from participants can be implemented more quickly, which contributes to a positive and responsive user experience and starts to position participants as collaborators. Participants will also benefit from the increased ability of researchers to commit time to project design and engagement activities.

Platform maintainers:

This funding model will benefit platform maintainers by allowing them to focus on technical maintenance as a vital service in its own right. The practice of funding the development of new features without supporting the maintenance and documentation of existing platforms can accrue a large amount of technical debt. More robust platforms will also increase participant and researcher confidence in what can be accomplished collaboratively through crowdsourcing, enhancing the body of practice and creating paths for new methods.

Recommendation: Evidencing and Evaluation

What does this mean? While reported outcomes and evidence of success in crowdsourcing projects often reflect a linear direction of growth, scale is not always the best measure of a project’s health or success. The evaluation of crowdsourcing could better account for the holistic impact of a project. People need support to develop assessment tools and to design and interpret instruments such as surveys; that support can range from access to examples to consultation with professional evaluators. Access to evaluation results can provide a convincing argument that a project’s value is not inherently tied to its size.

Numbers can be attractive because they are relatively easy to collect. They can have a 'wow' factor (when working with large-scale projects). Funders ask for them and projects are accustomed to supplying them. However, numbers and scale don't tell the whole story; they don’t communicate impact on a person-to-person basis. Too much focus on numbers can upset the delicate balance between optimization for data (quality, amount) and optimization for participant experience (personal impact, learning). Similarly, a rush to complete projects within a certain timeline can degrade the user experience in service of faster outcomes. Some projects will take longer than planned, and teams need to be prepared to adjust accordingly.

Success and impact in cultural heritage crowdsourcing are often determined by scale: how many people participated, the volume of data generated, etc. However, scale isn’t always the best measure for evaluating projects, as it cannot take into account the metrics which benefit participants. As noted in the Handbook, ‘Focusing on content-production metrics has a bias towards productivity, rather than engagement and human change'. Teams need support and resources to plan and carry out evaluation practices (such as surveys) that consider and measure community impact as well as output. For example, there is an opportunity to build on work such as Europeana’s Impact Playbook.

Because crowdsourcing is at its best when all participants benefit, evaluation should include an understanding of participant motivations and experiences, leading to improved practice. Focusing on scale can lead project managers towards practices that tip into the exploitative by valuing the product over the people. Context, comparison, and more nuanced evaluation are more likely to help project leads identify next steps for planning and improving approaches in their projects. Evaluation results can give participants the opportunity to formally register their feedback, leading to better accountability practices for project teams.

Why does this matter? (What is the impact of not doing this, what opportunities are lost?) Without resources, many teams struggle to know what to ask participants on a survey, how to frame their questions, and how to determine what sort of ethical clearance they may need to obtain from their institution in advance of carrying out this work. Additionally, lack of resources around best practices for reporting and sharing evaluation results can lead to ineffective use.

What do we propose and for whom? An evaluation toolkit for cultural heritage crowdsourcing projects would allow project teams to allocate resources appropriately and incorporate evaluation into project work plans. Example questionnaires and surveys will help to shift thinking about value away from solely being on output/scale/product and compellingly demonstrate how good projects can also impact participants’ emotional, social, and mental wellbeing. Ideally, the toolkit will provide ‘recipes’ for measuring different kinds of success, allowing teams to more accurately convey a project’s success and impact beyond just numbers.

For projects:

Projects can invest time in articulating the success metrics that really matter to them, then matching these metrics to evaluation methods. Projects should provide feedback on the evaluation toolkit and contribute improved versions of assessment instruments in the course of their funded research and community engagement. 

For funders:

Funders can partner with practitioners to develop assessment criteria and toolkit components that meet funding requirements and benefit broader practice in crowdsourcing projects. Funders can lead the way by not just requiring assessments that speak to scale, but also looking to the transformation experienced by participants, institutions and researchers. For cultural heritage crowdsourcing projects, the best and most powerful evidence of their impact often feels anecdotal because it is qualitative in nature. This type of evaluation can help to convert that ‘feeling’ (or ‘anecdata’) into easily gatherable and accessible metrics. Funding bodies can ask different types of questions within their evaluation sections and requirements, depending on their institutional goals or the terms of the funding scheme.

Unless these evaluation requirements are included directly in notices of funding opportunity, people cannot build these methods into their success metrics, project values, or proposed work plans. By including these evaluation requirements, funding bodies can convey the importance of these methods and help to break the cycle of numbers being valued over human experience.

Recommendation: Skills and Competencies

[Note to the reader, July 2023: we are looking to refine this section following our soft launch and review period. Your comments can help us improve!]

What does this mean? The key elements to designing GLAM crowdsourcing projects are: 1) recognizing the skills and competencies necessary for a project to succeed; 2) identifying where those skills and competencies exist via a skills inventory; and 3) using this information to determine which skills and competencies you need to cultivate, either internally or through partnerships.

Why does this matter? (What is the impact of not doing this, what opportunities are lost?) It is imperative that crowdsourcing projects incorporate diverse skill and subject matter expertise. Without a skills inventory, it can be difficult to recognize what skills are needed and when, and where complications may arise over the course of a project. Determining gaps in an inventory can help when preparing a budget, and failing to recognize these gaps can lead to major disruptions to a project timeline, or even catastrophic outcomes such as failure to use the results of a project.

Working on crowdsourcing projects provides distinctive opportunities for professional development, specifically because they integrate such a wide range of skills: project management and planning, stakeholder management, strategic communication, data management, technical architecture (tasks, workflows, data needs), user research and community outreach, as well as chances for reflexivity and knowledge exchange through blog posts, presentations, papers and posters. Secondments, role shadowing, and work details allow individuals to develop skills and enable the specialist skills required to be shared across an organisation.

What do we propose and for whom? 

For project teams:

A self-guided skills inventory assessment tool would allow teams to undertake a skills inventory at the early design and conceptualization stage of their project. This tool could complement the information in Chapter 5 of the Handbook (“Designing cultural heritage crowdsourcing projects”) by turning the chapter contents into a worksheet that teams can fill out for their own project.

For funders and organisations:

Additionally, funding the creation of a skills assessment workshop would help train people to assess their own skills, and would provide a space that brings people together for discussion, deep thinking around project planning, and getting to know other members of the community, helping them grow their networks and even identify potential collaborators. Workshop activities could include being assigned a short description of a crowdsourcing project and determining your own skills, those available within your organisation, and any remaining skills not represented by your team, then identifying a path to attain the missing skills. They could also include opportunities to develop skills for writing strong applications for funding these types of projects.

These resources will help teams in the process of resource allocation and requisition. For example, projects do not always need to have all their skills in place before applying for funding, but they do need to be able to identify which skills they already have and which are needed.

Recommendation: Communities of Practice

What does this mean? Support knowledge exchange between people at different stages of their work on cultural heritage crowdsourcing. This, in turn, supports participants and projects.

For those new to the field: 

Publication in this field tends to focus on research outcomes rather than on sharing best practices for design, methods, building, etc. We believe that fostering strong communities of practice can serve as a de facto support network for people building and running cultural heritage crowdsourcing projects, helping them avoid common pitfalls and become aware of available resources, funding opportunities, etc.

For experienced folks: 

Communities of practice can help practitioners identify partners with shared values, goals, data formats, needs, etc. Additionally, communities can identify emergent challenges in the field and work together to identify necessary steps to take. 

Why does this matter? (What is the impact of not doing this, what opportunities are lost?) 

Natural attrition, challenging pay and working conditions and lack of career opportunities mean that the field - like much of the wider GLAM sector - loses experienced practitioners. It's important to note that the practices that sustain crowdsourcing projects are often care and communication-based, which are traditionally less valued and compensated than the technical skills in designing and maintaining the platforms and tools that enable that work. Stronger communities of practice might encourage more knowledge exchange, reducing the impact of individuals leaving the field.

The COVID-19 pandemic surfaced the greater possibilities of being together in this way. For example, communities of practice can help to facilitate international collaboration. Digital projects are much more easily global, so partnerships also need to be global if our projects are going to serve an international community. Additionally, the need for many institutions to ‘pivot’ to digital projects during the pandemic demonstrated to many teams the value of this type of work, as well as the amount of labour and other resources required to create, run, and maintain cultural heritage crowdsourcing projects.

Diversity, equity and inclusion work is vital when machine learning/AI technologies are integrated with crowdsourcing. Crowdsourcing cultures are influenced by collective action and volunteering cultures within a nation or region, and enabling wider representation from non-Anglo cultures would lead to more creativity.

What do we propose and for whom?

Our original funding imposed geographical limits. The ability to collaborate and build between our two countries was invaluable, but the conversation would benefit even more from a truly international reach. We propose that funders build on the momentum from this networking grant by providing expanded opportunities for international collaboration, beyond just the US and UK.

Potential formats and activities include: 

  • Funding logistical support for informal ‘meetups’ (e.g. Digital Cultural Heritage D.C.) where people can informally present projects, share lessons learned, etc., with opportunities for discussion and/or long Q&A sessions following presentations 

  • Funding formal low cost, low barrier conferences or annual workshops for people to come together and share work in progress, new/experimental methods, or present challenges for group discussion

  • Funding to create formal peer review panels, resulting in financial, professional recognition for people reviewing project proposals at a formative stage, and helping teams apply best practices in their plans. The expertise best suited to review practice and research in this field is held by practitioners: project leads, community managers, developers. Creating opportunities for practitioners to participate in open peer review not only enhances the quality and nuance of published accounts of practice, it increases the holistic research interpretations that can be made of crowdsourcing practice and its outcomes. 

  • Resources and workshops to help identify, develop, and enact values within project planning and evaluation

  • Encouraging funded projects to share outcomes with established communities of practice in related areas, such as GLAM technology discussion lists and conferences, AI4LAM, IIIF communities, etc. that don't place the same value on (or have the same access to) peer-reviewed publications

  • Supporting secondments, work details and role shadowing, to ensure skills development to support communities of practice

[Note to the reader, July 2023: how would you amplify your lessons learnt? Please share your ideas!]

Recommendation: Incorporating Emergent Technologies and Methods

What does this mean? The expectations of cultural heritage crowdsourcing participants change constantly, as technologies they encounter and use elsewhere evolve. Methods developed in related fields like Human-Computer Interaction (HCI) or Computer-Supported Cooperative Work (CSCW) need adaptation and explanation to make them intellectually legible, ethically appropriate and technically possible.

Why does this matter? (What is the impact of not doing this, what opportunities are lost?)

Opportunities

  • Develop skills within the sector 

  • Address issues of scale that even crowdsourcing cannot touch

  • Educate the field about when AI/machine learning is appropriate, and what is necessary to do it effectively as well as ethically

  • Work through existing tensions between optimization and engagement; people with experience in cultural heritage crowdsourcing have an opportunity to share their expertise around public engagement, while HCI or CSCW experts have a lot of experience optimising for various metrics (data quality, volume, etc.)

Risks

  • Unnecessary work being done by volunteers when software could do it better, leading to decreased participation from volunteers who are aware of this

  • People unnecessarily using technologies like machine learning, or adopting a technology or method that is inappropriate for their project, leading to delays in producing results

  • People implementing machine learning processes and workflows that lead to a poor user experience

  • Unaudited biases in AI / machine learning tools introducing errors or inappropriate data

  • Black boxing of process or results  

What do we propose and for whom?

We propose funding for educational resources and workshops on the opportunities and consequences of proposed technologies, and advising on approaches for implementing them appropriately. 

Additionally, we propose a new funding call focused on continued iteration, experimentation, and careful adoption of new technologies, including examples and considerations from other contexts. This call would benefit from a requirement for interdisciplinary expertise and skill sets to ensure a broad set of reflections on the sociotechnical, cultural, and technical dimensions of crowdsourcing. Essentially, it would provide a way to share experiences, with frank communication about whether the technology was worth the effort. For example, during the process of preparing this white paper, the NEH announced a funding call around the Dangers and Opportunities of Technology, with a specific focus on “humanistic research that examines the relationship between technology and society.”

Intractable challenges for future work

Even with the above recommendations, challenges remain that we do not yet have best practices to address, and which are extremely difficult to avoid. This section details some of the challenges we have encountered and identified. They are presented in order of their likelihood of remaining, even if the above recommendations are enacted.

Staffing projects is difficult. Most projects need many roles covered, but these roles are unlikely to require full-time effort unless the institution operates at a larger scale than the majority of projects. This challenge can be reframed through strategic messaging to leadership, and relief may be possible through pooled resources or more flexible hiring models, such as secondments and exchanges, or creating Museum Development Officer roles shared across a region.

Values are maintenance work, too. Participant exploitation can happen even in well-resourced scenarios. Work that is primarily needed by an organisation should not be allocated to volunteer participants for whom it might not be a core goal or have any meaning. Actions to avoid exploitation might include creating participant advisory boards (or including participants on project advisory boards) or shared ‘Bill of Rights’ style guidelines for GLAM crowdsourcing participants (in the vein of the UCLA Student Collaborators’ Bill of Rights, published in 2015). Funders can support the creation of a coalition between participants and practitioners to create these resources collaboratively.

Innovation can be harmful when it is not necessary. As the field matures, good projects will not need to be innovative, and can instead follow existing models. Funding that relies on innovation contributes to the challenges listed above, and 'beyond state of the art' criteria may even become harmful in requiring people to ignore methods we know work equitably. If a project is proposing technical development we should instead ask how this work will enhance the functionality of existing infrastructure.

Sustainability is a challenge addressed throughout this white paper, and is particularly difficult when coupled with the innovation requirements addressed above.

By contrast, innovation can stall if a monopoly of platforms emerges. When the overhead for trying new approaches is too large, consequences include stagnation in the field due to limits on experimentation. Acquisitions and mergers among vendors have consolidated collections management systems, and these vendors may be less responsive to requests from 'soft' projects. Instead, we should consider creating opportunities for middleware, data management workflows and expertise that link platforms to content systems, or which work in tandem with existing resources.

To what extent does the framing of crowdsourcing and citizen engagement exclude certain groups or communities? Whose work is legible within these frameworks and how does that affect work on diversity, equity and inclusion? How can predominantly white institutions (PWIs) approach this? Relatedly, how should the field work alongside community archives and activist projects without expecting them to want to be like GLAM projects? These challenges are intractable as they constantly require attention and are informed by broader, infrastructural and sociocultural divisions, and legacies of exclusion.

Too many best practices can be a barrier. As the amount you need to know before you start a project grows, the overhead becomes larger. Sandbox environments or workshops with funders could take people from ‘woah’ to ‘go’ by providing the skills and information they need to work through potential project ideas, from internal conversations to funding applications. These workshops could consciously address the diversity and vitality of the field, encouraging creativity and staying away from 'that didn't work in the past'. 

Pace of change. An acceleration in the quality and availability of generative AI methods in the past year, including the ability to summarise and reformat textual information, will further change the relationship between volunteers and data managers. What effect will this have on conversations about 'quality', and the tension between work that was more traditionally in-house (such as applying structured specialist vocabularies) and work considered suitable for volunteers? This reinforces the need to constantly operationalise values, and ensure they are widely understood and translated as necessary.

Conclusion

Throughout the Collective Wisdom project, we have endeavoured to iteratively surface practical approaches and shared challenges. The recommendations here speak to collaborative futures based on current work, future trends and the need to include more diverse perspectives. 

We have committed to sharing the work we’ve created in each collaborative step of the project early and in slightly 'version 0' form because we believe that intentional engagement, complementary thinking, and provocative questions will help us develop meaningful and transferable approaches. 

We hope that you, as a reader, will share your thoughts with us and with others; and that the Collective Wisdom project provides foundations for thoughtful design and ethical practices for crowdsourcing practitioners, future projects and funding schemes, and organisational leadership. 


Appendices

Appendix: about the Collective Wisdom Project

The Collective Wisdom Project was led by Principal Investigator Mia Ridge (British Library) and Co-Investigators Meghan Ferriter (then Library of Congress) and Samantha Blickhan (Zooniverse). 

Our overarching goals were to:

  • Foster an international community of practice in crowdsourcing in cultural heritage

  • Capture and disseminate the state of the art and promote knowledge exchange in crowdsourcing and digitally-enabled participation

  • Set a research agenda and generate shared understandings of unsolved or tricky problems that could lead to future funding applications

We planned two activities to reach our goals:

  • two week-long collaborative online ‘book sprints’ (or writing workshops, March - April 2021) designed to produce an authoritative book within a month, subsequently published online as the Collective Wisdom Handbook: Perspectives on crowdsourcing in cultural heritage

  • two follow-up workshops to interrogate, refine and advance questions from the field and agree upon high priority issues for future work, laying the foundations for future collaboration and providing input for the white paper – held over two days on October 20 and 21, 2021

These activities were designed to produce the following results:

We are grateful to the Arts and Humanities Research Council for funding as an AHRC UK-US Partnership Development Grant for our proposal, 'From crowdsourcing to digitally-enabled participation: the state of the art in collaboration, access, and inclusion for cultural heritage institutions', AH/T013052/1.


Appendix: project credits and participants

Mia Ridge is the British Library's Digital Curator for Western Heritage Collections. Part of the Library’s Digital Research team, she provides advice and training on computational research, AI / machine learning and crowdsourcing. She's been researching, designing and building crowdsourcing tools, and running crowdsourcing projects for cultural heritage collections for over a decade. A Co-Investigator on Living with Machines (2018-23), she also co-curated the Living with Machines exhibition with Leeds Museums and Galleries (2022-23). She blogs at Open Objects.

Samantha Blickhan is Co-Director of the Zooniverse team at Chicago’s Adler Planetarium, and Humanities Lead for the platform. She guides the strategic vision for Zooniverse Humanities efforts, including managing the development of new tools and infrastructure, publishing original research, and collaborating with teams building Humanities-focused crowdsourcing projects.

Meghan Ferriter served as Co-I during the Collective Wisdom project while Senior Innovation Specialist with LC Labs. She has welcomed volunteers, supported staff, and designed crowdsourcing projects at the Smithsonian Institution and the Library of Congress; she was also a Co-I on Computing Cultural Heritage in the Cloud (2019-2023), and developed ethical implementation frameworks for AI. In spring 2023, Meghan moved on to a new role supporting government transformation through strategic investment at the Technology Modernization Fund in the General Services Administration.

Collective Wisdom Handbook book sprint co-authors

Our book sprint participants and co-authors were (in alphabetical order): 

Austin Mast

Ben Brumfield

Brendon Wilkins

Daria Cybulska

Denise Burgher

Jim Casey

Kurt Luther

Michael Haley Goldman

Nick White

Pip Willcox

Sara Brumfield

Sonya J Coleman

Ylva Berglund Prytz


Workshop participants

Collective Wisdom workshop on 'community organising, communication and participant experience for crowdsourcing in GLAMs and DH', 20 October 2021

In alphabetical order:

Andrew Ellis 

Austin Mast 

Ben Brumfield 

Brendon Wilkins

Sheila Brennan

Caitlin E. Haynes

Daria Cybulska

Geoff Belknap

George Oates

Hannah Ishmael 

Josie Fraser

Kelly Foster

Konrad Mould 

Lauren Algee 

Michael Haley Goldman

Mike Esbester

Nicolas White

Pip Willcox 

Siobhan Leachman

Sonya J Coleman

Victoria Morris 

Ylva Berglund Prytz


And

Mia Ridge

Samantha Blickhan 

Meghan Ferriter 


Collective Wisdom workshop on 'data and technologies for crowdsourcing in GLAMs and DH', 21 October 2021

In reverse alphabetical order:

Ylva Berglund Prytz 

Victoria Morris

Terence Gould

Sonya J Coleman

Sheila Brennan

Shawn Averkamp

Pip Willcox

Nicolas White

Michael Haley Goldman 

Matt Lease

Lauren Algee

Kurt Luther 

Justin Schell 

Josie Fraser 

George Oates

Fiona Dakin 

Daria Cybulska 

Brendon Wilkins 

Bill Thompson

Benjamin Charles Germain Lee

Ben Brumfield 

Austin Mast


And

Mia Ridge

Samantha Blickhan 

Meghan Ferriter
