We know that somehow footnotes in this one chapter got screwed up in copying from the source to the online ePub. We’ll fix them when we update the text in response to comments.
Computers are great, except when they aren’t…
Perhaps consider broadening this by also adding more subjective viewpoints to a collection object, thus giving space for polyvocality.
Correcting audio transcripts: http://fixitplus.americanarchive.org
And Waisda for moving images: https://www.taylorfrancis.com/chapters/edit/10.4324/9781315575162-15/waisda-making-videos-findable-crowdsourced-annotations
Please also add the Europeana collection days. They are awesome.
Awesome
Yes, very good definition. Perhaps add that, through crowdsourcing, the polyvocal and complex nature of collections can be explored by allowing space for multiple perspectives to be added by users.
You might want to refer to cultural-ai.nl. Disclosure: I’m part of the research team.
Thanks Johan!
You can argue that crowdsourcing results in a more equitable relationship between ‘organisation’ and ‘users’, and so offers a way to engage more deeply with an organisation: through experts who are not professionals working for an institution, but who share their time and energy because they have an intrinsic motivation to do so. In other words, we could be adding a new meaning to the word ‘your’, one based more on principles such as the ‘commons’ and equality.
I love that!
A discussion of recolonising or othering the subjects of potential projects needs to be included. Cultural Safety is not something that I have been able to find within this text (but I might have missed something). The biggest challenge for me is that this publication was written from a Western / Northern Hemisphere perspective and doesn’t include the nuances that are experienced by Indigenous peoples or those from the Global South. Happy to be proved wrong.
Victoria, thanks for your comment. The funding to write the book was a US-UK networking grant, so our participants reflect that. I’ll pass your comment on to the team to ask if there are any parts of the text they’d like to highlight in response.
Our closing workshop has no limits on international attendance, and this would be a good topic to address there, especially as organisations have advanced so much in anti-racism and de/recolonising work since even the start of the year.
Footnote 4 goes to https://en.wikipedia.org/wiki/Category:User_warning_templates; maybe https://blog.zooniverse.org/2013/08/02/many-zooniverse-papers-now-open-access/ would be better.
I know this is where you’re mainly introducing this idea that you explore further in later parts of the book, but I wonder if it would be useful to also cite an example of a specific project (or even an aspect of one of these projects) that actually incorporated the ideas and feedback of community participants / volunteers, beyond reflection and discussion. Have any crowdsourcing programs pivoted or revised aspects of their projects based directly on user input? I’m not saying I have the perfect example in mind (we’ve done some of this in the Smithsonian Transcription Center, though not as well as I’d like for it to be included here), but I just wanted to throw it out there! How can crowdsourcing volunteers / communities actually play a role in developing and shaping the project itself?
Is there any research into the expectations and assumptions of participants in crowdsourcing? Personally, I assume that the fruit of my efforts would be released fully as Open Data and wouldn’t be deliberately or accidentally restricted (for example, because the licence is tainted by the use of maps, etc.). Other participants may not know to ask… or maybe they don’t mind?
It’s definitely something that some people look for, and we’ve included a comment from our volunteer survey along those lines at https://britishlibrary.pubpub.org/pub/understanding-and-connecting-to-participant-motivations/release/1#nkkt6sw478b
One distinction that I have made is about what kind of knowledge or competence is asked of the participants: whether we are talking about generic knowledge (or maybe ‘skill’ is a better term here) like being able to read handwriting (which can actually improve), or specific knowledge (‘information’) that the participants have and are asked to share.
The important difference is that when we are dealing with skill-based tasks like transcription (or some basic categorisation), the limit on participation is how much time the user can contribute, whereas in the case of knowledge-based tasks we need more users with their specific knowledge to contribute. For instance, geotagging images requires a user to recognise the place depicted, and if there is no textual metadata this is the only way of identifying the location in the picture; the will to contribute alone is not enough to participate successfully.
Vahur, thank you! That’s a really good summary of the logic underlying that grouping. I’ll look at incorporating it and crediting you in the paragraph.
Do you mean ethical values? What are these values (openness seems to be one of them - are there others?) and are they universally applicable to all crowdsourcing initiatives?
…and shapes the opportunities (or lack thereof) to participate, e.g. issues of connectivity in certain parts of the Global South.
I guess you go on to acknowledge this in the sentence starting ‘Digital exclusion…’ but this short paragraph didn’t really hang together and felt slightly tokenistic to me. And then the following one sounded abrupt: ‘but never mind, as long as you hold certain values (undefined) it’ll all be fine’.
Appreciate this is very tricky to navigate successfully and succinctly though! Maybe explicitly acknowledging the biases and any gaps in experience of the co-authors here might help in some way?
Thanks for that really thoughtful comment. We’ll take a close look at this when re-editing and think about how to address the issues you’ve raised.
Linking? E.g. geo-referencing; matching to Wikidata.
We could make that more explicit in ‘finding information’, as identifications and classifications would often come with links.
Actually there are 17th-century examples. James Jurin & the Royal Society used crowdsourcing to gather meteorological data from across the UK (and abroad, I think). See https://publishing.cdlib.org/ucpressebooks/view?docId=ft6d5nb455;chunk.id=d0e6189;doc.view=print
Awesome, thank you! There’s also the longitude rewards (https://en.wikipedia.org/wiki/Longitude_rewards), which might even put things back to the 16th century.
The form of the verb “immerse” is incorrect. Should be “When an expanding group of people immerse themselves”, or “When an expanding group of people immerses itself”.
Thanks for spotting that!
Crowdsourcing can also be a learning opportunity for the crowd, e.g. identifying flowers in images as a way to improve one’s flower identification.
Excellent point - we’ll make sure that’s highlighted as it’s something we all agree on.
You don’t seem to have used Oxford commas elsewhere.
Good catch, thank you!
I’m wondering how Wikidata and other Linked Open Data crowdsourcing projects would be classified. Would those projects fall within either “generating data/material/collection” or “share what you know”, or would there be a separate type of “link what you know” project because of the type of outcome that results?
Personally, I think of links as a form of description, e.g. a Wikidata or ISNI link to Beethoven identifies and thereby describes the creator of a work. In this framework I was thinking about how the intellectual work relates to the item in front of you, rather than how that work is represented. The problem with simplification in an introduction is that it simplifies!
I believe this is a key mindset for those designing and running crowdsourcing projects. If all that is aimed for is a narrowly defined measure of success, e.g. all the specimen labels transcribed as written, then projects, volunteers and organisers can miss out on a whole range of rich experiences and data that can result from volunteers being encouraged to explore and engage more deeply with the project content.
Thank you for articulating that!
This appears to be the incorrect link (https://en.wikipedia.org/wiki/Wikipedia:Administration); as noted above, something weird is going on with all the citations in this chapter.
Yes, for some reason they are weird in this chapter and apparently no other.
Love the illustration below!
A small thing, but I’m wondering if this should be ‘digital’ rather than ‘digitized’, as not all CH crowdsourcing projects are for surrogates, for example crowdsourcing metadata or markup for a born-digital document. ‘Digitized’ is also used once below.
True. I think we were trying not to over-think it, but generalisations aren’t always helpful.
Um… the link here goes to Wikimedia User Warning templates. I was hoping to get a link to the Zooniverse so I could see a sample of their contributor content agreement (or project creator agreement, which I guess is what you’re really describing).
Thanks, Christina. We’ve discovered that the footnotes for this chapter are somehow out of whack, but we haven’t had time to update the ePub selectively.
Here we could cite ‘From Crowdsourcing to Nichesourcing’, The National Library of Finland Bulletin, 2016 (accessed 29 May 2021), https://blogs.helsinki.fi/natlibfi-bulletin/?page_id=206, for their explanation of nichesourcing for ‘complex tasks with high-quality product expectations’ and specific language skills.
Is there a chance that these refer to the footnotes of a totally different chapter, in addition to the wrong number? This marker (9) pops up the footnote body for 8, but that body (a link to the Colored Conventions project) also doesn’t seem to apply to the text preceding footnote 8 (Howe’s Wired article, presumably).
We’ll need to check them against that particular ePub version to see if it’s wrong in the original too. It’s hard to see how copying and pasting chapter text wholesale could have done that, but I don’t know anything about how the ePub versions are made.
All the footnotes in this section seem scrambled. For example, this footnote marker 1 links to the body of footnote 10.
I’m not sure what’s going on, but it seems to apply across the whole chapter.
I thought I’d replied to this, sorry! For some reason footnotes have gone awry in this chapter and apparently only in this chapter.
I like the discussion about heritage, but I might rephrase
“In a related manner, the word “heritage” can carry unproductive connotations in certain contexts,” for
“heritage is a problematic term that is more about veneration of the past, used for both good and ill; we need to adopt methodical historical study that embodies our values.”
See also The Heritage Crusade and the Spoils of History and https://www.open.edu/openlearn/history-the-arts/history/what-heritage/content-section-2.1
Thanks Jim! I’ll ask the folk who focused on that section to take a look.
This tension is neatly encapsulated in the original definition of crowdsourcing, which was ‘the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call’, i.e. it explicitly included the idea that the task involved was something that would need to be done anyway. Obviously your definition is broader, but worth pointing out?
Never mind, I read on…