Wiki Loves Monuments 2015 Evaluation Report: Key findings
We examined the results in terms of the priority goals that program leaders stated in their reporting, and we outline critical next steps toward program learning.
Read this page to learn the most important takeaways and recommended next steps.
The four most commonly cited priority goals for 2013 and 2014 Wiki Loves Monuments events were: to increase contributions; to build and engage community; to increase awareness of Wikimedia projects; and to increase diversity of participants.

Increasing contributions

Data available on the total number of files uploaded through the Wiki Loves Monuments contests (rather than just those captured in this report) shows:
Media uploaded for Wiki Loves Monuments captured in this reporting represents 11.4% of all media uploaded to Commons by registered users during the reporting period. Media uploaded for these events has been used in articles at nearly five times the rate of Commons uploads overall during the same time period. In terms of building and engaging community, participation in the examined events included more than 17,000 Wikimedia users (nearly 1,400 existing Active Users and 11,000 New Users). Of the new users generated through the examined contests, 1.5% survived as editors on Wikimedia projects at three-month follow-up.
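To make the arithmetic behind these comparisons concrete, here is a minimal sketch of how an image-use rate and a three-month editor survival rate can be computed. All counts below are illustrative placeholders, not the report's raw figures:

```python
# Illustrative sketch: how the headline rates above can be computed.
# The counts below are placeholders, NOT the report's raw data.

contest_uploads = 100_000          # media uploaded through the contests
contest_uploads_in_use = 15_000    # of those, files used in at least one article

all_commons_uploads = 875_000      # all Commons uploads in the same period
all_commons_in_use = 27_000        # of those, files used in articles

contest_use_rate = contest_uploads_in_use / contest_uploads
overall_use_rate = all_commons_in_use / all_commons_uploads
print(f"Contest use rate: {contest_use_rate:.1%}")
print(f"Overall use rate: {overall_use_rate:.1%}")
print(f"Ratio: {contest_use_rate / overall_use_rate:.1f}x")  # "nearly five times"

new_users = 11_000                 # new accounts created through the contests
surviving_editors = 165            # still editing at three-month follow-up
print(f"Three-month survival rate: {surviving_editors / new_users:.1%}")  # ~1.5%
```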
Increasing awareness of Wikimedia projects

The majority of contest participants are users newly registered to Wikimedia. In addition to the reach of the events themselves, nearly 90 percent of program leaders reported that they had developed blogs and other informative online documentation of their events. Promotional reach, and any resulting learning about Wikimedia projects, is not captured by the data in this report. For 2013, however, a community-led survey of contest participants included some items about how participants learned about the contest. The results indicate that participants most often learned of the events through banner postings (60%), while other routes were reported in much lower proportions.
Increasing diversity of participants

The contests reported here were held in 51 different countries and engaged new users over existing users at a rate of two to one. Only five program leaders reported estimates of gender distribution; those estimates ranged from 2% to 39% female participation.
How this information can apply to program planning
Planning for Program Inputs & Outputs
The data presented in this report suggest that small-scale events can be as effective as larger events at attracting new editors, while events with larger participation counts tended to produce more media uploads overall. More images, however, do not always mean more image use. When planning a photo event, it may be useful to balance group size with a mix of new and experienced users to increase image use and ensure high-quality uploads.
Planning event budgets based on the budgets presented here would have many pitfalls: differences in event length, number of participants, local costs, and even event style all affect budgets (see also the report on Other Photo Events). For example, a small photo event may consist of lending a Wikimedian photographer a camera so they can go to a cultural festival and upload the photos they take to Commons. If the event organizer already owns the camera, the event may incur no cost, while purchasing a camera would be a significant expense for another leader who does not already have that resource.
To avoid surprises when using the budgets presented here for planning purposes, try to find an event in a location with an economy similar to your area's, and consider reaching out to a successful program leader to discuss potential resource needs (including possible budget or donated resources). Alternatively, you can find an event based on the same model in a different location and talk to the program leader about the costs incurred before translating those expenses into local prices.
The boxplots illustrating cost per participant and cost per media file uploaded can also be helpful references if, as with the overall budget information, they are taken in the context of each event. If planning a new program, you might expect costs to fall within the middle 50% of the costs per output reported (i.e., within the green bar on the boxplot). Programs lower on the boxplot achieve better outcomes with fewer inputs. We hope, as we continue to evaluate programs and feed the results back into program design, to learn more from the programs achieving the most impact with the fewest resources.
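As a rough illustration of the "middle 50%" reading of those boxplots, the sketch below computes the interquartile range of a set of cost-per-participant values. The numbers are hypothetical, not taken from the report:

```python
# Minimal sketch: the "middle 50%" of a boxplot is the interquartile range.
# Cost-per-participant values below are hypothetical, not from the report.
import statistics

costs_per_participant = [2.10, 3.50, 4.80, 5.20, 6.90, 8.40, 9.10, 12.60, 21.00]

# statistics.quantiles with n=4 returns the three quartile cut points.
q1, median, q3 = statistics.quantiles(costs_per_participant, n=4)
print(f"Middle 50% of reported costs: {q1:.2f} to {q3:.2f} (median {median:.2f})")
# A planned budget per participant inside [q1, q3] falls "within the green bar".
```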
How easily can the program be replicated?
Over the years, the global event team has been very effective in continuously improving and promoting the concept, which has led to a growing number of countries joining the competition. The global event team also offers local organizers support with legal advice and with questions around promotion, prizes, and potential partners. Furthermore, the team has grown increasingly sophisticated in documenting the event and in providing the community with tools to measure the competition's results. All of this makes Wiki Loves Monuments the program in our programmatic landscape with the most cohesive and clear set of goals, measures of success, and documentation for replicating success. Still, we are currently examining a selection of events and reaching out to program leaders in order to develop a program toolkit specifically for gathering the different stories, resources, and advice for how to plan, run, and evaluate photo contests and events. Look for it later in 2015.
How does the cost of the program compare to its outcomes?
Because a comparatively large number of organizations receive grant funding explicitly for their Wiki Loves Monuments events, we were able to establish a basic cost-benefit baseline for Wiki Loves Monuments somewhat better than for any other program covered in this evaluation series. Keep in mind that because we relied on data mined from grant reports, this section addresses only Wiki Loves Monuments implementations that were funded by grants, not unfunded implementations. Our cost-benefit analysis comprises data from those Wiki Loves Monuments implementations with reported budgets. Looking at the cost per uploaded image in the combined reporting, we find that for events with reported budgets, the average number of uploads is going down while the average cost per upload is going up:
Because photos used in Wikipedia articles reach a much larger audience than those not included in articles, we also looked at the number of images currently in use in articles (as of November 4, 2013 for 2012 contest uploads; as of March 30, 2015 for 2013 and 2014 uploads) to see whether use had increased. As outlined in the table below, image use was actually lower for events that reported budgets than for those that did not, for both 2013 and 2014 data.
Knowing the cost of an image uploaded through Wiki Loves Monuments that is also in use on Wikipedia and elsewhere is important because it allows us to compare that number with its equivalent from other photo upload initiatives. Here we see a slight trend toward increased cost alongside decreased impact, indicating that it is important to be cautious about the investment level for these photo events.
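For readers who want to reproduce this kind of comparison, here is a minimal sketch of the two cost metrics discussed above, cost per uploaded image and cost per image in use. The event names, budgets, and counts are hypothetical placeholders, not figures from the report:

```python
# Minimal sketch of the two cost metrics discussed above.
# Event names, budgets, and counts are hypothetical placeholders.

events = {
    # event name: (reported budget in USD, images uploaded, images in use)
    "Event A": (1_500.00, 12_000, 1_800),
    "Event B": (4_000.00, 20_000, 2_200),
}

for name, (budget, uploads, in_use) in events.items():
    cost_per_upload = budget / uploads
    cost_per_used_image = budget / in_use
    print(f"{name}: ${cost_per_upload:.3f} per upload, "
          f"${cost_per_used_image:.2f} per image in use "
          f"({in_use / uploads:.1%} of uploads in use)")
```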
Next steps
Join the conversation! Visit the report talk page to share and discuss: