Read this summary page for a description of the report, data highlights across three core outcome areas, and lessons learned across program implementations. Use the tabs in the navbar to find detailed sections that dive deeper into the data.
This report presents data on a year of edit-a-thons. It examines metrics used across a broad spectrum of programs so that goals and outcomes can be compared across different program types. It can be used for:
designing and planning future programs,
exploring edit-a-thon effectiveness, and
celebrating edit-a-thon successes.
The authors recommend caution when drawing conclusions about individual programs based solely on the data presented here, as there is insufficient information about each edit-a-thon’s unique context and goals.
The report includes data from 121 edit-a-thons held in 19 countries between September 2013 and December 2014. These events drew a total of 2,328 participants, who added over 5 million characters to over 5,000 articles.
Edit-a-thons activate editing communities and can create significant content around a specific topic. The average edit-a-thon had 14 participants, 20,812 characters added or removed, and 7 articles created or improved. In total, the edit-a-thons included in this report added over 3,362 pages of text to Wikimedia projects.
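As a rough illustration of how the character totals translate into the page figure above, the sketch below assumes a conversion of about 1,500 characters per page; that divisor is not stated in this summary and is inferred here only from the reported numbers.

```python
# Rough illustration only: convert characters added into "pages of text",
# assuming ~1,500 characters per page (an inferred divisor, not stated
# in this summary).
total_characters = 5_043_000   # consistent with "over 5 million characters"
chars_per_page = 1_500         # assumed conversion factor

pages = total_characters / chars_per_page
print(f"~{pages:,.0f} pages")  # ~3,362 pages, matching the figure above
```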
“A lot of our editathons have focussed either on the gender gap or working with institutions we're partnered with, and often they overlap. With women making up only 11% of Wikipedia's editorship, we're keen to help improve that and increase coverage of women at the same time.”
Richard Nevell, Wikimedia UK
Retention rates appear to be high for edit-a-thons, but we need to know more about the number and proportion of new users at edit-a-thons before we can compare them to other programs. About 52% of participants identified as new users made at least one edit one month after their event, but the percentage still editing dropped to 15% by the sixth month after their event. Retention rates for existing editors held steady at around 70% over the analysis period.
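For readers who want to reproduce this kind of retention measure with their own event data, here is a minimal sketch of the calculation described above: the share of participants who made at least one edit in a given month after their event. The data shapes and function names are illustrative assumptions, not the report's actual pipeline.

```python
from datetime import date, timedelta

def retained(edit_dates, event_date, month):
    """True if a participant edited at least once during the given
    month (1-based) after their event. Months are approximated as
    30-day windows for simplicity."""
    start = event_date + timedelta(days=30 * (month - 1))
    end = event_date + timedelta(days=30 * month)
    return any(start <= d < end for d in edit_dates)

def retention_rate(participants, month):
    """participants: list of (event_date, [edit_dates]) tuples."""
    if not participants:
        return 0.0
    kept = sum(retained(edits, event, month) for event, edits in participants)
    return kept / len(participants)

# Example: one participant who edited two weeks after a 2014-03-01 event.
sample = [(date(2014, 3, 1), [date(2014, 3, 15)])]
print(retention_rate(sample, month=1))  # 1.0
```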
Program leaders generate an enormous amount of resources on how to run an edit-a-thon and on different event styles. Every one of the 38 events reporting this metric was run by an experienced program leader who can help others run a similar edit-a-thon. (See the data tables to find which program leaders ran which events.)
We still need more data in order to draw stronger conclusions about inputs (e.g. dollars or volunteer hours), outputs (e.g. bytes added), or outcomes (e.g. retention). This also means that we are limited in determining the following:
Actual costs for edit-a-thons. Key monetary costs for edit-a-thons may include renting space, purchasing food, or renting equipment, but other costs may exist depending on local context, and we were able to obtain only a few reports of costs. In addition to monetary costs, we need more data on other resources, such as the time and effort of volunteers and staff.
Associating costs with outcomes. Without more data, the report cannot draw clear conclusions about how inputs influence outputs and outcomes.
Other measures and outcomes. This report is limited to certain measures and outcomes of edit-a-thon achievements. Growing volunteers’ abilities to run programs and increasing collaborative editing are likely outcomes that are not captured here, among others.
Use the report data for planning future edit-a-thons. The report summarizes each input, output, and outcome metric. For planning, program leaders and funders can use the range of data to gauge what is generally a high or low value for each metric. Tables are provided with data from each edit-a-thon; these give readers access to local context and contacts. You can also find more planning resources on the Edit-a-thons Program Resources page.
Increase shared learning about edit-a-thons through data. Having more data and more measures is key to a deeper understanding of programs and their outcomes. This includes online data (e.g. articles created) as well as offline data (e.g. budget, volunteer hours, motivation). To successfully translate data into shared learning, we need both more data and more ways to share and interpret it. Two ways to increase data are to build capacity around data collection and to improve data collection tools. Two ways to improve sharing and interpretation are to connect program leaders and to make existing resources more visible.