Learning and Evaluation/Evaluation reports/2015/Data access
Every year we host a round of data collection and encourage program leaders all over the world to submit their program data.
Find out on this page how many programs were reported, how many were mined, and where evaluation capacity in the movement stands.
Data collection
The first reports included findings from 119 implementations of 7 different types of programs, as reported by 23 program leaders from over 30 countries. To learn about these implementations, we relied extensively on data submitted to us by community members. Based on community requests received last year, we focused more on surfacing data that was already reported through grant programs, event pages, and program registries on wikis.

For our second round of data collection, we did just that. We searched high and low through grantee reporting, and dug deeper through the reports' linked blogs, event pages, and other supplemental reporting resources to identify and gather data from an increased number of program leaders and implementations worldwide. In this second round, we reached farther and deeper to gather reports of programs and captured partial data on programs by 98 different program leaders (a 157% increase from baseline) in 59 different countries (a 197% increase from baseline). This includes reports about 733 different program implementations (a 617% increase from 2014). Of the 98 program leaders identified, 49 provided additional data directly through email outreach (a 213% increase from 2014), lending primary data collection support for 222 of the identified implementations (30%).
Reporting data
Inputs and participation
Inputs
Regarding inputs, program leaders were asked to report:
For those program leaders who did report on specific program inputs:
Input reporting was much higher for PEG grantees, which are required to present project-specific budgets. Among PEG grantees, 17% of program implementations had a reported budget.[2] Relatedly, non-grantees and APG grantees were more likely than others to report hours input (13% and 11%, respectively). Unfortunately, input data is some of the hardest data to obtain through secondary sources, and it will be important for program leaders to better track their inputs in order to understand the level of resources dedicated to their program activities.
Participation
Regarding participation, program leaders were asked to report:
Program leaders were also asked to provide the dates and, if applicable, times of their program. Only 39% of program reports included the total number of participants, and only 23% reported the number of new user accounts created during their program events. Through mining content and event pages, our team has worked to increase the number of known participants for tracking and reporting. We expect data access in this area to increase. In the meantime, watch each report for details on data access and the extent to which we have been successful in filling the gaps.
Content production and quality improvement
Content production
Regarding content production, program leaders were asked to provide various types of data about what happened during their program, depending on the level of data they were able to record and track. These data types were:
Importantly, for any program event with a known date, time, and user list, or a specific category or set of work, these data can be retrieved relatively easily using data tools like Quarry and Wikimetrics. For this reason, the team has also worked to fill in some of these important gaps. We will share updates on each program's data in the discussion of data access within each program report.
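As an illustration of this kind of retrieval, the sketch below counts the edits a list of event participants made during a known event window. It is only a minimal stand-in for the Quarry and Wikimetrics workflows used by the team: it queries the public MediaWiki API instead, and the wiki, usernames, and dates are assumed for the example.

<syntaxhighlight lang="python">
# Minimal sketch: count edits made by a list of event participants during an
# event window. Illustrative only; the report team used Quarry/Wikimetrics.
import requests

API = "https://en.wikipedia.org/w/api.php"  # assumed target wiki

def count_edits(username, start, end):
    """Count edits by `username` between ISO timestamps `start` and `end`."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "ucdir": "newer",   # oldest first, so ucstart is the earlier timestamp
        "ucstart": start,
        "ucend": end,
        "uclimit": "500",
        "format": "json",
    }
    total = 0
    while True:
        data = requests.get(API, params=params).json()
        total += len(data["query"]["usercontribs"])
        if "continue" not in data:
            return total
        params.update(data["continue"])  # follow API continuation

# Hypothetical event cohort and one-day event window
participants = ["ExampleUserA", "ExampleUserB"]
for user in participants:
    print(user, count_edits(user, "2015-03-01T00:00:00Z", "2015-03-02T00:00:00Z"))
</syntaxhighlight>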
Quality improvement
The survey also asked program leaders to report on the quality of the content that was produced during the program. Those programs focused on text content reported on:
For media upload programs:
Participant user status and content production data were extracted using Quarry and/or Wikimetrics, based on usernames reported or on activity measured on the content associated with the program event. This data collection round, through additional extraction based on reported data, the reporting team was able to access additional quality and use measures for the majority of media events and for many on-wiki writing contests where the affected content and participants are publicly available.
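One simple "use" measure for media events is whether uploaded files appear on any wiki page. The sketch below, again only an illustrative stand-in for the team's Quarry/Wikimetrics extraction, checks this through the Wikimedia Commons API's global usage property; the file names are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch: check whether files uploaded during a media event are in use
# on any wiki page, via the Commons API. File names below are made up.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def pages_using(file_title):
    """Return the wiki pages that use a Commons file (empty list if unused)."""
    params = {
        "action": "query",
        "prop": "globalusage",
        "titles": file_title,
        "gulimit": "500",
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return page.get("globalusage", [])

uploads = ["File:Example event photo 1.jpg", "File:Example event photo 2.jpg"]
in_use = sum(1 for f in uploads if pages_using(f))
print(f"{in_use}/{len(uploads)} uploaded files are in use on a wiki page")
</syntaxhighlight>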
Participation, recruitment and retention
Tools like Wikimetrics make it possible to follow up on participants, which means tracking usernames is important to learning about retention. For editathons and workshops, the majority of those reported on did not retain new editors six months after the event ended. A retained "active" editor was one who had averaged five or more edits a month.[1] We looked at recruitment and retention by examining:
Participant usernames were split into two groups, new and existing users, in order to learn the retention details for each cohort. This is important since Wikimedia programs often attract both new and experienced editors. That is especially true for editathons, editing workshops, and photo events, and less true for most on-wiki writing contests, which generally target existing contributors, and for the Wikipedia Education Program, which generally targets new editors. Recruitment and retention data were extracted using Quarry and/or Wikimetrics based on usernames reported, usernames listed on public event pages, or the activity of direct content editors associated with the content improved through the program event. Through this additional data extraction based on program-leader-reported data, the reporting team was able to access retention follow-up data for nearly all program implementations for which usernames were reported.[2]
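Once monthly edit counts are in hand, the retention threshold above can be applied directly. The sketch below assumes the counts for the six months after an event have already been pulled (for example with Quarry or Wikimetrics) and uses made-up usernames and numbers; it flags a participant as retained when they averaged five or more edits per month over the follow-up window.

<syntaxhighlight lang="python">
# Minimal sketch of the retention check described above. Monthly edit counts
# are assumed to have been extracted already; the cohort here is hypothetical.
from statistics import mean

def is_retained(monthly_edits, threshold=5):
    """A retained 'active' editor averaged five or more edits per month."""
    return mean(monthly_edits) >= threshold

# Hypothetical cohort: username -> edits in each of the six months after the event
cohort = {
    "ExampleUserA": [12, 8, 0, 3, 6, 10],  # average 6.5 -> retained
    "ExampleUserB": [2, 0, 1, 0, 0, 0],    # average 0.5 -> not retained
}

retained = [user for user, edits in cohort.items() if is_retained(edits)]
print(f"Retention: {len(retained)}/{len(cohort)} participants")
</syntaxhighlight>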
Replication and shared learning
We wanted to learn if program leaders believed their program(s) could be recreated (or replicated) by others. We also wanted to know if program leaders had developed resources such as booklets, handouts, blogs, press coverage, guides, or how-to's regarding their program. We asked if the program: