Learning and Evaluation/Archive/Share Space/Questions/Archives/2013
Please do not post any new comments on this page. This is a discussion archive first created in 2013, although the comments it contains may have been posted before or after that date. See the current discussion or the archives index.
What is the difference between programs and projects?
A program is "a group of related projects and activities that share the same objective, are repeated on a regular basis, are based on a similar theory of change, and use similar processes and interventions to make that change happen." This means that a project is a part of a program. A program has three things that make it special: a shared objective (related projects that aim to achieve the same objective), sustainability (projects in the program are ongoing and happen regularly), and a model (other projects might have similar theories of change and be similar in how they are executed).
For example:
- In 2012, Wikimedia Sweden, Argentina, Poland, and other chapters were running Wiki Loves Monuments projects that were all part of an international Wiki Loves Monuments program. These projects - all of the different WLMs produced by chapters - have shared objectives (to upload more photos of monuments). They are sustained because they all happen each September. They also have a model - they're based on the Wiki Takes/Wiki Loves model, which has participants photographing specific subject matter for upload to Commons.
SarahStierch (talk) 22:27, 5 July 2013 (UTC)
Can you please define Program Leader?
Hi everyone! This is a great space, but is it for contributors like me? I'm in the U.S. and not affiliated with any chapter, group, or consortium. I wouldn't call myself a "program leader," but I'm not sure what that really is, either. Could you please define it? Since I'm a newbie, it is useful to have a space where I can check in and gain wisdom - I'm hoping this might be that space.
- Jaime's Answer: A program leader is any individual, usually associated with a chapter or Wikimedia entity but sometimes an independent volunteer, who is implementing an identified Wikimedia program (e.g., Wiki Loves Monuments/WikiTakes/WikiExpeditions, Writing Contests, WikiCup, GLAM Content Donations, Edit-a-thons and Editing Workshops, and the Wikipedia Education Program) aimed at improving Wikimedia. So far, we have begun by articulating the programs listed above as examples, but we will continue to expand that list as we grow the evaluation initiative.
I am involved with two GLAM projects. One is with the local county public library system (18 branches). We've done meet-ups and editing classes, and have a county-wide slide show planned for the librarians on "10 things you need to know about Wikipedia... but didn't know who to ask" (slide deck available shortly). The Library District is interested in further Wiki-based adventures (especially around their local history assets) and has offered to purchase a scanner for Commons donations. More as this develops. They would probably be interested in Wikipedia Loves Libraries, despite the fact that there are no local editors for an edit-a-thon (hence the editing classes! Trying to grow some!). This larger project is still in its infancy and will hopefully gain momentum in the fall.
The second GLAM project is cloud-based - here. Currently, we are looking for a bot operator/tech lead, as the previous one retired from all his Wikimedia/Wikipedia projects - phooey! Recommendations appreciated. Due to this snafu, we haven't done any uploading yet, and thus have no metrics/measures.
The range of projects and commitments of the volunteers worldwide is impressive! Can’t wait to lurk in the Parlour/Salon to see what else happens. Cheers, Bdcousineau (talk) 18:41, 25 July 2013 (UTC)
- Jaime's Answer: It sounds like you fit our definition of program leader in terms of your involvement organizing content and editing workshops. Glad you found us =)
Will the Foundation be able to provide people with dedicated support for A/B testing?
A/B testing is a methodology of using randomized experiments with two variants, A and B, which serve as the control and treatment in a controlled experiment. Such experiments are commonly used in web development and marketing, as well as in more traditional forms of advertising. Other names include randomized controlled experiments, online controlled experiments, and split testing.
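In practice, the two variants are usually compared with a significance test such as a two-proportion z-test. Below is a minimal Python sketch of that comparison; the function name and all counts are invented for illustration:

```python
# Minimal two-proportion z-test for an A/B test.
# All counts below are hypothetical, not real data.
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """z statistic and two-sided p-value for the difference in rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 120 of 2400 control users clicked vs. 150 of 2350 treatment users.
z, p = two_proportion_z_test(120, 2400, 150, 2350)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```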
- Short answer: no, we won't be able to provide that service at this point.
See also
- People who want to tell WMF's Analytics group about capabilities they want, or questions they want to answer using data, can reach out via the Analytics mailing list.
- The analytics contributor wishlist
Terminology
Hi Sarah and Jaime,
I have taken the PE&D survey and found that the questions are difficult to comprehend. The glossary page came to my help, but I still had to spend a lot of time figuring out what exactly each statement meant. I can comprehend English fairly well, and therefore answering the survey wasn't too much of a problem for me, but it could be difficult for a non-native speaker of English to fill out the form. I understand that terminology related to evaluation and design is not simple, and that you have tried your best to simplify the questions as much as you can, but I guess it would still be hard for non-English survey-takers even with the glossary. Next time, could we get the survey questions translated into at least some of the major world languages (say French, German, Arabic)? Or could we provide some additional explanation or examples under each survey question in simple language so that it is easier to understand? Or perhaps get the explanations of the glossary terms translated into different languages? We could use meta-wiki or translate-wiki for this purpose.--Netha Hussain (talk) 18:41, 26 August 2013 (UTC)
- Hi Netha. I'm sorry you're having a tough time with some of the questions on the survey. I am glad the glossary has come to your aid. I did request that it go through the translation process, but no one has touched it yet. Perhaps I requested the translation process incorrectly. I think it would be great to have the survey translated into as many languages as possible (survey respondents speak over 20 languages). Having more examples is a good idea, too. This is all really important feedback that I know Jaime, Frank, and I are going to add to our "lessons learned" section when we evaluate the survey itself. So thank you for sharing your thoughts. If you have any questions about the survey questions, please reach out to me on my talk page or via email. SarahStierch (talk) 21:26, 27 August 2013 (UTC)
- And just to follow up, I did ask here about what to do about getting that page translated. SarahStierch (talk) 21:35, 27 August 2013 (UTC)
What do you recommend for evaluation within all-volunteer organizations?
I am interested in program evaluation for my chapter, but I am concerned about creating an additional workload for our volunteers. It already takes considerable volunteer effort to run our chapter, and I don't want to risk alienating people by making them participate in evaluation in addition to all of the hard work they already do. What do you recommend to make program evaluation as little of a burden as possible? harej (talk) 11:50, 27 August 2013 (UTC)
- Hi James. Jaime, our Program Evaluation Specialist, is on vacation until Monday, so I'll do my best to provide some feedback; I know she'll have some good insight about this. This is one of the biggest concerns we've heard from the community. Even I worry about it. While I'm a staff member, I have been a program leader for a few years as a volunteer, and the idea of evaluating the edit-a-thons I've done, after they are so time consuming, was something I hated doing unless forced to do it as a Wikipedian in Residence or to report funding to WMF. And even then - it's a chore. One of the things that we are aiming to develop here at PE&D is a workflow that you'll be able to use as a volunteer for different types of evaluation processes. In theory, every single program you do - from edit-a-thons to workshops, GLAM partnerships, etc. - should have a theory of change, a logic model, and an evaluation process: that is, evaluation before, during, and after the program. And honestly, once you understand how all of this works, it's quite easy, and it makes the process easier each time you do a program. I'm currently working on a logic model for some events I've done in the past, and I realized that even the case studies I did as a Wikipedian in Residence have helped me become a better Wikipedian in Residence. Now I know how to bang out a case study and a basic evaluation in a matter of hours, and when I'm done, it feels good. So we're working on it, and yes, once you get the gist of how to execute these things, it'll become a part of your workflow. At first it might take some time, but with organizational structures and better workflows, we can make it easier. /stops babbling. SarahStierch (talk) 21:59, 27 August 2013 (UTC)
- And you were organizing in DC, so you know exactly what I'm talking about. :-) So the basic gist of what you're saying is, just integrate it seamlessly into everything else we do, and it will feel less like work. Should we adopt a form or some other sort of boilerplate to make it clearer what the goals are and what should be evaluated? (As I see it, one potential course of action is that rather than make people write out many pages of documentation from scratch, they can fill in the blanks based on what data we get out of events. Obviously that doesn't cover everything but it would provide a lot of guidance.) harej (talk) 00:47, 28 August 2013 (UTC)
How would you measure an exhibition?
- Are there any good examples of measuring the effectiveness of a travelling exhibition? I'm asking on behalf of Wikimedia Eesti (WMEE). We're wondering about our yearly WLM exhibition, which has moved around the country for two years now: it's a bit expensive to print out all the best pictures, so we have to consider whether we should continue this year or not. Still, such a decision should be based on some firm data, even if it is hard to figure out how to get it.
- One way might be to run a survey (although handing surveys out by hand means we would need a volunteer to do it for a whole day, and leaving them on a counter means no one takes them). Are there any useful examples of surveys that Wikimedia organizations have used to study both the effects of a current project and their image in general? --Oop (talk) 07:18, 27 August 2013 (UTC)
- I'm not an expert in PE&D, but I think you can measure effectiveness by the number of people who viewed the exhibition, the number of newspapers that reported on the event, the volunteer-hours spent, and the increase in the number of WLM uploads from your area. --Netha Hussain (talk) 11:20, 27 August 2013 (UTC)
- 1. The number of people should be counted. Our exhibition has been in a tourism information centre in a city hall, in the foyer of a mall, in a university building... It is hard to count the viewers there. If it is in a museum, you can count the tickets, but how many took interest in your exhibition and how many were only interested in the permanent exhibition? Also, if you don't sell tickets, you need someone counting; for any useful results they have to sit there for a whole day, and probably more than one day (different weekdays usually have different numbers of visitors). I'm not sure we have enough volunteers to give them jobs like that.
2. The number of newspaper articles, or media reports in general, is relevant. But how would you use measures like that to set goals and estimate results? How many articles should you get to be deemed successful, and at which quantities will you say you've failed? You can't succeed if you can't fail; that's the first rule of measuring activities.
3. Volunteer-hours are not exactly a measure of success. It's important to estimate the input (although it is very hard to measure; most volunteers aren't used to keeping a timetable, especially if a job consists of a million pieces - making a phone call, writing a letter, getting a package from the post office, writing an advertisement...), but that input is what the outcomes should balance.
4. An increase in the number of WLM uploads would be nice, but it depends on many factors, among which the exhibition is only one. You can have crowds of visitors and excellent media coverage while still failing to increase the volume of uploads. It would be hard to find out which uploads were influenced by the exhibition (we could bombard the WLM participants with a survey, but if participation is voluntary, you can't be absolutely sure it does not skew the data). And what about people who see the exhibition and upload pictures, but not for WLM? It would be even harder to find those. --Oop (talk) 11:42, 27 August 2013 (UTC)
- Hi everyone. Good question. Do keep in mind that our team's Program Specialist, Jaime, is on vacation until Monday, and she's surely the best (hence "specialist") to provide feedback about this. I'll provide a bit of feedback myself, but when Jaime gets back, she'll stop by here, I promise. Pardon my epic long response.
- One of the first steps of evaluation is to create a logic model for your program - in this case, a Wiki Loves Monuments exhibition. We don't have a logic model developed yet for exhibitions, but I encourage you to start one! You can find details on logic models here. I'm actually doing one right now, including writing a theory of change and evaluating and comparing two edit-a-thons I did last year. You can follow my progress, and perhaps find some inspiration, here (it will be posted someplace more public when I'm done). A logic model will allow you to look at what your theory of change is (e.g., "by having a nationwide exhibition of WLM winners' images, the public will be able to learn more about the WLM program, gain an understanding of the importance of historic preservation, and become aware of Wikimedia Estonia's mission. We hope that through this, exhibition guests will become members of the chapter and/or participate in WLM in 2013."). (continued in the next bullet point)
- Having just a general theory of change about what your program aims to do helps you flesh out what you really intend to do regarding inputs (what you put into it - staff time, volunteer time, printing the images, etc.), your outputs (who attends the events, how you promoted them, what came out of them - new members, people having fun at the event), and your outcomes (short-, medium-, and long-term things that come out of it - a survey can show you some of these, e.g., that exhibition attendees learned more about what Wiki Loves Monuments is (short term), while a long-term outcome would be that more people participate in WLM the following year). All of these things can help you evaluate whether your intended outcomes really happened in the end, and whether it's worth having the exhibition - because it's costly and time consuming. For example, after creating your logic model, and evaluating the exhibitions with online surveys and by monitoring whether your chapter has more members after the events, or whether more volunteers participate in WLM the following year, you can see if it's worth your time. Perhaps people are leaving with a better understanding of WLM, but is it worth the time and money to send it around the country if they aren't actually donating to your chapter or participating in WLM the following year? (A structured sketch of such a logic model follows this reply.)
- I do agree that doing a survey is a nice idea. While filling out a paper survey is one option, I would try to gather information from attendees myself - i.e., when they arrive or before they leave, get their name and email to add them to your chapter mailing list or something (they can opt out, of course), and then you can email them a survey. As a former curator of exhibitions, and an attendee of many, I'd rather fill something out afterwards than while I'm at the event. Netha was right on, also - gathering press coverage, tracking how many attendees are at each event, how much volunteer/staff time goes into planning the events, tracking each cost and how you promoted it, and, like I said, asking yourself "what's my intended outcome over the short/medium/long term, and what's my theory of change about this?" can allow you to see if you're truly achieving those outcomes. It might mean you have to survey WLM Estonia participants in October and ask "how did you find out about WLM?" - and if 500 people respond and only 2 reply "through an exhibition of WLM photographs," then that might be the epic deciding factor that it's not worth your time to tour the exhibition. SarahStierch (talk) 21:49, 27 August 2013 (UTC)
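As a minimal illustration of the inputs/outputs/outcomes structure described above, a logic model can be written down as structured data; every entry below is a hypothetical example for a WLM exhibition, not an official template:

```python
# Minimal sketch of a logic model as structured data
# (inputs -> outputs -> outcomes). All entries are hypothetical.
logic_model = {
    "theory_of_change": "A touring WLM exhibition raises awareness of WLM "
                        "and the chapter, leading to new members and uploads.",
    "inputs":   ["staff time", "volunteer time", "printing costs"],
    "outputs":  ["exhibition visits", "press mentions", "mailing-list signups"],
    "outcomes": {
        "short":  "attendees know what Wiki Loves Monuments is",
        "medium": "attendees join the chapter or mailing list",
        "long":   "more people participate in WLM the following year",
    },
}

# Evaluation then checks each intended outcome against collected data,
# e.g. survey answers, membership counts, and next year's upload numbers.
for term, outcome in logic_model["outcomes"].items():
    print(f"{term}-term outcome to verify: {outcome}")
```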
- Surveys are nice, but they have to be conducted carefully to give meaningful results. Also, as mentioned above, they need certain resources, like man-hours. Most of the people who are willing to organize anything here are full-time students who also have jobs and families, so they can't spend a whole day questioning people; also, a travelling exhibition moves around to very different places - a museum in a small town is bound to give quite different results than a shopping mall in our capital's downtown. I can say that while it was up in the city hall, our exhibition had over 800 visitors, but this says nothing about the numbers in other places. Also, if we talk about raising general awareness of WLM, our chapter, and Commons, one might say that having a hundred visitors in a county where no Wikipedia events have been held before would be more important than having five hundred in a major city where half of our events are run. On the other hand, if we have a good relationship with a certain local government, it might be worthwhile to get the general population's support. So, there are pros and cons.
- Perhaps we could use the number of different places as one of the parameters, especially the new ones? I mean, while we can get more people in the cities, our exhibition might leave a larger mark in a small place. Then again, who's going to run around there conducting the survey if we don't have any volunteers there? But if we don't run a survey, how do we know if we left a mark at all? National newspapers won't cover an exhibition set up in a small town. Should we ask people to blog about it? We might. Yet if they won't, would our exhibition really be a failure, just because of that? Again, what should we measure it against? What should be the criteria of success and failure?
- I would very much like to run a survey amongst the participants after WLM, but as far as I know, there already is one, and it probably wouldn't be a good idea to duplicate it; also, I'm not sure if the data protection rules that WLM is run under would allow it. I asked around a bit and was told that it might not be easy to get our local data out of that large survey. So, that option is also off the table. --Oop (talk) 16:15, 6 September 2013 (UTC)
- This brings to mind another question: has anyone managed to get volunteers to effectively measure their hours? First, how do you get them used to it, and second, are there any tools that have proven sufficiently flexible and lightweight for volunteers? Most such tools are designed for businesses, which tend to have a somewhat different flow of work (steady hours, less fragmented time, people chained to computers, etc.). --Oop (talk) 11:42, 27 August 2013 (UTC)
- That's a super good question, and one that is sort of reflected in the one below. I'm going to ping Jessie Wild from Grantmaking and see if she has any feedback about this. I do know of some possibly non-open-source options. At a few local non-profits here where I live, people use this, which is accessible on mobile, to track volunteer hours. I also participated in an e-volunteer activity once on wiki where I had to track my editing time for a museum; I'd just add my time to a wiki page they created each time I finished editing articles related to the museum. This article is a bit more jargon-filled, talking about non-profit reporting, and is more US-focused, but it might have some good guidelines on why it's important. They also provide examples of volunteer tracking logs. In the end, just guessing isn't so bad. I can promise that I probably put 30 hours into planning one of my major edit-a-thons. When I did Wiki Loves Monuments US (California) last year, I put 80 hours of time in - no joke - and that includes on-wiki work, Google Hangouts, in-person meetings, networking with local organizations and potential judges, and wading through the first pile of submissions so the judges didn't have to look through 11,000 photos. So... even just guessing is better than nothing, I guess? SarahStierch (talk) 21:53, 27 August 2013 (UTC)
- I don't have "best" tools in mind, and I do think educated guessing isn't a bad way to do it. It could be something as frequent as sending a weekly email to all the volunteers asking "how many hours did you work on WLM this week?" Or, even more simply, at the end of the event/program, sending a follow-up survey to all volunteers that includes a similar question (among others, like satisfaction levels, things they would change, etc.): "Approximately how many hours did you spend preparing for and contributing to Wiki Loves Monuments?" This is a question we started asking all applicants after they finish applying to one of our (WMF) grants programs - obviously the answers aren't 100% accurate, but they give us a range of what type of commitment we are asking from people. Jwild (talk) 22:31, 27 August 2013 (UTC)
- Thanks. I'm not sure I'm educated enough for a guess but luckily I know some statisticians. Maybe we can set some kind of framework to ease the hard job of guessing. --Oop (talk) 16:15, 6 September 2013 (UTC)
- Some chapters have actually attempted to have all volunteers and staff log their time to the minute. In some volunteer organizations this can, and does, work; however, these are usually large organizations that have to maintain bureaucracy and infrastructure for this function, and most often such time logging is associated with time-served rewards for persistent volunteer scheduling. Unless this is something that actually works in your setting, I would not recommend it, as it is a high-investment practice for relatively little added value over estimated time. In asking people to estimate their time you do accept some trade-offs in terms of accuracy, but most people can estimate their time fairly well so long as you: let them know you will be asking them, ask them within a reasonable time frame of their efforts, and ask them to estimate in an appropriate unit of time (i.e., hours per day/week/month). There are also several ways to estimate: you could ask them to estimate their average rate, you could ask them to estimate their number of hours at regular intervals, or you could sample their efforts if those efforts occur with enough regularity. JAnstee (WMF) (talk) 18:08, 16 September 2013 (UTC)
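A minimal sketch of the sampling approach described above, assuming a volunteer reports hours for a few randomly chosen weeks (all numbers are hypothetical):

```python
# Minimal sketch: estimate a volunteer's total hours from sampled weeks.
# The program length and reported hours below are hypothetical.

def estimate_total_hours(sampled_weekly_hours, weeks_in_program):
    """Scale the mean of the sampled weekly estimates to the whole program."""
    mean_per_week = sum(sampled_weekly_hours) / len(sampled_weekly_hours)
    return mean_per_week * weeks_in_program

# Example: hours reported for 4 randomly chosen weeks of a 12-week program.
reported = [3.0, 5.5, 2.0, 4.5]
print(estimate_total_hours(reported, 12))  # -> 45.0 estimated hours
```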