
Learning and Evaluation/News/Pre-conference workshop day for Wikimedia Conference


New in-person pre-conference workshop day for Wikimedia Conference!


Evaluation and Program Design Pre-Conference


The Learning and Evaluation team is planning to present a half-day pre-conference workshop on Wednesday, May 13, 2015, in advance of Wikimedia Conference 2015. The workshop will include an introduction to program evaluation and design, plus an overview of tools and strategies for evaluation planning, program monitoring, and outcomes assessment.

The goals of the workshop are that participants will:

  • Gain a basic shared understanding of program evaluation
  • Gain increased fluency in a common language of evaluation
  • Learn about different sources for data
  • Increase skills in using evaluation and design tools and resources
  • Enjoy time networking with other program leaders

Important information

  • The Wednesday pre-conference workshop is open to 30 participants.
  • Support is available for the additional hotel stays on May 12 and 13 for approved participants traveling from outside Europe, and for May 13 for regional participants.
  • Support and slots for participating are still available on a first-come, first-served basis for those registered to attend Wikimedia Conference 2015 in Berlin.

To start planning your conference, see the conference page!



Pre-Conference Agenda for the Learning & Evaluation team:
May 13 (half day): Introductory topics: data, tracking resources, tools, and designing a SMART proposal.
May 14 (full day): Applied workshops: Reporting and Storytelling, Designing an Annual Plan, Wikimetrics, Survey Design, Program Toolkits.

May 15–17: Wikimedia Conference 2015 in Berlin.

Agenda

Wednesday
[Closed Pre-Conference sessions]

11:30 Introduction and overview
12:00 Eat and Greet (Quick Lunch)
12:30 Hopes and Fears
13:00 Logic Models
14:30 Data, Tracking and Reporting
15:15 Tools cafe
16:30 Moving Forward: review of the day and next steps.
17:00 Break
19:00 Dinner

Thursday
[Open Pre-Conference sessions]

09:00 Eat and Greet
09:30 Designing WMF Proposals and Annual Plans
10:45 Wikimetrics
12:00 Lunch
13:00 Survey design
14:30 Reporting and Storytelling Toolkit
15:15 Program Toolkits
16:30 Key Take-aways and Next Steps
17:15 Break before opening party

*Times and session order subject to change

Workshop outputs


Wednesday, May 13


Goals


At the beginning of the workshop, participants shared the goals each of them had for the day's session. This is the captured list:

  • How to structure projects along a spectrum and keep people interested.
  • Bring back hope and learning from this session.
  • Learn to problem solve.
  • Learn to fly higher.
  • Suggestions for how to keep working on current and future projects.
  • Learn how to have more impact and quality (be more daring)
  • How to divide labor for unified evaluation without relying on one person.
  • How to expand education to a whole country.
  • Learn frustrations and difficulties of evaluation.
  • Talk, share and learn together.
  • Learn how not to burn out
  • How to document for evaluation
  • New ways of thinking and adapting.
  • Learn tools!
  • How to improve our work → success!
  • More structured programs - Logic Models.
  • How to get things done.
  • New ideas.
  • Understand diversity of our work.
  • Improve evaluation + share.
  • Feel positive about evaluation + next steps
  • Meet everyone, learn successes and challenges, and get to know how to support with grants.
  • How to grow education initiatives.
  • Wikimetrics


Hopes and Fears


Hopes

  1. That people will learn a useful tool they can share with their communities.
  2. Improve the process of the chapter through the tools.
  3. Clear route to professionalization.
  4. Find the best practice to identify the problem and find the best solution.
  5. Evaluation provides us with information to improve our activities.
  6. Become wiser and more confident about evaluation and learning topics, so as to be better able to incite passion in others.
  7. Understand more and more how others carry out their projects and programs.
  8. Evaluation seen as a tool to help expand Wikimedia programs.
  9. Be a master in new tools!
  10. We are doing workshops and online publicity to create awareness so that we can engage more people in our movement.
  11. To learn something I don’t know in Wikimetrics.
  12. Inclusive growth.
  13. WMF Learning and Evaluation takes inputs and makes more tools for data collection and analysis.
  14. How to have structured activities and an easy way to follow up.
  15. Leaving with constructive ideas.
  16. Increase the quality and the impact of our activities.
  17. Inclusive.
  18. To learn wikimetrics
  19. The Grants & Evaluation team will bridge information gap.
  20. Small achievements will lead us to bigger goals.

Fears

  1. Wisdom and data vs. process (could also be read as wisdom vs. data/process, or wisdom vs. data and process).
  2. Not many people want to do volunteer work, so we have a lack of volunteers.
  3. Missing the diversity of Wikimedia Universe
  4. Not understanding everything
  5. Burning out
  6. Prioritizing quantity over quality. Bytes are not the only measure.
  7. Tools and metrics
  8. Reports and evaluation could take a lot of time and effort. Volunteers want to avoid them.
  9. Scaring off budding volunteers with reporting/paperwork.
  10. While implementing learnings from the workshop, I will make things even worse evaluation-wise in the chapter.
  11. Drowning in paper work.
  12. Not succeeding at everything will discourage us.
  13. New idea which will take much time and effort to bring to life but won’t be as useful to others.
  14. That people will be overwhelmed by the concepts and tools (+1)
  15. Evaluation seen as “the bad and the ugly”, pointing out what doesn’t work.
  16. Finding useful info among countless resources/reports.
  17. Tools are not successful
  18. That the Grants & Evaluation team may fail to support new and innovative grants.
  19. What happens when we do not achieve our goals?


Logic Models


This resource was presented by Christof Pins and Manuel Merz (Wikimedia Deutschland). The group worked through the notion of a staircase logic model, then split into small groups to create logic models for specific Wikimedia programs.

Learn more about logic models on the Logic Models page in the evaluation portal.

Ah-Ha's


After the session ended, we asked participants to share one thing that sparked a particular realization for them. This is the list of takeaways:

  • Logic Models were really logical.
  • I know what a logic model is.
  • Belly button exercise!
  • Data sources
  • I learned that WMDE rocks logic models.
  • GLAM and wiki tools depend absolutely on volunteer work :O
  • Education extension exists.
  • Bring case studies for next workshop.
  • Logic models.
  • There are some tools to generate a report.
  • Using evaluation tools (CatScan etc) to find problem articles on Wikipedia.
  • CatScan 3.0
  • We have someone to turn to if we don’t remember or understand something.
  • There is a logic model for every wiki idea!
  • So many useful tools already exist! (Without localization, though =( )
  • Education extension + GLAMorous tool.
  • Satisfaction as part of logic model.
  • Program vs implementation vs project
  • There are so many tools that already exist.


Next actions


We also asked participants what they foresaw as next steps based on their takeaways. This is what they told us:

  • Use GLAMorous
  • Translate tutorial video on Commons and transcribe script
  • Use metrics
  • Make an intro to the community about all tools.
  • I will use the tools presented and introduce them to other people.
  • Workshops / Training on building context-relevant metrics.
  • Share a logic model as a best practice for doing a project.
  • Use logic models and education program extension.
  • Education program extension a must. Shall be used in India.
  • Put order in the house (after Wikimania).
  • I will use CatScan and GLAMorous to learn more about Wiki Loves Monuments in Spain in 2014, and to improve WLM 2015 and 2016.
  • Use logic models (or at least learn how to use them).
  • Dig deeper in potential of Education extension for evaluation.
  • Ask the community about deploying new extensions/tools.
  • Use Wikimetrics more.
  • Get data from our education program (300 schools in BE).
  • Continue with discussions around Education Program extension.
  • Building logic models for our activities/projects.
  • Ask professors to agree to us using the Education Program extension.
  • Translate tools! Our community needs them.


Remaining questions

As a reminder that we keep working to find an evaluation framework that helps us all, we asked participants to share their remaining questions after the workshop:

  • How to find realistic metrics (Set realistic goals)?
  • How to build up a roadmap of learning these tools?
  • How to break up project management tasks among different roles?
  • How to learn when the external factors don’t work as we plan?
  • I will think about it.
  • How to answer cranky admins who oppose new tools/strategies?
  • How to share the learnings from the workshops with local volunteers in an inspiring way?
  • List of custom tools needed.
  • A category-wise user-contribution breakdown tool, without making the data public?
  • How to read data from GLAMorous / CatScan?

Further resources


Thursday, May 14


This day was dedicated to an evaluation clinic and presentations on specific topics around evaluation, such as survey design and SMART planning. We also presented the program toolkits and a draft of the storytelling toolkit.

Further resources
