Talk:Learning and Evaluation/Archives/2014
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Font size
Is the font size on this page really that small, or is it just my own issue? The glossary is hardly readable. -- Ата (talk) 19:50, 31 January 2014 (UTC)
- For me it's small but quite readable - though, as I think you know, I'm used to small fonts. --Base (talk) 11:59, 1 February 2014 (UTC)
- It seems that at 100%, on a small laptop screen, the font size is rather small (around 10 pt). However, you can usually zoom in on the browser window and the text will adjust to whatever font size you are most comfortable with. Are you viewing from a smaller device? JAnstee (WMF) (talk) 20:01, 2 February 2014 (UTC)
Portal templates
Hi! I purged the front page and everything is broken - I have no idea where the break occurred aside from the TNT template. Can someone look into it? User:Verdy p? Thanks! heather walls (talk) 22:07, 11 February 2014 (UTC)
- OK, I see that. The Module interface in Lua was recently changed in MediaWiki in the way parameters are passed to modules. This has already caused compatibility problems with other templates that are hard to track, notably for numbered parameters (or parameters with implicit positions). For some strange reason, namespace names are also resolved differently, and the Lua interface no longer honors its documentation and can throw errors where the docs say the opposite.
- Something was changed, but there are workarounds; I can see that immediately for this page.
- verdy_p (talk) 22:37, 11 February 2014 (UTC)
- OK, this is fixed. It was a consequence of what I described earlier: some numbered/positional parameters are now hidden in the Lua module interface and no longer exposed to the expansion performed by the module. In some cases this causes infinite recursion, because we can no longer detect that TNT has already been called (this mostly affects the autotranslatable templates; you have to track precisely where to call TNT and where not).
- I have split the news feed into several parts (with the workaround, but not many changes). Unfortunately, I have also had to battle in the past with a bot that does not update itself to allow the code to be translated more efficiently into other languages in some places; it should place TNT calls so that we do not depend on autotranslatable templates. To stop that battle, some pages updated by the bot are now autotranslatable, meaning that they should not be re-included via TNT. There is detection code for this that worked until recently but no longer works in Lua, due to the changes in the parameter frames for numbered positional parameters; this affects precisely the way TNT is invoked, with the template to expand passed in a positional parameter. Recently I allowed this parameter to be passed by name instead, in order to avoid shifting the other numbered parameters, so that the frame could be reused during template expansion.
- This is very tricky, and I fear that the next announced change in Lua may break more things; there are ongoing discussions about this on EN.WP, where Lua code no longer works for various modules (for example Math). Some APIs will soon be changed, but normally this should not affect the TNT Lua module, which does not use those APIs.
- But ideally we should not need any Lua module to do what TNT currently does; it should be part of the core internationalization extensions of MediaWiki ("Special:MyLanguage/pagename", or "Special:Language/lang-code:pagename" to specify a language code other than the UI language), and it would be faster (it would also implement language fallbacks, just as MediaWiki does for the "int:" function and the "MediaWiki:" namespace when translating its own UI). verdy_p (talk) 23:27, 11 February 2014 (UTC)
- Thank you for fixing it! Amazing! heather walls (talk) 01:18, 12 February 2014 (UTC)
Example forms - URGENT
Can these be put in a USABLE FORM - i.e. not read-only Google Docs that can't be copied? Just put the texts on a normal wiki page, for heaven's sake. I have 36 people coming to an event in less than 24 hours, so speedy action would be appreciated. Thanks Johnbod (talk) 16:19, 3 March 2014 (UTC)
- Sorry I missed this comment. The program tracking and reporting form is linked to an OpenOffice download at the start and, in addition, we provided a PDF and a Google Form version you can make a copy of and edit. We have done this to provide multiple modes for working with the example forms. However, since it is a form with detailed table formatting, it is not easily suited to wiki page formatting, but we can work toward that goal. Similarly, the Wikimetrics opt-in example Google Form is one you can also copy and edit; however, that example form is based on the information on the already existing wiki page with language and instructions for gaining consent (in the Wikimetrics tools). JAnstee (WMF) (talk) 22:58, 7 March 2014 (UTC)
Question about Community Coordinator Job Description Bullet point
Message posted to announcement list - Reposting here and responding JAnstee (WMF) (talk) 16:42, 8 March 2014 (UTC)
MESSAGE QUOTE (From Federico Leva):
- Weird bullet: «Ability to motivate people and to ensure that participants stick to deadlines and results as agreed upon». Who are the "participants" who must "stick to deadlines and results"? Surely not volunteers, and probably also not employees of independent organisations? END QUOTE
- As for the ability to motivate: yes, this would mean helping to motivate anyone who has agreed to certain evaluation strategies and deliverables, the key point being "as agreed upon." The role is intended to provide motivation, coordination, and support to evaluation participants (i.e., our evaluation team, program leaders and evaluators, and individual volunteers who seek to evaluate more) so that we keep progressing toward "agreed upon" targets. I think the idea that anyone "must" do so is where the false assumption lies. Currently, and for at least the next year, the plan is for reporting to the evaluation team to remain voluntary, while we work to build strategy and capacity for self-evaluation and higher-level analysis of programs.
- It is an important task for the coordinator to maintain an awareness of how each team member, and each participating community member, is continually routed to the support they need and kept motivated and able to make progress toward consistent goals (as agreed upon). This is not a one-way responsibility but a 360-degree coordination of efforts (as we continually try to convey, "we are in this together"). Sometimes this means directing someone to upcoming events and opportunities, pointing people to portal resources and other supports, or walking someone through a Wikimetrics cohort analysis, to name just a few of the ways the coordinator will likely need to engage with program evaluation participants. While much of the work is ongoing, the evaluation team must "stick" to some deadlines around things like event planning and report creation; as these deadlines for input approach, the coordinator needs to provide coordination and communication to support participants' voluntary involvement. It is part of the community coordinator's responsibilities to make sure everyone stays on the same page, and on track, regarding those timelines as much as possible, and that the team is aware of any shifts or blocks to progress.
- I hope this helps to clarify the meaning behind this abbreviated responsibility bullet point. We are allowed only a limited amount of information in the actual Jobvite posting. While this particular point has been carried over from the original job description created last year, I can see where one might mistrust it based on an interpretation of its simple wording. I ask for understanding that there are many layers to this position's responsibilities that require much more detail than can be articulated in these brief bullet points. Thanks. JAnstee (WMF) (talk) 16:42, 8 March 2014 (UTC)
- There was no mistrust on my end, I just noticed and reported two typo-like uncertainties. It's still not clear to me who the "participants" are and what they agreed to, but I suppose it's fine as long as it's clear to them. :) --Nemo 16:55, 8 March 2014 (UTC)
Links to social media, blogs etc. on programs
I am tempted to boldly add these to the page. Would anyone object to my doing so? Perhaps with a template, so that the same link is visible on many pages. -- とある白い猫 chi? 22:07, 24 March 2014 (UTC)
- Hello, chi?. Please feel free to suggest those links and/or try out your idea for improving the portal space on the library page. If links added are not directly relevant, they may be revised or possibly removed, as with any edit. Thanks for your input into making the evaluation portal more effective. JAnstee (WMF) (talk) 19:13, 31 March 2014 (UTC)
Publishing raw or almost-raw data
Thank you again for organizing these studies, and setting the first bar for future case studies and evaluation. I have learned many things about what sorts of feedback are easy and hard to get, who has been participating so far, how project and language barriers affect these sorts of studies, and about many of the specific cases explored.
I would appreciate seeing the underlying data, and the wording of the surveys. Is it possible to publish this for each case study?
Raw data would be preferred, but if some anonymization is necessary please make it minimal. This would allow others to repeat or extend the research done here. The level of anonymization and aggregation done to produce these reports is ok for the specific summary that was published, but other views are also of interest.
Regards, –SJ talk 16:52, 14 May 2014 (UTC)
- Hi SJ! Yes, the raw data (except for identifying information) are contained in the last three data tables at the bottom of each program report. The reporting items, or survey items, are housed in a Google Doc and have since been reduced in some ways, and expanded in others, based on our piloting experience; they are provided in the guidance and sample forms published in our Tracking and Reporting Toolkit. The toolkit currently covers sample ways of collecting usernames and consent from in-person meet-ups, as well as the useful reporting points for tracking program inputs and outputs identified so far. We have not yet set out guidance for the parameters of the follow-up metrics, as we are awaiting the release of the upcoming Editor Engagement Vital Signs dashboards so that we can move forward with maximum alignment on standardized metrics. Please let me know if I can be of further assistance. JAnstee (WMF) (talk) 18:33, 14 May 2014 (UTC)
Glossary: accessibility
Team: may I suggest a tweak to the formatting for readability of the glossary? The font itself is quite nice, but the font size seems to be smaller than I'm used to on WMF sites. This might be OK if it were not combined with such a dark background colour. Could you experiment with a lighter background? Maybe even the same general hue, but less "opaque", if that's the technical term for it. Tony (talk) 03:55, 6 June 2014 (UTC)
- Hi Tony! Yes, we have actually just begun a redesign to deal with exactly these issues. The new page format will look similar to the Grants landing page. We will be generating a mock-up and gathering feedback as we move forward with the redesign, to make the portal space more visually accessible as well as more functional in terms of searchability, and easier to use in terms of clear page names and functions. If you have any other recommendations, please add them to our discussion on the redesign talk page. JAnstee (WMF) (talk) 23:21, 6 June 2014 (UTC)
- Gradually working through the glossary, particularly where I think the grammar might be more difficult for translators. Also, in "edit-a-thon" the hyphens grate on my eyes in that font (they're positioned high in the line, and they grate on me in any font, actually). I wonder what you think about a plain "editathon". And I've tampered with the definition of "Benchmarking" and reframed the example. I'm not qualified in evaluation theory, so please check and change as necessary. Tony (talk) 08:35, 11 June 2014 (UTC)
- discrepancy perspective: "The viewpoint that sees evaluation as a process for identifying and measuring differences and inconsistencies between (i) what that process is in reality (or what you have), and (ii) what you wanted or expected." I found this a bit hard to understand, even after trying to edit it. Tony (talk) 12:16, 13 June 2014 (UTC)
- Thanks for the heads-up and the help in prep for translatability, Tony! I will keep an eye on it, but so far so good from what I see =) JAnstee (WMF) (talk) 18:51, 13 June 2014 (UTC)
- Actually I was the one who did all the work of preparing for translatability, including testing the RTL layout (the Arabic page is created and shows the layout, but it is not translated at all), defining and fixing all anchors as lowercase English without spaces for the IDs, using a uniform presentation of definitions and examples, adding all the tvars, testing and fixing the Glossary template, fixing some CSS issues, and checking anchors pointing to it from some learning modules and other doc pages. Then I started translating it.
- Tony reworded a few definitions with minor fixes after me. verdy_p (talk) 21:36, 13 June 2014 (UTC)
- Note: the learning module about Evaluation Basics is also completely translatable now (I have fully translated it into French, to make sure it was ready for translation without complications for translators, and it has been tested in Arabic). I also had to fix the RTL layout of some templates there, such as the cycle diagram with arrows, which uses a table that works only in the LTR direction but needs alignment constraints set on each table cell. I have also added a few Glossary terms there.
- I have added one definition; there is still one missing (TODO) for "evaluator". verdy_p (talk) 21:45, 13 June 2014 (UTC)
- Another note: the character index at the top of the page and the A-Z sections are only displayed in English. On translated pages, the entries are not sorted. Sorting the terms would require putting each definition in a subtemplate and then creating a localized list of entries (also in a subpage), manually sorted. But such a localized list cannot be generated by the translate tool, and each sorted list would need maintenance. In addition, it would require manual creation of the character index for each language (or script: it is not necessarily Latin).
- For this reason I have just hidden the index at the top (except in English), and translations will display items in an apparently random order (but still matching the same order as English). I still don't see how to do this differently for now without some tool (such as a Lua module that would sort the list of subtemplates and expand them, a bot that would resynchronize these sorted lists, or a MediaWiki extension able to sort a list of elements on the page and regenerate an index with a sort key).
- But maybe it's possible to have the list sorted and enumerated automatically by using a category for each language as a helper, indexing all translated subtemplates in it; then the Lua module would be very minimal and would just read the sorted contents of that category. (I'm not sure whether this is technically possible without custom-typed metadata on pages and some MediaWiki extension to loop over a dynamic list of entries read from a category; and for now, categories cannot use collators tailored separately to languages changing for each category page, as there is still no metadata indicating the actual content language of each page.) verdy_p (talk) 22:10, 13 June 2014 (UTC)
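For illustration only, a minimal sketch of the kind of Lua helper module described above (the module name "GlossarySort", the subtemplate naming scheme, and the invocation are all hypothetical; since Scribunto cannot enumerate category members, the term list is passed in as parameters, and the sort is a rough case-insensitive comparison rather than a true per-language collation):

-- Hypothetical Scribunto module (e.g. "Module:GlossarySort"): sorts a list of
-- glossary terms and expands one definition subtemplate per term.
local p = {}

function p.list(frame)
    -- Language code used as the subtemplate suffix; defaults to the wiki's content language.
    local lang = frame.args.lang or mw.language.getContentLanguage():getCode()

    -- Collect the positional parameters (the term names).
    local terms = {}
    for _, value in ipairs(frame.args) do
        table.insert(terms, value)
    end

    -- Rough case-insensitive sort; a real solution would need a per-language collator.
    table.sort(terms, function(a, b)
        return mw.ustring.lower(a) < mw.ustring.lower(b)
    end)

    -- Expand one definition subtemplate per term, e.g. "Template:Glossary/def/<term>/<lang>".
    local out = {}
    for _, term in ipairs(terms) do
        table.insert(out, frame:expandTemplate{
            title = 'Glossary/def/' .. term .. '/' .. lang
        })
    end
    return table.concat(out, '\n')
end

return p

On a translated subpage it might be invoked as {{#invoke:GlossarySort|list|lang=fr|baseline|benchmarking|evaluation}}, with every name here standing in for whatever the glossary actually uses.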
- Thanks Verdy: you're doing great work (pity there aren't more of him). I still intend to come back for about three sessions to get through the alphabet. Tony (talk) 03:36, 14 June 2014 (UTC)
- Finished up to "H". Two points: in my view there's over-fragmentation of definitional categories in a few places. The sub-terms are, after all, highlighted within. "Evaluation" is broken into four entries, and might be more digestible and no less "findable" as one entry. Need advice on this. I haven't changed anything in this respect. Second thing: we might decide whether the opening sentence for each definition should plunge straight into the rheme, that is, what the thing is, rather than repeating the term at the outset. I slightly favour the more economical formula, but it could go either way. Tony (talk) 07:23, 15 June 2014 (UTC)
- About fragmentation: it should be OK to group expressions under their base term (they are separated because, in English, the complements are written before the main term, so the expressions do not sort together).
- It should remain possible to place an anchor at these expressions (and possibly to alter the template used to format them a bit, by adding a parameter that adds some indentation in the left margin, or the right margin for RTL languages). This grouping should remain valid in all languages. (Note: I have still not started translating the definitions themselves, because there's active work on them. However, I've translated the terms into French and tested the layout in Arabic, without translating anything.)
- Do we really need the sort order and the character index? (It's impossible to reproduce them in translations.) A logical grouping could be used instead, so that the page does not require navigating up and down to see dependent terms (only backward links should be needed, except in cases of cycles caused by lists of related terms, and cyclic references should be kept to a minimum). This would aid understanding (and would also ensure that definitions are not defined cyclically, like "A is a B that..." and "B is an A that...").
- verdy_p (talk) 10:26, 15 June 2014 (UTC)
New portal
What do you think about the new portal design so far? We still have some updates to make to archived pages, old links, etc., but the key resources are live and ready to be used! Let us know what you think! --EGalvez (WMF) (talk) 23:11, 7 August 2014 (UTC)
- This portal seems to be a source of amazingly useful information, but the home page does not make much sense to me. When I arrive I see several sections (Study, Plan, Measure, Share), none of which look very attractive, and to get to the actual content and see what is inside, I need to go through many screens consisting mostly of whitespace before I run into something that catches my attention (like the case studies). If I want to plan an event, going through the bank of survey questions will surely help me, but that information is in some other box. So it might be useful to reconsider whether it's worthwhile to force people to decide between the predefined boxes. --Petr Novák / che 10:54, 8 August 2014 (UTC)
- Thanks Petr Novák che! We tried to home in on four sections because they follow the process of learning about and doing an evaluation: you learn how evaluation works by reading/studying, then you start to plan it for your own programs, use tools to measure (including surveys), and then share your results. There are definitely some resources that belong in two sections, and we use the "related links" to show this, but I can find a way to make that clearer (and maybe find places to reduce white space...). We are also working on an "Intro to Evaluation", which will be a short page, prominently featured at the top of the portal, that will help new visitors navigate it. This is a new beginning for the portal (what it used to look like). We are hoping to see it evolve as the resources grow and are refined, and as we receive feedback from the community. Thanks so much! --EGalvez (WMF) (talk) 11:55, 10 August 2014 (UTC)
What's the difference between Google Spreadsheets and Qualtrics.com? (moved from a different page)
Hi there, I am wondering about the question in my topic title. Why? --Reke (talk) 10:51, 17 August 2014 (UTC)
- Hi Reke! Thanks for reaching out. I assume that by Google spreadsheets you mean Google Forms? The differences between this resource and Qualtrics are many. For starters, Google Forms only allows up to 10 questions. Qualtrics allows you to build more complex surveys, with more questions available per survey and more features for formatting each question. In Qualtrics you have more assistance for applying logic. You can also have collaborators help build a survey and translate it in this software. And the developer tools are richer: you can modify almost any style in a Qualtrics survey by inserting CSS code, among other things.
- Do you have some more specific questions or comments about this software? It could be easier to answer this with a more specific question (there are many more differences than the ones I mentioned! :-)). MCruz (WMF) (talk) 20:50, 18 August 2014 (UTC)
- Yes, I mean Google Forms. But I've created a form with more than 10 questions (even on one page) before. By the way, there's a big problem for me: Qualtrics seems to have no Chinese user interface. That would cause some trouble for both developers and users in Taiwan. --61.64.65.129 02:35, 19 August 2014 (UTC)
- Ooooops, 61.64.65.129 is me. I forgot to log in before editing.--Reke (talk) 02:38, 19 August 2014 (UTC)
- The reason for this question is to decide which platform we should use to conduct surveys at Wikimedia Taiwan. We (Reke and I) mostly use Chinese, and most of our members can barely understand English. However, the L&E team recommends that we use Qualtrics, whose support is entirely in English. It is easier for us to go with Google Forms (it not only allows us 10 questions, it actually gives us a lot of options). So we want to know whether there is anything featured only in Qualtrics that we would need for better evaluation. Maybe the Survey Library in Qualtrics is the unique feature? --Liang(WMTW) (talk) 06:49, 19 August 2014 (UTC)
- Hi Reke and Shangkuanlc - Thanks for following up. Google Forms would work fine if you have a simple survey. Qualtrics is able to deal with greater complexity in the survey design and survey questions: it uses better question methods, it allows complicated skip patterns (if someone answers "yes" to Question 1, they are taken to a certain set of questions; if they answer "no", they are taken to a different set), it has a specialized delivery system (for example, for sending reminders to specific people), and it has some reporting tools. Qualtrics does allow you to translate surveys into Chinese, but the support for actually using Qualtrics is in English. Google Forms does have some degree of question skipping (see this video) and does accept more than 10 questions (SurveyMonkey is the one that limits free accounts to 10 questions). Does this help? We're always happy to review any surveys before you send them out. Thanks! --EGalvez (WMF) (talk) 17:30, 19 August 2014 (UTC)
I took the survey for Wikimania participants
...some observations:
- The mail inviting me to take the survey had "3rd request for feedback" as its subject. I haven't seen any 1st or 2nd request.
- When answering which "organizations" I belong to, why are there options for the FDC, Affcom, and the Board? Those affect only 9, 10, and 9 people respectively.
- "Upon survey completion you will be sent a thank you email with a separate survey link through which you may submit your personal contact information for a chance to win a Wikimedia hoodie or other cool swag in a raffle-type drawing open to all survey participants.":
- The e-mail indeed had a "separate survey link". I found this to be most confusing, since no survey was behind that; the link merely led to the form where I could enter my address data in order to win stuff. It shouldn't be labelled "survey" then. I almost thought the mail didn't have the announced link at all.
- I have no idea what a cool swag in a raffle-type drawing is.
- The mail said "we invite you to enter a drawing for your chance to win a Wikimedia Globe hoodie", but the form then only wanted the address data, no drawing.
--MF-W 23:50, 20 August 2014 (UTC)
- Hello, MF-Warburg. Thank you for taking the survey and for this helpful feedback. That is a very good observation that had not been made when those categories were added in the question review process. The intention was obviously to find out whether those people are represented in the survey feedback, but clearly these low numbers are not suitable for such a distinction, and this was an oversight. Let me assure you that no data will be analyzed or reported at that level of aggregation, and we will revise the question in the future. I am also thankful for your feedback about the survey form used for the drawing submissions. It is indeed a type of survey, so that is how the default labelled it, but I have now corrected it to say "Enter the Drawing". I am sorry for the confusion; the form for address data is for the drawing, so we can send winners their prizes. As for the swag and raffle, a set of drawing participants will be randomly selected to win a globe hoodie or other cool stuff. The reason we did not specify more than "other cool swag" is that there was still discussion about what other prizes might be available; what was always certain, however, was that the top-tier drawing prize would be a globe hoodie. Thank you again for participating and sharing this feedback. JAnstee (WMF) (talk) 01:44, 22 August 2014 (UTC)
The history of Wikimedia programs: Hackathons
At the Learning and Evaluation team, we are looking into Wikimedia's most popular programs to understand how they started, what the original motivation behind them was, and what their status is now. Does anyone know how the first Wikimedia Hackathon took place? Where it happened, who the main coordinator was, and any other information will be most welcome! MCruz (WMF) (talk) 17:38, 4 September 2014 (UTC)
How do you measure an objective like «Making contributing more fun»? How do you make it SMART?
Is this among your goals? How do you measure if people are having fun? Share! MCruz (WMF) (talk) 21:35, 12 September 2014 (UTC)