Talk:Wikimedia Fellowships/Project Ideas/A Digital Wiki Coach To Provide Focused Advice
A.G. West Feedback
Hi Oren, and thanks for contacting me about this proposal.
Foremost, I'll say that the proposal is a sound and interesting one. Stern and punitive ex post facto warnings do little to help user retention and only require additional work from patrollers. Your suggestion for a priori warnings and softer, suggestion-based behavior-shaping seems beneficial and would be possible at minimal marginal cost.
Your suggestions remind me of an extended version of English WP's Edit Filter. Don't you think? Of course, your proposal seems a bit more elegant and human, but the technical infrastructure seems quite similar. To the best of my knowledge, though, the edit filter currently permits only simple, statically written triggers (and would need to be extended to handle, say, machine-learning-built models). Moreover, some of the metadata fields I have used in my anti-vandalism/spam research, which you would certainly need, don't seem to be available natively in that extension. Minor issue.
I am nearly certain this would need to run in-software rather than as a bot (else, you'd be contacting editors after they have made their edits, with less capability to shape behavior). Getting the community to accept something like this is of course challenging, but perhaps you could piggyback on the Edit Filter and existing BRFAs. Moreover, your position internal to the development community could certainly help expedite the matter.
I think a good question to ask is "why aren't we doing this already?" Consider (on EN:WP) the sheer number of vandalism edits that CBNG reverts each day. Clearly, the CBNG logic exists to know whether an edit is vandalism *before* it is written to the database. But instead of behavior-shaping, we let the person commit the edit, then come along and revert it 1 second later and place a talk-page message they might never see; a lost opportunity. The same can be said about many of the bots doing post-processing of edits in near real-time: I have an anti-spam scorer (hey, you might be adding spam!) and a dead-link checker (hey, you might have screwed up your links!); SineBot works in this fashion (hey, you need to sign on talk pages!). Bringing these tools internal to the interface is smart at multiple levels; it allows for a priori behavior-shaping, allows for the tools to be hosted by the WMF (just look at CBNG's occasional downtime), and probably lessens bandwidth consumption (there aren't 100 tools hitting the API to get the details of every edit).
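To illustrate the distinction, here is a minimal Python sketch; the names and the toy scorer are my own inventions for this note, not CBNG's actual code or any real API. The same classifier becomes far more useful when it runs before the save instead of after it.

 # Hypothetical sketch: one vandalism classifier used at two points in
 # the edit lifecycle. score_edit() stands in for a trained model; it
 # is not any real tool's interface.

 def score_edit(old_text: str, new_text: str) -> float:
     """Return a vandalism probability in [0, 1] (placeholder model)."""
     return 0.9 if "badword" in new_text else 0.1

 # Post-hoc bot workflow (the status quo): the edit is already saved,
 # so the best we can do is revert it and leave a talk-page warning
 # the editor may never read.
 def bot_patrol(old_text, new_text, revert, warn_user):
     if score_edit(old_text, new_text) > 0.5:
         revert()  # the edit was already visible; the damage is done
         warn_user("Your edit was identified as vandalism.")

 # In-software workflow (as proposed): run the same model in a
 # pre-save check, so the contributor gets advice *before* anything
 # is committed to the database.
 def pre_save_check(old_text, new_text, show_notice):
     if score_edit(old_text, new_text) > 0.5:
         show_notice("This edit looks problematic; please review it "
                     "before saving.")
         return False  # block (or soften) the save, shaping behavior
     return True  # allow the save to proceed

The expensive part -- the classifier -- already exists in tools like CBNG; only the second workflow uses it before the damage is done.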
In many ways, I see your proposal as Edit Filter 2.0 -- one that focuses less on egregious instances of poor behavior (as the EF currently does), and one whose scope has expanded to fixing and gently correcting even minor editing flaws. I am curious to what extent this proposal is completely novel, and to what extent it would be the elegant integration of existing tools. Of course, no one has developed classifiers for all dimensions of tagging/misbehavior/warnings/etc. Moreover, most of the work that has been done operates only on EN:WP. Expanding this project's scope in a scalable manner to different languages and projects is a large challenge. However, given my experience on EN:WP, I could see how an operational system like the one you describe would be enormously beneficial.
Thanks, West.andrew.g (talk) 17:27, 6 March 2012 (UTC)
- While I'll watchlist this article, I don't spend a lot of time hanging around Meta. If discussion gets interesting, ping me at the same username on en:wp. Thanks, West.andrew.g (talk) 17:29, 6 March 2012 (UTC)
Prior similar implementation
Hi. I already implemented a very similar idea on Ar.Wp around a year ago using ar:User:CipherBot, and I will be glad to share information about this experience and probably build on it. --Ciphers (talk) 05:14, 21 March 2012 (UTC)
Hi - this is not a small development project, so any prior work or insights would be welcome. I also think that if this were to get a green light I would set up a task force and would need to choose where to deploy it first.
My first two questions are:
- how complex is the policy on Ar.Wp compared to En.Wp?
- did you develop a bot or an extension?
OrenBochman (talk) 15:25, 24 March 2012 (UTC)
Feedback
Thanks for contacting me about your proposal, Oren. Here are some quick thoughts on a first read-through.
First, this is a great statement of the problem: "in a wiki an editor has to choose between two evils: accept new content of poor quality or reject the new editors who are the long-term future of the project"
Some evidence about the importance/scope of the problem is the huge percentage of potentially good-faith deletions: 22% of all deletions (and 37% of Speedies) are A7, "No indication of importance". That's based on data from 1 June 2007 to 1 July 2011, and this paper: Geiger, R. S., & Ford, H. (2011). Participation in Wikipedia's article deletion processes. In WikiSym 2011. (Brief summary in this research newsletter.)
Your overall idea -- reducing cooperation costs and increasing cooperation rewards (while penalizing non-cooperative behavior) -- seems reasonable. But it's not really clear what "non-cooperative behavior" means in this context. In particular, it's important to balance the needs of newcomers with the needs of established Wikipedians. So, for instance, what motivates patrolling, and how do we ensure that, e.g., "3-5 inline tags must be introduced into the article by the reviewer" isn't too much work? And what makes you think that quizzes are going to help: "Notice requesting the editor demonstrate his domain expertise by taking a quiz. (could be fun within the scope of user engagement)"?
So I think it's a direction worth working in, but one that needs some refinement for the social context. I'd definitely be interested in talking more about that!
Oh, and one minor thing: "Media in the MediaWiki software's ecosystem is considered using McLuhan's tetradic method" could use a link to, say, en:Tetrad_of_media_effects. (I had to look it up.)
Jodi.a.schneider (talk) 22:53, 23 April 2012 (UTC)
- Thanks for the input; here are some answers. I'll be updating the proposal once I get through the papers. I would also be glad to chat on these subjects.
- I had thought to flesh out the cooperative protocols later on, but I will provide more information on this now that there is a more concrete basis for doing so.
- By non-cooperative behaviour I am referring to a group of selfish actions by more experienced users, ranging from xenophobia, hazing, and biting newcomers to the mere slap-in-the-face speedy deletion of good-faith work. Selfish in the sense that, with greater effort, a more socially beneficial alternative could have been chosen.
- For the last two weeks I have been developing a rigorous game-theoretic description of the editing game (good-faith/bad-faith editors against patrollers).
- Once the version with information asymmetry is ready, the equilibria will indicate the balance between patrollers and good-faith editors.
- Based on your research and the above link, I can now envision a more detailed extensive-form game model. If accurate payoffs could be constructed, then by solving/simulating the game dynamics an agent like Simone could find the path of least resistance for staging the behavioral interventions required to reverse the editor-loss problem. Such a game model could become the master strategy guiding Simone in the long term. However, I believe you have already come up with a good enough plan for the initial attack, based on the A7 intervention.
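- To make the shape of such a model concrete, here is a toy normal-form stage game for one good-faith editor facing one patroller; the payoffs are purely illustrative assumptions of mine, not estimates from data.

 % Toy stage game; payoffs are (editor, patroller) and are assumptions only.
 \[
 \begin{array}{r|cc}
  & \text{patroller mentors} & \text{patroller deletes} \\ \hline
 \text{editor edits carefully} & (3,\,2)  & (-1,\,1) \\
 \text{editor edits sloppily}  & (1,\,-1) & (-2,\,0) \\
 \end{array}
 \]

- With payoffs like these, solving the game (and, in the information-asymmetry version, looking for separating equilibria that distinguish good-faith from bad-faith editors) would show which interventions shift play toward the mentor/careful-edit outcome.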
- Wikipedia makes it difficult for readers to develop reputations as domain experts. This is a dual problem: a software one (hiding who contributed most to which subject headings) and a social one (no policy on this subject has been passed, AFAIK). But by making a quiz page on different projects/articles it would be possible to let domain experts self-certify and establish a hierarchy. If a reader can quickly earn a quiz-based badge, that should be a good indicator that the user is a good-faith domain expert, and it would shift the debate from keep/delete to accept/improve.
- What are quizzes?
- There is a Quiz extension in use in the Wikiversity projects. One would change it to have a persistent option, so that the pages of a given project could aggregate quiz-type questions. These would range from article-comprehension questions to tests of deeper knowledge. The graded quizzes would be aggregated from the articles and used either as a captcha-type challenge during edits or as a qualification-type challenge during a discussion (a rough sketch of this aggregation idea follows the list below).
- The benefits of having quiz questions would include:
- rapid establishment of authority during editing and in quality debates;
- a "test your understanding of the article" feature;
- a "weakest link" social-game feature.
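- As promised above, here is a rough Python sketch of the aggregation idea; the class and field names are invented for illustration, and the actual Quiz extension stores its questions differently.

 # Hypothetical sketch of aggregating graded quiz questions from a
 # project's articles and drawing one as a challenge. Invented names;
 # not the Quiz extension's real data model.
 import random
 from collections import defaultdict

 class QuizPool:
     """Aggregates graded quiz questions, keyed by article title."""

     def __init__(self):
         self._questions = defaultdict(list)

     def add_question(self, article, question, answer, difficulty):
         self._questions[article].append(
             {"q": question, "a": answer, "level": difficulty})

     def challenge(self, article, min_level=1):
         """Draw a random question of at least min_level, to use as a
         captcha-type check during an edit or a qualification-type
         challenge during a discussion. Returns None if nothing fits."""
         pool = [q for q in self._questions[article]
                 if q["level"] >= min_level]
         return random.choice(pool) if pool else None

 # Example: a comprehension question used as a light edit-time check.
 pool = QuizPool()
 pool.add_question("Hypothetical article",
                   "What is the article's main topic?",
                   "its subject", difficulty=1)
 print(pool.challenge("Hypothetical article"))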
- Actually, my work experience shows that one can get great mileage from just a few questions. OrenBochman (talk) 12:39, 24 April 2012 (UTC)