Grants:IEG/Artificial Intelligence for Wikimedia projects

status: withdrawn

project:

Artificial Intelligence for Wikimedia projects


project contact:

User:とある白い猫 Special:EmailUser/とある白い猫

participants:

summary:

Promote development of Artificial Intelligence tools to aid editing on Wikimedia sites.

engagement target:

Wikipedia, Commons, other Wikimedia projects

strategic priority:

Improving Quality

total amount requested:

? USD


2014 round 1

Project idea

What is the problem you're trying to solve?

Since their creation, Wikipedia and the other Wikimedia projects have relied on volunteers to handle all tasks through crowdsourcing, including mundane ones. Wikimedians now face an overwhelming amount of content that keeps growing rapidly, while the volunteer base shows no comparable growth.

What is your solution?

Artificial intelligence (AI) is a branch of computer science in which machines process information to find patterns in data and use those patterns to predict how to handle future data. Its use has grown considerably, particularly in the past decade, with applications ranging from search engines to space exploration. With these improvements, we can delegate mundane tasks to machines to a certain degree.

A key problem with artificial intelligence research is that researchers are often not experienced Wikimedians, so they do not realize the potential of resources Wikimedians know and take for granted. For example, few people outside the circles of experienced Wikimedians know that images deleted on Wikimedia projects are not really deleted, only hidden from public view. One researcher I talked to called the deleted image archive of Commons a "gold mine". Indeed, in any machine learning task, classified content (which on Commons could well be read as "wanted" versus "unwanted" content) enables supervised learning. A system could use deleted content, deletion summaries, the text of deleted image description pages, and so on to determine whether similar unwanted content exists that may need to be deleted, or whether newer uploads resemble previously deleted content. This is just one of many examples of how artificial intelligence can assist editing.
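
As a rough illustration of this supervised-learning idea, here is a minimal sketch that treats past deletion decisions as training labels. The input files, their format, and the example text are hypothetical placeholders, not a real Commons data feed:

```python
# Minimal sketch: learn to flag uploads that resemble previously deleted
# content. Assumes two hypothetical local files, each holding one line of
# description-page text per deleted ("unwanted") or kept ("wanted") file.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

deleted = load_lines("deleted_descriptions.txt")  # label 1: unwanted
kept = load_lines("kept_descriptions.txt")        # label 0: wanted

# Bag-of-words features over the description-page text.
vectorizer = TfidfVectorizer(max_features=50000)
X = vectorizer.fit_transform(deleted + kept)
y = [1] * len(deleted) + [0] * len(kept)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Score a new upload's description text; a high probability means it
# resembles content that was previously deleted and may deserve review.
new_upload = ["low-resolution promotional logo, no license information"]
prob = model.predict_proba(vectorizer.transform(new_upload))[0, 1]
print(f"resembles deleted content with probability {prob:.2f}")
```

The point is only that the community's past decisions already constitute labeled training data; a production tool would need far richer features and would flag content for human review rather than act on its own.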

To expand on the idea: tools such as Copyscape and TinEye are not customized to serve Wikimedia projects specifically. As general-purpose services their accuracy is limited, which in turn limits how well they satisfy the needs of Wikimedia projects. Innovative use of AI methods such as information retrieval, text mining, and image retrieval could lead to more capable, purpose-built tools.
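
On the image side, here is a minimal sketch of the kind of building block a Wikimedia-customized duplicate check could start from: perceptual hashing, one common technique behind reverse image search. The file names are hypothetical, and the code assumes the Pillow and ImageHash packages:

```python
# Minimal sketch: flag a new upload that is a near-duplicate of a known
# (for example, previously deleted) image. File names are hypothetical.
from PIL import Image
import imagehash

def phash(path):
    # Perceptual hash: visually similar images yield hashes with a small
    # Hamming distance, even after resizing or re-encoding.
    return imagehash.phash(Image.open(path))

known = {"File:Example.jpg": phash("known_deleted.jpg")}
candidate = phash("new_upload.jpg")

for title, h in known.items():
    distance = candidate - h  # Hamming distance between the two hashes
    if distance <= 8:  # tunable threshold for "near duplicate"
        print(f"new upload resembles {title} (distance {distance})")
```

A Wikimedia-specific tool could tune such thresholds and features against Commons data, which is exactly where a general-purpose service like TinEye is limited.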

Conferences and workshops on artificial intelligence are an ideal venue for creating tools optimized for Wikimedia use. I had the opportunity to evaluate CLEF (Cross-Language Evaluation Forum) 2011 thanks to a Wikimedia grant, and I observed that a considerable number of workshops, and even keynote speakers, were using Wikimedia content as their datasets.

Project goals

It is important to note that this is meant to be a long-term project. Conferences and workshops tend to take place annually, biennially, or even triennially. The project is broken into steps that mirror the software development life cycle (SDLC). The first five steps concern the development of AI tools in an experimental setting (perhaps with a scaled-down version of the real-world data) and would involve the Wikimedia community and researchers.

The last step would be the deployment and maintenance of the experimental AI tools in the real world (on Wikimedia sites) and would involve engineers. This can, and perhaps should, be considered a separate project, as it would involve a completely different team that would begin to work on the results at least a year after the first step (requirement analysis).

Requirement Analysis

Prioritize problems faced by Wikimedia sites.

  • Community feedback would be most helpful in establishing community needs. The community would determine and anticipate the outcome of the research rather than be surprised by it.
  • This may appear obvious, but determining the importance of the problems, and then which applications of AI would be relevant, is a non-trivial task.
  • Establish benchmarks, such as technical specifications and resource restrictions, that would later be used in design and evaluation.
  • Some potential ideas are presented in a Wikimania 2014 submission, which will be presented if accepted.

Evaluation of AI conferences/workshops

  • Not every AI conference/workshop has research relevant to Wikimedia websites.
    • For instance, a conference/workshop on game AI or robotics may not be relevant for Wikimedia websites.
  • Some of these evaluations may require actual attendance, as conference/workshop websites do not always offer the clearest picture.
  • Competitive conferences/workshops would be preferred.
  • The already-evaluated CLEF (Cross-Language Evaluation Forum) conference can serve as the pilot.

Design & Promotion

Design research to solve problems in Wikimedia projects.

  • Relevant Wikimedia data should be made available (see the sketch after this list).
  • Communicate with the organizers of conferences/workshops to gauge their interest in Wikimedia projects.
  • Tools developed by the research should be open source and freely licensed to be compatible with Wikimedia sites (we only deploy free, open-source software).
  • Research projects should meet the benchmarks in a manner that eases the deployment step.
  • Develop a strategic plan based on the requirement analysis.
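
As one concrete example of making relevant data available: the public MediaWiki API already exposes deletion log events, so a research dataset could start from a query like the sketch below. This only shows that the raw material is reachable; a real dataset would need curation and a privacy review:

```python
# Minimal sketch: fetch recent deletion log events from Wikimedia Commons
# via the public MediaWiki API (action=query, list=logevents).
import requests

resp = requests.get(
    "https://commons.wikimedia.org/w/api.php",
    params={
        "action": "query",
        "list": "logevents",
        "letype": "delete",  # deletion log only
        "lelimit": 10,
        "format": "json",
    },
    headers={"User-Agent": "wikimedia-ai-pilot/0.1 (research sketch)"},
    timeout=30,
)
resp.raise_for_status()
for event in resp.json()["query"]["logevents"]:
    print(event.get("title"), "-", event.get("comment", ""))
```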

Promotion of the research project.

  • Researchers may be unaware of the conferences (or at least of a new Wikimedia track); when a new research track is added, it often takes some time to become popular among researchers.
  • Use of media, Wikimedia sitenotice, social media, etc. to promote research tracks can be considered.
  • A small monetary prize or some sort of trophy/medal can be considered. A trophy/medal may have a higher impact than a small monetary prize.

Implementation & Testing

Tools would be implemented in order to participate/compete in the conferences/workshops.

  • The tools would be developed by the participating researchers.
  • An experimental setting, such as a server with scaled-down Wikimedia data (perhaps a Toolserver-like environment), would be at the researchers' disposal; it would not be intended for the general public (no polished user interface, etc.).
    • Technical specifications and resource limitations would be enforced so that developed software meets the benchmarks (see the sketch after this list).
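
One illustrative way to enforce such limits on a Unix research server is shown below; the specific limits and the participant entry point are placeholders, not project decisions:

```python
# Minimal sketch: run a submitted tool under fixed CPU and memory caps so
# every entry competes under the same benchmark restrictions (Unix only).
import resource
import subprocess

def set_limits():
    # At most 300 CPU-seconds and 2 GiB of address space per process;
    # both numbers are placeholder values.
    resource.setrlimit(resource.RLIMIT_CPU, (300, 300))
    resource.setrlimit(resource.RLIMIT_AS, (2 * 1024**3, 2 * 1024**3))

# "participant_tool.py" is a hypothetical entry point for one submission.
subprocess.run(["python3", "participant_tool.py"], preexec_fn=set_limits)
```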

Tools would be tested at the conferences/workshops.

  • Conference organizers would test the software in a scientifically and statistically sound manner to compare the competing AI applications and determine a winner (see the sketch below).
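
A minimal sketch of what such a comparison typically looks like at evaluation campaigns such as CLEF: each team's predictions are scored against hidden gold labels using standard metrics. All data below is hypothetical:

```python
# Minimal sketch: score competing systems on a held-out labeled test set.
from sklearn.metrics import precision_recall_fscore_support

gold = [1, 0, 1, 1, 0, 1, 0, 0]  # hidden ground-truth labels

submissions = {  # one list of predicted labels per competing team
    "team_a": [1, 0, 1, 0, 0, 1, 0, 1],
    "team_b": [1, 1, 1, 1, 0, 1, 0, 0],
}

for team, preds in submissions.items():
    p, r, f1, _ = precision_recall_fscore_support(gold, preds, average="binary")
    print(f"{team}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```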

Evaluation

Evaluation for the next round of research to improve the tools.

  • Evaluate the performance of the tool.
  • Cost/benefit assessment of integrating directly into existing Wikimedia infrastructure (such as Wikimedia Labs resources) versus into an external system that only mirrors the relevant Wikimedia data (as Toolserver did in the past).
  • Community feedback would be most helpful for evaluation of results.
  • Reassess requirement analysis to improve the tools further. Past research leads to better future research.

Deployment & Maintenance

Dedicated engineering work would be needed to deploy and maintain the tools for use by the general public.

  • Deployment will have engineering and hardware costs depending on the scale and nature of the research (text processing requires fewer resources than image processing).
  • Once deployment is complete, Artificial Intelligence tools would need maintenance.
  • This may (and perhaps should) be treated as a separate project as it would involve a completely separate team.

Project plan

Scope

Activities

I feel it is important to keep in mind that this proposal is meant to be a pilot for projects of much larger scope.

Budget

Total amount requested

Funding will depend on the community reaction and the availability of existing resources. For instance, if the community decides on a Wikimedia problem that is covered by the previously mentioned CLEF conference/workshops, then the assessment of the conference is already complete.

The cost of the award depends on what kind of award we decide on: a cash prize, a custom-designed trophy, or even both, to be handed out in 2015.

It would still be necessary to attend the conference this year to attract researchers by introducing the task and answering their questions, and perhaps to display the award. Because registration is not yet open, the budget is difficult to predict, but conference registration, travel, and accommodation should cost about EUR 1,000. This year's CLEF conference is in the UK, so costs would actually be incurred in GBP.

This project is proposed as a pilot and has two components.

  1. Pilot project to solve a Wikimedia problem selected by the community.
    • Proof of concept that this approach can work.
  2. Evaluate other AI conferences/workshops with attendance (registration etc.), travel and accommodation costs.
    • There are many conferences to choose from; attending all of them would not be reasonable.
    • Each conference would receive a separate report.

Budget breakdown

Intended impact

Target audience

As a pilot, targeting English Wikipedia or Commons would make the most sense; however, the target audience depends on community reaction and priorities.

Community engagement

This project intends to promote the development of AI applications that address backlogged problems on various Wikimedia projects through AI conferences/workshops. AI could, in principle, help with a wide range of backlogged problems on any Wikimedia project or language edition. Consequently, community direction is very important so that this project focuses on the right problems.

Part of the requirement analysis step of this project is intended to gather feedback from the wider Wikimedia community here on Meta. After the community decides on the problem (or problems) for AI to solve, the next task is to determine which of the many AI conferences/workshops would be most suitable.

Furthermore, if accepted, a submission on AI applications to problems on Wikimedia sites will be presented at Wikimania 2014.

Fit with strategy

This project aims to improve quality and encourage innovation as described in the WMF Strategic Plan. It can also help stabilize infrastructure by leading to better AI-based tools for collaborative content creation.

Sustainability

Many countries and institutions require academic staff to produce peer-reviewed scientific publications as a condition of graduation, promotion, and/or continued academic employment. Research in AI typically involves processing some dataset, and building a dataset from scratch is difficult, often requiring significant effort and even funding. In the context of Wikimedia projects, any specific backlogged task that has already been partially processed through crowdsourcing can serve as a dataset: AI applications would use the processed content to predict how to handle the unprocessed content (a sketch of this pattern appears at the end of this section).

All this gives scientists an incentive to participate in conferences/workshops, which provide them with both datasets and a means to a peer-reviewed publication. These AI research conferences/workshops happen at regular intervals and often need new datasets to keep interest: nobody wants to work on a dataset that produced near-perfect results at the last conference/workshop.

As a result, the research arm of this project would be self-sustaining as long as these conferences/workshops are motivated to process Wikimedia data and their results flow back to Wikimedia sites. The conferences are more likely to provide such feedback if we provide them with well-formulated problems for AI research and the corresponding datasets. Such a symbiotic relationship would benefit both parties.
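
As a rough sketch of the "processed content predicts unprocessed content" pattern described above, with entirely hypothetical descriptions and categories: items already handled by volunteers serve as labeled training data, and a model proposes labels for the backlog:

```python
# Minimal sketch: use community-categorized file descriptions as training
# data and suggest categories for an uncategorized backlog.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

processed = {  # descriptions already categorized by volunteers
    "steam locomotive at a station": "transport",
    "oil painting of a river landscape": "art",
    "diesel engine on a test track": "transport",
    "watercolor portrait of a woman": "art",
}
backlog = ["sketch of a mountain landscape", "tram in a city street"]

vec = TfidfVectorizer()
X = vec.fit_transform(list(processed))
clf = MultinomialNB().fit(X, list(processed.values()))

for text, label in zip(backlog, clf.predict(vec.transform(backlog))):
    print(f"{text!r} -> suggested category: {label}")
```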

Measures of success

The pilot project will be considered successful if conference/workshop organizers are convinced to either convert an existing track or create a new track dedicated to solving Wikimedia-related problems. Success will be measured using the following criteria:

  • How significant was researchers' participation in the Wikimedia track(s)? Did the promotion of the Wikimedia tracks help?
  • Did the research generate open-source, freely licensed tools (even if experimental)? Did they follow the specifications and restrictions set by the benchmarks?
  • How successful was the research at solving the assigned Wikimedia-related problem? Was it successful enough to move forward with deploying the tool for the general Wikimedia community?

Participant(s)

Discussion

Community Notification

Please paste a link below to where the relevant communities have been notified of this proposal, and to any other relevant community discussions.

Endorsements

Do you think this project should be selected for an Individual Engagement Grant? Please add your name and rationale for endorsing this project in the list below. Other feedback, questions or concerns from community members are also highly valued, but please post them on the talk page of this proposal.

  • Community member: add your name and rationale here.