Abstract Wikipedia/Updates/2021-06-17


Apologies for missing the update last week. Times are even busier than usual.
Why allow several implementations of a function?

Before we dive into today’s topic, two quick reminders:

  • Our first office hour is upcoming on Tuesday, 22 June 2021, at 16:00 UTC. It will be held on the Telegram channel and on the Libera.Chat IRC channel #wikipedia-abstractconnect.
  • On Thursday, 24 June 2021, we will be presenting at the Arctic Knot conference. Community member Mahir Morshed will present on how to get the lexicographic data ready to be used in Abstract Wikipedia at 15:00 UTC, and Denny will present on Abstract Wikipedia and Wikifunctions in general at 16:00 UTC.

Wikifunctions’s core model is centered around functions. Every function can have several implementations. All implementations of a function should return the same results given the same inputs.

One may ask: Why? What’s the point of having several implementations that all do the same thing?

There are several answers to that. For example, for many functions, different algorithms exist which could be used by different implementations. The traditional example in computer science classes is the sorting function: a sorting function takes two arguments, a list of elements to be sorted (i.e. to be brought into a specific, linear order), and a comparator operator that, given two elements, tells us which element should go first. There are many different sorting algorithms, any of which could be used to implement the sorting function. A particularly interesting visualization of the different sorting algorithms can be found in the form of traditional folk dances.
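
To make this concrete, here is a minimal sketch in Python of one function with two implementations based on different algorithms; the function names and the comparator convention are illustrative, not actual Wikifunctions interfaces:

```python
# Two implementations of the same sorting function. Both take a list and a
# comparator (comes_first(a, b) is True if a should go before b) and must
# return the same result for the same inputs.

def insertion_sort(items, comes_first):
    # O(n^2) algorithm: grow a sorted prefix one element at a time.
    result = []
    for item in items:
        i = 0
        while i < len(result) and comes_first(result[i], item):
            i += 1
        result.insert(i, item)
    return result

def merge_sort(items, comes_first):
    # O(n log n) algorithm: sort the halves recursively, then merge them.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], comes_first)
    right = merge_sort(items[mid:], comes_first)
    merged = []
    while left and right:
        merged.append(left.pop(0) if comes_first(left[0], right[0]) else right.pop(0))
    return merged + left + right

# The contract: every implementation agrees on every input.
comes_first = lambda a, b: a < b
assert insertion_sort([3, 1, 2], comes_first) == merge_sort([3, 1, 2], comes_first)
```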

The person calling the sorting function will often not care much about which algorithm is being used, as long as it works, is correct, and returns sufficiently quickly. But having different algorithms implemented allows the service to run the different algorithms and compare their runtime behaviors against each other. Different algorithms will often require different amounts of memory or computation cycles. Keeping track of the runtime behavior of the different implementations will eventually allow the function orchestrator to predict and select the most efficient implementation for a given input and at a given instant. When spare compute cycles are available, it can also run some implementations with different inputs, in order to learn more about the differing behavior of these implementations.
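
As a hedged sketch of what such selection might look like, the following Python snippet times each implementation and routes calls to the one with the best average runtime observed so far; the Orchestrator class and its naive strategy are invented for illustration:

```python
import time

class Orchestrator:
    """Route calls for one function to its fastest-known implementation."""

    def __init__(self, implementations):
        self.implementations = implementations             # name -> callable
        self.timings = {name: [] for name in implementations}

    def _average_runtime(self, name):
        runs = self.timings[name]
        # Untried implementations report 0.0, so they get explored first.
        return sum(runs) / len(runs) if runs else 0.0

    def call(self, *args):
        name = min(self.implementations, key=self._average_runtime)
        start = time.perf_counter()
        result = self.implementations[name](*args)
        self.timings[name].append(time.perf_counter() - start)
        return result

orchestrator = Orchestrator({
    "upper_builtin":  lambda s: s.upper(),
    "upper_per_char": lambda s: "".join(c.upper() for c in s),
})
print(orchestrator.call("wiki"))   # "WIKI", via whichever variant currently wins
```

A real orchestrator would also have to account for input size and spare capacity, but the principle is the same: recorded metadata drives the selection.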

One benefit of allowing for multiple implementations is that it reduces the potential for conflicts when editing Wikifunctions. If a contributor wants to try a different implementation, thinking it might be more efficient, they are welcome to do so and submit their implementation to the system. There is no need for the well-known arguments around different programming languages and their relative merits and qualities to spill over to Wikifunctions: everyone will be welcome to provide implementations of their favorite algorithms in their favorite programming languages, and the system will take care of validating, testing, and selecting the right implementation automatically.

Another benefit of having multiple implementations is that we can test them against each other rigorously. Sure, we will have the user-written suite of testers for an initial correctness check (and also to start collecting runtime metadata). But when you have several independent implementations of a function, you can either synthetically create more inputs, or run actual user-submitted function calls against different implementations to gather more metadata about the executions. With several implementations in hand, we can cross-validate them, compare their results, and bubble up any inconsistencies to the community.
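
A sketch of what such cross-validation could look like, again in Python; the names are invented, and a production system would compare results with a type-aware equality check:

```python
def cross_validate(implementations, test_input):
    """Run every implementation on one input and report any disagreement.

    Returns None if all implementations agree, otherwise the full set of
    results, so the inconsistency can be bubbled up to the community.
    """
    results = {name: impl(test_input) for name, impl in implementations.items()}
    values = list(results.values())
    if any(value != values[0] for value in values[1:]):
        return results
    return None

implementations = {
    "slicing": lambda w: w[1:],
    "buggy":   lambda w: w[2:],   # a deliberately wrong "tail"
}
print(cross_validate(implementations, "wiki"))
# {'slicing': 'iki', 'buggy': 'ki'} -- flagged for community review
```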

Besides having implementations of different algorithms, we also expect to have implementations in different programming languages. Implementations in different programming languages will be useful for the same reasons that different algorithms are useful, i.e. being able to cross-validate each other, and to allow for the selection of the most efficient implementation for a given function call. But they will have the additional advantage of being able to run on different configurations of the Wikifunctions function evaluator. What do I mean by that?

Whereas we plan to support numerous programming languages for adding implementations in Wikifunctions, we do not plan to actually run evaluators for all of them in production. There are several reasons for this: the maintenance cost of keeping all these evaluators running and up to date will likely be severe; the more programming languages we support, the more likely it is that the Foundation or the community will be exposed to bugs or security concerns in the run-times of these languages; and it is likely that, beyond five or six programming languages, the return on investment will greatly diminish. So what’s the point of having implementations in programming languages that we do not plan to run in production?

Remember that we are planning for an ecosystem around Wikifunctions with many function evaluators independent of the one run by the Wikimedia Foundation. We are hoping for evaluators to be available as apps on your smartphone, on your own machine at home, in your browser, or in the cloud; for third parties to embed evaluators for certain functions within their systems; or even for a peer-to-peer network of evaluators exchanging resources and results. In these contexts, the backends may choose to support a different set of programming languages from those supported by Wikifunctions, either because it is favorable to their use cases, or because they are constrained to, or passionate about, a specific programming language. Particularly for Wikifunctions functions embedded within a third-party app, running those functions in the same language as the embedding app can easily provide quite a performance boost.

Another advantage of having implementations in different programming languages is that if an evaluator suddenly has to be taken down, e.g. because a security issue has been reported but not yet fixed, or because the resource costs of that particular evaluator have developed in a problematic way, we can simply change our configuration to run a different set of programming languages. This gives us a lot of flexibility in how to support the operation of Wikifunctions without disrupting people using the service.

An implementation of a function can also be given as a function composition: instead of contributed code in a programming language, a composition takes existing functions from Wikifunctions and nests them together in order to implement a given function. Here’s an example: let’s say we want to implement a function second, which returns the second letter of a word. Assume that we already have a function first, which returns the first letter of a word, and a function tail, which chops off the first letter of a word and returns the rest. Then second(w) can be implemented as first(tail(w)), i.e. the first letter of the result after chopping off the first letter. We will talk about function composition in more detail at another time.
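
Written out in Python purely for illustration (in Wikifunctions, the composition would be stored as wiki data, not as Python code), the example looks like this:

```python
def first(word):
    return word[0]      # the first letter of a word

def tail(word):
    return word[1:]     # the word without its first letter

def second(word):
    # Composition: the first letter of what remains after chopping one off.
    return first(tail(word))

assert second("wiki") == "i"
```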

Composition has the advantage that we don’t require implementations of every function in code or as a built-in, and yet we can evaluate these functions. The backend will properly chain the function calls and pipe the results from one function to the next until it arrives at the requested result. This might be particularly useful for third-party evaluators who offer a different set of programming languages, or even focus on one specific programming language; they still might be able to use a large set of the functions, even without local implementations.
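
To sketch what this chaining could look like, here is a toy evaluator in Python that walks a composition stored as nested data and pipes each result into the enclosing call; the representation and names are invented for illustration:

```python
# Known implementations, keyed by function name.
FUNCTIONS = {
    "first": lambda w: w[0],
    "tail":  lambda w: w[1:],
}

def evaluate(expression, argument):
    """Evaluate a composition given as nested (name, inner) pairs."""
    if expression == "arg":                    # leaf: the caller's input
        return argument
    name, inner = expression
    return FUNCTIONS[name](evaluate(inner, argument))

# second(w) = first(tail(w)), expressed as data rather than code:
second = ("first", ("tail", "arg"))
assert evaluate(second, "wiki") == "i"
```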

We expect composition to usually offer less competitive performance than running code. Our metadata will be able to pinpoint especially resource-intensive function calls. We plan to surface these results on the wiki, highlighting to the community where more efficient implementations would have the most impact. I am hoping for a feature that, for example, will allow a contributor to see how many CPU cycles have been saved thanks to their porting a function to WebAssembly.

One interesting approach to function composition could be that, if we have code in a specific programming language for all functions participating in a composition, it might sometimes be possible to synthesize and compile the code for the composed function in that programming language. This might lead to a situation where, say, two different programming languages offer the most efficient implementations of the individual participating functions, and yet the actual function call runs most efficiently through the newly synthesized result.
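
As a speculative sketch of that idea, the following Python snippet inlines the bodies of two functions into a single synthesized function and compiles it; the source table and string-based inlining are invented for illustration (a real synthesizer would work on syntax trees, not strings):

```python
# Python source bodies for the participating functions, as expressions in w.
PYTHON_BODIES = {
    "first": "w[0]",
    "tail":  "w[1:]",
}

def synthesize(outer, inner):
    """Emit Python source for outer(inner(w)) by inlining both bodies."""
    inner_expr = PYTHON_BODIES[inner]
    outer_expr = PYTHON_BODIES[outer].replace("w", f"({inner_expr})")
    return f"def composed(w):\n    return {outer_expr}"

source = synthesize("first", "tail")     # second(w) = first(tail(w))
namespace = {}
exec(source, namespace)                  # compile the synthesized function
assert namespace["composed"]("wiki") == "i"
```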

And finally, there’s also caching: any of the function calls, including nested calls in composed functions, might be cached and re-used. This cache would be shared across all our projects and could provide a significant speed-up: after all, it is likely that certain calculations are going to be much more popular than others, similar to how some articles are much more popular than others at a given time. And just as Wikipedia saves tremendous amounts of CPU cycles by keeping pages in a cache instead of re-rendering them every time someone wants to read them, we can reap similar benefits by keeping a cache of function calls and their results.
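
A minimal sketch of such a cache in Python, memoizing on the function name and its argument; a production cache would be shared across servers and projects, which this one is not:

```python
import functools

@functools.lru_cache(maxsize=None)
def evaluate_call(function_name, argument):
    """Dispatch a function call; identical calls are served from the cache."""
    print(f"computing {function_name}({argument!r})")   # printed only on a miss
    return f"result of {function_name}({argument!r})"

evaluate_call("second", "wiki")   # computed and cached
evaluate_call("second", "wiki")   # answered from the cache, no recomputation
```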

In summary: having multiple implementations of a function not only gives us more flexibility in how to plan and run a function call, and thus the potential to save resources; it also gives us higher confidence in the correctness of the system as a whole, thanks to the cross-validation of the different implementations, and it reduces the potential for conflicts when editing Wikifunctions.

We are very curious to see how this will play out in practice. The paragraphs above describe ideas that require a lot of smart development work on the back-end, and we don’t yet know how well each of them will perform. We certainly expect to come across even more ideas (maybe from you? maybe from a research lab?), and to discover that some of the ideas sketched out here don’t work. We do not promise to implement everything described above. The good thing is that many of these ideas are ultimately optimizations: even simple versions of all of this should provide correct results. But smarter approaches will likely save us huge amounts of resources, and enable us to scale to the full potential of the project.

As with our other projects, we plan to publish our data and metadata, and we invite external organizations in academia and industry, as well as hobbyists, independent developers, and researchers, to help us tackle these interesting and difficult challenges.


Again, a reminder: our first office hour is upcoming on Tuesday, 22 June 2021, at 16:00 UTC, on the Telegram channel and on the Libera.Chat IRC channel #wikipedia-abstractconnect.