Gnutella
| Status of the proposal | |
| --- | --- |
| Status | closed |
| Reason | Inactive proposal. --Sannita (talk) 12:44, 18 September 2013 (UTC) |
A discussion page has been created. See P2P.
This is a page about a proposal of mine, suggesting that Wikipedia articles and images be made available via Gnutella, by means of drugging and tying up the resident sysadmin and installing a modified Gnutella client on the Wikipedia server, RIP. I've brought the subject up in the Wikipedia IRC channel (once, so as not to ruin a good thing), and, having not been overly shouted at, figured I had probably logged on to the wrong server. Some of the technical information contained in the “Implementation” section derives from this one chat.
About this article
Who wrote this article?
This article was originally written by Itai (over at User:Itai/Gnutella), who would just like you to note that he is not to be shot at. This question, along with this exceptionally witty answer, will both be deleted as soon as other people agree to take part of the blame.
Why does this article even exist?
Because I'm evil.
Justifications
Why Gnutella?
Because pigeons won't do. Other than that, however, I would have greatly preferred a Freenet server. Gnutella is more accessible, on the other hand, and this may even prove to be a good thing.
What are the justifications for enabling Gnutella access?
A minuscule reduction in bandwidth costs, and censorship evasion (due to aversion). I'm ever vigilant against the day the Evil-Government-Agents-cum-Martians blow Wikipedia's servers to smithereens. We should probably untie the sysadmin first.
Some more reasons?
Unfortunately. These reasons, it should be noted, have nothing to do with Wikipedia per se. This is true for most things found in this world of mine, however, so it should be of little consequence. As you surely noted, it's exceptionally hard to defend Gnutella these days (indeed, since its inception), when it mostly conveys files of dubious legality. I've skimmed the Gnutella specification, however, and nowhere does it say that the network is confined to pornography and MP3s, at least not until version 3.0. Granting Gnutella access to Wikipedia would mean that whenever one absentmindedly types "pornography" into one's Gnutella client's search box, one will be given a chance to download Wikipedia's superb article on the subject.
That is evil.
I know.
Implementation
Who will implement this?
Me, to begin with, although - after the fashion of all programmers - I would much rather have somebody else do it, or at least collaborate with me. It should be noted that I can only program INTERCAL.
Can you program?
Well... I can manage a "Hello worBuffer overrun. Local network will be shut down. Please step away from your computer before the killer robots arrive."
How can this be implemented?
Divine intervention may be required. Atheistically, however, I can see two ways in which implementation of this will be made possible, if not worthwhile:
- Hack and Slash: Modify the MediaWiki code to save a copy of every article created or modified. Install a simple Gnutella client, and set its shared folder to said cache. While we're (note how I cleverly dragged the astute-yet-befuddled reader into this) at it, we could also do away with the Squid proxies, serving HTTP pages from this cache. I would rather go with the second option, however, as I would much rather avoid making modifications to the MediaWiki code, the reason being that there is a body of programmers tending to the said code, who are likely to tear my own body asunder at the very mention of such modifications. Plan B, then.
- Modify the Gnutella Client: Now you're wheezing. Rather than take a sensible approach, a capable, or at least earnest, INTERCAL programmer could (after having single-handedly composed an INTERCAL MySQL API, for some reason not currently available) modify the Gnutella client's search and upload functions so that they will scan the MySQL database and dynamically create and deliver articles. The problem with this is that if a change to article parsing is made, it will have to be duplicated in both the PHP and INTERCAL renderers. A better solution still is to have the article provided by the proper PHP class. While PHP - as opposed, say, to INTERCAL - was never meant to be used this way, this might just work.
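The two routes above can be sketched roughly as follows. This is a minimal illustration in Python (standing in for both the PHP and, alas, the INTERCAL renderers); the function names, the dictionary standing in for the MySQL text table, and the shared-folder layout are all assumptions of mine, not anything MediaWiki or any Gnutella client actually provides.

```python
import pathlib


def mirror_article(title: str, html: str, shared_dir: pathlib.Path) -> pathlib.Path:
    """Plan A: copy each saved article into the Gnutella client's shared folder.

    A stand-in for a MediaWiki save hook; the real hook API is PHP-side
    and not shown here.
    """
    shared_dir.mkdir(parents=True, exist_ok=True)
    safe = title.replace("/", "_").replace(" ", "_")  # crude file-name sanitising
    path = shared_dir / f"{safe}.html"
    path.write_text(html, encoding="utf-8")
    return path


def handle_search(query: str, articles: dict[str, str]) -> list[str]:
    """Plan B: answer a Gnutella search by scanning article titles.

    `articles` stands in for the wiki's MySQL text table; a real client
    would issue a SELECT instead of a dictionary scan.
    """
    q = query.lower()
    return sorted(title for title in articles if q in title.lower())
```

Plan A keeps the Gnutella client entirely ignorant of the wiki, at the cost of a duplicate copy of every article; Plan B serves straight from the database but drags the client into the rendering business.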
Really?
No.
What about images?
Images - the only thing, incidentally, people may actually download - are not quite as hard to share, being quite comfortably stored in a single folder. A complication of this peaceful-and-thus-ephemeral state of affairs is serving Gnutella descriptions of the images taken from the MySQL database. An alternative is to provide nothing but the description: "a bloody image", which, while not very informative, is pithy and bandwidth-sensitive, as are all things likely to prevent excess downloads.
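The fallback scheme above amounts to a one-line lookup. A sketch, with the metadata dictionary standing in for whatever would actually be pulled from MySQL:

```python
def image_description(name: str, descriptions: dict[str, str]) -> str:
    """Look up an image's description, falling back to the pithy default.

    `descriptions` stands in for metadata fetched from the MySQL database.
    """
    return descriptions.get(name, "a bloody image")
```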
Another question related to images is what is to be done with articles containing images. One option, for instance, is to dynamically bundle the HTML article alongside the assorted images referred to therein into a TAR archive, thus burdening the Wikipedia server beyond anything approaching usability and saving us all quite a bit of time. An alternative is to decide that images are rarely if ever required to understand an article, and provide HTML, making sure the HTML image references (and, come to think of it, all links) use absolute paths. If one is really anxious to get the images, one can always perform another Gnutella search, or launch Paint and improvise. Applying crayons to the computer monitor, incidentally, is not recommended.
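The bundling option can be sketched with Python's standard `tarfile` module. The function name is mine, and the `images` dictionary stands in for files that a real server would read from the image folder; whether the server survives the exercise is left unmodelled.

```python
import io
import tarfile


def bundle_article(title: str, html: str, images: dict[str, bytes]) -> bytes:
    """Bundle a rendered article and its images into an in-memory TAR archive.

    `images` maps file names to raw bytes; a real server would read them
    from the image folder instead of holding them in memory.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        def add(name: str, data: bytes) -> None:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

        add(f"{title}.html", html.encode("utf-8"))
        for name, data in images.items():
            add(name, data)
    return buf.getvalue()
```

Doing this per request, uncached, is precisely the server-crushing behaviour the paragraph above anticipates.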
What needs to be done?
Quite a lot of things. The first thing, however, is to decide which Gnutella open-source client is to be used [1]. There aren't as many INTERCAL clients as you'd think. Once this is accomplished, modifications can commence. It should be noted that this had all best be done quietly, for fear that they'll shut down the Internet.
[1] I was all in favor of LimeWire, for no reason other than the fact that I like the name WikiWire, but then found out that it was written in Java, a language to which I am not, to say the least, partial. However, if it be agreed that LimeWire is best, or that WikiWire is really a good name, I shall have no qualms about using the LimeWire core, preferably without telling the LimeWire team.