Mapping Commons Artwork template to Wikidata painting item
Problem: Currently, GLAM organizers face some hurdles in submitting simultaneously to Wikimedia Commons and Wikidata. This is particularly true for items that should uncontroversially be on Wikidata, such as paintings (see WikiProject Sum of All Paintings). Pattypan is a tool for mass uploads to Commons, which relies on a spreadsheet; Quick Statements is a tool for mass uploads to Wikidata, whose code can be generated from a spreadsheet. These tools share a common goal, easing mass uploads, but they do not work well together, and may require mass uploaders to do the work twice, using one and then the other, wasting precious time on work that could be done simultaneously.
Who would benefit: Mass uploaders on Wikidata and Wikimedia Commons; institutional uploaders.
Proposed solution: The integration of Pattypan and Quick Statements seems like a small technical step, though it would certainly require creating new columns for the parameters one needs to fill in when working with Wikidata. As a pilot, I would recommend strictly limiting the integration to paintings. I understand GLAMpipe has been proposed as a solution to the issue of integrating Commons and Wikidata uploads, but I believe this integration can also be set up for Pattypan.
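As an illustration of how small the step could be, here is a rough Python sketch that derives QuickStatements V1 commands from the same kind of spreadsheet a Pattypan upload uses (exported as CSV). The column names are hypothetical; P31/Q3305213 (instance of: painting), P170 (creator) and P18 (image) are real Wikidata IDs.

 import csv
 
 def rows_to_quickstatements(path):
     """Turn one spreadsheet row per painting into QuickStatements V1 commands."""
     commands = []
     with open(path, newline="", encoding="utf-8") as f:
         for row in csv.DictReader(f):
             commands.append("CREATE")
             commands.append('LAST\tLen\t"%s"' % row["title"])
             commands.append("LAST\tP31\tQ3305213")  # instance of: painting
             if row.get("creator_qid"):
                 commands.append("LAST\tP170\t%s" % row["creator_qid"])
             if row.get("commons_filename"):
                 commands.append('LAST\tP18\t"%s"' % row["commons_filename"])
     return "\n".join(commands)
 
 print(rows_to_quickstatements("paintings.csv"))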
Endorse. This has been an issue since we started curating artworks on Wikidata. It is complicated by the ever-growing backlog of images of artworks on Wikimedia Commons that were uploaded with the default uploader, which includes an information template, but not an artwork or "art photo" template. Jane023 (talk) 11:11, 15 November 2017 (UTC)
@Jane023: Hi, I wanted to let you know I've changed the {{support}} vote you added to "Endorse". The voting phase begins on November 27. Until then we don't want to confuse people into thinking votes will amount to anything. Sorry about that, and thank you for participating in the survey! MusikAnimal (WMF) (talk) 16:51, 15 November 2017 (UTC)
Endorse. I have been working with Quick Statements recently, and the missing integration with the mass Commons uploading tools has turned an already laborious job into one of three or more steps. Ederporto (talk) 16:57, 16 November 2017 (UTC)
Endorse, as an author of Pattypan :). This idea is close to my heart as an avid Wikidata editor. Just to let you know, one of the things that is planned is early support for Commons' Structured Data. Yarl (talk) 19:32, 17 November 2017 (UTC)
Problem: It is my firm opinion that the control of incoming data, and of changes to the data stored in Wikidata, does not work as well as it should. What is a tiny edit on one of Wikidata's ~40 million items can cause wrong information to appear on many thousands of pages in more than a hundred other projects, even those which chose to have flagged revisions as part of their quality control. There are many cases where the threat of vandalism going undetected is significantly higher than the possibility that the information actually needs to be changed (for instance: Chiapas is located in Mexico), and some information is actually timelessly true (such as population numbers from a census at a specific date in time). Allowing such stable data to actually be stabilized, and only changed under circumstances yet to be defined, would not only increase Wikidata's reputation as a trustworthy database but also increase its usage, while lessening its vulnerability.
Who would benefit: Wikidata as a whole (reputation, usage), Wikidata volunteers, other projects' volunteers (less need to focus constantly on changes to Wikidata items), readers (don't get wrong information)
Proposed solution: Develop some way of flagging individual data on Wikidata (i.e. not the whole revision, but a specific statement)
Support An excellent idea that would help prevent much of the subtle, insidious vandalism that Wikidata is particularly vulnerable to. Gareth (talk) 10:13, 28 November 2017 (UTC)
Support the idea of systems that help to detect or prevent vandalism. However, I am unclear about the details of this proposal. On Wikipedia we can protect vulnerable articles; maybe we can set up some system for protecting individual statements or whole items. --Jarekt (talk) 13:02, 1 December 2017 (UTC)
Problem: One of the biggest challenges for trust in Wikidata among existing community members in bigger language communities, like English, is the lack of trust in the sources; without full sourcing support, we can't provide the high-quality references Wikipedians are used to.
Who would benefit: Reusers of Wikidata, and the Wikidata community.
Proposed solution: Integrate Citoid into the tooling for Wikidata. This should not be too hard, because many of the source types have already been modeled in Wikidata.
More comments: I recognize that this may be on the development pipeline of the Wikidata team, but it appears to be competing with a number of other priorities, and most of the expertise for Citoid is at the Wikimedia Foundation.
I use the Zotero tool, but the hard part for books is still that the author and publisher should be Q items, not strings, and we need to create both a “work” and an “edition” of that work before we can use it as a reference. A template interface that accepts an ISBN, looks for a match, and if it doesn’t find one creates both work and edition, interlinked, with tools for searching for the author and publisher in Wikidata, would be ideal. It would also need to detect when the work exists but not the specific edition indicated by the ISBN. I appreciate that this use case is hugely complicated. - PKM (talk) 04:47, 18 November 2017 (UTC)
Here's another approach, if what we want is to make it easy for editors to add references to books. (I don't think this is original, but I have no idea where I saw it.) Add a new property, "reference ISBN". It works like <reference URL>: the user enters just the ISBN and qualifies with page number, section, etc. as needed. The ISBN is clickable (to OCLC or some other source with book data). A companion bot follows behind: if the ISBN matches an item, it adds the appropriate <stated in> reference. If the ISBN doesn't exist, the bot looks for a match on author and title and creates an edition if found. If there is no match, the bot creates minimal linked work and edition records (and adds them to a worklist for the Books project, or a new References project, for volunteers to review?). - PKM (talk) 19:23, 18 November 2017 (UTC)
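A self-contained Python sketch of the companion-bot logic PKM describes; the real Wikidata and OCLC lookups are replaced by in-memory dictionaries, and every name here is hypothetical.

 # ISBN -> edition item; (author, title) -> work item. Stand-ins for real lookups.
 editions_by_isbn = {}
 works_by_author_title = {}
 
 def ensure_edition(isbn, author, title):
     """Return an edition item for this ISBN, creating work/edition if needed."""
     if isbn in editions_by_isbn:
         return editions_by_isbn[isbn]
     work = works_by_author_title.get((author, title))
     if work is None:
         # Minimal work record, flagged for a review worklist as PKM suggests.
         work = {"label": title, "author": author, "needs_review": True}
         works_by_author_title[(author, title)] = work
     edition = {"isbn": isbn, "edition_of": work}
     editions_by_isbn[isbn] = edition
     return edition
 
 def complete_reference(statement):
     """Replace a bare 'reference ISBN' with a structured 'stated in' reference."""
     edition = ensure_edition(statement["reference_isbn"],
                              statement["author"], statement["title"])
     statement["stated_in"] = edition
 
 stmt = {"reference_isbn": "978-0-00-000000-2",   # illustrative ISBN
         "author": "A. Author", "title": "Example Title"}
 complete_reference(stmt)
 print(stmt["stated_in"]["isbn"])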
Support Anything that facilitates citation on Wikidata is very welcome. Especially support User:PKM's point that a tool for books should create entries for both the work and the edition. MartinPoulter (talk) 10:18, 6 December 2017 (UTC)
Problem: Creating reference items on Wikidata is lengthy and complicated. As a result, reference items are rarely used and are replaced by substandard referencing (stated in: an item about the database, or reference URL: a bare URL) or by no reference at all.
Who would benefit: Wikidata editors, Wikidata users (including users of Wikidata-powered infoboxes)
Proposed solution: Create a Citoid-like one-click reference tool that will create the reference item automatically (and check whether a reference item for the same source already exists).
I agree. Can I close this and we just focus on the other? Anything here you'd like to add to the other one? This will help get more votes in one place instead of having them split over two places. -- NKohli (WMF) (talk) 19:11, 21 November 2017 (UTC)
FWIW, I just read the other proposal before this one, and was ready to vote on this one although I didn't vote on the other one. I suppose it may be that the language of this proposal is clearer than the other one's regarding the benefits for editors like me. If that clarity isn't preserved through a merge (including possibly a more descriptive title), the total number of votes could end up actually reduced rather than increased. --Waldir (talk) 11:45, 3 December 2017 (UTC)
This one is a bit broader. Yes, Citoid may be the one-click reference tool, but it could be another tool as well. As mentioned above, Citoid may be unknown even to experienced editors, so they do not know its functions. --Jklamo (talk) 16:48, 6 December 2017 (UTC)
Problem: Many items have a large number of identifiers pointing to information in other databases. Those links often serve as references for statements in Wikidata. There should be a way for a user to quickly add a specific identifier as a reference to a statement, without typing or cutting and pasting from multiple places.
Who would benefit: Wikidata maintainers and users
Proposed solution: There are several options: build it into the Wikidata interface, write a new gadget, or extend an existing gadget.
I think a bot could help with this. If for example I add <stated in> = "GNIS" to an item, the bot would automatically pull the <GNIS ID> for the item and append it to the reference. If the reference is to a GNIS ID other than the one for the item, of course the editor would need to manually add that ID. This would work for any identifier. - PKM (talk) 21:04, 15 November 2017 (UTC)
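A minimal Python sketch of the bot behaviour PKM describes; the data structures are simplified dicts rather than real Wikibase JSON, and the GNIS ID value is illustrative. P590 (GNIS ID) is a real Wikidata property.

 DATABASE_TO_ID_PROPERTY = {"GNIS": "P590"}   # GNIS ID
 
 def complete_reference(item, reference):
     """If a reference says <stated in> = some database, append the item's own ID."""
     id_property = DATABASE_TO_ID_PROPERTY.get(reference.get("stated in"))
     if id_property and id_property not in reference:
         ids = item["claims"].get(id_property, [])
         if len(ids) == 1:                    # an ambiguous ID needs a human
             reference[id_property] = ids[0]
 
 item = {"claims": {"P590": ["1652484"]}}     # illustrative GNIS ID
 reference = {"stated in": "GNIS"}
 complete_reference(item, reference)
 print(reference)   # {'stated in': 'GNIS', 'P590': '1652484'}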
I am fine with a bot to fix incomplete references, but even adding <stated in> = "GNIS" takes some cutting and pasting. Also, unless I am running the bot, I like to leave a page in the state I would like to find it in if someone else were editing it. --Jarekt (talk) 21:23, 15 November 2017 (UTC)
It would be nice to be able to drag and drop an identifier onto a statement and have it added as a properly structured reference. I’d also like to be able to drag and drop a <described by source> or <described at URL> statement and have it “dropped” as a properly structured reference. - PKM (talk) 04:30, 18 November 2017 (UTC)
I think it should rather be solved client-side, i.e. on the wiki that uses the data from Wikidata, to avoid data duplication. I recently implemented it on huwiki. (It’s not an issue for #statement, as AFAIK it can’t display references at all. Of course, it may be worth including this in the implementation if the parser function ever becomes capable of displaying references.) --Tacsipacsi (talk) 16:45, 9 December 2017 (UTC)
Problem: Current QR coding requires copying and pasting individual URLs from a primary-language Wikipedia, then downloading the created PNG files to a hard drive for use in the artwork of the signage. If an article later gets renamed, the QR code breaks, incurring a cost to the end user to recreate the signage, which can be significant. It also damages the trust and relationships necessary for the long-term support required.
Who would benefit: Affiliates making WikiTowns and QR projects; it would also enable GLAMs to use the QR codes to give access to Wikimedia information directly in displays.
Proposed solution: Create a bot to make QR codes for all Wikidata items.
More comments: By using Wikidata items, the codes are static and don't change. A code can access all of the subject's associations, both internal and external, in the available languages, which expands Wikidata's role as a central data hub. This would also facilitate the creation of a WMF-based QR reader, which would mean people won't be using commercial ad-based services, and would keep the user within the safe WMF environment, which is a plus for schools. Additionally, it can use Wikivoyage journeys and Wikisource material as part of the whole experience.
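The code-generation half of such a bot is small; a rough Python sketch, assuming the third-party qrcode package (pip install qrcode[pil]):

 import qrcode
 
 def make_item_qrcode(qid, outdir="."):
     """One permanent QR code per item: the Q-number URL survives renames."""
     image = qrcode.make("https://www.wikidata.org/wiki/%s" % qid)
     path = "%s/%s.png" % (outdir, qid)
     image.save(path)
     return path
 
 make_item_qrcode("Q42")   # Douglas Adams, as a test item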
Yes and no. If the QR code is created automatically by a bot and retained as part of a data item, then it just becomes possible to download the codes as needed. Gnangarra (talk) 10:21, 20 November 2017 (UTC)
I missed the deadline (I wanted to support it), but I would like to add something. This proposal would be extremely useful for non-Latin languages where QRpedia codes are not usable. Here is a real-life example: File:QRpedia code in Odessa - Bristol Hotel - 2.jpg. This is a good code but not quite usable. What we would need is a code that a) definitely keeps its link to an item even if the item is merged, and b) uses the device's default language (if available) or otherwise falls back to a pre-defined local language (e.g. shows a page in French in France if the device's language is not available). As far as I can see, these codes would be way easier than the current ones — NickK (talk) 18:15, 11 December 2017 (UTC)
Support Would love to see qr.wikimedia.org, which would generate 'permanent' IDs and QR codes. You could have a link that would be stable across page moves and translations. Also for users, please! —TheDJ (talk • contribs) 17:20, 29 November 2017 (UTC)
Problem: When one's watchlist is set to display edits made on linked statements on Wikidata, they are always displayed in numerical codes even if labels exist on the Wikidata entries. For example, this diff on the enWikipedia watchlist displays as "Created claim: Property:P4552: Q5456; Added reference to claim: Property:P4552: Q5456", whereas on Wikidata it's two diffs with two edit summaries, "Added reference to claim: mountain range (P4552): Andes (Q5456)" and "Created claim: mountain range (P4552): Andes (Q5456)".
Who would benefit: People who use their watchlist on a non-Wikidata project to monitor changes to the Wikidata item linked to an article they have watchlisted. On enWikipedia some templates draw information from Wikidata, so making the content of edits easy to monitor may be beneficial.
Proposed solution: The watchlist should display the language label, if it exists, in lieu of the numerical code; in this case the summary should be "Created claim: Property:mountain range: Andes; Added reference to claim: Property:mountain range: Andes", perhaps with the "Property" omitted if it makes the summary overlong.
More comments: I hope I didn't send this in too late.
Phabricator tickets: phab:T108688; phab:T171027 may be worth paying attention to, since it's a technical issue that could impact this project.
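For illustration, a rough Python sketch of the proposed substitution, using the real wbgetentities API module; the regular expression and the fallback behaviour are assumptions.

 import re
 import requests
 
 API = "https://www.wikidata.org/w/api.php"
 
 def labelize(summary, lang="en"):
     """Replace bare P/Q numbers in an edit summary with their labels."""
     # Longest IDs first, so Q54 cannot clobber part of Q5456.
     ids = sorted(set(re.findall(r"[PQ]\d+", summary)), key=len, reverse=True)
     if not ids:
         return summary
     data = requests.get(API, params={
         "action": "wbgetentities", "ids": "|".join(ids),
         "props": "labels", "languages": lang, "format": "json",
     }).json()
     for entity_id in ids:
         entity = data.get("entities", {}).get(entity_id, {})
         label = entity.get("labels", {}).get(lang, {}).get("value")
         if label:
             summary = summary.replace(entity_id, label)
     return summary
 
 print(labelize("Created claim: Property:P4552: Q5456"))
 # -> "Created claim: Property:mountain range: Andes"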
Problem: The edit summary for this diff does display that a reference was added, but not which reference it is. References can be unreliable, spam links, etc., so having them easy to monitor is desirable.
Who would benefit: People who patrol Wikidata items for problematic edits, since the content of the diff is immediately displayed.
Proposed solution: Add the content of the reference to the edit summary; in this case it would be "Added reference (imported from:English Wikipedia) to claim: mountain range (P4552): Andes (Q5456)"
More comments: I hope that this isn't too late. This feature would also be useful if it displayed in cross-wiki watchlists. There may be length issues when the reference is long.
Problem: Creating texts on Wikisource is a multi-step process. The biggest and most important step is probably just getting the source file onto Commons. Then there are several more steps to get the file into Wikisource (in whichever language). Currently on Wikidata, in the P18 property for image files, the file is recognized automatically while the user types in the Commons filename. There should be a similar effect for reference URLs if the file is on Commons and has been added to any Wikisource project. See e.g. these two items, which both reference articles from the same dictionary of biography that is currently on English Wikisource: d:Q38103276 and d:Q43194364. Currently only the first one has an associated article from the Wikisource file at d:Q38103904.
Who would benefit: Anyone contributing to Wikidata and adding references from or via Commons and Wikisource.
Proposed solution: Make the Commons link obligatory when adding Wikisource articles to Wikidata, so that they can be "recognized" on the basis of the .djvu filename. This will dissolve the language silos that keep this information unavailable to reference contributors.
@Jane023: Maybe I'm missing your point, but there is something strange with your links to Wikidata; could you check them? (d:Q38103276 and d:Q43194364 are not biography entries but the actual people (Q5), and I linked Mary H. Graves, the woman, to Mary H. Graves, the dictionary entry.)
I didn't ask for anything to be obligatory. I would like the links to be recognized for what they are (like sitelinks are recognized when you type them in). Jane023 (talk) 12:29, 2 December 2017 (UTC)
Problem: Currently the only calendar for inputting data into Wikidata is the Gregorian calendar. There are some other local calendars which are supported by MediaWiki (Arabic, Persian, Hebrew, Thai solar, Minguo, Japanese nengo). Some articles' data are based on these calendars, and users have to convert the dates to Gregorian to enter them into Wikidata, which is difficult.
Who would benefit: Local users could simply edit dates on Wikidata based on their local calendar.
Proposed solution: Develop a gadget which converts these calendars to Gregorian for input into Wikidata. I am not proposing to change the database date format; I am only suggesting a converter gadget.
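A minimal sketch of the gadget's core for one calendar (the Persian / Solar Hijri case), assuming the third-party Python convertdate package as the conversion engine; the validated year range is an illustrative placeholder, along the lines of Jc3s5h's warning below.

 from convertdate import persian   # pip install convertdate
 
 def solar_hijri_to_gregorian(year, month, day):
     """Convert a Solar Hijri date to a (year, month, day) Gregorian tuple."""
     if not 1200 <= year <= 1500:   # refuse dates outside the tested range
         raise ValueError("date is outside the range validated for conversion")
     return persian.to_gregorian(year, month, day)
 
 print(solar_hijri_to_gregorian(1396, 8, 19))   # -> (2017, 11, 10)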
Wouldn't it be better to allow data in different calendars to be input into the form as-is, instead of having a program convert them? Some calendars were used differently in different eras, and people in different places also followed different rules, so conversion might cause some trouble... C933103 (talk) 09:05, 10 November 2017 (UTC)
In my opinion, having a single input type for the database is a good idea; different calendars create difficulties for bots and external data users, so I support a simple converter. 10:10, 10 November 2017 (UTC)
Both your views can be reconciled; it only requires that both data be stored side by side. From a traceability point of view, it's clearly better to keep the reported data as close as possible to the source it comes from. Calendars are a difficult topic; usage across documents is nowhere close to linear, synchronized, trustworthy information. On the other hand, a simple converter which feeds a separate field provides a convenient way to build some interesting aggregations that are aware of and fine with such a naive approach, which has both its pros and cons. So it would be more prudent to expose dates in several flavours of accuracy, with explicit qualifiers for each. --Psychoslave (talk) 10:37, 12 November 2017 (UTC)
I do not like the idea of storing the dates differently based on a calendar. A point in time is a point in time, no matter which calendar it was specified in by the source. Of course, a reference needs to specify the calendar in which the date was given by the source. So I support user:Yamaha5's idea of date input in other calendars, but oppose the idea of storing dates in multiple formats side by side, as is done in d:Q22687867 or d:Q165671. --Jarekt (talk) 18:44, 15 November 2017 (UTC)
If the proposal were accepted, the developers should carefully determine the earliest past date and the date furthest in the future for which the converter can convert accurately. The converter should refuse to convert any date outside the range for which the converter has been rigorously tested. This will require an understanding of history as well as computer programming. Jc3s5h (talk) 22:36, 15 November 2017 (UTC)
Problem: Every time an item is created, we have to re-enter all the information by hand. We could use another item as a template when creating one, cloning it and then changing the relevant information (like, for example, the coordinates).
Who would benefit: Wikidata users that don't make extensive use of bots or tools. Data would also be more coherent this way.
Proposed solution: Having a clone button on each item.
Support That would be very useful for chemistry, where I repeatedly run into compound classes for which I want to mint entries for specific structures, with proper (stereo)isomerism. Egon Willighagen (talk) 08:34, 6 December 2017 (UTC)
Problem: Sometimes a user needs the equivalent (in another language) of a page (like the talk pages of d:Q11214943), but such a page does not qualify for an item according to d:Wikidata:Notability.
Who would benefit: All
Proposed solution: Show interlanguage links on these pages automatically: talk pages (depending on the original page's item), MediaWiki pages, user pages, creator pages (depending on d:Property:P1472), and special pages.
Perhaps this is slightly different, but I often want to read the same search-word pages in other languages, as content differs between the Swedish, German and English articles on the same topic. Which is fine, as, for example, historical persons are relevant in different ways depending on your national/cultural viewpoint. But if you have the language skills, reading articles in other languages often broadens your outlook on things. — The preceding unsigned comment was added by Bertiloman (talk) 09:43, 10 December 2017 (UTC)
Support User pages with Cognate, Creator/Institution with metadata. (I’m in doubt whether it makes sense on MediaWiki and talk pages, but if it’s enabled on them, MediaWiki should also work with Cognate.) Tacsipacsi (talk) 20:03, 9 December 2017 (UTC)
Problem: Central pages on Wikidata that are linked from many places, and where vandalism affects many users, don't have enough watchers. This means vandalism doesn't get reverted as fast as it would if more people had the items on their watchlist.
Who would benefit: Any Wikidata user, and data reusers who want Wikidata to be faster at reverting vandalism.
Proposed solution: If I add the statement "Earth (Q2) highest point (P610) Mount Everest (Q513)", then in addition to adding "Earth (Q2)" to the watchlist, "highest point (P610)" and "Mount Everest (Q513)" should also be added. Of course, there should be an option in the preferences to deactivate this feature.
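A toy Python model of the proposed behaviour; every name here is hypothetical, since a real implementation would live inside Wikibase.

 class User:
     def __init__(self, watch_linked=True):
         self.watch_linked = watch_linked   # the proposed preference toggle
         self.watchlist = set()
 
 def on_claim_saved(user, subject_qid, property_pid, value_qid=None):
     user.watchlist.add(subject_qid)
     if user.watch_linked:
         user.watchlist.add("Property:" + property_pid)
         if value_qid:
             user.watchlist.add(value_qid)
 
 u = User()
 on_claim_saved(u, "Q2", "P610", "Q513")   # Earth, highest point, Mount Everest
 print(sorted(u.watchlist))                # ['Property:P610', 'Q2', 'Q513']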
Nice idea, but this seems to me like it should be opt-in. Not everyone who uses Wikidata wants to spend time fighting vandalism - and I imagine it would be kind of strange to see that multiple pages have been added to their watchlist when someone chooses to watch just one page - especially as a default behaviour. -- numbermaniac 07:00, 9 November 2017 (UTC)
Fighting vandalism isn't the only effect of following central pages. It also means that it's possible to have conversations about them. Currently, it's hard to get a conversation going between the different people who use a property like "highest point" in order to clarify its usage, but I'm open about whether or not this is enabled by default. Ideally I would like to see an A/B test of whether it makes sense to enable or disable such a feature by default. ChristianKl (talk) 16:07, 9 November 2017 (UTC)
Problem: Currently, many descriptions created by bots can be derived and introduced logically, using P31 and other properties (e.g. "Wikimedia disambiguation page", "Wikimedia category", "scientific journal", "scientific journal article", "species of animal", "protein-coding gene in the species Homo sapiens", etc.). In many cases, once bots have created descriptions, the items escape the bots' control, so it is now very troublesome to add translations in a new language to all of the existing items, or to delete descriptions in all languages when they are logically wrong.
Who would benefit: All editors, especially those using minor languages.
Proposed solution: Triggered by the creation or updating of a P31 claim (and other utilized properties), the database system generates the default description according to centrally managed, manually defined patterns for each language. If the description is empty, the system shows the default description. If a pattern for the corresponding language is undefined, the system shows nothing.
More comments: With this new function, descriptions in Wikidata will be easily maintainable.
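A sketch of the centrally managed pattern store in Python; the two patterns shown mirror real Wikidata descriptions, but the data layout is an assumption.

 DESCRIPTION_PATTERNS = {
     "Q4167410": {"en": "Wikimedia disambiguation page",
                  "de": "Wikimedia-Begriffsklärungsseite"},
     "Q5633421": {"en": "scientific journal",
                  "hu": "tudományos folyóirat"},
 }
 
 def default_description(p31_qid, lang):
     """Return the derived description, or None if no pattern is defined."""
     return DESCRIPTION_PATTERNS.get(p31_qid, {}).get(lang)
 
 print(default_description("Q4167410", "de"))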
Problem: Many properties related to an item are stored in other items and are presently very hard to access from templates using Wikidata. For example:
Taxon items have the property parent taxon (P171). If I have an infobox that shows the genus, family, or order of a given organism, I need a way to move up a chain of P171s until some rank is met.
People items have the property place of death (P20), which stores the most specific item related to the place of death; this could be a house, street, hospital, neighborhood, etc. If I have an infobox that shows the place of death of a person, I usually need the city or town where the person died. A query to look up the city of death of Pyotr Tchaikovsky is SELECT DISTINCT ?city { ?city ^(wdt:P20/wdt:P131*) wd:Q7315; wdt:P31/wdt:P279* wd:Q515 . }. It is very hard to access that information using Lua calls, and it is totally inaccessible through {{#statement:...}} calls. A similar issue arises for "country of birth" or "country of death".
We have several properties which have an inverse constraint, for example mother (P25) / child (P40). We could retire the child (P40) property and automatically calculate it from the mother (P25) property. That would allow us to keep the information in only one place.
Who would benefit: users of Wikidata, infobox writers, maintainers of Wikidata
Proposed solution: Create infrastructure to allow read-only properties which are not directly editable but are precomputed from some SPARQL query over other properties and items. Users would see and access them in a way similar to the current properties.
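How such a value might be precomputed offline, shown in Python against the real Wikidata Query Service endpoint, using the query from the problem statement above:

 import requests
 
 QUERY = """
 SELECT DISTINCT ?city WHERE {
   ?city ^(wdt:P20/wdt:P131*) wd:Q7315 ;   # Pyotr Tchaikovsky
         wdt:P31/wdt:P279* wd:Q515 .       # instance of (a subclass of) city
 }
 """
 
 response = requests.get("https://query.wikidata.org/sparql",
                         params={"query": QUERY, "format": "json"})
 for row in response.json()["results"]["bindings"]:
     print(row["city"]["value"])   # .../entity/Q656 (Saint Petersburg)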
This sounds like adding a reasoning layer to wikidata query service. Maybe should be a separate instance, but in principle this sounds like it could be a good idea. Or by pre-compute do you mean to re-compute these things every time something changes? That might be a lot trickier (do you update every location in a country if the country name is changed?) ArthurPSmith (talk) 20:22, 15 November 2017 (UTC)
I do not know how often one would have to refresh such pre-computed statements, but whatever it is, I am sure it would be more efficient than the current state of infobox templates doing, in Lua, operations equivalent to SELECT DISTINCT ?city { ?city ^(wdt:P20/wdt:P131*) wd:Q7315; wdt:P31/wdt:P279* wd:Q515 . }, just to get a city of death. I was looking into doing it for the c:Module:Creator infobox I maintain and was advised by others that had already implemented it, but given the potential of loading multiple items to get this one piece of information, I figured that there has to be a better solution. --Jarekt (talk) 21:31, 15 November 2017 (UTC)
I don't know that we need a new class of statements--what we would need would be to be able to say "this property can be followed" in the data exposed to the API, have the API follow it, until such time as it cannot be followed or it hits some predefined limit (by an editor maybe), or some such. --Izno (talk) 03:41, 16 November 2017 (UTC)
@Jarekt:, the problem here is that we're unlikely to have a Wikidata Query Service cluster powerful enough for what's proposed here until late 2018, so everything involving SPARQL queries falls outside of the scope of this proposal. See also my rejection of a somewhat related proposal here - SPARQL queries can be very slow and it's completely out of question that we allow them to slow down page viewing/editing. I'm not archiving this proposal, however, to see if a limited simpler solution that doesn't involve WDQS can be viable. Max Semenik (talk) 09:01, 24 November 2017 (UTC)
Max Semenik I understand that the proposal might not be technically feasible, but if nothing else we can figure out how many people think it is a good idea. I was imagining that the values would be precomputed and stored or cached. I was also imagining that it might save time: when I asked about how to access the city of birth using Lua, I was told about modules that already do the equivalent of SELECT DISTINCT ?city { ?city ^(wdt:P20/wdt:P131*) wd:Q7315; wdt:P31/wdt:P279* wd:Q515 . } in Lua. However, since that would require loading many items for a single piece of information, precomputing seemed like the saner solution.
Problem: Because Wikidata only supports the storage of Universal Time dates, dates given for other time zones are not stored accurately. For example, 2016 World Series (Q24906838) claims that game 7 of the 2016 World Series ended on November 2, 2016, and it did, according to Pacific Time. But in Universal Time, it ended on November 3, 2016.
Who would benefit: Anyone who wants accurate dates.
So, what about people, who have birth dates? Good luck. This isn't a problem fixable by the software--it's fundamentally on the editors to fix this. --Izno (talk) 03:36, 16 November 2017 (UTC)
It is currently impossible for an editor to enter an accurate birth date for anyone who wasn't born in a place where the offset from UT was 0 (the UK in winter, for example). It is beyond the ability of editors to fix this. The developers should allow dates in local time to be stored.
Let me give an example. I read in a reliable source that a certain person was born in Australia on July 15, 1975. The date of birth is given, but not the time of day. If the person was born at 1 AM local time, the UT birth date is July 14. But if the person were born at 11 PM local time, the UT birth date is July 15. The editor can't solve this; it's on the developers. Jc3s5h (talk) 13:09, 16 November 2017 (UTC) edited for clarity 15:11 18 November 2017 (UT)
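Jc3s5h's example in code (Python 3.9+, standard library only): the same Australian calendar date maps to two different UT dates depending on the unknown time of day.

 from datetime import datetime, timezone
 from zoneinfo import ZoneInfo
 
 tz = ZoneInfo("Australia/Sydney")   # UTC+10 in July
 early = datetime(1975, 7, 15, 1, 0, tzinfo=tz).astimezone(timezone.utc)
 late = datetime(1975, 7, 15, 23, 0, tzinfo=tz).astimezone(timezone.utc)
 print(early.date(), late.date())    # 1975-07-14 1975-07-15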
I agree. Most dates on Wikidata are in the local timezone, and we should not give the impression that they are in Universal Time if they are not. --Jarekt (talk) 14:06, 16 November 2017 (UTC)
This is a symptom of a more fundamental issue: it's not possible to store full datetime values in Wikidata; you can only store the date, and it assumes that the time is 0h UTC. It would be much better to allow the full datetime to be edited, so that the hour/minute/second can be specified as well as the day. Thanks. Mike Peel (talk) 20:47, 18 November 2017 (UTC)
For most time properties we have, like date of birth or death, you are not going to find sources with dates more precise than a day. Some obituaries might have a time of death, but it would be provided in local time, with whatever daylight saving adjustment was proper for the place and era. So a lot of dates, even if you know the time, would be hard to convert to Universal Time. --Jarekt (talk) 12:56, 20 November 2017 (UTC)
With some rare exceptions, all dates are timezone-less. We export them as dateTime, but in fact virtually all of them are just dates, with no time. Since the base JSON model still pretends they are date-times, so does the RDF model, but maybe it's time to refine and change that. What I am absolutely opposed to, and think is the worst thing we could ever do, is to start converting dates between "local timezone" and "UTC". That would lead to utter insanity: we do not have historical data on timezones, and even if we had it and somehow managed to make all the data conversions work (which with 99.9999% probability we can't), it would turn all our data into utter junk, as nobody ever cared what the date was in Greenwich, UK at the time a certain event happened - unless that event happened in Greenwich, UK. What everybody cares about is what the date was in the place where the event happened, and that's the only thing we should ever deal with. There should be absolutely no such thing as a "UT birth date". What we have now is that we deal with dates right, but we record them wrong: we pretend they are date-times in UTC, when they are just dates, without time. That is something that we may want to change - and it probably requires wider discussion in the community. It's certainly not a developer question until we decide to remove the pretense of having a "time" part in dates - at which point, yes, the developers should take note.
However, this may make date-times and dates (if we ever have proper date-times) incomparable - which they effectively are, absent historic timezone data since the beginning of the universe - but that may be inconvenient for practical reasons. Or we decide to give up on times altogether (given that we haven't used them for years now and are still fine) and eliminate times from the data model.
Support clarifying that all Wikidata dates are in the local timezone. If we need to store precise time info, then we should indicate the timezone, but the default should be the local timezone, because that is the timezone used by most references. --Jarekt (talk) 13:29, 1 December 2017 (UTC)
Problem: QuickStatements is a vital part of a lot of work on Wikidata. It has a broad range of capabilities; however, from time to time you run into types of statements that cannot be added with QuickStatements. For example:
If I remember correctly, it's also impossible to add "no value" claims via QuickStatements, for example, South Pole: country: no value. Kaldari (talk) 18:44, 21 November 2017 (UTC)
Support QuickStatements has become one of the most important tools in Wikimedia, and Magnus is too overbooked to maintain it fully in his spare time. Syced (talk) 05:33, 11 December 2017 (UTC)
Problem: When a page on another Wikimedia project wants to use data from Wikidata, bespoke Lua scripts are needed for anything more complicated than getting the main object of a property. This has led to a proliferation of Lua scripts that do the same task in different Wikimedia projects.
Who would benefit: Wikimedia projects that want to use Wikidata, particularly small Wikipedias
Proposed solution: Extend the current {{#statements:}} parser function to accommodate qualifiers and sources. This will cover most standard use cases of Wikidata in infoboxes.
I would love to not only support this, but actually work on it. Unfortunately this issue's description is extremely vague. What exactly do you mean when you write "accommodate qualifiers and sources"? Should the parser function also output qualifiers, references, or both? Should it accept them as filters? How do you expect this to look in wikitext? --Thiemo Kreuz (WMDE) (talk) 16:42, 28 November 2017 (UTC)
Comment I doubt you can squeeze more out of {{#statement}}, but I think we should have more Lua libraries shared among all the projects allowing access to Wikidata statements. For example my c:Module:Wikidata date is used on Commons for formatting date statements in any language. It would be good to share such modules across other projects, so more people can improve it. --Jarekt (talk) 14:06, 4 December 2017 (UTC)
Problem: If we use Wikidata in Wikipedia, we can get the label in the language of that Wikipedia, in some fallback language, or as a bare Q number. It would be nice to have a category for articles where a label is missing.
Who would benefit: Readers and editors of both Wikipedia and Wikidata.
Proposed solution: Implement a tracking category or special page where items used on a Wikipedia without a native-language label can be found. Possibly highlight the places where such a label is used.
More comments:
I understand that such a category can be built into the modules that retrieve data from Wikidata, but there might be multiple different modules in use, or even bare {{#property:}} calls.
I understand that for bigger Wikipedias there might be a lot of items like this. I do not have clear ideas on how best to present a large number of pages.
Example: Direct Énergie, an article about a cycling team in the Russian Wikipedia. There is a module which gets the current team roster from Wikidata. As of now, some of the rider names are in Russian and some are in English. It would be nice to have an overview of which pages have this problem. This is a problem in Wikipedias which use a different script (Russian, Macedonian, Bulgarian, Greek, Japanese, Arabic) and in the rare cases where proper names have to be transcribed (Latvian, Lithuanian (as an option)).
Example 2: An item has a label only in French, but some module in the English Wikipedia needs it and displays the Q number, or maybe a fallback-language label. What would be a fallback language in the English Wikipedia?
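A rough Python sketch of the check itself, using the real wbgetentities API module; the item list is illustrative, and hooking the result into a tracking category is left open, as discussed above.

 import requests
 
 API = "https://www.wikidata.org/w/api.php"
 
 def items_missing_label(qids, lang):
     """Return the subset of items that have no label in the given language."""
     data = requests.get(API, params={
         "action": "wbgetentities", "ids": "|".join(qids),
         "props": "labels", "languages": lang, "format": "json",
     }).json()
     return [qid for qid, entity in data["entities"].items()
             if lang not in entity.get("labels", {})]
 
 print(items_missing_label(["Q7315", "Q515"], "ru"))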
I agree with the problem. An easy solution would be to write a short template that adds such a category and then add it to every article on Wikipedia. But it might be easier if it were handled by the MediaWiki software. --Jarekt (talk) 14:14, 16 November 2017 (UTC)
Currently this is already done with Lua modules, but a built-in feature is a good idea and would make it more consistent between wikis. The implementation requires some more thinking on the design: getting it from parser functions (#statements, #property) can be handled quite easily. As for Lua modules, we are unaware of the use; e.g. the data from Wikidata may not be shown directly in Wikipedia but used for other purposes - one can categorize based on the first letter of the name, etc. This could either be left to the specific implementation within modules, or some kind of customizable Wikidata label formatter could be added to Wikibase. 10:32, 17 November 2017 (UTC) — The preceding unsigned comment was added by ערן (talk)
Problem: Currently you can't quickly copy statements from one item and paste them to another. Instead you have to do it manually, one by one, for every statement, or use the semi-automatic tools. There are items with many statements that could all be added to many different items.
Who would benefit: Not all users want to use external tools, so this would help users from other projects to edit Wikidata.
Proposed solution: Create an environment where users can select statements (by checkboxing them?) and then copy them. Then they can open another item and paste selected statements there.
More comments: I don't know whether a visual environment is the best implementation, or whether there should be something like wikitext that users can copy and that, when pasted and saved, is migrated to visual mode.
Phabricator tickets: phab:T161259 "moving/copying statements to other items"
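An offline approximation of the idea in Python: read the source item's claims with the real wbgetclaims module, keep only the chosen properties, and emit QuickStatements commands to apply to the target item. Qualifiers, references, and non-item values are ignored to keep the sketch short.

 import requests
 
 API = "https://www.wikidata.org/w/api.php"
 
 def copy_commands(source_qid, target_qid, properties):
     claims = requests.get(API, params={
         "action": "wbgetclaims", "entity": source_qid, "format": "json",
     }).json()["claims"]
     commands = []
     for pid in properties:
         for claim in claims.get(pid, []):
             snak = claim["mainsnak"]
             if (snak["snaktype"] == "value"
                     and snak["datavalue"]["type"] == "wikibase-entityid"):
                 value = "Q%d" % snak["datavalue"]["value"]["numeric-id"]
                 commands.append("%s\t%s\t%s" % (target_qid, pid, value))
     return commands
 
 print("\n".join(copy_commands("Q2", "Q313", ["P361"])))   # Earth -> Venus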
Unless I misunderstand something, the moveClaim script only does them one at a time (though it does make them easy to move) - the proposal here is to select many claims at once to copy and/or move to another item. ArthurPSmith (talk) 20:10, 15 November 2017 (UTC)
Problem: Wikipedia articles are linked to Wikidata items according to concept.
Often, two concepts that are similar but distinct are stored separately in Wikidata.
In these cases, the current solution is to use either Template:Interwiki_extra on the local Wikipedia or old-style interwiki links, which are both cumbersome and do not fit the model of storing all the information in Wikidata.
These solutions aren't being maintained and monitored across all the different wikis either.
Often, that means users cannot jump from wiki language version A with concept 1 to wiki language version B with concept 1*, unless the user knows the Wikidata system really well and clicks through to Wikidata to check the related item.
Who would benefit:
All users in all Wikimedia projects who follow interlanguage links.
Proposed solution:
Use a Wikidata property on all the items that this method can be applied to, with other Wikidata items as its values.
For every Wikidata item using the property, after checking the interlanguage links within the item itself, the software should also check whether any language versions missing from it exist in any of the other linked Wikidata items. If such wiki versions exist, they should also be served as interwiki links to end users.
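A minimal Python sketch of that fallback, using the handcar example below; the data layout is simplified, and the linking property is the proposed (hypothetical) one.

 def effective_sitelinks(item, related_items):
     """Merge sitelinks from related items; the item's own links win."""
     links = {}
     for related in related_items:        # items linked via the proposed property
         links.update(related["sitelinks"])
     links.update(item["sitelinks"])      # direct sitelinks take precedence
     return links
 
 handcar = {"sitelinks": {"enwiki": "Handcar"}}            # d:Q3008463
 handcar_railway = {"sitelinks": {"jawiki": "人車軌道"}}    # d:Q42832982
 print(effective_sitelinks(handcar, [handcar_railway]))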
So this is not really so much a Wikidata request (adding a property like this ought to be straightforward); what you are proposing is a modification of the UI on all the MediaWiki platforms to pull interwiki links from other items. I think it's a reasonable approach to solving this problem; something along these lines is needed. ArthurPSmith (talk) 20:13, 15 November 2017 (UTC)
d:Q3008463 is about human-powered cars on railroads. Articles for the concept have been created in the English, French, Dutch, and Chinese Wikipedias.
d:Q42832982 is about railroad systems that use human-powered cars. Articles for the concept have been created in the German, Italian, Japanese, and Korean Wikipedias.
And then d:Q381727 is about such a system in Taiwan, which is covered in the English and Chinese Wikipedias.
Of course the topics of these three subjects overlap; however, there is currently no mechanism in the wikis that would allow a visitor to jump between the English Wikipedia article for handcar and the Japanese Wikipedia article for handcar railway, despite the handcar article also discussing railways that use handcars, and the handcar railway article also discussing the cars they use. C933103 (talk) 20:18, 21 November 2017 (UTC)
Problem: Currently each Wikidata item can only link to one entry from a given site. Thus multilingual MediaWiki sites (including Beta Wikiversity, Incubator, and the multilingual Wikisource), as well as wiki projects that currently have more than one article in different scripts due to the inability to automatically convert script variants (including, for example, some entries in the Tatar Wikipedia and some in the Min Dong Wikipedia), are forced to link the same concept to different Wikidata items if those pages are to be linked to Wikidata at all. In Wikidata, these different items are then linked together via properties, which makes management difficult (because multiple items exist for the same concept) and also makes it harder for users to go between different sites (as entries in different Wikidata items are not visible as interwiki links and are not displayed on the same Wikidata item page).
Who would benefit:
All Incubator/MediaWiki/multilingual Wikisource users will benefit by being able to directly connect each entry within those sites to the Wikidata database.
It will also help with the organization of data on Wikidata, as it will no longer be necessary to maintain permanently duplicated items marked with https://www.wikidata.org/wiki/Property:P2959
Users using an alternative script on a different wiki can also access Wikidata interlanguage links directly.
Proposed solution: Allow multiple articles/entries from the same wiki site on the same Wikidata item, with additional labeling of how each variant differs from the others. (In Incubator's case, that would be labeling the language name; in the case of a Wikipedia with multiple scripts, labeling the script name.)
Endorse. See also wikidata:Wikidata:Requests for comment/Allow the creation of links to redirects in Wikidata, which has strong majority (but not overwhelming) support from the Wikidata community. Support for multiple links should be along the same lines, though it is an even more significant change to the data model. I think the biggest problem (in both cases) is how to handle inter-language links consistently. Perhaps multiple links for a given language or Wikimedia site could be lumped into an interstitial page that lists the more specific options, rather than showing all the direct links on every page? ArthurPSmith (talk) 13:49, 7 November 2017 (UTC)
This doesn't seem to be a technical issue but more of a policy issue about the data model, and as such it is ill-suited to the Community Wishlist. I think that the implementation of the linked RfC will also reduce the need for this. ChristianKl (talk) 22:44, 7 November 2017 (UTC)
It is the data modelling that has become a technical issue. I can't see how the implementation of the linked RfC would in any way reduce the demand for this. C933103 (talk) 13:24, 8 November 2017 (UTC)
There are two sides to this proposal. On one hand it will help Commons and Incubator. On the other, on the Wikipedias interwikis have to be carefully chosen, as the articles do not always cover exactly the same meaning. Creating multiple links to similar articles rather than the right one would create a total interwiki mess. masti <talk> 13:50, 5 December 2017 (UTC)
Support abandoning the one-sitelink-per-project restriction. I would relax it to a one-sitelink-per-project-per-namespace restriction. That would solve the issues with Wikipedias using multiple scripts. It would also solve the issue of sitelinks to Commons, which is a big unsolved problem. --Jarekt (talk) 13:25, 1 December 2017 (UTC)
Sitelinks to Commons are a mess, as some sitelinks can be to categories and some to galleries, creating huge uncertainty and slowing down the development of Wikidata use on Commons. The same goes for wikis that have the same article in multiple scripts. The permanent duplicated item (P2959) property, used for linking multiple items related to the same concept but linking to different pages on the same site, is a crazy hack that tries to make up for this limitation. There is also the issue of using the string datatype for linking to pages on Commons, for which the proposed solution was to just store all the links to pages on other projects as sitelinks instead of properties. --Jarekt (talk) 13:33, 5 December 2017 (UTC)
It is a developer discussion, as it is not clear whether such a thing is technically possible at the moment. Once it is (technically) an option, then it becomes a community discussion whether to use it or not. --Jarekt (talk) 13:33, 5 December 2017 (UTC)
Support The status quo around Commons is a total mess, and Incubator projects should have the same technical options as normal wikis. On multilingual wikis (like Commons or Meta) translated pages can't be linked with Wikidata either (although it may be solved in the Translate extension by providing the same interlanguage links as on the original English page). Tacsipacsi (talk) 14:40, 9 December 2017 (UTC)
Stop using string datatype for linking to pages on other projects
Problem: Currently some statements with links to pages on other projects, for example Commons category (P373), are stored using the string datatype. That creates several types of issues:
it prevents automatic detection of whether the pages actually exist
the non-uniform storage format (spaces vs. underscores in page names) makes it hard to detect duplicates
there is no integration with page moves and deletions
clickable links are added through some JavaScript that does not seem to work properly (see phabricator:T177698), which frequently requires cutting and pasting page names between projects instead of clicking on a link.
Many of the above issues are being managed through property constraints, but the maintenance would be much simpler if the underlying datatype were more similar to the CommonsMedia datatype or to sitelinks.
Who would benefit: maintainers who work on keeping those links up to date. Currently there is a big backlog of constraint violations for many properties holding links to Commons pages.
Proposed solution: There are several possible solutions:
Create a new datatype for links to Commons pages or to pages on other projects
Extend CommonsMedia datatype to allow links to other pages
You know, this would be fixed if Wikidata didn't need an unhealthy dose of including categories on pages about topics, and instead managed these through item statements to category items. But, you know, the world might melt from climate change before that happens. --Izno (talk) 03:24, 16 November 2017 (UTC)
Izno This is not about article-items and category-items, but about how we save links to pages on other projects. The string datatype just does not work well for that purpose. --Jarekt (talk) 13:37, 16 November 2017 (UTC)
You're making my point for me. We should save links for article items on article items and category items on category items, and then link the items, even if that means creating new items, not the really dumb mix that is employed today. --Izno (talk) 14:20, 16 November 2017 (UTC)
Move/deletion integration seems problematic. How would you act on a page being deleted and undeleted? Or deleted and rewritten? Or a noticeboard page being moved because someone prefers that method of archival? --Tgr (WMF) (talk) 04:02, 20 November 2017 (UTC)
Problem: Currently it's hard to use the duration property, because the displayed value is not user-friendly. It's hard to add a source when I see that Adele's Rolling in the Deep is 228 seconds long; it should display 3 minutes 48 seconds instead. It is also hard to insert new data, because you need to recalculate the value.
Who would benefit: Wikidata readers and editors, especially in cultural area (duration of movies, albums, songs...).
Proposed solution: There should be the possibility of entering values via the Wikidata interface in HH:mm:ss or mm:ss format. They could be recalculated to the base unit (seconds) on the front-end side and saved to the database, and the same applies to displaying values.
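The conversion itself is a few lines either way; a Python sketch of both directions, with storage staying in seconds:

 def format_duration(total_seconds):
     """228 -> '3:48'; 5025 -> '1:23:45'."""
     minutes, seconds = divmod(int(total_seconds), 60)
     hours, minutes = divmod(minutes, 60)
     if hours:
         return "%d:%02d:%02d" % (hours, minutes, seconds)
     return "%d:%02d" % (minutes, seconds)
 
 def parse_duration(text):
     """'3:48' -> 228; accepts mm:ss or HH:mm:ss."""
     seconds = 0
     for part in text.split(":"):
         seconds = seconds * 60 + int(part)
     return seconds
 
 print(format_duration(228), parse_duration("3:48"))   # 3:48 228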
Support Just make it clear how to fill this property; if there is only one way, a template accepting only digits can be helpful. Klaas `Z4␟` V: 09:38, 10 December 2017 (UTC)
Problem: A multitude of tools make mass edits possible, which is nice. But sometimes errors happen and it seems like some erroneous (mass) edits stay in our database just because it's such a big turnoff to go through hundreds or thousands of edits manually.
Who would benefit: Wikidata editors, data quality (i.e. Wikidata users)
Proposed solution: It should really be as easy to revert your changes as it is to make them in the first place. So I guess we need a tool that lets you select, in an intuitive way, exactly which (own?) changes to revert, e.g. by exact period of time, by patterns in the edit summary, ... or maybe just by easily preselecting (and deselecting) some items or ranges of items from a list with single clicks/keystrokes before initiating the mass revert action. Integration into the Wikidata website would be nice.
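The selection half of such a tool can lean on the existing API; a Python sketch using the real usercontribs module. Actually undoing each revision would additionally need an edit token and action=edit with undo=<revid>, omitted here; the user name and timestamps are illustrative.

 import requests
 
 API = "https://www.wikidata.org/w/api.php"
 
 def own_edits_between(user, newest, oldest):
     """List a user's edits in a time window, newest first."""
     data = requests.get(API, params={
         "action": "query", "list": "usercontribs", "ucuser": user,
         "ucstart": newest, "ucend": oldest,
         "ucprop": "ids|title|comment", "uclimit": "max", "format": "json",
     }).json()
     return data["query"]["usercontribs"]
 
 for edit in own_edits_between("ExampleUser", "2017-11-15T00:00:00Z",
                               "2017-11-14T00:00:00Z"):
     print(edit["revid"], edit["title"], edit["comment"])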
QuickStatements can be used to mass-remove statements. That, combined with SPARQL queries, allows you to build a query that catches all the bad statements and then build a list of QuickStatements commands to remove them. How would your proposed tool be different? --Jarekt (talk) 15:43, 15 November 2017 (UTC)
QuickStatements usage for this purpose is far from intuitive, and as a result many people don't use it this way. I think it would be great if on Special:Contributions/ there were a way to undo all edits you made between time X and time Y. It should allow everybody to undo their own edits in bulk. As far as undoing others' edits in bulk, I'm less sure. We might give that feature out along with the rollbacker permission, or even limit it to admins. ChristianKl (talk) 22:32, 20 November 2017 (UTC)
Problem: A Wikidata item is not understandable in its current configuration. There is no single view that provides an easy understanding of the information that lurks on many scrollable pages.
Who would benefit: Every user who wants to understand the values in a Wikidata item
Proposed solution: Have Reasonator or something similar available for everybody, using an obvious button.
Support I cannot edit Wikidata without understanding the data. I use Reasonator for all my information requirements. The current UI is hardly usable. GerardM (talk) 07:46, 28 November 2017 (UTC)
Support Maybe we can have the option of different Wikidata skins that make the page look more like Reasonator. The current Wikidata front end is convenient for editing and Reasonator is convenient for reading. I think Wikidata should move closer to Reasonator, but it should still be optimized for editing, at least in one skin. --Jarekt (talk) 13:15, 1 December 2017 (UTC)
Support When the description is only filled in in a non-Latin script, many don't understand it; the minimum should be a description in English, IMNSHO. Klaas `Z4␟` V: 09:43, 10 December 2017 (UTC)
I'd love a custom UI for this, with a more focused approach. Particularly if you are coming to Wikidata from another project, it can be quite confusing at times. —TheDJ (talk • contribs) 14:35, 18 November 2017 (UTC)
Problem: Different languages have different grammar rules for displaying dates. This works well on Wikipedia, for example, but doesn't work on Wikidata. Using English, the surface date format is 18 November 2017, while using, for example, Hungarian it shows 18. November 2017; the correct form in this example would be 2017. november 18. There are other language examples in the linked Phabricator ticket. It looks like an easy problem, but there has been no progress in recent years.
Who would benefit: Users of Wikidata
Proposed solution: Implement the per-language date format rules from the existing code (MediaWiki's?).
Merging these requests about similar topics would gain more attention and a higher number of votes, while the tasks themselves look like rather small issues to me. Samat (talk) 13:50, 18 November 2017 (UTC)
This issue and those issues are unrelated, except that they deal with the time data type (and P2047 deals with the number-with-units data type!). Merging them would create the kind of task that is specifically requested not to be added on the main page. --Izno (talk) 03:57, 19 November 2017 (UTC)
I've been following this Phab thread for some time too and would support this proposal. I agree with Izno that merging may not be helpful as a wishlist entry because they're different technical issues that have separate solutions. Deryck C. 00:12, 21 November 2017 (UTC)
Strong support The situation for centuries is even worse: not only ugly, but misleading and even outright wrong, causing errors in the database. --Marsupium (talk) 22:05, 29 November 2017 (UTC)
Problem: On the English Wikipedia, at least, Wikidata has a reputation for being prone to vandalism and errors. In my experience, even on the more visible items, vandalism may take more than a day to be removed, and on other items it can last for months. Vandalism reversion is also tedious and difficult, particularly since label vandalism can be in hundreds of languages.
Who would benefit: Wikidata editors, and users of the data on other wikis and elsewhere
Proposed solution: Provide better and faster-to-use vandalism-fighting tool(s). This could be one or more tools along the lines of:
Huggle, STiki and Twinkle on the English Wikipedia, with translation capability
Better vandalism-fighting bots and edit filters (e.g. preventing height, weight, and gender/sex statements from being added or changed by new users) to better prevent drive-by nonsense insertion (see the scoring sketch after this list)
Autoblocking of IPs and new users who are blocked on other WMF wikis, to prevent vandalism from being "exported"
Fine-grained protection of individual labels, descriptions and statements to prevent them from being vandalized
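As one concrete building block for the bot and edit-filter ideas above, a Python sketch that scores an edit with the real ORES service, which has a "damaging" model for Wikidata; the revision ID and threshold are illustrative choices, and the exact response layout is assumed from ORES's v3 API.

 import requests
 
 def damaging_probability(revid):
     """Probability, per ORES, that a wikidatawiki revision is damaging."""
     data = requests.get("https://ores.wikimedia.org/v3/scores/wikidatawiki/",
                         params={"models": "damaging", "revids": revid}).json()
     score = data["wikidatawiki"]["scores"][str(revid)]["damaging"]["score"]
     return score["probability"]["true"]
 
 if damaging_probability(123456789) > 0.9:   # illustrative revid and threshold
     print("queue for human review")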
This would be incredibly useful. The handling of vandalism on Wikidata needs to be up to the same standards as enwp, if not better. Thanks. Mike Peel (talk) 21:01, 18 November 2017 (UTC)
@Jc86035: Hi. This proposal as it stands currently is too broad and vague. It's way too much work to build all of the tools being asked for here. Do you have any objection if I edit the proposal and narrow the scope to investigating the vandalism issues and building one or more tools to prevent that? -- NKohli (WMF) (talk) 18:31, 21 November 2017 (UTC)
@NKohli (WMF) and MusikAnimal (WMF): Feel free to revamp the proposal to make it more specific. I haven't really spent a lot of time reverting vandalism and haven't really looked into the tools much, so input and changes from users with more experience would be much appreciated. Jc86035 (talk) 15:28, 24 November 2017 (UTC)
Support Just getting a good ClueBot-like system would be an improvement, but there's probably a lot of good that could be done with ORES and STiki. WhatamIdoing (talk) 19:17, 1 December 2017 (UTC)
Support I would be happy to have a recent-changes page that would show (filter) only items' general statement edits, description changes in my local wiki's language (I can't monitor languages I don't know), and changes in links to articles in my language. If an item doesn't have any description or links in my language, I don't want to see it at all. If the change was marked as patrolled, I don't want to see it either. Hummingbird (talk) 02:06, 5 December 2017 (UTC)
Support Sure, but IMHO the vandalism on Wikidata is not as bad as before; I have found much less recently... whilst it might be worse on the English Wikipedia than one year ago. I feel it is a human factor. I got reverted within minutes in an excessive way, while I could find within hours terrible vandalism on a key page that had gone undetected for much longer... Is Wikidata going to end up like this, flooded with patrollers who act mechanically and have a kind of superficial interest in the content? Let's hope they keep their "play" somewhere else... It's the patrollers, not the vandals, that start to worry me. I still believe that you first need more real users, and to push to increase that. --Alexmar983 (talk) 18:42, 10 December 2017 (UTC)