Talk:Fundraising 2011/archive/1
This looks ugly
-- ℳono 04:02, 15 July 2011 (UTC)
- Seriously. Who the hell wrote this thing? --ShakataGaNai ^_^ 04:14, 15 July 2011 (UTC)
- Hi. The diff calls Fundraising 2011/Updates at its present state, so it's a bit hard to put it into perspective that way. Cheers. Killiondude 06:08, 15 July 2011 (UTC)
- Still looks ugly. ℳono 17:10, 31 July 2011 (UTC)
- Theo10011 17:53, 31 July 2011 (UTC)
- I'm certain you'll agree that the aesthetics of this page are not the primary concern for this team right now.... Philippe (WMF) 19:01, 31 July 2011 (UTC)
- We can haz constructive criticism plz? Pcoombe (WMF) 16:14, 1 August 2011 (UTC)
Messy
The schema for 2011 fundraiser pages feels messy. I don't like that the page people can make comments on is the same page used for announcements from fundraising staff (this page). It might be more appropriate to format a page similar to w:WP:AC/N and have a discussion link under announcements that leads to the talk page or something. Killiondude 21:54, 1 August 2011 (UTC)
- That's going to be the plan. I have already created new pages to mirror something like AC/N; they just haven't been moved yet :) I'll get to that ASAP. 81.104.163.142 20:59, 4 August 2011 (UTC)
- Also, it's ugly; revert to Mono's version. Theo10011 21:28, 3 August 2011 (UTC)
- ℳono 21:25, 5 August 2011 (UTC)
User Made Banners
Are we even testing the user made banners, or have we all but given up on them? Last year I was exceedingly pissed by how dozens of user made banners were proposed and the whole thing turned into the 'Jimbo Show'. Jimbo in stoic pose, Jimbo in funny costume, etc. Sven Manguard 21:11, 3 August 2011 (UTC)
- I'm guessing it has to be very, very, very, very statistics-driven since it's all about numbers and nothing else. So I'm guessing Jimbo will be back. Theo10011 21:31, 3 August 2011 (UTC)
- Many of the user made banners were tried last year, but it rapidly became clear that the Jimmy banners performed much better. No one wants to rely on Jimmy all the time though, and that's why a major focus of our testing so far has been on finding alternatives. We've had some successes already (e.g. the personal appeal from Brandon Harris), and the creative team are currently busy interviewing people at Wikimania to try and find even more powerful stories. Pcoombe (WMF) 21:10, 4 August 2011 (UTC)
- There were many user-submitted banners which have still never been tested even though there were resources and infrastructure built specifically to do so: at least a majority, above 70% if I remember correctly. How do you know that 70+% didn't contain any that performed better than the Jimbo banners? Things like "Help us recruit more authors" weren't even tried! When will those be tested? What makes this specifically upsetting to statisticians is that we know the variance is large enough that there probably were top performers which have still not been tested. P.S. Please test photos of people with arms crossed against arms at sides and hands on hips (basic body language). b:User:Jsalsman 21:29, 21 August 2011 (UTC)
- There were several reasons why a lot of banners were not tested. One reason is that there were a lot that ran on similar themes to banners that were tested and weren't successful; another is that the contribution campaign never ran last year, which meant that all the banners for that never ran, unfortunately. There is a problem in that there is still only so much you can test in any one fundraiser. The more unsuccessful banners you run, the more you alienate readers, the longer the fundraiser has to go on for and the less money you raise. Although there is the possibility you could stumble across the new Jimmy, you don't want to be wasting banner space, and you have to take into account the time to design, build, quality-check, monitor and report. It is also more than just banners that we look at. We test appeals, landing pages, forms, banner images and banner text, which all take up production time, and it's not just stumbling across things either, it's about refining them as well.
- Also, last year the Foundation was still somewhat finding its feet in testing and fundraising, and although the capability to test existed, it really was limited in its resources. Lessons were learnt, though, and this year we should have a substantially better capacity to test. We have double the dedicated community technical staff, double the creative people, a dedicated statistician, and double those working on production, in addition to a donor response team as large as last year's.
- I will say that the last suggestion you mentioned is very similar to a test we ran earlier in the month (results to come), and what we found was that a picture of someone from the shoulders up has consistently performed well (these were banners with Ryan Kaldari and showed similar results to similar shots of Jimmy run last year). Jseddon (WMF) 22:28, 22 August 2011 (UTC)
- I went over this math last year, and I forget the exact numbers, but you only need about 34 times the reciprocal of the average click-through rate (less than 1000?) impressions per banner to get a statistically significant measure of its click-through rate at a 95% confidence level (see the sketch at the end of this thread). Your tests last year were wasting tens of thousands of impressions per banner. You have plenty of opportunity to test all the remaining banners in less than a day, if you wanted to. If I'm wrong, please tell me why. I know you test a lot of things, but the initial click-through is the first step of the process, and you really should at least optimize over the suggestions you already have before trying to squeeze a few percent out of some downstream element. I'm guessing that the test harness you have now must have some manual component and you need more coding to be able to test a set of banner text strings in batch mode. Is that right? James Salsman 00:04, 23 August 2011 (UTC)
- Speaking based on my own personal opinion as someone who has been involved in the fundraiser for 3 years (supporting and then running it for 2 years with WMUK, and now involved at the Foundation), I would say we have already learnt the lesson, and that is that personal appeals perform vastly better than non-personal ones. We saw the exact same thing in 2009 as well. This isn't a new lesson either; it's one that is at the core of fundraising for almost all non-profits, so it makes sense that we continue to look into this. In fact, our testing this year has shown we can equal Jimmy with other personal appeals. I also don't think we the community should expect the Foundation to test everything that's thrown at them. It's just not realistic given we don't have endless resources. I can promise you that we will be testing more this year, we will be testing more efficiently, and we will have more capacity, which will allow us to test heavily not just in the US and Canada in the English language but globally. You are right that we have to optimize that first click, but the appeal really is just as important, and that's been very clear through testing. You have to work through all stages and not just the first click. Jseddon (WMF) 17:00, 23 August 2011 (UTC)
- Thank you. I don't see any way to rule out the possibility that simple, direct appeals can perform as well as your best performers, because of the statistical variance among messages which has already been established by existing testing; in fact, I think it's mathematically likely that at least a few will. There is only one way to find out, and I hope you can obtain the tools necessary to test the large number of untested messages quickly, without manual intervention, using only a limited but statistically significant number of impressions, and I hope you will continue to share the results with the community so that more people can try to understand the patterns and refine the top performers. James Salsman 01:55, 24 August 2011 (UTC)
- Is that possible now? If not, what new software is needed to do this? 76.254.20.205 20:37, 25 August 2011 (UTC)
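For reference, a minimal sketch of the sample-size arithmetic discussed above, assuming banner clicks are modelled as Bernoulli trials and the goal is to estimate each banner's click-through rate to within a roughly ±33% relative margin of error at 95% confidence (the factor of about 34 is (1.96 / 0.33)²; the exact tolerance intended above is not stated, and the 4% CTR used below is purely illustrative):

    # Sketch only: normal approximation to the binomial for estimating a
    # click-through rate (CTR) to a given relative precision.
    def impressions_per_banner(ctr, relative_error=1.0 / 3, z=1.96):
        """Approximate impressions needed per banner.

        n = z^2 * (1 - p) / (relative_error^2 * p), which for small p is
        roughly (z / relative_error)^2 / p, i.e. about 34 / p.
        """
        return (z ** 2) * (1 - ctr) / (relative_error ** 2 * ctr)

    # Illustrative example: at a 4% average CTR, roughly 830 impressions per
    # banner would suffice, consistent with the "less than 1000" figure above.
    print(round(impressions_per_banner(0.04)))

Under these assumptions, spreading one day's banner impressions across many untested messages in batch looks feasible; whether the current CentralNotice test harness can rotate and report on that many banners automatically is the open question raised in the thread above.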
- Let's allow user-made banners but only show them to logged in users who opt to view them. This lets users continue to make new and creative banners. If only insiders see them, it won't matter if they underperform statistically. --Metametameta 10:28, 10 August 2011 (UTC)
- The CentralNotice software isn't set up to allow opt-in banners, and I don't think such a thing would be possible. I guess you could do something like that with gadgets on individual projects if you like, but that wouldn't need WMF involvement. On the English Wikipedia you can also contribute to Template:Wikipedia ads; I'm sure other projects have similar schemes.
- Note that although we're not soliciting banners from the community in the same way as last year, if you have a wonderful new idea that you think would work well, you're always welcome to suggest it. Also, we are seeking Wikimedians who want to share stories about their experiences, and why people should donate, for use in a personal appeal. If you're interested in that, you can contact wikistory@wikimedia.org. Pcoombe (WMF) 22:27, 12 August 2011 (UTC)
- The response is certainly population-dependent, so I would advise against this. For example, "Help us recruit more authors", which has still not been tested to my knowledge, could perform quite differently among logged-in users. James Salsman 01:57, 24 August 2011 (UTC)
A new model for fundraising: synergism with Movement Roles
Movement Roles is looking at proposed projects to add to WM. People like being part of something new and exciting-- 'protecting the old' just isn't as inspiring as 'creating the new'.
- Get a list of pre-screened proposals from the Movement Roles group
- Tell donors about the proposals and ask for their help to build this next new thing.
- If you have multiple proposals, let donors choose which proposals they personally support most.
At fundraiser time, you can say "This year we hope to start $NewProject! But we need your help to do it!"
Goals motivate. Give people goals-- new, exciting goals.
DonorsChoose.org has done great work because they tangibly connect the donation to the positive effects. Maybe that model could work here too. --Metametameta 10:46, 10 August 2011 (UTC)
- Afaik (and correct me if I'm wrong), any proposals to add new projects to Wikimedia are at a very early stage. But it's certainly something to bear in mind. Do you have a link to more information? Pcoombe (WMF) 22:33, 12 August 2011 (UTC)
- >Do you have a link to more information?
- Sam Klein appears to be heading this up, but that's my informal opinion just from the Haifa panel.
- They may not have any names we could use in time for Dec, but they definitely will have ones available for the 2012 fundraiser. So if we have even one potentially serious candidate-project name we could use, we could learn a lot about whether candidate-projects drive donations. My instinct is they'll be very strong drivers.
- >any proposals to add new projects to Wikimedia are at a very early stage.
- I think this depends on what you mean by "project". Creating brand new wikis all by ourselves is still stalled. But working with chapters, GLAMs, existing wikis, and software developers to provide exciting new services to our readers-- that is going to be happening in some form in 2012.
- I don't know if we'll call new things 'features', 'projects', 'tools', or what. But "brilliant new people are creating exciting, brand-new things! WMF is going to be helping them do it!"
- Doesn't that make you want to donate? --Metametameta 08:12, 13 August 2011 (UTC)