The first round of the Wikimedia Foundation's new financial arrangements has proceeded as planned, with Funds Dissemination Committee (FDC) staff publishing scores and feedback on funding applications from 11 entities: 10 chapters (independent membership organisations supporting the WMF's mission in different countries) and the foundation itself. The results are preliminary assessments that will soon be put to the FDC's seven voting members and two non-voting board representatives. The FDC will in turn send its recommendations to the board of trustees by 15 November, and the board will announce its decision by 15 December. Funding applications have been on-wiki since 1 October, and the talk pages of the applications were open for community comment and discussion from 2 to 22 October, though apart from queries by FDC staff there was little activity.
A total of US$10.4M was requested in round 1—almost the entire FDC budget for the first two six-monthly rounds. Figure 1 shows the requests. Wikimedia Deutschland, by far the largest chapter, topped the nation-based requests with $1.8M, followed by Wikimedia France at $1.0M, and Wikimedia UK at $0.9M.
A key part of the foundation's new financial arrangements involves the encouragement of good governance and transparency in the chapters, which are authorised to use the WMF's trademarks and to an increasing extent share responsibility for upholding the foundation's international reputation. The scores and accompanying feedback for round 1 are the first taste of just how this might play out as the FDC process evolves. The three staff members—Meera, a consultant at the Bridgespan Group, which is assisting in setting up the FDC; Anasuya Sengupta, director of global learning and grantmaking for the WMF; and Winifred Olliff, grants administrator for the WMF—provided this information for 13 criteria (labelled A–M). Their scores were based on a 1–5 Likert scale: the minimum score of 1 indicated "weak or no alignment with the criterion"; 3 indicated "moderate alignment"; and the maximum of 5 indicated "strong alignment". Figure 2 below shows the average score given for each of the 13 criteria. The graph spans the minimum to maximum scores, and the bars are colour-coded for the five dimensions:
Not surprisingly, criterion A gained a relatively high average score of 4.0, since it primarily measures the lip-service paid to the movement's global targets in the application form. A good case for B is more challenging to make, as it involves the potential to fulfil the targets; predictably, this scored a lower average of 3.6. Criterion C is a judgement of likely impact on the ground, which is harder to convincingly argue. C had an average of 3.4, approaching the "moderate" threshold. Germany and the WMF itself each received the best scores for the three impact criteria: 5, 5, and 4 respectively. Also highly rated on likely impact were Argentina, the Netherlands, and Austria.
Criterion D mainly concerns the human resources, skills, and capacity, and E the entity's record of achievement. These both scored an average of 3.3, with two entities marked down for their past records by comparison with their current capacities: Sweden ("a track record of success in many initiatives ... , but new initiatives are on the table which are more uncertain"), and the foundation more obviously, with a 5 for current capacity, and only 3 for past record ("WMF and the programs in question are well staffed with diverse capabilities", but "These initiatives have had mixed results."). Poor scorers were Australia (2 for both criteria, with concerns about challenges in hiring staff and a lack of clear indicators of past project performance) and Switzerland ("Significant concerns about entity's ability to execute on a plan of this magnitude, given staff and volunteer capacity").
On the whole, the leadership of the entities was rated slightly better, at 3.5—midway between moderate and strong. But there were two outliers: there was a complaint that the board of Wikimedia France "has communicated challenges around leadership and governance which may affect the entity's ability to execute successfully on the plan in the short term." And for Australia, stability of leadership "over the past few years" was an issue, as was the response "through the beginning of the FDC process". Leadership in Austria and the UK was rated in "moderate" alignment (3), and for the rest, moderate to high (4).
In terms of whether the funding request was reasonable given the proposed initiatives (G), "some initiatives are likely to be over- or under-budgeted" for seven entities (including the foundation and Germany), and for France, with a 2, the judgement was: "significant over- or under-budgeting, and/or has not thoroughly accounted for costs". On track-records of using funds efficiently, Australia was hammered for "underspending and financial conservatism", and Hungary for underspending "because of lack of execution and because of overestimation of costs."
These criteria drew the lowest average scores, with moderate or less-than-moderate alignment. France, Israel, Hungary, and Australia scored poorly. With complaints about a lack of detail common in the assessments, the benchmarking results may serve as a wake-up call for chapters to examine their procedures. Curiously, one chapter scored higher on having a "feasible" plan to track the proposed metrics (K) than on having "a plan to track the proposed metrics" (J).
The entities were rated at a disappointing average of 3.3 for criterion L (ability to replicate the plan elsewhere in the movement), suggesting that this aspect needs more attention when plans are conceived. Criterion M, subtly distinct from L in measuring the potential of plans to bring benefits to the movement, gained an average of almost 4; the foundation scored a 5, and 4s were handed out to all but two of the remaining 10 entities.
The queries by the three FDC-related staff members on application talk pages, and the scores and feedback provided to the FDC, suggest that insufficient detail was provided by some of the entities for some criteria. The staff have flagged the ability to execute plans, the efficiency of spending, and measures of success as areas that need the greatest improvement by applicants. The FDC will consider the staff input when it meets to decide on what to recommend to the board of trustees on 15 November.
Between 24 and 27 October, the WMF board of trustees met in San Francisco for a retreat and several meetings to discuss the foundation's programmatic scope and to decide on a number of governance issues.
The board approved the legal fees assistance program, which the community had supported in an RfC process in September. The program is designed to safeguard editors who face legal threats from third parties as a result of fulfilling Wikimedia community governance roles such as adminship or ArbCom membership.
The trustees approved resolutions concerning the conduct of two board committees. The audit committee was reformed to reflect the reforms of Wikimedia's financial structure, and the human resources committee charter was approved, after the board voted through the language committee's framework pre-meeting on 16 October. The changes to the institutional frameworks have been accompanied by new office appointments across board committees.
Matt Halprin, who has served since 2009 as one of the four board-appointed trustees, will not stand for re-appointment at the end of his current term in December 2012. He has driven governance reforms such as the creation of the board governance committee, the annual evaluation of trustee performance and of the board's working procedures, and the public transparency of trustees' votes on board resolutions.
The board also looked at the proposals by Sue Gardner, the foundation's executive director, to strengthen the focus of the WMF and put more resources into core projects such as the visual editor, tackling the switch from desktop to mobile devices among readers and editors, and editor retention.
This week, we're checking out ways to motivate editors and recognize valuable contributions by focusing on the awards and rewards of WikiProject Military History. Anyone unfamiliar with WikiProject Military History is encouraged to start at the report's first article about the project and work their way forward. While many WikiProjects provide a barnstar that can be awarded to helpful contributors, WikiProject Military History has gone a step further by creating a variety of awards with different criteria, ranging from the all-purpose WikiChevrons to rewards for participating in drives and improving special topics, to medals for improving articles to A-class status, to the coveted "Military Historian of the Year" award. We interviewed Grandiose, secretlondon, Nick-D, Ian Rose, Crisco 1492, Marcus Qwertyus, Hchc2009, and Dank.
How long have you been a member of WikiProject Military History? Do you prefer working on articles related to particular subjects, people, or time periods?
Tell us about the project's contest department. When did it start and did you take inspiration from any other WikiProjects? How are contributors rewarded for participating in the monthly contests? Has it actually helped motivate the project's members to improve articles?
The project also hands out "WikiChevrons", service awards, and a variety of other goodies throughout the year. Please describe some of these awards and why the project started offering them. Are new recipients of the awards honored in the project's newsletter or in any other ways?
Has WikiProject Military History hosted any backlog elimination drives like the Guild of Copyeditors and WikiProject Wikify? How do these backlog drives compare to the four ongoing "Operations" hosted by WikiProject Military History? What are the benefits and limitations to these kinds of goal-oriented activities?
The project's large assortment of awards is likely related to the sheer size of WikiProject Military History's community. Would contests, awards, and drives like these be feasible for smaller projects? What are the best ways to motivate contributors regardless of a project's size and scope?
Anything else you'd like to add?
Next week, we'll sing for our supper. Until then, listen to the advice in our previous reports.
The TimedMediaHandler extension (TMH), which brings dramatic improvements to MediaWiki's video handling capabilities, will go live to the English Wikipedia this week following a long and turbulent development, WMF Director of Platform Engineering Rob Lanphier announced on Monday (and later clarified).
The extension, which has been under development for the best part of two years, will introduce a new interface with subtitle support and a simple "|start=2.3|end=5" syntax for extracting video segments. Other features listed include "multi-format multi-bitrate transcoding with auto source selection, ... gallery and search pop-up players, viral iframe sharing / embedding, etc." although it is unclear how many of these will be available at launch. Deployments to other wikis and Commons have been pencilled in for the coming fortnight.
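Based on the parameter syntax quoted above, embedding a short clip might look roughly as follows once the extension is live (the file name here is hypothetical, and the exact parameter set available at launch may differ):

```
[[File:Example_video.ogv|thumb|start=2.3|end=5|A sample clip running from 2.3 s to 5 s]]
```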
Readers with longer memories will note that some of these features have already surfaced in the 2009-released mwEmbed gadget, the development of which preceded work on TimedMediaHandler. The two have a number of features in common, most notably their choice of default interface, the "Kaltura" video player, which was developed starting in January 2008 with assistance from the for-profit company of the same name and demoed at Wikimania 2009 (see contemporary Signpost coverage). The development path since mwEmbed has focussed on performance, security and code review concerns. Accompanying work has focussed on serving video more efficiently, and it is likely that any TMH deployment will also make the possibility of accepting a larger number of video input formats a more attractive option to Wikimedia decision makers.
The deployment, should it go according to plan, is likely to be warmly welcomed by developers and readers alike. Among more seasoned developers, however, the smiles will surely be born less of joy and more of relief that a project spanning four and a half years of legal concerns, technical debates over code quality, endless delays and an uncertain payment structure has finally come to fruition.
Not all fixes may have gone live to WMF sites at the time of writing; some may not be scheduled to go live for several weeks.
Thirteen featured articles were promoted this week:
Ten featured lists were promoted this week:
Nine featured pictures were promoted this week:
One featured topic was promoted this week:
One featured portal was promoted this week:
A paper in the Journal of the American Society for Information Science and Technology, coming from the social control perspective and employing the repertory grid technique, has contributed interesting observations about the governance of Wikipedia.[1] The paper begins with a helpful if cursory overview of governance theories, moving towards the governance of open source communities and Wikipedia. That cursory treatment is not without errors, though: for example, the authors mention "bazaar style governance" but attribute it incorrectly—rather than the 2006 work they cite, the coining of the term dates to Eric S. Raymond's 1999 The Cathedral and the Bazaar. The authors interviewed a number of Wikipedians and identified various formal and informal governance mechanisms. Only one formal mechanism was found to be important—the policies—while seven informal mechanisms were deemed important: collaboration among users, discussions on article talk pages, facilitation by experienced users, individuals acting as guardians of articles, inviting individuals to participate, large numbers of editors, and participation by highly reputable users. Notably, the interviewed editors did not view elements such as administrator involvement, mediation or voting as important.
The paper concludes that "in the everyday practice of content creation, the informal mechanisms appear to be significantly more important than the formal mechanisms", and notes that this likely means that the formal mechanisms are used much more sparingly than informal ones, most likely only in the small percentage of cases where the informal mechanisms fail to provide an agreeable solution for all the parties. It was stressed that not all editors are equal, and certain editors (and groups) have much more power than others, a fact that is quickly recognized by all editors. The authors note the importance of transparent interactions in spaces like talk pages, and note that "the reported use of interaction channels outside the Wikipedia platform (e.g., e-mail) is a cause for concern, as these channels limit involvement and reduce transparency." Citing Ostrom's governance principles, they note that "ensuring participation and transparency is crucial for maintaining the stability of self-governing communities."
This paper looks at the relationships between Wikipedians from the social network analysis perspective (nodes are defined as authors, and links as indicators of collaboration on the same article), treating Wikipedia as an online social network, similar to Facebook.[2] The authors note that while Wikipedia is not primarily a social network site, it has enough social networking qualities to justify being seen as one. They find that Wikipedia can be a very good source of information about online relationships between actors, due to the transparent and public nature of its data. The authors present a brief overview of previous work taking a similar approach. Rather unsurprisingly, they find that in the very early days of Wikipedia, editors were much more likely to know one another and collaborate on articles than in later years. They also find that the number of editors is highly correlated with the editors' familiarity with one another, and is more relevant than the number of articles: from 2007, when the number of editors roughly stabilized, so did their levels of connectedness through collaboration.
The paper shows that with very few exceptions (low activity, specialized editors) all Wikipedia editors are connected to one another, and there are no isolated groups (or topic areas). The authors also find that the Wikipedia collaborations can be analyzed using the small-world network approach (suggesting that the distance between editors, defined as the average path length, with links being articles contributed to, is very small). The article focuses primarily on the mathematical side of social network analysis, and unfortunately offers little commentary or analysis of the findings. The validity of the results can also be questioned, as the authors treat bots and semi-automated accounts as "regular authors"; considering that the majority of Wikipedia articles have been edited by bots or editors using scripts, the finding that editor A can be connected to editor B through the fact that they both edited different pages which in turn were edited by the same bot or script-equipped editor is hardly surprising.
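The kind of measurement the paper performs can be illustrated with a toy sketch (invented editors and articles, not the study's data): project a bipartite editor–article graph onto editors, then check connectedness and the average path length that underlies the small-world claim.

```python
# Illustrative sketch using networkx: editors become linked when they
# have edited the same article, mirroring the paper's link definition.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical edit history: (editor, article) pairs
edits = [
    ("Alice", "Battle of X"), ("Bob", "Battle of X"),
    ("Bob", "HMS Y"), ("Carol", "HMS Y"),
    ("Carol", "Treaty of Z"), ("Dave", "Treaty of Z"),
]

B = nx.Graph()
editors = {e for e, _ in edits}
articles = {a for _, a in edits}
B.add_nodes_from(editors, bipartite=0)
B.add_nodes_from(articles, bipartite=1)
B.add_edges_from(edits)

# Project onto editors: an edge means "co-edited at least one article"
G = bipartite.projected_graph(B, editors)

connected = nx.is_connected(G)              # no isolated groups
avg_dist = nx.average_shortest_path_length(G)  # small-world distance, here ~1.67
```

The projection step is the design choice that matters: treating bots as "regular authors" at this stage is exactly what lets two otherwise unrelated editors end up connected, as the review notes.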
Earlier this month, the Journal of Personality Assessment published a paper titled "More Challenges Since Wikipedia: The Effects of Exposure to Internet Information About the Rorschach on Selected Comprehensive System Variables".[3] Summarizing past events (well-known to Wikipedians) from the point of view of psychologists adhering to the Rorschach test as a diagnostic tool, they write: "The availability of Rorschach information online has become of even greater concern in the last few years, since James Heilman, an emergency-room physician from Canada, posted images of all 10 Rorschach inkblots on the popular online encyclopedia, Wikipedia (Cohen, 2009; Wikipedia, 2004[sic]). This Wikipedia article also describes “common responses” to each blot, which frequently correspond to percepts that would be scored Popular under the current coding rules of Exner’s (2003) Comprehensive System (CS)." They remark that "Although many psychologists decried the publishing of the Rorschach inkblots on Wikipedia, before this study, no published studies had examined whether viewing the inkblots and other Rorschach information posted on Wikipedia would impact examinees’ scores." (As reported last year in this newsletter - see "Psychologists gauge impact of Wikipedia's Rorschach test coverage" - one of the authors had coauthored a study that had investigated the rise in prominence of information about the test on the Internet due to Wikipedia, but not tested its impact on the test itself.)
Before reporting their own results, the authors cite an unpublished dissertation,[4] which had compared test subjects' Rorschach results before and after reading the article. Its tentative results suggested a "significant increase in shading responses [which] then likely affected the corresponding increase in [one variable]", but otherwise indicated "that the majority of CS variables do not appear to be affected by exposure to information in the Wikipedia article."
The authors' own study involved 50 participants, half of whom had to read an excerpt of the Rorschach test article (while the control group read an excerpt of the Philadelphia Phillies article) before trying to "fake good" on the test, impersonating a character who would have a huge incentive to achieve certain results in the test ("Jack is a 35-year-old father of two wonderful children ... The judge ordered that Jack have a psychological evaluation done to determine whether or not he should be given custody of his kids.")
Among the test features defined in the "CS" system, only "Populars" was found to differ significantly "between the control and experimental groups [...] likely due to the fact that the Rorschach [Wikipedia article excerpt] provided pictures of each of the inkblots, along with "common responses," which, in many cases, corresponded to those responses that are actually coded as Popular according to the CS. However, the Wikipedia information on its own did not appear to directly impact other variables associated with perceptual accuracy."
Commenting on the paper, Heilman told this research report:
That reading about the Rorschach before testing affects scores in a group of "normal" individuals is not really surprising. This analysis, however, does not show that the availability of information regarding psychological tests affects clinically important outcomes.
A paper titled "Is Wikipedia Inefficient? Modelling Effort and Participation in Wikipedia"[5] will be presented at next year's HICSS '13 conference. The authors' main research concern is whether the saturation observed in the growth of Wikipedia is due to the maturity of the project, or is instead caused by editorial obstacles and inefficient collaboration processes. To address this question, they investigate the efficiency of collaboration in 39 language editions of Wikipedia. Two processes are studied: (1) editor recruitment, the ability of Wikipedia projects to attract editors from the pool of potential editors; and (2) the article creation process. For each process, corresponding input and output parameters are chosen, and the relative efficiency of the language projects is calculated by applying data envelopment analysis (DEA). For the editor recruitment process, the input parameter is the size of the population that speaks the language, has Internet access, and has tertiary-level education; the output is the number of editors contributing to the Wikipedia edition in that language. The efficiency of some language editions, e.g. Estonian, Hungarian, Norwegian, and Finnish, is shown to be much higher than that of others, e.g. Malaysian, Arabic, and Chinese. A decreasing return to scale is reported for all of the studied projects; however, the effect is more pronounced for larger ones. In other words, larger projects can be considered inefficient in attracting new editors. For the production process, the number of editors is taken as the input, with three outputs: the number of edits, the number of articles, and the number of featured articles. Here, the results generally suggest that for the larger projects the returns to scale are systematically decreasing, showing the difficulty of maintaining an efficient workflow as a project grows.
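The DEA method the paper applies can be sketched with toy numbers (the four "units" below are hypothetical language editions, not the study's data): each unit's efficiency is the solution of a small linear program comparing it against the best performers.

```python
# A minimal sketch of input-oriented CCR data envelopment analysis.
# One input (eligible population, millions) and one output (editors),
# echoing the paper's editor-recruitment setup; the numbers are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[1.0, 5.0, 40.0, 300.0]])         # inputs,  shape (m, n)
Y = np.array([[300.0, 900.0, 3000.0, 9000.0]])  # outputs, shape (s, n)

def dea_efficiency(X, Y, o):
    """Efficiency of unit o: minimise theta subject to
    X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # objective: theta
    A_in = np.hstack([-X[:, [o]], X])           # X lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [dea_efficiency(X, Y, o) for o in range(X.shape[1])]
# Scores lie in (0, 1]; a unit scoring 1 sits on the efficient frontier,
# matching the paper's notion of purely relative efficiency.
```

Note that the scores are defined only relative to the units included, which is exactly why the review below points out that excluding the English Wikipedia removes a natural benchmark.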
Some projects, such as the Malaysian and Persian Wikipedias, are not as successful in editor recruitment but are still efficient in creating articles given the capacity of their human resources. As for the quality of articles, it is shown that in larger projects like French and German, the focus is more on increasing the quality of the existing articles, whereas in intermediate-size projects, e.g., Russian and Italian, the main effort is still on increasing the number of articles.
The paper notes a positive correlation between efficiency in the number of edits and efficiency in the numbers of articles and featured articles. Among the limitations of the study, the authors name the time period of the analysed data, which is limited to one month, and possible flaws in the demographic data used to estimate the input of the editor recruitment process. Excluding contributions from unregistered users for technical reasons could also have biased the results. Although the paper opens by raising the question of the efficiency of Wikipedia in general, it ends up comparing different language editions to each other and presenting the results only in relative terms. The English Wikipedia, which could serve as a benchmark for such comparisons, is entirely excluded from the study. More importantly, applying data envelopment analysis, a method originally introduced for evaluating the activities of not-for-profit entities participating in public programs, to Wikipedia activity data is not well justified.
How students find and evaluate information is a perpetual concern for librarians, who act not only as collection curators but also as educators and guides to the best resources for students' information needs. Since the arrival of Wikipedia, librarians have grappled with how the site fits in with and compares to a more traditionally published and reviewed collection, and with how best to help students understand and use it. This study is an up-to-date addition to the body of literature on the subject.[6] Colón-Aguirre and Fleming-May use a coded qualitative interview approach to understand undergraduate opinions about Wikipedia, compared with students' use of and attitudes towards traditional library resources.
The authors conducted interviews with 21 undergraduate students in one college in a large public university in the United States. Based on student responses about their research habits, the authors divided their respondents into three categories: avid library users, occasional library users, and library avoiders. While all categories of students used Wikipedia, there were differences in purpose; avid library users used Wikipedia to gather background information before turning to library-supplied resources like books and journals, while library avoiders relied more on Wikipedia and were lost if they could not find the information they needed on the site or via Google searches. Most of the students interviewed reported getting to Wikipedia via Google or other search engines, and the authors do not report any deep awareness by the students of how the site works or how to evaluate articles; awareness of ability to contribute was not mentioned. Student use of the library versus Wikipedia was also influenced by their perceptions of library resources being difficult to use (both in-person stacks and subscription online resources), particularly compared to the ease of using Wikipedia and online searching; students were also swayed in whether they used the library by their assignment requirements and faculty advice, including professors who advised against using Wikipedia as being "not credible" and required using library resources specifically.
The authors conclude that librarians need to work more with teaching faculty to craft research assignments, and that hands-on instruction in the use of the library does aid student comfort with research. This short article will be of most interest to practicing librarians and undergraduate instructors, who will doubtless see reflections of their own students in the interviews. Wikipedians who are involved in academic classroom education and outreach will also find this study interesting, if for no other reason than to reinforce the importance of helping students become more knowledgeable about the ways that Wikipedia works with and differs from traditional academic publications.