- J-Mo, I'll attempt to respond to your points in the order you have raised the issues, to the extent it is feasible while summarizing what I believe are the concerns that have been raised here.
- 1. First off, thank you for confirming the research did not proceed as planned before community concerns began to surface; based on the timeline presented in the Meta proposal and the date at which push-back began to develop from the community, that was not at all clear. I was hoping Robertkraut or Diyi would speak to that question, but given the strong sense of certainty you provide in your assurances, I assume you are privy to additional information (not made public in the Meta or VP discussion) as to where in their process they stopped. Therefore, if you are saying they ceased pursuing this project before any engagement in human subject testing, we can of course take you at your word about that and put those concerns to bed. I would note, however, that this episode underscores a need for the local communities who are to be the subject of research to be directly informed of such proposals so that objections can be raised much sooner--rather than just before (or during) the actual research itself. If the goal is to solicit the community's feedback on a proposal, a page on meta, absent promotion on the target project itself, is never going to be very effective in addressing concerns before they become urgent. That's the first thing that needs to change in our procedures.
- That issue addressed, I must tell you that I nevertheless do not at this moment have as rosy an outlook as you do with regard to the professionalism and ethics displayed in how this research was approached. I'm going to hazard a guess here, based upon your previous comments in the VP discussion and your current assessment here, that you have only ever been a 'researcher' in the commercial/private meaning of that term. Because, had you ever undertaken research in the behavioural sciences in an academic setting, I believe you would better recognize why there are some serious questions here with regard to how these researchers approached issues such as informed consent and privacy protections for human subjects. IRB approval and hand-waving regarding "low risk" or not, I must tell you that approaching subjects in this fashion would not generally be seen as acceptable by most researchers in the social, behavioural, and psychological sciences--nor by most institutions and professional associations that provide oversight for such research. Indeed, going even further, I believe this research, had it proceeded, could have run afoul of federal regulations (and potentially state statutes) governing the testing of human subjects--particularly with regard to privacy protections and (even more so) the use of underage subjects, for whom the assent of the subject and the permission of their parent or guardian is always required (outside a handful of exceptions which do not apply here) and cannot be assumed. It is for exactly this reason that I wonder if the IRB was given all salient information here when making their determination, because I have a hard time seeing how they would have approved this research had they known that nearly a fourth of Wikipedia's editors are below the requisite age of independent consent that is relevant to this particular research.
- You have spoken repeatedly in the previous discussions and here about other "potentially less ethical" researchers invading the project if we do not present a welcoming front to those willing to submit proposals. But it is worth noting that in every example you have provided thus far, the research in question at least made the individual being approached aware of the fact that they were talking to a researcher, and sought their willing engagement with the process. While I agree that the examples you provide nevertheless present issues that we as a community (and as individuals) should be concerned about, such voluntary procedures--those which use surveys and passive studies of previous (non-induced) data--are considered by oversight entities (both governmental and institutional) to be fundamentally different from the process of exposing a subject to a test stimulus and then observing their reaction. These types of experiments are generally classed separately and, even in the rare case where an exception for consent might be permissible, that exception is not made for minors, and there must be controls for the protection of the privacy of all subjects--something that would have been infeasible on a platform such as this. My main point under this first section of response is that the ethical questions raised here are by no means trivial ones, and they aren't the type you should be eager to dismiss simply by repeatedly re-asserting that you personally think they are well balanced to achieve benefits with "low risk".
- 2. This is not really where my main concerns lie, and obviously I cannot speak on behalf of those who have raised these concerns. But I will say that your assertion that Facebook does not typically get privileged access to data in its grant agreements is by no means a universal principle--to be fair to you, you did throw in the "generally" there, but I think the general thrust of your statements in this area attempts to provide a degree of assurance that is undue given Facebook's historical (and indeed recent) practices--especially insofar as I presume that you have no particular knowledge as to what degree of data sharing was agreed to with regard to this particular grant. In fact, this is a big problem for us in general, and I don't see any reason why we should not require disclosures of both financial backing and data-sharing arrangements made by any researcher wishing to advance a proposal here; there's no reason they shouldn't be required to show the same level of transparency towards us as they do the review boards at their respective institutions. As a project, Wikipedia has as much skin in the game (including potential liabilities) as any party, and if researchers wish to avail themselves of this platform for their research, they can be up front with us about anything that might look like a conflict of interest or a source of potential exposure for the privacy and personal data of our community members.
- 3. I'm not sure as to that myself; I presume (absent any information to the contrary) that the previous research used sourced data rather than direct human subject testing, and so it is not especially relevant, other than for Kudpung's stated purpose of showing a previous close working relationship between Mr. Halfaker and the researchers here. However, I suspect part of the reason this was raised was because it seemed as if the WMF's researchers were circling the wagons to insulate the study's researchers (and the proposal itself) from criticism. As someone who did not participate in that thread and now is on the outside looking in, I must tell you that it's very difficult to tell how much you and EpochFail were commenting as community members there and to what degree you were speaking in your WMF capacities, which I'm sure you will agree is potentially problematic. Further, there are places there where I would describe your comments as needlessly antagonistic towards expressed community concerns. I understand that this was obviously motivated by a desire to protect a pair of individuals whom you respect and whom you felt had acted in good faith. But the ideal way of doing this is not to accuse others of bad faith, as you did during that discussion and elsewhere. I see no one in the entirety of that thread who seemed to be acting out of anything but concern for the project and its users, or in any other way which would entail "bad faith" as that term is usually used on this project (vandalism, trolling, gamesmanship, sockpuppetry, etc.). Even where they were focusing on issues that you and I may agree were not the most salient issues to contemplate (devaluation of the barnstar and so forth), I wouldn't say that they were completely irrelevant concerns for the community--and, in any event, the community members were obviously being sincere.
I think that the "bad faith" wording and some other comments represent poorly chosen language on your part that may have served to inflame perceptions of bias on the part of the WMF researchers in this situation.
- 4. You're right, we don't have access to the IRB's thinking, and in my view, that's another problem. Before we ever consider human testing research on this project again, we may very well consider requiring that exact information; IRBs are required by law to keep minutes of their research review meetings in a very prescribed format, and we could consider requiring that a copy of these documents be presented with any human subject proposals in the future. After all, we have our own ethical obligations to our community members, and I see no reason why we should have less access to the researchers' accounting of the ethical questions raised by their study and how they intend to control for them, for the purposes of making our own decision on whether to allow it to proceed.
- 5. & 6. I'm not sure just how effective our procedures are here; from where I am standing, they could do with some strengthening, and this situation demonstrates precisely why. I also feel you are presenting us with a false choice between relaxing protections to make "good actors" feel welcome and actively driving them underground. First off, if they are truly good actors as that term should be applied to behavioural researchers, they wouldn't be inclined to subvert our rules and normal ethical considerations based on how warm a welcome they receive. If a given researcher can't be trusted to comport themselves with our community rules and the mandates of their own profession with regard to ethical research, simply because they are concerned about being grilled here, and they would consider just ignoring our processes instead...then they certainly can't be trusted with the much more demanding responsibilities of protecting user confidentiality and seeking proper informed consent--and they therefore aren't the type of person we should be tailoring our approach towards in any event.
- Also, I disagree that there's nothing to be done about the "bad actors". Where they are simply hoovering up information, of course we can't stop that, but there's also no reason to stop them, insofar as everyone who participates on this project agrees to allow their contributions and statements to be freely accessible and usable for almost all purposes. But where an outside researcher is trying to trigger a response, that kind of activity is going to leave a record, and people are going to notice suspicious patterns. If "nefarious" researchers attempt this without having their projects approved by the community and seeking informed consent, we can shut them down just like we would any other WP:disruptive user. And supposing that they did get past our guard and engage in shady behaviour, and they are academics, as soon as they publish or present findings, they can be reported at many different levels of oversight, with potential professional complications for them--depending on the exact nature of their conduct. Commercial researchers, of course, are a little less amenable to such controls unless they do something blatantly illegal or which would bring them negative press. But commercial researchers (to the extent they come here, which I think is uncertain) are probably not likely to come through the approval process in any event, and there's no point in trying to adjust it to their whims.
- 7. & 8. Good faith is a two-way street. I can't disagree with you that research is of vital importance to us, but it has to be approached in a non-disruptive and ethical fashion or else it will be a net negative to the project. And no researchers should ever be allowed to waltz through the front door to conduct whatever tests they want on our contributors, based solely on their own idiosyncratic analysis (not even when informed by their own IRB process) as to whether the benefits outweigh the risks and consequences. The community should always conduct its own analysis of that question, and should be afforded a high degree of transparency with regard to the researchers' intentions, methodologies, previous institutional reviews, the uses to which their data will be put (especially with whom and in what way confidential information will be shared), and any potential conflicts of interest. And regardless of the answers to those questions, where their work involves treating our community members as test subjects, we should always, without a single exception, require that they get informed consent from anybody they wish to utilize in that fashion. For anyone who is unwilling to meet those requirements, WP:NOTLAB applies, and accounts operating outside of our policies should be shut down, same as with any other disruptive user.
- As to your final paragraph, I agree with you thoroughly. I would only add that I don't think anyone has treated the researchers here as the "de-facto enemy". The concerns raised have been reasonable and in keeping with the objective of learning from this episode and designing more robust procedures that will benefit both researchers and the community. Nobody's objective, insofar as I have seen, is to "shame" anyone. But there are serious questions raised here as regards respecting the privacy and autonomy rights of volunteers to this project. They are necessary questions to contemplate whenever we consider approving research on this platform. Snow let's rap 04:15, 7 February 2019 (UTC)