this post was submitted on 11 Jun 2023
1361 points (100.0% liked)

As quoted from the linked post.

It looks like you’re part of one of our experiments. The logged-in mobile web experience is currently unavailable for a portion of users. To access the site you can log on via desktop, the mobile apps, or wait for the experiment to conclude.

This is separate from the API issue. This will actually BLOCK you from even viewing reddit on your phone without using the official app.

Archive.org link in case the post is removed.

https://web.archive.org/web/20230611224026/https://old.reddit.com/r/help/comments/135tly1/helpdid_reddit_just_destroy_mobile_browser_access/jim40zg/
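For context on how "a portion of users" typically gets picked: below is a minimal, hypothetical sketch of deterministic percentage-based experiment bucketing, the usual mechanism behind this kind of rollout. The experiment name and the 10% figure are invented for illustration; Reddit has not published how its assignment actually works.

```python
# Hypothetical sketch of percentage-based experiment bucketing.
# The experiment name and the 10% rollout are made-up examples.
import hashlib

def in_experiment(user_id: str, experiment: str, percent: int) -> bool:
    """Deterministically map a user to a 0-99 bucket and compare it to the rollout %."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Example: roughly 10% of logged-in mobile-web users get the "use the app" wall.
if in_experiment("example_user", "block_mobile_web", 10):
    print("serve the app-install wall instead of the normal page")
```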

[–] MJBrune@beehaw.org 15 points 1 year ago (2 children)

The fact that they are running experiments on their users without opt-in is disgusting. In what world is that okay? Facebook also ran psychological experiments on its users without telling them, like shadow-banning people just to see if they felt more alone. It's gross.

[–] TauZero@mander.xyz 13 points 1 year ago (2 children)

If I tried to do these experiments in an academic setting I would be run out of the university by the IRB, but apparently if you experiment on humans for "business" it's A-OK.

[–] Laxaria@beehaw.org 8 points 1 year ago* (last edited 1 year ago) (2 children)

I don't like the kind of A/B testing that large corporations do, but I'm not so certain that this particular user experiment Reddit is running would trigger rigorous IRB review in most academic settings.

Firstly, let's talk about consent. An IRB can make a determination to waive the requirement to obtain informed consent for research. The IRB must find and document:

(i) The research involves no more than minimal risk to the subjects;
(ii) The research could not practicably be carried out without the requested waiver or alteration;
(iii) If the research involves using identifiable private information or identifiable biospecimens, the research could not practicably be carried out without using such information or biospecimens in an identifiable format;
(iv) The waiver or alteration will not adversely affect the rights and welfare of the subjects; and
(v) Whenever appropriate, the subjects or legally authorized representatives will be provided with additional pertinent information after participation.

Secondly, some kinds of research are exempt from documenting informed consent:

(i) That the only record linking the subject and the research would be the informed consent form and the principal risk would be potential harm resulting from a breach of confidentiality. Each subject (or legally authorized representative) will be asked whether the subject wants documentation linking the subject with the research, and the subject’s wishes will govern;
(ii) That the research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside of the research context; or
(iii) If the subjects or legally authorized representatives are members of a distinct cultural group or community in which signing forms is not the norm, that the research presents no more than minimal risk of harm to subjects and provided there is an appropriate alternative mechanism for documenting that informed consent was obtained.

Insofar as the kind of UI/UX A/B testing being employed poses minimal risk to the participant, and waiving the need to obtain informed consent has no adverse effect on the participant, an IRB is likely to determine that consent can be waived. It would not surprise me if universities themselves used UI/UX A/B testing on their own websites, both external- and internal-facing, to improve them. I doubt many would explicitly file with their IRB to conduct such an experiment, though some may reach out to ask whether there are specific concerns.

However, at a level even above informed consent is the question of whether the research is subject to IRB review to begin with: some categories of research are exempt from human subjects review altogether.

For UI/UX A/B testing, the following exemption is likely to apply, considering that a lot of UI/UX A/B testing only cares about aggregate responses to human behavior:

(2) Research that only includes interactions involving educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures, or observation of public behavior (including visual or auditory recording) if at least one of the following criteria is met:

(i) The information obtained is recorded by the investigator in such a manner that the identity of the human subjects cannot readily be ascertained, directly or through identifiers linked to the subjects;
(ii) Any disclosure of the human subjects’ responses outside the research would not reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects’ financial standing, employability, educational advancement, or reputation; or
(iii) The information obtained is recorded by the investigator in such a manner that the identity of the human subjects can readily be ascertained, directly or through identifiers linked to the subjects, and an IRB conducts a limited IRB review to make the determination required by §46.111(a)(7).

Now, one can argue that the kind of profile information websites like Reddit have on you is identifying. In practice, "personally identifying information" has specific definitions, and the information Reddit has on you is unlikely to satisfy them.

Finally, this particular set of charts is a helpful reference for whether research even qualifies for IRB/human subjects review to begin with, and walks through the decision points.

In short, if you ran the exact same experiment Reddit is running in a university setting, on a university website (for example, testing whether requiring visitors to create a university website account to view an article published by the university press, versus not requiring one, changes their behavior), I doubt you would be run out of the university by the IRB. Perhaps the IRB might have a stern word if you failed to check in with them beforehand, but even then I'm not confident that would be the case.

Now, with ALL of that said, I still dislike the fact that businesses run these experiments. It's definitely not ethical, in the sense that businesses should not be aggressively using their everyday users as guinea pigs for their experiments, but merely being a shitty thing to do is not by itself sufficient to merit the full IRB process.

[–] TauZero@mander.xyz 3 points 1 year ago (1 children)

Thank you for providing examples of specific language used in regulating research ethics! It confirms my suspicion that the type of experiments done by big companies on their users violates most if not every single one of these requirements. Here's my take on it:

The research involves no more than minimal risk to the subjects;

If it were A/B testing of simple things like whether the "buy now!" link is underlined or not, I'd agree. But the situation linked in the OP is exactly that of a user who was so upset by unexpected behavior secretly thrust upon him that he had to go online to ask others for help, wondering whether he was just stupid and repeatedly doing something wrong. Yes, he was not literally infected with syphilis by shady doctors, but emotional harm is very much real, and in hindsight the risk of it was not minimal. Or that experiment Facebook did, shadowbanning people at random to see whether their feelings of depression would increase - WTF?

The research could not practicably be carried out without the requested waiver or alteration;

Research involving deception is carried out all the time, and researchers still manage to get consent in advance. They just don't tell you ahead of time exactly what kind of deception will take place. In tech, the companies definitely have the option of an OPT-IN experiment program. Firefox, for example, has a "nightly" version for users who opt in by downloading it because they want to test out the latest features and sometimes participate in A/B experiments. The companies CHOOSE not to do this, preferring to experiment on innocent unwitting users at large, because *gasp* there is no law stopping them.
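To illustrate how little engineering an opt-in program actually requires, here is a minimal sketch; the preference field and function names are hypothetical, not any real Firefox or Reddit API:

```python
# Hypothetical sketch: gate experiment enrollment behind an explicit,
# user-controlled preference (a settings checkbox, a beta channel, etc.).
# Field and function names are invented for illustration.

def consented_to_experiments(user_settings: dict) -> bool:
    """Only users who explicitly opted in are candidates for experiments."""
    return bool(user_settings.get("participate_in_experiments", False))

def assign_variant(user_id: str, user_settings: dict) -> str:
    if not consented_to_experiments(user_settings):
        return "control"  # never enroll non-consenting users
    # ...deterministic bucketing of consenting users would go here...
    return "treatment"

print(assign_variant("example_user", {"participate_in_experiments": True}))
```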

Whenever appropriate, the subjects or legally authorized representatives will be provided with additional pertinent information after participation.

The victims of corporate A/B testing are typically never informed after the fact. Again, there is no law requiring it. The user in the OP only found out because he started asking around online and one of the admins happened to see it. Don't kid yourself hoping he would otherwise have been informed afterwards. The admin was not acting as a pertinent legally authorized representative for the purposes of this question. Much more likely he was acting beyond his authorization, and would be disciplined for the unauthorized disclosure and have his response deleted if it ever became trouble for the company.

Each subject (or legally authorized representative) will be asked whether the subject wants documentation linking the subject with the research, and the subject’s wishes will govern;

Was never asked, does not apply.

That the research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside of the research context;

More than minimal risk of harm, unless you are sociopathic enough to believe emotional harm is not real. It is also odd that corporations that love to thrust EULA missives at you to sign all the time just happen to decide that a written consent-to-experiment form is not "normally required". Consent to random experiments buried on page 132 of an EULA is not informed.

If the subjects or legally authorized representatives are members of a distinct cultural group or community in which signing forms is not the norm

Does not apply.

Research that only includes interactions involving educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures, or observation of public behavior

Watching server logs for traffic patterns is fine. It counts as observation of public behavior for me. Actively interfering with users by thrusting them into atypical situations like randomly shadowbanning them is not.

a question of whether the research contributes to generalizable knowledge

True, if it's not for generalizable knowledge then it's not "research" covered under 45 CFR 46.101. Which is why what the corporations are doing is not literally illegal. But if I walk around testing how close I can swing my fist to passersby's noses without hitting them, I'm not in the clear based on "hurr durr technically it's not research because it's not generalizable so it's not covered by ethics standards", I'm just an asshole.

By the way, here's how the link defines minimal risk:

(j) Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.

Is the "reddit mobile web not working for no reason causing me discomfort" typical in ordinary daily life? It would be a very cynical outlook on the quality of their own product for reddit admins to claim that it is! :D

[–] Laxaria@beehaw.org 2 points 1 year ago* (last edited 1 year ago) (1 children)

But the situation linked in the OP is exactly that of a user who was so upset by unexpected behavior secretly thrust upon him that he had to go online to ask others for help

You will need to convince the IRB that such an outcome is more than minimal risk. The very definition of minimal risk refers to the probability and magnitude of harm. Being unable to use a website unexpectedly, or being prompted to sign up for an account before being able to view something, is very much not a kind of "harm" greater than those ordinarily encountered in daily life; it is no different than accidentally spilling some creamer on a granite tabletop or realizing you forgot your keys.

We can torture the words as much as we want to lead to a particular interpretation, but by and large these are not the kind of word meanderings that IRBs tolerate.


I've read the rest of your comment, and my primary issue is that your take and the subsequent interpretation(s) are not how these policies and practices are implemented, interpreted, and actioned at IRBs nationally.

For example, this is how Indiana University guides researchers for making exempt determinations when reaching out to their IRB. University of South Carolina's IRB provides explicit examples.

Firefox's approach to its in-browser experiments is very much in line with desired and ethical research practices, insofar as we view Mozilla as a privacy-first organization. The availability of, and opt-in to, a nightly build is not itself considered research.

Your contention about not being provided information after the experiment is qualified with "whenever appropriate" and "additional pertinent information". Debriefing after a deception study is very much appropriate and considered required. However, these are considerations in the context of waiving informed consent. I would also point out that "legally authorized representative" in this phrasing refers to people who are legally designated as a representative of the subject, and not the admin of the site in question. For example (broadly), minors cannot legally provide "informed consent", so their legally authorized representative, such as a parent or legal guardian, signs the forms on behalf of the child. Adolescents may qualify for informed assent. There's a whole set of additional considerations that experiments must address when working with adolescents who reach the age of majority during the research process.

Waiving the requirement to document informed consent requires that any one of the listed qualifications applies, not all of them. No one is saying that emotional harm is not real; rather, the contention is whether the kind of emotional harm that comes from being forced to log in to view a website is of such magnitude that it rises above the kind of everyday "harm" experienced in ordinary life. Can you demonstrate that this is the case?

The rest of the discussion about shadowbanning from Facebook and/or other related things are interesting comments but not the point I am making.

Rather, my point is that if you replicated the exact same experiment Reddit is running in a university academic setting, it is highly unlikely you would be run out of the university by the IRB for doing so. That was your original claim: that Reddit's experiment would violate some kind of IRB policy if it were run in an academic setting. My point is that, NO, you will NOT be run out of an academic research institution by its IRB for doing something like this, and the way the IRB determines minimal risk and exemptions is part of the reason why, because this specific experiment largely meets the criteria for them.

To reiterate: being unable to view the content of a webpage without logging in may lead to some discomfort, but that discomfort is not going to rise to any meaningful level that would lead most IRBs to call it greater than minimal risk. We can of course twist the situation to make it so, but torturing the situation to achieve a particular interpretation does not fly with IRB review.

[–] TauZero@mander.xyz 0 points 1 year ago (1 children)

I would also point out that “legally authorized representative” in this phrasing refers to people who are legally designated as a representative of the subject, and not the admin of the site in question.

Right, sorry. I understood it to mean legal guardian in the other contexts, but misread this line in particular.

For example, this is how Indiana University guides researchers for making exempt determinations when reaching out to their IRB. University of South Carolina's IRB provides explicit examples.

Again, you are posting links that tell me the type of research done by Reddit and Facebook would not be covered. "Exempt" in these links means exempt from the full scope of requirements of the Federal Policy for the Protection of Human Subjects, for research that presents no more than minimal risk and falls into one of the predefined categories. "Exempt" research may still require informed consent. Your prior link was for exemption from informed consent specifically. In my view, the Reddit experiment satisfies neither the conditions for "exempt" research nor the conditions for an informed-consent exemption.

It may be hard to keep track of all the legalese flowcharts, all the AND- and OR-conjugated lists of preconditions, but I think I got it right. To take the UCSB flowchart as an example, how would I argue that my Reddit-like experiment is "exempt"? I would still need to meet with the IRB to determine whether my research is exempt in the first place (Reddit Inc. has no IRB), and to do so I'd have to show that ALL of these are true:

  • meets the definition of “research”
  • meets the definition of a “human subject”, such as involving collecting data about living individuals
  • qualifies as no risk or minimal risk to subjects

Then I would pick a predefined category, probably Exempt Category 3 - Benign Behavioral Interventions with Adults, and show that I meet at least ONE of these preconditions:

  • a) information obtained is recorded in such a manner that human subjects cannot be identified, directly or through identifiers linked to the subjects
  • b) any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation
  • or c) information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects and if the IRB conducts a limited review for provisions for protecting privacy and maintaining confidentiality.

Embarrassingly, b) is probably a typo, since on every other site the language used is that the responses would NOT reasonably place the subjects at risk of criminal or civil liability, but whatever. Assuming we want to keep track of as much PI as possible (whether or not you agree that any username-related information online is PI at all), we'll take the (corrected) option b) since there is no criminal liability for the user from our experiments. Then we have to follow all of these example rules:

  • This category does not include minors.
  • Benign behavioral interventions must be brief in duration, harmless, painless and not physically invasive and there is no reason to think the interventions will be offensive or embarrassing.
  • Interventions should not have a lasting significant adverse impact on the participants.
  • Research involving deception is allowed if the participant is prospectively informed that they will be unaware of, or misled regarding, the nature or purpose of the research, and agrees to this.

Reddit violates ALL of these example rules.

  • minors use reddit and there is no indication that reddit experiments exclude them. Minors are not prohibited on the site and there is no tracking of age other than the vague "show me NSFW results" checkbox
  • intervention is not brief, has lasted a week or more
  • intervention is not harmless, user was definitely made upset by being blocked from mobile web for no reason
  • intervention definitely embarrassed the user, making them think they are dumb and doing something obviously wrong, and there was ample reason to expect this outcome
  • intervention had a lasting adverse impact on the participant, both in being unable to use reddit itself for no reason, and in them thinking about the mysterious problem for days to the point where they go online to complain about it - much different reaction than from say somebody playing Yakety Sax in your ear while you try to solve math problems like in the example below
  • the subject was deceived and was definitely not informed prospectively that deception may take place, neither has agreed to it; subject was not informed even retrospectively other than some random admin suggesting they were part of an experiment after they complained online; for that matter the subject was not informed that an experiment would be taking place at all and has never agreed to anything, other than possibly in the ToS.

So you see, even your own examples of specific IRB guidelines disagree with you.

The examples of research listed under Category 3 are:

  • A random assignment of participants to take a test under various noise conditions.
  • A study involving randomly assigning participants to various experimental conditions where they decide how to allocate cash between themselves and others.

The Reddit experiment goes beyond these: it is designed specifically to manipulate the user's emotional state, to deliberately frustrate the user to see whether they will download the native app or abandon reddit entirely when their web access is blocked. It is much more similar to the Facebook shadowbanning experiment (which you agreed can shove it) than to the examples above. I'd say the level of frustration and embarrassment is similar to the Asch conformity experiment, which, if you wanted to repeat it now, I was taught DOES require IRB review under modern rules, DOES require informed consent (if not with the details of the deception, then at least with the very fact that an experiment is taking place), and DOES require a post-experiment debrief, all because it DOES present a risk of causing emotional harm.

Can you find instances of modern Asch experiment research papers that specifically show they are "exempt" research and/or that have received an exemption from collecting informed consent prospectively and/or retrospectively? If you do, it will help convince me that that's how modern research ethics really works.

[–] Laxaria@beehaw.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

Reddit violates ALL of these example rules.

No.

minors use reddit and there is no indication that reddit experiments exclude them. Minors are not prohibited on the site and there is no tracking of age other than the vague “show me NSFW results” checkbox

Strictly speaking, COPPA prevents Reddit from collecting information from users under the age of 13. While there is no explicit guarantee that a person on the site is 13 or older, and recognizing that the age of majority is typically 18, then yes, in a general literal sense there are minors involved, insofar as the activity discussed is research.

intervention is not brief, has lasted a week or more

A research intervention is an intervention insofar as it intersects with the participant. A drug trial that lasts 10 weeks total but only gives doses to participants for 2 weeks (with results monitoring during, 2 weeks post, and 4 weeks post) does not mean the "intervention" lasts 10 weeks.

In most practical terms, running an experiment for 2-3 weeks is very common to collect sufficient data. However, the intervention itself may be quite brief (for example, a short 45-minute interview with a participant would be the "intervention" for a 2-3 week long study interviewing physicians on their concerns about organizational capacity for change).

For a repeated measures experiment, the intervention usually involves the actual experiment encounter and maybe some additional time between them.

For the case of A/B testing, usually the "intervention" in this case is the A/B test as it applies to the user at the moment, and not the entire duration of when the testing is taking place for all participants.

interventions having harm

Once again, to reiterate: you are equating the inability to view the content of a website without logging into an account with emotional and psychological harm so substantial that it is comparable to being verbally derided in public for a week, shamed on a public channel, or similar situations. You are not going to convince an IRB that being unable to view the content of a website without logging in, then going to a different community to ask for help, and then hypothetically ruminating about the matter for weeks, exceeds the kind of everyday ordinary harm needed to qualify as a risk level above minimal risk.

the subject was deceived and was definitely not informed prospectively that deception may take place, neither has agreed to it; subject was not informed even retrospectively other than some random admin suggesting they were part of an experiment after they complained online; for that matter the subject was not informed that an experiment would be taking place at all and has never agreed to anything, other than possibly in the ToS.

Deception goes beyond simply "lying" to or "not informing" the participant. Duke University gives some good considerations here:

  • If, in order to counter the demand effect, researchers cannot disclose their research hypotheses, the failure to disclose is not considered deception.
  • General statements about the purpose of the research, as well as a full description of the research tasks and activities, should be provided in the consent form. (emphasis, should, not must).

Additionally, a waiver for utilizing deception in research must satisfy the following:

  1. The risk must be no more than minimal.
  2. The rights and welfare of the subjects will not be adversely affected.
  3. The research could not practicably be carried out without the waiver. This does not mean that it would be inconvenient to conduct the study without the waiver. It means that deception is necessary to accomplish the goals of the research.

Satisfying #1 and #2 in a UI/UX A/B testing regime like the one Reddit used here is pretty easy. You are specifically hung up on the implicit harms involved, but in reality they are of no particularly serious concern.

#3 is particularly interesting, because effectively it means you need to demonstrate that deception is necessary in the experimental design for the experiment to actually work. If you are testing whether a person will interact more with a site when they are blocked from seeing content without logging into an account, telling them ahead of time could already bias the outcome. This is in very specific consideration that #1 and #2 are already met: being unable to view a website without logging into an account is not anything more than minimal risk. And even then, it is important to emphasize that the failure to disclose the research hypothesis to counter the demand effect is NOT deception.

The kind of UI/UX A/B testing Reddit employed in this specific instance is absolutely not equivalent to the Asch conformity studies.

To be very clear: if YOU operate under the belief that being unable to view the content of a website, then posting about it elsewhere and potentially being ridiculed for it, is a sufficient bar to exceed minimal risk, then we have very different definitions of what "minimal risk" entails. We cannot come to a consensus on this particular topic, and going so far as to equate this kind of activity's harm with that of the Asch conformity studies is frankly ludicrous.

If we cannot agree on this, then so be it. However, I will repeat (with added finality): YOU running the same A/B experiment Reddit is doing, on a university-sanctioned website, will NOT get you run out of the university by the IRB. In a real-world scenario, you would likely discuss the experiment with an administrator or similar at your department, then maybe send an e-mail off to the IRB for clarification, likely have some back-and-forth, and ultimately receive a determination that it is exempt (at worst) or not considered human subjects research at all. I can see a few circumstances where such an effort might merit an expedited review, but getting there would involve some torturous twisting of the situation that could easily be avoided (for example, running this testing on a page that has instructions for performing the Heimlich maneuver).

The Asch conformity study experiments are absolutely not equivalent to doing A/B testing on a wide scale with regards to how users interact with a website when presented with a pop-up preventing further interaction without logging in.

Edit: I would add that, with regards to "harm", IRBs usually don't concern themselves excessively with imaginative processes leading to catastrophic harm. Some kinds of harm are obvious (for example, disclosing that someone has a degenerative disease discovered incidentally during clinical sequencing), and others are very much in the ballpark of "it could happen, but if you make reasonable accommodations it's not a big deal", such as being concerned that someone might choke to death while participating in a blind taste test of different beverages using tiny samples. Allergies are real and participants should be informed about those risks ahead of time; potentially choking on a small sip of Coke is good to note but is not going to register on even the participant's radar.

[–] TauZero@mander.xyz 2 points 1 year ago

Thank you for agreeing that the Asch conformity experiment falls under human-experiment ethics considerations and informed consent requirements! I am surprised, though, that you consider the reddit experiment unlike the Asch experiment (I personally see reddit's as the worse one, actually!). Could you explain how, in your mind, Asch carries a risk of causing harm in a way that reddit's experiment does not? Asch is just looking at a bunch of lines on a page, after all! How can a bunch of lines cause harm? I also find your cavalier attitude towards the word "should" odd.

General statements about the purpose of the research, as well as a full description of the research tasks and activities, should be provided in the consent form. (emphasis, should, not must).

If "should" isn't prescriptive, why even have any "should" statements in our guidelines if you are just going to ignore them all? And yet again your link disagrees with you:

  • The risk must be no more than minimal. “Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.”
  • The rights and welfare of the subjects will not be adversely affected.
  • The research could not practicably be carried out without the waiver. This does not mean that it would be inconvenient to conduct the study without the waiver. It means that deception is necessary to accomplish the goals of the research.

Ameliorating Deception

Protocols must include procedures for ameliorating possible negative effects of deception. In addition to thorough debriefing that explains the need for deception, emphasis should be placed on correcting any false feedback given to participants about their performance, competency, or other personal characteristics.

Participants whose behavior was recorded without their knowledge, such as during a fake “break” in study, should be given the opportunity to request that the recording be destroyed.

If a study was designed to provoke negative behaviors, participants should be told that most people react the way they reacted and that their behavior was a normal response.

Debriefing

Debriefing for participants who were deceived includes a description of the deception and an explanation of why it was necessary. The discussion should be presented in lay language and should be sufficiently detailed that participants will understand how and why they were deceived. If the study included multiple deceptions, each should be addressed. If participants were filmed without their knowledge, they must be given the option to ask that the researchers not use the film.

Reddit never had any intention of ameliorating the deception, debriefing the participants, or giving them an opportunity to delete their experiment records after the fact. Reddit never implemented practical alternative research methods like opt-in studies. Are you seriously arguing that because the page says "should" and not "must", reddit was perfectly ethical in simply not doing any of this at all? This isn't some RFC, this is normal people language!

Or, if you are saying this wasn't deception, then why link to the entire Duke deception page at all? The only sentence there relevant to you is:

If, in order to counter the demand effect, researchers cannot disclose their research hypotheses, the failure to disclose is not considered deception.

And it only refers to disclosing the research hypothesis itself, not the very fact that you are taking part in some experiment! And you agree that it is not impossible to perform usage experiments while informing participants in advance (I brought up Firefox as a better example alternative); it is just more laborious. Moreover, reddit engaged in actual deception beyond simply keeping the fact of the experiment secret:

being unable to view a website without logging into an account is not anything more than minimal risk. And even then, it is important to emphasize that the failure to disclose the research hypothesis to counter the demand effect is NOT deception.

What happened went beyond that. If you read the reddit OP:

I’m logged in on my phone (iOS) but I use a browser, not the app. As of an hour ago, the mobile view is showing that I’m logged out, with no option to log in and a permanent “this looks better in the app” banner on the page.

This isn't some simple A/B testing of things like text size or link color. This isn't like Facebook or Instagram blocking everyone equally from seeing communities without logging in. OP was logged in. Reddit lied to them, saying they were not logged in when they were. Reddit lied to them, saying there was no way to log in. Reddit lied to them, saying the only way to see the content was to download the app. This is the deception part. This is the part that's similar to Asch, with the people in the room with you lying that they are participants like you. You think you are in a normal situation but you are not. You've been singled out and no one believes you.

[–] Toast@lemmy.film 2 points 1 year ago (1 children)

Answers like this are why I come to Lemmy

[–] Laxaria@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

In general I find that lay people have a very weak understanding of how research functions. This is a very generic statement, but everything from IRB processes to how science is reported in manuscripts, and everything in between, tends to be a quagmire, and this is absolutely with the recognition that some of this process is mired in red tape, bureaucracy, and endless administration.

For example, there's a long-standing idea that IRBs are the gatekeepers of research. In reality, any IRB worth its salt (and really, all of them are, for compliance) should be viewed as a research stakeholder. They should be there to make research happen and let scientists do the best research they can with the minimum amount of harm to participants. Sometimes this involves compromises, or finding alternatives that are less harmful, and that is a good thing. No researcher should dislike their IRB per se; criticizing unnecessary process and paperwork is a matter of process rather than a matter of what the IRB does. An IRB taking a long time to turn something around is a different matter from the IRB exhaustively reviewing the proposed work and returning questions that need to be addressed.

Another common example: scientific studies are frequently criticized over their sample sizes. Yes, a lot of research would definitely benefit from better sampling and larger samples, but narrowly focusing on sample size misses a lot of the other considerations that go into evaluating statistical power. For example, if one wants to know whether beheading people results in injuries incompatible with life, one doesn't need a large sample to reach that conclusion, because the effect (size) is so large. Of course more numbers help, but past a point more numbers only add to the cost of the research without measurably improving the quality of the statistical inferences made. We could instead save some of that money and repurpose it for replicating a study.
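To make the effect-size point concrete, here is a small power calculation, a sketch using statsmodels; the effect sizes are arbitrary examples, not values from any real study:

```python
# Illustrative power analysis: participants needed per group to detect an
# effect at alpha = 0.05 with 80% power. The effect sizes (Cohen's d) are
# arbitrary examples chosen to show the trend.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (2.0, 0.8, 0.2):  # enormous, large, and small standardized effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"Cohen's d = {d}: ~{n:.0f} participants per group")
# An enormous effect needs only a handful of subjects per group;
# a small one needs hundreds.
```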

And on that note, study replication is very much a needed thing, particularly in applied research areas where we care a lot about behavioral outcomes. We don't have enough of it, it is not funded well enough, and by and large the general public is right that we need more of it. It's not exciting to do at all. On the other hand, meta-analysis papers, which pull results from a large number of papers on a particular topic, usually give a helpful benchmark on the broad direction and general take-away conclusions for that topic.

As for this topic about IRBs, A/B UI/UX testing in the set-up Reddit used, and being run out of a university? That's hyperbole. I don't like businesses doing aggressive user-focused testing without informing the user, particularly with UI/UX changes I dislike (looking at you too, Twitch, with your constant layout changes), but at the end of the day this kind of testing generally doesn't rise to the threshold needed to be a particularly meaningful blip. Insinuating otherwise vastly mischaracterizes how research is done in formal, structured settings.

[–] Landrin201@lemmy.ml 7 points 1 year ago

Ethics is meaningless when corporations are doing "research" for profit.

I put it in quotes because a lot of the "research" done by companies doesn't even resemble real scientific research. They use the word to lend legitimacy to "we forced a new feature in front of unwilling users, never asked if they liked it, and took their continuing to use the site as liking it" when talking to news outlets.

[–] fishhf@reddthat.com 4 points 1 year ago (1 children)

A/B testing is cheaper than hiring real testers

[–] MJBrune@beehaw.org 1 points 1 year ago (1 children)

Absolutely, and frankly I'd be perfectly fine with A/B testing if it were opt-in. Pop up a little window or notification that says "Hey, this is a new feature, do you want it?"

If

  1. people don't opt-in
  2. they opt-in and don't like it
  3. they opt-in and then quickly opt-out

You know the feature isn't good and it's time to move on. A lot of people would call that data inconclusive because they want to believe the feature is good, but not being able to convince people to opt in is itself feedback.
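A rough sketch of that decision rule; the thresholds and names are invented for illustration, not any industry standard:

```python
# Hypothetical go/no-go rule based on the opt-in signals listed above.
# The 5% uptake and 50% churn thresholds are arbitrary illustrations.

def feature_verdict(invited: int, opted_in: int, opted_out_quickly: int) -> str:
    if invited == 0 or opted_in == 0:
        return "drop it: nobody could be convinced to try the feature"
    opt_in_rate = opted_in / invited
    quick_churn = opted_out_quickly / opted_in
    if opt_in_rate < 0.05 or quick_churn > 0.5:
        return "drop it: low uptake or fast opt-out is itself the feedback"
    return "keep iterating: opted-in users are sticking with it"

print(feature_verdict(invited=10_000, opted_in=300, opted_out_quickly=210))
```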

Experimenting on users should be illegal.

[–] fishhf@reddthat.com 1 points 1 year ago

Most A/B tests are experiments on features, not on their users.

There's a difference between finding out which features users like and "let's see if we can manipulate their feelings or make them depressed".

The one Facebook did is not really A/B feature testing.

Those who opt in to tests can give biased results; it's like asking people running Windows 11 whether they like Windows 11 more than Windows 10.