
Dark patterns in online shopping: do they work and can nudges help mitigate impulse buying?

Published online by Cambridge University Press:  09 May 2022

Ray Sin*
Affiliation:
Deep Labs, Inc, San Francisco, CA, USA
Ted Harris
Affiliation:
Deep Labs, Inc, San Francisco, CA, USA
Simon Nilsson
Affiliation:
Deep Labs, Inc, San Francisco, CA, USA
Talia Beck
Affiliation:
Deep Labs, Inc, San Francisco, CA, USA
*Correspondence to: E-mail: ray.sin@deep-labs.com

Abstract

Dark patterns – design interfaces or features that subtly manipulate people into making suboptimal decisions – are ubiquitous, especially on e-commerce websites. Yet there is little research on the effectiveness of dark patterns, and even fewer studies test interventions that can help mitigate their influence on consumers. To that end, we conducted two experiments. The first experiment tests the effectiveness of different dark patterns within a hypothetical single-product online shopping context. Results show that, indeed, all of the dark patterns tested increase purchase impulsivity relative to the control. The second experiment tests the effectiveness of three behaviorally informed interventions against four different dark patterns, also in a hypothetical online shopping scenario, but this time offering multiple products instead of a single product. Between-subject analysis shows that not all interventions are equally effective, with uneven impact across dark patterns. However, within-subject results indicate that all interventions significantly reduce purchase impulsivity pre- versus post-intervention, suggesting that any intervention is better than none when it comes to combating dark patterns. We end by discussing the policy implications of our results.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © Deep Labs, Inc., 2022. Published by Cambridge University Press

Introduction

Consider this scenario: you would like to buy a new pair of shoes. You have a particular pair in mind, but you are not yet convinced that you really need or want it. You go online to check this pair of shoes out. However, once you find it on a shopping website, you are confronted with: ‘Low supply. Buy now!’ Almost instantly, you feel an impulse to buy the shoes immediately, dreading that if you take the time to responsibly consider whether you really need the shoes or can even afford them, you run the risk of the shoes selling out. Sound familiar? Limited-quantity scarcity messages like this one, and others like it (e.g., countdown timers, confirmshaming, hidden subscriptions), are common ‘dark patterns’ found in online retail stores, generally deployed with the goal of inducing impulse buying. Beyond online shopping, dark patterns have been implemented in other situations, including TurboTax hiding and redirecting consumers away from the legally mandated option to file taxes for free toward its paid services (Elliott & Kiel, 2019) and Uber implementing ‘dark nudges’ to prod drivers to work harder and longer (Scheiber, 2017), just to name a few.

Coined by Brignull (2010), dark patterns are subtle design features embedded in websites that prey on human psychology to steer consumers into making decisions that, if fully informed or under optimal conditions, they might not make. From a behavioral science perspective, dark patterns are designed to prompt consumers to rely on System 1 thinking rather than more deliberate and thoughtful System 2 thinking by exploiting cognitive biases such as scarcity bias or social proof (Stanovich & West, 2000; Kahneman, 2011). Central to online vendors' motivation for incorporating dark patterns is their attendant impact on impulse buying.

Impulse buying, generally defined as sudden and unplanned purchases that are emotionally driven and hedonically complex (Stern, 1962; Rook, 1987), has been on the rise alongside the unabated growth of e-commerce. A recent survey found that more than five in six Americans have made impulse purchases, with each person spending an average of $81.75 per session, amounting to almost $18 billion in total (McDermott, 2021). Other studies found that impulse purchases amounted to approximately 40% of consumers' online expenditure (Liu et al., 2013). This trend has been exacerbated by COVID-19: online shopping grew dramatically during the pandemic, with an estimated 39.1% year-over-year growth in Q1 2021 (Census Bureau, 2021). Slickdeals (an online shopping website) administered two surveys, one before the pandemic in January and another in April during the stay-at-home restrictions, and found that impulse purchases had grown by 18% (Cain, 2020). Indeed, the propensity toward impulse buying contributes in no small part to a broader issue of overspending in America today.

Related Literature on Dark Patterns

Generally, there are three bodies of literature. The first and dominant literature builds upon Brignull's (2010) work to uncover, identify, and typologize different dark patterns (Bösch et al., 2016; Mathur et al., 2019; Di Geronimo et al., 2020). For example, Mathur and his colleagues developed an automated web crawler to extract dark patterns from a set of more than 11,000 shopping websites. They uncovered almost 2000 distinct dark patterns, which were then recategorized into a seven-category taxonomy, each category generally resting on a distinct underlying psychological mechanism.

The second body of research moves the focus away from identifying dark patterns to assessing their impact on consumers (Nouwens et al., 2020; Bongard-Blanchy et al., 2021; Luguri & Strahilevitz, 2021). Nouwens and his colleagues ran an experiment to understand the impact of different dark patterns on obtaining users' consent for personal data collection. They found that not all dark patterns are equally effective. Notification style (barriers that prevent users from interacting until a consent response is received vs. banners that ask for consent but do not block access) does not affect consent rates, while not showing a ‘reject all’ button on the first page increases consent rates by 22–23 percentage points. In this body of work, more research is needed to better understand which dark patterns are effective, under what conditions, and among whom when it comes to online purchase impulsivity. In Study 1, we conduct a ‘sludge audit’ (Sunstein, 2020) by experimentally testing three dark patterns commonly seen on e-commerce websites to assess their respective effectiveness at increasing online purchase impulsivity.

Building upon the earlier two bodies of research that establish dark patterns as ubiquitous and of varying effectiveness, the third is a small but burgeoning body of research that evaluates interventions to mitigate the impacts of dark patterns (Mills, 2020; Moser, 2020). Though not explicitly framing price anchoring on pre-discount prices as a dark pattern, Moser (2020) conducted a series of experiments to test the efficacy of different interventions on purchase impulsivity and purchase intent for a list of discounted shopping items. She found that both reflection (listing three reasons why you should/should not buy) and distraction (counting the number of red squares in two 10 × 15 tables) interventions significantly reduce purchase impulsivity and intent, relative to the control group. A gap in this literature is understanding which intervention is effective against which dark pattern. Accordingly, in Study 2, we examine the efficacy of different interventions on different dark patterns within the context of online shopping as it relates to impulse buying.

The outline of the article is as follows: first, we introduce Study 1 with a theoretical discussion of dark patterns as sludge, the corresponding hypotheses, experimental design, and results. Following that, we present Study 2 in a similar fashion, but delve into the theoretical considerations of the interventions tested instead. Lastly, we address the limitations of both studies and offer a nuanced, in-depth discussion of the policy implications.

Study 1: Do Dark Patterns Work?

The Sludging Impact of Dark Patterns

Conventionally, sludges are defined as excessive friction with the intent to inhibit more deliberative System 2 thinking (Thaler, 2018; Sunstein, 2020). More recently, Shahab and Lades (2021) integrated the concept of sludge with transaction cost economics to develop a typology of the different types of costs incurred from sludges, namely search, evaluation, implementation, and psychological costs. Accordingly, dark patterns are sludges intentionally incorporated into the choice architecture of decision-makers to increase search costs (e.g., misdirection), evaluation costs (e.g., concealment), as well as psychological costs (e.g., induced anxiety brought about by scarcity bias, as illustrated in the introductory scenario). However, are all dark patterns equally effective? Or are some more effective than others, as suggested by the literature? As documented by Mathur et al. (2019), there are almost 2000 different types of dark patterns, making it impossible to test them all. In Study 1, we limit our tests to three commonly experienced dark patterns.

Social norms are one of the most commonly used nudges and have been applied to a wide variety of settings ranging from energy conservation (Allcott, 2011) to voting (Gerber & Rogers, 2009). The central reasoning is that when making decisions under uncertainty, most people rely on descriptive norms (‘what most people do’) and/or injunctive norms (‘what you should do’) to guide their behavior (Cialdini, 2006). As a testament to the popularity of social norms, a recent global survey of behavioral practitioners in corporate settings revealed that 83% of them had used social influence (a broad umbrella term that encompasses social norms and social proof) as a behavioral change technique at work (Wendel, 2020). In the online retail space, customer testimonials are common instantiations of social proof. A large body of research on reviews, ratings, and recommendations finds evidence affirming the effectiveness of using social proof to improve sales (Amblee & Bui, 2011; Luca, 2016; Gavilan et al., 2018). An unintended consequence of the success in leveraging social proof to drive business growth is the emergence and proliferation of fake reviews as a type of dark pattern (Luca & Zervas, 2016).

H1: Consumers exposed to a social proof dark pattern (a customer testimonial) will have a significantly higher buying impulse than the control group.

Exploiting scarcity bias is another common dark pattern deployed in the retail space. The scarcer a product is perceived to be, the more valuable it is in the eyes of consumers (Cialdini, 2006; Mullainathan & Shafir, 2013). Because product scarcity is able to influence product price and popularity, limited edition products and time-limited offers are now part and parcel of most companies' marketing strategies (Shi et al., 2020). Even though there are typically three types of scarcity messaging used in the marketplace – time scarcity (e.g., ‘limited time only’), quantity scarcity (e.g., ‘only 10 left in stock’), and demand-related scarcity (e.g., ‘in high demand’) – the majority of research focuses on evaluating the first two, to the exclusion of the third (Aggarwal et al., 2011; Luo et al., 2021; Wu et al., 2021). For example, in an experiment using hypothetical advertisements, Aggarwal and his colleagues (2011) found that, relative to the control, the average purchase intent of participants exposed to both limited-quantity (‘First 100 customers only’) and limited-time scarcity messages (‘For six days only’) was significantly higher. Interestingly, not all scarcity messaging is the same. The average intent to purchase was higher for those exposed to the limited-quantity advertisement relative to the limited-time advertisement. Similarly, Wu et al. (2021) examined both limited-time and limited-quantity messaging in a field experiment and found both to be effective in stimulating impulse purchases. A notable exception is a study that compared demand-related scarcity (‘in high demand’) versus supply-related scarcity (‘limited supply only’) messaging and found that the former was more effective than the latter (Aguirre-Rodriguez, 2013). In Study 1, we test the effectiveness of both demand-related and supply-related scarcity messaging.

H2a: Consumers exposed to supply-related scarcity messaging (‘Only 5 left in stock – Order soon’) will have a significantly higher buying impulse than the control group.

H2b: Consumers exposed to demand-related scarcity messaging (‘Item in high demand – Order soon’) will have a significantly higher buying impulse than the control group.

H2c: Consumers exposed to supply-related scarcity messaging will have significantly lower buying impulse than those exposed to the demand-related scarcity messaging.

Experimental Design and Methodology

To recap, Study 1 experimentally assesses the impact of three dark patterns, specifically limited-quantity, testimonials, and high-demand, against a control group on purchase impulsivity within a hypothetical online shopping experience involving only one product. An a priori power analysis utilizing a 95% confidence interval, a target power of 80%, and a small predicted effect size (f = 0.1) determined that a minimum of 274 respondents were required per condition, for a total minimum sample size of 1096. Sample characteristics are presented in Table 1. We administered an online experimental survey to 1342 respondents from Amazon Mechanical Turk (MTurk) to ensure that we obtained sufficient power. Using IQR completion time (Footnote 1) and attention checks (Footnote 2) as criteria, 122 respondents (9%) were identified as non-serious and subsequently dropped from the analysis.
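As an illustration, the a priori power analysis described above can be reproduced with standard software; the sketch below uses Python's statsmodels as one possible tool, which is our assumption rather than a statement of the software actually used.

```python
# Illustrative reproduction of the a priori power analysis (assumption: a
# one-way ANOVA power calculation with Cohen's f = 0.1, alpha = 0.05,
# power = 0.80, and k = 4 conditions).
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.1,   # small effect size (Cohen's f)
    k_groups=4,        # control + three dark-pattern conditions
    alpha=0.05,        # 95% confidence level
    power=0.80,        # target statistical power
)
print(round(total_n), round(total_n / 4))  # approximately 1096 total, 274 per condition
```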

Table 1. Sample characteristics, studies 1 and 2.

Participants were presented with a hypothetical online shopping experience involving only one product – Red Yeast Rice, a supplement purported to reduce cholesterol. We chose Red Yeast Rice for two main reasons: (1) even though the product does exist, it may be obscure enough to minimize the impact of prior preference as a potential confound; (2) supplements, including Red Yeast Rice, are not regulated by the FDA, nor are their uniformity and safety independently verified (see Klimek et al., 2009), a controversial setting in which the manipulative nature of dark patterns may be exploited. Participants were presented with three pieces of information:

  • A fact sheet about Red Yeast Rice, adapted from the National Institutes of Health, to minimize variation in prior knowledge as a potential confound (Figure 1).

  • Real product information about Red Yeast Rice, taken from Amazon.com (Figure 2).

  • One out of three dark patterns, randomly selected, or a control (information on an unrelated product recommendation, Roku Express, taken from Amazon.com) (Figure 3).

Figure 1. Fact Sheet About Red Yeast Rice, Adapted From the National Institutes of Health.

Figure 2. Product Information About Red Yeast Rice, Taken From Amazon.com.

Figure 3. (L) Control; (R) Dark Patterns: (T) High-Demand, (M) Positive Testimonials, (B) Limited-Quantity.

After viewing each piece of information in sequential order, participants were asked to describe, on a scale from one to seven, their urge to purchase Red Yeast Rice.

Measuring impulse buying is notoriously challenging (Wells et al., 2011; Chan et al., 2017). Because of social desirability bias, most consumers may be reluctant to admit to indulging in impulse buying – a behavior many regard as undesirable. Beatty and Ferrell (1998) point out that capturing impulse purchases in a timely fashion and in situ can be difficult. In addition, research on online impulse buying has had limited success in capturing actual impulse purchase behavior (Koufaris, 2002). As a result, ‘felt urge to buy impulsively’ is the dominant outcome surrogate used to measure impulse buying (for a review, see Chan et al., 2017). Consistent with the literature, we use this outcome variable as an imperfect but adequate proxy for impulse buying.

Results

Results from a one-way ANOVA were significant, indicating that the average purchase impulsivity of at least one experimental condition differs significantly from the rest [F(3, 1222) = 4.83, p < 0.01], with a small effect size (ηp² = 0.01). A post hoc pairwise comparison with FDR-adjusted p-values using the BH procedure (Benjamini & Hochberg, 1995) revealed that average purchase impulsivity for the control group (M = 3.78; SD = 2.21) is significantly lower than for all treatment groups [high-demand (M = 4.31; SD = 2.14), limited-quantity (M = 4.22; SD = 2.11) and social proof (M = 4.38; SD = 2.09)]. Hypotheses 1, 2a, and 2b are supported. Additionally, no statistically significant difference was detected between the treatment conditions, indicating no support for hypothesis 2c. The results are summarized in Figure 4. Departing from previous studies that found that not all dark patterns are equally effective, our results suggest that all of the dark patterns we tested increase average purchase impulsivity, albeit with a small effect size. Having established that dark patterns do impact purchase impulsivity within an online shopping environment, in Study 2 we investigate which interventions may help attenuate the negative influence of each dark pattern.

Figure 4. Results From Pairwise t-Test of Dark Patterns and Average Purchase Impulsivity, Separated by Experimental Conditions. *p < 0.05, **p < 0.01, ***p < 0.001.
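As an illustration of the analysis pipeline, the sketch below shows one way to run the omnibus one-way ANOVA and the BH-adjusted pairwise comparisons reported above. The DataFrame and column names ('condition', 'impulse') are hypothetical placeholders, and the choice of Python libraries is an assumption rather than a description of the software actually used.

```python
# Sketch of the Study 1 analysis: omnibus one-way ANOVA followed by pairwise
# t-tests with Benjamini-Hochberg FDR adjustment (hypothetical column names).
from itertools import combinations
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def study1_analysis(df: pd.DataFrame):
    # Omnibus test across the four conditions (control + three dark patterns)
    groups = [g['impulse'].to_numpy() for _, g in df.groupby('condition')]
    f_stat, p_omnibus = stats.f_oneway(*groups)

    # Post hoc pairwise comparisons, BH-adjusted
    pairs = list(combinations(sorted(df['condition'].unique()), 2))
    raw_p = [stats.ttest_ind(df.loc[df['condition'] == a, 'impulse'],
                             df.loc[df['condition'] == b, 'impulse']).pvalue
             for a, b in pairs]
    _, adj_p, _, _ = multipletests(raw_p, method='fdr_bh')
    return f_stat, p_omnibus, dict(zip(pairs, adj_p))
```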

Study 2: Can Nudges Help Mitigate the Impact of Dark Patterns?

Study 2 builds off Moser's (2020) work to test the efficacy of three different interventions (postponement, reflection, and distraction) against four distinct dark patterns (limited-time scarcity, limited-quantity scarcity, high-demand scarcity, and positive testimonials) within the context of online shopping.

Intervention #1: Postponement

More commonly referred to as ‘cooling-off periods,’ postponement interventions operate like a circuit-breaker that interrupts rash decision-making. Postponements are typically implemented in the period between when a provisional decision has been made and when that decision becomes binding or permanent. Studies show that when people are in a viscerally charged state (i.e., experiencing strong emotions), they tend to make suboptimal decisions (Loewenstein, 1996; Kahneman, 2011). Allowing time for these strong emotions to wane gives people an opportunity to reconsider and, perhaps, make better decisions. Similarly, when discussing consumer behavior, Hoch and Loewenstein (1991) argue that increasing temporal distance, that is, giving time for hot heads to cool, has the potential to reduce purchase impulsivity. Hence, it is little wonder that postponement interventions have been implemented across different domains ranging from negotiations (Oechssler et al., 2015) to divorce (Wie & Kim, 2015), and are now almost standard practice in consumer protection regulations (Camerer et al., 2003). Testing this idea, Moser (2020) ran a series of experiments and found mixed results on the effectiveness of postponement with regard to purchase impulsivity. In sum, the literature suggests that postponement may be an effective nudge to attenuate the impact of dark patterns in online shopping, though the evidence is inconclusive.

H1a: Average purchase impulsivity is significantly lower for groups exposed to the postponement intervention than for the control group.

H1b: Average purchase impulsivity is significantly lower for participants after exposure to the postponement intervention than before.

Intervention #2: Reflection

Another common debiasing technique is the consider-the-opposite strategy (Lord et al., 1984), or simply ‘reflection’ (Moser, 2020). Generally, reflection tasks ask participants to provide justifications as to why their position is valid, and also to consider why the opposite position may be equally valid. Stemming from the concept of disfluency (Mills, 2020), reflection increases the cognitive load of decision-making, forcing participants to confront the assumptions of their pre-existing position while opening the door to considering alternatives. Experimental studies have shown the effectiveness of reflection across different contexts such as managerial decision-making (Nagtegaal et al., 2020) and the estimation of car prices and election-outcome probabilities (Mussweiler et al., 2000). Moser (2020) found that reflection significantly reduces purchase impulsivity and intent, but she only tested it against anchoring on discounted pricing. This study tests reflection against other forms of dark patterns within online shopping.

H2a: Average purchase impulsivity is significantly lower for groups exposed to the reflection intervention than for the control group.

H2b: Average purchase impulsivity is significantly lower for participants after exposure to the reflection intervention than before.

Intervention #3: Distraction

Contrary to the popular belief that distractions are anathema to optimal decision-making, experimental studies have found the opposite. In what is known as ‘deliberation-without-attention,’ researchers found that those who were distracted with an unrelated task before rendering a decision made a better choice relative to those who decided immediately and those who spent some time thinking deeply about it (Dijksterhuis, 2004; Dijksterhuis et al., 2006). To elaborate, consumers who were randomly assigned distraction tasks were more competent at differentiating between attractive and unattractive product alternatives (Dijksterhuis et al., 2006) and reported more satisfaction with their selections (Dijksterhuis & van Olden, 2006), relative to groups who either evaluated the products immediately or thought consciously about them. Even though some subsequent studies on distraction did not find an effect (e.g., Acker, 2008), a meta-analysis found that the effect of distraction is statistically significant (though admittedly small) and that moderators such as mind-set help explain the discrepancies between studies (Strick et al., 2011).

Alternatively, distraction tasks may lead to cognitive depletion that inhibits consumers from engaging in more deliberative processing, increasing their susceptibility to dark patterns (Stroop, 1935; Pocheptsova et al., 2009; Vonasch et al., 2017) (Footnote 3). In other words, contrary to the theoretical prediction mentioned above, distraction tasks could backfire. As such, we test the effectiveness of distraction as a digital intervention aimed at reducing purchase impulsivity.

H3a: Average purchase impulsivity is significantly lower for groups exposed to the distraction intervention than for the control group.

H3b: Average purchase impulsivity is significantly higher for groups exposed to the distraction intervention than for the control group.

H3c: Average purchase impulsivity is significantly lower for participants after exposure to the distraction intervention than before.

In sum, Study 2 contributes to the literature by examining the efficacy of different interventions on different dark patterns within the context of online shopping. This study also contributes to the growing body of behavioral science research that moves away from adding yet another entry to the ever-growing list of cognitive biases toward the more actionable direction of proposing and experimentally testing interventions that may help people make better decisions (e.g., Thaler & Sunstein, 2009; Sin et al., 2019; Milkman, 2021).

Differences Between Studies 1 and 2

There are three main differences between Studies 1 and 2. The first is the focus: Study 2 experimentally tests interventions against dark patterns, while Study 1 is interested in understanding the impact of the dark patterns themselves. The second difference is that we added one more dark pattern – ‘limited-time’ scarcity – to the mix, increasing the number of dark patterns tested from three in Study 1 to four in Study 2. The last difference is the decision to raise the level of realism in Study 2 relative to Study 1, which we achieved in two ways. First, unlike Study 1, where we presented only one product, we increased the number of products presented to participants to ten in Study 2. The advantage of doing this is that it makes the online shopping experience more realistic and less like a laboratory setting. Second, all ten products were top-rated best-seller items across different categories, ranging from Home & Kitchen to Electronics, that we carefully curated from Amazon.com. Some examples are a blender, a Roku Stick, and a laptop backpack. In other words, we intentionally replaced an obscure product like Red Yeast Rice with popular ones, in the hope of maximizing the probability that at least one of the items would interest participants. However, the disadvantage of raising realism is that we unavoidably introduce confounds such as prior preferences, an issue that is inherent to popular products and compounded by offering ten of them instead of one. As such, it would be hard to rule out other unexpected confounds and external influences. McGrath (1995) notes that all research designs are essentially a satisficing process between generalizability, precision, and realism; no one design can maximize all three. In designing this study, we intentionally increased the level of realism, while acknowledging that in doing so, precision and generalizability may not be optimal.

Experimental Design and Methodology

The results from our power analysis revealed that a minimum of 104 respondents were required per condition to achieve statistical power of 80% at a 95% confidence level, assuming a small effect size (f = 0.1), for a total minimum sample size of 2080. The demographics of participants from both studies are presented in Table 1. Accordingly, a 5 × 4 between- and within-subject experimental online survey was administered to 2123 respondents, also from MTurk. Similar to Study 1, we used IQR completion time and attention checks to identify those who were ‘non-serious,’ and a total of 158 participants (8%) were dropped from the analysis. The final sample size is 1945. Participants were randomly assigned, at the start, to 1 of 20 possible arms. With the exception of the baseline and control groups, each arm is a combination of one of four dark patterns (limited-quantity scarcity, limited-time scarcity, high demand, and positive testimonials) and one of three interventions (reflection, distraction, and postponement). Arms assigned neither an intervention nor a dark pattern constituted the baseline, and arms that had only a dark pattern but no intervention, or vice versa, constituted the control groups. Table 2 illustrates the overall experimental design. For the sake of brevity and to stay focused on the research question, we limit our discussion to comparisons of each combination of dark pattern and intervention against the corresponding control group (e.g., testimonials + postponement vs. control).

Table 2. Experimental design with 20 conditions, enumerated.
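To make the structure of Table 2 concrete, the sketch below enumerates the 20 arms as the cross of the four dark patterns and three interventions, each augmented with a 'none' level: 12 dark pattern × intervention combinations, four dark-pattern-only controls, three intervention-only controls, and one baseline. The labels are illustrative placeholders of our own.

```python
# Enumerating the 20 experimental arms (4 dark patterns + none) x
# (3 interventions + none). Labels are illustrative placeholders.
from itertools import product

dark_patterns = ['limited_quantity', 'limited_time', 'high_demand', 'testimonial']
interventions = ['postponement', 'reflection', 'distraction']

arms = [{'dark_pattern': dp, 'intervention': iv}
        for dp, iv in product(dark_patterns + [None], interventions + [None])]

# 12 combinations + 4 dark-pattern-only controls + 3 intervention-only controls + 1 baseline
assert len(arms) == 20
```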

At the start of the experiment, all participants were presented with a list of ten highly rated best-seller products carefully curated across different categories from Amazon.com. The list shows a picture of each item and a short description of what the item is, both taken verbatim from Amazon. From the randomly ordered list of ten products, we asked all participants to click on the item they felt the strongest urge to purchase. Once selected, the item was accompanied by one of four possible dark patterns, or none at all, representing a control or the baseline, as mentioned earlier (see Figure 5). The four dark patterns are:

  • limited-quantity scarcity (‘Only 5 Left in Stock – Order Soon’),

  • limited-time scarcity (‘Limited Time Offer – Order Soon’),

  • high demand (‘Item is in High Demand – Order Soon’),

  • positive testimonials (one real review taken from Amazon.com).

Figure 5. Example of an Item with Limited-Time (Top Left), Limited-Quantity (Top Right), High-Demand (Bottom Left) and Positive Testimonials (Bottom Right) Dark Patterns. The Control Group Does Not Have Any Accompanying Dark Patterns.

Participants were also given the option to continue or to go back to the list and reselect another item if they so chose. If participants reselected a different product, they would still be exposed to the same dark pattern to ensure continuity.

Once participants finalized their product selection, we asked them to self-report, on a scale from 1 (no urge) to 7 (very strong urge), their purchase impulsivity (‘at this moment, the urge to buy the product that I selected can be described as’), the same outcome variable as in Study 1. Following that, participants were tasked with engaging with the interventions; that is, they had to complete one of three tasks (randomly assigned) before continuing (see Figure 6):

  • postponement (watching a 2-minute delay video before continuing),

  • reflection (listing three reasons why they should/should not purchase the item selected),

  • distraction (counting the number of ‘1’s in a 10 × 10 grid filled with ones and zeros).

Figure 6. Three Types of Intervention Tasks: Two Minute Delay Postponement (Left), Reflection (Middle) and 10 × 10 Distraction Grid Task (Right). Control Groups Were Not Asked to Complete Any Tasks.

Control and baseline groups proceeded with the study without any intervention. After participants engaged with the interventions, and with the exception of those in the baseline and non-intervention groups, we re-measured their purchase impulsivity using the exact same wording (Footnote 4).

Two-Way Mixed ANOVA on Purchase Impulsivity

We ran a two-way mixed ANOVA to compare the main effects of experimental arm (between-subject) and intervention (within-subject), as well as their interaction, on purchase impulsivity (results summarized in Table 3). The interaction between intervention and experimental arm was significant (F(19, 1945) = 9.01, p < 0.0001, η² = 0.01), suggesting that at least one arm had a significant change in average purchase impulse from pre- to post-intervention. Following the convention for a significant interaction in a mixed ANOVA, we proceeded to examine the simple main effects.

Table 3. Two-way mixed ANOVA results on purchase impulsivity.
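For illustration, the sketch below shows how such a two-way mixed ANOVA can be run in Python with the pingouin package on long-format data (one row per participant per time point). The synthetic data, column names, and library choice are assumptions for demonstration only.

```python
# Sketch of a two-way mixed ANOVA: 'arm' is the between-subject factor,
# 'time' (pre vs. post) the within-subject factor. Data are synthetic.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_arms, n_per_arm = 20, 10
pids = np.arange(n_arms * n_per_arm)
arm = np.repeat([f'arm_{i}' for i in range(n_arms)], n_per_arm)

long_df = pd.DataFrame({
    'pid': np.repeat(pids, 2),                     # each participant appears twice
    'arm': np.repeat(arm, 2),
    'time': np.tile(['pre', 'post'], len(pids)),   # within-subject factor
    'impulse': rng.integers(1, 8, 2 * len(pids)),  # 1-7 urge-to-buy rating
})

aov = pg.mixed_anova(data=long_df, dv='impulse', within='time',
                     between='arm', subject='pid')
print(aov[['Source', 'F', 'p-unc', 'np2']])
```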

A separate one-way ANOVA was conducted on the impulse to purchase before and after the intervention. Purchase impulsivity was not significantly different between arms pre-intervention (F(19, 1945) = 1.32, p = 0.16, η² = 0.01). However, purchase impulsivity was significantly different between arms post-intervention (F(19, 1945) = 3.48, p < 0.001, η² = 0.03), suggesting that at least one combination of dark pattern and digital intervention was effective in reducing purchase impulse post-intervention, relative to another combination. Similar to Study 1, a post hoc pairwise comparison with FDR-adjusted p-values using the BH procedure (Benjamini & Hochberg, 1995) revealed that at least one intervention significantly reduces average purchase impulsivity for three out of four dark patterns, relative to the control groups. Due to space constraints, we limit our discussion to experimental conditions that were meaningfully significant (Footnote 5).

For participants who experienced limited-time scarcity messaging, all interventions [postponement (M = 4.32, SD = 1.89); reflection (M = 4.66, SD = 1.65); distraction (M = 4.70, SD = 1.83)] significantly reduced average purchase impulsivity relative to control (M = 5.30, SD = 1.40) (Figure 7). In contrast, as illustrated in Figure 8, none of the interventions produced a significant impact against limited-quantity scarcity messaging [postponement (M = 4.80, SD = 1.65); reflection (M = 4.83, SD = 1.53); distraction (M = 4.63, SD = 1.67); control (M = 5.20, SD = 1.55)]. For the remaining two dark patterns, not all interventions were equally effective.

Figure 7. Limited-Time Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 8. Limited-Quantity Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Among the participants who were exposed to positive testimonials as a dark pattern, only the reflection intervention (Figure 9) led to a significantly lower average purchase impulsivity (M = 4.36, SD = 1.80) than the control (M = 5.30, SD = 1.47) group. When it comes to combating high-demand messaging (Figure 10), participants who engaged with postponement (M = 4.71; SD = 1.81) and reflection (M = 4.55; SD = 1.62) had significantly lower average purchase impulsivity than the control (M = 5.27; SD = 1.39) group.

Figure 9. Positive Testimonials Dark pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 10. High-Demand Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Overall, similar to previous studies, we found that no one size fits all. In other words, not all interventions are equally effective across all dark patterns; some interventions are effective against certain dark patterns while others are not. For example, reflection significantly reduces average purchase impulsivity for all dark patterns except limited-quantity scarcity, whereas distraction is effective against limited-time scarcity only. As a result, we find support for hypotheses 1a, 2a, 3a, and 3b.

Next, we move from between-subject to within-subject analysis. We ran a repeated-measures ANOVA and, consistent with previous analyses, a post hoc pairwise comparison (with FDR-adjusted p-values using the BH procedure) for all experimental treatment conditions to evaluate the corresponding impact of the interventions on the change in average purchase impulsivity. Table 4 summarizes the results of the repeated-measures ANOVAs and their corresponding effect sizes, grouped by dark pattern. Overall, the results indicate that average purchase impulse declined significantly across all interventions. Figure 11 presents the average purchase impulsivity before and after the intervention, the corresponding change, and the adjusted p-value, separated by treatment condition. Even though there is some variation between arms in the post-intervention decline, overall, average purchase impulsivity declined significantly for all treatment groups. Hypotheses 1b, 2b, and 3c are supported (Figure 11).

Figure 11. Comparison of Average Purchase Impulsivity Before and After Interventions Across Experimental Conditions.

Table 4. Repeated-measures ANOVA of interventions on average purchase impulsivity, by experimental group.

*p < 0.05, **p < 0.01, ***p < 0.001.
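As an illustration of the within-subject comparison, the sketch below runs a paired pre- versus post-intervention test per arm and applies the BH adjustment across arms. With only two time points a paired t-test is equivalent to the per-arm repeated-measures comparison; the wide-format column names ('impulse_pre', 'impulse_post', 'arm') are hypothetical.

```python
# Sketch of the pre- vs. post-intervention check: one paired test per arm,
# with Benjamini-Hochberg adjustment across arms (hypothetical column names).
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def pre_post_comparisons(df: pd.DataFrame) -> dict:
    arms, raw_p = [], []
    for arm, g in df.groupby('arm'):
        result = stats.ttest_rel(g['impulse_pre'], g['impulse_post'])  # paired test
        arms.append(arm)
        raw_p.append(result.pvalue)
    _, adj_p, _, _ = multipletests(raw_p, method='fdr_bh')             # BH adjustment
    return dict(zip(arms, adj_p))
```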

Limitations

One unexpected result from Study 2 is the lack of a statistical difference between the control groups (dark pattern but no intervention) and the baseline (no dark pattern and no intervention), a finding that is dissimilar from Study 1 and previous studies (e.g., Moser, 2020; Luguri & Strahilevitz, 2021). One possible explanation, as elaborated earlier in the methodology, is our decision to raise the level of realism in Study 2. In Study 1, we offered participants only one product, Red Yeast Rice, for which participants were unlikely to have prior preferences or even prior knowledge. In Study 2, however, we introduced a variety of highly differentiated products that are common and well-rated. As a result, confounds such as prior preferences or even current ownership (the possibility that many participants already owned those products and thus were not in the market to purchase the same item again) may have attenuated the impact of dark patterns in the control groups, relative to the baseline. In other words, a limitation of our experimental design in Study 2 is that it is not set up to sufficiently test the effectiveness of dark patterns, that is, to detect a significant difference between the controls (dark pattern but no intervention) and the baseline (no dark pattern and no intervention), a trade-off we made for raising realism in Study 2. However, we believe that this does not significantly affect the results presented above with respect to evaluating the effectiveness of each intervention against different dark patterns.

The second limitation is the use of MTurk as our sample, whose demographics differ from those of the general U.S. population (Berinsky et al., 2012; Shank, 2016). Despite that, studies have found that, in terms of data quality, MTurk is comparable to population-based samples, indicating an acceptable level of validity and reliability for experimental studies (Buhrmester et al., 2011; Weinberg et al., 2014; Litman et al., 2015).

A third limitation is that this study is based on a hypothetical online shopping scenario. It remains to be seen whether these interventions would work in a real-life situation where consumers are enacting their preferences and spending their own money. Future studies should field test these just-in-time interventions not only on real online shopping websites but also in different e-commerce contexts such as travel booking websites.

Discussion and Policy Implications

Our study speaks to three categories of policy implications: (1) the nature of dark patterns, (2) recommending evidence-based interventions to combat dark patterns, and (3) broad legislative and commercial considerations.

Nature of Dark Patterns: Persuasive Marketing or Manipulative Motifs

The first is the debate over whether dark patterns are manipulative motifs or just persuasive marketing. Some may argue that dark patterns are nothing more than behaviorally driven marketing strategies, sometimes known as ‘behavioral marketing,’ that are geared toward persuasion, not deception (Funkhouser & Parker, 1999; Kivetz & Netzer, 2008). Others may view these seemingly innocuous marketing gimmicks as manipulative motifs that exploit human psychology to engender suboptimal decision-making (Crisp, 1987; Sher, 2011; Bösch et al., 2016; Mathur et al., 2019; Bongard-Blanchy et al., 2021). This is especially acute when few of the claims made by marketers can be independently or easily verified by consumers. Is the product really low in stock? Does the limited-time offer really expire on the stated date? Put differently, from a market liberalism perspective, dark patterns are low-cost nudges that can help deliver value to shareholders. From a consumer advocate perspective, dark patterns are manipulative motifs that prey on psychology to prod consumers into making decisions that may or may not be in their best interests. Addressing deceptive persuasion marketing in research is not new (Held & Germelmann, 2018); some have developed frameworks (Sher, 2011), while others have tested interventions (Sagarin et al., 2002).

Admittedly, not all persuasive marketing is necessarily a dark pattern. We consider practices that are deceptive and do not have consumers' best interests in mind to be dark patterns. However, that line is increasingly blurred today. Here is an example. Ferreira and Goh (2021) found that intentionally concealing the full array of product choices by rotating product assortment throughout the season introduces uncertainty into purchasing decisions. As a result, consumers, especially myopic ones, end up purchasing more than they intended to. In the parlance of sludge, concealment increases the evaluation and psychological costs, resulting in consumers' overspending. Concealment is different from curation: the former is intended to introduce uncertainty while the latter reduces decision paralysis. Nonetheless, whether concealment is a clever nudge that helps improve business outcomes or a deceptive practice that exploits human psychology is equivocal. We suggest two ways forward for policy practitioners to address such ambiguity.

The first is investigating dark patterns through the lens of standpoint epistemology (Harding, 1992; Sprague, 2005; Collins, 2010). At its core, standpoint epistemology interrogates the relationship between social identities and knowledge, also known as perspectival differences. Perspectival differences posit that knowledge is shaped by one's social location (e.g., gender, socio-economic status, race/ethnicity, etc.) through one's unique lived experiences. As a result, that situated knowledge is shared only among those who occupy the same social location, and outgroup members, in principle, do not have access to knowledge outside of their own social location. For example, understanding parenting from the social location of men versus women will yield qualitatively distinct sets of knowledge because their lived experiences differ by social location, in this case gender. In other words, standpoint epistemology argues that knowledge is always partial, situated, and emergent from one's social location (Haraway, 1988). Similarly, whether dark patterns are persuasive marketing strategies or manipulative motifs is largely a matter of standpoint. In such a situation, where both outcomes may be true, policy practitioners should err on the side of caution and always put the interests of consumers first. We should continue to identify and call out dark patterns that improve business outcomes at the expense of consumers, as in the cases of TurboTax and Uber recounted in the introduction.

The second way forward is taking a Pareto approach. Mills (2020) argues against dichotomized thinking about ‘good nudges’ versus ‘bad sludges.’ Instead, he offers a Pareto (everyone benefits) versus rent-seeking (only the choice architect, such as the company, benefits) framework. As such, mitigating the impact of dark patterns can potentially reduce impulsive shopping among consumers (which is good for the user) and also reduce common business consequences of impulsive shopping such as chargebacks, returns, and friendly fraud, which can be costly to companies (which is good for the firm). In short, reducing the prevalence of dark patterns and attenuating their negative consequences can result in a Pareto situation where both parties (consumers and businesses) stand to gain.

Evidence-Based Interventions Against Dark Patterns

From the perspective of a practitioner looking to implement interventions to combat dark patterns, which intervention works best? Because the within-subject analyses show that all interventions led to a significant decline post-intervention, we rely on the effect sizes in Table 4 to make our assessment. Based on the relative effect sizes for each dark pattern, the results suggest that when confronting limited-quantity scarcity messaging, postponement works best (η² = 0.05), followed by distraction (η² = 0.04). Postponement (η² = 0.08) is the most effective against high demand, followed by reflection (η² = 0.06). When it comes to attenuating the effects of positive testimonials, postponement and reflection are equally effective (η² = 0.04). Lastly, postponement (η² = 0.05), followed by distraction (η² = 0.03), is the most effective against limited-time scarcity. In sum, similar to previous studies (e.g., Nouwens et al., 2020; Luguri & Strahilevitz, 2021), our results again indicate that no one size fits all when it comes to combating dark patterns. In spite of this, postponement has a nontrivial effect size (ranging from small to medium) across all dark patterns when it comes to reducing purchase impulse. In other words, when in doubt, use postponement.

Broad Legislative and Commercial Considerations

So what concrete steps can be taken from a legislative and commercial position to combat dark patterns? We identify three. First, the proliferation and impact of dark patterns on consumers have not gone unnoticed on Capitol Hill. Legislation called the ‘Deceptive Experiences To Online Users Reduction Act’ (DETOUR Act) was introduced in Congress in 2019 and is currently pending in the Senate. In 2021, California passed a regulation updating the 2018 California Consumer Privacy Act to ban the use of dark patterns, but only in the context of data privacy. It is unclear whether there would be enough political currency to enact an outright ban preventing companies from using dark patterns. Even if there is, it is equally unclear whether the ban would be limited to a specific context like data privacy or apply across the board. Additionally, enforcement of a ban may also be tricky. In sum, the legislative approach, while probably the most impactful, may take much longer for the details to be fleshed out.

Second, the fight against dark patterns could take a public awareness route, similar to financial literacy, where authorities try to educate the problem away. On some level, this makes sense: it is hard to convince consumers that they are being manipulated if they do not know that simple design features they now take for granted are subtly influencing their decision-making. However, to use financial literacy as a parallel again, a meta-analysis found that financial literacy has minimal impact on people's financial behaviors (Fernandes et al., 2014); in fact, financial literacy explains only 0.1% of the variance in those behaviors. The same probably goes for dark patterns. Bongard-Blanchy and colleagues (2021) found that people are generally aware of the presence of dark patterns but remain unsure of the impact dark patterns have on them. As such, awareness and education may be necessary but ultimately insufficient to effectively combat dark patterns.

Lastly, there is technology, specifically the integration of nudges with technology, which we find most promising. As mentioned earlier, not all interventions are equally effective, with uneven impact across dark patterns. To fully protect consumers, there may be a need to develop tailored solutions suited not only to the specific dark pattern at hand but also to consumers' personality, needs, wants, and social location, such as economic status. Minimizing the impact of dark patterns that induce impulse shopping may matter more for the most economically vulnerable than for the wealthy. Similarly, attenuating the influence of dark patterns on data privacy may be most crucial among young kids, who tend to engage with social media the most. To achieve this, a promising solution is to leverage artificial intelligence and machine learning to develop a tool that marries data from web crawlers that scrape websites to identify dark patterns – the same way that Mathur and his colleagues (2019) did – with specific just-in-time behavioral interventions that can most effectively combat the dark pattern at hand.

Conclusion

To address the problem of dark patterns embedded in e-commerce to induce impulsive shopping, we conducted two experiments. The first (Study 1) examines the impact of three different dark patterns (limited-quantity scarcity, high-demand, and positive testimonials), presented alongside a single product, on impulse buying. The results show that all three dark patterns significantly increase purchase impulsivity relative to the control group. Interestingly, unlike previous studies, we did not find any significant differences between dark patterns. The second experiment (Study 2) evaluates the effectiveness of three types of behaviorally informed interventions (reflection, distraction, and postponement) against four different types of dark patterns (limited-quantity scarcity, limited-time scarcity, high-demand scarcity, and positive testimonials), this time presented alongside multiple products. Within-subject analysis found that any exposure to an intervention, regardless of dark pattern, significantly reduces purchase impulsivity. This suggests that most consumers will benefit from having any intervention, relative to having none, when it comes to reducing the impact of dark patterns. Yet between-subject analysis found that, contingent on which dark pattern is at play, some interventions were effective while others were not, indicating that not all interventions are equally effective. In sum, when it comes to combating dark patterns, a tailored approach may be most effective, if it can be implemented at scale.

Acknowledgments

We are grateful to Kate Thorne, Ben Ball, and Christian Pedersen for their invaluable feedback and comments on earlier drafts of this manuscript.

Footnotes

1 Across all conditions, the average time to complete the survey was about 5.2 minutes. The upper bound of the IQR-based completion time was 10.2 minutes and the lower bound was −1.2 minutes. Any participant outside those bounds was dropped.
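For illustration, the sketch below shows one common way such bounds can be computed, assuming the standard 1.5 × IQR rule; the exact rule used is not specified above, so this is an assumption. A negative lower bound, like the −1.2 minutes reported here, simply means no respondent is excluded for completing the survey too quickly.

```python
# Sketch of an IQR-based completion-time filter (assumes the 1.5 x IQR rule).
import numpy as np

def iqr_bounds(times_minutes: np.ndarray, k: float = 1.5) -> tuple[float, float]:
    q1, q3 = np.percentile(times_minutes, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr  # (lower, upper) bounds in minutes
```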

2 We incorporated attention checks, such as asking for participants' zip codes and MTurk IDs more than once, and dropped respondents who gave inconsistent responses.

3 We are grateful to a thoughtful reviewer for pointing this out.

4 Baseline and non-intervention treatment groups were asked about their purchase impulsivity only once, because we could not find a reasonable way to include a placebo task or to repeat the question without interrupting the flow of the experiment.

5 There are a total of 33 significant comparisons, but not all of them are meaningful. For example, the average post-intervention purchase impulsivity is significantly different between limited-time + postponement and high-demand + distraction, but the comparison and its takeaways are not meaningful because the two conditions share neither a dark pattern nor an intervention. Hence, we do not discuss such comparisons, and others like them, in the article.

References

Acker, F. (2008), 'New findings on unconscious versus conscious thought in decision making: Additional empirical data and meta-analysis', Judgment and Decision Making, 3(4): 292–303.
Aggarwal, P., Jun, S. Y. and Huh, J. H. (2011), 'Scarcity messages', Journal of Advertising, 40(3): 19–30. https://doi.org/10.2753/JOA0091-3367400302.
Aguirre-Rodriguez, A. (2013), 'The effect of consumer persuasion knowledge on scarcity appeal persuasiveness', Journal of Advertising, 42(4): 371–379. https://doi.org/10.1080/00913367.2013.803186.
Allcott, H. (2011), 'Social norms and energy conservation', Journal of Public Economics, 95(9): 1082–1095. https://doi.org/10.1016/j.jpubeco.2011.03.003.
Amblee, N. and Bui, T. (2011), 'Harnessing the influence of social proof in online shopping: The effect of electronic word of mouth on sales of digital microproducts', International Journal of Electronic Commerce, 16(2): 91–114. https://doi.org/10.2753/JEC1086-4415160205.
Beatty, S. E. and Ferrell, E. M. (1998), 'Impulse buying: Modeling its precursors', Journal of Retailing, 74(2): 169–191. https://doi.org/10.1016/S0022-4359(99)80092-X.
Benjamini, Y. and Hochberg, Y. (1995), 'Controlling the false discovery rate: A practical and powerful approach to multiple testing', Journal of the Royal Statistical Society: Series B (Methodological), 57(1): 289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x.
Berinsky, A. J., Huber, G. A. and Lenz, G. S. (2012), 'Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk', Political Analysis, 20(3): 351–368. https://doi.org/10.1093/pan/mpr057.
Bongard-Blanchy, K., Rossi, A., Rivas, S., Doublet, S., Koenig, V. and Lenzini, G. (2021), 'I am definitely manipulated, even when I am aware of it. It's ridiculous! – Dark patterns from the end-user perspective', ArXiv:2104.12653 [Cs]. https://doi.org/10.1145/3461778.3462086.
Bösch, C., Erb, B., Kargl, F., Kopp, H. and Pfattheicher, S. (2016), 'Tales from the dark side: Privacy dark strategies and privacy dark patterns', Proceedings on Privacy Enhancing Technologies, 2016(4): 237–254. https://doi.org/10.1515/popets-2016-0038.
Brignull, H. (2010), Dark patterns: User interfaces designed to trick people. UX Brighton Conference, Brighton, UK. Retrieved from: https://www.slideshare.net/harrybr/ux-brighton-dark-patterns.
Buhrmester, M., Kwang, T. and Gosling, S. D. (2011), 'Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data?', Perspectives on Psychological Science, 6(1): 3–5. https://doi.org/10.1177/1745691610393980.
Cain, S. L. (2020), Slickdeals coronavirus pandemic impulse spending survey 2020. Slickdeals. Retrieved from: https://slickdeals.net/article/news/pandemic-impulse-spending-survey-2020/.
Camerer, C., Issacharoff, S., Loewenstein, G., O'Donoghue, T. and Rabin, M. (2003), 'Regulation for conservatives: Behavioral economics and the case for "asymmetric paternalism"', University of Pennsylvania Law Review, 151(3): 1211. https://doi.org/10.2307/3312889.
Census Bureau (2021), Quarterly retail e-commerce sales: 1st quarter 2021. Census Bureau. Retrieved from: https://www.census.gov/retail/mrts/www/data/pdf/ec_current.pdf.
Chan, T. K. H., Cheung, C. M. K. and Lee, Z. W. Y. (2017), 'The state of online impulse-buying research: A literature analysis', Information & Management, 54(2): 204–217. https://doi.org/10.1016/j.im.2016.06.001.
Cialdini, R. B. (2006), Influence: The psychology of persuasion (Revised ed.). NY: Harper Business.
Collins, P. H. (2010), Black feminist thought: Knowledge, consciousness, and the politics of empowerment. NY: Routledge.
Crisp, R. (1987), 'Persuasive advertising, autonomy, and the creation of desire', Journal of Business Ethics, 6(5): 413–418. https://doi.org/10.1007/BF00382898.
Di Geronimo, L., Braz, L., Fregnan, E., Palomba, F. and Bacchelli, A. (2020), 'UI dark patterns and where to find them: A study on mobile applications and user perception', in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3313831.3376600.
Dijksterhuis, A. (2004), 'Think different: The merits of unconscious thought in preference development and decision making', Journal of Personality and Social Psychology, 87(5): 586–598. https://doi.org/10.1037/0022-3514.87.5.586.
Dijksterhuis, A. and van Olden, Z. (2006), 'On the benefits of thinking unconsciously: Unconscious thought can increase post-choice satisfaction', Journal of Experimental Social Psychology, 42(5): 627–631. https://doi.org/10.1016/j.jesp.2005.10.008.
Dijksterhuis, A., Bos, M. W., Nordgren, L. F. and van Baaren, R. B. (2006), 'On making the right choice: The deliberation-without-attention effect', Science, 311(5763): 1005–1007. https://doi.org/10.1126/science.1121629.
Elliott, J. and Kiel, P. (2019), Inside TurboTax's 20-year fight to stop Americans from filing their taxes for free. ProPublica. Retrieved from: https://www.propublica.org/article/inside-turbotax-20-year-fight-to-stop-americans-from-filing-their-taxes-for-free.
Fernandes, D., Lynch, J. G. and Netemeyer, R. G. (2014), 'Financial literacy, financial education, and downstream financial behaviors', Management Science, 60(8): 1861–1883. https://doi.org/10.1287/mnsc.2013.1849.
Ferreira, K. J. and Goh, J. (2021), 'Assortment rotation and the value of concealment', Management Science, 67(3): 1489–1507.
Funkhouser, G. R. and Parker, R. (1999), 'An action-based theory of persuasion in marketing', Journal of Marketing Theory and Practice, 7(3): 27–40. https://doi.org/10.1080/10696679.1999.11501838.
Gavilan, D., Avello, M. and Martinez-Navarro, G. (2018), 'The influence of online ratings and reviews on hotel booking consideration', Tourism Management, 66: 53–61. https://doi.org/10.1016/j.tourman.2017.10.018.
Gerber, A. S. and Rogers, T. (2009), 'Descriptive social norms and motivation to vote: Everybody's voting and so should you', The Journal of Politics, 71(1): 178–191. https://doi.org/10.1017/S0022381608090117.
Haraway, D. (1988), 'Situated knowledges: The science question in feminism and the privilege of partial perspective', Feminist Studies, 14(3): 579–599.
Harding, S. (1992), 'Rethinking standpoint epistemology: What is "strong objectivity?"', The Centennial Review, 36(3): 437–470. https://www.jstor.org/stable/23739232.
Held, J. and Germelmann, C. C. (2018), 'Deception in consumer behavior research: A literature review on objective and perceived deception', Projectics/Proyectica/Projectique, 21(3): 119–145. https://doi.org/10.3917/proj.021.0119.
Hoch, S. J. and Loewenstein, G. F. (1991), 'Time-inconsistent preferences and consumer self-control', Journal of Consumer Research, 17(4): 492–507. https://doi.org/10.1086/208573.
Kahneman, D. (2011), Thinking, fast and slow (1st ed.). NY: Farrar, Straus and Giroux.
Kivetz, R. and Netzer, O. (2008), 'The synthesis of preference: Bridging behavioral decision research and marketing science', Journal of Consumer Psychology, 18(3): 179–186. https://doi.org/10.1016/j.jcps.2008.04.005.
Klimek, M., Wang, S. and Ogunkanmi, A. (2009), 'Safety and efficacy of Red Yeast Rice (Monascus purpureus) as an alternative therapy for hyperlipidemia', Pharmacy and Therapeutics, 34(6): 313–327.
Koufaris, M. (2002), 'Applying the technology acceptance model and flow theory to online consumer behavior', Information Systems Research, 13(2): 19.
Litman, L., Robinson, J. and Rosenzweig, C. (2015), 'The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk', Behavior Research Methods, 47(2): 519–528. https://doi.org/10.3758/s13428-014-0483-x.
Liu, Y., Li, H. and Hu, F. (2013), 'Website attributes in urging online impulse purchase: An empirical investigation on consumer perceptions', Decision Support Systems, 55(3): 829–837. https://doi.org/10.1016/j.dss.2013.04.001.
Loewenstein, G. (1996), 'Out of control: Visceral influences on behavior', Organizational Behavior and Human Decision Processes, 65(3): 272–292. https://doi.org/10.1006/obhd.1996.0028.
Lord, C. G., Lepper, M. R. and Preston, E. (1984), 'Considering the opposite: A corrective strategy for social judgment', Journal of Personality and Social Psychology, 47(6): 1231–1243. https://doi.org/10.1037/0022-3514.47.6.1231.
Luca, M. (2016), Reviews, reputation, and revenue: The case of Yelp.com. Harvard Business School NOM Unit Working Paper No. 12-016. Available at SSRN: https://ssrn.com/abstract=1928601 or http://dx.doi.org/10.2139/ssrn.1928601.
Luca, M. and Zervas, G. (2016), 'Fake it till you make it: Reputation, competition, and Yelp review fraud', Management Science, 62(12): 3412–3427. https://doi.org/10.1287/mnsc.2015.2304.
Luguri, J. and Strahilevitz, L. J. (2021), 'Shining a light on dark patterns', Journal of Legal Analysis, 13(1): 43–109. https://doi.org/10.1093/jla/laaa006.
Luo, H., Cheng, S., Zhou, W., Song, W., Yu, S. and Lin, X. (2021), Research on the impact of online promotions on consumers' impulsive online shopping intentions. 19.
Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M. and Narayanan, A. (2019), 'Dark patterns at scale: Findings from a crawl of 11K shopping websites', Proceedings of the ACM on Human-Computer Interaction, 3(CSCW): 1–32. https://doi.org/10.1145/3359183.
McDermott, J. (2021), America's problem with impulse spending. Finder.com. Retrieved from: https://www.finder.com/impulse-buying-stats.
McGrath, J. E. (1995), 'Methodology matters: Doing research in the behavioral and social sciences', in Baecker, R. M., Grudin, J., Buxton, W. A. S. and Greenberg, S. (eds), Readings in Human–Computer Interaction. Morgan Kaufmann, 152–169. https://doi.org/10.1016/B978-0-08-051574-8.50019-4.
Milkman, K. L. (2021), How to change: The science of getting from where you are to where you want to be. NY: Penguin Putnam.
Mills, S. (2020), 'Nudge/sludge symmetry: On the relationship between nudge and sludge and the resulting ontological, normative and transparency implications', Behavioral Public Policy, 1–24. https://doi.org/10.1017/bpp.2020.61.
Moser, C. (2020), Impulse buying: Designing for self-control with e-commerce [Doctoral dissertation]. University of Michigan.
Mullainathan, S. and Shafir, E. (2013), Scarcity: Why having too little means so much. NY: Times Books.
Mussweiler, T., Strack, F. and Pfeiffer, T. (2000), 'Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility', Personality and Social Psychology Bulletin, 26(9): 1142–1150. https://doi.org/10.1177/01461672002611010.
Nagtegaal, R., Tummers, L., Noordegraaf, M. and Bekkers, V. (2020), 'Designing to debias: Measuring and reducing public managers' anchoring bias', Public Administration Review, 80(4): 565–576. https://doi.org/10.1111/puar.13211.
Nouwens, M., Liccardi, I., Veale, M., Karger, D. and Kagal, L. (2020), 'Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence', in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376321.
Oechssler, J., Roider, A. and Schmitz, P. W. (2015), 'Cooling off in negotiations: Does it work?', Journal of Institutional and Theoretical Economics, 171(4): 565–588. https://www.jstor.org/stable/43956713.
Pocheptsova, A., Amir, O., Dhar, R. and Baumeister, R. F. (2009), 'Deciding without resources: Resource depletion and choice in context', Journal of Marketing Research, 46(3): 344–355. https://doi.org/10.1509/jmkr.46.3.344.
Rook, D. W. (1987), 'The buying impulse', Journal of Consumer Research, 14(2): 189–199. https://doi.org/10.1086/209105.
Sagarin, B. J., Cialdini, R. B., Rice, W. E. and Serna, S. B. (2002), 'Dispelling the illusion of invulnerability: The motivations and mechanisms of resistance to persuasion', Journal of Personality and Social Psychology, 83(3): 526–541. https://doi.org/10.1037//0022-3514.83.3.526.
Scheiber, N. (2017), How Uber uses psychological tricks to push its drivers' buttons. The New York Times. Retrieved from: https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html.
Shahab, S. and Lades, L. K. (2021), 'Sludge and transaction costs', Behavioral Public Policy, 1–22. https://doi.org/10.1017/bpp.2021.12.
Shank, D. B. (2016), 'Using crowdsourcing websites for sociological research: The case of Amazon Mechanical Turk', The American Sociologist, 47(1): 47–55. https://doi.org/10.1007/s12108-015-9266-9.
Sher, S. (2011), 'A framework for assessing immorally manipulative marketing tactics', Journal of Business Ethics, 102(1): 97–118. https://doi.org/10.1007/s10551-011-0802-4.
Shi, X., Li, F. and Chumnumpan, P. (2020), 'The use of product scarcity in marketing', European Journal of Marketing, 54(2): 380–418. https://doi.org/10.1108/EJM-04-2018-0285.
Sin, R., Murphy, R. O. and Lamas, S. (2019), 'Goals-based financial planning: How simple lists can overcome cognitive blind spots', Journal of Financial Planning, 32(4): 34–43.
Sprague, J. (2005), Feminist methodologies for critical researchers: Bridging differences. Walnut Creek: AltaMira Press.
Stanovich, K. E. and West, R. F. (2000), 'Individual differences in reasoning: Implications for the rationality debate?', Behavioral and Brain Sciences, 23(5): 645–665. https://doi.org/10.1017/S0140525X00003435.
Stern, H. (1962), 'The significance of impulse buying today', Journal of Marketing, 26(2): 59–62. https://doi.org/10.2307/1248439.
Strick, M., Dijksterhuis, A., Bos, M., Sjoerdsma, A., van Baaren, R. and Nordgren, L. (2011), 'A meta-analysis on unconscious thought effects', Social Cognition, 29: 738–762. https://doi.org/10.1521/soco.2011.29.6.738.
Stroop, J. R. (1935), 'Studies of interference in serial verbal reactions', Journal of Experimental Psychology, 18(6): 643–662. https://doi.org/10.1037/h0054651.
Sunstein, C. R. (2020), 'Sludge audits', Behavioral Public Policy, 1–20. https://doi.org/10.1017/bpp.2019.32.
Thaler, R. H. (2018), 'Nudge, not sludge', Science, 361(6401): 431. https://doi.org/10.1126/science.aau9241.
Thaler, R. H. and Sunstein, C. R. (2009), Nudge: Improving decisions about health, wealth, and happiness. NY: Penguin Books.
Vonasch, A. J., Vohs, K. D., Ghosh, A. P. and Baumeister, R. F. (2017), 'Ego depletion induces mental passivity: Behavioral effects beyond impulse control', Motivation Science, 3(4): 321–336. https://doi.org/10.1037/mot0000058.
Weinberg, J., Freese, J. and McElhattan, D. (2014), 'Comparing data characteristics and results of an online factorial survey between a population-based and a crowdsource-recruited sample', Sociological Science, 1(19): 292–310. https://doi.org/10.15195/v1.a19.
Wells, J., Parboteeah, V. and Valacich, J. (2011), 'Online impulse buying: Understanding the interplay between consumer impulsiveness and website quality', Journal of the Association for Information Systems, 12(1): 32–56. https://doi.org/10.17705/1jais.00254.
Wendel, S. (2020), Who is doing applied behavioral science? Results from a global survey of behavioral teams. Behavioral Scientist. Retrieved from: https://behavioralscientist.org/who-is-doing-applied-behavioral-science-results-from-a-global-survey-of-behavioral-teams/.
Wie, D. and Kim, H. (2015), 'Between calm and passion: The cooling-off period and divorce decisions in Korea', Feminist Economics, 21(2): 187–214. https://doi.org/10.1080/13545701.2014.999008.
Wu, Y., Xin, L., Li, D., Yu, J. and Guo, J. (2021), 'How does scarcity promotion lead to impulse purchase in the online market? A field experiment', Information & Management, 58(1): 103283. https://doi.org/10.1016/j.im.2020.103283.
Figures and tables

Table 1. Sample characteristics, studies 1 and 2.

Figure 1. Fact Sheet About Red Yeast Rice, Adapted From the National Institutes of Health.

Figure 2. Product Information About Red Yeast Rice, Taken From Amazon.com.

Figure 3. (L) Control; (R) Dark Patterns: (T) High-Demand, (M) Positive Testimonials, (B) Limited-Quantity.

Figure 4. Results From Pairwise t-Test of Dark Patterns and Average Purchase Impulsivity, Separated by Experimental Conditions. *p < 0.05, **p < 0.01, ***p < 0.001.

Table 2. Experimental design with 20 conditions, enumerated.

Figure 5. Example of an Item with Limited-Time (Top Left), Limited-Quantity (Top Right), High-Demand (Bottom Left) and Positive Testimonials (Bottom Right) Dark Patterns. The Control Group Does Not Have Any Accompanying Dark Patterns.

Figure 6. Three Types of Intervention Tasks: Two-Minute Delay Postponement (Left), Reflection (Middle) and 10 × 10 Distraction Grid Task (Right). Control Groups Were Not Asked to Complete Any Tasks.

Table 3. Two-way mixed ANOVA results on purchase impulsivity.

Figure 7. Limited-Time Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 8. Limited-Quantity Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 9. Positive Testimonials Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 10. High-Demand Dark Pattern: Results of Pairwise Comparison t-Test on Average Purchase Impulsivity, Separated by Interventions. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 11. Comparison of Average Purchase Impulsivity Before and After Interventions Across Experimental Conditions.

Table 4. Repeated measure ANOVA of interventions on average purchase impulsivity, by experimental groups.