
Crafting a Definition for Deceptive Design/Dark Patterns Is Harder Than It Seems

By Caroline Sinders

This article examines the need for a definition of deceptive design by exploring the following questions: what makes an aspect of design ‘deceptive’ or ‘manipulative’? Where do the boundaries of exploitative design end and bad design or poor product decisions begin?


A few weeks ago, I noticed something strange. Instagram stories of mine started appearing on Facebook as stories, even though I had never linked the accounts. Facebook often changes settings, from security settings to terms of service and more. It was perplexing to have my stories linked- I have different communities on different platforms, and some of the content I post on Instagram I *don’t* want on Facebook. I’m sure there was some small pop up, in fine print, that I perhaps clicked yes on that allowed for this linking- but this lack of a UI cue, and the lack of a legible and understandable cue, feels like a form of deceptive design patterns, sometimes commonly referred to as dark patterns.

What Makes Design “Deceptive” or “Manipulative”?

This piece aims to explore the power of naming a phenomenon and drawing boundaries around what is considered part of that phenomenon, so that anyone, the general public and researchers alike, can better understand it. Within that, this piece aims to explore: what makes an aspect of design ‘deceptive’ or ‘manipulative’? Where do the boundaries of exploitative design end and bad design or poor product decisions begin? As a design and policy researcher, this is an area of focus I’ve been exploring in a postdoctoral fellowship. To start trying to answer this question, it’s worth starting at the beginning with a definition. But clarity isn’t there, either. The definition itself varies from researcher to researcher and institution to institution, with some definitions including intentionality, while others cover both intentional and unintentional design, or remove intentionality altogether.

There is an ongoing, necessary debate about what to name manipulative design practices (deceptive versus dark) that is very much worth discussing but, due to this article’s length constraints, I can’t get into here. I’ll be referring to them as ‘deceptive design patterns/dark patterns’, shortened to ‘DDP/DP’.

Going back to my original quandary- how can we draw boundaries around what is considered a deceptive design pattern/dark pattern? Other thresholds can be created to help determine or decipher when a product has DDP/DP within it.

Catalina Goanta, an Associate Professor of Law at Utrecht University, outlined and compiled different signifiers that could be used to recognize DDP/DP, such as:

  • impeding or subverting information flows

  • the content itself being problematic with wording and context of the sentences and related information “such as the use of framing, motivational language, humor, etc.”

  • “choice information”, including the information that is shown or given to consumers when they are presented with choices in products and services, which can include prices, features, ingredients, components, etc.

  • the presentation of information, be it visual or in copywriting, including long or complex privacy policies, information hidden behind hyperlinks or less visible buttons, pop ups, etc.

  • the presentation of information, including visual or graphical user interface design with examples like default settings or how the disclosure of consumers’ information is presented

  • the lack of information; the hiding or obscuring of information and more.

Even with this guidance, Goanta makes a point that I’ve been ruminating over for the past five years in my practice:

By using too low of a threshold for categorizing an information flow as a dark pattern, we risk the watering down of the concept itself: if everything is a dark pattern, then nothing is a dark pattern.

What makes something DDP/DP? In my mind, design and context matter. I use ‘design’ as a term quite broadly- everything around us is designed, including even the most technical of software. Design patterns are “reusable/recurring components which designers use to solve common problems in user interface (UI) design,” which can manifest as checkout flows, login or onboarding flows, or conventions for designing mobile apps for iOS or Android. Design patterns can extend to the non-visual. For example, while voice recognition does not have a visual interface, it does have an interface, and a series of interaction design patterns that users come to know and engage with, as seen with prompts like “Alexa” or “Okay, Google”. Others describe design patterns as well-verified solutions to recurring problems. Think of design patterns as a form of wayfinding; they are ways to situate users, heighten user experience, and hopefully, minimize frustration, friction and cognitive burden.

The goal of a design pattern is to be a manifestation of best practices, with input from designers and user testing. It is those learned behaviors and expectations, conditioned and reinforced by industry-standard design patterns, that DDP/DP subvert, manipulate and exploit.

In situating what makes a deceptive design pattern deceptive, the context of the DDP/DP must be defined and introduced. When asking whether DDP/DP are visual only, we could say no, but I don’t think a general definition or example will suffice. A DDP/DP definition and taxonomy for gaming could, and should, be slightly different, or rather extremely context specific, compared to one for general technology software and products, and a DDP/DP taxonomy for non-visual products like voice interfaces and general IoT devices should differ from one for general software.

Drawing upon actor-network theory, which situates how objects, people and events have interconnected and related relationships, we can see why contextual and context-specific definitions, taxonomies and boundaries are needed to understand the impacts of design upon individual users and groups of communities. Thus, just thinking about DDP/DP as visual elements would not be enough; we need to zoom out and look at the context of how a product is used, where, what similar products and patterns exist, the overall ‘design’ of a product, and then the particular, granular product features that are weaponised. It’s this dance of the specific, small and explicit examples, coupled with context and the entire product, where actor-network theory is a good lens to help analyze the phenomenon of DDP/DP.

Putting this into practice, when I’m evaluating what is DDP/DP, I look at pre-existing design patterns, design flows, and related design research. Part of what creates a DDP/DP is subverting the design flow and finding a way to exploit, manipulate or confuse the user- this is how I start an evaluation process. A microservice app that combines, and thus hides, a fee within a tipping UI; an email unsubscribe flow that swaps ‘end subscription’ with ‘stay subscribed’; a cancellation that can only be completed by calling a representative; a cookie consent banner that hides the reject option under a hyperlink- these are all forms of dark patterns. But within each of these descriptions, we can see the way things ‘should have been’ and point to a series of steps that were subverted and changed.

But what if the ‘standard’ design pattern itself is harmful? One could argue the surveillance capitalism of the always-listening machines of voice interfaces and IoT devices is harmful. But when those standards are the norm, what becomes the deception, exploitation or manipulation of that product? Referring back to the ‘standard’, unlike in the previous examples, wouldn’t help ‘prove’ or illuminate what is DDP/DP in a voice flow when the standardized flow itself is also the harm. Voice interfaces and IoT devices will need another standard, or another mode of analysis, to determine DDP/DP than non-voice interfaces do.

What is needed is a broad definition, and a series of specific and contextual frameworks, definitions and examples, per domain (e-commerce/shopping, social media, privacy, and gaming) and then per how the software is placed in hardware. For example, with voice interfaces, a contextual framework alongside the broad definition would guide the analysis of a voice-based product, allowing us to analyze how the voice interface is placed into a product or device, treating the voice interaction and the physical device as distinct but overlapping objects of design analysis. Or, with gaming software, seeing how the software is placed in gaming hardware like PlayStations, Switch devices, etc. Moving forward, it’s understanding how ‘traditional software’ might differ from VR/AR/XR, and seeing those as separate categories with their own standards that can build on a shared, broad definition of DDP/DP. I argue for this because recognizing the context of how software is made, and where it’s placed, is important. A voice interface is different from an e-commerce app, but both can be riddled with manipulation, confusion, and deceit.



Caroline Sinders

Caroline Sinders is an award-winning design researcher and critical designer. She’s a postdoctoral fellow with the UK’s Information Commissioner's Office, focusing on online choice architecture, deceptive design patterns and AI. She’s also the founder of the human rights and design lab Convocation Research + Design. She’s worked with the Tate Exchange at the Tate Modern, the United Nations, Ars Electronica’s AI Lab, the Harvard Kennedy School and others.


Sinders, C. (2023). Crafting a Definition for Deceptive Design/Dark Patterns Is Harder Than It Seems. The Unpacking Deceptive Design Research Series. The Pranava Institute.

