
Unpacking Deceptive Design: Centering Trust and Consumer Safety in Digital Interactions

By Titiksha Vashist

How do we unpack the impact of deceptive design practices on people in Global South countries? How does deceptive design intersect with privacy, competition in markets and consumer protection? This series, written by researchers, artists and civil society, looks at understudied harms, communities and experiences.

Deceptive designs, also called ‘dark patterns’ or deceptive patterns, are prevalent across the web today, embedded in the digital interfaces (screen and voice) through which we learn, gain information, socialise and shop online. These UI/UX choices are called ‘deceptive’ because of their impact on the user: they manipulate or deceive users into making decisions and choices they may not otherwise take, leading to consumer harms such as loss of privacy, financial loss, loss of time, cognitive burden, psychological distress, and a decline of trust online, among others. These patterns impair decisional autonomy, obscure information in online markets and trick users into behaving in ways that often benefit the platforms that deploy them. Frequently part of the default settings of apps and websites, these practices are built into the online experiences and platforms we navigate every day.

Academics and researchers across the fields of privacy and data protection, human-computer interaction, digital rights and responsible design have gathered ample evidence since 2010, when the term ‘dark patterns’ was first coined, of how deceptive design causes active harm, especially to marginalised communities including senior citizens, women and gender minorities, children, lower-income families, and people who are new to the digital sphere. Research on deceptive design focuses on how these patterns can be classified and understood within a shared taxonomy, given that deceptive patterns take many dynamic shapes and forms and often evolve rapidly. Classification and taxonomies help create linguistic consensus and aid the evolution of legal and policy language around the term, thus facilitating policy and regulatory movement.

Fig. Mapping personal and structural consumer harms resulting from deceptive design practices.

While deceptive design is increasingly being recognised in countries across the world as a multi-faceted issue, research suggests that it impacts not just the privacy of consumers but also has systemic impacts on the digital economy. The last two years have seen significant policy shifts pertaining to deceptive design, with the Federal Trade Commission in the US taking up the issue and the Digital Services Act in the EU taking initial steps to regulate deceptive patterns. In a significant legal win, the Italian Data Protection Authority issued a decision against Ediscom, specifically referring to “dark patterns” in the hearing. According to Prof. Christiana Santos, a legal scholar working on deceptive design, this first use of the term in a legal sense sets a precedent for its further use in case law and regulatory decisions. The Australian Competition and Consumer Commission (ACCC) has released a consultation on introducing laws against ‘unfair business practices’ by companies such as Amazon. Research shows that subscription traps, such as those on e-commerce websites, affect 3 out of 4 consumers in Australia. Australian consumers must navigate confusing language and deliberately confusing UX to unsubscribe from services, unlike in Europe, where unsubscribing was made a simple two-step process in 2022 after the European Commission stated that Amazon had breached the Unfair Commercial Practices Directive.

The use of deceptive design has also been linked to anti-competitive practices in the digital marketplace, and may pave the way for large companies to exploit personal data for competitive advantage. Beyond privacy, a range of consumer harms has attracted regulatory attention from consumer protection agencies in several jurisdictions, including Norway, the European Union, the United States, India and Australia. As technology products and companies expand their presence across the globe and open new markets, deceptive design, too, becomes a global challenge. Yet the conversation around deceptive design has largely focussed on the European and American experience of deception online. The harms of deceptive practices manifest differently in different social and political contexts, including Global South countries like India. Communities of users in these contexts may be increasingly vulnerable, especially given the English-first nature of the internet and the fact that millions of people are coming online for the first time, often without exposure to digital literacy. These factors exacerbate vulnerability along lines of language, socio-economic position and access to education.

This research series aims to expand the conversation around deceptive design beyond dominant contexts and to foreground perspectives from diverse regions, sectors and experiences.

The 'Unpacking Deceptive Design Series' was envisioned as a collaborative space for researchers, artists, civil society advocates, and interested members of the public to contribute from diverse disciplinary perspectives and fill knowledge and awareness gaps on the issue. The series invited contributions reflecting on deceptive design practices as they intersect with competition in digital markets, data protection and privacy, consumer protection online, financial security, human rights and social security across jurisdictions, among other topics. The research series is part of the Design Beyond Deception project by The Pranava Institute, and this book serves as a companion for those who may benefit from the Manual for Designers created as the core output of the project. We have been fortunate to receive essays from researchers working in this space, who have contributed to the discussion in unique ways.

In her essay titled ‘Crafting a Definition for Deceptive Design/Dark Patterns Is Harder Than It Seems’, design researcher and critical designer Caroline Sinders asks the fundamental question: what makes an aspect of design manipulative or deceptive? The lack of a common definition of deceptive design or patterns, the ubiquity of these patterns, and their ever-morphing nature lead Sinders to suggest that, in order to regulate such practices, design and context must be taken into account. Sinders draws on her work with the Information Commissioner’s Office (ICO) in the UK to bridge the gap between design and regulation by working towards a foundational definition.

Turning to the Global South context, Monami Dasgupta, Vinith Kurian and Rajashree Gopalakrishnan painstakingly gather evidence of deceptive patterns in India’s fintech apps. Their analysis breaks down the user journeys of nine popular apps across four financial services categories (lending, insurance, investments, and neo-banking), guided by the OECD taxonomy of deceptive patterns, to map the patterns observed against the types of harm they may cause to the user. These findings are first-of-their-kind evidence linking online deceptive patterns to possible harms in India’s rapidly growing fintech sector. This research is especially crucial given the impact of these practices in tier-two and tier-three towns in India, and it corroborates documented instances of financial loss.

Is deceptive design limited to visual interfaces? The growing use of voice interfaces, especially in India and other non-English-speaking countries, shows how speech can widen the net of digital communication and allow more people to access services online. Research shows that much of the developing world uses voice interfaces in regional and local languages for search, as evidenced by Google Search usage data. Add to this technologies like Amazon Alexa and IoT devices, which are flooding the market because of their accessibility and often low cost. Saumyaa Naidu and Shweta Mohandas explore how deceptive design practices play out in voice interface technologies: they may make it hard for consumers to unsubscribe from services through voice (even though they can subscribe with a voice command), and they make discoverability beyond defaults a challenge. The essay locates deceptive design in voice interfaces and analyses its impacts on inclusivity, accessibility, and privacy.

While 2022 was a big year for policy tackling deceptive design globally, including in the US and the EU, Sacha Robahmed and Noor Chaabene explore the status of deceptive design reporting, policy and regulation in South West Asia and North Africa (SWANA). Before there can be policy change and increased awareness in the SWANA region, there needs to be a way to identify, gather and share data on deceptive design practices. Where are SWANA’s internet users being tricked, fooled, or deceived by deceptive design practices? What types of deceptive design are they facing? And are technology companies doing anything about deceptive design practices, whether implementing new ones or remedying existing ones? These questions serve as important starting points for their essay titled ‘Exploring the Potential of App Reviews to Identify Deceptive Design Practices in Arabic-Speaking Countries’.

Can tools that are expected to enhance accessibility for visually impaired users themselves create deceptive interfaces? Maitreya Shah, Fellow at the Berkman Klein Center at Harvard University, examines the user interface (UI) design strategies of accessibility overlay tools and their implications for internet access for people with disabilities. Shah also makes a pertinent point: people with disabilities form a large community online and rely on technology more heavily than their non-disabled counterparts, yet the design of the internet often makes them more vulnerable to harm and significantly hinders access. This also makes them more vulnerable to deceptive patterns. For instance, screen readers are used to access the web, and anything that tampers with that experience directly affects the user. Factoring in vulnerability therefore becomes crucial for communities and groups with a greater reliance on technology, and designing for accessibility and trust must go hand in hand.

Finally, do deceptive designs affect issues such as competition in digital markets? Isha Suri, in her piece ‘Identifying Anticompetitive Harms from Deceptive Design in India’, investigates how multi-sided platforms and information platforms deploy deceptive design, often to skew markets in their favour, creating anti-competitive harms rooted in personal data collection, switching costs, and platform recommendation algorithms. Suri also recommends approaches policymakers will need in order to tackle an issue as multi-faceted as deceptive design.

As technologies continue to evolve and newer interfaces emerge, deception takes new forms. Adopting principles of ethical, human-rights-centred design is crucial when building the technologies of the future. The boundaries of technology are changing constantly and rapidly, giving rise to interfaces beyond the digital screen: immersive experiences like AR and VR are potential grounds for new forms of deception, where the line of deception becomes increasingly blurred, more complex to categorise and therefore harder to regulate. Meanwhile, the rapid pace of generative AI models for user interfaces, trained on existing interfaces riddled with deception, is likely to increase both the volume and novelty of deceptive design across apps, websites and voice interfaces alike. It is therefore imperative that regulatory measures do not limit themselves to existing interfaces and their taxonomies, but instead locate deception within human-technology interaction as a whole, to design a collective future that is beyond deception.





Titiksha Vashist

Co-Founder and Lead Researcher at

The Pranava Institute


Titiksha Vashist. (2023). Unpacking Deceptive Design: Centering Trust and Consumer Safety in Digital Interactions. The Pranava Institute.

