Skepticism is not a problem to be rectified. It is a signal to be heard.
When a first-generation digital user in rural Indonesia is pointed to an AI-powered loan application, their hesitancy is not a lack of knowledge. It is earned. They have watched systems that promised help and extracted instead. They have been treated as data points.
Low-trust markets are also some of the fastest-growing digital markets on the planet. Southeast Asia, Sub-Saharan Africa, Latin America, and parts of South Asia are seeing rapid growth in AI-powered services – digital banking, telemedicine, insurance, and government tools. The commercial opportunity is enormous. The design challenge is no less serious.
You can’t take a product designed for highly trusting users and expect it to work here. The assumptions built into it don’t hold, and when they fail, they reinforce the very skepticism that made the market challenging in the first place.
This post explains what it actually takes to design AI for these markets.
Understanding Why Trust Is Low
Distrust of technology is not irrational in markets where it runs deep. It is the rational response to an established history.
Formal financial systems have excluded the poor and made opaque decisions with no path to appeal. Healthcare has underserved rural populations. Government services have been unavailable or corrupt. Now AI, designed mostly in rich countries for users who look like the designers, shows up asking for data, permissions, and trust.
AI systems moving into these markets have often looked like more advanced versions of the institutions that already failed these users. That is a design problem, not a communications problem.
Understanding the specific sources of distrust in a target market is the prerequisite to building anything that will actually be used.
Transparency Is Not a Feature. It Is the Product.
In high-trust markets, AI transparency is treated as a compliance exercise. In low-trust markets, it is what makes the product viable at all.
Users with no baseline trust in AI need to understand, in every interaction, what the system is doing, why, and what will happen to their information.
This means explaining data use at the point of collection, not burying it in terms nobody reads. It means showing the reasoning behind AI decisions – why a loan was declined, why a recommendation was made – in language users can act on. It means being explicit about limitations and about when a human is available.
Default patterns – vague permission requests, generic error messages, faceless automated decisions – are built for trust that is already there. In low-trust environments, they confirm precisely what users are afraid of.
Permission Architecture in Low-Trust Contexts
Data collection is where low-trust users form their first impression of whether a product respects them.
The standard approach – requesting a broad set of permissions up front, with vague justifications – reads like extraction. Without clear explanations tied to user benefit, these requests signal a product that harvests data rather than serving people.
Permission design for low-trust markets starts by asking for the minimum required to accomplish the immediate task, and explaining why. Progressive requests, in which data is collected incrementally as the user builds confidence, see far higher acceptance rates than front-loaded permission walls.
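To make the pattern concrete, here is a minimal sketch of progressive permission requests in TypeScript. The milestone names, permissions, and copy below are illustrative assumptions, not a prescribed API.

```typescript
// Hypothetical sketch: progressive permission requests tied to trust milestones.
type Permission = "contacts" | "location" | "income_history";

interface PermissionRequest {
  permission: Permission;
  // Plain-language reason, shown at the moment of the request.
  justification: string;
  // Only ask once the user has reached this milestone.
  unlockedAfter: "signup" | "first_transaction" | "repeat_use";
}

// Each request is tied to an immediate task and a trust milestone,
// instead of being front-loaded at onboarding.
const permissionPlan: PermissionRequest[] = [
  {
    permission: "income_history",
    justification: "We use your last two months of income to size a loan you can repay.",
    unlockedAfter: "signup",
  },
  {
    permission: "location",
    justification: "Knowing your district lets us show repayment points near you.",
    unlockedAfter: "first_transaction",
  },
];

// Return only the requests that are due at the user's current milestone.
function requestsDueAt(milestone: PermissionRequest["unlockedAfter"]): PermissionRequest[] {
  return permissionPlan.filter((r) => r.unlockedAfter === milestone);
}
```

The point is structural: every request carries its own plain-language justification, and no request is made before the user has a reason to grant it.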
The Language of Trust
How an AI product communicates has a profound effect on whether trust forms.
Clinical language signals distance. “Your application has been processed, and a determination has been issued” reads as evasion. “Your application was declined because your income history goes back only two months. Here is what changes that” reads like a product that is actually trying to help.
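As a sketch of what this looks like in code (the decision shape and reason codes below are hypothetical), a single rendering function can turn an automated outcome into an actionable explanation:

```typescript
// Hypothetical decision shape; reason codes are illustrative, not a real API.
interface LoanDecision {
  approved: boolean;
  reasonCode: "short_income_history" | "insufficient_income" | "unknown";
  monthsOfHistory?: number;
}

// Render the decision in plain, actionable language instead of
// "a determination has been issued".
function explainDecision(d: LoanDecision): string {
  if (d.approved) {
    return "Your loan was approved.";
  }
  switch (d.reasonCode) {
    case "short_income_history":
      return (
        "Your application was declined because your income history covers " +
        `only ${d.monthsOfHistory ?? "a few"} months. After six months of ` +
        "recorded income, you can reapply."
      );
    case "insufficient_income":
      return (
        "Your application was declined because your recorded income is " +
        "below the repayment threshold for this loan size. A smaller loan " +
        "may be available."
      );
    default:
      return "We could not complete your application. Tap below to talk to a person.";
  }
}
```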
Systems that acknowledge uncertainty and treat user concerns as legitimate build trust faster.
Many low-trust markets are multilingual and span wide ranges of literacy. Users may not be comfortable in the dominant language, and may be reaching the product on a small screen over an unreliable connection. These constraints need to shape communication design.
Chatbots and Conversational AI in Low-Trust Markets
Conversational interfaces hold particular promise in low-trust markets: they meet users in the most human of mediums, conversation, without requiring them to navigate unfamiliar digital conventions.
Well-designed chatbot UX design for a microfinance platform guides a user through an application in their local language, explains why each piece of information is needed, and communicates outcomes plainly – fundamentally different from a form-based interface that expects trust without explanation.
But conversational AI fails badly when it is dishonest about what it is. Presenting a chatbot as a human advisor – deceptive anthropomorphism – is especially damaging where users already suspect they are being duped. The damage to trust is severe and seldom recoverable.
Good conversational UI/UX in low-trust contexts is clear from the first interaction about what it is, what it can and cannot do, and how to escalate to a human.
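One way to enforce that clarity is to treat the disclosure as structured data rather than ad hoc copy. The sketch below assumes a hypothetical BotDisclosure shape; the field names and wording are illustrative.

```typescript
// Illustrative first-message disclosure for a conversational interface.
interface BotDisclosure {
  identity: string;        // what it is
  capabilities: string[];  // what it can do
  limitations: string[];   // what it cannot do
  escalation: string;      // how to reach a human
}

const openingDisclosure: BotDisclosure = {
  identity: "I am an automated assistant, not a person.",
  capabilities: [
    "Help you complete a loan application in your language",
    "Explain why each piece of information is requested",
  ],
  limitations: ["I cannot approve or decline a loan myself"],
  escalation: "Type 'agent' at any time to speak with a human advisor.",
};

// Assemble the disclosure into the bot's opening message.
function renderOpening(d: BotDisclosure): string {
  return [
    d.identity,
    "I can: " + d.capabilities.join("; ") + ".",
    "I cannot: " + d.limitations.join("; ") + ".",
    d.escalation,
  ].join("\n");
}
```

Because the disclosure is structured, it can be audited, translated, and kept consistent across channels.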
Building Credibility Through Design
Trust in low-trust markets is built not only through transparency but through visible credibility signals that users recognize.
According to Edelman’s Trust Barometer, trust in technology companies in emerging markets is shaped substantially by association with local institutions and peer networks rather than by global brand recognition. An AI product recommended by a local NGO or community health worker carries more credibility than one arriving with only its own branding.
Credibility signals should not default to Western conventions – security badges, corporate logos – but should be built around the trust cues that matter in the target market. Can the design surface local endorsements authentically?
Feedback Loops and Visible Accountability
Low-trust users need to see that their feedback matters. Where institutional trust has been absent, visible accountability is the mechanism that rebuilds it.
An AI product that makes decisions with no appeal mechanism reproduces the institutional patterns that created the distrust. A product that explains appeals in plain terms and makes escalation real – not hidden behind automated deflection – builds trust that compounds over time.
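A minimal sketch of what a visible appeal mechanism might look like as data, assuming hypothetical statuses and fields:

```typescript
// Hypothetical appeal record for an automated decision.
type AppealStatus = "received" | "under_human_review" | "resolved";

interface Appeal {
  decisionId: string;
  reasonGivenToUser: string; // the explanation the user is contesting
  submittedAt: Date;
  status: AppealStatus;
  reviewer?: string;         // the named human assigned to the case
  resolution?: string;
}

// The user can always see where their appeal stands.
function statusMessage(a: Appeal): string {
  switch (a.status) {
    case "received":
      return "We received your appeal and a person will review it within 3 days.";
    case "under_human_review":
      return `${a.reviewer ?? "A reviewer"} is looking at your case now.`;
    case "resolved":
      return a.resolution ?? "Your appeal has been resolved.";
  }
}
```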
A World Bank study of digital financial services in low-income markets found that products with visible complaint and appeal mechanisms had significantly higher repeat usage rates. Accountability is not simply an ethical matter. It is commercially rational.
Skepticism Is an Asset If You Design for It
Most product teams treat low-trust users as a problem to work around: reduce friction, get users past their hesitation.
That framing is wrong. Products built on it fail, often the moment a competitor arrives with a design that actually builds trust.
Low-trust markets aren’t waiting for better onboarding animations. They’re waiting for AI products that work differently from the systems that broke their trust: products that explain themselves, ask only for what they need, are honest about their limitations, and offer recourse when something goes wrong.
The hardest users to win are often the most loyal once won. The product that earns trust where trust is scarce owns something no competitor can easily replicate.
Design for skepticism. It is the most honest design brief in the world.
Ready to create AI products that win trust in the markets where it matters most?
Our team designs experiences for users who need a reason to believe.
Frequently Asked Questions

What makes a market “low-trust”?
A low-trust market is one where the intended user population has documented historical reasons to distrust digital systems: financial exclusion, opaque algorithmic decisions, misuse of data, or products designed for other populations and deployed without adjustment. The distrust is a rational response to documented experience.

How do you identify the sources of distrust in a target market?
Through research that goes beyond standard usability testing: ethnographic work, community immersion, interviews with trusted intermediaries, and analysis of the specific systems that caused the distrust. The goal is to understand the experiences that have shaped how users relate to technology, and that research cannot be desk-based.

Why do standard UX patterns fail in low-trust markets?
Standard patterns presuppose a baseline of trust. Permission requests assume users will grant them. Data collection assumes users accept that products need data. Decision presentation assumes users will accept outcomes without explanation. None of these assumptions holds in low-trust contexts, and products built on them read as extractive and opaque.

What role do local partnerships play?
A significant one. Trust in low-trust markets is relational and community-based. Products distributed through trusted local entities – NGOs, health programmes, microfinance institutions, community leaders – inherit a credibility that no global branding can match. Local partnership should be a design input, not just a go-to-market strategy: it should shape how the product presents itself, what language it uses, and how it handles escalation.

How should AI products handle errors?
Honestly and with visible accountability. Errors that are covered up destroy trust irreversibly. Errors that are acknowledged and corrected through accessible processes demonstrate responsibility – which is precisely what distinguishes trustworthy AI from the systems that failed these users.