BangladeshAI.org · Intelligence Builds Nations
Policy · 2025-10-20 · 17 min read

AI Risk Management & Non-Negotiables for Bangladesh

Not all AI is good AI. This paper identifies the 8 specific AI risks most relevant to Bangladesh's context, proposes non-negotiable red lines, and presents a risk management framework appropriate for Bangladesh's governance capacity.


Publication Date: October 2025

Classification: Policy Research Paper

Note: This paper is deliberately written to be readable by non-technical policymakers.

---

Introduction: The Other Side of AI

BangladeshAI.org is fundamentally optimistic about AI's potential for Bangladesh. But responsible AI advocacy requires equal clarity about AI's risks — not to prevent AI adoption, but to ensure Bangladesh adopts AI intelligently, with eyes open to what can go wrong.

This paper has two purposes:

1. Risk mapping: Identify the 8 AI risks most likely to harm Bangladesh specifically — not the risks most discussed in American or European AI policy circles, but the risks that matter for a country with Bangladesh's specific governance context, demographics, and vulnerabilities.

2. Non-negotiables: Define clear red lines — specific AI applications that Bangladesh should prohibit regardless of claimed benefits — and a risk management framework that is calibrated to Bangladesh's actual governance capacity.

---

The 8 Priority Risks for Bangladesh

Risk 1: AI in Political Manipulation

The risk: Generative AI makes it trivially cheap to produce realistic fake videos (deepfakes), AI-generated disinformation at scale, and targeted political messaging that can be personalised to millions of voters simultaneously.

Why especially dangerous for Bangladesh:

  • Social media penetration is high (40M+ Facebook users); content spreads rapidly
  • Bangla deepfake detection tools barely exist
  • Bangladesh has a history of politically charged rumour cycles that cause violence
  • Institutional verification infrastructure (fact-checking, media literacy) is limited

Real examples already occurring:

  • Deepfake audio recordings attributed to political figures have circulated in Bangladesh
  • AI-generated images of political events spread during 2024 political unrest
  • WhatsApp misinformation chains with AI-generated content

Required responses:

  • Bangla deepfake detection tools (open-source; funded by BAIRI)
  • Electoral AI code of conduct (Election Commission; restrict AI-generated political advertising)
  • Platform liability for AI disinformation (BTRC regulation)
  • AI media literacy in school curriculum

Red line: AI-generated political advertisements, fake videos of public figures, and AI-automated disinformation operations must be criminalised.

---

Risk 2: Biometric Surveillance Expansion

The risk: AI-powered facial recognition and biometric surveillance systems are cheap, effective, and increasingly being deployed by governments — including Bangladesh — without adequate legal frameworks.

Bangladesh context:

  • National Biometric Database (Smart NID) — 110M+ citizens registered
  • CCTV expansion in Dhaka, Chittagong
  • Bangladesh has used phone surveillance with limited judicial oversight
  • Smart NID data has been reportedly sold on the dark web (2021 data breach)

The harm pathway: Biometric surveillance without legal safeguards enables:

  • Political surveillance of opposition, journalists, civil society
  • Mass tracking of religious minorities and protesters
  • "Function creep" — data collected for one purpose used for another
  • Discriminatory enforcement (AI facial recognition has significantly higher error rates for dark-skinned faces)

Required safeguards (non-negotiable):

  • No real-time biometric surveillance in public spaces without judicial warrant
  • No retroactive biometric searches without warrant
  • Audit trail for all biometric database queries
  • National Biometric Privacy Act before any AI-linked biometric expansion
  • Independent oversight of Smart NID data access

Red line: Mass biometric surveillance without warrant — this must be explicitly prohibited in Bangladesh's AI Act.

---

Risk 3: AI-Enabled Financial Exclusion

The risk: AI credit scoring and financial decision systems trained on limited or biased data can exclude large segments of Bangladesh's population from financial services — particularly rural women, day labourers, and informal sector workers.

Bangladesh context:

  • Mobile financial services (bKash, Nagad) have reached 50M+ previously unbanked users
  • Banks are increasingly exploring AI credit scoring
  • Most AI credit models are trained on formal employment and formal credit history — irrelevant for 60%+ of Bangladeshi workers
  • Women have lower formal financial history even when creditworthy

The harm pathway:

An AI system that scores applicants on formal employment and a fixed address will systematically deny credit to:

  • Rural smallholder farmers (income is seasonal and informal)
  • Women working in home-based industries
  • Day labourers with irregular income
  • Rickshaw pullers, domestic workers, small vendors

This is not hypothetical — early AI credit scoring in India showed these exact bias patterns.

Required responses:

  • Mandatory bias audits for AI credit systems (Bangladesh Bank requirement)
  • Alternative data frameworks (mobile money transaction history, utility payments, agricultural records)
  • Human appeal right for AI credit decisions
  • Minimum representation requirements for training data (must include rural and female data)

Non-negotiable: No AI credit system deployed without bias audit across gender, district, and income level.
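The bias audit mandated above can be made concrete with a simple disparate-impact check: compare each group's approval rate to a reference group's and flag any ratio that falls below a threshold. The sketch below is illustrative only; the group labels, the synthetic decisions, and the 0.80 threshold (the common "four-fifths rule") are assumptions, not Bangladesh Bank requirements.

```python
# Illustrative disparate-impact check for an AI credit system.
# Group names, synthetic data, and the 0.80 threshold are assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's.
    Ratios below 0.80 flag potential bias under the four-fifths rule."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rates[g] / ref for g in rates}

# Synthetic example: urban male applicants approved at 80%,
# rural female applicants at 40%.
decisions = (
    [("urban_male", True)] * 80 + [("urban_male", False)] * 20 +
    [("rural_female", True)] * 40 + [("rural_female", False)] * 60
)
ratios = disparate_impact(decisions, "urban_male")
flagged = {g for g, r in ratios.items() if r < 0.80}
# rural_female ratio is 0.40 / 0.80 = 0.50, so the group is flagged
```

An audit along these lines, run across gender, district, and income level before deployment, is cheap relative to the cost of systematically excluding creditworthy borrowers.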

---

Risk 4: Labour Displacement Without Safety Net

The risk: AI-driven automation in Bangladesh's key employment sectors (RMG, BPO, agriculture) could displace millions of workers faster than they can retrain or find alternative employment — without Bangladesh having adequate social protection.

Why this risk is acute for Bangladesh:

  • 4.2M RMG workers are in highly automatable roles
  • Bangladesh has no unemployment insurance (only limited severance for formal workers)
  • Social protection (allowances) covers only 15% of population
  • Skills gap between current worker skills and AI-economy jobs is significant

Interaction with other risks: Labour displacement intersects with gender (women are 85% of RMG workers), regional inequality (Dhaka-centred reskilling opportunities), and political stability (unemployed youth + automation frustration = social instability).

Required responses (pre-emptive):

  • National AI Reskilling Fund (Tk 1,000 crore) before major automation
  • Unemployment insurance pilots for formal sector workers
  • Automation tax on companies replacing >10% of workforce with AI (revenue to reskilling fund)
  • Monitoring obligation: companies deploying automation must report to MoLE 2 years in advance

Non-negotiable: AI automation in government-licensed sectors (RMG, banking, telecom) must not proceed without approved worker transition plans.

---

Risk 5: Healthcare AI Without Safeguards

The risk: AI diagnostic tools, triage systems, and treatment recommendation systems deployed in Bangladesh's resource-limited healthcare settings could cause harm if they are inaccurate, untested on Bangladeshi patients, or deployed without appropriate human oversight.

Bangladesh-specific vulnerabilities:

  • High proportion of community clinics with limited physician oversight (AI errors may not be caught)
  • AI diagnostic tools are commonly trained on non-Bangladeshi (primarily Western) patient populations
  • Regulatory pathway for AI medical devices does not yet exist (DGDA)
  • Malnutrition, infectious disease profiles, and genetic factors in Bangladesh differ significantly from training populations

The harm pathway: An AI trained on US patient data deployed for TB screening in Bangladesh may perform well on average but have significantly higher error rates for malnutrition-related presentations common in rural Bangladesh — resulting in missed diagnoses or unnecessary treatment.

Required responses:

  • DGDA AI medical device regulatory pathway (urgently needed)
  • Mandatory clinical validation on Bangladeshi patient populations before deployment
  • "Human in the loop" requirement for all AI-assisted diagnosis in Bangladesh
  • Bias testing across age, gender, nutritional status, and regional population groups

Non-negotiable: No AI medical diagnostic tool deployed in Bangladesh government health facilities without Bangladesh-specific clinical validation.
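The subgroup validation required above can be operationalised as a per-group error-rate check: compute the model's sensitivity (true-positive rate) separately for each patient subgroup and flag any group that falls below a floor. The sketch below uses synthetic numbers; the subgroup labels and the 0.90 floor are illustrative assumptions, not DGDA thresholds.

```python
# Illustrative per-subgroup sensitivity check for a diagnostic AI model.
# Subgroup labels, synthetic data, and the 0.90 floor are assumptions.
from collections import defaultdict

def subgroup_sensitivity(results):
    """results: list of (subgroup, actually_positive, predicted_positive)."""
    positives, true_positives = defaultdict(int), defaultdict(int)
    for group, actual, predicted in results:
        if actual:
            positives[group] += 1
            if predicted:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives}

def failing_groups(results, floor=0.90):
    """Subgroups whose sensitivity falls below the required floor."""
    return {g for g, s in subgroup_sensitivity(results).items() if s < floor}

# Synthetic TB-screening example: the model misses more cases among
# malnourished patients, as in the harm pathway described above.
results = (
    [("well_nourished", True, True)] * 95 +
    [("well_nourished", True, False)] * 5 +
    [("malnourished", True, True)] * 70 +
    [("malnourished", True, False)] * 30
)
# well_nourished sensitivity: 0.95; malnourished: 0.70, below the floor
```

A model that passes on average but fails a check like this for malnourished patients is exactly the tool that should not be deployed in rural Bangladesh without recalibration.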

---

Risk 6: Data Colonialism

The risk: Bangladesh's citizens' data — health records, financial transactions, agricultural patterns, communications — is being extracted by foreign AI companies and used to train AI systems that are then sold back to Bangladesh at premium prices, while Bangladesh builds no domestic AI capability.

This is happening now:

  • Major global AI companies are actively building Bangla language datasets from Bangladesh internet usage
  • Bangladeshi citizens' ChatGPT queries (including sensitive questions about health, law, and finance) can be used to train OpenAI models
  • Agricultural sensor data from Bangladesh-deployed IoT systems flows to foreign company servers
  • Health data from NGO programs operating in Bangladesh has been used in global AI training without adequate consent

Why this matters:

  • Bangladesh's most valuable data asset — information about 170M people — generates no value for Bangladesh
  • Foreign AI built on Bangladesh data can enter Bangladesh market and capture value without creating local capability
  • Bangladesh has no negotiating leverage once its data is incorporated into foreign models

Required responses:

  • Data Sovereignty Act: Bangladeshi citizens' personal data may not be used for AI training without explicit consent and a benefit-sharing agreement
  • AI companies operating in Bangladesh must process citizen data within Bangladesh (or approved jurisdictions)
  • Data Benefit Trust: revenues from licensed data use are returned to Bangladesh AI development fund

Non-negotiable: Use of Bangladesh citizen data for AI training without consent or benefit-sharing constitutes theft of a national resource and must be sanctioned.

---

Risk 7: Algorithmic Corruption

The risk: Government AI systems become tools for corruption — not reduced corruption — when deployed without accountability, transparency, or independence.

The failure mode:

Bangladesh deploys AI for land record management, and corrupt officials manipulate AI training data or parameters to produce biased outputs that favour clients willing to pay. The AI provides a veneer of objectivity while serving the same corrupt purposes as the manual system it replaced.

This is not theoretical: AI procurement tools in some countries have been deliberately configured with non-neutral parameters. AI credit scoring systems have been deliberately trained on data that produces discriminatory-but-profitable outputs.

Bangladesh context: A government AI system that is not independently audited, whose parameters can be adjusted by politically connected officials, and whose outputs are not transparent to affected citizens is potentially worse than the manual system it replaces — it combines the speed and scale of automation with the opacity of a black box.

Required responses:

  • All government AI systems are subject to independent technical audit (BNAIA)
  • Parameter changes to government AI systems require documented justification and BNAIA notification
  • Citizens affected by government AI decisions have the right to explanation and human review
  • Whistleblower protection for government employees who report AI manipulation

Non-negotiable: Government AI systems must be independently auditable. No government AI deployment without audit pathway.
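One way to make the documented-justification requirement above enforceable in practice is a tamper-evident log of parameter changes: each entry records a hash of the previous entry, so any silent edit to history breaks the chain and is detectable on verification. The sketch below is illustrative only; the field names and interface are assumptions, not a BNAIA specification.

```python
# Illustrative hash-chained audit log for government AI parameter changes.
# Field names and interface are assumptions, not a BNAIA specification.
import hashlib
import json

def _entry_hash(entry):
    # Canonical JSON so the same entry always hashes identically.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, official, system, change, justification):
        entry = {
            "official": official,
            "system": system,
            "change": change,
            "justification": justification,
            "prev_hash": _entry_hash(self.entries[-1]) if self.entries else "GENESIS",
        }
        self.entries.append(entry)
        return entry

    def verify(self):
        """True iff every entry's recorded prev_hash matches its predecessor."""
        for i, entry in enumerate(self.entries):
            expected = _entry_hash(self.entries[i - 1]) if i else "GENESIS"
            if entry["prev_hash"] != expected:
                return False
        return True

log = AuditLog()
log.record("officer-01", "land-records-ai", "threshold 0.7 -> 0.6", "pilot recalibration")
log.record("officer-02", "land-records-ai", "retrain on 2025 data", "scheduled update")
assert log.verify()
# Editing an earlier entry after the fact breaks the chain:
log.entries[0]["justification"] = "edited after the fact"
assert not log.verify()
```

The point is not the specific mechanism but the property: an auditor, or a whistleblower, can prove that the recorded history of a government AI system has or has not been altered.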

---

Risk 8: AI Concentration of Power

The risk: AI amplifies the advantages of those who control it. Without deliberate democratisation, Bangladesh's AI transformation could widen the gap between Dhaka's educated elite and the rest of the country.

The concentration pathways:

  • AI benefits available only in English exclude 100M+ Bangla-only citizens
  • AI tools accessible only with smartphones exclude rural and elderly populations
  • AI investment concentrated in Dhaka tech sector deepens regional inequality
  • AI used by large companies to outcompete small businesses (AI-driven price undercutting, logistics optimisation) eliminates SME livelihoods

Bangladesh's inequality context:

Bangladesh has made extraordinary progress on poverty reduction, but inequality between Dhaka and the rest of the country, between educated and uneducated, and between men and women in economic participation remains significant. AI deployed without equity considerations will deepen these divides.

Required responses:

  • Bangla-first AI mandate for all government citizen services
  • AI access equity requirements in government AI procurement
  • Rural AI connectivity investment (last-mile internet, AI tools on 2G networks)
  • SME AI support program (subsidised AI tools for small businesses)
  • Gender AI equity targets in all reskilling programs

Non-negotiable: No government AI initiative without explicit equity impact assessment.

---

Bangladesh AI Non-Negotiables (Red Lines)

Based on the risk analysis above, BangladeshAI.org proposes the following absolute prohibitions for Bangladesh's AI Act:

Category 1: Surveillance Prohibitions

  • Real-time mass facial recognition in public spaces without judicial warrant
  • Retroactive biometric searches of government databases without individual warrant
  • AI-powered monitoring of political activities, religious practice, or protected speech
  • Social scoring systems that assign citizens composite political reliability or social behaviour scores

Category 2: Manipulation Prohibitions

  • AI-generated deepfakes of public figures for political purposes
  • Automated AI disinformation operations (bot networks, synthetic media for deception)
  • AI targeting of children with psychological profiling for commercial purposes
  • Subliminal AI techniques designed to manipulate users without their awareness

Category 3: Autonomous Harm Prohibitions

  • Autonomous lethal weapons (weapons systems that select and engage targets without human authorisation)
  • Fully automated legal judgments without human judge (AI may assist, not replace)
  • Fully automated decisions on immigration status, welfare eligibility, or medical treatment without human review right

Category 4: Exploitation Prohibitions

  • AI training on Bangladesh citizen data without consent or benefit-sharing
  • AI credit/employment decisions without bias audit and human appeal right
  • AI deployment in healthcare without Bangladesh-specific clinical validation

---

Risk Management Framework

Bangladesh cannot manage all AI risks simultaneously with current governance capacity. The framework must be calibrated to what is actually enforceable.

Tier 1 — Prohibit immediately (no capacity required):

All Category 1–4 non-negotiables above. Prohibition is a legal tool requiring minimal enforcement capacity — especially for government deployments, which are fully controllable.

Tier 2 — Regulate from Year 1 (moderate capacity):

  • Government AI deployments (controllable by BNAIA)
  • Licensed financial AI (enforceable through Bangladesh Bank)
  • Medical AI (enforceable through DGDA)

Tier 3 — Regulate from Year 3 (as BNAIA builds capacity):

  • Private sector high-risk AI (registration + audit)
  • AI in employment decisions
  • AI in education assessment

Tier 4 — Regulate from Year 5 (full regulatory capacity):

  • Comprehensive AI market oversight
  • International AI product compliance
  • AI in SME and informal sector

---

Conclusion

AI risk management is not about being anti-AI. It is about being intelligent about AI. Every technology has risks; every technology has benefits. The question is not whether to adopt AI but how.

Bangladesh's AI risks are not the same as Europe's AI risks. Bangladesh's most urgent concerns are biometric surveillance without safeguards, AI in a context of weak judicial independence, the specific vulnerability of its labour force, and the data sovereignty threat. These require Bangladesh-specific responses — not copy-pasted EU AI Act provisions written for a very different context.

The non-negotiables in this paper are called non-negotiable because the harms from crossing them are irreversible. Once a mass surveillance infrastructure is built, it is not dismantled. Once citizens' data is extracted and incorporated into foreign AI systems, it is not recovered. Once an automated system makes millions of flawed decisions without appeal rights, those wrongs are not easily corrected.

Bangladesh can build the AI future it deserves. Doing so requires knowing what it will not accept.

Policy submissions: research@bangladeshai.org | Cite as: BangladeshAI.org Risk Management Paper, October 2025