BangladeshAI.org — Intelligence Builds Nations
Policy · 2026-02-01 · 18 min read

AI Policy & Regulatory Framework for Bangladesh

A comprehensive framework for Bangladesh's AI regulation: what laws are needed, what institutions must be created, and how to build a regulatory environment that enables innovation while protecting rights.


Publication Date: February 2026

Classification: Policy Framework Paper

Status: Active — updated as legislation progresses

---

Executive Summary

Bangladesh currently has no dedicated AI law, no AI regulatory agency, and no formal government AI ethics standards. This is not unusual for 2026 — only 28 of 193 nations have enacted comprehensive AI regulation. But Bangladesh's window to establish foundational rules before large-scale AI deployment is closing rapidly.

This paper presents a proposed framework for Bangladesh's AI regulatory architecture: the institutions needed, the laws required, the standards to adopt, and the sequencing that allows innovation to proceed while rights protections are built in parallel.

Core recommendation: Bangladesh should adopt a risk-based regulatory model (similar to the EU AI Act's tiered approach) adapted to Bangladesh's administrative capacity, enforcement capabilities, and development priorities.

---

The Regulatory Gap

What Bangladesh Has Now

  • National ICT Policy 2018 — references AI in passing; no binding AI standards
  • Digital Security Act 2018 — addresses digital crimes; no AI-specific provisions
  • Personal Data Protection Act — passed 2023; implementation rules incomplete
  • a2i Programme — government innovation lab; no formal regulatory authority
  • Bangladesh Bank AI Guidance 2023 — sector-specific guidelines for financial AI; no enforcement teeth

What Bangladesh Lacks

  • National AI Strategy with legal backing
  • AI regulatory authority with inspection and sanction powers
  • Mandatory AI impact assessment requirements
  • Government AI deployment standards
  • Biometric data use framework
  • AI liability framework (who is responsible when AI causes harm?)
  • Algorithmic transparency requirements for public-sector AI

---

Proposed Regulatory Architecture

Tier 1: The National AI Act

Priority: Immediate (2026–2027)

Model: Risk-based; tiered by potential harm

Key provisions:

Article 1 — Scope: Applies to any AI system deployed in Bangladesh by any entity (government, private, foreign).

Article 2 — Risk Classification:

  • Unacceptable Risk (Prohibited): Social scoring systems; real-time biometric surveillance in public spaces without judicial warrant; AI systems manipulating users through subliminal techniques; AI targeting children with psychological profiling.
  • High Risk: AI in employment decisions, credit decisions, healthcare diagnosis, education assessment, law enforcement, immigration, electoral systems. Requires mandatory conformity assessment before deployment.
  • Medium Risk: AI in customer service, content recommendation, HR screening. Requires registration and transparency disclosure.
  • Low Risk: Chatbots, spam filters, grammar checkers. Requires only transparency notice to users.
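The tiered scheme in Article 2 can be sketched as a lookup from use case to pre-deployment obligation. This is an illustrative model only, not statutory text; the use-case labels, obligation names, and the default-to-high-risk rule are all assumptions for the sketch:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity_assessment"
    MEDIUM = "registration_and_disclosure"
    LOW = "transparency_notice"

# Hypothetical shorthand for Article 2 categories, not legal definitions.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometrics_no_warrant": RiskTier.UNACCEPTABLE,
    "employment_decision": RiskTier.HIGH,
    "credit_decision": RiskTier.HIGH,
    "healthcare_diagnosis": RiskTier.HIGH,
    "content_recommendation": RiskTier.MEDIUM,
    "hr_screening": RiskTier.MEDIUM,
    "chatbot": RiskTier.LOW,
    "spam_filter": RiskTier.LOW,
}

def required_obligation(use_case: str) -> str:
    """Return the pre-deployment obligation for a use case.
    Unclassified use cases default to HIGH risk pending regulator review
    (a design assumption of this sketch, not a provision of the Act)."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case}: deployment prohibited under Article 2")
    return tier.value
```

Defaulting unknown use cases to the high-risk obligation keeps the scheme conservative: a deployer must argue a system down a tier rather than escape classification by novelty.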

Article 3 — Government AI Standards: All government AI systems must pass the National AI Standards Board review before deployment. Existing systems must be reviewed within 24 months.

Article 4 — Algorithmic Transparency: Citizens have the right to know when an AI system made a decision affecting them and to request human review.

Article 5 — AI Liability: Developers are liable for foreseeable harms from high-risk AI systems. Government ministries are liable for harms from government-deployed AI.

Article 6 — Innovation Sandboxes: Any company can apply for a 24-month regulatory sandbox to test AI products with reduced compliance burden, subject to data protection and consumer harm safeguards.

---

Tier 2: Bangladesh National AI Authority (BNAIA)

Priority: Establish by 2027

Model: Independent statutory authority (similar to Bangladesh Bank's independence structure)

Functions:

  • License and inspect high-risk AI systems
  • Receive and investigate citizen complaints about AI decisions
  • Conduct mandatory audits of government AI deployments
  • Issue sector-specific AI codes of practice
  • Represent Bangladesh in international AI governance forums (ITU, UNESCO, OECD)
  • Publish annual State of AI in Bangladesh report

Structure:

  • Board of 7: 2 government nominees, 2 technical experts, 2 civil society representatives, 1 judicial nominee
  • Secretary: Civil service appointment (Grade 1)
  • Technical staff: 50 FTEs (AI engineers, lawyers, economists, ethicists)
  • Budget: Tk 200 crore/year (partly from licensing fees)

Independence guarantee: BNAIA cannot be directed by any ministry on individual decisions; its findings are judicially reviewable but not overridable by executive instruction.

---

Tier 3: National AI Standards Board (NAISB)

Priority: Establish by 2026 (precursor to BNAIA)

Composition: BUET, DU, CUET, ICT Division, Bangladesh Bank, relevant ministries.

Functions:

  • Develop Bangladesh National AI Standards (BNAS) — minimum requirements for AI systems in government use
  • Adapt international standards (ISO/IEC 42001, IEEE 7000) to Bangladesh context
  • Certify AI systems for government procurement

---

Tier 4: Sectoral Regulators

Each existing sectoral regulator should develop AI-specific rules within their domain:

| Sector | Regulator | AI Rules Needed |
|--------|-----------|-----------------|
| Financial | Bangladesh Bank | Credit scoring transparency; algorithmic trading limits |
| Telecom | BTRC | AI content moderation standards; data localisation |
| Healthcare | DGDA | AI medical device approval process |
| Education | MoE | AI in student assessment standards |
| Labour | MoLE | AI in hiring decisions; worker monitoring limits |

---

Data Protection and AI

Bangladesh's Personal Data Protection Act 2023 provides the foundation but requires AI-specific implementing regulations:

Required implementing rules:

1. Automated Decision-Making Rights — Citizens must be able to opt out of fully automated high-stakes decisions (employment, credit, healthcare prioritisation) and request human review.

2. AI Training Data Standards — Data used to train AI systems must meet quality, representativeness, and consent standards. Use of unlicensed personal data for commercial AI training is prohibited.

3. Cross-Border Data Transfers for AI — AI training data and AI outputs containing personal data cannot be transferred to countries without adequate data protection frameworks without BNAIA approval.

4. Children's Data — AI systems using data from persons under 18 require parental consent and are prohibited from using such data for commercial profiling.
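Rule 1 implies that every fully automated high-stakes decision must leave a record a citizen can use to trigger human review. A minimal sketch of such a record, assuming hypothetical field names and domain labels (none drawn from the Act's text):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# High-stakes domains named in Rule 1; these labels are illustrative shorthand.
HIGH_STAKES_DOMAINS = {"employment", "credit", "healthcare_prioritisation"}

def opt_out_available(domain: str) -> bool:
    """Rule 1 grants opt-out only for fully automated high-stakes decisions."""
    return domain in HIGH_STAKES_DOMAINS

@dataclass
class AutomatedDecision:
    """Minimal audit record enabling a later human-review request."""
    subject_id: str
    domain: str
    outcome: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False

    def request_human_review(self) -> None:
        # Recording the request is the deployer's obligation; fulfilling it
        # within a statutory deadline would be specified in implementing rules.
        self.human_review_requested = True
```

The key design point is that the record must exist at decision time: a review right is unenforceable if the deployer cannot reconstruct which system decided what, about whom, and when.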

---

Government AI Procurement Standards

A major near-term priority is establishing rules for how the Bangladesh government purchases AI systems:

Proposed Government AI Procurement Rules (GAPR):

1. No Black Boxes: All AI systems used in government decision-making must provide explainable outputs. Vendors must disclose training data categories, model architecture type, and performance metrics.

2. Bias Testing Requirement: Vendors must submit pre-procurement bias audits demonstrating AI performance across gender, district, age, and income groups.

3. Local Data Processing: AI systems processing sensitive citizen data must process that data within Bangladesh (data sovereignty requirement).

4. Performance Guarantees: Contracts must specify measurable performance benchmarks and remedies for underperformance.

5. Source Code Escrow: Government AI contracts above Tk 10 crore must include source code escrow with the BNAIA.

6. Bangla Language Support: AI systems used for citizen-facing services must support Bangla language with minimum 90% accuracy.
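Rule 2's pre-procurement bias audit reduces, in its simplest form, to measuring model accuracy per demographic group and flagging any group that trails the best-performing group by more than a tolerance. A minimal sketch, where the group names, the 5-percentage-point tolerance, and all numbers are assumptions for illustration:

```python
def bias_audit(per_group_accuracy: dict[str, float],
               max_gap: float = 0.05) -> dict:
    """Flag groups whose accuracy trails the best group by more than max_gap.
    Returns a summary report; the tolerance is an illustrative default,
    not a figure from the proposed rules."""
    best = max(per_group_accuracy.values())
    flagged = {group: acc for group, acc in per_group_accuracy.items()
               if best - acc > max_gap}
    return {"best_accuracy": best,
            "flagged_groups": flagged,
            "passed": not flagged}

# Illustrative audit across districts (accuracy figures are made up):
report = bias_audit({"dhaka": 0.91, "sylhet": 0.89, "rangpur": 0.82})
```

In this example the Rangpur figure trails Dhaka by 9 points, so the audit fails; in practice the rules would also need to fix the metric (accuracy vs. false-positive rate), the protected attributes, and the tolerance per sector.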

---

International Alignment

Bangladesh should selectively align with international AI frameworks — not adopt wholesale, but reference strategically:

| Framework | Recommendation | Rationale |
|-----------|---------------|-----------|
| UNESCO AI Ethics Recommendation 2021 | Adopt as policy basis | Bangladesh signed; free; comprehensive |
| EU AI Act | Reference for risk classification | Best available model; adapt to BD capacity |
| ISO/IEC 42001 (AI Management System) | Require for govt AI vendors | Practical; vendor-certifiable |
| OECD AI Principles | Adopt officially | Already referenced in ICT Policy 2018 |
| G20 AI Principles | Align | Bangladesh is not a G20 member, but the principles are useful |

Avoid: Direct adoption of regulations designed for high-income countries with extensive enforcement capacity. Bangladesh's BNAIA will initially have limited inspection capacity — regulations must be calibrated to what can realistically be enforced.

---

Enforcement Realities

Bangladesh's enforcement capacity is limited. The regulatory framework must be sequenced to match what can actually be enforced:

Phase 1 (2026–2027) — Standards without inspection:

  • Publish BNAS standards
  • Create voluntary certification pathway
  • Require compliance declarations for government AI procurement
  • Establish BNAIA in skeleton form

Phase 2 (2027–2029) — Registration and audits:

  • Mandatory registration of high-risk AI systems
  • BNAIA audits of 50 highest-risk government AI deployments per year
  • Citizen complaint mechanism operational
  • Financial sector AI rules enforced by Bangladesh Bank

Phase 3 (2029+) — Full enforcement:

  • BNAIA at full operational capacity
  • All risk tiers covered
  • Cross-border enforcement cooperation with peer regulators

---

Priority Actions (2026)

1. Enact the National AI Act — Begin the parliamentary drafting process immediately. Target passage by December 2027.

2. Establish NAISB — Can be done by Cabinet decision; no legislation required. Target: July 2026.

3. Issue Government AI Procurement Rules — Finance Division circular; no legislation required. Target: September 2026.

4. Publish Bangladesh National AI Standards v1.0 — NAISB output. Target: December 2026.

5. Formally adopt the UNESCO AI Ethics Recommendation — Administrative step; a national action plan is required. Target: 2026.

---

Conclusion

Bangladesh does not need a perfect AI law — it needs a workable one that can be enacted, implemented, and enforced with existing institutions while BNAIA is built. A risk-based approach that prohibits the worst harms, creates accountability for government AI, and enables innovation sandboxes will serve Bangladesh better than comprehensive regulations that cannot be enforced.

The alternative — leaving AI ungoverned — is not a neutral choice. It means the harms of AI will fall disproportionately on those least able to seek redress: rural citizens, low-income workers, and people whose Bangla-language needs are ignored by systems trained on English data.

This framework paper is updated quarterly. Stakeholder submissions are welcome via contact@bangladeshai.org.