When a typhoon is bearing down on a coastal city, the last thing an emergency coordinator wants to do is figure out how to prompt an AI model. But that’s exactly the gap OpenAI and the Bill & Melinda Gates Foundation set out to close with a series of hands-on AI workshops for disaster response teams across Asia — and the results, published March 29, 2026, reveal a surprisingly practical story about what it actually takes to put AI to work in humanitarian crises.
Why Asia, Why Now
Asia is, statistically, the world’s most disaster-prone region. According to the UN Office for Disaster Risk Reduction, the continent accounts for roughly 45% of all global disaster events and nearly 80% of people affected by natural disasters each year. Floods in Bangladesh, earthquakes in Nepal, super-typhoons across the Philippines — these aren’t hypotheticals. They’re annual realities for millions of people.
Meanwhile, AI tools have matured fast. ChatGPT can now summarize thousands of situation reports in seconds, generate public-facing alerts in multiple languages, draft resource allocation plans, and help teams work through logistics scenarios that used to require days of manual effort. The tools exist. The question is whether the people who need them most actually know how to use them under pressure.
That’s the core premise behind this initiative. It’s not about deploying AI autonomously in disaster zones — it’s about training the humans who respond to disasters to use AI as a force multiplier. Think of it less like deploying a robot and more like handing a very capable research assistant to an overworked emergency coordinator at 2 a.m.
OpenAI has been quietly expanding its humanitarian work over the past year. The OpenAI Foundation’s $1 billion pledge toward health, jobs, and AI safety signaled a broader push beyond commercial products, and this Gates Foundation collaboration fits squarely into that trajectory.
What the Workshops Actually Covered
The workshops weren’t lecture-hall affairs. Participants — drawn from government emergency agencies, NGOs, and regional disaster management bodies across multiple Asian countries — worked through real-world scenarios using ChatGPT and other AI tools, guided by facilitators from both OpenAI and the Gates Foundation’s teams.
Here’s a breakdown of the core skill areas the workshops targeted:
- Rapid information synthesis: Using AI to parse and summarize large volumes of incoming data — satellite reports, field dispatches, social media signals — into concise operational briefs.
- Multilingual communication: Drafting public alerts, evacuation instructions, and community updates in local languages, where professional translation resources are often unavailable at speed.
- Logistics and resource planning: Running scenarios through ChatGPT to model supply chain decisions — how many relief kits to pre-position, which routes to prioritize, how to handle shelter overflow.
- After-action reporting: Accelerating the documentation process that follows a disaster response, which is typically slow but critical for improving future responses and securing donor funding.
- Prompt engineering basics: Teaching responders how to ask AI the right questions — specificity, context-setting, iterating on outputs — so they get useful answers rather than generic ones.
That last point is more important than it sounds. There’s a well-documented gap between what AI tools can do in theory and what non-technical users get out of them in practice. A poorly constructed prompt yields a generic, often useless response. A well-constructed one can surface actionable intelligence in under a minute. The workshops were designed to close that gap for people who don’t have time to take a six-week AI course.
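To make the specificity point concrete, here is a minimal sketch of the kind of structured prompt the workshops reportedly emphasized: context first, then a specific task, then explicit output constraints. The function name, field names, and example reports are hypothetical illustrations, not material from the actual curriculum.

```python
# Hypothetical sketch: assembling a specific, constrained prompt for
# situation-report summarization, rather than asking "summarize this."

def build_briefing_prompt(disaster_type: str, location: str,
                          reports: list[str], max_bullets: int = 5) -> str:
    """Assemble a summarization prompt with context and output constraints."""
    joined = "\n---\n".join(reports)
    return (
        f"You are assisting an emergency coordinator during a {disaster_type} "
        f"response in {location}.\n"
        f"Summarize the field reports below into at most {max_bullets} "
        "operational bullet points. Flag any conflicting casualty or damage "
        "figures explicitly. Do not speculate beyond the reports.\n\n"
        f"FIELD REPORTS:\n{joined}"
    )

prompt = build_briefing_prompt(
    "typhoon", "Eastern Visayas",
    ["Bridge on Route 7 impassable; 40 families moved to the school gym.",
     "School gym at capacity; requesting 60 additional relief kits."],
)
print(prompt)
```

The point of the structure is that every constraint a coordinator would otherwise have to iterate on (length, format, no speculation) is stated up front, which is exactly the gap between a generic and an actionable response.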
The Languages Problem
One of the most significant challenges in Asian disaster response is linguistic diversity. A single response operation in the Philippines might require communication in Tagalog, Cebuano, Ilocano, and English simultaneously. In Bangladesh, responders need to work across Bengali dialects and coordinate with international teams in English. Standard AI tools have historically been weakest in lower-resource languages, and that’s a real limitation in this context.
OpenAI’s newer models have improved considerably on multilingual tasks, but workshop participants reportedly surfaced specific gaps — particularly around regional dialects and technical emergency terminology that doesn’t translate cleanly. This kind of direct feedback from field practitioners is arguably more valuable than any benchmark test, and it’s one reason OpenAI seems to be structuring these workshops as two-way learning exercises, not just training sessions.
What Participants Built
A particularly interesting dimension of the program was its hands-on build component. Teams weren’t just learning to use existing ChatGPT features — they were prototyping simple AI-assisted workflows for their own organizations. Think custom prompt templates for specific disaster types, automated drafts for standard reporting formats, or quick-reference guides their colleagues could use in the field without any prior AI experience.
This mirrors an approach that’s worked well in enterprise settings. When STADLER rolled out ChatGPT to 650 employees, the most effective adoption came not from top-down mandates but from internal champions who built concrete, role-specific use cases their colleagues could immediately recognize as useful. The same logic applies here — a disaster coordinator in Myanmar is more likely to trust and use a workflow a peer built for their specific context than a generic AI tutorial.
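A "custom prompt template for a specific disaster type" can be as simple as a small library of fill-in-the-blank prompts that a colleague with no AI experience could paste and complete. The sketch below is a hypothetical illustration of that pattern; the template wording and place names are invented.

```python
# Hypothetical sketch: fill-in-the-blank prompt templates keyed by
# disaster type, usable by field staff with no prompt-writing experience.

TEMPLATES = {
    "flood": (
        "Draft a public evacuation notice for {area}. Water level: {level}. "
        "Shelters open: {shelters}. Write at a 6th-grade reading level, "
        "under 100 words, in {language}."
    ),
    "earthquake": (
        "Draft an aftershock safety advisory for {area}. Main shock "
        "magnitude: {magnitude}. Include drop-cover-hold guidance, "
        "under 100 words, in {language}."
    ),
}

def fill_template(disaster_type: str, **fields: str) -> str:
    """Return a ready-to-paste prompt; raises KeyError if a field is missing."""
    return TEMPLATES[disaster_type].format(**fields)

notice_prompt = fill_template(
    "flood",
    area="Barangay San Roque",
    level="1.2 m and rising",
    shelters="Central Elementary, the covered court",
    language="Cebuano",
)
```

The design choice worth noting: the template encodes the organization's standards (reading level, length, language) once, so individual responders only supply the facts.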
The Bigger Picture: AI’s Role in Humanitarian Work
Let’s be direct about something: AI is not going to replace the judgment, local knowledge, and human relationships that make disaster response actually work. Anyone telling you otherwise hasn’t spent time in a flood-affected village where the most critical asset is a community leader who knows which families are most vulnerable and where they’re likely to shelter.
What AI can do is handle the parts of disaster response that are time-consuming, cognitively draining, and don’t require that irreplaceable human judgment. Writing the fifteenth situation report in three days. Translating donor updates. Checking whether a logistics plan violates any known constraints. These are real bottlenecks, and reducing them frees up human attention for the things that genuinely require it.
The Gates Foundation’s involvement here is significant. The foundation has decades of experience funding and evaluating humanitarian interventions, and they’re famously rigorous about evidence. Their partnership with OpenAI on this initiative suggests a belief that AI tools are mature enough to deploy responsibly in high-stakes humanitarian contexts — not just as experiments, but as operational tools.
Competitors Aren’t Standing Still
OpenAI isn’t the only AI company with humanitarian ambitions. Google has been expanding Gemini’s reach across public sector and nonprofit applications, and Anthropic’s Claude has made significant inroads with organizations that prioritize safety and careful outputs — both relevant qualities for disaster response. Microsoft, through its AI for Good initiative, has been funding AI applications in climate resilience and disaster preparedness for years.
What OpenAI is doing differently here is the direct capacity-building angle — not just making tools available, but actively training practitioners to use them. That’s a meaningful distinction. Access and capability are not the same thing, and the organizations doing the most disaster response work often have the least technical infrastructure to bridge that gap on their own.
What This Means for Humanitarian Organizations
If your organization does disaster response work in Asia — or frankly anywhere — here’s how to think about this:
- Start with your bottlenecks, not the technology. Identify two or three specific tasks that consume disproportionate time during a response. Those are your first AI use cases.
- Invest in prompt literacy. A one-day internal workshop on how to construct effective prompts will pay dividends faster than almost any other training investment right now.
- Test multilingual outputs carefully. If you’re using AI to draft communications in local languages, have native speakers review outputs before distribution. The stakes are too high for unchecked translation errors.
- Document your workflows. The organizations that will get the most out of AI aren’t the ones with the most access — they’re the ones that systematize what works and share it across their teams.
- Engage with programs like this one. OpenAI and the Gates Foundation are actively looking for practitioner feedback to improve these tools. Participation is a way to shape development, not just consume it.
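The multilingual-review advice above can be enforced mechanically rather than left to discipline: treat every AI draft as undistributable until a named native speaker signs off. This is a minimal hypothetical sketch of that gate; the class, field names, and sample text are invented for illustration.

```python
# Hypothetical sketch: a review gate so AI-drafted alerts cannot be
# distributed until a native-speaker reviewer has signed off.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertDraft:
    text: str
    language: str
    reviewed_by: Optional[str] = None  # set only by an explicit sign-off

    def sign_off(self, reviewer: str) -> None:
        """Record the native-speaker reviewer who approved this draft."""
        self.reviewed_by = reviewer

    @property
    def distributable(self) -> bool:
        return self.reviewed_by is not None

draft = AlertDraft(text="Evacuation notice draft...", language="Cebuano")
assert not draft.distributable  # raw AI output is never releasable
draft.sign_off("M. Santos (native Cebuano speaker)")
```

Encoding the rule in the workflow, rather than in a policy document, is what makes it survive the pressure of an actual response.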
It’s also worth thinking about what OpenAI gets out of this. Real-world deployment in demanding, resource-constrained environments surfaces product limitations faster than any internal testing program. The multilingual gaps, the prompt complexity barriers, the trust issues — all of this is data that will feed back into model and product development. This is genuinely a two-way relationship, and humanitarian organizations should negotiate accordingly. Their operational insights have real value.
OpenAI has also been building out safety and trust frameworks in parallel — the OpenAI bug bounty program for AI safety risks is one example of how the company is trying to institutionalize responsible deployment. That kind of infrastructure matters more, not less, when AI is being used in life-or-death contexts.
The next few years will test whether AI tools can hold up under the kind of operational pressure that disaster response actually involves — connectivity outages, language barriers, exhausted users, information overload. I wouldn’t be surprised if this Asia workshop series becomes a model for similar programs in sub-Saharan Africa and Latin America, where disaster vulnerability and AI adoption gaps follow similar patterns. The question isn’t whether AI belongs in humanitarian work anymore. It’s whether the humanitarian sector can build the internal capacity to use it well before the next major disaster hits.
Frequently Asked Questions
What was the goal of the OpenAI and Gates Foundation AI disaster response workshops?
The workshops were designed to train disaster response practitioners across Asia to use AI tools like ChatGPT in their operational workflows — covering everything from synthesizing field reports to drafting multilingual public communications. The aim was practical skill-building, not theoretical awareness.
Which countries or organizations participated?
OpenAI’s announcement references disaster response teams across Asia broadly, with participants drawn from government emergency agencies, NGOs, and regional disaster management bodies. Specific country-level breakdowns weren’t fully detailed in the initial release, but the program appears to span multiple nations given the multilingual focus of the curriculum.
How does this differ from just giving humanitarian organizations access to ChatGPT?
Access and capability are different problems. Many organizations already have access to AI tools but lack the training to use them effectively under pressure. These workshops focused on building that operational capability — including prompt literacy, workflow design, and context-specific use cases — which is the harder and more valuable intervention.
Are these workshops available to organizations outside Asia?
The current program is specifically focused on Asia, where disaster frequency and vulnerability are particularly acute. However, given the Gates Foundation’s global reach and OpenAI’s stated humanitarian ambitions, expansion to other regions seems like a logical next step, though no formal announcement has been made yet.