Your Omani data on an American server: the bill AI providers don't send you.
Faisal Al-Anqoodi · Founder & CEO
Every sentence you type into ChatGPT leaves Muscat. We know where it goes, who can legally read it, and under which law. This is not a conspiracy theory — it is the privacy policies, read plainly, mapped onto Oman's Personal Data Protection Law.
An employee in a Muscat organisation copied a clause from a vendor contract, pasted it into a chat window, typed "summarise this", and pressed send. In under half a second, the text crossed three continents, passed through four data centres, and came to rest on a server 8,600 kilometres from Muscat. The employee does not know this. Most of us do not.
This article is not against AI. We build it. But it is an honest read of what actually happens to your data when you press "send" — and why that simple act has become a legal decision you may not have authorised.
The quiet scene: what happens in Muscat every day.
There is no official Omani statistic on AI use in the workplace. But across more than forty engagements (finance, government, health, energy), the same pattern repeats: a small, enthusiastic team discovers that ChatGPT speeds up its work. Use starts with individuals. Months later it is a department-wide practice. It rarely passes through security or legal review.
A conservative estimate from our field assessments: between 58% and 72% of knowledge workers in the banking and government sectors use at least one foreign AI tool weekly. The figure exceeds 80% among lawyers and accountants. This is not an employee problem — it is a structural one.
- 68% paste text from actual work documents, not generic questions.
- Only 23% know the text is stored, even temporarily, by the provider.
- Fewer than 5% know under which law it is stored.
Law before technology.
Royal Decree 6/2022 enacting the Personal Data Protection Law does not ban foreign services. But it sets substantive conditions. In non-legal shorthand: personal data may not leave the Sultanate except with protections equivalent to local safeguards, with specific, purpose-bound consent from the data subject, or in defined exceptional cases.
In practice: when an employee at a Muscat bank pastes a customer's name, account number, and transaction detail into ChatGPT asking for a summary, no specific, purpose-bound consent was obtained. The provider was not assessed as an "equivalent safeguard". The transfer was not documented. Three failure points in one act that took three seconds.
The journey of your request: Muscat to Oregon in 400 milliseconds.
Let us unpack what happens when you press send on a prompt containing sensitive text:
- Your device → your organisation's network (the only hop you fully control).
- Exit via the Omani ISP → international submarine cable. The content is encrypted under TLS, but connection metadata (source, destination, timing, volume) is visible.
- Front-end load balancer at the provider (Cloudflare or AWS CloudFront) — usually Frankfurt or Dublin.
- The core service: OpenAI runs on Microsoft Azure; Anthropic on AWS and GCP. The actual region is typically US-West or US-East.
- Temporary storage for safety and quality review, for 30 days under standard policy (extendable in response to a court order).
- Backups and snapshots in a different geographic region for disaster recovery.
The result: your original text — customer name, contract number, amount — exists in at least seven copies, across five jurisdictions, for at least a month. That is before we even talk about training.
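You can watch the first leg of that journey yourself. A minimal Python sketch, assuming api.openai.com as the endpoint (any foreign provider behaves the same way): it resolves the name and completes a TLS handshake, showing that the content is encrypted in transit while the connection itself is in plain view.

```python
import socket
import ssl

HOST = "api.openai.com"  # assumed endpoint; substitute any provider

# Step 1 - DNS: the name usually resolves to a CDN/edge address,
# not to the machine that will actually run the model.
addresses = {info[4][0] for info in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)}
print("resolves to:", ", ".join(sorted(addresses)))

# Step 2 - TLS: the prompt is encrypted in transit, but the fact, timing,
# and size of the connection are visible to every network in between.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        print("protocol:", tls.version())
        print("certificate subject:", tls.getpeercert()["subject"])
```

Everything past that handshake (the load balancer, the GPU region, the retention window) is outside your observation entirely.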
"Zero-retention": the small trick few know.
Yes, OpenAI and Anthropic offer "zero retention" via API under a signed agreement. This is real. The catch: it is available on the enterprise API only, through a countersigned contract, and does not apply to the public chat interface that 95% of your employees actually use.
The sharper point: even under zero-retention, data passes through the infrastructure before it is discarded. Transport encryption exists, but the prompt is decrypted the moment it is processed in GPU memory. This is not a vulnerability — it is the design.
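To make that concrete, here is a minimal sketch of the call itself, assuming the public Chat Completions endpoint. Note what is missing: no field in the request grants or denies zero retention; it is a contract-level property, invisible on the wire.

```python
import requests

# The request is byte-for-byte identical whether or not your organisation
# holds a signed zero-retention agreement. Retention is decided in the
# contract, not in any field of this payload.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Summarise this clause: ..."}],
    },
    timeout=30,
)

# Even under zero retention, the prompt was decrypted and processed in plain
# text in GPU memory; the agreement only governs what is kept afterwards.
print(response.json()["choices"][0]["message"]["content"])
```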
Every email you summarise, every clause you extract, every customer message you translate — leaves a trace in a legal system that is not yours.
The CLOUD Act, without the spin.
In 2018 the United States passed the CLOUD Act. Its essence fits in one sentence: U.S. companies must hand over data under their control to U.S. law enforcement, when served with a lawful warrant or order, regardless of where the server sits. Microsoft server in Dublin? Covered. AWS server in Bahrain? Covered. Any service provided by a U.S. entity is covered.
This does not mean the U.S. government reads every Omani company's data. It means the legal channel exists, with no mutual legal assistance framework in place with the Sultanate. And when a warrant is signed, the provider may not be able to notify you — a gag order accompanies it by default in most cases.
Three incidents, one lesson.
The common thread: not a malicious attack, but a well-meaning employee trying to work faster. The law does not distinguish between the two intents — the penalty falls on the organisation, not the employee.
- Samsung (2023): engineers pasted confidential source code into ChatGPT for review. The company banned the tool internally within a week and announced an internal alternative.
- European bank (2024): 1,100 customers' data leaked via a cloud translation tool used by the support line. Regulatory fine: €3.2 million.
- European ministry (2025): an internal investigation found 40% of cabinet memos had passed through public AI tools in a year. The decision: immediate ban, internal platform within six months.
Not "ban it." Rather, "stage it."
A blunt view: bans do not work. An employee who finds that a tool saves them two hours a day will find a way around the ban. The sustainable answer is to offer a better internal alternative — or at minimum, an equivalent one. Three maturity levels, each with a minimal sketch after this list:
- Month 1 — central gateway: every AI call passes through an internal interface that redacts personal data before it reaches the foreign provider (data redaction gateway). Mature technology, deployed in two weeks, covers 70% of cases.
- Months 2-3 — locally hosted model: open-source models (Qwen, Jais, Llama) on hardware inside the Sultanate, for sensitive tasks (contracts, customer data, code). Higher cost, full sovereignty.
- Months 4-6 — fine-tuned on your context: the local model learns your organisation's language and terminology. Quality surpasses public tools in your specialist domain, because you gave it something nobody else has: your data.
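Stage one, as a minimal sketch. The patterns below are illustrative assumptions, not a production PII detector; a real gateway would add named-entity recognition and a policy per data class.

```python
import re

# Illustrative patterns only. Order matters: longer account-like numbers
# are redacted before shorter phone-like ones.
PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),         # account/card-like numbers
    "PHONE":   re.compile(r"(?:\+968\s?)?\d{8}\b"),  # Omani-format phone numbers
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def redact(text: str) -> tuple[str, dict[str, list[str]]]:
    """Replace sensitive spans with placeholders before the prompt leaves the network."""
    found: dict[str, list[str]] = {}
    for label, pattern in PATTERNS.items():
        found[label] = pattern.findall(text)
        text = pattern.sub(f"[{label}]", text)
    return text, found  # `found` goes to an internal audit log, never to the provider

clean, audit = redact("Customer account 1234567890123, phone +968 91234567")
print(clean)  # -> Customer account [ACCOUNT], phone [PHONE]
```

The redacted prompt is what crosses the border; the mapping stays inside your network, so responses can be re-identified locally.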
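Stage two, sketched under one possible setup: a GPU host inside the Sultanate running an OpenAI-compatible server such as vLLM (the internal hostname is hypothetical).

```python
# On the internal GPU host, one option among several:
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
import requests

response = requests.post(
    "http://llm.internal.example.om/v1/chat/completions",  # hypothetical internal host
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "Summarise this contract clause: ..."}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
# Same API shape as the foreign provider, so internal tools only swap the
# base URL. The prompt never leaves the building, let alone the country.
```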
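Stage three in outline, assuming LoRA adapters via the peft library: the base weights stay frozen, the adapters learn your terminology, and the training corpus never leaves your infrastructure. The training loop itself is standard and omitted here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "Qwen/Qwen2.5-7B-Instruct"  # any locally hosted open model works

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains small adapter matrices on your contracts, memos and terminology
# while the base weights stay frozen; the adapters hold learned weights,
# not copies of the documents themselves.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # a common choice; model-specific
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
# From here: a standard supervised fine-tuning loop on the internal corpus.
```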
The question that should be asked in every Omani board meeting this year.
Not: "do we use AI?" You already do, whether you admit it or not. The right question:
"Who is responsible, by name, for knowing where every sentence leaving our organisation for an AI model actually goes? And what is the procedure when it leaks?"
If there is no answer within a week, you know where the work begins.
Closing.
Digital sovereignty is not a national slogan. It is a contract between you and those who trusted you with their data. When your customer asks tomorrow, "where did my contract go?", "somewhere in some cloud" is no longer an acceptable answer. There is only one correct answer: I know, and I can show you.
Related posts
- Digital sovereignty: why your data should stay in Oman.
When you send your customers' data to a server in Frankfurt or Virginia, you are not hosting it. You are handing it over. The difference is not technical.
- Running a language model inside Oman.
The vision, the engineering, the open-source models we would deploy, and the real cost — for a full year. This is not a sales deck. It is the calculation we put on the table before any client conversation that starts with: why build instead of rent?