
Location Constraints when Calling OpenAI API from Cloudflare Workers


What Happened

  • I am calling OpenAI models from an app running on Cloudflare Workers using LangChain.js.
  • The call doesn't fail every time, but it occasionally errors with: "403 Country, region, or territory not supported."

Checking the Logs

From the error message, I have a hunch that it depends on the edge location where the Worker is running... but let's check the logs anyway.
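For reference, the colo can be read inside the Worker itself: the Workers runtime attaches it to every incoming request as `request.cf.colo`. A minimal sketch of logging it per request (the helper name is mine; `request.cf` may be absent in local development, hence the fallback):

```javascript
// Read the edge location from the incoming request. `request.cf.colo`
// is populated by the Workers runtime; fall back when it is missing
// (e.g. in local development).
function coloOf(request) {
  return (request.cf && request.cf.colo) || "unknown";
}

// Standard module-Worker handler shape; in a real project this object
// would be the file's default export.
const worker = {
  async fetch(request) {
    console.log(`colo: ${coloOf(request)}`);
    // ... call the OpenAI API via LangChain.js here ...
    return new Response("ok");
  },
};
```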

When successful

colo: NRT

When failed

colo: HKG

What is "colo"?

It indicates which Cloudflare edge location (data center) the Worker is running in.

https://www.cloudflarestatus.com/

As you can see from this page, colos are identified by three-letter IATA airport codes.
Tokyo is "NRT" for Narita Airport, and as you can probably guess from the letters, "HKG" stands for Hong Kong.

For reference, the other colos located in Japan are:

  • FUK (Fukuoka)
  • OKA (Naha)
  • KIX (Kansai/Osaka)

I think OKA is a familiar code for people who go on "mileage runs" (there is such a thing as the OKA-SIN touch...)

OpenAI Supported Countries

OpenAI's documentation clearly lists the countries and regions where access is supported.
https://platform.openai.com/docs/supported-countries

"China", "Hong Kong", and "Macau" are not listed. "Taiwan" is listed. In other words, OpenAI does not seem to permit access from Hong Kong.

In short, when a request happens to be routed through the Hong Kong colo for some reason, OpenAI recognizes it as "access from Hong Kong" and blocks it.
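To make this failure mode explicit, the Worker could check its own colo before calling the API. A sketch; the blocked list below contains only HKG, the colo observed in the logs above, and is an assumption that would need to be maintained by hand:

```javascript
// Colos from which the OpenAI API is known to be rejected.
// Only HKG (observed in the logs) is listed; this is an
// assumption, not an authoritative list.
const BLOCKED_COLOS = new Set(["HKG"]);

// True if the current request is running in a colo that OpenAI
// is expected to reject with the 403 "not supported" error.
function willLikelyBeBlocked(request) {
  return BLOCKED_COLOS.has(request.cf && request.cf.colo);
}
```

This doesn't fix the routing, but it lets the app fail fast or log a clearer error than the upstream 403.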

Anthropic has the same restriction.
https://www.anthropic.com/supported-countries

What to do (Options)

Use Cloudflare Workers' Smart Placement feature (Hypothesis)

https://blog.cloudflare.com/announcing-workers-smart-placement/

Workers normally run at the edge location closest to the user, but the announcement for this feature states:

When your app needs to connect to APIs, databases, or other resources that are not close to the end user, running your app closer to the resource rather than the user might improve performance.

In other words, it optimizes the operating region by placing the app near the resource rather than the user.
Theoretically, if you are accessing the OpenAI API which is unavailable in China or Hong Kong, it seems likely that it would run the Worker in a different region... however, there is no 100% guarantee that it will work perfectly.
Also, it appears to still be a preview feature.
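For reference, opting in appears to be a small addition to `wrangler.toml` (a sketch based on the announcement; check the current docs for the exact syntax):

```toml
# Ask Cloudflare to place this Worker near its backend
# (e.g. the OpenAI API) instead of near the user.
[placement]
mode = "smart"
```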

Use Cloudflare Workers' Regional Services

https://developers.cloudflare.com/data-localization/how-to/workers/

This one states:

ensure that processing of a Workers project occurs only in-region

So it looks like I could limit the Worker's operating region.

A limitation is that this cannot be used without a custom domain (it's not available for *.workers.dev).
Therefore, it appears necessary to set up a custom domain first and then call the API.
https://developers.cloudflare.com/data-localization/regional-services/get-started/#configure-regional-services-via-api

I'd like to say I've tested it, but this feature seems to be Enterprise-only, so I can't use it on the free plan.

Also, a downside is that I don't necessarily want to fix the location to Japan, as I might use it while traveling overseas. If I go to the UK, I'd prefer it to run in a UK edge location.

Use DeepSeek

The mindset of "why not just use a model that can be called from China?"
Thinking outside the box.

Do Nothing

It doesn't happen frequently, so I will wait and see.

What to Do

For now, I'll wait and see. If the frequency of occurrence becomes high enough to be a concern, I will use the Smart Placement feature.
If that still doesn't work, I'd like to consider switching models (DeepSeek is a bit... but maybe I could use AWS Bedrock. It's helpful that LangChain.js makes it easy to switch models).

Discussion

つぶらつぶら

I was able to solve this by putting OpenRouter in between instead of calling OpenAI directly!
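OpenRouter exposes an OpenAI-compatible API, so the switch can be as small as changing the endpoint (with LangChain.js, `ChatOpenAI` can likely be pointed at it via its `configuration.baseURL` option). A rough sketch using plain `fetch`; the model name and error handling are illustrative:

```javascript
// Build the OpenAI-style chat payload separately so it is easy to test.
function buildChatBody(model, prompt) {
  return JSON.stringify({
    model,
    messages: [{ role: "user", content: prompt }],
  });
}

// Call OpenRouter's OpenAI-compatible chat completions endpoint.
async function chatViaOpenRouter(apiKey, model, prompt) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: buildChatBody(model, prompt),
  });
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  return res.json();
}
```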