Anthropic blocks mainland China. GLM Coding Plan and Kimi Code both speak the Anthropic protocol — point Claude Code CLI at them and the China problem disappears.
Install Claude Code CLI → set ANTHROPIC_BASE_URL + ANTHROPIC_API_KEY to GLM or Kimi → run claude. That is the whole guide. Details below.
Subscribe to GLM Coding Plan

Open https://z.ai/subscribe (or bigmodel.cn for CNY billing) and pick a tier. Lite at $3 is fine for evaluation. You will need Alipay, WeChat Pay or a USD card.
After subscribing, open the Z.ai console → API Keys → create a key scoped to your coding plan. Copy it to your clipboard.
Follow the official install: npm install -g @anthropic-ai/claude-code or the native installer. This step does not require a proxy; npm mirrors work fine.
```shell
npm install -g @anthropic-ai/claude-code
```

Export two environment variables and skip the Anthropic login flow entirely. Add these to ~/.zshrc or ~/.bashrc so every shell picks them up.
```shell
export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
export ANTHROPIC_API_KEY=your_zai_key_here
```

Open a project and type claude. The CLI will talk to GLM-5.1 transparently. Slash commands, MCP servers and sub-agents all continue to work.
```shell
cd your-project
claude
```

Inside Claude Code, type /model or /debug to confirm GLM is actually serving requests. You should see GLM-5.1 or GLM-5-Turbo in the active model line.
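If the CLI still prompts for an Anthropic login instead, the usual culprit is that the variables never reached the shell that launched claude (for example, ~/.zshrc was edited but never re-sourced). A quick sanity check, sketched here with the exports repeated so the snippet is self-contained (the key is a placeholder):

```shell
# Self-contained: repeat the exports from the previous step (placeholder key).
export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
export ANTHROPIC_API_KEY=your_zai_key_here

# Both overrides must be visible to the shell that launches claude,
# or the CLI falls back to the stock Anthropic login flow.
for var in ANTHROPIC_BASE_URL ANTHROPIC_API_KEY; do
  [ -n "${!var}" ] || { echo "missing: $var (re-source your shell profile)" >&2; exit 1; }
done
echo "routing to: $ANTHROPIC_BASE_URL"
```

The `${!var}` indirection is bash-specific; in plain POSIX sh, test each variable by name instead.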
Do I still need a VPN?

No. Once you set ANTHROPIC_BASE_URL to the Z.ai or Bigmodel endpoint, all Claude Code traffic routes through mainland-friendly domains.
Can I switch back to Anthropic later?

Yes. Unset the environment variables or maintain a separate shell profile. Claude Code has no lock-in once configured via env vars.
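One way to make the switch painless is a pair of throwaway helper functions in your shell profile. This is a sketch; the function names and the ZAI_API_KEY variable are my own invention, not part of either product:

```shell
# Helpers for flipping backends; names are illustrative only.
use_glm() {
  export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
  export ANTHROPIC_API_KEY="$ZAI_API_KEY"   # assumes you keep the Z.ai key in ZAI_API_KEY
}
use_anthropic() {
  # Dropping the overrides sends claude back to the stock login flow.
  unset ANTHROPIC_BASE_URL ANTHROPIC_API_KEY
}
```

Run use_glm before launching claude for GLM sessions, and use_anthropic to return to a normal Anthropic login in the same shell.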
Is this equivalent to a Claude Max subscription?

The CLI features (slash commands, MCP, sub-agents) work. The model quality is whatever GLM provides, not Opus, so Max 20x features like extended thinking depth are not replicated.