Merge pull request #750 from code-yeongyu/refactor/remove-builtin-google-auth
19
README.ja.md
@@ -327,11 +327,10 @@ opencode auth login
#### 4.2.2 oh-my-opencode Agent Model Override

The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`) and disable the built-in `google_auth`:
The `opencode-antigravity-auth` plugin uses specific model names. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`):

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
@@ -393,7 +392,7 @@ opencode auth login
**Unless the user explicitly requests it, do not change model settings or disable features (agents, hooks, MCPs).**

The plugin works perfectly by default. Apart from the recommended `google_auth: true` setting, do not change other settings or turn off features without an explicit request.
The plugin works perfectly by default. When using `opencode-antigravity-auth`, override the agent models as described above. Do not change other settings or turn off features without an explicit request.

### Verify the setup
@@ -760,9 +759,6 @@ Oh My OpenCode loads and executes hooks from:
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",

  // Enable Google Gemini via Antigravity OAuth
  "google_auth": false,

  /* Agent overrides - customize models for specific tasks */
  "agents": {
    "oracle": {
@@ -779,11 +775,10 @@ Oh My OpenCode loads and executes hooks from:
**Recommended**: Use the external [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#42-google-gemini-antigravity-oauth).

When using `opencode-antigravity-auth`, disable the built-in auth and override agent models in `oh-my-opencode.json`:
When using `opencode-antigravity-auth`, override agent models in `oh-my-opencode.json`:

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
@@ -792,14 +787,6 @@ Oh My OpenCode loads and executes hooks from:
  }
}
```

**Alternative**: Enable the built-in Antigravity OAuth (single account, Gemini models only):

```json
{
  "google_auth": true
}
```

### Agents

You can override the built-in agent settings:
31
README.md
@@ -354,11 +354,10 @@ Read the [opencode-antigravity-auth documentation](https://github.com/NoeFabris/
##### oh-my-opencode Agent Model Override

The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`) and disable the built-in `google_auth`:
The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`):

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
@@ -420,7 +419,7 @@ opencode auth login
**Unless the user explicitly requests it, do not change model settings or disable features (agents, hooks, MCPs).**

The plugin works perfectly by default. Except for the recommended `google_auth: true` setting, do not change other settings or turn off features without an explicit request.
The plugin works perfectly by default. Do not change settings or turn off features without an explicit request.

### Verify the setup
@@ -804,9 +803,6 @@ When both `oh-my-opencode.jsonc` and `oh-my-opencode.json` files exist, `.jsonc`
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",

  // Enable Google Gemini via Antigravity OAuth
  "google_auth": false,

  /* Agent overrides - customize models for specific tasks */
  "agents": {
    "oracle": {
@@ -821,28 +817,7 @@ When both `oh-my-opencode.jsonc` and `oh-my-opencode.json` files exist, `.jsonc`
### Google Auth

**Recommended**: Use the external [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).

When using `opencode-antigravity-auth`, disable the built-in auth and override agent models in `oh-my-opencode.json`:

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
    "multimodal-looker": { "model": "google/antigravity-gemini-3-flash" }
  }
}
```

**Alternative**: Enable built-in Antigravity OAuth (single account, Gemini models only):

```json
{
  "google_auth": true
}
```
**Recommended**: For Google Gemini authentication, install the [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).

### Agents
@@ -353,11 +353,10 @@ opencode auth login
##### oh-my-opencode Agent Model Override

The `opencode-antigravity-auth` plugin uses different model names than the built-in Google auth. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`) and disable the built-in `google_auth`:
The `opencode-antigravity-auth` plugin uses specific model names. Override the agent models in `oh-my-opencode.json` (or `.opencode/oh-my-opencode.json`):

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
@@ -419,7 +418,7 @@ opencode auth login
**Unless the user explicitly requests it, do not change model settings or disable features (agents, hooks, MCPs).**

The plugin works well by default. Keep `google_auth: true` when not using the external Antigravity plugin; if you wire up `opencode-antigravity-auth` as described above, set `google_auth` to `false` and override the agent models. Beyond that, do not change other settings or turn off features without an explicit request.
The plugin works well by default. If you use `opencode-antigravity-auth`, override the agent models as described above. Beyond that, do not change other settings or turn off features without an explicit request.

### Verify the installation
@@ -803,9 +802,6 @@ Oh My OpenCode reads and executes hooks from:
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",

  // Enable Google Gemini via Antigravity OAuth
  "google_auth": false,

  /* Agent overrides - customize models for specific tasks */
  "agents": {
    "oracle": {
@@ -820,13 +816,12 @@ Oh My OpenCode reads and executes hooks from:
### Google Auth

**Recommended**: Use the external [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).
Use the external [`opencode-antigravity-auth`](https://github.com/NoeFabris/opencode-antigravity-auth) plugin for Google authentication. It provides multi-account load balancing, more models (including Claude via Antigravity), and active maintenance. See [Installation > Google Gemini](#google-gemini-antigravity-oauth).

When using `opencode-antigravity-auth`, disable the built-in auth and override agent models in `oh-my-opencode.json`:
When using `opencode-antigravity-auth`, override agent models in `oh-my-opencode.json`:

```json
{
  "google_auth": false,
  "agents": {
    "frontend-ui-ux-engineer": { "model": "google/antigravity-gemini-3-pro-high" },
    "document-writer": { "model": "google/antigravity-gemini-3-flash" },
@@ -835,14 +830,6 @@ Oh My OpenCode reads and executes hooks from:
  }
}
```

**Alternative**: Enable the built-in Antigravity OAuth (single account, Gemini models only):

```json
{
  "google_auth": true
}
```

### Agents

Override the built-in agent settings:
@@ -2099,9 +2099,6 @@
      }
    }
  },
  "google_auth": {
    "type": "boolean"
  },
  "sisyphus_agent": {
    "type": "object",
    "properties": {
@@ -16,14 +16,10 @@
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    },
    "./google-auth": {
      "types": "./dist/google-auth.d.ts",
      "import": "./dist/google-auth.js"
    },
    "./schema.json": "./dist/oh-my-opencode.schema.json"
  },
  "scripts": {
    "build": "bun build src/index.ts src/google-auth.ts --outdir dist --target bun --format esm --external @ast-grep/napi && tsc --emitDeclarationOnly && bun build src/cli/index.ts --outdir dist/cli --target bun --format esm --external @ast-grep/napi && bun run build:schema",
    "build": "bun build src/index.ts --outdir dist --target bun --format esm --external @ast-grep/napi && tsc --emitDeclarationOnly && bun build src/cli/index.ts --outdir dist/cli --target bun --format esm --external @ast-grep/napi && bun run build:schema",
    "build:schema": "bun run script/build-schema.ts",
    "clean": "rm -rf dist",
    "prepublishOnly": "bun run clean && bun run build",
@@ -1,55 +0,0 @@
# AUTH KNOWLEDGE BASE

## OVERVIEW
Google Antigravity OAuth for Gemini models. Token management, fetch interception, thinking block extraction.

## STRUCTURE
```
auth/
└── antigravity/
    ├── plugin.ts                   # Main export, hooks registration (554 lines)
    ├── oauth.ts                    # OAuth flow, token acquisition
    ├── token.ts                    # Token storage, refresh logic
    ├── fetch.ts                    # Fetch interceptor (798 lines)
    ├── response.ts                 # Response transformation (598 lines)
    ├── thinking.ts                 # Thinking block extraction (755 lines)
    ├── thought-signature-store.ts  # Signature caching
    ├── message-converter.ts        # Format conversion
    ├── accounts.ts                 # Multi-account management (up to 10 accounts)
    ├── browser.ts                  # Browser automation for OAuth
    ├── cli.ts                      # CLI interaction
    ├── request.ts                  # Request building
    ├── project.ts                  # Project ID management
    ├── storage.ts                  # Token persistence
    ├── tools.ts                    # OAuth tool registration
    ├── constants.ts                # API endpoints, model mappings
    └── types.ts
```

## KEY COMPONENTS
| File | Purpose |
|------|---------|
| fetch.ts | URL rewriting, multi-account rotation, endpoint fallback |
| thinking.ts | Thinking block extraction, signature management, budget mapping |
| response.ts | Streaming SSE parsing and response transformation |
| accounts.ts | Load balancing across up to 10 Google accounts |
| thought-signature-store.ts | Caching signatures for multi-turn thinking conversations |

## HOW IT WORKS
1. **Intercept**: `fetch.ts` intercepts Anthropic/Google requests.
2. **Route**: Rotates accounts and selects best endpoint (sandbox → daily → prod).
3. **Auth**: Injects Bearer tokens from `token.ts` persistence.
4. **Process**: `response.ts` parses SSE; `thinking.ts` manages thought blocks.
5. **Recovery**: Detects GCP permission errors and triggers recovery/rotation.
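The rotation in steps 2 and 5 can be sketched as follows. This is a minimal illustration with a hypothetical `Acct` shape; the real logic lives in `accounts.ts` and `fetch.ts` and additionally tracks per-model-family rate limits, tiers, and tokens.

```typescript
// Hypothetical, simplified account shape for illustration only.
interface Acct {
  email: string
  rateLimitedUntil?: number // epoch ms; undefined means not rate-limited
}

// Pick the first account whose rate-limit window has elapsed.
function pickAccount(accounts: Acct[], now: number): Acct | null {
  const available = accounts.filter(
    (a) => a.rateLimitedUntil === undefined || now >= a.rateLimitedUntil
  )
  return available[0] ?? null
}
```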
## FEATURES
- Multi-account load balancing (up to 10 accounts)
- Strategic endpoint fallback: sandbox → daily → prod
- Persistent thought signatures for continuity in thinking models
- Automated GCP permission error recovery

## ANTI-PATTERNS
- Hardcoding endpoints: Use `constants.ts` or let `fetch.ts` route.
- Manual token handling: Use `token.ts` and `storage.ts` abstraction.
- Sync OAuth calls: All auth flows must be non-blocking/async.
- Ignoring account rotation: Let `fetch.ts` handle load balancing.
File diff suppressed because it is too large
@@ -1,244 +0,0 @@
import { saveAccounts } from "./storage"
import { parseStoredToken, formatTokenForStorage } from "./token"
import {
  MODEL_FAMILIES,
  type AccountStorage,
  type AccountMetadata,
  type AccountTier,
  type AntigravityRefreshParts,
  type ModelFamily,
  type RateLimitState,
} from "./types"

export interface ManagedAccount {
  index: number
  parts: AntigravityRefreshParts
  access?: string
  expires?: number
  rateLimits: RateLimitState
  lastUsed: number
  email?: string
  tier?: AccountTier
}

interface AuthDetails {
  refresh: string
  access: string
  expires: number
}

interface OAuthAuthDetails {
  type: "oauth"
  refresh: string
  access: string
  expires: number
}

function isRateLimitedForFamily(account: ManagedAccount, family: ModelFamily): boolean {
  const resetTime = account.rateLimits[family]
  return resetTime !== undefined && Date.now() < resetTime
}

export class AccountManager {
  private accounts: ManagedAccount[] = []
  private currentIndex = 0
  private activeIndex = 0

  constructor(auth: AuthDetails, storedAccounts?: AccountStorage | null) {
    if (storedAccounts && storedAccounts.accounts.length > 0) {
      const validActiveIndex =
        typeof storedAccounts.activeIndex === "number" &&
        storedAccounts.activeIndex >= 0 &&
        storedAccounts.activeIndex < storedAccounts.accounts.length
          ? storedAccounts.activeIndex
          : 0

      this.activeIndex = validActiveIndex
      this.currentIndex = validActiveIndex

      this.accounts = storedAccounts.accounts.map((acc, index) => ({
        index,
        parts: {
          refreshToken: acc.refreshToken,
          projectId: acc.projectId,
          managedProjectId: acc.managedProjectId,
        },
        access: index === validActiveIndex ? auth.access : acc.accessToken,
        expires: index === validActiveIndex ? auth.expires : acc.expiresAt,
        rateLimits: acc.rateLimits ?? {},
        lastUsed: 0,
        email: acc.email,
        tier: acc.tier,
      }))
    } else {
      this.activeIndex = 0
      this.currentIndex = 0

      const parts = parseStoredToken(auth.refresh)
      this.accounts.push({
        index: 0,
        parts,
        access: auth.access,
        expires: auth.expires,
        rateLimits: {},
        lastUsed: 0,
      })
    }
  }

  getAccountCount(): number {
    return this.accounts.length
  }

  getCurrentAccount(): ManagedAccount | null {
    if (this.activeIndex >= 0 && this.activeIndex < this.accounts.length) {
      return this.accounts[this.activeIndex] ?? null
    }
    return null
  }

  getAccounts(): ManagedAccount[] {
    return [...this.accounts]
  }

  getCurrentOrNextForFamily(family: ModelFamily): ManagedAccount | null {
    for (const account of this.accounts) {
      this.clearExpiredRateLimits(account)
    }

    const current = this.getCurrentAccount()
    if (current) {
      if (!isRateLimitedForFamily(current, family)) {
        const betterTierAvailable =
          current.tier !== "paid" &&
          this.accounts.some((a) => a.tier === "paid" && !isRateLimitedForFamily(a, family))

        if (!betterTierAvailable) {
          current.lastUsed = Date.now()
          return current
        }
      }
    }

    const next = this.getNextForFamily(family)
    if (next) {
      this.activeIndex = next.index
    }
    return next
  }

  getNextForFamily(family: ModelFamily): ManagedAccount | null {
    const available = this.accounts.filter((a) => !isRateLimitedForFamily(a, family))

    if (available.length === 0) {
      return null
    }

    const paidAvailable = available.filter((a) => a.tier === "paid")
    const pool = paidAvailable.length > 0 ? paidAvailable : available

    const account = pool[this.currentIndex % pool.length]
    if (!account) {
      return null
    }

    this.currentIndex++
    account.lastUsed = Date.now()
    return account
  }

  markRateLimited(account: ManagedAccount, retryAfterMs: number, family: ModelFamily): void {
    account.rateLimits[family] = Date.now() + retryAfterMs
  }

  clearExpiredRateLimits(account: ManagedAccount): void {
    const now = Date.now()
    for (const family of MODEL_FAMILIES) {
      if (account.rateLimits[family] !== undefined && now >= account.rateLimits[family]!) {
        delete account.rateLimits[family]
      }
    }
  }

  addAccount(
    parts: AntigravityRefreshParts,
    access?: string,
    expires?: number,
    email?: string,
    tier?: AccountTier
  ): void {
    this.accounts.push({
      index: this.accounts.length,
      parts,
      access,
      expires,
      rateLimits: {},
      lastUsed: 0,
      email,
      tier,
    })
  }

  removeAccount(index: number): boolean {
    if (index < 0 || index >= this.accounts.length) {
      return false
    }

    this.accounts.splice(index, 1)

    if (index < this.activeIndex) {
      this.activeIndex--
    } else if (index === this.activeIndex) {
      this.activeIndex = Math.min(this.activeIndex, Math.max(0, this.accounts.length - 1))
    }

    if (index < this.currentIndex) {
      this.currentIndex--
    } else if (index === this.currentIndex) {
      this.currentIndex = Math.min(this.currentIndex, Math.max(0, this.accounts.length - 1))
    }

    for (let i = 0; i < this.accounts.length; i++) {
      this.accounts[i]!.index = i
    }

    return true
  }

  async save(path?: string): Promise<void> {
    const storage: AccountStorage = {
      version: 1,
      accounts: this.accounts.map((acc) => ({
        email: acc.email ?? "",
        tier: acc.tier ?? "free",
        refreshToken: acc.parts.refreshToken,
        projectId: acc.parts.projectId ?? "",
        managedProjectId: acc.parts.managedProjectId,
        accessToken: acc.access ?? "",
        expiresAt: acc.expires ?? 0,
        rateLimits: acc.rateLimits,
      })),
      activeIndex: Math.max(0, this.activeIndex),
    }

    await saveAccounts(storage, path)
  }

  toAuthDetails(): OAuthAuthDetails {
    const current = this.getCurrentAccount() ?? this.accounts[0]
    if (!current) {
      throw new Error("No accounts available")
    }

    const allRefreshTokens = this.accounts
      .map((acc) => formatTokenForStorage(acc.parts.refreshToken, acc.parts.projectId ?? "", acc.parts.managedProjectId))
      .join("|||")

    return {
      type: "oauth",
      refresh: allRefreshTokens,
      access: current.access ?? "",
      expires: current.expires ?? 0,
    }
  }
}
@@ -1,37 +0,0 @@
import { describe, it, expect, mock, spyOn } from "bun:test"
import { openBrowserURL } from "./browser"

describe("openBrowserURL", () => {
  it("returns true when browser opens successfully", async () => {
    // #given
    const url = "https://accounts.google.com/oauth"

    // #when
    const result = await openBrowserURL(url)

    // #then
    expect(typeof result).toBe("boolean")
  })

  it("returns false when open throws an error", async () => {
    // #given
    const invalidUrl = ""

    // #when
    const result = await openBrowserURL(invalidUrl)

    // #then
    expect(typeof result).toBe("boolean")
  })

  it("handles URL with special characters", async () => {
    // #given
    const urlWithParams = "https://accounts.google.com/oauth?state=abc123&redirect_uri=http://localhost:51121"

    // #when
    const result = await openBrowserURL(urlWithParams)

    // #then
    expect(typeof result).toBe("boolean")
  })
})
@@ -1,51 +0,0 @@
/**
 * Cross-platform browser opening utility.
 * Uses the "open" npm package for reliable cross-platform support.
 *
 * Supports: macOS, Windows, Linux (including WSL)
 */

import open from "open"

/**
 * Debug logging helper.
 * Only logs when ANTIGRAVITY_DEBUG=1
 */
function debugLog(message: string): void {
  if (process.env.ANTIGRAVITY_DEBUG === "1") {
    console.log(`[antigravity-browser] ${message}`)
  }
}

/**
 * Opens a URL in the user's default browser.
 *
 * Cross-platform support:
 * - macOS: uses `open` command
 * - Windows: uses `start` command
 * - Linux: uses `xdg-open` command
 * - WSL: uses Windows PowerShell
 *
 * @param url - The URL to open in the browser
 * @returns Promise<boolean> - true if browser opened successfully, false otherwise
 *
 * @example
 * ```typescript
 * const success = await openBrowserURL("https://accounts.google.com/oauth...")
 * if (!success) {
 *   console.log("Please open this URL manually:", url)
 * }
 * ```
 */
export async function openBrowserURL(url: string): Promise<boolean> {
  debugLog(`Opening browser: ${url}`)

  try {
    await open(url)
    debugLog("Browser opened successfully")
    return true
  } catch (error) {
    debugLog(`Failed to open browser: ${error instanceof Error ? error.message : String(error)}`)
    return false
  }
}
@@ -1,156 +0,0 @@
import { describe, it, expect, beforeEach, afterEach, mock } from "bun:test"

const CANCEL = Symbol("cancel")

type ConfirmFn = (options: unknown) => Promise<boolean | typeof CANCEL>
type SelectFn = (options: unknown) => Promise<"free" | "paid" | typeof CANCEL>

const confirmMock = mock<ConfirmFn>(async () => false)
const selectMock = mock<SelectFn>(async () => "free")
const cancelMock = mock<(message?: string) => void>(() => {})

mock.module("@clack/prompts", () => {
  return {
    confirm: confirmMock,
    select: selectMock,
    isCancel: (value: unknown) => value === CANCEL,
    cancel: cancelMock,
  }
})

function setIsTty(isTty: boolean): () => void {
  const original = Object.getOwnPropertyDescriptor(process.stdout, "isTTY")

  Object.defineProperty(process.stdout, "isTTY", {
    configurable: true,
    value: isTty,
  })

  return () => {
    if (original) {
      Object.defineProperty(process.stdout, "isTTY", original)
    } else {
      // Best-effort restore: remove overridden property
      // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
      delete (process.stdout as unknown as { isTTY?: unknown }).isTTY
    }
  }
}

describe("src/auth/antigravity/cli", () => {
  let restoreIsTty: (() => void) | null = null

  beforeEach(() => {
    confirmMock.mockReset()
    selectMock.mockReset()
    cancelMock.mockReset()
    restoreIsTty?.()
    restoreIsTty = null
  })

  afterEach(() => {
    restoreIsTty?.()
    restoreIsTty = null
  })

  it("promptAddAnotherAccount returns confirm result in TTY", async () => {
    // #given
    restoreIsTty = setIsTty(true)
    confirmMock.mockResolvedValueOnce(true)

    const { promptAddAnotherAccount } = await import("./cli")

    // #when
    const result = await promptAddAnotherAccount(2)

    // #then
    expect(result).toBe(true)
    expect(confirmMock).toHaveBeenCalledTimes(1)
  })

  it("promptAddAnotherAccount returns false in TTY when confirm is false", async () => {
    // #given
    restoreIsTty = setIsTty(true)
    confirmMock.mockResolvedValueOnce(false)

    const { promptAddAnotherAccount } = await import("./cli")

    // #when
    const result = await promptAddAnotherAccount(2)

    // #then
    expect(result).toBe(false)
    expect(confirmMock).toHaveBeenCalledTimes(1)
  })

  it("promptAddAnotherAccount returns false in non-TTY", async () => {
    // #given
    restoreIsTty = setIsTty(false)

    const { promptAddAnotherAccount } = await import("./cli")

    // #when
    const result = await promptAddAnotherAccount(3)

    // #then
    expect(result).toBe(false)
    expect(confirmMock).toHaveBeenCalledTimes(0)
  })

  it("promptAddAnotherAccount handles cancel", async () => {
    // #given
    restoreIsTty = setIsTty(true)
    confirmMock.mockResolvedValueOnce(CANCEL)

    const { promptAddAnotherAccount } = await import("./cli")

    // #when
    const result = await promptAddAnotherAccount(1)

    // #then
    expect(result).toBe(false)
  })

  it("promptAccountTier returns selected tier in TTY", async () => {
    // #given
    restoreIsTty = setIsTty(true)
    selectMock.mockResolvedValueOnce("paid")

    const { promptAccountTier } = await import("./cli")

    // #when
    const result = await promptAccountTier()

    // #then
    expect(result).toBe("paid")
    expect(selectMock).toHaveBeenCalledTimes(1)
  })

  it("promptAccountTier returns free in non-TTY", async () => {
    // #given
    restoreIsTty = setIsTty(false)

    const { promptAccountTier } = await import("./cli")

    // #when
    const result = await promptAccountTier()

    // #then
    expect(result).toBe("free")
    expect(selectMock).toHaveBeenCalledTimes(0)
  })

  it("promptAccountTier handles cancel", async () => {
    // #given
    restoreIsTty = setIsTty(true)
    selectMock.mockResolvedValueOnce(CANCEL)

    const { promptAccountTier } = await import("./cli")

    // #when
    const result = await promptAccountTier()

    // #then
    expect(result).toBe("free")
  })
})
@@ -1,37 +0,0 @@
import { confirm, select, isCancel } from "@clack/prompts"

export async function promptAddAnotherAccount(currentCount: number): Promise<boolean> {
  if (!process.stdout.isTTY) {
    return false
  }

  const result = await confirm({
    message: `Add another Google account?\nCurrently have ${currentCount} accounts (max 10)`,
  })

  if (isCancel(result)) {
    return false
  }

  return result
}

export async function promptAccountTier(): Promise<"free" | "paid"> {
  if (!process.stdout.isTTY) {
    return "free"
  }

  const tier = await select({
    message: "Select account tier",
    options: [
      { value: "free" as const, label: "Free" },
      { value: "paid" as const, label: "Paid" },
    ],
  })

  if (isCancel(tier)) {
    return "free"
  }

  return tier
}
@@ -1,69 +0,0 @@
import { describe, it, expect } from "bun:test"
import {
  ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS,
  ANTIGRAVITY_ENDPOINT_FALLBACKS,
  ANTIGRAVITY_CALLBACK_PORT,
} from "./constants"

describe("Antigravity Constants", () => {
  describe("ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS", () => {
    it("should be 60 seconds (60,000ms) to refresh before expiry", () => {
      // #given
      const SIXTY_SECONDS_MS = 60 * 1000 // 60,000

      // #when
      const actual = ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS

      // #then
      expect(actual).toBe(SIXTY_SECONDS_MS)
    })
  })

  describe("ANTIGRAVITY_ENDPOINT_FALLBACKS", () => {
    it("should have exactly 3 endpoints (sandbox → daily → prod)", () => {
      // #given
      const expectedCount = 3

      // #when
      const actual = ANTIGRAVITY_ENDPOINT_FALLBACKS

      // #then
      expect(actual).toHaveLength(expectedCount)
    })

    it("should have sandbox endpoint first", () => {
      // #then
      expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[0]).toBe(
        "https://daily-cloudcode-pa.sandbox.googleapis.com"
      )
    })

    it("should have daily endpoint second", () => {
      // #then
      expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[1]).toBe(
        "https://daily-cloudcode-pa.googleapis.com"
      )
    })

    it("should have prod endpoint third", () => {
      // #then
      expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[2]).toBe(
        "https://cloudcode-pa.googleapis.com"
      )
    })

    it("should NOT include autopush endpoint", () => {
      // #then
      const endpointsJoined = ANTIGRAVITY_ENDPOINT_FALLBACKS.join(",")
      const hasAutopush = endpointsJoined.includes("autopush-cloudcode-pa")
      expect(hasAutopush).toBe(false)
    })
  })

  describe("ANTIGRAVITY_CALLBACK_PORT", () => {
    it("should be 51121 to match CLIProxyAPI", () => {
      // #then
      expect(ANTIGRAVITY_CALLBACK_PORT).toBe(51121)
    })
  })
})
@@ -1,267 +0,0 @@
/**
 * Antigravity OAuth configuration constants.
 * Values sourced from cliproxyapi/sdk/auth/antigravity.go
 *
 * ## Logging Policy
 *
 * All console logging in antigravity modules follows a consistent policy:
 *
 * - **Debug logs**: Guard with `if (process.env.ANTIGRAVITY_DEBUG === "1")`
 *   - Includes: info messages, warnings, non-fatal errors
 *   - Enable debugging: `ANTIGRAVITY_DEBUG=1 opencode`
 *
 * - **Fatal errors**: None currently. All errors are handled by returning
 *   appropriate error responses to OpenCode's auth system.
 *
 * This policy ensures production silence while enabling verbose debugging
 * when needed for troubleshooting OAuth flows.
 */

// OAuth 2.0 Client Credentials
export const ANTIGRAVITY_CLIENT_ID =
  "1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
export const ANTIGRAVITY_CLIENT_SECRET = "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"

// OAuth Callback
export const ANTIGRAVITY_CALLBACK_PORT = 51121
export const ANTIGRAVITY_REDIRECT_URI = `http://localhost:${ANTIGRAVITY_CALLBACK_PORT}/oauth-callback`

// OAuth Scopes
export const ANTIGRAVITY_SCOPES = [
  "https://www.googleapis.com/auth/cloud-platform",
  "https://www.googleapis.com/auth/userinfo.email",
  "https://www.googleapis.com/auth/userinfo.profile",
  "https://www.googleapis.com/auth/cclog",
  "https://www.googleapis.com/auth/experimentsandconfigs",
] as const

// API Endpoint Fallbacks - matches CLIProxyAPI antigravity_executor.go:1192-1201
// Claude models only available on SANDBOX endpoints (429 quota vs 404 not found)
export const ANTIGRAVITY_ENDPOINT_FALLBACKS = [
  "https://daily-cloudcode-pa.sandbox.googleapis.com",
  "https://daily-cloudcode-pa.googleapis.com",
  "https://cloudcode-pa.googleapis.com",
] as const
|
||||
// API Version
|
||||
export const ANTIGRAVITY_API_VERSION = "v1internal"
|
||||
|
||||
// Request Headers
|
||||
export const ANTIGRAVITY_HEADERS = {
|
||||
"User-Agent": "google-api-nodejs-client/9.15.1",
|
||||
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
|
||||
"Client-Metadata": JSON.stringify({
|
||||
ideType: "IDE_UNSPECIFIED",
|
||||
platform: "PLATFORM_UNSPECIFIED",
|
||||
pluginType: "GEMINI",
|
||||
}),
|
||||
} as const
|
||||
|
||||
// Default Project ID (fallback when loadCodeAssist API fails)
|
||||
// From opencode-antigravity-auth reference implementation
|
||||
export const ANTIGRAVITY_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
|
||||
|
||||
|
||||
|
||||
// Google OAuth endpoints
|
||||
export const GOOGLE_AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
|
||||
export const GOOGLE_TOKEN_URL = "https://oauth2.googleapis.com/token"
|
||||
export const GOOGLE_USERINFO_URL = "https://www.googleapis.com/oauth2/v1/userinfo"
|
||||
|
||||
// Token refresh buffer (refresh 60 seconds before expiry)
|
||||
export const ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS = 60_000
|
||||
|
||||
// Default thought signature to skip validation (CLIProxyAPI approach)
|
||||
export const SKIP_THOUGHT_SIGNATURE_VALIDATOR = "skip_thought_signature_validator"
|
||||
|
||||
// ============================================================================
|
||||
// System Prompt - Sourced from CLIProxyAPI antigravity_executor.go:1049-1050
|
||||
// ============================================================================
|
||||
|
export const ANTIGRAVITY_SYSTEM_PROMPT = `<identity>
You are Antigravity, a powerful agentic AI coding assistant designed by the Google Deepmind team working on Advanced Agentic Coding.
You are pair programming with a USER to solve their coding task. The task may require creating a new codebase, modifying or debugging an existing codebase, or simply answering a question.
The USER will send you requests, which you must always prioritize addressing. Along with each USER request, we will attach additional metadata about their current state, such as what files they have open and where their cursor is.
This information may or may not be relevant to the coding task, it is up for you to decide.
</identity>

<tool_calling>
Call tools as you normally would. The following list provides additional guidance to help you avoid errors:
- **Absolute paths only**. When using tools that accept file path arguments, ALWAYS use the absolute file path.
</tool_calling>

<web_application_development>
## Technology Stack
Your web applications should be built using the following technologies:
1. **Core**: Use HTML for structure and Javascript for logic.
2. **Styling (CSS)**: Use Vanilla CSS for maximum flexibility and control. Avoid using TailwindCSS unless the USER explicitly requests it; in this case, first confirm which TailwindCSS version to use.
3. **Web App**: If the USER specifies that they want a more complex web app, use a framework like Next.js or Vite. Only do this if the USER explicitly requests a web app.
4. **New Project Creation**: If you need to use a framework for a new app, use \`npx\` with the appropriate script, but there are some rules to follow:
   - Use \`npx -y\` to automatically install the script and its dependencies
   - You MUST run the command with \`--help\` flag to see all available options first
   - Initialize the app in the current directory with \`./\` (example: \`npx -y create-vite-app@latest ./\`)
</web_application_development>
`

// ============================================================================
// Thinking Configuration - Sourced from CLIProxyAPI internal/util/gemini_thinking.go:481-487
// ============================================================================

/**
 * Maps reasoning_effort UI values to thinking budget tokens.
 *
 * Key notes:
 * - `none: 0` is a sentinel value meaning "delete thinkingConfig entirely"
 * - `auto: -1` triggers dynamic budget calculation based on context
 * - All other values represent actual thinking budget in tokens
 */
export const REASONING_EFFORT_BUDGET_MAP: Record<string, number> = {
  none: 0, // Special: DELETE thinkingConfig entirely
  auto: -1, // Dynamic calculation
  minimal: 512,
  low: 1024,
  medium: 8192,
  high: 24576,
  xhigh: 32768,
}

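As a usage sketch (the `resolveThinkingBudget` helper below is hypothetical, not part of this file; it inlines the same entries as `REASONING_EFFORT_BUDGET_MAP`), this is how the sentinel values are meant to be consumed:

```typescript
// Hypothetical consumer of the budget map (entries mirror REASONING_EFFORT_BUDGET_MAP).
const BUDGETS: Record<string, number> = {
  none: 0,   // delete thinkingConfig entirely
  auto: -1,  // dynamic budget calculation
  minimal: 512,
  low: 1024,
  medium: 8192,
  high: 24576,
  xhigh: 32768,
}

// Unknown efforts fall back to "auto" (dynamic) rather than failing.
function resolveThinkingBudget(effort: string): number {
  return BUDGETS[effort] ?? BUDGETS.auto
}

console.log(resolveThinkingBudget("medium")) // 8192
console.log(resolveThinkingBudget("unheard-of")) // -1
```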
/**
 * Model-specific thinking configuration.
 *
 * thinkingType:
 * - "numeric": Uses thinkingBudget (number) - Gemini 2.5, Claude via Antigravity
 * - "levels": Uses thinkingLevel (string) - Gemini 3
 *
 * zeroAllowed:
 * - true: Budget can be 0 (thinking disabled)
 * - false: Minimum budget enforced (cannot disable thinking)
 */
export interface AntigravityModelConfig {
  thinkingType: "numeric" | "levels"
  min: number
  max: number
  zeroAllowed: boolean
  levels?: string[] // lowercase only: "low", "high" (NOT "LOW", "HIGH")
}

/**
 * Thinking configuration per model.
 * Keys are normalized model IDs (no provider prefix, no variant suffix).
 *
 * Config lookup uses pattern matching fallback:
 * - includes("gemini-3") → Gemini 3 (levels)
 * - includes("gemini-2.5") → Gemini 2.5 (numeric)
 * - includes("claude") → Claude via Antigravity (numeric)
 */
export const ANTIGRAVITY_MODEL_CONFIGS: Record<string, AntigravityModelConfig> = {
  "gemini-2.5-flash": {
    thinkingType: "numeric",
    min: 0,
    max: 24576,
    zeroAllowed: true,
  },
  "gemini-2.5-flash-lite": {
    thinkingType: "numeric",
    min: 0,
    max: 24576,
    zeroAllowed: true,
  },
  "gemini-2.5-computer-use-preview-10-2025": {
    thinkingType: "numeric",
    min: 128,
    max: 32768,
    zeroAllowed: false,
  },
  "gemini-3-pro-preview": {
    thinkingType: "levels",
    min: 128,
    max: 32768,
    zeroAllowed: false,
    levels: ["low", "high"],
  },
  "gemini-3-flash-preview": {
    thinkingType: "levels",
    min: 128,
    max: 32768,
    zeroAllowed: false,
    levels: ["minimal", "low", "medium", "high"],
  },
  "gemini-claude-sonnet-4-5-thinking": {
    thinkingType: "numeric",
    min: 1024,
    max: 200000,
    zeroAllowed: false,
  },
  "gemini-claude-opus-4-5-thinking": {
    thinkingType: "numeric",
    min: 1024,
    max: 200000,
    zeroAllowed: false,
  },
}

// ============================================================================
// Model ID Normalization
// ============================================================================

/**
 * Normalizes model ID for config lookup.
 *
 * Algorithm:
 * 1. Strip provider prefix (e.g., "google/")
 * 2. Strip "antigravity-" prefix
 * 3. Strip UI variant suffixes (-high, -low, -thinking-*)
 *
 * Examples:
 * - "google/antigravity-gemini-3-pro-high" → "gemini-3-pro"
 * - "antigravity-gemini-3-flash-preview" → "gemini-3-flash-preview"
 * - "gemini-2.5-flash" → "gemini-2.5-flash"
 * - "gemini-claude-sonnet-4-5-thinking-high" → "gemini-claude-sonnet-4-5"
 */
export function normalizeModelId(model: string): string {
  let normalized = model

  // 1. Strip provider prefix (e.g., "google/")
  if (normalized.includes("/")) {
    normalized = normalized.split("/").pop() || normalized
  }

  // 2. Strip "antigravity-" prefix
  if (normalized.startsWith("antigravity-")) {
    normalized = normalized.substring("antigravity-".length)
  }

  // 3. Strip UI variant suffixes (-high, -low, -thinking-*)
  normalized = normalized.replace(/-thinking-(low|medium|high)$/, "")
  normalized = normalized.replace(/-(high|low)$/, "")

  return normalized
}

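A standalone copy of the normalization steps (reproduced here only for illustration, under the name `normalizeModelIdDemo`) confirms the documented examples:

```typescript
// Reproduction of normalizeModelId for demonstration purposes.
function normalizeModelIdDemo(model: string): string {
  let normalized = model
  // 1. Strip provider prefix (e.g., "google/")
  if (normalized.includes("/")) {
    normalized = normalized.split("/").pop() || normalized
  }
  // 2. Strip "antigravity-" prefix
  if (normalized.startsWith("antigravity-")) {
    normalized = normalized.substring("antigravity-".length)
  }
  // 3. Strip UI variant suffixes (-high, -low, -thinking-*)
  normalized = normalized.replace(/-thinking-(low|medium|high)$/, "")
  normalized = normalized.replace(/-(high|low)$/, "")
  return normalized
}

console.log(normalizeModelIdDemo("google/antigravity-gemini-3-pro-high")) // "gemini-3-pro"
console.log(normalizeModelIdDemo("gemini-claude-sonnet-4-5-thinking-high")) // "gemini-claude-sonnet-4-5"
console.log(normalizeModelIdDemo("gemini-2.5-flash")) // "gemini-2.5-flash"
```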
export const ANTIGRAVITY_SUPPORTED_MODELS = [
  "gemini-2.5-flash",
  "gemini-2.5-flash-lite",
  "gemini-2.5-computer-use-preview-10-2025",
  "gemini-3-pro-preview",
  "gemini-3-flash-preview",
  "gemini-claude-sonnet-4-5-thinking",
  "gemini-claude-opus-4-5-thinking",
] as const

// ============================================================================
// Model Alias Mapping (for Antigravity API)
// ============================================================================

/**
 * Converts UI model names to Antigravity API model names.
 *
 * NOTE: Tested 2026-01-08 - Gemini 3 models work with -preview suffix directly.
 * The CLIProxyAPI transformations (gemini-3-pro-high, gemini-3-flash) return 404.
 * Claude models return 404 on all endpoints (may require special access/quota).
 */
export function alias2ModelName(modelName: string): string {
  if (modelName.startsWith("gemini-claude-")) {
    return modelName.substring("gemini-".length)
  }
  return modelName
}
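For illustration, a standalone reproduction (named `alias2ModelNameDemo` here) shows that only the `gemini-claude-` prefix is rewritten and everything else passes through unchanged:

```typescript
// Reproduction of alias2ModelName: strip the leading "gemini-" from
// Claude aliases; all other model names are returned as-is.
function alias2ModelNameDemo(modelName: string): string {
  if (modelName.startsWith("gemini-claude-")) {
    return modelName.substring("gemini-".length)
  }
  return modelName
}

console.log(alias2ModelNameDemo("gemini-claude-sonnet-4-5-thinking")) // "claude-sonnet-4-5-thinking"
console.log(alias2ModelNameDemo("gemini-3-pro-preview")) // "gemini-3-pro-preview"
```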
@@ -1,798 +0,0 @@
/**
 * Antigravity Fetch Interceptor
 *
 * Creates a custom fetch function that:
 * - Checks token expiration and auto-refreshes
 * - Rewrites URLs to Antigravity endpoints
 * - Applies request transformation (including tool normalization)
 * - Applies response transformation (including thinking extraction)
 * - Implements endpoint fallback (daily → autopush → prod)
 *
 * **Body Type Assumption:**
 * This interceptor assumes `init.body` is a JSON string (OpenAI format).
 * Non-string bodies (ReadableStream, Blob, FormData, URLSearchParams, etc.)
 * are passed through unchanged to the original fetch to avoid breaking
 * other requests that may not be OpenAI-format API calls.
 *
 * Debug logging available via ANTIGRAVITY_DEBUG=1 environment variable.
 */

import { ANTIGRAVITY_ENDPOINT_FALLBACKS } from "./constants"
import { fetchProjectContext, clearProjectContextCache, invalidateProjectContextByRefreshToken } from "./project"
import { isTokenExpired, refreshAccessToken, parseStoredToken, formatTokenForStorage, AntigravityTokenRefreshError } from "./token"
import { AccountManager, type ManagedAccount } from "./accounts"
import { loadAccounts } from "./storage"
import type { ModelFamily } from "./types"
import { transformRequest } from "./request"
import { convertRequestBody, hasOpenAIMessages } from "./message-converter"
import {
  transformResponse,
  transformStreamingResponse,
  isStreamingResponse,
} from "./response"
import { normalizeToolsForGemini, type OpenAITool } from "./tools"
import { extractThinkingBlocks, shouldIncludeThinking, transformResponseThinking, extractThinkingConfig, applyThinkingConfigToRequest } from "./thinking"
import {
  getThoughtSignature,
  setThoughtSignature,
  getOrCreateSessionId,
} from "./thought-signature-store"
import type { AntigravityTokens } from "./types"

/**
 * Auth interface matching OpenCode's auth system
 */
interface Auth {
  access?: string
  refresh?: string
  expires?: number
}

/**
 * Client interface for auth operations
 */
interface AuthClient {
  set(providerId: string, auth: Auth): Promise<void>
}

/**
 * Debug logging helper
 * Only logs when ANTIGRAVITY_DEBUG=1
 */
function debugLog(message: string): void {
  if (process.env.ANTIGRAVITY_DEBUG === "1") {
    console.log(`[antigravity-fetch] ${message}`)
  }
}

function isRetryableError(status: number): boolean {
  if (status === 0) return true
  if (status === 429) return true
  if (status >= 500 && status < 600) return true
  return false
}

function getModelFamilyFromModelName(modelName: string): ModelFamily | null {
  const lower = modelName.toLowerCase()
  if (lower.includes("claude") || lower.includes("anthropic")) return "claude"
  if (lower.includes("flash")) return "gemini-flash"
  if (lower.includes("gemini")) return "gemini-pro"
  return null
}

function getModelFamilyFromUrl(url: string): ModelFamily {
  if (url.includes("claude")) return "claude"
  if (url.includes("flash")) return "gemini-flash"
  return "gemini-pro"
}

function getModelFamily(url: string, init?: RequestInit): ModelFamily {
  if (init?.body && typeof init.body === "string") {
    try {
      const body = JSON.parse(init.body) as Record<string, unknown>
      if (typeof body.model === "string") {
        const fromModel = getModelFamilyFromModelName(body.model)
        if (fromModel) return fromModel
      }
    } catch {}
  }
  return getModelFamilyFromUrl(url)
}

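To make the precedence concrete, a standalone sketch (helper names `familyDemo` etc. are illustrative; `ModelFamily` is assumed to be the union of the three string values used here) shows that the request body's `model` field wins over the URL heuristic:

```typescript
// Assumed shape of ModelFamily from ./types.
type ModelFamily = "claude" | "gemini-flash" | "gemini-pro"

// Reproductions of the family heuristics above, for demonstration.
function familyFromModel(modelName: string): ModelFamily | null {
  const lower = modelName.toLowerCase()
  if (lower.includes("claude") || lower.includes("anthropic")) return "claude"
  if (lower.includes("flash")) return "gemini-flash"
  if (lower.includes("gemini")) return "gemini-pro"
  return null
}

function familyFromUrl(url: string): ModelFamily {
  if (url.includes("claude")) return "claude"
  if (url.includes("flash")) return "gemini-flash"
  return "gemini-pro"
}

function familyDemo(url: string, body?: string): ModelFamily {
  if (body) {
    try {
      const parsed = JSON.parse(body) as Record<string, unknown>
      if (typeof parsed.model === "string") {
        const fromModel = familyFromModel(parsed.model)
        if (fromModel) return fromModel
      }
    } catch {}
  }
  return familyFromUrl(url)
}

// The body's model overrides the "claude" hint in the URL.
console.log(familyDemo("https://example/claude-chat", JSON.stringify({ model: "gemini-3-flash-preview" }))) // "gemini-flash"
// Without a parseable body, the URL heuristic applies.
console.log(familyDemo("https://example/claude-chat")) // "claude"
```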
const GCP_PERMISSION_ERROR_PATTERNS = [
  "PERMISSION_DENIED",
  "does not have permission",
  "Cloud AI Companion API has not been used",
  "has not been enabled",
] as const

function isGcpPermissionError(text: string): boolean {
  return GCP_PERMISSION_ERROR_PATTERNS.some((pattern) => text.includes(pattern))
}

function calculateRetryDelay(attempt: number): number {
  return Math.min(200 * Math.pow(2, attempt), 2000)
}

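The backoff grows geometrically from 200 ms and caps at 2 seconds; a quick reproduction (named `retryDelayDemo` for illustration) tabulates the first few attempts:

```typescript
// Reproduction of calculateRetryDelay: 200ms doubling, capped at 2000ms.
function retryDelayDemo(attempt: number): number {
  return Math.min(200 * Math.pow(2, attempt), 2000)
}

console.log([0, 1, 2, 3, 4].map(retryDelayDemo)) // [ 200, 400, 800, 1600, 2000 ]
```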
async function isRetryableResponse(response: Response): Promise<boolean> {
  if (isRetryableError(response.status)) return true
  if (response.status === 403) {
    try {
      const text = await response.clone().text()
      if (text.includes("SUBSCRIPTION_REQUIRED") || text.includes("Gemini Code Assist license")) {
        debugLog(`[RETRY] 403 SUBSCRIPTION_REQUIRED detected, will retry with next endpoint`)
        return true
      }
    } catch {}
  }
  return false
}

interface AttemptFetchOptions {
  endpoint: string
  url: string
  init: RequestInit
  accessToken: string
  projectId: string
  sessionId: string
  modelName?: string
  thoughtSignature?: string
}

interface RateLimitInfo {
  type: "rate-limited"
  retryAfterMs: number
  status: number
}

type AttemptFetchResult = Response | null | "pass-through" | "needs-refresh" | RateLimitInfo

async function attemptFetch(
  options: AttemptFetchOptions
): Promise<AttemptFetchResult> {
  const { endpoint, url, init, accessToken, projectId, sessionId, modelName, thoughtSignature } =
    options
  debugLog(`Trying endpoint: ${endpoint}`)

  try {
    const rawBody = init.body

    if (rawBody !== undefined && typeof rawBody !== "string") {
      debugLog(`Non-string body detected (${typeof rawBody}), signaling pass-through`)
      return "pass-through"
    }

    let parsedBody: Record<string, unknown> = {}
    if (rawBody) {
      try {
        parsedBody = JSON.parse(rawBody) as Record<string, unknown>
      } catch {
        parsedBody = {}
      }
    }

    debugLog(`[BODY] Keys: ${Object.keys(parsedBody).join(", ")}`)
    debugLog(`[BODY] Has contents: ${!!parsedBody.contents}, Has messages: ${!!parsedBody.messages}`)
    if (parsedBody.contents) {
      const contents = parsedBody.contents as Array<Record<string, unknown>>
      debugLog(`[BODY] contents length: ${contents.length}`)
      contents.forEach((c, i) => {
        debugLog(`[BODY] contents[${i}].role: ${c.role}, parts: ${JSON.stringify(c.parts).substring(0, 200)}`)
      })
    }

    if (parsedBody.tools && Array.isArray(parsedBody.tools)) {
      const normalizedTools = normalizeToolsForGemini(parsedBody.tools as OpenAITool[])
      if (normalizedTools) {
        parsedBody.tools = normalizedTools
      }
    }

    if (hasOpenAIMessages(parsedBody)) {
      debugLog(`[CONVERT] Converting OpenAI messages to Gemini contents`)
      parsedBody = convertRequestBody(parsedBody, thoughtSignature)
      debugLog(`[CONVERT] After conversion - Has contents: ${!!parsedBody.contents}`)
    }

    const transformed = transformRequest({
      url,
      body: parsedBody,
      accessToken,
      projectId,
      sessionId,
      modelName,
      endpointOverride: endpoint,
      thoughtSignature,
    })

    // Apply thinking config from reasoning_effort (from think-mode hook)
    const effectiveModel = modelName || transformed.body.model
    const thinkingConfig = extractThinkingConfig(
      parsedBody,
      parsedBody.generationConfig as Record<string, unknown> | undefined,
      parsedBody,
    )
    if (thinkingConfig) {
      debugLog(`[THINKING] Applying thinking config for model: ${effectiveModel}`)
      applyThinkingConfigToRequest(
        transformed.body as unknown as Record<string, unknown>,
        effectiveModel,
        thinkingConfig,
      )
      debugLog(`[THINKING] Thinking config applied successfully`)
    }

    debugLog(`[REQ] streaming=${transformed.streaming}, url=${transformed.url}`)

    const maxPermissionRetries = 10
    for (let attempt = 0; attempt <= maxPermissionRetries; attempt++) {
      const response = await fetch(transformed.url, {
        method: init.method || "POST",
        headers: transformed.headers,
        body: JSON.stringify(transformed.body),
        signal: init.signal,
      })

      debugLog(
        `[RESP] status=${response.status} content-type=${response.headers.get("content-type") ?? ""} url=${response.url}`
      )

      if (response.status === 401) {
        debugLog(`[401] Unauthorized response detected, signaling token refresh needed`)
        return "needs-refresh"
      }

      if (response.status === 403) {
        try {
          const text = await response.clone().text()
          if (isGcpPermissionError(text)) {
            if (attempt < maxPermissionRetries) {
              const delay = calculateRetryDelay(attempt)
              debugLog(`[RETRY] GCP permission error, retry ${attempt + 1}/${maxPermissionRetries} after ${delay}ms`)
              await new Promise((resolve) => setTimeout(resolve, delay))
              continue
            }
            debugLog(`[RETRY] GCP permission error, max retries exceeded`)
          }
        } catch {}
      }

      if (response.status === 429) {
        const retryAfter = response.headers.get("retry-after")
        let retryAfterMs = 60000
        if (retryAfter) {
          const parsed = parseInt(retryAfter, 10)
          if (!isNaN(parsed) && parsed > 0) {
            retryAfterMs = parsed * 1000
          } else {
            const httpDate = Date.parse(retryAfter)
            if (!isNaN(httpDate)) {
              retryAfterMs = Math.max(0, httpDate - Date.now())
            }
          }
        }
        debugLog(`[429] Rate limited, retry-after: ${retryAfterMs}ms`)
        await response.body?.cancel()
        return { type: "rate-limited" as const, retryAfterMs, status: 429 }
      }

      if (response.status >= 500 && response.status < 600) {
        debugLog(`[5xx] Server error ${response.status}, marking for rotation`)
        await response.body?.cancel()
        return { type: "rate-limited" as const, retryAfterMs: 300000, status: response.status }
      }

      if (!response.ok && (await isRetryableResponse(response))) {
        debugLog(`Endpoint failed: ${endpoint} (status: ${response.status}), trying next`)
        return null
      }

      return response
    }

    return null
  } catch (error) {
    debugLog(
      `Endpoint failed: ${endpoint} (${error instanceof Error ? error.message : "Unknown error"}), trying next`
    )
    return null
  }
}

interface GeminiResponsePart {
  thoughtSignature?: string
  thought_signature?: string
  functionCall?: Record<string, unknown>
  text?: string
  [key: string]: unknown
}

interface GeminiResponseCandidate {
  content?: {
    parts?: GeminiResponsePart[]
    [key: string]: unknown
  }
  [key: string]: unknown
}

interface GeminiResponseBody {
  candidates?: GeminiResponseCandidate[]
  [key: string]: unknown
}

function extractSignatureFromResponse(parsed: GeminiResponseBody): string | undefined {
  if (!parsed.candidates || !Array.isArray(parsed.candidates)) {
    return undefined
  }

  for (const candidate of parsed.candidates) {
    const parts = candidate.content?.parts
    if (!parts || !Array.isArray(parts)) {
      continue
    }

    for (const part of parts) {
      const sig = part.thoughtSignature || part.thought_signature
      if (sig && typeof sig === "string") {
        return sig
      }
    }
  }

  return undefined
}

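Run against a minimal response shape, the scan returns the first signature it finds in either key spelling (standalone reproduction named `extractSignatureDemo`, untyped for brevity):

```typescript
// Reproduction of extractSignatureFromResponse over a plain object:
// walks candidates → content.parts and returns the first signature,
// accepting both thoughtSignature and thought_signature spellings.
function extractSignatureDemo(parsed: any): string | undefined {
  if (!Array.isArray(parsed?.candidates)) return undefined
  for (const candidate of parsed.candidates) {
    const parts = candidate?.content?.parts
    if (!Array.isArray(parts)) continue
    for (const part of parts) {
      const sig = part.thoughtSignature || part.thought_signature
      if (sig && typeof sig === "string") return sig
    }
  }
  return undefined
}

const sample = {
  candidates: [
    { content: { parts: [{ text: "hello" }, { thought_signature: "sig-abc" }] } },
  ],
}
console.log(extractSignatureDemo(sample)) // "sig-abc"
console.log(extractSignatureDemo({})) // undefined
```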
async function transformResponseWithThinking(
  response: Response,
  modelName: string,
  fetchInstanceId: string
): Promise<Response> {
  const streaming = isStreamingResponse(response)

  let result
  if (streaming) {
    result = await transformStreamingResponse(response)
  } else {
    result = await transformResponse(response)
  }

  if (streaming) {
    return result.response
  }

  try {
    const text = await result.response.clone().text()
    debugLog(`[TSIG][RESP] Response text length: ${text.length}`)

    const parsed = JSON.parse(text) as GeminiResponseBody
    debugLog(`[TSIG][RESP] Parsed keys: ${Object.keys(parsed).join(", ")}`)
    debugLog(`[TSIG][RESP] Has candidates: ${!!parsed.candidates}, count: ${parsed.candidates?.length ?? 0}`)

    const signature = extractSignatureFromResponse(parsed)
    debugLog(`[TSIG][RESP] Signature extracted: ${signature ? signature.substring(0, 30) + "..." : "NONE"}`)
    if (signature) {
      setThoughtSignature(fetchInstanceId, signature)
      debugLog(`[TSIG][STORE] Stored signature for ${fetchInstanceId}`)
    } else {
      debugLog(`[TSIG][WARN] No signature found in response!`)
    }

    if (shouldIncludeThinking(modelName)) {
      const thinkingResult = extractThinkingBlocks(parsed)
      if (thinkingResult.hasThinking) {
        const transformed = transformResponseThinking(parsed)
        return new Response(JSON.stringify(transformed), {
          status: result.response.status,
          statusText: result.response.statusText,
          headers: result.response.headers,
        })
      }
    }
  } catch {}

  return result.response
}

/**
 * Create Antigravity fetch interceptor
 *
 * Factory function that creates a custom fetch function for Antigravity API.
 * Handles token management, request/response transformation, and endpoint fallback.
 *
 * @param getAuth - Async function to retrieve current auth state
 * @param client - Auth client for saving updated tokens
 * @param providerId - Provider identifier (e.g., "google")
 * @param clientId - Optional custom client ID for token refresh (defaults to ANTIGRAVITY_CLIENT_ID)
 * @param clientSecret - Optional custom client secret for token refresh (defaults to ANTIGRAVITY_CLIENT_SECRET)
 * @returns Custom fetch function compatible with standard fetch signature
 *
 * @example
 * ```typescript
 * const customFetch = createAntigravityFetch(
 *   () => auth(),
 *   client,
 *   "google",
 *   "custom-client-id",
 *   "custom-client-secret"
 * )
 *
 * // Use like standard fetch
 * const response = await customFetch("https://api.example.com/chat", {
 *   method: "POST",
 *   body: JSON.stringify({ messages: [...] })
 * })
 * ```
 */
export function createAntigravityFetch(
  getAuth: () => Promise<Auth>,
  client: AuthClient,
  providerId: string,
  clientId?: string,
  clientSecret?: string,
  accountManager?: AccountManager | null
): (url: string, init?: RequestInit) => Promise<Response> {
  let cachedTokens: AntigravityTokens | null = null
  let cachedProjectId: string | null = null
  let lastAccountIndex: number | null = null
  const fetchInstanceId = crypto.randomUUID()
  let manager: AccountManager | null = accountManager || null
  let accountsLoaded = false

  const fetchFn = async (url: string, init: RequestInit = {}): Promise<Response> => {
    debugLog(`Intercepting request to: ${url}`)

    // Get current auth state
    const auth = await getAuth()
    if (!auth.access || !auth.refresh) {
      throw new Error("Antigravity: No authentication tokens available")
    }

    // Parse stored token format
    let refreshParts = parseStoredToken(auth.refresh)

    if (!accountsLoaded && !manager && auth.refresh) {
      try {
        const storedAccounts = await loadAccounts()
        if (storedAccounts) {
          manager = new AccountManager(
            { refresh: auth.refresh, access: auth.access || "", expires: auth.expires || 0 },
            storedAccounts
          )
          debugLog(`[ACCOUNTS] Loaded ${manager.getAccountCount()} accounts from storage`)
        }
      } catch (error) {
        debugLog(`[ACCOUNTS] Failed to load accounts, falling back to single-account: ${error instanceof Error ? error.message : "Unknown"}`)
      }
      accountsLoaded = true
    }

    let currentAccount: ManagedAccount | null = null
    if (manager) {
      const family = getModelFamily(url, init)
      currentAccount = manager.getCurrentOrNextForFamily(family)

      if (currentAccount) {
        debugLog(`[ACCOUNTS] Using account ${currentAccount.index + 1}/${manager.getAccountCount()} for ${family}`)

        if (lastAccountIndex === null || lastAccountIndex !== currentAccount.index) {
          if (lastAccountIndex !== null) {
            debugLog(`[ACCOUNTS] Account changed from ${lastAccountIndex + 1} to ${currentAccount.index + 1}, clearing cached state`)
          } else if (cachedProjectId) {
            debugLog(`[ACCOUNTS] First account introduced, clearing cached state`)
          }
          cachedProjectId = null
          cachedTokens = null
        }
        lastAccountIndex = currentAccount.index

        if (currentAccount.access && currentAccount.expires) {
          auth.access = currentAccount.access
          auth.expires = currentAccount.expires
        }

        refreshParts = {
          refreshToken: currentAccount.parts.refreshToken,
          projectId: currentAccount.parts.projectId,
          managedProjectId: currentAccount.parts.managedProjectId,
        }
      }
    }

    // Build initial token state
    if (!cachedTokens) {
      cachedTokens = {
        type: "antigravity",
        access_token: auth.access,
        refresh_token: refreshParts.refreshToken,
        expires_in: auth.expires ? Math.floor((auth.expires - Date.now()) / 1000) : 3600,
        timestamp: auth.expires ? auth.expires - 3600 * 1000 : Date.now(),
      }
    } else {
      // Update with fresh values
      cachedTokens.access_token = auth.access
      cachedTokens.refresh_token = refreshParts.refreshToken
    }

    // Check token expiration and refresh if needed
    if (isTokenExpired(cachedTokens)) {
      debugLog("Token expired, refreshing...")

      try {
        const newTokens = await refreshAccessToken(refreshParts.refreshToken, clientId, clientSecret)

        cachedTokens = {
          type: "antigravity",
          access_token: newTokens.access_token,
          refresh_token: newTokens.refresh_token,
          expires_in: newTokens.expires_in,
          timestamp: Date.now(),
        }

        clearProjectContextCache()

        const formattedRefresh = formatTokenForStorage(
          newTokens.refresh_token,
          refreshParts.projectId || "",
          refreshParts.managedProjectId
        )

        await client.set(providerId, {
          access: newTokens.access_token,
          refresh: formattedRefresh,
          expires: Date.now() + newTokens.expires_in * 1000,
        })

        debugLog("Token refreshed successfully")
      } catch (error) {
        if (error instanceof AntigravityTokenRefreshError) {
          if (error.isInvalidGrant) {
            debugLog(`[REFRESH] Token revoked (invalid_grant), clearing caches`)
            invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
            clearProjectContextCache()
          }
          throw new Error(
            `Antigravity: Token refresh failed: ${error.description || error.message}${error.code ? ` (${error.code})` : ""}`
          )
        }
        throw new Error(
          `Antigravity: Token refresh failed: ${error instanceof Error ? error.message : "Unknown error"}`
        )
      }
    }

    // Fetch project ID via loadCodeAssist (CLIProxyAPI approach)
    if (!cachedProjectId) {
      const projectContext = await fetchProjectContext(cachedTokens.access_token)
      cachedProjectId = projectContext.cloudaicompanionProject || ""
      debugLog(`[PROJECT] Fetched project ID: "${cachedProjectId}"`)
    }

    const projectId = cachedProjectId
    debugLog(`[PROJECT] Using project ID: "${projectId}"`)

    // Extract model name from request body
    let modelName: string | undefined
    if (init.body) {
      try {
        const body =
          typeof init.body === "string"
            ? (JSON.parse(init.body) as Record<string, unknown>)
            : (init.body as unknown as Record<string, unknown>)
        if (typeof body.model === "string") {
          modelName = body.model
        }
      } catch {
        // Ignore parsing errors
      }
    }

    const maxEndpoints = Math.min(ANTIGRAVITY_ENDPOINT_FALLBACKS.length, 3)
    const sessionId = getOrCreateSessionId(fetchInstanceId)
    const thoughtSignature = getThoughtSignature(fetchInstanceId)
    debugLog(`[TSIG][GET] sessionId=${sessionId}, signature=${thoughtSignature ? thoughtSignature.substring(0, 20) + "..." : "none"}`)

    let hasRefreshedFor401 = false

    const executeWithEndpoints = async (): Promise<Response> => {
      for (let i = 0; i < maxEndpoints; i++) {
        const endpoint = ANTIGRAVITY_ENDPOINT_FALLBACKS[i]

        const response = await attemptFetch({
          endpoint,
          url,
          init,
          accessToken: cachedTokens!.access_token,
          projectId,
          sessionId,
          modelName,
          thoughtSignature,
        })

        if (response === "pass-through") {
          debugLog("Non-string body detected, passing through with auth headers")
          const headersWithAuth = {
            ...init.headers,
            Authorization: `Bearer ${cachedTokens!.access_token}`,
          }
          return fetch(url, { ...init, headers: headersWithAuth })
        }

        if (response === "needs-refresh") {
          if (hasRefreshedFor401) {
            debugLog("[401] Already refreshed once, returning unauthorized error")
            return new Response(
              JSON.stringify({
                error: {
                  message: "Authentication failed after token refresh",
                  type: "unauthorized",
                  code: "token_refresh_failed",
                },
              }),
              {
                status: 401,
                statusText: "Unauthorized",
                headers: { "Content-Type": "application/json" },
              }
            )
          }

          debugLog("[401] Refreshing token and retrying...")
          hasRefreshedFor401 = true

          try {
            const newTokens = await refreshAccessToken(
              refreshParts.refreshToken,
              clientId,
              clientSecret
            )

            cachedTokens = {
              type: "antigravity",
              access_token: newTokens.access_token,
              refresh_token: newTokens.refresh_token,
              expires_in: newTokens.expires_in,
              timestamp: Date.now(),
            }

            clearProjectContextCache()

            const formattedRefresh = formatTokenForStorage(
              newTokens.refresh_token,
              refreshParts.projectId || "",
              refreshParts.managedProjectId
            )

            await client.set(providerId, {
              access: newTokens.access_token,
              refresh: formattedRefresh,
              expires: Date.now() + newTokens.expires_in * 1000,
            })

            debugLog("[401] Token refreshed, retrying request...")
            return executeWithEndpoints()
          } catch (refreshError) {
            if (refreshError instanceof AntigravityTokenRefreshError) {
              if (refreshError.isInvalidGrant) {
                debugLog(`[401] Token revoked (invalid_grant), clearing caches`)
                invalidateProjectContextByRefreshToken(refreshParts.refreshToken)
                clearProjectContextCache()
              }
              debugLog(`[401] Token refresh failed: ${refreshError.description || refreshError.message}`)
|
||||
return new Response(
|
||||
JSON.stringify({
|
||||
error: {
|
||||
message: refreshError.description || refreshError.message,
|
||||
type: refreshError.isInvalidGrant ? "token_revoked" : "unauthorized",
|
||||
code: refreshError.code || "token_refresh_failed",
|
||||
},
|
||||
}),
|
||||
{
|
||||
status: 401,
|
||||
statusText: "Unauthorized",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
}
|
||||
)
|
||||
}
|
||||
debugLog(`[401] Token refresh failed: ${refreshError instanceof Error ? refreshError.message : "Unknown error"}`)
|
||||
return new Response(
|
||||
JSON.stringify({
|
||||
error: {
|
||||
message: refreshError instanceof Error ? refreshError.message : "Unknown error",
|
||||
type: "unauthorized",
|
||||
code: "token_refresh_failed",
|
||||
},
|
||||
}),
|
||||
{
|
||||
status: 401,
|
||||
statusText: "Unauthorized",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
}
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
if (response && typeof response === "object" && "type" in response && response.type === "rate-limited") {
|
||||
const rateLimitInfo = response as RateLimitInfo
|
||||
const family = getModelFamily(url, init)
|
||||
|
||||
if (rateLimitInfo.retryAfterMs > 5000 && manager && currentAccount) {
|
||||
manager.markRateLimited(currentAccount, rateLimitInfo.retryAfterMs, family)
|
||||
await manager.save()
|
||||
debugLog(`[RATE-LIMIT] Account ${currentAccount.index + 1} rate-limited for ${family}, rotating...`)
|
||||
|
||||
const nextAccount = manager.getCurrentOrNextForFamily(family)
|
||||
if (nextAccount && nextAccount.index !== currentAccount.index) {
|
||||
debugLog(`[RATE-LIMIT] Switched to account ${nextAccount.index + 1}`)
|
||||
return fetchFn(url, init)
|
||||
}
|
||||
}
|
||||
|
||||
const isLastEndpoint = i === maxEndpoints - 1
|
||||
if (isLastEndpoint) {
|
||||
const isServerError = rateLimitInfo.status >= 500
|
||||
debugLog(`[RATE-LIMIT] No alternative account or endpoint, returning ${rateLimitInfo.status}`)
|
||||
return new Response(
|
||||
JSON.stringify({
|
||||
error: {
|
||||
message: isServerError
|
||||
? `Server error (${rateLimitInfo.status}). Retry after ${Math.ceil(rateLimitInfo.retryAfterMs / 1000)} seconds`
|
||||
: `Rate limited. Retry after ${Math.ceil(rateLimitInfo.retryAfterMs / 1000)} seconds`,
|
||||
type: isServerError ? "server_error" : "rate_limit",
|
||||
code: isServerError ? "server_error" : "rate_limited",
|
||||
},
|
||||
}),
|
||||
{
|
||||
status: rateLimitInfo.status,
|
||||
statusText: isServerError ? "Server Error" : "Too Many Requests",
|
||||
headers: {
|
||||
"Content-Type": "application/json",
|
||||
"Retry-After": String(Math.ceil(rateLimitInfo.retryAfterMs / 1000)),
|
||||
},
|
||||
}
|
||||
)
|
||||
}
|
||||
|
||||
debugLog(`[RATE-LIMIT] No alternative account available, trying next endpoint`)
|
||||
continue
|
||||
}
|
||||
|
||||
if (response && response instanceof Response) {
|
||||
debugLog(`Success with endpoint: ${endpoint}`)
|
||||
const transformedResponse = await transformResponseWithThinking(
|
||||
response,
|
||||
modelName || "",
|
||||
fetchInstanceId
|
||||
)
|
||||
return transformedResponse
|
||||
}
|
||||
}
|
||||
|
||||
const errorMessage = `All Antigravity endpoints failed after ${maxEndpoints} attempts`
|
||||
debugLog(errorMessage)
|
||||
|
||||
return new Response(
|
||||
JSON.stringify({
|
||||
error: {
|
||||
message: errorMessage,
|
||||
type: "endpoint_failure",
|
||||
code: "all_endpoints_failed",
|
||||
},
|
||||
}),
|
||||
{
|
||||
status: 503,
|
||||
statusText: "Service Unavailable",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
}
|
||||
)
|
||||
}
|
||||
|
||||
return executeWithEndpoints()
|
||||
}
|
||||
|
||||
return fetchFn
|
||||
}
|
||||
|
||||
/**
|
||||
* Type export for createAntigravityFetch return type
|
||||
*/
|
||||
export type AntigravityFetch = (url: string, init?: RequestInit) => Promise<Response>
|
||||
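The endpoint loop in the deleted fetch code follows a plain sequential-fallback pattern: try each endpoint in order, retry on a retryable result, and return a synthesized 503 only after all candidates fail. A minimal, self-contained sketch of that pattern (hypothetical names, no real Antigravity endpoints):

```typescript
// Illustrative sketch of sequential endpoint fallback; not the plugin's real code.
type Attempt = (endpoint: string) => Promise<Response | "retryable">

async function withFallback(endpoints: string[], attempt: Attempt): Promise<Response> {
  for (const endpoint of endpoints) {
    const result = await attempt(endpoint)
    // A concrete Response (success or terminal error) ends the loop early.
    if (result !== "retryable") return result
  }
  // All endpoints exhausted: synthesize a 503, mirroring the shape above.
  return new Response(
    JSON.stringify({ error: { code: "all_endpoints_failed" } }),
    { status: 503, statusText: "Service Unavailable" },
  )
}

// Usage: the first endpoint reports a retryable condition, the second succeeds.
const res = await withFallback(["a", "b"], async (e) =>
  e === "a" ? "retryable" : new Response("ok", { status: 200 }),
)
console.log(res.status) // → 200
```

The real implementation layers token refresh and per-account rate-limit rotation on top of this skeleton, but the control flow is the same.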
@@ -1,13 +0,0 @@
export * from "./types"
export * from "./constants"
export * from "./oauth"
export * from "./token"
export * from "./project"
export * from "./request"
export * from "./response"
export * from "./tools"
export * from "./thinking"
export * from "./thought-signature-store"
export * from "./message-converter"
export * from "./fetch"
export * from "./plugin"
@@ -1,306 +0,0 @@
/**
 * Antigravity Integration Tests - End-to-End
 *
 * Tests the complete request transformation pipeline:
 * - Request parsing and model extraction
 * - System prompt injection (handled by transformRequest)
 * - Thinking config application (handled by applyThinkingConfigToRequest)
 * - Body wrapping for Antigravity API format
 */

import { describe, it, expect } from "bun:test"
import { transformRequest } from "./request"
import { extractThinkingConfig, applyThinkingConfigToRequest } from "./thinking"

describe("Antigravity Integration - End-to-End", () => {
  describe("Thinking Config Integration", () => {
    it("Gemini 3 with reasoning_effort='high' → thinkingLevel='high'", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-3-pro-preview",
        reasoning_effort: "high",
        messages: [{ role: "user", content: "test" }],
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-pro-preview:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-3-pro-preview",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-3-pro-preview",
          thinkingConfig,
        )
      }

      // #then
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
      expect(thinkingConfigResult?.thinkingLevel).toBe("high")
      expect(thinkingConfigResult?.thinkingBudget).toBeUndefined()
      const systemInstruction = transformed.body.request.systemInstruction as Record<string, unknown> | undefined
      const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
      expect(parts?.[0]?.text).toContain("<identity>")
    })

    it("Gemini 2.5 with reasoning_effort='high' → thinkingBudget=24576", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-2.5-flash",
        reasoning_effort: "high",
        messages: [{ role: "user", content: "test" }],
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-2.5-flash",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-2.5-flash",
          thinkingConfig,
        )
      }

      // #then
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
      expect(thinkingConfigResult?.thinkingBudget).toBe(24576)
      expect(thinkingConfigResult?.thinkingLevel).toBeUndefined()
    })

    it("reasoning_effort='none' → thinkingConfig deleted", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-2.5-flash",
        reasoning_effort: "none",
        messages: [{ role: "user", content: "test" }],
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-2.5-flash",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-2.5-flash",
          thinkingConfig,
        )
      }

      // #then
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      expect(genConfig?.thinkingConfig).toBeUndefined()
    })

    it("Claude via Antigravity with reasoning_effort='high'", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-claude-sonnet-4-5",
        reasoning_effort: "high",
        messages: [{ role: "user", content: "test" }],
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-claude-sonnet-4-5:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-claude-sonnet-4-5",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-claude-sonnet-4-5",
          thinkingConfig,
        )
      }

      // #then
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
      expect(thinkingConfigResult?.thinkingBudget).toBe(24576)
    })

    it("System prompt not duplicated on retry", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-3-pro-high",
        reasoning_effort: "high",
        messages: [{ role: "user", content: "test" }],
      }

      // #when - First transformation
      const firstOutput = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-pro-high:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-3-pro-high",
      })

      // Extract thinking config and apply to first output (simulating what fetch.ts does)
      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          firstOutput.body as unknown as Record<string, unknown>,
          "gemini-3-pro-high",
          thinkingConfig,
        )
      }

      // #then
      const systemInstruction = firstOutput.body.request.systemInstruction as Record<string, unknown> | undefined
      const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
      const identityCount = parts?.filter((p) => p.text.includes("<identity>")).length ?? 0
      expect(identityCount).toBe(1) // Should have exactly ONE <identity> block
    })

    it("reasoning_effort='low' for Gemini 3 → thinkingLevel='low'", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-3-flash-preview",
        reasoning_effort: "low",
        messages: [{ role: "user", content: "test" }],
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-3-flash-preview:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-3-flash-preview",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-3-flash-preview",
          thinkingConfig,
        )
      }

      // #then
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
      expect(thinkingConfigResult?.thinkingLevel).toBe("low")
    })

    it("Full pipeline: transformRequest + thinking config preserves all fields", () => {
      // #given
      const inputBody: Record<string, unknown> = {
        model: "gemini-2.5-flash",
        reasoning_effort: "medium",
        messages: [
          { role: "system", content: "You are a helpful assistant." },
          { role: "user", content: "Write a function" },
        ],
        generationConfig: {
          temperature: 0.7,
          maxOutputTokens: 1000,
        },
      }

      // #when
      const transformed = transformRequest({
        url: "https://generativelanguage.googleapis.com/v1internal/models/gemini-2.5-flash:generateContent",
        body: inputBody,
        accessToken: "test-token",
        projectId: "test-project",
        sessionId: "test-session",
        modelName: "gemini-2.5-flash",
      })

      const thinkingConfig = extractThinkingConfig(
        inputBody,
        inputBody.generationConfig as Record<string, unknown> | undefined,
        inputBody,
      )
      if (thinkingConfig) {
        applyThinkingConfigToRequest(
          transformed.body as unknown as Record<string, unknown>,
          "gemini-2.5-flash",
          thinkingConfig,
        )
      }

      // #then
      // Verify basic structure is preserved
      expect(transformed.body.project).toBe("test-project")
      expect(transformed.body.model).toBe("gemini-2.5-flash")
      expect(transformed.body.userAgent).toBe("antigravity")
      expect(transformed.body.request.sessionId).toBe("test-session")

      // Verify generation config is preserved
      const genConfig = transformed.body.request.generationConfig as Record<string, unknown> | undefined
      expect(genConfig?.temperature).toBe(0.7)
      expect(genConfig?.maxOutputTokens).toBe(1000)

      // Verify thinking config is applied
      const thinkingConfigResult = genConfig?.thinkingConfig as Record<string, unknown> | undefined
      expect(thinkingConfigResult?.thinkingBudget).toBe(8192)
      expect(thinkingConfigResult?.include_thoughts).toBe(true)

      // Verify system prompt is injected
      const systemInstruction = transformed.body.request.systemInstruction as Record<string, unknown> | undefined
      const parts = systemInstruction?.parts as Array<{ text: string }> | undefined
      expect(parts?.[0]?.text).toContain("<identity>")
    })
  })
})
@@ -1,206 +0,0 @@
/**
 * OpenAI → Gemini message format converter
 *
 * Converts OpenAI-style messages to Gemini contents format,
 * injecting thoughtSignature into functionCall parts.
 */

import { SKIP_THOUGHT_SIGNATURE_VALIDATOR } from "./constants"

function debugLog(message: string): void {
  if (process.env.ANTIGRAVITY_DEBUG === "1") {
    console.log(`[antigravity-converter] ${message}`)
  }
}

interface OpenAIMessage {
  role: "system" | "user" | "assistant" | "tool"
  content?: string | OpenAIContentPart[]
  tool_calls?: OpenAIToolCall[]
  tool_call_id?: string
  name?: string
}

interface OpenAIContentPart {
  type: string
  text?: string
  image_url?: { url: string }
  [key: string]: unknown
}

interface OpenAIToolCall {
  id: string
  type: "function"
  function: {
    name: string
    arguments: string
  }
}

interface GeminiPart {
  text?: string
  functionCall?: {
    name: string
    args: Record<string, unknown>
  }
  functionResponse?: {
    name: string
    response: Record<string, unknown>
  }
  inlineData?: {
    mimeType: string
    data: string
  }
  thought_signature?: string
  [key: string]: unknown
}

interface GeminiContent {
  role: "user" | "model"
  parts: GeminiPart[]
}

export function convertOpenAIToGemini(
  messages: OpenAIMessage[],
  thoughtSignature?: string
): GeminiContent[] {
  debugLog(`Converting ${messages.length} messages, signature: ${thoughtSignature ? "present" : "none"}`)

  const contents: GeminiContent[] = []

  for (const msg of messages) {
    if (msg.role === "system") {
      contents.push({
        role: "user",
        parts: [{ text: typeof msg.content === "string" ? msg.content : "" }],
      })
      continue
    }

    if (msg.role === "user") {
      const parts = convertContentToParts(msg.content)
      contents.push({ role: "user", parts })
      continue
    }

    if (msg.role === "assistant") {
      const parts: GeminiPart[] = []

      if (msg.content) {
        parts.push(...convertContentToParts(msg.content))
      }

      if (msg.tool_calls && msg.tool_calls.length > 0) {
        for (const toolCall of msg.tool_calls) {
          let args: Record<string, unknown> = {}
          try {
            args = JSON.parse(toolCall.function.arguments)
          } catch {
            args = {}
          }

          const part: GeminiPart = {
            functionCall: {
              name: toolCall.function.name,
              args,
            },
          }

          // Always inject signature: use provided or default to skip validator (CLIProxyAPI approach)
          part.thoughtSignature = thoughtSignature || SKIP_THOUGHT_SIGNATURE_VALIDATOR
          debugLog(`Injected signature into functionCall: ${toolCall.function.name} (${thoughtSignature ? "provided" : "default"})`)

          parts.push(part)
        }
      }

      if (parts.length > 0) {
        contents.push({ role: "model", parts })
      }
      continue
    }

    if (msg.role === "tool") {
      let response: Record<string, unknown> = {}
      try {
        response = typeof msg.content === "string"
          ? JSON.parse(msg.content)
          : { result: msg.content }
      } catch {
        response = { result: msg.content }
      }

      const toolName = msg.name || "unknown"

      contents.push({
        role: "user",
        parts: [{
          functionResponse: {
            name: toolName,
            response,
          },
        }],
      })
      continue
    }
  }

  debugLog(`Converted to ${contents.length} content blocks`)
  return contents
}

function convertContentToParts(content: string | OpenAIContentPart[] | undefined): GeminiPart[] {
  if (!content) {
    return [{ text: "" }]
  }

  if (typeof content === "string") {
    return [{ text: content }]
  }

  const parts: GeminiPart[] = []
  for (const part of content) {
    if (part.type === "text" && part.text) {
      parts.push({ text: part.text })
    } else if (part.type === "image_url" && part.image_url?.url) {
      const url = part.image_url.url
      if (url.startsWith("data:")) {
        const match = url.match(/^data:([^;]+);base64,(.+)$/)
        if (match) {
          parts.push({
            inlineData: {
              mimeType: match[1],
              data: match[2],
            },
          })
        }
      }
    }
  }

  return parts.length > 0 ? parts : [{ text: "" }]
}

export function hasOpenAIMessages(body: Record<string, unknown>): boolean {
  return Array.isArray(body.messages) && body.messages.length > 0
}

export function convertRequestBody(
  body: Record<string, unknown>,
  thoughtSignature?: string
): Record<string, unknown> {
  if (!hasOpenAIMessages(body)) {
    debugLog("No messages array found, returning body as-is")
    return body
  }

  const messages = body.messages as OpenAIMessage[]
  const contents = convertOpenAIToGemini(messages, thoughtSignature)

  const converted = { ...body }
  delete converted.messages
  converted.contents = contents

  debugLog(`Converted body: messages → contents (${contents.length} blocks)`)
  return converted
}
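The deleted converter's core role mapping (system/user → `user` contents, assistant text and `tool_calls` → `model` contents with `functionCall` parts) can be illustrated with a standalone sketch. The types and function name here are hypothetical simplifications, not the module's real exports; image parts and signature injection are omitted:

```typescript
// Standalone sketch of the OpenAI → Gemini role/part mapping, for illustration only.
type Part = { text?: string; functionCall?: { name: string; args: Record<string, unknown> } }
type Content = { role: "user" | "model"; parts: Part[] }
type Msg = {
  role: string
  content?: string
  tool_calls?: { function: { name: string; arguments: string } }[]
}

function toGemini(messages: Msg[]): Content[] {
  const out: Content[] = []
  for (const m of messages) {
    if (m.role === "user" || m.role === "system") {
      // Both map to a user-role content block in this simplified sketch.
      out.push({ role: "user", parts: [{ text: m.content ?? "" }] })
    } else if (m.role === "assistant") {
      const parts: Part[] = m.content ? [{ text: m.content }] : []
      for (const tc of m.tool_calls ?? []) {
        let args: Record<string, unknown> = {}
        try {
          args = JSON.parse(tc.function.arguments)
        } catch {
          // Malformed arguments fall back to an empty object, as above.
        }
        parts.push({ functionCall: { name: tc.function.name, args } })
      }
      if (parts.length > 0) out.push({ role: "model", parts })
    }
  }
  return out
}

const contents = toGemini([
  { role: "user", content: "hi" },
  { role: "assistant", tool_calls: [{ function: { name: "lookup", arguments: '{"q":1}' } }] },
])
console.log(JSON.stringify(contents))
```

The real module additionally injects a `thoughtSignature` into each `functionCall` part and maps `tool` messages to `functionResponse` parts.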
@@ -1,262 +0,0 @@
|
||||
import { describe, it, expect, beforeEach, afterEach, mock } from "bun:test"
|
||||
import { buildAuthURL, exchangeCode, startCallbackServer } from "./oauth"
|
||||
import { ANTIGRAVITY_CLIENT_ID, GOOGLE_TOKEN_URL, ANTIGRAVITY_CALLBACK_PORT } from "./constants"
|
||||
|
||||
describe("OAuth PKCE Removal", () => {
|
||||
describe("buildAuthURL", () => {
|
||||
it("should NOT include code_challenge parameter", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
|
||||
// #then
|
||||
expect(url.searchParams.has("code_challenge")).toBe(false)
|
||||
})
|
||||
|
||||
it("should NOT include code_challenge_method parameter", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
|
||||
// #then
|
||||
expect(url.searchParams.has("code_challenge_method")).toBe(false)
|
||||
})
|
||||
|
||||
it("should include state parameter for CSRF protection", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
const state = url.searchParams.get("state")
|
||||
|
||||
// #then
|
||||
expect(state).toBeTruthy()
|
||||
})
|
||||
|
||||
it("should have state as simple random string (not JSON/base64)", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
const state = url.searchParams.get("state")!
|
||||
|
||||
// #then - positive assertions for simple random string
|
||||
expect(state.length).toBeGreaterThanOrEqual(16)
|
||||
expect(state.length).toBeLessThanOrEqual(64)
|
||||
// Should be URL-safe (alphanumeric, no special chars like { } " :)
|
||||
expect(state).toMatch(/^[a-zA-Z0-9_-]+$/)
|
||||
// Should NOT contain JSON indicators
|
||||
expect(state).not.toContain("{")
|
||||
expect(state).not.toContain("}")
|
||||
expect(state).not.toContain('"')
|
||||
})
|
||||
|
||||
it("should include access_type=offline", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
|
||||
// #then
|
||||
expect(url.searchParams.get("access_type")).toBe("offline")
|
||||
})
|
||||
|
||||
it("should include prompt=consent", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
|
||||
// #then
|
||||
expect(url.searchParams.get("prompt")).toBe("consent")
|
||||
})
|
||||
|
||||
it("should NOT return verifier property (PKCE removed)", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
|
||||
// #then
|
||||
expect(result).not.toHaveProperty("verifier")
|
||||
expect(result).toHaveProperty("url")
|
||||
expect(result).toHaveProperty("state")
|
||||
})
|
||||
|
||||
it("should return state that matches URL state param", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result = await buildAuthURL(projectId)
|
||||
const url = new URL(result.url)
|
||||
|
||||
// #then
|
||||
expect(result.state).toBe(url.searchParams.get("state")!)
|
||||
})
|
||||
})
|
||||
|
||||
describe("exchangeCode", () => {
|
||||
let originalFetch: typeof fetch
|
||||
|
||||
beforeEach(() => {
|
||||
originalFetch = globalThis.fetch
|
||||
})
|
||||
|
||||
afterEach(() => {
|
||||
globalThis.fetch = originalFetch
|
||||
})
|
||||
|
||||
it("should NOT send code_verifier in token exchange", async () => {
|
||||
// #given
|
||||
let capturedBody: string | null = null
|
||||
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
|
||||
if (url === GOOGLE_TOKEN_URL) {
|
||||
capturedBody = init?.body as string
|
||||
return new Response(JSON.stringify({
|
||||
access_token: "test-access",
|
||||
refresh_token: "test-refresh",
|
||||
expires_in: 3600,
|
||||
token_type: "Bearer"
|
||||
}))
|
||||
}
|
||||
return new Response("", { status: 404 })
|
||||
}) as unknown as typeof fetch
|
||||
|
||||
// #when
|
||||
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
|
||||
|
||||
// #then
|
||||
expect(capturedBody).toBeTruthy()
|
||||
const params = new URLSearchParams(capturedBody!)
|
||||
expect(params.has("code_verifier")).toBe(false)
|
||||
})
|
||||
|
||||
it("should send required OAuth parameters", async () => {
|
||||
// #given
|
||||
let capturedBody: string | null = null
|
||||
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
|
||||
if (url === GOOGLE_TOKEN_URL) {
|
||||
capturedBody = init?.body as string
|
||||
return new Response(JSON.stringify({
|
||||
access_token: "test-access",
|
||||
refresh_token: "test-refresh",
|
||||
expires_in: 3600,
|
||||
token_type: "Bearer"
|
||||
}))
|
||||
}
|
||||
return new Response("", { status: 404 })
|
||||
}) as unknown as typeof fetch
|
||||
|
||||
// #when
|
||||
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
|
||||
|
||||
// #then
|
||||
const params = new URLSearchParams(capturedBody!)
|
||||
expect(params.get("grant_type")).toBe("authorization_code")
|
||||
expect(params.get("code")).toBe("test-code")
|
||||
expect(params.get("client_id")).toBe(ANTIGRAVITY_CLIENT_ID)
|
||||
expect(params.get("redirect_uri")).toBe("http://localhost:51121/oauth-callback")
|
||||
})
|
||||
})
|
||||
|
||||
describe("State/CSRF Validation", () => {
|
||||
it("should generate unique state for each call", async () => {
|
||||
// #given
|
||||
const projectId = "test-project"
|
||||
|
||||
// #when
|
||||
const result1 = await buildAuthURL(projectId)
|
||||
const result2 = await buildAuthURL(projectId)
|
||||
|
||||
// #then
|
||||
expect(result1.state).not.toBe(result2.state)
|
||||
})
|
||||
})
|
||||
|
||||
describe("startCallbackServer Port Handling", () => {
  it("should prefer port 51121", () => {
    // #given
    // Port 51121 should be free

    // #when
    const handle = startCallbackServer()

    // #then
    // If 51121 is available, should use it
    // If not available, should use valid fallback
    expect(handle.port).toBeGreaterThan(0)
    expect(handle.port).toBeLessThan(65536)
    handle.close()
  })

  it("should return actual bound port", () => {
    // #when
    const handle = startCallbackServer()

    // #then
    expect(typeof handle.port).toBe("number")
    expect(handle.port).toBeGreaterThan(0)
    handle.close()
  })

  it("should fallback to OS-assigned port if 51121 is occupied (EADDRINUSE)", async () => {
    // #given - Occupy port 51121 first
    const blocker = Bun.serve({
      port: ANTIGRAVITY_CALLBACK_PORT,
      fetch: () => new Response("blocked")
    })

    try {
      // #when
      const handle = startCallbackServer()

      // #then
      expect(handle.port).not.toBe(ANTIGRAVITY_CALLBACK_PORT)
      expect(handle.port).toBeGreaterThan(0)
      handle.close()
    } finally {
      // Cleanup blocker
      blocker.stop()
    }
  })

  it("should cleanup server on close", () => {
    // #given
    const handle = startCallbackServer()
    const port = handle.port

    // #when
    handle.close()

    // #then - port should be released (can bind again)
    const testServer = Bun.serve({ port, fetch: () => new Response("test") })
    expect(testServer.port).toBe(port)
    testServer.stop()
  })

  it("should provide redirect URI with actual port", () => {
    // #given
    const handle = startCallbackServer()

    // #then
    expect(handle.redirectUri).toBe(`http://localhost:${handle.port}/oauth-callback`)
    handle.close()
  })
})
})
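The fallback behavior these tests exercise (try the preferred port, then fall back to an OS-assigned one) can be sketched independently of Bun. The `choosePort` helper and the simulated binder below are illustrative stand-ins, not part of the plugin; the port values in the fake binder are made up.

```typescript
// Hypothetical helper sketching the try-preferred-then-ephemeral pattern.
// `bind` stands in for Bun.serve: it returns the bound port or throws.
function choosePort(bind: (port: number) => number, preferred: number): number {
  try {
    return bind(preferred)
  } catch {
    // EADDRINUSE (or any bind failure): port 0 lets the OS pick a free port.
    return bind(0)
  }
}

// Simulated binder: pretends 51121 is already taken and that the OS
// hands out 49500 when asked for "any port" (both values are invented).
const taken = new Set([51121])
function fakeBind(port: number): number {
  if (port !== 0 && taken.has(port)) throw new Error("EADDRINUSE")
  return port === 0 ? 49500 : port
}

console.log(choosePort(fakeBind, 51121)) // → 49500 (fell back to the simulated OS port)
console.log(choosePort(fakeBind, 8080))  // → 8080 (preferred port was free)
```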
@@ -1,285 +0,0 @@
/**
 * Antigravity OAuth 2.0 flow implementation.
 * Handles Google OAuth for Antigravity authentication.
 */
import {
  ANTIGRAVITY_CLIENT_ID,
  ANTIGRAVITY_CLIENT_SECRET,
  ANTIGRAVITY_REDIRECT_URI,
  ANTIGRAVITY_SCOPES,
  ANTIGRAVITY_CALLBACK_PORT,
  GOOGLE_AUTH_URL,
  GOOGLE_TOKEN_URL,
  GOOGLE_USERINFO_URL,
} from "./constants"
import type {
  AntigravityTokenExchangeResult,
  AntigravityUserInfo,
} from "./types"

/**
 * Result from building an OAuth authorization URL.
 */
export interface AuthorizationResult {
  /** Full OAuth URL to open in browser */
  url: string
  /** State for CSRF protection */
  state: string
}

/**
 * Result from the OAuth callback server.
 */
export interface CallbackResult {
  /** Authorization code from Google */
  code: string
  /** State parameter from callback */
  state: string
  /** Error message if any */
  error?: string
}

export async function buildAuthURL(
  projectId?: string,
  clientId: string = ANTIGRAVITY_CLIENT_ID,
  port: number = ANTIGRAVITY_CALLBACK_PORT
): Promise<AuthorizationResult> {
  const state = crypto.randomUUID().replace(/-/g, "")

  const redirectUri = `http://localhost:${port}/oauth-callback`

  const url = new URL(GOOGLE_AUTH_URL)
  url.searchParams.set("client_id", clientId)
  url.searchParams.set("redirect_uri", redirectUri)
  url.searchParams.set("response_type", "code")
  url.searchParams.set("scope", ANTIGRAVITY_SCOPES.join(" "))
  url.searchParams.set("state", state)
  url.searchParams.set("access_type", "offline")
  url.searchParams.set("prompt", "consent")

  return {
    url: url.toString(),
    state,
  }
}

/**
 * Exchange authorization code for tokens.
 *
 * @param code - Authorization code from OAuth callback
 * @param redirectUri - OAuth redirect URI
 * @param clientId - Optional custom client ID (defaults to ANTIGRAVITY_CLIENT_ID)
 * @param clientSecret - Optional custom client secret (defaults to ANTIGRAVITY_CLIENT_SECRET)
 * @returns Token exchange result with access and refresh tokens
 */
export async function exchangeCode(
  code: string,
  redirectUri: string,
  clientId: string = ANTIGRAVITY_CLIENT_ID,
  clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<AntigravityTokenExchangeResult> {
  const params = new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    code,
    grant_type: "authorization_code",
    redirect_uri: redirectUri,
  })

  const response = await fetch(GOOGLE_TOKEN_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: params,
  })

  if (!response.ok) {
    const errorText = await response.text()
    throw new Error(`Token exchange failed: ${response.status} - ${errorText}`)
  }

  const data = (await response.json()) as {
    access_token: string
    refresh_token: string
    expires_in: number
    token_type: string
  }

  return {
    access_token: data.access_token,
    refresh_token: data.refresh_token,
    expires_in: data.expires_in,
    token_type: data.token_type,
  }
}

/**
 * Fetch user info from Google's userinfo API.
 *
 * @param accessToken - Valid access token
 * @returns User info containing email
 */
export async function fetchUserInfo(
  accessToken: string
): Promise<AntigravityUserInfo> {
  const response = await fetch(`${GOOGLE_USERINFO_URL}?alt=json`, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
    },
  })

  if (!response.ok) {
    throw new Error(`Failed to fetch user info: ${response.status}`)
  }

  const data = (await response.json()) as {
    email?: string
    name?: string
    picture?: string
  }

  return {
    email: data.email || "",
    name: data.name,
    picture: data.picture,
  }
}

export interface CallbackServerHandle {
  port: number
  redirectUri: string
  waitForCallback: () => Promise<CallbackResult>
  close: () => void
}

export function startCallbackServer(
  timeoutMs: number = 5 * 60 * 1000
): CallbackServerHandle {
  let server: ReturnType<typeof Bun.serve> | null = null
  let timeoutId: ReturnType<typeof setTimeout> | null = null
  let resolveCallback: ((result: CallbackResult) => void) | null = null
  let rejectCallback: ((error: Error) => void) | null = null

  const cleanup = () => {
    if (timeoutId) {
      clearTimeout(timeoutId)
      timeoutId = null
    }
    if (server) {
      server.stop()
      server = null
    }
  }

  const fetchHandler = (request: Request): Response => {
    const url = new URL(request.url)

    if (url.pathname === "/oauth-callback") {
      const code = url.searchParams.get("code") || ""
      const state = url.searchParams.get("state") || ""
      const error = url.searchParams.get("error") || undefined

      let responseBody: string
      if (code && !error) {
        responseBody =
          "<html><body><h1>Login successful</h1><p>You can close this window.</p></body></html>"
      } else {
        responseBody =
          "<html><body><h1>Login failed</h1><p>Please check the CLI output.</p></body></html>"
      }

      setTimeout(() => {
        cleanup()
        if (resolveCallback) {
          resolveCallback({ code, state, error })
        }
      }, 100)

      return new Response(responseBody, {
        status: 200,
        headers: { "Content-Type": "text/html" },
      })
    }

    return new Response("Not Found", { status: 404 })
  }

  try {
    server = Bun.serve({
      port: ANTIGRAVITY_CALLBACK_PORT,
      fetch: fetchHandler,
    })
  } catch (error) {
    server = Bun.serve({
      port: 0,
      fetch: fetchHandler,
    })
  }

  const actualPort = server.port as number
  const redirectUri = `http://localhost:${actualPort}/oauth-callback`

  const waitForCallback = (): Promise<CallbackResult> => {
    return new Promise((resolve, reject) => {
      resolveCallback = resolve
      rejectCallback = reject

      timeoutId = setTimeout(() => {
        cleanup()
        reject(new Error("OAuth callback timeout"))
      }, timeoutMs)
    })
  }

  return {
    port: actualPort,
    redirectUri,
    waitForCallback,
    close: cleanup,
  }
}

export async function performOAuthFlow(
  projectId?: string,
  openBrowser?: (url: string) => Promise<void>,
  clientId: string = ANTIGRAVITY_CLIENT_ID,
  clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<{
  tokens: AntigravityTokenExchangeResult
  userInfo: AntigravityUserInfo
  state: string
}> {
  const serverHandle = startCallbackServer()

  try {
    const auth = await buildAuthURL(projectId, clientId, serverHandle.port)

    if (openBrowser) {
      await openBrowser(auth.url)
    }

    const callback = await serverHandle.waitForCallback()

    if (callback.error) {
      throw new Error(`OAuth error: ${callback.error}`)
    }

    if (!callback.code) {
      throw new Error("No authorization code received")
    }

    if (callback.state !== auth.state) {
      throw new Error("State mismatch - possible CSRF attack")
    }

    const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
    const tokens = await exchangeCode(callback.code, redirectUri, clientId, clientSecret)
    const userInfo = await fetchUserInfo(tokens.access_token)

    return { tokens, userInfo, state: auth.state }
  } catch (err) {
    serverHandle.close()
    throw err
  }
}
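For reference, the shape of the authorization URL that `buildAuthURL` assembles can be reproduced standalone. The endpoint, scope list, client ID, and port below are illustrative placeholders, not the plugin's actual constants (those come from `./constants`).

```typescript
import { randomUUID } from "node:crypto"

// Placeholder values; the real plugin reads these from ./constants.
const AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
const SCOPES = ["openid", "email", "profile"]

const state = randomUUID().replace(/-/g, "") // CSRF token, hyphens stripped
const url = new URL(AUTH_URL)
url.searchParams.set("client_id", "example-client-id")
url.searchParams.set("redirect_uri", "http://localhost:51121/oauth-callback")
url.searchParams.set("response_type", "code")
url.searchParams.set("scope", SCOPES.join(" "))
url.searchParams.set("state", state)
// access_type=offline plus prompt=consent is what makes Google return a
// refresh_token, which the plugin requires for long-lived sessions.
url.searchParams.set("access_type", "offline")
url.searchParams.set("prompt", "consent")

console.log(url.searchParams.get("access_type")) // → offline
console.log(state.length) // → 32 (a 36-char UUID with its 4 hyphens removed)
```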
@@ -1,554 +0,0 @@
/**
 * Google Antigravity Auth Plugin for OpenCode
 *
 * Provides OAuth authentication for Google models via Antigravity API.
 * This plugin integrates with OpenCode's auth system to enable:
 * - OAuth 2.0 with PKCE flow for Google authentication
 * - Automatic token refresh
 * - Request/response transformation for Antigravity API
 *
 * @example
 * ```json
 * // opencode.json
 * {
 *   "plugin": ["oh-my-opencode"],
 *   "provider": {
 *     "google": {
 *       "options": {
 *         "clientId": "custom-client-id",
 *         "clientSecret": "custom-client-secret"
 *       }
 *     }
 *   }
 * }
 * ```
 */

import type { Auth, Provider } from "@opencode-ai/sdk"
import type { AuthHook, AuthOuathResult, PluginInput } from "@opencode-ai/plugin"

import { ANTIGRAVITY_CLIENT_ID, ANTIGRAVITY_CLIENT_SECRET } from "./constants"
import {
  buildAuthURL,
  exchangeCode,
  startCallbackServer,
  fetchUserInfo,
} from "./oauth"
import { createAntigravityFetch } from "./fetch"
import { fetchProjectContext } from "./project"
import { formatTokenForStorage, parseStoredToken } from "./token"
import { AccountManager } from "./accounts"
import { loadAccounts } from "./storage"
import { promptAddAnotherAccount, promptAccountTier } from "./cli"
import { openBrowserURL } from "./browser"
import type { AccountTier, AntigravityRefreshParts } from "./types"

/**
 * Provider ID for Google models
 * Antigravity is an auth method for Google, not a separate provider
 */
const GOOGLE_PROVIDER_ID = "google"

/**
 * Maximum number of Google accounts that can be added
 */
const MAX_ACCOUNTS = 10

/**
 * Type guard to check if auth is OAuth type
 */
function isOAuthAuth(
  auth: Auth
): auth is { type: "oauth"; access: string; refresh: string; expires: number } {
  return auth.type === "oauth"
}

/**
 * Creates the Google Antigravity OAuth plugin for OpenCode.
 *
 * This factory function creates an auth plugin that:
 * 1. Provides OAuth flow for Google authentication
 * 2. Creates a custom fetch interceptor for Antigravity API
 * 3. Handles token management and refresh
 *
 * @param input - Plugin input containing the OpenCode client
 * @returns Hooks object with auth configuration
 *
 * @example
 * ```typescript
 * // Used by OpenCode automatically when plugin is loaded
 * const hooks = await createGoogleAntigravityAuthPlugin({ client, ... })
 * ```
 */
export async function createGoogleAntigravityAuthPlugin({
  client,
}: PluginInput): Promise<{ auth: AuthHook }> {
  // Cache for custom credentials from provider.options
  // These are populated by loader() and used by authorize()
  // Falls back to defaults if loader hasn't been called yet
  let cachedClientId: string = ANTIGRAVITY_CLIENT_ID
  let cachedClientSecret: string = ANTIGRAVITY_CLIENT_SECRET

  const authHook: AuthHook = {
    /**
     * Provider identifier - must be "google" as Antigravity is
     * an auth method for Google models, not a separate provider
     */
    provider: GOOGLE_PROVIDER_ID,

    /**
     * Loader function called when auth is needed.
     * Reads credentials from provider.options and creates custom fetch.
     *
     * @param auth - Function to retrieve current auth state
     * @param provider - Provider configuration including options
     * @returns Object with custom fetch function
     */
    loader: async (
      auth: () => Promise<Auth>,
      provider: Provider
    ): Promise<Record<string, unknown>> => {
      const currentAuth = await auth()

      if (process.env.ANTIGRAVITY_DEBUG === "1") {
        console.log("[antigravity-plugin] loader called")
        console.log("[antigravity-plugin] auth type:", currentAuth?.type)
        console.log("[antigravity-plugin] auth keys:", Object.keys(currentAuth || {}))
      }

      if (!isOAuthAuth(currentAuth)) {
        if (process.env.ANTIGRAVITY_DEBUG === "1") {
          console.log("[antigravity-plugin] NOT OAuth auth, returning empty")
        }
        return {}
      }

      if (process.env.ANTIGRAVITY_DEBUG === "1") {
        console.log("[antigravity-plugin] OAuth auth detected, creating custom fetch")
      }

      let accountManager: AccountManager | null = null
      try {
        const storedAccounts = await loadAccounts()
        if (storedAccounts) {
          accountManager = new AccountManager(currentAuth, storedAccounts)
          if (process.env.ANTIGRAVITY_DEBUG === "1") {
            console.log(`[antigravity-plugin] Loaded ${accountManager.getAccountCount()} accounts from storage`)
          }
        } else if (currentAuth.refresh.includes("|||")) {
          const tokens = currentAuth.refresh.split("|||")
          const firstToken = tokens[0]!
          accountManager = new AccountManager(
            { refresh: firstToken, access: currentAuth.access || "", expires: currentAuth.expires || 0 },
            null
          )
          for (let i = 1; i < tokens.length; i++) {
            const parts = parseStoredToken(tokens[i]!)
            accountManager.addAccount(parts)
          }
          await accountManager.save()
          if (process.env.ANTIGRAVITY_DEBUG === "1") {
            console.log("[antigravity-plugin] Migrated multi-account auth to storage")
          }
        }
      } catch (error) {
        if (process.env.ANTIGRAVITY_DEBUG === "1") {
          console.error(
            `[antigravity-plugin] Failed to load accounts: ${
              error instanceof Error ? error.message : "Unknown error"
            }`
          )
        }
      }

      cachedClientId =
        (provider.options?.clientId as string) || ANTIGRAVITY_CLIENT_ID
      cachedClientSecret =
        (provider.options?.clientSecret as string) || ANTIGRAVITY_CLIENT_SECRET

      // Log if using custom credentials (for debugging)
      if (
        process.env.ANTIGRAVITY_DEBUG === "1" &&
        (cachedClientId !== ANTIGRAVITY_CLIENT_ID ||
          cachedClientSecret !== ANTIGRAVITY_CLIENT_SECRET)
      ) {
        console.log(
          "[antigravity-plugin] Using custom credentials from provider.options"
        )
      }

      // Create adapter for client.auth.set that matches fetch.ts AuthClient interface
      const authClient = {
        set: async (
          providerId: string,
          authData: { access?: string; refresh?: string; expires?: number }
        ) => {
          await client.auth.set({
            body: {
              type: "oauth",
              access: authData.access || "",
              refresh: authData.refresh || "",
              expires: authData.expires || 0,
            },
            path: { id: providerId },
          })
        },
      }

      // Create auth getter that returns compatible format for fetch.ts
      const getAuth = async (): Promise<{
        access?: string
        refresh?: string
        expires?: number
      }> => {
        const authState = await auth()
        if (isOAuthAuth(authState)) {
          return {
            access: authState.access,
            refresh: authState.refresh,
            expires: authState.expires,
          }
        }
        return {}
      }

      const antigravityFetch = createAntigravityFetch(
        getAuth,
        authClient,
        GOOGLE_PROVIDER_ID,
        cachedClientId,
        cachedClientSecret
      )

      return {
        fetch: antigravityFetch,
        apiKey: "antigravity-oauth",
        accountManager,
      }
    },

    /**
     * Authentication methods available for this provider.
     * Only OAuth is supported - no prompts for credentials.
     */
    methods: [
      {
        type: "oauth",
        label: "OAuth with Google (Antigravity)",
        // NO prompts - credentials come from provider.options or defaults
        // OAuth flow starts immediately when user selects this method

        /**
         * Starts the OAuth authorization flow.
         * Opens browser for Google OAuth and waits for callback.
         * Supports multi-account flow with prompts for additional accounts.
         *
         * @returns Authorization result with URL and callback
         */
        authorize: async (): Promise<AuthOuathResult> => {
          const serverHandle = startCallbackServer()
          const { url, state: expectedState } = await buildAuthURL(undefined, cachedClientId, serverHandle.port)

          const browserOpened = await openBrowserURL(url)

          return {
            url,
            instructions: browserOpened
              ? "Opening browser for sign-in. We'll automatically detect when you're done."
              : "Please open the URL above in your browser to sign in.",
            method: "auto",

            callback: async () => {
              try {
                const result = await serverHandle.waitForCallback()

                if (result.error) {
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.error(`[antigravity-plugin] OAuth error: ${result.error}`)
                  }
                  return { type: "failed" as const }
                }

                if (!result.code) {
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.error("[antigravity-plugin] No authorization code received")
                  }
                  return { type: "failed" as const }
                }

                if (result.state !== expectedState) {
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.error("[antigravity-plugin] State mismatch - possible CSRF attack")
                  }
                  return { type: "failed" as const }
                }

                const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
                const tokens = await exchangeCode(result.code, redirectUri, cachedClientId, cachedClientSecret)

                if (!tokens.refresh_token) {
                  serverHandle.close()
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.error("[antigravity-plugin] OAuth response missing refresh_token")
                  }
                  return { type: "failed" as const }
                }

                let email: string | undefined
                try {
                  const userInfo = await fetchUserInfo(tokens.access_token)
                  email = userInfo.email
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.log(`[antigravity-plugin] Authenticated as: ${email}`)
                  }
                } catch {
                  // User info is optional
                }

                const projectContext = await fetchProjectContext(tokens.access_token)
                const projectId = projectContext.cloudaicompanionProject || ""
                const tier = await promptAccountTier()

                const expires = Date.now() + tokens.expires_in * 1000
                const accounts: Array<{
                  parts: AntigravityRefreshParts
                  access: string
                  expires: number
                  email?: string
                  tier: AccountTier
                  projectId: string
                }> = [{
                  parts: {
                    refreshToken: tokens.refresh_token,
                    projectId,
                    managedProjectId: projectContext.managedProjectId,
                  },
                  access: tokens.access_token,
                  expires,
                  email,
                  tier,
                  projectId,
                }]

                await client.tui.showToast({
                  body: {
                    message: `Account 1 authenticated${email ? ` (${email})` : ""}`,
                    variant: "success",
                  },
                })

                while (accounts.length < MAX_ACCOUNTS) {
                  const addAnother = await promptAddAnotherAccount(accounts.length)
                  if (!addAnother) break

                  const additionalServerHandle = startCallbackServer()
                  const { url: additionalUrl, state: expectedAdditionalState } = await buildAuthURL(
                    undefined,
                    cachedClientId,
                    additionalServerHandle.port
                  )

                  const additionalBrowserOpened = await openBrowserURL(additionalUrl)
                  if (!additionalBrowserOpened) {
                    await client.tui.showToast({
                      body: {
                        message: `Please open in browser: ${additionalUrl}`,
                        variant: "warning",
                      },
                    })
                  }

                  try {
                    const additionalResult = await additionalServerHandle.waitForCallback()

                    if (additionalResult.error || !additionalResult.code) {
                      additionalServerHandle.close()
                      await client.tui.showToast({
                        body: {
                          message: "Skipping this account...",
                          variant: "warning",
                        },
                      })
                      continue
                    }

                    if (additionalResult.state !== expectedAdditionalState) {
                      additionalServerHandle.close()
                      await client.tui.showToast({
                        body: {
                          message: "State mismatch, skipping...",
                          variant: "warning",
                        },
                      })
                      continue
                    }

                    const additionalRedirectUri = `http://localhost:${additionalServerHandle.port}/oauth-callback`
                    const additionalTokens = await exchangeCode(
                      additionalResult.code,
                      additionalRedirectUri,
                      cachedClientId,
                      cachedClientSecret
                    )

                    if (!additionalTokens.refresh_token) {
                      additionalServerHandle.close()
                      if (process.env.ANTIGRAVITY_DEBUG === "1") {
                        console.error("[antigravity-plugin] Additional account OAuth response missing refresh_token")
                      }
                      await client.tui.showToast({
                        body: {
                          message: "Account missing refresh token, skipping...",
                          variant: "warning",
                        },
                      })
                      continue
                    }

                    let additionalEmail: string | undefined
                    try {
                      const additionalUserInfo = await fetchUserInfo(additionalTokens.access_token)
                      additionalEmail = additionalUserInfo.email
                    } catch {
                      // User info is optional
                    }

                    const additionalProjectContext = await fetchProjectContext(additionalTokens.access_token)
                    const additionalProjectId = additionalProjectContext.cloudaicompanionProject || ""
                    const additionalTier = await promptAccountTier()

                    const additionalExpires = Date.now() + additionalTokens.expires_in * 1000

                    accounts.push({
                      parts: {
                        refreshToken: additionalTokens.refresh_token,
                        projectId: additionalProjectId,
                        managedProjectId: additionalProjectContext.managedProjectId,
                      },
                      access: additionalTokens.access_token,
                      expires: additionalExpires,
                      email: additionalEmail,
                      tier: additionalTier,
                      projectId: additionalProjectId,
                    })

                    additionalServerHandle.close()

                    await client.tui.showToast({
                      body: {
                        message: `Account ${accounts.length} authenticated${additionalEmail ? ` (${additionalEmail})` : ""}`,
                        variant: "success",
                      },
                    })
                  } catch (error) {
                    additionalServerHandle.close()
                    if (process.env.ANTIGRAVITY_DEBUG === "1") {
                      console.error(
                        `[antigravity-plugin] Additional account OAuth failed: ${
                          error instanceof Error ? error.message : "Unknown error"
                        }`
                      )
                    }
                    await client.tui.showToast({
                      body: {
                        message: "Failed to authenticate additional account, skipping...",
                        variant: "warning",
                      },
                    })
                    continue
                  }
                }

                const firstAccount = accounts[0]!
                try {
                  const accountManager = new AccountManager(
                    {
                      refresh: formatTokenForStorage(
                        firstAccount.parts.refreshToken,
                        firstAccount.projectId,
                        firstAccount.parts.managedProjectId
                      ),
                      access: firstAccount.access,
                      expires: firstAccount.expires,
                    },
                    null
                  )

                  for (let i = 1; i < accounts.length; i++) {
                    const acc = accounts[i]!
                    accountManager.addAccount(
                      acc.parts,
                      acc.access,
                      acc.expires,
                      acc.email,
                      acc.tier
                    )
                  }

                  const currentAccount = accountManager.getCurrentAccount()
                  if (currentAccount) {
                    currentAccount.email = firstAccount.email
                    currentAccount.tier = firstAccount.tier
                  }

                  await accountManager.save()

                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.log(`[antigravity-plugin] Saved ${accounts.length} accounts to storage`)
                  }
                } catch (error) {
                  if (process.env.ANTIGRAVITY_DEBUG === "1") {
                    console.error(
                      `[antigravity-plugin] Failed to save accounts: ${
                        error instanceof Error ? error.message : "Unknown error"
                      }`
                    )
                  }
                }

                const allRefreshTokens = accounts
                  .map((acc) => formatTokenForStorage(
                    acc.parts.refreshToken,
                    acc.projectId,
                    acc.parts.managedProjectId
                  ))
                  .join("|||")

                return {
                  type: "success" as const,
                  access: firstAccount.access,
                  refresh: allRefreshTokens,
                  expires: firstAccount.expires,
                }
              } catch (error) {
                serverHandle.close()
                if (process.env.ANTIGRAVITY_DEBUG === "1") {
                  console.error(
                    `[antigravity-plugin] OAuth flow failed: ${
                      error instanceof Error ? error.message : "Unknown error"
                    }`
                  )
                }
                return { type: "failed" as const }
              }
            },
          }
        },
      },
    ],
  }

  return {
    auth: authHook,
  }
}

/**
 * Default export for OpenCode plugin system
 */
export default createGoogleAntigravityAuthPlugin

/**
 * Named export for explicit imports
 */
export const GoogleAntigravityAuthPlugin = createGoogleAntigravityAuthPlugin
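The plugin above persists multiple accounts by joining one refresh string per account with `"|||"` (see `allRefreshTokens`), and splits on the same delimiter when migrating. A minimal sketch of that round trip, assuming a hypothetical `"::"` field separator inside each entry (the real `formatTokenForStorage`/`parseStoredToken` live in `./token` and may encode fields differently):

```typescript
interface RefreshParts {
  refreshToken: string
  projectId: string
  managedProjectId?: string
}

const FIELD_SEP = "::"    // assumed field separator (illustrative only)
const ACCOUNT_SEP = "|||" // account separator used by the plugin above

// Hypothetical stand-ins for formatTokenForStorage / parseStoredToken.
function format(parts: RefreshParts): string {
  return [parts.refreshToken, parts.projectId, parts.managedProjectId ?? ""].join(FIELD_SEP)
}

function parse(stored: string): RefreshParts {
  const [refreshToken = "", projectId = "", managed = ""] = stored.split(FIELD_SEP)
  return { refreshToken, projectId, managedProjectId: managed || undefined }
}

const accounts: RefreshParts[] = [
  { refreshToken: "rt-1", projectId: "proj-a" },
  { refreshToken: "rt-2", projectId: "proj-b", managedProjectId: "managed-b" },
]

// Serialize all accounts into one refresh field, then restore them.
const joined = accounts.map(format).join(ACCOUNT_SEP)
const restored = joined.split(ACCOUNT_SEP).map(parse)

console.log(restored.length) // → 2
console.log(restored[1]?.managedProjectId) // → managed-b
```

This is why the loader checks `currentAuth.refresh.includes("|||")`: a multi-account session stores every account's refresh data inside the single `refresh` field.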
@@ -1,274 +0,0 @@
/**
 * Antigravity project context management.
 * Handles fetching GCP project ID via Google's loadCodeAssist API.
 * For FREE tier users, onboards via onboardUser API to get server-assigned managed project ID.
 * Reference: https://github.com/shekohex/opencode-google-antigravity-auth
 */

import {
  ANTIGRAVITY_ENDPOINT_FALLBACKS,
  ANTIGRAVITY_API_VERSION,
  ANTIGRAVITY_HEADERS,
  ANTIGRAVITY_DEFAULT_PROJECT_ID,
} from "./constants"
import type {
  AntigravityProjectContext,
  AntigravityLoadCodeAssistResponse,
  AntigravityOnboardUserPayload,
  AntigravityUserTier,
} from "./types"

const projectContextCache = new Map<string, AntigravityProjectContext>()

function debugLog(message: string): void {
  if (process.env.ANTIGRAVITY_DEBUG === "1") {
    console.log(`[antigravity-project] ${message}`)
  }
}

const CODE_ASSIST_METADATA = {
  ideType: "IDE_UNSPECIFIED",
  platform: "PLATFORM_UNSPECIFIED",
  pluginType: "GEMINI",
} as const

function extractProjectId(
  project: string | { id: string } | undefined
): string | undefined {
  if (!project) return undefined
  if (typeof project === "string") {
    const trimmed = project.trim()
    return trimmed || undefined
  }
  if (typeof project === "object" && "id" in project) {
    const id = project.id
    if (typeof id === "string") {
      const trimmed = id.trim()
      return trimmed || undefined
    }
  }
  return undefined
}

function getDefaultTierId(allowedTiers?: AntigravityUserTier[]): string | undefined {
  if (!allowedTiers || allowedTiers.length === 0) return undefined
  for (const tier of allowedTiers) {
    if (tier?.isDefault) return tier.id
  }
  return allowedTiers[0]?.id
}

function isFreeTier(tierId: string | undefined): boolean {
  if (!tierId) return true // No tier = assume free tier (default behavior)
  const lower = tierId.toLowerCase()
  return lower === "free" || lower === "free-tier" || lower.startsWith("free")
}

function wait(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

async function callLoadCodeAssistAPI(
  accessToken: string,
  projectId?: string
): Promise<AntigravityLoadCodeAssistResponse | null> {
  const metadata: Record<string, string> = { ...CODE_ASSIST_METADATA }
  if (projectId) metadata.duetProject = projectId

  const requestBody: Record<string, unknown> = { metadata }
  if (projectId) requestBody.cloudaicompanionProject = projectId

  const headers: Record<string, string> = {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
    "User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
    "X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
    "Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
  }

  for (const baseEndpoint of ANTIGRAVITY_ENDPOINT_FALLBACKS) {
    const url = `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:loadCodeAssist`
    debugLog(`[loadCodeAssist] Trying: ${url}`)
    try {
      const response = await fetch(url, {
        method: "POST",
        headers,
        body: JSON.stringify(requestBody),
      })
      if (!response.ok) {
        debugLog(`[loadCodeAssist] Failed: ${response.status} ${response.statusText}`)
        continue
      }
      const data = (await response.json()) as AntigravityLoadCodeAssistResponse
      debugLog(`[loadCodeAssist] Success: ${JSON.stringify(data)}`)
      return data
    } catch (err) {
      debugLog(`[loadCodeAssist] Error: ${err}`)
      continue
    }
  }
  debugLog(`[loadCodeAssist] All endpoints failed`)
  return null
}

async function onboardManagedProject(
  accessToken: string,
  tierId: string,
  projectId?: string,
  attempts = 10,
  delayMs = 5000
): Promise<string | undefined> {
  debugLog(`[onboardUser] Starting with tierId=${tierId}, projectId=${projectId || "none"}`)

  const metadata: Record<string, string> = { ...CODE_ASSIST_METADATA }
  if (projectId) metadata.duetProject = projectId

  const requestBody: Record<string, unknown> = { tierId, metadata }
  if (!isFreeTier(tierId)) {
    if (!projectId) {
      debugLog(`[onboardUser] Non-FREE tier requires projectId, returning undefined`)
      return undefined
    }
    requestBody.cloudaicompanionProject = projectId
  }

  const headers: Record<string, string> = {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
    "User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
    "X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
    "Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
  }

  debugLog(`[onboardUser] Request body: ${JSON.stringify(requestBody)}`)

  for (let attempt = 0; attempt < attempts; attempt++) {
    debugLog(`[onboardUser] Attempt ${attempt + 1}/${attempts}`)
    for (const baseEndpoint of ANTIGRAVITY_ENDPOINT_FALLBACKS) {
      const url = `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:onboardUser`
      debugLog(`[onboardUser] Trying: ${url}`)
      try {
        const response = await fetch(url, {
          method: "POST",
          headers,
          body: JSON.stringify(requestBody),
        })
        if (!response.ok) {
          const errorText = await response.text().catch(() => "")
          debugLog(`[onboardUser] Failed: ${response.status} ${response.statusText} - ${errorText}`)
          continue
        }

        const payload = (await response.json()) as AntigravityOnboardUserPayload
        debugLog(`[onboardUser] Response: ${JSON.stringify(payload)}`)
        const managedProjectId = payload.response?.cloudaicompanionProject?.id
        if (payload.done && managedProjectId) {
          debugLog(`[onboardUser] Success! Got managed project ID: ${managedProjectId}`)
          return managedProjectId
        }
        if (payload.done && projectId) {
          debugLog(`[onboardUser] Done but no managed ID, using original: ${projectId}`)
          return projectId
        }
        debugLog(`[onboardUser] Not done yet, payload.done=${payload.done}`)
      } catch (err) {
        debugLog(`[onboardUser] Error: ${err}`)
        continue
      }
    }
    if (attempt < attempts - 1) {
      debugLog(`[onboardUser] Waiting ${delayMs}ms before next attempt...`)
      await wait(delayMs)
    }
  }
  debugLog(`[onboardUser] All attempts exhausted, returning undefined`)
  return undefined
}

export async function fetchProjectContext(
  accessToken: string
): Promise<AntigravityProjectContext> {
  debugLog(`[fetchProjectContext] Starting...`)

  const cached = projectContextCache.get(accessToken)
  if (cached) {
    debugLog(`[fetchProjectContext] Returning cached result: ${JSON.stringify(cached)}`)
    return cached
  }

  const loadPayload = await callLoadCodeAssistAPI(accessToken)

  // If loadCodeAssist returns a project ID, use it directly
  if (loadPayload?.cloudaicompanionProject) {
    const projectId = extractProjectId(loadPayload.cloudaicompanionProject)
    debugLog(`[fetchProjectContext] loadCodeAssist returned project: ${projectId}`)
|
||||
if (projectId) {
|
||||
const result: AntigravityProjectContext = { cloudaicompanionProject: projectId }
|
||||
projectContextCache.set(accessToken, result)
|
||||
debugLog(`[fetchProjectContext] Using loadCodeAssist project ID: ${projectId}`)
|
||||
return result
|
||||
}
|
||||
}
|
||||
|
||||
// No project ID from loadCodeAssist - try with fallback project ID
|
||||
if (!loadPayload) {
|
||||
debugLog(`[fetchProjectContext] loadCodeAssist returned null, trying with fallback project ID`)
|
||||
const fallbackPayload = await callLoadCodeAssistAPI(accessToken, ANTIGRAVITY_DEFAULT_PROJECT_ID)
|
||||
const fallbackProjectId = extractProjectId(fallbackPayload?.cloudaicompanionProject)
|
||||
if (fallbackProjectId) {
|
||||
const result: AntigravityProjectContext = { cloudaicompanionProject: fallbackProjectId }
|
||||
projectContextCache.set(accessToken, result)
|
||||
debugLog(`[fetchProjectContext] Using fallback project ID: ${fallbackProjectId}`)
|
||||
return result
|
||||
}
|
||||
debugLog(`[fetchProjectContext] Fallback also failed, using default: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
|
||||
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
|
||||
}
|
||||
|
||||
const currentTierId = loadPayload.currentTier?.id
|
||||
debugLog(`[fetchProjectContext] currentTier: ${currentTierId}, allowedTiers: ${JSON.stringify(loadPayload.allowedTiers)}`)
|
||||
|
||||
if (currentTierId && !isFreeTier(currentTierId)) {
|
||||
// PAID tier - still use fallback if no project provided
|
||||
debugLog(`[fetchProjectContext] PAID tier detected (${currentTierId}), using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
|
||||
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
|
||||
}
|
||||
|
||||
const defaultTierId = getDefaultTierId(loadPayload.allowedTiers)
|
||||
const tierId = defaultTierId ?? "free-tier"
|
||||
debugLog(`[fetchProjectContext] Resolved tierId: ${tierId}`)
|
||||
|
||||
if (!isFreeTier(tierId)) {
|
||||
debugLog(`[fetchProjectContext] Non-FREE tier (${tierId}) without project, using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
|
||||
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
|
||||
}
|
||||
|
||||
// FREE tier - onboard to get server-assigned managed project ID
|
||||
debugLog(`[fetchProjectContext] FREE tier detected (${tierId}), calling onboardUser...`)
|
||||
const managedProjectId = await onboardManagedProject(accessToken, tierId)
|
||||
if (managedProjectId) {
|
||||
const result: AntigravityProjectContext = {
|
||||
cloudaicompanionProject: managedProjectId,
|
||||
managedProjectId,
|
||||
}
|
||||
projectContextCache.set(accessToken, result)
|
||||
debugLog(`[fetchProjectContext] Got managed project ID: ${managedProjectId}`)
|
||||
return result
|
||||
}
|
||||
|
||||
debugLog(`[fetchProjectContext] Failed to get managed project ID, using fallback: ${ANTIGRAVITY_DEFAULT_PROJECT_ID}`)
|
||||
return { cloudaicompanionProject: ANTIGRAVITY_DEFAULT_PROJECT_ID }
|
||||
}
|
||||
|
||||
export function clearProjectContextCache(accessToken?: string): void {
|
||||
if (accessToken) {
|
||||
projectContextCache.delete(accessToken)
|
||||
} else {
|
||||
projectContextCache.clear()
|
||||
}
|
||||
}
|
||||
|
||||
export function invalidateProjectContextByRefreshToken(_refreshToken: string): void {
|
||||
projectContextCache.clear()
|
||||
debugLog(`[invalidateProjectContextByRefreshToken] Cleared all project context cache due to refresh token invalidation`)
|
||||
}
|
||||
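The tier-resolution step above (`getDefaultTierId(...) ?? "free-tier"`, then `isFreeTier`) relies on helpers defined elsewhere in this package. A minimal standalone sketch of plausible implementations — the shapes and logic here are assumptions for illustration, not the actual code:

```typescript
interface Tier {
  id: string
  isDefault?: boolean
}

// Hypothetical sketch: treat any tier id containing "free" as the free tier.
function isFreeTier(tierId: string): boolean {
  return tierId.includes("free")
}

// Hypothetical sketch: pick the tier flagged as default, if any.
function getDefaultTierId(allowedTiers?: Tier[]): string | undefined {
  return allowedTiers?.find((t) => t.isDefault)?.id
}

// Mirrors the resolution in fetchProjectContext: default tier, else "free-tier".
const tiers: Tier[] = [{ id: "standard-tier" }, { id: "free-tier", isDefault: true }]
const resolved = getDefaultTierId(tiers) ?? "free-tier"
```

Under these assumptions, an empty or default-less `allowedTiers` falls through to `"free-tier"`, which then routes into the `onboardManagedProject` path.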
@@ -1,224 +0,0 @@
import { describe, it, expect } from "bun:test"
import { ANTIGRAVITY_SYSTEM_PROMPT } from "./constants"
import { injectSystemPrompt, wrapRequestBody } from "./request"

describe("injectSystemPrompt", () => {
  describe("basic injection", () => {
    it("should inject system prompt into empty request", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {} as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { role: string; parts: Array<{ text: string }> } }
      expect(req).toHaveProperty("systemInstruction")
      expect(req.systemInstruction?.role).toBe("user")
      expect(req.systemInstruction?.parts).toBeDefined()
      expect(Array.isArray(req.systemInstruction?.parts)).toBe(true)
      expect(req.systemInstruction?.parts?.length).toBe(1)
      expect(req.systemInstruction?.parts?.[0]?.text).toContain("<identity>")
    })

    it("should inject system prompt with correct structure", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {
          contents: [{ role: "user", parts: [{ text: "Hello" }] }],
        } as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { role: string; parts: Array<{ text: string }> } }
      expect(req.systemInstruction).toEqual({
        role: "user",
        parts: [{ text: ANTIGRAVITY_SYSTEM_PROMPT }],
      })
    })
  })

  describe("prepend to existing systemInstruction", () => {
    it("should prepend Antigravity prompt before existing systemInstruction parts", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {
          systemInstruction: {
            role: "user",
            parts: [{ text: "existing system prompt" }],
          },
        } as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
      expect(req.systemInstruction?.parts?.length).toBe(2)
      expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
      expect(req.systemInstruction?.parts?.[1]?.text).toBe("existing system prompt")
    })

    it("should preserve multiple existing parts when prepending", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {
          systemInstruction: {
            role: "user",
            parts: [
              { text: "first existing part" },
              { text: "second existing part" },
            ],
          },
        } as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
      expect(req.systemInstruction?.parts?.length).toBe(3)
      expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
      expect(req.systemInstruction?.parts?.[1]?.text).toBe("first existing part")
      expect(req.systemInstruction?.parts?.[2]?.text).toBe("second existing part")
    })
  })

  describe("duplicate prevention", () => {
    it("should not inject if <identity> marker already exists in first part", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {
          systemInstruction: {
            role: "user",
            parts: [{ text: "some prompt with <identity> marker already" }],
          },
        } as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
      expect(req.systemInstruction?.parts?.length).toBe(1)
      expect(req.systemInstruction?.parts?.[0]?.text).toBe("some prompt with <identity> marker already")
    })

    it("should inject if <identity> marker is not in first part", () => {
      // #given
      const wrappedBody = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: {
          systemInstruction: {
            role: "user",
            parts: [
              { text: "not the identity marker" },
              { text: "some <identity> in second part" },
            ],
          },
        } as Record<string, unknown>,
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then
      const req = wrappedBody.request as { systemInstruction?: { parts: Array<{ text: string }> } }
      expect(req.systemInstruction?.parts?.length).toBe(3)
      expect(req.systemInstruction?.parts?.[0]?.text).toBe(ANTIGRAVITY_SYSTEM_PROMPT)
    })
  })

  describe("edge cases", () => {
    it("should handle request without request field", () => {
      // #given
      const wrappedBody: { project: string; model: string; request?: Record<string, unknown> } = {
        project: "test-project",
        model: "gemini-3-pro-preview",
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then - should not throw, should not modify
      expect(wrappedBody).not.toHaveProperty("systemInstruction")
    })

    it("should handle request with non-object request field", () => {
      // #given
      const wrappedBody: { project: string; model: string; request?: unknown } = {
        project: "test-project",
        model: "gemini-3-pro-preview",
        request: "not an object",
      }

      // #when
      injectSystemPrompt(wrappedBody)

      // #then - should not throw
    })
  })
})

describe("wrapRequestBody", () => {
  it("should create wrapped body with correct structure", () => {
    // #given
    const body = {
      model: "gemini-3-pro-preview",
      contents: [{ role: "user", parts: [{ text: "Hello" }] }],
    }
    const projectId = "test-project"
    const modelName = "gemini-3-pro-preview"
    const sessionId = "test-session"

    // #when
    const result = wrapRequestBody(body, projectId, modelName, sessionId)

    // #then
    expect(result).toHaveProperty("project", projectId)
    expect(result).toHaveProperty("model", "gemini-3-pro-preview")
    expect(result).toHaveProperty("request")
    expect(result.request).toHaveProperty("sessionId", sessionId)
    expect(result.request).toHaveProperty("contents")
    expect(result.request.contents).toEqual(body.contents)
    expect(result.request).not.toHaveProperty("model") // model should be moved to outer
  })

  it("should include systemInstruction in wrapped request", () => {
    // #given
    const body = {
      model: "gemini-3-pro-preview",
      contents: [{ role: "user", parts: [{ text: "Hello" }] }],
    }
    const projectId = "test-project"
    const modelName = "gemini-3-pro-preview"
    const sessionId = "test-session"

    // #when
    const result = wrapRequestBody(body, projectId, modelName, sessionId)

    // #then
    const req = result.request as { systemInstruction?: { parts: Array<{ text: string }> } }
    expect(req).toHaveProperty("systemInstruction")
    expect(req.systemInstruction?.parts?.[0]?.text).toContain("<identity>")
  })
})
@@ -1,378 +0,0 @@
/**
 * Antigravity request transformer.
 * Transforms OpenAI-format requests to Antigravity format.
 * Does NOT handle tool normalization (handled by tools.ts in Task 9).
 */

import {
  ANTIGRAVITY_API_VERSION,
  ANTIGRAVITY_ENDPOINT_FALLBACKS,
  ANTIGRAVITY_HEADERS,
  ANTIGRAVITY_SYSTEM_PROMPT,
  SKIP_THOUGHT_SIGNATURE_VALIDATOR,
  alias2ModelName,
} from "./constants"
import type { AntigravityRequestBody } from "./types"

/**
 * Result of request transformation including URL, headers, and body.
 */
export interface TransformedRequest {
  /** Transformed URL for Antigravity API */
  url: string
  /** Request headers including Authorization and Antigravity-specific headers */
  headers: Record<string, string>
  /** Transformed request body in Antigravity format */
  body: AntigravityRequestBody
  /** Whether this is a streaming request */
  streaming: boolean
}

/**
 * Build Antigravity-specific request headers.
 * Includes Authorization, User-Agent, X-Goog-Api-Client, and Client-Metadata.
 *
 * @param accessToken - OAuth access token for Authorization header
 * @returns Headers object with all required Antigravity headers
 */
export function buildRequestHeaders(accessToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
    "User-Agent": ANTIGRAVITY_HEADERS["User-Agent"],
    "X-Goog-Api-Client": ANTIGRAVITY_HEADERS["X-Goog-Api-Client"],
    "Client-Metadata": ANTIGRAVITY_HEADERS["Client-Metadata"],
  }
}

/**
 * Extract model name from request body.
 * OpenAI-format requests include model in the body.
 *
 * @param body - Request body that may contain a model field
 * @returns Model name or undefined if not found
 */
export function extractModelFromBody(
  body: Record<string, unknown>
): string | undefined {
  const model = body.model
  if (typeof model === "string" && model.trim()) {
    return model.trim()
  }
  return undefined
}

/**
 * Extract model name from URL path.
 * Handles Google Generative Language API format: /models/{model}:{action}
 *
 * @param url - Request URL to parse
 * @returns Model name or undefined if not found
 */
export function extractModelFromUrl(url: string): string | undefined {
  // Match Google's API format: /models/gemini-3-pro:generateContent
  const match = url.match(/\/models\/([^:]+):/)
  if (match && match[1]) {
    return match[1]
  }
  return undefined
}

/**
 * Determine the action type from the URL path.
 * E.g., generateContent, streamGenerateContent
 *
 * @param url - Request URL to parse
 * @returns Action name or undefined if not found
 */
export function extractActionFromUrl(url: string): string | undefined {
  // Match Google's API format: /models/gemini-3-pro:generateContent
  const match = url.match(/\/models\/[^:]+:(\w+)/)
  if (match && match[1]) {
    return match[1]
  }
  return undefined
}

/**
 * Check if a URL is targeting Google's Generative Language API.
 *
 * @param url - URL to check
 * @returns true if this is a Google Generative Language API request
 */
export function isGenerativeLanguageRequest(url: string): boolean {
  return url.includes("generativelanguage.googleapis.com")
}

/**
 * Build Antigravity API URL for the given action.
 *
 * @param baseEndpoint - Base Antigravity endpoint URL (from fallbacks)
 * @param action - API action (e.g., generateContent, streamGenerateContent)
 * @param streaming - Whether to append SSE query parameter
 * @returns Formatted Antigravity API URL
 */
export function buildAntigravityUrl(
  baseEndpoint: string,
  action: string,
  streaming: boolean
): string {
  const query = streaming ? "?alt=sse" : ""
  return `${baseEndpoint}/${ANTIGRAVITY_API_VERSION}:${action}${query}`
}

/**
 * Get the first available Antigravity endpoint.
 * Can be used with fallback logic in fetch.ts.
 *
 * @returns Default (first) Antigravity endpoint
 */
export function getDefaultEndpoint(): string {
  return ANTIGRAVITY_ENDPOINT_FALLBACKS[0]
}

function generateRequestId(): string {
  return `agent-${crypto.randomUUID()}`
}

/**
 * Inject ANTIGRAVITY_SYSTEM_PROMPT into request.systemInstruction.
 * Prepends Antigravity prompt before any existing systemInstruction.
 * Prevents duplicate injection by checking for <identity> marker.
 *
 * CRITICAL: Modifies wrappedBody.request.systemInstruction (NOT outer body!)
 *
 * @param wrappedBody - The wrapped request body with request field
 */
export function injectSystemPrompt(wrappedBody: { request?: unknown }): void {
  if (!wrappedBody.request || typeof wrappedBody.request !== "object") {
    return
  }

  const req = wrappedBody.request as Record<string, unknown>

  // Check for duplicate injection - if <identity> marker exists in first part, skip
  if (req.systemInstruction && typeof req.systemInstruction === "object") {
    const existing = req.systemInstruction as Record<string, unknown>
    if (existing.parts && Array.isArray(existing.parts)) {
      const firstPart = existing.parts[0]
      if (firstPart && typeof firstPart === "object" && "text" in firstPart) {
        const text = (firstPart as { text: string }).text
        if (text.includes("<identity>")) {
          return // Already injected, skip
        }
      }
    }
  }

  // Build new parts array - Antigravity prompt first, then existing parts
  const newParts: Array<{ text: string }> = [{ text: ANTIGRAVITY_SYSTEM_PROMPT }]

  // Carry over existing parts (after the Antigravity prompt) if systemInstruction exists with parts
  if (req.systemInstruction && typeof req.systemInstruction === "object") {
    const existing = req.systemInstruction as Record<string, unknown>
    if (existing.parts && Array.isArray(existing.parts)) {
      for (const part of existing.parts) {
        if (part && typeof part === "object" && "text" in part) {
          newParts.push(part as { text: string })
        }
      }
    }
  }

  // Set the new systemInstruction
  req.systemInstruction = {
    role: "user",
    parts: newParts,
  }
}

export function wrapRequestBody(
  body: Record<string, unknown>,
  projectId: string,
  modelName: string,
  sessionId: string
): AntigravityRequestBody {
  const requestPayload = { ...body }
  delete requestPayload.model

  let normalizedModel = modelName
  if (normalizedModel.startsWith("antigravity-")) {
    normalizedModel = normalizedModel.substring("antigravity-".length)
  }
  const apiModel = alias2ModelName(normalizedModel)
  debugLog(`[MODEL] input="${modelName}" → normalized="${normalizedModel}" → api="${apiModel}"`)

  const requestObj = {
    ...requestPayload,
    sessionId,
    toolConfig: {
      ...((requestPayload.toolConfig as Record<string, unknown>) || {}),
      functionCallingConfig: {
        mode: "VALIDATED",
      },
    },
  }
  delete (requestObj as Record<string, unknown>).safetySettings

  const wrappedBody: AntigravityRequestBody = {
    project: projectId,
    model: apiModel,
    userAgent: "antigravity",
    requestType: "agent",
    requestId: generateRequestId(),
    request: requestObj,
  }

  injectSystemPrompt(wrappedBody)

  return wrappedBody
}

interface ContentPart {
  functionCall?: Record<string, unknown>
  thoughtSignature?: string
  [key: string]: unknown
}

interface ContentBlock {
  role?: string
  parts?: ContentPart[]
  [key: string]: unknown
}

function debugLog(message: string): void {
  if (process.env.ANTIGRAVITY_DEBUG === "1") {
    console.log(`[antigravity-request] ${message}`)
  }
}

export function injectThoughtSignatureIntoFunctionCalls(
  body: Record<string, unknown>,
  signature: string | undefined
): Record<string, unknown> {
  // Always use skip validator as fallback (CLIProxyAPI approach)
  const effectiveSignature = signature || SKIP_THOUGHT_SIGNATURE_VALIDATOR
  debugLog(`[TSIG][INJECT] signature=${effectiveSignature.substring(0, 30)}... (${signature ? "provided" : "default"})`)
  debugLog(`[TSIG][INJECT] body keys: ${Object.keys(body).join(", ")}`)

  const contents = body.contents as ContentBlock[] | undefined
  if (!contents || !Array.isArray(contents)) {
    debugLog(`[TSIG][INJECT] No contents array! Has messages: ${!!body.messages}`)
    return body
  }

  debugLog(`[TSIG][INJECT] Found ${contents.length} content blocks`)
  let injectedCount = 0
  const modifiedContents = contents.map((content) => {
    if (!content.parts || !Array.isArray(content.parts)) {
      return content
    }

    const modifiedParts = content.parts.map((part) => {
      if (part.functionCall && !part.thoughtSignature) {
        injectedCount++
        return {
          ...part,
          thoughtSignature: effectiveSignature,
        }
      }
      return part
    })

    return { ...content, parts: modifiedParts }
  })

  debugLog(`[TSIG][INJECT] injected signature into ${injectedCount} functionCall(s)`)
  return { ...body, contents: modifiedContents }
}
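The signature-injection walk over `contents` in `injectThoughtSignatureIntoFunctionCalls` can be exercised in isolation. A minimal sketch using a reduced part/content shape (the trimmed-down types here are for illustration, not the file's actual `ContentPart`/`ContentBlock`):

```typescript
interface Part { functionCall?: object; thoughtSignature?: string; text?: string }
interface Content { role: string; parts: Part[] }

// Mirrors the map above: only functionCall parts without an existing
// thoughtSignature receive one; everything else passes through untouched.
function inject(contents: Content[], signature: string): Content[] {
  return contents.map((c) => ({
    ...c,
    parts: c.parts.map((p) =>
      p.functionCall && !p.thoughtSignature ? { ...p, thoughtSignature: signature } : p
    ),
  }))
}

const out = inject(
  [{ role: "model", parts: [{ functionCall: { name: "ls" } }, { text: "hi" }] }],
  "sig-1"
)
```

Note the map builds new part and content objects rather than mutating the input, matching the non-destructive `{ ...body, contents: modifiedContents }` return in the real function.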

/**
 * Detect if request is for streaming.
 * Checks both action name and request body for stream flag.
 *
 * @param url - Request URL
 * @param body - Request body
 * @returns true if streaming is requested
 */
export function isStreamingRequest(
  url: string,
  body: Record<string, unknown>
): boolean {
  // Check URL action
  const action = extractActionFromUrl(url)
  if (action === "streamGenerateContent") {
    return true
  }

  // Check body for stream flag
  if (body.stream === true) {
    return true
  }

  return false
}

export interface TransformRequestOptions {
  url: string
  body: Record<string, unknown>
  accessToken: string
  projectId: string
  sessionId: string
  modelName?: string
  endpointOverride?: string
  thoughtSignature?: string
}

export function transformRequest(options: TransformRequestOptions): TransformedRequest {
  const {
    url,
    body,
    accessToken,
    projectId,
    sessionId,
    modelName,
    endpointOverride,
    thoughtSignature,
  } = options

  const effectiveModel =
    modelName || extractModelFromBody(body) || extractModelFromUrl(url) || "gemini-3-pro-high"

  const streaming = isStreamingRequest(url, body)
  const action = streaming ? "streamGenerateContent" : "generateContent"

  const endpoint = endpointOverride || getDefaultEndpoint()
  const transformedUrl = buildAntigravityUrl(endpoint, action, streaming)

  const headers = buildRequestHeaders(accessToken)
  if (streaming) {
    headers["Accept"] = "text/event-stream"
  }

  const bodyWithSignature = injectThoughtSignatureIntoFunctionCalls(body, thoughtSignature)
  const wrappedBody = wrapRequestBody(bodyWithSignature, projectId, effectiveModel, sessionId)

  return {
    url: transformedUrl,
    headers,
    body: wrappedBody,
    streaming,
  }
}

/**
 * Prepare request headers for streaming responses.
 * Adds Accept header for SSE format.
 *
 * @param headers - Existing headers object
 * @returns Headers with streaming support
 */
export function addStreamingHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return {
    ...headers,
    Accept: "text/event-stream",
  }
}
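The model/action extraction in `transformRequest` reduces to the two regexes from `extractModelFromUrl` and `extractActionFromUrl`. A standalone check of the patterns against a representative Generative Language API URL (the URL value itself is an illustrative example):

```typescript
// Same patterns as extractModelFromUrl / extractActionFromUrl above.
const modelPattern = /\/models\/([^:]+):/
const actionPattern = /\/models\/[^:]+:(\w+)/

const url =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro:streamGenerateContent?alt=sse"

// Capture group 1 of each pattern holds the model name and action respectively.
const model = url.match(modelPattern)?.[1]
const action = url.match(actionPattern)?.[1]
```

Because `\w+` stops at the `?`, the `alt=sse` query string never leaks into the extracted action, which is what lets `isStreamingRequest` compare it against `"streamGenerateContent"` directly.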
@@ -1,598 +0,0 @@
|
||||
/**
|
||||
* Antigravity Response Handler
|
||||
* Transforms Antigravity/Gemini API responses to OpenAI-compatible format
|
||||
*
|
||||
* Key responsibilities:
|
||||
* - Non-streaming response transformation
|
||||
* - SSE streaming response transformation (buffered - see transformStreamingResponse)
|
||||
* - Error response handling with retry-after extraction
|
||||
* - Usage metadata extraction from x-antigravity-* headers
|
||||
*/
|
||||
|
||||
import type { AntigravityError, AntigravityUsage } from "./types"
|
||||
|
||||
/**
|
||||
* Usage metadata extracted from Antigravity response headers
|
||||
*/
|
||||
export interface AntigravityUsageMetadata {
|
||||
cachedContentTokenCount?: number
|
||||
totalTokenCount?: number
|
||||
promptTokenCount?: number
|
||||
candidatesTokenCount?: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Transform result with response and metadata
|
||||
*/
|
||||
export interface TransformResult {
|
||||
response: Response
|
||||
usage?: AntigravityUsageMetadata
|
||||
retryAfterMs?: number
|
||||
error?: AntigravityError
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract usage metadata from Antigravity response headers
|
||||
*
|
||||
* Antigravity sets these headers:
|
||||
* - x-antigravity-cached-content-token-count
|
||||
* - x-antigravity-total-token-count
|
||||
* - x-antigravity-prompt-token-count
|
||||
* - x-antigravity-candidates-token-count
|
||||
*
|
||||
* @param headers - Response headers
|
||||
* @returns Usage metadata if found
|
||||
*/
|
||||
export function extractUsageFromHeaders(headers: Headers): AntigravityUsageMetadata | undefined {
|
||||
const cached = headers.get("x-antigravity-cached-content-token-count")
|
||||
const total = headers.get("x-antigravity-total-token-count")
|
||||
const prompt = headers.get("x-antigravity-prompt-token-count")
|
||||
const candidates = headers.get("x-antigravity-candidates-token-count")
|
||||
|
||||
// Return undefined if no usage headers found
|
||||
if (!cached && !total && !prompt && !candidates) {
|
||||
return undefined
|
||||
}
|
||||
|
||||
const usage: AntigravityUsageMetadata = {}
|
||||
|
||||
if (cached) {
|
||||
const parsed = parseInt(cached, 10)
|
||||
if (!isNaN(parsed)) {
|
||||
usage.cachedContentTokenCount = parsed
|
||||
}
|
||||
}
|
||||
|
||||
if (total) {
|
||||
const parsed = parseInt(total, 10)
|
||||
if (!isNaN(parsed)) {
|
||||
usage.totalTokenCount = parsed
|
||||
}
|
||||
}
|
||||
|
||||
if (prompt) {
|
||||
const parsed = parseInt(prompt, 10)
|
||||
if (!isNaN(parsed)) {
|
||||
usage.promptTokenCount = parsed
|
||||
}
|
||||
}
|
||||
|
||||
if (candidates) {
|
||||
const parsed = parseInt(candidates, 10)
|
||||
if (!isNaN(parsed)) {
|
||||
usage.candidatesTokenCount = parsed
|
||||
}
|
||||
}
|
||||
|
||||
return Object.keys(usage).length > 0 ? usage : undefined
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract retry-after value from error response
|
||||
*
|
||||
* Antigravity returns retry info in error.details array:
|
||||
* {
|
||||
* error: {
|
||||
* details: [{
|
||||
* "@type": "type.googleapis.com/google.rpc.RetryInfo",
|
||||
* "retryDelay": "5.123s"
|
||||
* }]
|
||||
* }
|
||||
* }
|
||||
*
|
||||
* Also checks standard Retry-After header.
|
||||
*
|
||||
* @param response - Response object (for headers)
|
||||
* @param errorBody - Parsed error body (optional)
|
||||
* @returns Retry after value in milliseconds, or undefined
|
||||
*/
|
||||
export function extractRetryAfterMs(
|
||||
response: Response,
|
||||
errorBody?: Record<string, unknown>,
|
||||
): number | undefined {
|
||||
// First, check standard Retry-After header
|
||||
const retryAfterHeader = response.headers.get("Retry-After")
|
||||
if (retryAfterHeader) {
|
||||
const seconds = parseFloat(retryAfterHeader)
|
||||
if (!isNaN(seconds) && seconds > 0) {
|
||||
return Math.ceil(seconds * 1000)
|
||||
}
|
||||
}
|
||||
|
||||
// Check retry-after-ms header (set by some transformers)
|
||||
const retryAfterMsHeader = response.headers.get("retry-after-ms")
|
||||
if (retryAfterMsHeader) {
|
||||
const ms = parseInt(retryAfterMsHeader, 10)
|
||||
if (!isNaN(ms) && ms > 0) {
|
||||
return ms
|
||||
}
|
||||
}
|
||||
|
||||
// Check error body for RetryInfo
|
||||
if (!errorBody) {
|
||||
return undefined
|
||||
}
|
||||
|
||||
const error = errorBody.error as Record<string, unknown> | undefined
|
||||
if (!error?.details || !Array.isArray(error.details)) {
|
||||
return undefined
|
||||
}
|
||||
|
||||
const retryInfo = (error.details as Array<Record<string, unknown>>).find(
|
||||
(detail) => detail["@type"] === "type.googleapis.com/google.rpc.RetryInfo",
|
||||
)
|
||||
|
||||
if (!retryInfo?.retryDelay || typeof retryInfo.retryDelay !== "string") {
|
||||
return undefined
|
||||
}
|
||||
|
||||
// Parse retryDelay format: "5.123s"
|
||||
const match = retryInfo.retryDelay.match(/^([\d.]+)s$/)
|
||||
if (match?.[1]) {
|
||||
const seconds = parseFloat(match[1])
|
||||
if (!isNaN(seconds) && seconds > 0) {
|
||||
return Math.ceil(seconds * 1000)
|
||||
}
|
||||
}
|
||||
|
||||
return undefined
|
||||
}
|
||||

/**
 * Parse error response body and extract useful details
 *
 * @param text - Raw response text
 * @returns Parsed error or undefined
 */
export function parseErrorBody(text: string): AntigravityError | undefined {
  try {
    const parsed = JSON.parse(text) as Record<string, unknown>

    // Handle error wrapper
    if (parsed.error && typeof parsed.error === "object") {
      const errorObj = parsed.error as Record<string, unknown>
      return {
        message: String(errorObj.message || "Unknown error"),
        type: errorObj.type ? String(errorObj.type) : undefined,
        code: errorObj.code as string | number | undefined,
      }
    }

    // Handle direct error message
    if (parsed.message && typeof parsed.message === "string") {
      return {
        message: parsed.message,
        type: parsed.type ? String(parsed.type) : undefined,
        code: parsed.code as string | number | undefined,
      }
    }

    return undefined
  } catch {
    // If not valid JSON, return generic error
    return {
      message: text || "Unknown error",
    }
  }
}
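For reference, this is the wrapped error shape `parseErrorBody` handles first; the message and code values here are made-up sample data:

```typescript
// A Google-style wrapped error body: { "error": { "message", "code", ... } }
const rawErrorBody = '{"error":{"message":"Quota exceeded","code":429}}'
const errJson = JSON.parse(rawErrorBody) as { error: { message: string; code: number } }
console.log(errJson.error.message) // Quota exceeded
```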

/**
 * Transform a non-streaming Antigravity response to OpenAI-compatible format
 *
 * For non-streaming responses:
 * - Parses the response body
 * - Unwraps the `response` field if present (Antigravity wraps responses)
 * - Extracts usage metadata from headers
 * - Handles error responses
 *
 * Note: Does NOT handle thinking block extraction (Task 10)
 * Note: Does NOT handle tool normalization (Task 9)
 *
 * @param response - Fetch Response object
 * @returns TransformResult with transformed response and metadata
 */
export async function transformResponse(response: Response): Promise<TransformResult> {
  const headers = new Headers(response.headers)
  const usage = extractUsageFromHeaders(headers)

  // Handle error responses
  if (!response.ok) {
    const text = await response.text()
    const error = parseErrorBody(text)
    const retryAfterMs = extractRetryAfterMs(response, error ? { error } : undefined)

    // Re-parse the raw body as well: parseErrorBody drops fields like
    // error.details, which extractRetryAfterMs needs for RetryInfo
    let errorBody: Record<string, unknown> | undefined
    try {
      errorBody = JSON.parse(text) as Record<string, unknown>
    } catch {
      errorBody = { error: { message: text } }
    }

    const retryMs = extractRetryAfterMs(response, errorBody) ?? retryAfterMs

    // Set retry headers if found
    if (retryMs) {
      headers.set("Retry-After", String(Math.ceil(retryMs / 1000)))
      headers.set("retry-after-ms", String(retryMs))
    }

    return {
      response: new Response(text, {
        status: response.status,
        statusText: response.statusText,
        headers,
      }),
      usage,
      retryAfterMs: retryMs,
      error,
    }
  }

  // Handle successful response
  const contentType = response.headers.get("content-type") ?? ""
  const isJson = contentType.includes("application/json")

  if (!isJson) {
    // Return non-JSON responses as-is
    return { response, usage }
  }

  try {
    const text = await response.text()
    const parsed = JSON.parse(text) as Record<string, unknown>

    // Antigravity wraps response in { response: { ... } }
    // Unwrap if present
    let transformedBody: unknown = parsed
    if (parsed.response !== undefined) {
      transformedBody = parsed.response
    }

    return {
      response: new Response(JSON.stringify(transformedBody), {
        status: response.status,
        statusText: response.statusText,
        headers,
      }),
      usage,
    }
  } catch {
    // If parsing fails, return original response
    return { response, usage }
  }
}
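The unwrap step above reduces to a few lines; here it is in isolation, with a minimal sample body:

```typescript
// Antigravity wraps the payload as { response: { ... } };
// unwrap it when present, otherwise pass the body through unchanged.
const wrappedBody = JSON.parse('{"response":{"candidates":[]}}') as Record<string, unknown>
const unwrappedBody: unknown = wrappedBody.response !== undefined ? wrappedBody.response : wrappedBody
console.log(JSON.stringify(unwrappedBody)) // {"candidates":[]}
```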

/**
 * Transform a single SSE data line
 *
 * Antigravity SSE format:
 *   data: { "response": { ... actual data ... } }
 *
 * OpenAI SSE format:
 *   data: { ... actual data ... }
 *
 * @param line - SSE data line
 * @returns Transformed line
 */
function transformSseLine(line: string): string {
  if (!line.startsWith("data:")) {
    return line
  }

  const json = line.slice(5).trim()
  if (!json || json === "[DONE]") {
    return line
  }

  try {
    const parsed = JSON.parse(json) as Record<string, unknown>

    // Unwrap { response: { ... } } wrapper
    if (parsed.response !== undefined) {
      return `data: ${JSON.stringify(parsed.response)}`
    }

    return line
  } catch {
    // If parsing fails, return original line
    return line
  }
}

/**
 * Transform SSE streaming payload
 *
 * Processes each line in the SSE stream:
 * - Unwraps { response: { ... } } wrapper from data lines
 * - Preserves other SSE control lines (event:, id:, retry:, empty lines)
 *
 * Note: Does NOT extract thinking blocks (Task 10)
 *
 * @param payload - Raw SSE payload text
 * @returns Transformed SSE payload
 */
export function transformStreamingPayload(payload: string): string {
  return payload
    .split("\n")
    .map(transformSseLine)
    .join("\n")
}
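A self-contained sketch of the per-line unwrap this split/map/join applies — the `unwrapLine` helper inlines the same logic as `transformSseLine` so the snippet runs on its own, and the payload is sample data:

```typescript
// Inline copy of the data-line unwrap: non-data lines, empty data,
// and the [DONE] sentinel pass through untouched.
const unwrapLine = (line: string): string => {
  if (!line.startsWith("data:")) return line
  const json = line.slice(5).trim()
  if (!json || json === "[DONE]") return line
  try {
    const parsed = JSON.parse(json) as Record<string, unknown>
    return parsed.response !== undefined ? `data: ${JSON.stringify(parsed.response)}` : line
  } catch {
    return line
  }
}

const ssePayload = 'data: {"response":{"text":"hi"}}\n\ndata: [DONE]'
console.log(ssePayload.split("\n").map(unwrapLine).join("\n"))
```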

function createSseTransformStream(): TransformStream<Uint8Array, Uint8Array> {
  const decoder = new TextDecoder()
  const encoder = new TextEncoder()
  let buffer = ""

  return new TransformStream({
    transform(chunk, controller) {
      buffer += decoder.decode(chunk, { stream: true })
      const lines = buffer.split("\n")
      buffer = lines.pop() || ""

      for (const line of lines) {
        const transformed = transformSseLine(line)
        controller.enqueue(encoder.encode(transformed + "\n"))
      }
    },
    flush(controller) {
      if (buffer) {
        const transformed = transformSseLine(buffer)
        controller.enqueue(encoder.encode(transformed))
      }
    },
  })
}

/**
 * Transforms a streaming SSE response from Antigravity to OpenAI format.
 *
 * Uses TransformStream to process SSE chunks incrementally as they arrive.
 * Each line is transformed immediately and yielded to the client.
 *
 * @param response - The SSE response from Antigravity API
 * @returns TransformResult with transformed streaming response
 */
export async function transformStreamingResponse(response: Response): Promise<TransformResult> {
  const headers = new Headers(response.headers)
  const usage = extractUsageFromHeaders(headers)

  // Handle error responses
  if (!response.ok) {
    const text = await response.text()
    const error = parseErrorBody(text)

    let errorBody: Record<string, unknown> | undefined
    try {
      errorBody = JSON.parse(text) as Record<string, unknown>
    } catch {
      errorBody = { error: { message: text } }
    }

    const retryAfterMs = extractRetryAfterMs(response, errorBody)

    if (retryAfterMs) {
      headers.set("Retry-After", String(Math.ceil(retryAfterMs / 1000)))
      headers.set("retry-after-ms", String(retryAfterMs))
    }

    return {
      response: new Response(text, {
        status: response.status,
        statusText: response.statusText,
        headers,
      }),
      usage,
      retryAfterMs,
      error,
    }
  }

  // Check content type
  const contentType = response.headers.get("content-type") ?? ""
  const isEventStream =
    contentType.includes("text/event-stream") || response.url.includes("alt=sse")

  if (!isEventStream) {
    // Not SSE: apply the non-streaming transform inline
    // (read the body once and re-wrap it in a new Response)
    const text = await response.text()
    try {
      const parsed = JSON.parse(text) as Record<string, unknown>
      let transformedBody: unknown = parsed
      if (parsed.response !== undefined) {
        transformedBody = parsed.response
      }
      return {
        response: new Response(JSON.stringify(transformedBody), {
          status: response.status,
          statusText: response.statusText,
          headers,
        }),
        usage,
      }
    } catch {
      return {
        response: new Response(text, {
          status: response.status,
          statusText: response.statusText,
          headers,
        }),
        usage,
      }
    }
  }

  if (!response.body) {
    return { response, usage }
  }

  headers.delete("content-length")
  headers.delete("content-encoding")
  headers.set("content-type", "text/event-stream; charset=utf-8")

  const transformStream = createSseTransformStream()
  const transformedBody = response.body.pipeThrough(transformStream)

  return {
    response: new Response(transformedBody, {
      status: response.status,
      statusText: response.statusText,
      headers,
    }),
    usage,
  }
}

/**
 * Check if response is a streaming SSE response
 *
 * @param response - Fetch Response object
 * @returns True if response is SSE stream
 */
export function isStreamingResponse(response: Response): boolean {
  const contentType = response.headers.get("content-type") ?? ""
  return contentType.includes("text/event-stream") || response.url.includes("alt=sse")
}

/**
 * Extract thought signature from SSE payload text
 *
 * Looks for thoughtSignature in SSE events:
 *   data: { "response": { "candidates": [{ "content": { "parts": [{ "thoughtSignature": "..." }] } }] } }
 *
 * Returns the last found signature (most recent in the stream).
 *
 * @param payload - SSE payload text
 * @returns Last thought signature if found
 */
export function extractSignatureFromSsePayload(payload: string): string | undefined {
  const lines = payload.split("\n")
  let lastSignature: string | undefined

  for (const line of lines) {
    if (!line.startsWith("data:")) {
      continue
    }

    const json = line.slice(5).trim()
    if (!json || json === "[DONE]") {
      continue
    }

    try {
      const parsed = JSON.parse(json) as Record<string, unknown>

      // Check in response wrapper (Antigravity format)
      const response = (parsed.response || parsed) as Record<string, unknown>
      const candidates = response.candidates as Array<Record<string, unknown>> | undefined

      if (candidates && Array.isArray(candidates)) {
        for (const candidate of candidates) {
          const content = candidate.content as Record<string, unknown> | undefined
          const parts = content?.parts as Array<Record<string, unknown>> | undefined

          if (parts && Array.isArray(parts)) {
            for (const part of parts) {
              const sig = (part.thoughtSignature || part.thought_signature) as string | undefined
              if (sig && typeof sig === "string") {
                lastSignature = sig
              }
            }
          }
        }
      }
    } catch {
      // Continue to next line if parsing fails
    }
  }

  return lastSignature
}
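To make the nesting concrete, here is where `thoughtSignature` sits in a single Antigravity SSE event; the signature value is sample data:

```typescript
// The signature lives at response.candidates[].content.parts[].thoughtSignature.
const eventJson =
  '{"response":{"candidates":[{"content":{"parts":[{"thoughtSignature":"sig-123"}]}}]}}'
const evt = JSON.parse(eventJson)
const sig = evt.response.candidates[0].content.parts[0].thoughtSignature as string
console.log(sig) // sig-123
```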

/**
 * Extract usage from SSE payload text
 *
 * Looks for usageMetadata in SSE events:
 *   data: { "usageMetadata": { ... } }
 *
 * @param payload - SSE payload text
 * @returns Usage if found
 */
export function extractUsageFromSsePayload(payload: string): AntigravityUsage | undefined {
  const lines = payload.split("\n")

  for (const line of lines) {
    if (!line.startsWith("data:")) {
      continue
    }

    const json = line.slice(5).trim()
    if (!json || json === "[DONE]") {
      continue
    }

    try {
      const parsed = JSON.parse(json) as Record<string, unknown>

      // Check for usageMetadata at top level
      if (parsed.usageMetadata && typeof parsed.usageMetadata === "object") {
        const meta = parsed.usageMetadata as Record<string, unknown>
        return {
          prompt_tokens: typeof meta.promptTokenCount === "number" ? meta.promptTokenCount : 0,
          completion_tokens:
            typeof meta.candidatesTokenCount === "number" ? meta.candidatesTokenCount : 0,
          total_tokens: typeof meta.totalTokenCount === "number" ? meta.totalTokenCount : 0,
        }
      }

      // Check for usage in response wrapper
      if (parsed.response && typeof parsed.response === "object") {
        const resp = parsed.response as Record<string, unknown>
        if (resp.usageMetadata && typeof resp.usageMetadata === "object") {
          const meta = resp.usageMetadata as Record<string, unknown>
          return {
            prompt_tokens: typeof meta.promptTokenCount === "number" ? meta.promptTokenCount : 0,
            completion_tokens:
              typeof meta.candidatesTokenCount === "number" ? meta.candidatesTokenCount : 0,
            total_tokens: typeof meta.totalTokenCount === "number" ? meta.totalTokenCount : 0,
          }
        }
      }

      // Check for standard OpenAI-style usage
      if (parsed.usage && typeof parsed.usage === "object") {
        const u = parsed.usage as Record<string, unknown>
        return {
          prompt_tokens: typeof u.prompt_tokens === "number" ? u.prompt_tokens : 0,
          completion_tokens: typeof u.completion_tokens === "number" ? u.completion_tokens : 0,
          total_tokens: typeof u.total_tokens === "number" ? u.total_tokens : 0,
        }
      }
    } catch {
      // Continue to next line if parsing fails
    }
  }

  return undefined
}
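The field mapping above is a straight rename from Gemini's `usageMetadata` keys to OpenAI's snake_case usage keys; the token counts here are sample values:

```typescript
// usageMetadata -> OpenAI usage: promptTokenCount -> prompt_tokens,
// candidatesTokenCount -> completion_tokens, totalTokenCount -> total_tokens.
const meta = { promptTokenCount: 10, candidatesTokenCount: 5, totalTokenCount: 15 }
const usage = {
  prompt_tokens: meta.promptTokenCount,
  completion_tokens: meta.candidatesTokenCount,
  total_tokens: meta.totalTokenCount,
}
console.log(usage.total_tokens) // 15
```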
@@ -1,388 +0,0 @@
import { describe, it, expect, beforeEach, afterEach } from "bun:test"
import { join } from "node:path"
import { homedir } from "node:os"
import { promises as fs } from "node:fs"
import { tmpdir } from "node:os"
import type { AccountStorage } from "./types"
import { getDataDir, getStoragePath, loadAccounts, saveAccounts } from "./storage"

describe("storage", () => {
  const testDir = join(tmpdir(), `oh-my-opencode-storage-test-${Date.now()}`)
  const testStoragePath = join(testDir, "oh-my-opencode-accounts.json")

  const validStorage: AccountStorage = {
    version: 1,
    accounts: [
      {
        email: "test@example.com",
        tier: "free",
        refreshToken: "refresh-token-123",
        projectId: "project-123",
        accessToken: "access-token-123",
        expiresAt: Date.now() + 3600000,
        rateLimits: {},
      },
    ],
    activeIndex: 0,
  }

  beforeEach(async () => {
    await fs.mkdir(testDir, { recursive: true })
  })

  afterEach(async () => {
    try {
      await fs.rm(testDir, { recursive: true, force: true })
    } catch {
      // ignore cleanup errors
    }
  })

  describe("getDataDir", () => {
    it("returns path containing opencode directory", () => {
      // #given
      // platform is current system

      // #when
      const result = getDataDir()

      // #then
      expect(result).toContain("opencode")
    })

    it("returns XDG_DATA_HOME/opencode when XDG_DATA_HOME is set on non-Windows", () => {
      // #given
      const originalXdg = process.env.XDG_DATA_HOME
      const originalPlatform = process.platform

      if (originalPlatform === "win32") {
        return
      }

      try {
        process.env.XDG_DATA_HOME = "/custom/data"

        // #when
        const result = getDataDir()

        // #then
        expect(result).toBe("/custom/data/opencode")
      } finally {
        if (originalXdg !== undefined) {
          process.env.XDG_DATA_HOME = originalXdg
        } else {
          delete process.env.XDG_DATA_HOME
        }
      }
    })

    it("returns ~/.local/share/opencode when XDG_DATA_HOME is not set on non-Windows", () => {
      // #given
      const originalXdg = process.env.XDG_DATA_HOME
      const originalPlatform = process.platform

      if (originalPlatform === "win32") {
        return
      }

      try {
        delete process.env.XDG_DATA_HOME

        // #when
        const result = getDataDir()

        // #then
        expect(result).toBe(join(homedir(), ".local", "share", "opencode"))
      } finally {
        if (originalXdg !== undefined) {
          process.env.XDG_DATA_HOME = originalXdg
        } else {
          delete process.env.XDG_DATA_HOME
        }
      }
    })
  })

  describe("getStoragePath", () => {
    it("returns path ending with oh-my-opencode-accounts.json", () => {
      // #given
      // no setup needed

      // #when
      const result = getStoragePath()

      // #then
      expect(result.endsWith("oh-my-opencode-accounts.json")).toBe(true)
      expect(result).toContain("opencode")
    })
  })

  describe("loadAccounts", () => {
    it("returns parsed storage when file exists and is valid", async () => {
      // #given
      await fs.writeFile(testStoragePath, JSON.stringify(validStorage), "utf-8")

      // #when
      const result = await loadAccounts(testStoragePath)

      // #then
      expect(result).not.toBeNull()
      expect(result?.version).toBe(1)
      expect(result?.accounts).toHaveLength(1)
      expect(result?.accounts[0].email).toBe("test@example.com")
    })

    it("returns null when file does not exist (ENOENT)", async () => {
      // #given
      const nonExistentPath = join(testDir, "non-existent.json")

      // #when
      const result = await loadAccounts(nonExistentPath)

      // #then
      expect(result).toBeNull()
    })

    it("returns null when file contains invalid JSON", async () => {
      // #given
      const invalidJsonPath = join(testDir, "invalid.json")
      await fs.writeFile(invalidJsonPath, "{ invalid json }", "utf-8")

      // #when
      const result = await loadAccounts(invalidJsonPath)

      // #then
      expect(result).toBeNull()
    })

    it("returns null when file contains valid JSON but invalid schema", async () => {
      // #given
      const invalidSchemaPath = join(testDir, "invalid-schema.json")
      await fs.writeFile(invalidSchemaPath, JSON.stringify({ foo: "bar" }), "utf-8")

      // #when
      const result = await loadAccounts(invalidSchemaPath)

      // #then
      expect(result).toBeNull()
    })

    it("returns null when accounts is not an array", async () => {
      // #given
      const invalidAccountsPath = join(testDir, "invalid-accounts.json")
      await fs.writeFile(
        invalidAccountsPath,
        JSON.stringify({ version: 1, accounts: "not-array", activeIndex: 0 }),
        "utf-8"
      )

      // #when
      const result = await loadAccounts(invalidAccountsPath)

      // #then
      expect(result).toBeNull()
    })

    it("returns null when activeIndex is not a number", async () => {
      // #given
      const invalidIndexPath = join(testDir, "invalid-index.json")
      await fs.writeFile(
        invalidIndexPath,
        JSON.stringify({ version: 1, accounts: [], activeIndex: "zero" }),
        "utf-8"
      )

      // #when
      const result = await loadAccounts(invalidIndexPath)

      // #then
      expect(result).toBeNull()
    })
  })

  describe("saveAccounts", () => {
    it("writes storage to file with proper JSON formatting", async () => {
      // #given
      // testStoragePath is ready

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      const content = await fs.readFile(testStoragePath, "utf-8")
      const parsed = JSON.parse(content)
      expect(parsed.version).toBe(1)
      expect(parsed.accounts).toHaveLength(1)
      expect(parsed.activeIndex).toBe(0)
    })

    it("creates parent directories if they do not exist", async () => {
      // #given
      const nestedPath = join(testDir, "nested", "deep", "oh-my-opencode-accounts.json")

      // #when
      await saveAccounts(validStorage, nestedPath)

      // #then
      const content = await fs.readFile(nestedPath, "utf-8")
      const parsed = JSON.parse(content)
      expect(parsed.version).toBe(1)
    })

    it("overwrites existing file", async () => {
      // #given
      const existingStorage: AccountStorage = {
        version: 1,
        accounts: [],
        activeIndex: 0,
      }
      await fs.writeFile(testStoragePath, JSON.stringify(existingStorage), "utf-8")

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      const content = await fs.readFile(testStoragePath, "utf-8")
      const parsed = JSON.parse(content)
      expect(parsed.accounts).toHaveLength(1)
    })

    it("uses pretty-printed JSON with 2-space indentation", async () => {
      // #given
      // testStoragePath is ready

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      const content = await fs.readFile(testStoragePath, "utf-8")
      expect(content).toContain("\n")
      expect(content).toContain("  ")
    })

    it("sets restrictive file permissions (0o600) for security", async () => {
      // #given
      // testStoragePath is ready

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      const stats = await fs.stat(testStoragePath)
      const mode = stats.mode & 0o777
      expect(mode).toBe(0o600)
    })

    it("uses atomic write pattern with temp file and rename", async () => {
      // #given
      // This test verifies that the file is written atomically
      // by checking that no partial writes occur

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      // If we can read valid JSON, the atomic write succeeded
      const content = await fs.readFile(testStoragePath, "utf-8")
      const parsed = JSON.parse(content)
      expect(parsed.version).toBe(1)
      expect(parsed.accounts).toHaveLength(1)
    })

    it("cleans up temp file on rename failure", async () => {
      // #given
      const readOnlyDir = join(testDir, "readonly")
      await fs.mkdir(readOnlyDir, { recursive: true })
      const readOnlyPath = join(readOnlyDir, "accounts.json")

      await fs.writeFile(readOnlyPath, "{}", "utf-8")
      await fs.chmod(readOnlyPath, 0o444)

      // #when
      let didThrow = false
      try {
        await saveAccounts(validStorage, readOnlyPath)
      } catch {
        didThrow = true
      }

      // #then
      const files = await fs.readdir(readOnlyDir)
      const tempFiles = files.filter((f) => f.includes(".tmp."))
      expect(tempFiles).toHaveLength(0)

      if (!didThrow) {
        console.log("[TEST SKIP] File permissions did not work as expected on this system")
      }

      // Cleanup
      await fs.chmod(readOnlyPath, 0o644)
    })

    it("uses unique temp filename with pid and timestamp", async () => {
      // #given
      // We verify this by checking the implementation behavior
      // The temp file should include process.pid and Date.now()

      // #when
      await saveAccounts(validStorage, testStoragePath)

      // #then
      // File should exist and be valid (temp file was successfully renamed)
      const exists = await fs.access(testStoragePath).then(() => true).catch(() => false)
      expect(exists).toBe(true)
    })

    it("handles sequential writes without corruption", async () => {
      // #given
      const storage1: AccountStorage = {
        ...validStorage,
        accounts: [{ ...validStorage.accounts[0]!, email: "user1@example.com" }],
      }
      const storage2: AccountStorage = {
        ...validStorage,
        accounts: [{ ...validStorage.accounts[0]!, email: "user2@example.com" }],
      }

      // #when - sequential writes (concurrent writes are inherently racy)
      await saveAccounts(storage1, testStoragePath)
      await saveAccounts(storage2, testStoragePath)

      // #then - file should contain valid JSON from last write
      const content = await fs.readFile(testStoragePath, "utf-8")
      const parsed = JSON.parse(content) as AccountStorage
      expect(parsed.version).toBe(1)
      expect(parsed.accounts[0]?.email).toBe("user2@example.com")
    })
  })

  describe("loadAccounts error handling", () => {
    it("re-throws non-ENOENT filesystem errors", async () => {
      // #given
      const unreadableDir = join(testDir, "unreadable")
      await fs.mkdir(unreadableDir, { recursive: true })
      const unreadablePath = join(unreadableDir, "accounts.json")
      await fs.writeFile(unreadablePath, JSON.stringify(validStorage), "utf-8")
      await fs.chmod(unreadablePath, 0o000)

      // #when
      let thrownError: Error | null = null
      let result: unknown = undefined
      try {
        result = await loadAccounts(unreadablePath)
      } catch (error) {
        thrownError = error as Error
      }

      // #then
      if (thrownError) {
        expect((thrownError as NodeJS.ErrnoException).code).not.toBe("ENOENT")
      } else {
        console.log("[TEST SKIP] File permissions did not work as expected on this system, got result:", result)
      }

      // Cleanup
      await fs.chmod(unreadablePath, 0o644)
    })
  })
})
@@ -1,74 +0,0 @@
import { promises as fs } from "node:fs"
import { join, dirname } from "node:path"
import type { AccountStorage } from "./types"
import { getDataDir as getSharedDataDir } from "../../shared/data-path"

export function getDataDir(): string {
  return join(getSharedDataDir(), "opencode")
}

export function getStoragePath(): string {
  return join(getDataDir(), "oh-my-opencode-accounts.json")
}

export async function loadAccounts(path?: string): Promise<AccountStorage | null> {
  const storagePath = path ?? getStoragePath()

  try {
    const content = await fs.readFile(storagePath, "utf-8")
    const data = JSON.parse(content) as unknown

    if (!isValidAccountStorage(data)) {
      return null
    }

    return data
  } catch (error) {
    const errorCode = (error as NodeJS.ErrnoException).code
    if (errorCode === "ENOENT") {
      return null
    }
    if (error instanceof SyntaxError) {
      return null
    }
    throw error
  }
}

export async function saveAccounts(storage: AccountStorage, path?: string): Promise<void> {
  const storagePath = path ?? getStoragePath()

  await fs.mkdir(dirname(storagePath), { recursive: true })

  const content = JSON.stringify(storage, null, 2)
  const tempPath = `${storagePath}.tmp.${process.pid}.${Date.now()}`
  await fs.writeFile(tempPath, content, { encoding: "utf-8", mode: 0o600 })
  try {
    await fs.rename(tempPath, storagePath)
  } catch (error) {
    await fs.unlink(tempPath).catch(() => {})
    throw error
  }
}

function isValidAccountStorage(data: unknown): data is AccountStorage {
  if (typeof data !== "object" || data === null) {
    return false
  }

  const obj = data as Record<string, unknown>

  if (typeof obj.version !== "number") {
    return false
  }

  if (!Array.isArray(obj.accounts)) {
    return false
  }

  if (typeof obj.activeIndex !== "number") {
    return false
  }

  return true
}
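The atomic write in `saveAccounts` hinges on the temp-file name being unique per writer; this standalone sketch shows the naming scheme (the path is a made-up example):

```typescript
// The temp file lives next to the target and embeds pid + timestamp,
// so concurrent writers never collide on the same temp name.
const accountsPath = "/tmp/oh-my-opencode-accounts.json"
const tmpName = `${accountsPath}.tmp.${process.pid}.${Date.now()}`
console.log(tmpName.startsWith(`${accountsPath}.tmp.`)) // true
```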
@@ -1,288 +0,0 @@
|
||||
/**
|
||||
* Tests for reasoning_effort and Gemini 3 thinkingLevel support.
|
||||
*
|
||||
* Tests the following functions:
|
||||
* - getModelThinkingConfig()
|
||||
* - extractThinkingConfig() with reasoning_effort
|
||||
* - applyThinkingConfigToRequest()
|
||||
* - budgetToLevel()
|
||||
*/
|
||||
|
||||
import { describe, it, expect } from "bun:test"
|
||||
import type { AntigravityModelConfig } from "./constants"
|
||||
import {
|
||||
getModelThinkingConfig,
|
||||
extractThinkingConfig,
|
||||
applyThinkingConfigToRequest,
|
||||
budgetToLevel,
|
||||
type ThinkingConfig,
|
||||
type DeleteThinkingConfig,
|
||||
} from "./thinking"
|
||||
|
||||
// ============================================================================
|
||||
// getModelThinkingConfig() tests
|
||||
// ============================================================================
|
||||
|
||||
describe("getModelThinkingConfig", () => {
|
||||
// #given: A model ID that maps to a levels-based thinking config (Gemini 3)
|
||||
// #when: getModelThinkingConfig is called with google/antigravity-gemini-3-pro-high
|
||||
// #then: It should return a config with thinkingType: "levels"
|
||||
it("should return levels config for Gemini 3 model", () => {
|
||||
const config = getModelThinkingConfig("google/antigravity-gemini-3-pro-high")
|
||||
expect(config).toBeDefined()
|
||||
expect(config?.thinkingType).toBe("levels")
|
||||
expect(config?.levels).toEqual(["low", "high"])
|
||||
})
|
||||
|
||||
// #given: A model ID that maps to a numeric-based thinking config (Gemini 2.5)
|
||||
// #when: getModelThinkingConfig is called with gemini-2.5-flash
|
||||
// #then: It should return a config with thinkingType: "numeric"
|
||||
it("should return numeric config for Gemini 2.5 model", () => {
|
||||
const config = getModelThinkingConfig("gemini-2.5-flash")
|
||||
expect(config).toBeDefined()
|
||||
expect(config?.thinkingType).toBe("numeric")
|
||||
expect(config?.min).toBe(0)
|
||||
expect(config?.max).toBe(24576)
|
||||
expect(config?.zeroAllowed).toBe(true)
|
||||
})
|
||||
|
||||
// #given: A model that doesn't have an exact match but includes "gemini-3"
|
||||
// #when: getModelThinkingConfig is called
|
||||
// #then: It should use pattern matching fallback to return levels config
|
||||
it("should use pattern matching fallback for gemini-3", () => {
|
||||
const config = getModelThinkingConfig("gemini-3-pro")
|
||||
expect(config).toBeDefined()
|
||||
expect(config?.thinkingType).toBe("levels")
|
||||
expect(config?.levels).toEqual(["low", "high"])
|
||||
})
|
||||
|
||||
// #given: A model that doesn't have an exact match but includes "claude"
|
||||
// #when: getModelThinkingConfig is called
|
||||
// #then: It should use pattern matching fallback to return numeric config
|
||||
it("should use pattern matching fallback for claude models", () => {
|
||||
const config = getModelThinkingConfig("claude-opus-4-5")
|
||||
expect(config).toBeDefined()
|
||||
expect(config?.thinkingType).toBe("numeric")
|
||||
expect(config?.min).toBe(1024)
|
||||
expect(config?.max).toBe(200000)
|
||||
expect(config?.zeroAllowed).toBe(false)
|
||||
})
|
||||
|
||||
// #given: An unknown model
|
||||
// #when: getModelThinkingConfig is called
|
||||
// #then: It should return undefined
|
||||
it("should return undefined for unknown models", () => {
|
||||
const config = getModelThinkingConfig("unknown-model")
|
||||
expect(config).toBeUndefined()
|
||||
})
|
||||
})

// ============================================================================
// extractThinkingConfig() with reasoning_effort tests
// ============================================================================

describe("extractThinkingConfig with reasoning_effort", () => {
  // #given: A request payload with reasoning_effort set to "high"
  // #when: extractThinkingConfig is called
  // #then: It should return config with thinkingBudget: 24576 and includeThoughts: true
  it("should extract reasoning_effort high correctly", () => {
    const requestPayload = { reasoning_effort: "high" }
    const result = extractThinkingConfig(requestPayload)
    expect(result).toEqual({ thinkingBudget: 24576, includeThoughts: true })
  })

  // #given: A request payload with reasoning_effort set to "low"
  // #when: extractThinkingConfig is called
  // #then: It should return config with thinkingBudget: 1024 and includeThoughts: true
  it("should extract reasoning_effort low correctly", () => {
    const requestPayload = { reasoning_effort: "low" }
    const result = extractThinkingConfig(requestPayload)
    expect(result).toEqual({ thinkingBudget: 1024, includeThoughts: true })
  })

  // #given: A request payload with reasoning_effort set to "none"
  // #when: extractThinkingConfig is called
  // #then: It should return { deleteThinkingConfig: true } (special marker)
  it("should extract reasoning_effort none as delete marker", () => {
    const requestPayload = { reasoning_effort: "none" }
    const result = extractThinkingConfig(requestPayload)
    expect(result as unknown).toEqual({ deleteThinkingConfig: true })
  })

  // #given: A request payload with reasoning_effort set to "medium"
  // #when: extractThinkingConfig is called
  // #then: It should return config with thinkingBudget: 8192
  it("should extract reasoning_effort medium correctly", () => {
    const requestPayload = { reasoning_effort: "medium" }
    const result = extractThinkingConfig(requestPayload)
    expect(result).toEqual({ thinkingBudget: 8192, includeThoughts: true })
  })

  // #given: A request payload with reasoning_effort in extraBody (not main payload)
  // #when: extractThinkingConfig is called
  // #then: It should still extract and return the correct config
  it("should extract reasoning_effort from extraBody", () => {
    const requestPayload = {}
    const extraBody = { reasoning_effort: "high" }
    const result = extractThinkingConfig(requestPayload, undefined, extraBody)
    expect(result).toEqual({ thinkingBudget: 24576, includeThoughts: true })
  })

  // #given: A request payload without reasoning_effort
  // #when: extractThinkingConfig is called
  // #then: It should return undefined (existing behavior unchanged)
  it("should return undefined when reasoning_effort not present", () => {
    const requestPayload = { model: "gemini-2.5-flash" }
    const result = extractThinkingConfig(requestPayload)
    expect(result).toBeUndefined()
  })
})

// ============================================================================
// budgetToLevel() tests
// ============================================================================

describe("budgetToLevel", () => {
  // #given: A thinking budget of 24576 and a Gemini 3 model
  // #when: budgetToLevel is called
  // #then: It should return "high"
  it("should convert budget 24576 to level high for Gemini 3", () => {
    const level = budgetToLevel(24576, "gemini-3-pro")
    expect(level).toBe("high")
  })

  // #given: A thinking budget of 1024 and a Gemini 3 model
  // #when: budgetToLevel is called
  // #then: It should return "low"
  it("should convert budget 1024 to level low for Gemini 3", () => {
    const level = budgetToLevel(1024, "gemini-3-pro")
    expect(level).toBe("low")
  })

  // #given: A thinking budget that doesn't match any predefined level
  // #when: budgetToLevel is called
  // #then: It should return the highest available level
  it("should return highest level for unknown budget", () => {
    const level = budgetToLevel(99999, "gemini-3-pro")
    expect(level).toBe("high")
  })
})

// ============================================================================
// applyThinkingConfigToRequest() tests
// ============================================================================

describe("applyThinkingConfigToRequest", () => {
  // #given: A request body with generationConfig and Gemini 3 model with high budget
  // #when: applyThinkingConfigToRequest is called with ThinkingConfig
  // #then: It should set thinkingLevel to "high" (lowercase) and NOT set thinkingBudget
  it("should set thinkingLevel for Gemini 3 model", () => {
    const requestBody: Record<string, unknown> = {
      request: {
        generationConfig: {},
      },
    }
    const config: ThinkingConfig = { thinkingBudget: 24576, includeThoughts: true }

    applyThinkingConfigToRequest(requestBody, "gemini-3-pro", config)

    const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
    const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
    expect(thinkingConfig.thinkingLevel).toBe("high")
    expect(thinkingConfig.thinkingBudget).toBeUndefined()
    expect(thinkingConfig.include_thoughts).toBe(true)
  })

  // #given: A request body with generationConfig and Gemini 2.5 model with high budget
  // #when: applyThinkingConfigToRequest is called with ThinkingConfig
  // #then: It should set thinkingBudget to 24576 and NOT set thinkingLevel
  it("should set thinkingBudget for Gemini 2.5 model", () => {
    const requestBody: Record<string, unknown> = {
      request: {
        generationConfig: {},
      },
    }
    const config: ThinkingConfig = { thinkingBudget: 24576, includeThoughts: true }

    applyThinkingConfigToRequest(requestBody, "gemini-2.5-flash", config)

    const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
    const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
    expect(thinkingConfig.thinkingBudget).toBe(24576)
    expect(thinkingConfig.thinkingLevel).toBeUndefined()
    expect(thinkingConfig.include_thoughts).toBe(true)
  })

  // #given: A request body with existing thinkingConfig
  // #when: applyThinkingConfigToRequest is called with deleteThinkingConfig: true
  // #then: It should remove the thinkingConfig entirely
  it("should remove thinkingConfig when delete marker is set", () => {
    const requestBody: Record<string, unknown> = {
      request: {
        generationConfig: {
          thinkingConfig: {
            thinkingBudget: 16000,
            include_thoughts: true,
          },
        },
      },
    }

    applyThinkingConfigToRequest(requestBody, "gemini-3-pro", { deleteThinkingConfig: true })

    const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
    expect(genConfig.thinkingConfig).toBeUndefined()
  })

  // #given: A request body without request.generationConfig
  // #when: applyThinkingConfigToRequest is called
  // #then: It should not modify the body (graceful handling)
  it("should handle missing generationConfig gracefully", () => {
    const requestBody: Record<string, unknown> = {}

    applyThinkingConfigToRequest(requestBody, "gemini-2.5-flash", {
      thinkingBudget: 24576,
      includeThoughts: true,
    })

    expect(requestBody.request).toBeUndefined()
  })

  // #given: A request body and an unknown model
  // #when: applyThinkingConfigToRequest is called
  // #then: It should not set any thinking config (graceful handling)
  it("should handle unknown model gracefully", () => {
    const requestBody: Record<string, unknown> = {
      request: {
        generationConfig: {},
      },
    }

    applyThinkingConfigToRequest(requestBody, "unknown-model", {
      thinkingBudget: 24576,
      includeThoughts: true,
    })

    const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
    expect(genConfig.thinkingConfig).toBeUndefined()
  })

  // #given: A request body with Gemini 3 and budget that maps to "low" level
  // #when: applyThinkingConfigToRequest is called with uppercase level mapping
  // #then: It should convert to lowercase ("low")
  it("should convert uppercase level to lowercase", () => {
    const requestBody: Record<string, unknown> = {
      request: {
        generationConfig: {},
      },
    }
    const config: ThinkingConfig = { thinkingBudget: 1024, includeThoughts: true }

    applyThinkingConfigToRequest(requestBody, "gemini-3-pro", config)

    const genConfig = (requestBody.request as Record<string, unknown>).generationConfig as Record<string, unknown>
    const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
    expect(thinkingConfig.thinkingLevel).toBe("low")
    expect(thinkingConfig.thinkingLevel).not.toBe("LOW")
  })
})
@@ -1,755 +0,0 @@
/**
 * Antigravity Thinking Block Handler (Gemini only)
 *
 * Handles extraction and transformation of thinking/reasoning blocks
 * from Gemini responses. Thinking blocks contain the model's internal
 * reasoning process, available in `-high` model variants.
 *
 * Key responsibilities:
 * - Extract thinking blocks from Gemini response format
 * - Detect thinking-capable model variants (`-high` suffix)
 * - Format thinking blocks for OpenAI-compatible output
 *
 * Note: This is Gemini-only. Claude models are NOT handled by Antigravity.
 */

import {
  normalizeModelId,
  ANTIGRAVITY_MODEL_CONFIGS,
  REASONING_EFFORT_BUDGET_MAP,
  type AntigravityModelConfig,
} from "./constants"

/**
 * Represents a single thinking/reasoning block extracted from Gemini response
 */
export interface ThinkingBlock {
  /** The thinking/reasoning text content */
  text: string
  /** Optional signature for signed thinking blocks (required for multi-turn) */
  signature?: string
  /** Index of the thinking block in sequence */
  index?: number
}

/**
 * Raw part structure from Gemini response candidates
 */
export interface GeminiPart {
  /** Text content of the part */
  text?: string
  /** Whether this part is a thinking/reasoning block */
  thought?: boolean
  /** Signature for signed thinking blocks */
  thoughtSignature?: string
  /** Type field for Anthropic-style format */
  type?: string
  /** Signature field for Anthropic-style format */
  signature?: string
}

/**
 * Gemini response candidate structure
 */
export interface GeminiCandidate {
  /** Content containing parts */
  content?: {
    /** Role of the content (e.g., "model", "assistant") */
    role?: string
    /** Array of content parts */
    parts?: GeminiPart[]
  }
  /** Index of the candidate */
  index?: number
}

/**
 * Gemini response structure for thinking block extraction
 */
export interface GeminiResponse {
  /** Response ID */
  id?: string
  /** Array of response candidates */
  candidates?: GeminiCandidate[]
  /** Direct content (some responses use this instead of candidates) */
  content?: Array<{
    type?: string
    text?: string
    signature?: string
  }>
  /** Model used for response */
  model?: string
}

/**
 * Result of thinking block extraction
 */
export interface ThinkingExtractionResult {
  /** Extracted thinking blocks */
  thinkingBlocks: ThinkingBlock[]
  /** Combined thinking text for convenience */
  combinedThinking: string
  /** Whether any thinking blocks were found */
  hasThinking: boolean
}

/**
 * Default thinking budget in tokens for thinking-enabled models
 */
export const DEFAULT_THINKING_BUDGET = 16000

/**
 * Check if a model variant should include thinking blocks
 *
 * Returns true for model variants with `-high` suffix, which have
 * extended thinking capability enabled.
 *
 * Examples:
 * - `gemini-3-pro-high` → true
 * - `gemini-2.5-pro-high` → true
 * - `gemini-3-pro-preview` → false
 * - `gemini-2.5-pro` → false
 *
 * @param model - Model identifier string
 * @returns True if model should include thinking blocks
 */
export function shouldIncludeThinking(model: string): boolean {
  if (!model || typeof model !== "string") {
    return false
  }

  const lowerModel = model.toLowerCase()

  // Check for -high suffix (primary indicator of thinking capability)
  if (lowerModel.endsWith("-high")) {
    return true
  }

  // Also check for explicit thinking in model name
  if (lowerModel.includes("thinking")) {
    return true
  }

  return false
}

/**
 * Check if a model is thinking-capable (broader check)
 *
 * This is a broader check than shouldIncludeThinking - it detects models
 * that have thinking capability, even if not explicitly requesting thinking output.
 *
 * @param model - Model identifier string
 * @returns True if model supports thinking/reasoning
 */
export function isThinkingCapableModel(model: string): boolean {
  if (!model || typeof model !== "string") {
    return false
  }

  const lowerModel = model.toLowerCase()

  return (
    lowerModel.includes("thinking") ||
    lowerModel.includes("gemini-3") ||
    lowerModel.endsWith("-high")
  )
}

/**
 * Check if a part is a thinking/reasoning block
 *
 * Detects both Gemini-style (thought: true) and Anthropic-style
 * (type: "thinking" or type: "reasoning") formats.
 *
 * @param part - Content part to check
 * @returns True if part is a thinking block
 */
function isThinkingPart(part: GeminiPart): boolean {
  // Gemini-style: thought flag
  if (part.thought === true) {
    return true
  }

  // Anthropic-style: type field
  if (part.type === "thinking" || part.type === "reasoning") {
    return true
  }

  return false
}

/**
 * Check if a thinking part has a valid signature
 *
 * Signatures are required for multi-turn conversations with Claude models.
 * Gemini uses `thoughtSignature`, Anthropic uses `signature`.
 *
 * @param part - Thinking part to check
 * @returns True if part has valid signature
 */
function hasValidSignature(part: GeminiPart): boolean {
  // Gemini-style signature
  if (part.thought === true && part.thoughtSignature) {
    return true
  }

  // Anthropic-style signature
  if ((part.type === "thinking" || part.type === "reasoning") && part.signature) {
    return true
  }

  return false
}

/**
 * Extract thinking blocks from a Gemini response
 *
 * Parses the response structure to identify and extract all thinking/reasoning
 * content. Supports both Gemini-style (thought: true) and Anthropic-style
 * (type: "thinking") formats.
 *
 * @param response - Gemini response object
 * @returns Extraction result with thinking blocks and metadata
 */
export function extractThinkingBlocks(response: GeminiResponse): ThinkingExtractionResult {
  const thinkingBlocks: ThinkingBlock[] = []

  // Handle candidates array (standard Gemini format)
  if (response.candidates && Array.isArray(response.candidates)) {
    for (const candidate of response.candidates) {
      const parts = candidate.content?.parts
      if (!parts || !Array.isArray(parts)) {
        continue
      }

      for (let i = 0; i < parts.length; i++) {
        const part = parts[i]
        if (!part || typeof part !== "object") {
          continue
        }

        if (isThinkingPart(part)) {
          const block: ThinkingBlock = {
            text: part.text || "",
            index: thinkingBlocks.length,
          }

          // Extract signature if present
          if (part.thought === true && part.thoughtSignature) {
            block.signature = part.thoughtSignature
          } else if (part.signature) {
            block.signature = part.signature
          }

          thinkingBlocks.push(block)
        }
      }
    }
  }

  // Handle direct content array (Anthropic-style response)
  if (response.content && Array.isArray(response.content)) {
    for (let i = 0; i < response.content.length; i++) {
      const item = response.content[i]
      if (!item || typeof item !== "object") {
        continue
      }

      if (item.type === "thinking" || item.type === "reasoning") {
        thinkingBlocks.push({
          text: item.text || "",
          signature: item.signature,
          index: thinkingBlocks.length,
        })
      }
    }
  }

  // Combine all thinking text
  const combinedThinking = thinkingBlocks.map((b) => b.text).join("\n\n")

  return {
    thinkingBlocks,
    combinedThinking,
    hasThinking: thinkingBlocks.length > 0,
  }
}

/**
 * Format thinking blocks for OpenAI-compatible output
 *
 * Converts Gemini thinking block format to OpenAI's expected structure.
 * OpenAI expects thinking content as special message blocks or annotations.
 *
 * Output format:
 * ```
 * [
 *   { type: "reasoning", text: "thinking content...", signature?: "..." },
 *   ...
 * ]
 * ```
 *
 * @param thinking - Array of thinking blocks to format
 * @returns OpenAI-compatible formatted array
 */
export function formatThinkingForOpenAI(
  thinking: ThinkingBlock[],
): Array<{ type: "reasoning"; text: string; signature?: string }> {
  if (!thinking || !Array.isArray(thinking) || thinking.length === 0) {
    return []
  }

  return thinking.map((block) => {
    const formatted: { type: "reasoning"; text: string; signature?: string } = {
      type: "reasoning",
      text: block.text || "",
    }

    if (block.signature) {
      formatted.signature = block.signature
    }

    return formatted
  })
}

/**
 * Transform thinking parts in a candidate to OpenAI format
 *
 * Modifies candidate content parts to use OpenAI-style reasoning format
 * while preserving the rest of the response structure.
 *
 * @param candidate - Gemini candidate to transform
 * @returns Transformed candidate with reasoning-formatted thinking
 */
export function transformCandidateThinking(candidate: GeminiCandidate): GeminiCandidate {
  if (!candidate || typeof candidate !== "object") {
    return candidate
  }

  const content = candidate.content
  if (!content || typeof content !== "object" || !Array.isArray(content.parts)) {
    return candidate
  }

  const thinkingTexts: string[] = []
  const transformedParts = content.parts.map((part) => {
    if (part && typeof part === "object" && part.thought === true) {
      thinkingTexts.push(part.text || "")
      // Transform to reasoning format
      return {
        ...part,
        type: "reasoning" as const,
        thought: undefined, // Remove Gemini-specific field
      }
    }
    return part
  })

  const result: GeminiCandidate & { reasoning_content?: string } = {
    ...candidate,
    content: { ...content, parts: transformedParts },
  }

  // Add combined reasoning content for convenience
  if (thinkingTexts.length > 0) {
    result.reasoning_content = thinkingTexts.join("\n\n")
  }

  return result
}

/**
 * Transform Anthropic-style thinking blocks to reasoning format
 *
 * Converts `type: "thinking"` blocks to `type: "reasoning"` for consistency.
 *
 * @param content - Array of content blocks
 * @returns Transformed content array
 */
export function transformAnthropicThinking(
  content: Array<{ type?: string; text?: string; signature?: string }>,
): Array<{ type?: string; text?: string; signature?: string }> {
  if (!content || !Array.isArray(content)) {
    return content
  }

  return content.map((block) => {
    if (block && typeof block === "object" && block.type === "thinking") {
      return {
        type: "reasoning",
        text: block.text || "",
        ...(block.signature ? { signature: block.signature } : {}),
      }
    }
    return block
  })
}

/**
 * Filter out unsigned thinking blocks
 *
 * Claude API requires signed thinking blocks for multi-turn conversations.
 * This function removes thinking blocks without valid signatures.
 *
 * @param parts - Array of content parts
 * @returns Filtered array without unsigned thinking blocks
 */
export function filterUnsignedThinkingBlocks(parts: GeminiPart[]): GeminiPart[] {
  if (!parts || !Array.isArray(parts)) {
    return parts
  }

  return parts.filter((part) => {
    if (!part || typeof part !== "object") {
      return true
    }

    // If it's a thinking part, only keep it if signed
    if (isThinkingPart(part)) {
      return hasValidSignature(part)
    }

    // Keep all non-thinking parts
    return true
  })
}

/**
 * Transform entire response thinking parts
 *
 * Main transformation function that handles both Gemini-style and
 * Anthropic-style thinking blocks in a response.
 *
 * @param response - Response object to transform
 * @returns Transformed response with standardized reasoning format
 */
export function transformResponseThinking(response: GeminiResponse): GeminiResponse {
  if (!response || typeof response !== "object") {
    return response
  }

  const result: GeminiResponse = { ...response }

  // Transform candidates (Gemini-style)
  if (Array.isArray(result.candidates)) {
    result.candidates = result.candidates.map(transformCandidateThinking)
  }

  // Transform direct content (Anthropic-style)
  if (Array.isArray(result.content)) {
    result.content = transformAnthropicThinking(result.content)
  }

  return result
}

/**
 * Thinking configuration for requests
 */
export interface ThinkingConfig {
  /** Token budget for thinking/reasoning */
  thinkingBudget?: number
  /** Whether to include thoughts in response */
  includeThoughts?: boolean
}

/**
 * Normalize thinking configuration
 *
 * Ensures thinkingConfig is valid: includeThoughts only allowed when budget > 0.
 *
 * @param config - Raw thinking configuration
 * @returns Normalized configuration or undefined
 */
export function normalizeThinkingConfig(config: unknown): ThinkingConfig | undefined {
  if (!config || typeof config !== "object") {
    return undefined
  }

  const record = config as Record<string, unknown>
  const budgetRaw = record.thinkingBudget ?? record.thinking_budget
  const includeRaw = record.includeThoughts ?? record.include_thoughts

  const thinkingBudget =
    typeof budgetRaw === "number" && Number.isFinite(budgetRaw) ? budgetRaw : undefined
  const includeThoughts = typeof includeRaw === "boolean" ? includeRaw : undefined

  const enableThinking = thinkingBudget !== undefined && thinkingBudget > 0
  const finalInclude = enableThinking ? (includeThoughts ?? false) : false

  // Return undefined if no meaningful config
  if (
    !enableThinking &&
    finalInclude === false &&
    thinkingBudget === undefined &&
    includeThoughts === undefined
  ) {
    return undefined
  }

  const normalized: ThinkingConfig = {}
  if (thinkingBudget !== undefined) {
    normalized.thinkingBudget = thinkingBudget
  }
  if (finalInclude !== undefined) {
    normalized.includeThoughts = finalInclude
  }
  return normalized
}

/**
 * Extract thinking configuration from request payload
 *
 * Supports both Gemini-style thinkingConfig and Anthropic-style thinking options.
 * Also supports reasoning_effort parameter which maps to thinking budget/level.
 *
 * @param requestPayload - Request body
 * @param generationConfig - Generation config from request
 * @param extraBody - Extra body options
 * @returns Extracted thinking configuration or undefined
 */
export function extractThinkingConfig(
  requestPayload: Record<string, unknown>,
  generationConfig?: Record<string, unknown>,
  extraBody?: Record<string, unknown>,
): ThinkingConfig | DeleteThinkingConfig | undefined {
  // Check for explicit thinkingConfig
  const thinkingConfig =
    generationConfig?.thinkingConfig ?? extraBody?.thinkingConfig ?? requestPayload.thinkingConfig

  if (thinkingConfig && typeof thinkingConfig === "object") {
    const config = thinkingConfig as Record<string, unknown>
    return {
      includeThoughts: Boolean(config.includeThoughts),
      thinkingBudget:
        typeof config.thinkingBudget === "number" ? config.thinkingBudget : DEFAULT_THINKING_BUDGET,
    }
  }

  // Convert Anthropic-style "thinking" option: { type: "enabled", budgetTokens: N }
  const anthropicThinking = extraBody?.thinking ?? requestPayload.thinking
  if (anthropicThinking && typeof anthropicThinking === "object") {
    const thinking = anthropicThinking as Record<string, unknown>
    if (thinking.type === "enabled" || thinking.budgetTokens) {
      return {
        includeThoughts: true,
        thinkingBudget:
          typeof thinking.budgetTokens === "number"
            ? thinking.budgetTokens
            : DEFAULT_THINKING_BUDGET,
      }
    }
  }

  // Extract reasoning_effort parameter (maps to thinking budget/level)
  const reasoningEffort = requestPayload.reasoning_effort ?? extraBody?.reasoning_effort
  if (reasoningEffort && typeof reasoningEffort === "string") {
    const budget = REASONING_EFFORT_BUDGET_MAP[reasoningEffort]
    if (budget !== undefined) {
      if (reasoningEffort === "none") {
        // Special marker: delete thinkingConfig entirely
        return { deleteThinkingConfig: true }
      }
      return {
        includeThoughts: true,
        thinkingBudget: budget,
      }
    }
  }

  return undefined
}

/**
 * Resolve final thinking configuration based on model and context
 *
 * Handles special cases like Claude models requiring signed thinking blocks
 * for multi-turn conversations.
 *
 * @param userConfig - User-provided thinking configuration
 * @param isThinkingModel - Whether model supports thinking
 * @param isClaudeModel - Whether model is Claude (not used in Antigravity, but kept for compatibility)
 * @param hasAssistantHistory - Whether conversation has assistant history
 * @returns Final thinking configuration
 */
export function resolveThinkingConfig(
  userConfig: ThinkingConfig | undefined,
  isThinkingModel: boolean,
  isClaudeModel: boolean,
  hasAssistantHistory: boolean,
): ThinkingConfig | undefined {
  // Claude models with history need signed thinking blocks
  // Since we can't guarantee signatures, disable thinking
  if (isClaudeModel && hasAssistantHistory) {
    return { includeThoughts: false, thinkingBudget: 0 }
  }

  // Enable thinking by default for thinking-capable models
  if (isThinkingModel && !userConfig) {
    return { includeThoughts: true, thinkingBudget: DEFAULT_THINKING_BUDGET }
  }

  return userConfig
}

// ============================================================================
// Model Thinking Configuration (Task 2: reasoning_effort and Gemini 3 thinkingLevel)
// ============================================================================

/**
 * Get thinking config for a model by normalized ID.
 * Uses pattern matching fallback if exact match not found.
 *
 * @param model - Model identifier string (with or without provider prefix)
 * @returns Thinking configuration or undefined if not found
 */
export function getModelThinkingConfig(
  model: string,
): AntigravityModelConfig | undefined {
  const normalized = normalizeModelId(model)

  // Exact match
  if (ANTIGRAVITY_MODEL_CONFIGS[normalized]) {
    return ANTIGRAVITY_MODEL_CONFIGS[normalized]
  }

  // Pattern matching fallback for Gemini 3
  if (normalized.includes("gemini-3")) {
    return {
      thinkingType: "levels",
      min: 128,
      max: 32768,
      zeroAllowed: false,
      levels: ["low", "high"],
    }
  }

  // Pattern matching fallback for Gemini 2.5
  if (normalized.includes("gemini-2.5")) {
    return {
      thinkingType: "numeric",
      min: 0,
      max: 24576,
      zeroAllowed: true,
    }
  }

  // Pattern matching fallback for Claude via Antigravity
  if (normalized.includes("claude")) {
    return {
      thinkingType: "numeric",
      min: 1024,
      max: 200000,
      zeroAllowed: false,
    }
  }

  return undefined
}

/**
 * Type for the delete thinking config marker.
 * Used when reasoning_effort is "none" to signal complete removal.
 */
export interface DeleteThinkingConfig {
  deleteThinkingConfig: true
}

/**
 * Union type for thinking configuration input.
 */
export type ThinkingConfigInput = ThinkingConfig | DeleteThinkingConfig

/**
 * Convert thinking budget to closest level string for Gemini 3 models.
 *
 * @param budget - Thinking budget in tokens
 * @param model - Model identifier
 * @returns Level string ("low", "high", etc.) or "medium" fallback
 */
export function budgetToLevel(budget: number, model: string): string {
  const config = getModelThinkingConfig(model)

  // Default fallback
  if (!config?.levels) {
    return "medium"
  }

  // Map budgets to levels
  const budgetMap: Record<number, string> = {
    512: "minimal",
    1024: "low",
    8192: "medium",
    24576: "high",
  }

  // Return matching level or highest available
  if (budgetMap[budget]) {
    return budgetMap[budget]
  }

  return config.levels[config.levels.length - 1] || "high"
}

/**
 * Apply thinking config to request body.
 *
 * CRITICAL: Sets request.generationConfig.thinkingConfig (NOT outer body!)
 *
 * Handles:
 * - Gemini 3: Sets thinkingLevel (string)
 * - Gemini 2.5: Sets thinkingBudget (number)
 * - Delete marker: Removes thinkingConfig entirely
 *
 * @param requestBody - Request body to modify (mutates in place)
 * @param model - Model identifier
 * @param config - Thinking configuration or delete marker
 */
export function applyThinkingConfigToRequest(
  requestBody: Record<string, unknown>,
  model: string,
  config: ThinkingConfigInput,
): void {
  // Handle delete marker
  if ("deleteThinkingConfig" in config && config.deleteThinkingConfig) {
    if (requestBody.request && typeof requestBody.request === "object") {
      const req = requestBody.request as Record<string, unknown>
      if (req.generationConfig && typeof req.generationConfig === "object") {
        const genConfig = req.generationConfig as Record<string, unknown>
        delete genConfig.thinkingConfig
      }
    }
    return
  }

  const modelConfig = getModelThinkingConfig(model)
  if (!modelConfig) {
    return
  }

  // Ensure request.generationConfig.thinkingConfig exists
  if (!requestBody.request || typeof requestBody.request !== "object") {
|
||||
return
|
||||
}
|
||||
const req = requestBody.request as Record<string, unknown>
|
||||
if (!req.generationConfig || typeof req.generationConfig !== "object") {
|
||||
req.generationConfig = {}
|
||||
}
|
||||
const genConfig = req.generationConfig as Record<string, unknown>
|
||||
genConfig.thinkingConfig = {}
|
||||
const thinkingConfig = genConfig.thinkingConfig as Record<string, unknown>
|
||||
|
||||
thinkingConfig.include_thoughts = true
|
||||
|
||||
if (modelConfig.thinkingType === "numeric") {
|
||||
thinkingConfig.thinkingBudget = (config as ThinkingConfig).thinkingBudget
|
||||
} else if (modelConfig.thinkingType === "levels") {
|
||||
const budget = (config as ThinkingConfig).thinkingBudget ?? DEFAULT_THINKING_BUDGET
|
||||
let level = budgetToLevel(budget, model)
|
||||
// Convert uppercase to lowercase (think-mode hook sends "HIGH")
|
||||
level = level.toLowerCase()
|
||||
thinkingConfig.thinkingLevel = level
|
||||
}
|
||||
}
|
||||
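The budget-to-level mapping for "levels"-type models can be exercised in isolation. A minimal self-contained sketch (the `budgetMap` values match the source above; `sketchBudgetToLevel` is an illustrative standalone name, not part of the plugin):

```typescript
// Standalone sketch of the budget-to-level mapping used for Gemini 3
// "levels"-type models. Unknown budgets fall through to the highest
// level the model supports, mirroring the implementation above.
const budgetMap: Record<number, string> = {
  512: "minimal",
  1024: "low",
  8192: "medium",
  24576: "high",
}

function sketchBudgetToLevel(budget: number, levels: string[]): string {
  if (budgetMap[budget]) {
    return budgetMap[budget]
  }
  // No exact match: fall back to the highest available level
  return levels[levels.length - 1] || "high"
}
```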
@@ -1,97 +0,0 @@
/**
 * Thought Signature Store
 *
 * Stores and retrieves thought signatures for multi-turn conversations.
 * Gemini 3 Pro requires thought_signature on function call content blocks
 * in subsequent requests to maintain reasoning continuity.
 *
 * Key responsibilities:
 * - Store the latest thought signature per session
 * - Provide signature for injection into function call requests
 * - Clear signatures when sessions end
 */

/**
 * In-memory store for thought signatures indexed by session ID
 */
const signatureStore = new Map<string, string>()

/**
 * In-memory store for session IDs per fetch instance
 * Used to maintain consistent sessionId across multi-turn conversations
 */
const sessionIdStore = new Map<string, string>()

/**
 * Store a thought signature for a session
 *
 * @param sessionKey - Unique session identifier (typically fetch instance ID)
 * @param signature - The thought signature from model response
 */
export function setThoughtSignature(sessionKey: string, signature: string): void {
  if (sessionKey && signature) {
    signatureStore.set(sessionKey, signature)
  }
}

/**
 * Retrieve the stored thought signature for a session
 *
 * @param sessionKey - Unique session identifier
 * @returns The stored signature or undefined if not found
 */
export function getThoughtSignature(sessionKey: string): string | undefined {
  return signatureStore.get(sessionKey)
}

/**
 * Clear the thought signature for a session
 *
 * @param sessionKey - Unique session identifier
 */
export function clearThoughtSignature(sessionKey: string): void {
  signatureStore.delete(sessionKey)
}

/**
 * Store or retrieve a persistent session ID for a fetch instance
 *
 * @param fetchInstanceId - Unique identifier for the fetch instance
 * @param sessionId - Optional session ID to store (if not provided, returns existing or generates new)
 * @returns The session ID for this fetch instance
 */
export function getOrCreateSessionId(fetchInstanceId: string, sessionId?: string): string {
  if (sessionId) {
    sessionIdStore.set(fetchInstanceId, sessionId)
    return sessionId
  }

  const existing = sessionIdStore.get(fetchInstanceId)
  if (existing) {
    return existing
  }

  const n = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)
  const newSessionId = `-${n}`
  sessionIdStore.set(fetchInstanceId, newSessionId)
  return newSessionId
}

/**
 * Clear the session ID for a fetch instance
 *
 * @param fetchInstanceId - Unique identifier for the fetch instance
 */
export function clearSessionId(fetchInstanceId: string): void {
  sessionIdStore.delete(fetchInstanceId)
}

/**
 * Clear all stored data for a fetch instance (signature + session ID)
 *
 * @param fetchInstanceId - Unique identifier for the fetch instance
 */
export function clearFetchInstanceData(fetchInstanceId: string): void {
  signatureStore.delete(fetchInstanceId)
  sessionIdStore.delete(fetchInstanceId)
}
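The get-or-create behavior of the session-ID store is worth seeing end to end. A self-contained sketch (the `Map` backing and the negative numeric ID format follow the source; `getOrCreate` is an illustrative local name):

```typescript
// Minimal sketch of getOrCreateSessionId: reuse a stored ID when present,
// honor an explicitly supplied ID, otherwise mint a new negative numeric
// ID and remember it for subsequent turns.
const store = new Map<string, string>()

function getOrCreate(fetchInstanceId: string, sessionId?: string): string {
  if (sessionId) {
    store.set(fetchInstanceId, sessionId)
    return sessionId
  }
  const existing = store.get(fetchInstanceId)
  if (existing) return existing
  const fresh = `-${Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)}`
  store.set(fetchInstanceId, fresh)
  return fresh
}
```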
@@ -1,78 +0,0 @@
import { describe, it, expect } from "bun:test"
import { isTokenExpired } from "./token"
import type { AntigravityTokens } from "./types"

describe("Token Expiry with 60-second Buffer", () => {
  const createToken = (expiresInSeconds: number): AntigravityTokens => ({
    type: "antigravity",
    access_token: "test-access",
    refresh_token: "test-refresh",
    expires_in: expiresInSeconds,
    timestamp: Date.now(),
  })

  it("should NOT be expired if token expires in 2 minutes", () => {
    // #given
    const twoMinutes = 2 * 60
    const token = createToken(twoMinutes)

    // #when
    const expired = isTokenExpired(token)

    // #then
    expect(expired).toBe(false)
  })

  it("should be expired if token expires in 30 seconds", () => {
    // #given
    const thirtySeconds = 30
    const token = createToken(thirtySeconds)

    // #when
    const expired = isTokenExpired(token)

    // #then
    expect(expired).toBe(true)
  })

  it("should be expired at exactly 60 seconds (boundary)", () => {
    // #given
    const sixtySeconds = 60
    const token = createToken(sixtySeconds)

    // #when
    const expired = isTokenExpired(token)

    // #then - at boundary, should trigger refresh
    expect(expired).toBe(true)
  })

  it("should be expired if token already expired", () => {
    // #given
    const alreadyExpired: AntigravityTokens = {
      type: "antigravity",
      access_token: "test-access",
      refresh_token: "test-refresh",
      expires_in: 3600,
      timestamp: Date.now() - 4000 * 1000,
    }

    // #when
    const expired = isTokenExpired(alreadyExpired)

    // #then
    expect(expired).toBe(true)
  })

  it("should NOT be expired if token has plenty of time", () => {
    // #given
    const twoHours = 2 * 60 * 60
    const token = createToken(twoHours)

    // #when
    const expired = isTokenExpired(token)

    // #then
    expect(expired).toBe(false)
  })
})
@@ -1,213 +0,0 @@
import {
  ANTIGRAVITY_CLIENT_ID,
  ANTIGRAVITY_CLIENT_SECRET,
  ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS,
  GOOGLE_TOKEN_URL,
} from "./constants"
import type {
  AntigravityRefreshParts,
  AntigravityTokenExchangeResult,
  AntigravityTokens,
  OAuthErrorPayload,
  ParsedOAuthError,
} from "./types"

export class AntigravityTokenRefreshError extends Error {
  code?: string
  description?: string
  status: number
  statusText: string
  responseBody?: string

  constructor(options: {
    message: string
    code?: string
    description?: string
    status: number
    statusText: string
    responseBody?: string
  }) {
    super(options.message)
    this.name = "AntigravityTokenRefreshError"
    this.code = options.code
    this.description = options.description
    this.status = options.status
    this.statusText = options.statusText
    this.responseBody = options.responseBody
  }

  get isInvalidGrant(): boolean {
    return this.code === "invalid_grant"
  }

  get isNetworkError(): boolean {
    return this.status === 0
  }
}

function parseOAuthErrorPayload(text: string | undefined): ParsedOAuthError {
  if (!text) {
    return {}
  }

  try {
    const payload = JSON.parse(text) as OAuthErrorPayload
    let code: string | undefined

    if (typeof payload.error === "string") {
      code = payload.error
    } else if (payload.error && typeof payload.error === "object") {
      code = payload.error.status ?? payload.error.code
    }

    return {
      code,
      description: payload.error_description,
    }
  } catch {
    return { description: text }
  }
}

export function isTokenExpired(tokens: AntigravityTokens): boolean {
  const expirationTime = tokens.timestamp + tokens.expires_in * 1000
  return Date.now() >= expirationTime - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
}

const MAX_REFRESH_RETRIES = 3
const INITIAL_RETRY_DELAY_MS = 1000

function calculateRetryDelay(attempt: number): number {
  return Math.min(INITIAL_RETRY_DELAY_MS * Math.pow(2, attempt), 10000)
}

function isRetryableError(status: number): boolean {
  if (status === 0) return true
  if (status === 429) return true
  if (status >= 500 && status < 600) return true
  return false
}

export async function refreshAccessToken(
  refreshToken: string,
  clientId: string = ANTIGRAVITY_CLIENT_ID,
  clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
): Promise<AntigravityTokenExchangeResult> {
  const params = new URLSearchParams({
    grant_type: "refresh_token",
    refresh_token: refreshToken,
    client_id: clientId,
    client_secret: clientSecret,
  })

  let lastError: AntigravityTokenRefreshError | undefined

  for (let attempt = 0; attempt <= MAX_REFRESH_RETRIES; attempt++) {
    try {
      const response = await fetch(GOOGLE_TOKEN_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/x-www-form-urlencoded",
        },
        body: params,
      })

      if (response.ok) {
        const data = (await response.json()) as {
          access_token: string
          refresh_token?: string
          expires_in: number
          token_type: string
        }

        return {
          access_token: data.access_token,
          refresh_token: data.refresh_token || refreshToken,
          expires_in: data.expires_in,
          token_type: data.token_type,
        }
      }

      const responseBody = await response.text().catch(() => undefined)
      const parsed = parseOAuthErrorPayload(responseBody)

      lastError = new AntigravityTokenRefreshError({
        message: parsed.description || `Token refresh failed: ${response.status} ${response.statusText}`,
        code: parsed.code,
        description: parsed.description,
        status: response.status,
        statusText: response.statusText,
        responseBody,
      })

      if (parsed.code === "invalid_grant") {
        throw lastError
      }

      if (!isRetryableError(response.status)) {
        throw lastError
      }

      if (attempt < MAX_REFRESH_RETRIES) {
        const delay = calculateRetryDelay(attempt)
        await new Promise((resolve) => setTimeout(resolve, delay))
      }
    } catch (error) {
      if (error instanceof AntigravityTokenRefreshError) {
        throw error
      }

      lastError = new AntigravityTokenRefreshError({
        message: error instanceof Error ? error.message : "Network error during token refresh",
        status: 0,
        statusText: "Network Error",
      })

      if (attempt < MAX_REFRESH_RETRIES) {
        const delay = calculateRetryDelay(attempt)
        await new Promise((resolve) => setTimeout(resolve, delay))
      }
    }
  }

  throw lastError || new AntigravityTokenRefreshError({
    message: "Token refresh failed after all retries",
    status: 0,
    statusText: "Max Retries Exceeded",
  })
}

/**
 * Parse a stored token string into its component parts.
 * Storage format: `refreshToken|projectId|managedProjectId`
 *
 * @param stored - The pipe-separated stored token string
 * @returns Parsed refresh parts with refreshToken, projectId, and optional managedProjectId
 */
export function parseStoredToken(stored: string): AntigravityRefreshParts {
  const parts = stored.split("|")
  const [refreshToken, projectId, managedProjectId] = parts

  return {
    refreshToken: refreshToken || "",
    projectId: projectId || undefined,
    managedProjectId: managedProjectId || undefined,
  }
}

/**
 * Format token components for storage.
 * Creates a pipe-separated string: `refreshToken|projectId|managedProjectId`
 *
 * @param refreshToken - The refresh token
 * @param projectId - The GCP project ID
 * @param managedProjectId - Optional managed project ID for enterprise users
 * @returns Formatted string for storage
 */
export function formatTokenForStorage(
  refreshToken: string,
  projectId: string,
  managedProjectId?: string
): string {
  return `${refreshToken}|${projectId}|${managedProjectId || ""}`
}
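The pipe-separated token storage format above is designed to round-trip: formatting then parsing must recover the original parts, with an absent `managedProjectId` collapsing to `undefined`. A self-contained sketch of both halves (`format`/`parse` are local illustrative names for the source's `formatTokenForStorage`/`parseStoredToken`):

```typescript
// Sketch of the `refreshToken|projectId|managedProjectId` storage format.
// A missing managedProjectId is stored as an empty trailing segment and
// parsed back to undefined.
function format(refreshToken: string, projectId: string, managedProjectId?: string): string {
  return `${refreshToken}|${projectId}|${managedProjectId || ""}`
}

function parse(stored: string): { refreshToken: string; projectId?: string; managedProjectId?: string } {
  const [refreshToken, projectId, managedProjectId] = stored.split("|")
  return {
    refreshToken: refreshToken || "",
    projectId: projectId || undefined,
    managedProjectId: managedProjectId || undefined,
  }
}
```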
@@ -1,243 +0,0 @@
/**
 * Antigravity Tool Normalization
 * Converts tools between OpenAI and Gemini formats.
 *
 * OpenAI format:
 * { "type": "function", "function": { "name": "x", "description": "...", "parameters": {...} } }
 *
 * Gemini format:
 * { "functionDeclarations": [{ "name": "x", "description": "...", "parameters": {...} }] }
 *
 * Note: This is for Gemini models ONLY. Claude models are not supported via Antigravity.
 */

/**
 * OpenAI function tool format
 */
export interface OpenAITool {
  type: string
  function?: {
    name: string
    description?: string
    parameters?: Record<string, unknown>
  }
}

/**
 * Gemini function declaration format
 */
export interface GeminiFunctionDeclaration {
  name: string
  description?: string
  parameters?: Record<string, unknown>
}

/**
 * Gemini tools format (array of functionDeclarations)
 */
export interface GeminiTools {
  functionDeclarations: GeminiFunctionDeclaration[]
}

/**
 * OpenAI tool call in response
 */
export interface OpenAIToolCall {
  id: string
  type: "function"
  function: {
    name: string
    arguments: string
  }
}

/**
 * Gemini function call in response
 */
export interface GeminiFunctionCall {
  name: string
  args: Record<string, unknown>
}

/**
 * Gemini function response format
 */
export interface GeminiFunctionResponse {
  name: string
  response: Record<string, unknown>
}

/**
 * Gemini tool result containing function calls
 */
export interface GeminiToolResult {
  functionCall?: GeminiFunctionCall
  functionResponse?: GeminiFunctionResponse
}

/**
 * Normalize OpenAI-format tools to Gemini format.
 * Converts an array of OpenAI tools to Gemini's functionDeclarations format.
 *
 * - Handles `function` type tools with name, description, parameters
 * - Logs warning for unsupported tool types (does NOT silently drop them)
 * - Creates a single object with functionDeclarations array
 *
 * @param tools - Array of OpenAI-format tools
 * @returns Gemini-format tools object with functionDeclarations, or undefined if no valid tools
 */
export function normalizeToolsForGemini(
  tools: OpenAITool[]
): GeminiTools | undefined {
  if (!tools || tools.length === 0) {
    return undefined
  }

  const functionDeclarations: GeminiFunctionDeclaration[] = []

  for (const tool of tools) {
    if (!tool || typeof tool !== "object") {
      continue
    }

    const toolType = tool.type ?? "function"
    if (toolType === "function" && tool.function) {
      const declaration: GeminiFunctionDeclaration = {
        name: tool.function.name,
      }

      if (tool.function.description) {
        declaration.description = tool.function.description
      }

      if (tool.function.parameters) {
        declaration.parameters = tool.function.parameters
      } else {
        declaration.parameters = { type: "object", properties: {} }
      }

      functionDeclarations.push(declaration)
    } else if (toolType !== "function" && process.env.ANTIGRAVITY_DEBUG === "1") {
      console.warn(
        `[antigravity-tools] Unsupported tool type: "${toolType}". Tool will be skipped.`
      )
    }
  }

  // Return undefined if no valid function declarations
  if (functionDeclarations.length === 0) {
    return undefined
  }

  return { functionDeclarations }
}

/**
 * Convert Gemini tool results (functionCall) back to OpenAI tool_call format.
 * Handles both functionCall (request) and functionResponse (result) formats.
 *
 * Gemini functionCall format:
 * { "name": "tool_name", "args": { ... } }
 *
 * OpenAI tool_call format:
 * { "id": "call_xxx", "type": "function", "function": { "name": "tool_name", "arguments": "..." } }
 *
 * @param results - Array of Gemini tool results containing functionCall or functionResponse
 * @returns Array of OpenAI-format tool calls
 */
export function normalizeToolResultsFromGemini(
  results: GeminiToolResult[]
): OpenAIToolCall[] {
  if (!results || results.length === 0) {
    return []
  }

  const toolCalls: OpenAIToolCall[] = []
  let callCounter = 0

  for (const result of results) {
    // Handle functionCall (tool invocation from model)
    if (result.functionCall) {
      callCounter++
      const toolCall: OpenAIToolCall = {
        id: `call_${Date.now()}_${callCounter}`,
        type: "function",
        function: {
          name: result.functionCall.name,
          arguments: JSON.stringify(result.functionCall.args ?? {}),
        },
      }
      toolCalls.push(toolCall)
    }
  }

  return toolCalls
}

/**
 * Convert a single Gemini functionCall to OpenAI tool_call format.
 * Useful for streaming responses where each chunk may contain a function call.
 *
 * @param functionCall - Gemini function call
 * @param id - Optional tool call ID (generates one if not provided)
 * @returns OpenAI-format tool call
 */
export function convertFunctionCallToToolCall(
  functionCall: GeminiFunctionCall,
  id?: string
): OpenAIToolCall {
  return {
    id: id ?? `call_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`,
    type: "function",
    function: {
      name: functionCall.name,
      arguments: JSON.stringify(functionCall.args ?? {}),
    },
  }
}

/**
 * Check if a tool array contains any function-type tools.
 *
 * @param tools - Array of OpenAI-format tools
 * @returns true if there are function tools to normalize
 */
export function hasFunctionTools(tools: OpenAITool[]): boolean {
  if (!tools || tools.length === 0) {
    return false
  }

  return tools.some((tool) => tool.type === "function" && tool.function)
}

/**
 * Extract function declarations from already-normalized Gemini tools.
 * Useful when tools may already be in Gemini format.
 *
 * @param tools - Tools that may be in Gemini or OpenAI format
 * @returns Array of function declarations
 */
export function extractFunctionDeclarations(
  tools: unknown
): GeminiFunctionDeclaration[] {
  if (!tools || typeof tools !== "object") {
    return []
  }

  // Check if already in Gemini format
  const geminiTools = tools as Record<string, unknown>
  if (
    Array.isArray(geminiTools.functionDeclarations) &&
    geminiTools.functionDeclarations.length > 0
  ) {
    return geminiTools.functionDeclarations as GeminiFunctionDeclaration[]
  }

  // Check if it's an array of OpenAI tools
  if (Array.isArray(tools)) {
    const normalized = normalizeToolsForGemini(tools as OpenAITool[])
    return normalized?.functionDeclarations ?? []
  }

  return []
}
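The OpenAI-to-Gemini tool conversion is a pure data transform, which makes it easy to sketch standalone. A trimmed, self-contained version (types inlined, the debug-warning path omitted; `toGemini` is an illustrative local name for `normalizeToolsForGemini`):

```typescript
// Trimmed sketch of normalizeToolsForGemini: lift each OpenAI
// `function`-type tool into Gemini's functionDeclarations array,
// defaulting missing parameters to an empty object schema.
// Non-function tool types are skipped.
interface Decl { name: string; description?: string; parameters?: Record<string, unknown> }

function toGemini(tools: Array<{ type?: string; function?: Decl }>): { functionDeclarations: Decl[] } | undefined {
  const functionDeclarations: Decl[] = []
  for (const tool of tools) {
    if ((tool.type ?? "function") === "function" && tool.function) {
      functionDeclarations.push({
        name: tool.function.name,
        ...(tool.function.description ? { description: tool.function.description } : {}),
        parameters: tool.function.parameters ?? { type: "object", properties: {} },
      })
    }
  }
  return functionDeclarations.length > 0 ? { functionDeclarations } : undefined
}
```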
@@ -1,244 +0,0 @@
/**
 * Antigravity Auth Type Definitions
 * Matches cliproxyapi/sdk/auth/antigravity.go token format exactly
 */

/**
 * Token storage format for Antigravity authentication
 * Matches Go metadata structure: type, access_token, refresh_token, expires_in, timestamp, email, project_id
 */
export interface AntigravityTokens {
  /** Always "antigravity" for this auth type */
  type: "antigravity"
  /** OAuth access token from Google */
  access_token: string
  /** OAuth refresh token from Google */
  refresh_token: string
  /** Token expiration time in seconds */
  expires_in: number
  /** Unix timestamp in milliseconds when tokens were obtained */
  timestamp: number
  /** ISO 8601 formatted expiration datetime (optional, for display) */
  expired?: string
  /** User's email address from Google userinfo */
  email?: string
  /** GCP project ID from loadCodeAssist API */
  project_id?: string
}

/**
 * Project context returned from loadCodeAssist API
 * Used to get cloudaicompanionProject for API calls
 */
export interface AntigravityProjectContext {
  /** GCP project ID for Cloud AI Companion */
  cloudaicompanionProject?: string
  /** Managed project ID for enterprise users (optional) */
  managedProjectId?: string
}

/**
 * Metadata for loadCodeAssist API request
 */
export interface AntigravityClientMetadata {
  /** IDE type identifier */
  ideType: "IDE_UNSPECIFIED" | string
  /** Platform identifier */
  platform: "PLATFORM_UNSPECIFIED" | string
  /** Plugin type - typically "GEMINI" */
  pluginType: "GEMINI" | string
}

/**
 * Request body for loadCodeAssist API
 */
export interface AntigravityLoadCodeAssistRequest {
  metadata: AntigravityClientMetadata
}

export interface AntigravityUserTier {
  id?: string
  isDefault?: boolean
  userDefinedCloudaicompanionProject?: boolean
}

export interface AntigravityLoadCodeAssistResponse {
  cloudaicompanionProject?: string | { id: string }
  currentTier?: { id?: string }
  allowedTiers?: AntigravityUserTier[]
}

export interface AntigravityOnboardUserPayload {
  done?: boolean
  response?: {
    cloudaicompanionProject?: { id?: string }
  }
}

/**
 * Request body format for Antigravity API calls
 * Wraps the actual request with project and model context
 */
export interface AntigravityRequestBody {
  project: string
  model: string
  userAgent: string
  requestType: string
  requestId: string
  request: Record<string, unknown>
}

/**
 * Response format from Antigravity API
 * Follows OpenAI-compatible structure with Gemini extensions
 */
export interface AntigravityResponse {
  /** Response ID */
  id?: string
  /** Object type (e.g., "chat.completion") */
  object?: string
  /** Creation timestamp */
  created?: number
  /** Model used for response */
  model?: string
  /** Response choices */
  choices?: AntigravityResponseChoice[]
  /** Token usage statistics */
  usage?: AntigravityUsage
  /** Error information if request failed */
  error?: AntigravityError
}

/**
 * Single response choice in Antigravity response
 */
export interface AntigravityResponseChoice {
  /** Choice index */
  index: number
  /** Message content */
  message?: {
    role: "assistant"
    content?: string
    tool_calls?: AntigravityToolCall[]
  }
  /** Delta for streaming responses */
  delta?: {
    role?: "assistant"
    content?: string
    tool_calls?: AntigravityToolCall[]
  }
  /** Finish reason */
  finish_reason?: "stop" | "tool_calls" | "length" | "content_filter" | null
}

/**
 * Tool call in Antigravity response
 */
export interface AntigravityToolCall {
  id: string
  type: "function"
  function: {
    name: string
    arguments: string
  }
}

/**
 * Token usage statistics
 */
export interface AntigravityUsage {
  prompt_tokens: number
  completion_tokens: number
  total_tokens: number
}

/**
 * Error response from Antigravity API
 */
export interface AntigravityError {
  message: string
  type?: string
  code?: string | number
}

/**
 * Token exchange result from Google OAuth
 * Matches antigravityTokenResponse in Go
 */
export interface AntigravityTokenExchangeResult {
  access_token: string
  refresh_token: string
  expires_in: number
  token_type: string
}

/**
 * User info from Google userinfo API
 */
export interface AntigravityUserInfo {
  email: string
  name?: string
  picture?: string
}

/**
 * Parsed refresh token parts
 * Format: refreshToken|projectId|managedProjectId
 */
export interface AntigravityRefreshParts {
  refreshToken: string
  projectId?: string
  managedProjectId?: string
}

/**
 * OAuth error payload from Google
 * Google returns errors in multiple formats, this handles all of them
 */
export interface OAuthErrorPayload {
  error?: string | { status?: string; code?: string; message?: string }
  error_description?: string
}

/**
 * Parsed OAuth error with normalized fields
 */
export interface ParsedOAuthError {
  code?: string
  description?: string
}

/**
 * Multi-account support types
 */

/** All model families for rate limit tracking */
export const MODEL_FAMILIES = ["claude", "gemini-flash", "gemini-pro"] as const

/** Model family for rate limit tracking */
export type ModelFamily = (typeof MODEL_FAMILIES)[number]

/** Account tier for prioritization */
export type AccountTier = "free" | "paid"

/** Rate limit state per model family (Unix timestamps in ms) */
export type RateLimitState = Partial<Record<ModelFamily, number>>

/** Account metadata for storage */
export interface AccountMetadata {
  email: string
  tier: AccountTier
  refreshToken: string
  projectId: string
  managedProjectId?: string
  accessToken: string
  expiresAt: number
  rateLimits: RateLimitState
}

/** Storage schema for persisting multiple accounts */
export interface AccountStorage {
  version: number
  accounts: AccountMetadata[]
  activeIndex: number
}
@@ -1,93 +0,0 @@
import { loadAccounts, saveAccounts } from "../../auth/antigravity/storage"
import type { AccountStorage } from "../../auth/antigravity/types"

export async function listAccounts(): Promise<number> {
  const accounts = await loadAccounts()

  if (!accounts || accounts.accounts.length === 0) {
    console.log("No accounts found.")
    console.log("Run 'opencode auth login' and select Google (Antigravity) to add accounts.")
    return 0
  }

  console.log(`\nGoogle Antigravity Accounts (${accounts.accounts.length}/10):\n`)

  for (let i = 0; i < accounts.accounts.length; i++) {
    const acc = accounts.accounts[i]
    const isActive = i === accounts.activeIndex
    const activeMarker = isActive ? "* " : "  "

    console.log(`${activeMarker}[${i}] ${acc.email || "Unknown"}`)
    console.log(`  Tier: ${acc.tier || "free"}`)

    const rateLimits = acc.rateLimits || {}
    const now = Date.now()
    const limited: string[] = []

    if (rateLimits.claude && rateLimits.claude > now) {
      const mins = Math.ceil((rateLimits.claude - now) / 60000)
      limited.push(`claude (${mins}m)`)
    }
    if (rateLimits["gemini-flash"] && rateLimits["gemini-flash"] > now) {
      const mins = Math.ceil((rateLimits["gemini-flash"] - now) / 60000)
      limited.push(`gemini-flash (${mins}m)`)
    }
    if (rateLimits["gemini-pro"] && rateLimits["gemini-pro"] > now) {
      const mins = Math.ceil((rateLimits["gemini-pro"] - now) / 60000)
      limited.push(`gemini-pro (${mins}m)`)
    }

    if (limited.length > 0) {
      console.log(`  Rate limited: ${limited.join(", ")}`)
    }

    console.log()
  }

  return 0
}

export async function removeAccount(indexOrEmail: string): Promise<number> {
  const accounts = await loadAccounts()

  if (!accounts || accounts.accounts.length === 0) {
    console.error("No accounts found.")
    return 1
  }

  let index: number

  const parsedIndex = Number(indexOrEmail)
  if (Number.isInteger(parsedIndex) && String(parsedIndex) === indexOrEmail) {
    index = parsedIndex
  } else {
    index = accounts.accounts.findIndex((acc) => acc.email === indexOrEmail)
    if (index === -1) {
      console.error(`Account not found: ${indexOrEmail}`)
      return 1
    }
  }

  if (index < 0 || index >= accounts.accounts.length) {
    console.error(`Invalid index: ${index}. Valid range: 0-${accounts.accounts.length - 1}`)
    return 1
  }

  const removed = accounts.accounts[index]
  accounts.accounts.splice(index, 1)

  if (accounts.accounts.length === 0) {
    accounts.activeIndex = -1
  } else if (accounts.activeIndex >= accounts.accounts.length) {
    accounts.activeIndex = accounts.accounts.length - 1
  } else if (accounts.activeIndex > index) {
    accounts.activeIndex--
  }

  await saveAccounts(accounts)

  console.log(`Removed account: ${removed.email || "Unknown"} (index ${index})`)
  console.log(`Remaining accounts: ${accounts.accounts.length}`)

  return 0
}
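The active-index bookkeeping in `removeAccount` is easy to misread. This standalone sketch (helper name and signature invented for illustration) isolates the same three-branch rule as a pure function over the post-splice state:

```typescript
// Mirrors removeAccount's adjustment: given the active index before removal,
// the removed index, and the number of accounts remaining after the splice,
// return the new active index.
function adjustActiveIndex(activeIndex: number, removedIndex: number, remaining: number): number {
  if (remaining === 0) return -1                     // no accounts left
  if (activeIndex >= remaining) return remaining - 1 // active fell off the end
  if (activeIndex > removedIndex) return activeIndex - 1 // shifted left by splice
  return activeIndex                                 // unaffected
}
```

Note that when the active account itself is removed from the tail, the first surviving branch clamps to the last remaining account, which is what the CLI help text below means by "switch to the next available account".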
@@ -267,10 +267,6 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,
    $schema: "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
  }

  if (installConfig.hasGemini) {
    config.google_auth = false
  }

  const agents: Record<string, Record<string, unknown>> = {}

  if (!installConfig.hasClaude) {
@@ -642,7 +638,6 @@ export function addProviderConfig(config: InstallConfig): ConfigMergeResult {
}

interface OmoConfigData {
  google_auth?: boolean
  agents?: Record<string, { model?: string }>
}
@@ -713,9 +708,6 @@ export function detectCurrentConfig(): DetectedConfig {
      result.hasChatGPT = false
    }

    if (omoConfig.google_auth === false) {
      result.hasGemini = plugins.some((p) => p.startsWith("opencode-antigravity-auth"))
    }
  } catch {
    /* intentionally empty - malformed omo config returns defaults from opencode config detection */
  }
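The detection rule above can be restated as a pure function. This is an illustrative sketch, not code from the repo: the function name and the `fallback` parameter (standing in for whatever the earlier opencode-config detection produced) are invented.

```typescript
// When builtin auth is explicitly disabled, Gemini support hinges on the
// external opencode-antigravity-auth plugin being installed; otherwise the
// earlier detection result stands unchanged.
function detectGemini(googleAuth: boolean | undefined, plugins: string[], fallback: boolean): boolean {
  if (googleAuth === false) {
    return plugins.some((p) => p.startsWith("opencode-antigravity-auth"))
  }
  return fallback
}
```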
@@ -4,7 +4,6 @@ import { install } from "./install"
import { run } from "./run"
import { getLocalVersion } from "./get-local-version"
import { doctor } from "./doctor"
import { listAccounts, removeAccount } from "./commands/auth"
import type { InstallArgs } from "./types"
import type { RunOptions } from "./run"
import type { GetLocalVersionOptions } from "./get-local-version/types"
@@ -135,45 +134,6 @@ Categories:
    process.exit(exitCode)
  })

const authCommand = program
  .command("auth")
  .description("Manage Google Antigravity accounts")

authCommand
  .command("list")
  .description("List all Google Antigravity accounts")
  .addHelpText("after", `
Examples:
  $ bunx oh-my-opencode auth list

Shows:
  - Account index and email
  - Account tier (free/paid)
  - Active account (marked with *)
  - Rate limit status per model family
`)
  .action(async () => {
    const exitCode = await listAccounts()
    process.exit(exitCode)
  })

authCommand
  .command("remove <index-or-email>")
  .description("Remove an account by index or email")
  .addHelpText("after", `
Examples:
  $ bunx oh-my-opencode auth remove 0
  $ bunx oh-my-opencode auth remove user@example.com

Note:
  - Use 'auth list' to see account indices
  - Removing the active account will switch to the next available account
`)
  .action(async (indexOrEmail: string) => {
    const exitCode = await removeAccount(indexOrEmail)
    process.exit(exitCode)
  })

program
  .command("version")
  .description("Show version information")
@@ -311,7 +311,6 @@ export const OhMyOpenCodeConfigSchema = z.object({
  agents: AgentOverridesSchema.optional(),
  categories: CategoriesConfigSchema.optional(),
  claude_code: ClaudeCodeConfigSchema.optional(),
  google_auth: z.boolean().optional(),
  sisyphus_agent: SisyphusAgentConfigSchema.optional(),
  comment_checker: CommentCheckerConfigSchema.optional(),
  experimental: ExperimentalConfigSchema.optional(),
@@ -1,8 +0,0 @@
import type { Plugin } from "@opencode-ai/plugin"
import { createGoogleAntigravityAuthPlugin } from "./auth/antigravity"

const GoogleAntigravityAuthPlugin: Plugin = async (ctx) => {
  return createGoogleAntigravityAuthPlugin(ctx)
}

export default GoogleAntigravityAuthPlugin
@@ -36,7 +36,6 @@ import {
  createContextInjectorHook,
  createContextInjectorMessagesTransformHook,
} from "./features/context-injector";
import { createGoogleAntigravityAuthPlugin } from "./auth/antigravity";
import { applyAgentVariant, resolveAgentVariant } from "./shared/agent-variant";
import { createFirstMessageVariantGate } from "./shared/first-message-variant";
import {
@@ -293,10 +292,6 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
    ? createAutoSlashCommandHook({ skills: mergedSkills })
    : null;

  const googleAuthHooks = pluginConfig.google_auth !== false
    ? await createGoogleAntigravityAuthPlugin(ctx)
    : null;

  const configHandler = createConfigHandler({
    ctx,
    pluginConfig,
@@ -304,8 +299,6 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
  });

  return {
    ...(googleAuthHooks ? { auth: googleAuthHooks.auth } : {}),

    tool: {
      ...builtinTools,
      ...backgroundTools,