What happened at Vercel this month worked mostly because of the default settings in Google (and Microsoft) identity systems. By default, employees can authorize third-party apps to access their account without IT oversight. By default, the OAuth tokens those apps get live a long time and work even after password resets. By default, Google doesn’t guarantee that it will challenge a user if they suddenly appear from a new country or from a new device.
Nothing in the public reporting suggests Vercel had tightened any of those defaults. Below are the admin console settings that would have closed some of those gaps.
The chain has three points we’ll walk through:
- Initial endpoint compromise that started it
- OAuth request that gave the attacker a foothold in Vercel’s Workspace
- Pivot from that Workspace into Vercel’s infrastructure
Initial compromise of personal endpoint
What happened
The attack started when a Context.ai employee downloaded a malicious Roblox cheat on their personal device. Context.ai’s own breach statement confirms that the attacker then gained access to its AWS environment, which contained customer OAuth tokens for its AI Office Suite product, including an OAuth token belonging to a Vercel employee.
What could have helped
User education and corporate policy. Policy should prohibit access to work resources on personal devices, and users should receive regular phishing and malware awareness training.
Conditional access policies. Where education and policy fail, access controls remove the human element. Restrict access to corporate resources outside of a VPN, registered device, or geofence where possible.
- GWS — https://knowledge.workspace.google.com/admin/security/protect-your-business-with-context-aware-access
- Entra ID — https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-conditional-access
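The decision logic behind a conditional access policy is simple to reason about. A minimal sketch of the evaluation, using hypothetical network ranges and a hypothetical device registry (not any real Google or Microsoft API):

```python
from ipaddress import ip_address, ip_network

# Hypothetical policy inputs -- a real deployment pulls these from the IdP.
CORP_VPN_RANGES = [ip_network("10.8.0.0/16")]
REGISTERED_DEVICES = {"laptop-4821", "laptop-5177"}
ALLOWED_COUNTRIES = {"US", "DE"}

def access_allowed(source_ip: str, device_id: str, country: str) -> bool:
    """Deny unless the request comes from the VPN, a registered device,
    and an expected country. All three checks must pass."""
    on_vpn = any(ip_address(source_ip) in net for net in CORP_VPN_RANGES)
    return on_vpn and device_id in REGISTERED_DEVICES and country in ALLOWED_COUNTRIES

# A stolen token replayed from an attacker's machine fails every check,
# even though the token itself is valid.
print(access_allowed("10.8.3.7", "laptop-4821", "US"))   # True
print(access_allowed("203.0.113.9", "unknown", "RU"))    # False
```

The point of the AND across all three signals is that a valid credential alone is never sufficient, which is exactly the property missing from this attack chain.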
Google Workspace pivot
What happened
Leveraging the stolen OAuth token that the Vercel employee had approved, the attacker accessed their Google Workspace account. The application permissions were built to allow AI agents to access Workspace on the user’s behalf, including writing emails and creating documents, and during the initial app request the user effectively granted full read-write access to Gmail, Drive, Calendar, and the rest of Google Workspace. OAuth refresh tokens remain valid for a long time, even across password resets, which gave the attacker access to the environment for two months.
What could have helped
Restrict third-party OAuth application access. Set the default to block unverified and uncategorized apps and require explicit admin approval before any app requesting sensitive scopes can be authorized by employees. It’s also important for the person approving to have the expertise to scrutinize the request. Does the app truly need read-write access to everything? Would more granular access be sufficient? Is there already an approved app with similar functionality?
- GWS — https://knowledge.workspace.google.com/admin/apps/control-which-apps-access-google-workspace-data
- Entra ID — https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-user-consent?pivots=portal
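The review questions above can be encoded as a simple approval gate. A sketch of the idea, where the scope strings are real Google OAuth scopes but the allowlist itself is a hypothetical org policy:

```python
# Hypothetical allowlist of scopes the org considers narrow enough to approve
# by default. Anything broader forces a manual admin review.
AUTO_APPROVABLE = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive.file",  # per-file Drive access only
}

def review_needed(requested_scopes: set[str]) -> bool:
    """Flag any app request that asks for more than the narrow allowlist."""
    return not requested_scopes <= AUTO_APPROVABLE

# A read-only Gmail request passes; full-mailbox plus all-of-Drive does not.
print(review_needed({"https://www.googleapis.com/auth/gmail.readonly"}))  # False
print(review_needed({"https://mail.google.com/",
                     "https://www.googleapis.com/auth/drive"}))           # True
```

An app like the one in this incident, asking for read-write access to everything, would land squarely in the manual-review bucket, where a reviewer can ask whether more granular scopes would suffice.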
Token lifetime limits. Shorter refresh token expiration windows constrain the attacker’s access if a token is stolen.
- GWS — https://support.google.com/a/answer/9368756
- Entra ID — https://learn.microsoft.com/en-us/entra/identity-platform/configurable-token-lifetimes
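The effect of a shorter window is easy to quantify: the token stolen here stayed usable for two months, while a bounded lifetime caps the exposure at the window length. A sketch of the expiry check, with illustrative lifetimes rather than vendor defaults:

```python
from datetime import datetime, timedelta, timezone

def token_valid(issued_at: datetime, max_lifetime: timedelta, now: datetime) -> bool:
    """A refresh token is only honored inside its configured lifetime."""
    return now - issued_at < max_lifetime

issued = datetime(2025, 1, 1, tzinfo=timezone.utc)
two_months_later = issued + timedelta(days=60)

# With an effectively unbounded lifetime, the stolen token still works:
print(token_valid(issued, timedelta(days=365), two_months_later))  # True
# With a 7-day refresh-token lifetime, it expired long before the pivot:
print(token_valid(issued, timedelta(days=7), two_months_later))    # False
```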
Insecure Google Workspace configurations
What happened
Once inside the GWS account, the attacker was “able to pivot into a Vercel environment, and subsequently maneuvered through systems to enumerate and decrypt non-sensitive environment variables.” While there isn’t a ton of public information on the exact nature of this pivot, or the level of permissions the compromised OAuth session had, a good guess would be misconfigured Google Groups or insecure secret storage.
Groups set to “Anyone in org can join/view” enable an easy avenue of privilege escalation within GWS. Additionally, groups are frequently granted GCP IAM roles and shared Drive access. An attacker who can view a sensitive Google Group may also be able to pivot to other services by abusing shared mailboxes and intercepting password reset requests/email-based MFA. In our experience, users routinely store secrets in Google Drive, Docs, and Sheets, all accessible to any authenticated session.
What could have helped
Audit Google Groups membership settings. Every internal group should be set to Invite Only. Self-join groups are a privilege escalation path if they carry GCP role bindings. Set archive visibility to Members Only.
gam all groups show settings | grep -E "(name|whoCanJoin|whoCanView)"
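If you export group settings to CSV, a short pass over the export can flag risky groups automatically. A sketch, assuming columns named after the real Groups Settings API fields (`whoCanJoin`, `whoCanViewGroup`) and a hypothetical inline export:

```python
import csv
import io

# Hypothetical export -- real data would come from a GAM CSV dump.
EXPORT = """email,whoCanJoin,whoCanViewGroup
gcp-admins@example.com,ALL_IN_DOMAIN_CAN_JOIN,ALL_IN_DOMAIN_CAN_VIEW
billing@example.com,INVITED_CAN_JOIN,ALL_MEMBERS_CAN_VIEW
"""

# Settings that do NOT let arbitrary org members self-join or read the group.
SAFE_JOIN = {"INVITED_CAN_JOIN", "CAN_REQUEST_TO_JOIN"}
SAFE_VIEW = {"ALL_MEMBERS_CAN_VIEW", "ALL_MANAGERS_CAN_VIEW", "ALL_OWNERS_CAN_VIEW"}

def risky_groups(export_csv: str) -> list[str]:
    """Return groups that anyone in the org can self-join or read."""
    flagged = []
    for row in csv.DictReader(io.StringIO(export_csv)):
        if row["whoCanJoin"] not in SAFE_JOIN or row["whoCanViewGroup"] not in SAFE_VIEW:
            flagged.append(row["email"])
    return flagged

print(risky_groups(EXPORT))  # ['gcp-admins@example.com']
```

A self-joinable group named after a GCP admin role is exactly the escalation path described above, so it sorts to the top of the remediation list.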
Secrets scanning. A regular secrets-scanning policy that monitors Drive, Docs, and Sheets for plaintext credentials can find them before attackers do.
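Dedicated DLP tooling does this at scale, but the core of a secrets scan is pattern matching over exported document text. A minimal sketch covering two well-known credential shapes (real scanners cover many more formats and add entropy checks):

```python
import re

# AWS access key IDs start with AKIA followed by 16 uppercase alphanumerics;
# a generic "password = ..." assignment catches plaintext passwords in notes.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of credential patterns found in a document."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

doc = "Deploy notes: password = hunter2, key AKIAABCDEFGHIJKLMNOP"
print(find_secrets(doc))  # ['aws_access_key_id', 'password_assignment']
```

Running a pass like this over Drive exports, and alerting on hits, turns the “users store secrets in Docs” problem from an unknown into a queue of cleanup tickets.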
Closing
The Vercel attack chain is a useful teaching incident because, like many breaches, none of the failures on the chain required advanced attacker capability or zero-day vulnerabilities: a commodity infostealer, an OAuth app request that perhaps shouldn’t have been approved, long-lived tokens, and a customer-facing environment that seemingly hadn’t been hardened against an attacker with a valid session. The controls that would have disrupted this attack are documented and feasible in most environments. Companies skip them because they slow engineers down and the cost of skipping hasn’t come due.