Abuse Investigator
OpenAI
About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iteratively updating based on what we learn.
The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new and emerging classes of risk. This enables our partner teams to develop data-backed policies and safety mitigations. A precise understanding of how models behave in real-world conditions lets us safely enable users to build useful things with our products.
About the Role
As an Abuse Investigator focused on AI Self-Autonomy and Agentic Risk on the Intelligence and Investigations team, you will be responsible for identifying and investigating cases where models exhibit autonomous or agentic behavior, including chaining capabilities, acting with increasing independence, or demonstrating patterns that may introduce safety risk. This includes detecting behaviors that are not explicitly intended, understood, or covered by existing safeguards.
This role requires deep domain-specific expertise in identifying, understanding, and mitigating risk from agentic systems, model autonomy, and AI self-improvement signals. You’ll need experience investigating complex systems where behavior emerges across multiple steps, tools, or interactions; the ability to distinguish normal task execution from concerning patterns such as persistence, workaround behavior, or capability expansion; and a proven ability to navigate ambiguous signals in a rapidly evolving, highly technical environment.
This role is based in our San Francisco office. Investigations may involve reviewing complex or sensitive model behaviors and edge-case outputs, requiring strong judgment and resilience in high-pressure environments.
In this role, you will:
- Review leads, investigate model behavior, and identify cases where systems de...