Software Engineer - GPU Inference
Baseten
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE:
Voice is becoming the internet's next interface, but a production-grade Voice AI system is "hard to build" (https://greylock.com/greymatter/voice-agents-easy-to-use-hard-to-build/). You'll join the small founding team behind Baseten Voice AI, focused on bringing state-of-the-art open-source models into production for Voice AI customers across productivity, customer service, clinical conversation, creator tools, education, and more. You'll make a meaningful impact on people's daily lives and help reshape these industries.
This is a high-impact, high-ownership role. You will be the primary owner of Baseten Voice AI - our in-house inference stack to power Voice AI models - from product roadmap through engineering implementation. You’ll partner closely with Forward Deployed Engineers, Model Performance Engineers, and sister engineering teams to push the boundaries of Voice AI.
EXAMPLE INITIATIVES:
- Develop a world-class model-serving stack for state-of-the-art open-source voice models — reduce end-to-end and tail latency (p95/p99), increase throughput, and improve GPU efficiency via profiling, runtime tuning, and server-level optimizations.
- Build large-scale, real-time infrastructure for multi-model voice agents — orchestrate speech-to-text (STT), text-to-speech (TTS), and agent components with streaming I/O to meet customer SLOs.
- Design tight training ↔ inferenc...