AI Inference Engineer - Stealth Startup - San Francisco (Onsite)
Compensation: $200K-$300K + equity
Join a stealth-stage team backed by prominent academic research and successful technical founders, working at the bleeding edge of AI infrastructure. As generative AI continues to scale rapidly, the bottleneck is no longer training; it's inference. This team is rebuilding the core systems that power inference, from kernel-level GPU optimizations to full-stack distributed deployment.
This role is ideal for engineers who want to go deep: working on quantization, KV caching, attention mechanisms like FlashAttention, and designing new strategies for parallelism across heterogeneous compute. You'll contribute to an integrated software-hardware stack that enables large-scale model deployment with dramatically improved performance, efficiency, and quality at production scale.
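For a flavor of the KV-caching work mentioned above (an illustrative sketch only, not the team's code; all names here are hypothetical): during autoregressive decoding, keys and values for already-generated tokens are cached so each new token attends over the full context without recomputing it. A minimal PyTorch sketch:

```python
import torch

def attention_with_kv_cache(q, k_new, v_new, cache):
    # q, k_new, v_new: (batch, heads, 1, head_dim) for one decode step.
    # cache: dict holding keys/values from previous steps (empty at step 0).
    if "k" in cache:
        k = torch.cat([cache["k"], k_new], dim=2)  # (B, H, T+1, D)
        v = torch.cat([cache["v"], v_new], dim=2)
    else:
        k, v = k_new, v_new
    cache["k"], cache["v"] = k, v  # persist for the next decode step

    scale = q.size(-1) ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale  # (B, H, 1, T+1)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                          # (B, H, 1, D)
```

The cache trades memory for compute (it grows linearly with sequence length), which is why paging and quantizing the KV cache are active optimization targets in this space.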
What You'll Be Doing:
- Research and implement state-of-the-art techniques to improve AI model inference speed and quality
- Architect and optimize distributed AI infrastructure across both GPU kernel and software layers
- Profile, benchmark, and debug system performance across varied hardware environments (a minimal timing sketch follows this list)
- Drive improvements in model execution through compiler-level tuning, caching, and runtime strategies
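One small but representative example of the profiling work above (a minimal sketch assuming PyTorch; `time_gpu_op` is a hypothetical helper, not a prescribed tool): GPU kernel launches are asynchronous, so naive wall-clock timing mostly measures launch overhead rather than device execution. CUDA events give accurate per-call latency:

```python
import torch

def time_gpu_op(fn, *args, warmup=10, iters=100):
    # Warm up so one-time costs (JIT, allocator, autotuning) don't skew results.
    for _ in range(warmup):
        fn(*args)
    torch.cuda.synchronize()

    # CUDA events are recorded on the GPU stream, so the interval reflects
    # actual device execution rather than async Python-side launch time.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # mean milliseconds per call
```

For example, `time_gpu_op(torch.matmul, a, b)` with two CUDA tensors `a` and `b` yields a stable per-matmul latency suitable for comparing kernels across hardware.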
What They're Looking For:
- Bachelor's degree in Computer Science, Engineering, Applied Math, or a related field
- Strong experience with performance optimization and systems-level thinking
- Proficiency in Python, C++, and CUDA
- Familiarity with AI frameworks like PyTorch, TensorFlow, ONNX, or vLLM
Nice to Have:
- Graduate degree in a technical field
- Experience with MLIR or other compiler frameworks
- Hands-on work with large-scale GPU infrastructure or custom kernels
This is a hands-on, foundational role in a fast-moving environment, offering the chance to shape the backbone of the next generation of AI systems.