Would you be interested in joining a fast-growing AI company in Palo Alto, California, where you would contribute your experience as a Training Engineer to train, optimize, scale, and deploy a variety of generative AI models, such as large language models, voice/speech foundation models, and vision and multi-modal foundation models, using cutting-edge techniques and frameworks?
In this hands-on role, you will implement state-of-the-art neural architectures and train these complex, billion-parameter models from scratch to production, optimizing for low latency, high throughput, and cost efficiency.
Key Responsibilities:
- Architect Distributed Training Systems: Design and implement highly scalable distributed training pipelines for LLMs and frontier models, leveraging model parallelism (tensor, pipeline, expert) and data parallelism techniques (see the sketch following this list).
- Optimize Performance: Utilize deep knowledge of CUDA, C++, and low-level optimizations to enhance model training speed and efficiency across diverse hardware configurations.
- Implement Novel Techniques: Research and apply cutting-edge efficiency techniques such as Flash Attention to accelerate model training and reduce computational cost.
- Framework Expertise: Demonstrate proficiency in deep learning frameworks such as PyTorch, TensorFlow, and JAX, and tailor them for distributed training scenarios.
- Scale to Hundreds of Billions of Parameters: Work with massive models, ensuring stable and efficient training across distributed resources.
- Evaluate Scaling Laws: Design and conduct experiments to analyze the impact of model size, data, and computational resources on model performance.
- Collaborate: Partner closely with research scientists and engineers to integrate research findings into production-ready training systems.
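For a concrete flavor of the distributed training work described above, here is a minimal sketch of a data-parallel training loop using PyTorch DistributedDataParallel. It is illustrative only: the toy model, synthetic data, and hyperparameters are placeholders, not our production stack.

    # Minimal PyTorch DistributedDataParallel sketch (illustrative only;
    # model, data, and hyperparameters are placeholders).
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy model standing in for a real transformer stack.
        model = torch.nn.Linear(1024, 1024).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):
            # Synthetic batch; a real pipeline would shard data with a DistributedSampler.
            x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()  # DDP all-reduces gradients across ranks here.
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()  # Launch with: torchrun --nproc_per_node=<gpus> this_script.py

At the scales described above, this data-parallel skeleton would be combined with tensor, pipeline, and expert parallelism.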
Qualifications:
Advanced Degree: Ph.D. in Computer Science, Machine Learning, or a related field.