Principal AI Engineer - MLOps | Cybersecurity | AWS | LLMs | Kubernetes
My client, a leading innovator in AI-driven cybersecurity, is expanding its AI Center of Excellence and hiring a Principal AI Engineer - MLOps. This is a high-impact, hands-on leadership role focused on building scalable, production-grade ML infrastructure that powers advanced threat detection and response solutions.
You will work on cutting-edge AI/ML initiatives at the intersection of security, DevOps, and cloud-native engineering, solving real-world problems at scale.
Key Responsibilities
- Define and lead MLOps strategy, best practices, and technical roadmap
- Design, build, and optimize ML pipelines and deployment infrastructure (AWS, SageMaker, Terraform)
- Develop, deploy, and monitor AI models in production environments
- Collaborate closely with data scientists, ML researchers, and backend engineers
- Build APIs and interfaces (Python, FastAPI, Flask, TypeScript) to power AI applications
- Drive CI/CD, containerization, and orchestration using Docker and Kubernetes
- Enable end-to-end automation, versioning, and governance of ML workflows
- Support ongoing AI research and contribute to internal tools and platforms
Ideal Candidate Profile
- Proven experience in MLOps, model deployment, and infrastructure engineering
- Strong cloud experience, especially AWS (SageMaker, ECS, Lambda, etc.)
- Skilled in Python and modern frameworks like FastAPI or Flask
- Familiar with LLMs, distributed systems, and GPU-accelerated workloads
- Hands-on with CI/CD, Docker, Kubernetes, and Terraform
- Background in cybersecurity, data engineering, or DevSecOps is a plus
- Excellent communication, problem-solving, and leadership skills
Why Apply?
- Join a mission-driven company on the front lines of cybersecurity innovation
- Work in a high-ownership role where AI models go from lab to live
- Opportunity to contribute to published research, internal platforms, and strategic initiatives
- Remote-flexible role with high-visibility projects