Overview: This role centers on enabling agentic AI models and foundation LLMs within their applications for real-time security use cases. The team consists largely of Data Scientists who have built traditional Machine Learning models (e.g., anomaly detection); this role requires a Development/Engineering background with a move into "modern AI" (generative, pre-trained LLM enablement) within the past 2 years.
Primary Responsibilities:
Development (70%): - Writing services in Java, C#, and Python.
- Integrating AI models into existing services.
- Ensuring the separation of logic from agent behavior, allowing agents to be wrapped in services that can be called as needed.
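The "separation of logic from agent behavior" point above can be sketched in a few lines of Python. This is a hypothetical illustration, not the team's actual architecture: `score_alert`, `AlertService`, and the stub agent are all made-up names; the idea is that deterministic business logic stays in plain functions while the agent is an injected dependency the service calls only when needed.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical agent type: any callable mapping a prompt to a response.
# In a real service this would wrap an LLM call (e.g., a model hosted in Azure).
Agent = Callable[[str], str]

def score_alert(severity: int) -> str:
    """Deterministic business logic, kept separate from agent behavior."""
    return "escalate" if severity >= 7 else "monitor"

@dataclass
class AlertService:
    """Service wrapper: callers invoke the service, and the service
    decides when (if ever) to delegate to the wrapped agent."""
    agent: Agent

    def handle(self, severity: int, details: str) -> str:
        decision = score_alert(severity)  # pure logic runs first
        if decision == "escalate":
            return self.agent(f"Summarize for analyst: {details}")
        return decision

# Stub agent for local testing; a deployment would swap in a real LLM client.
stub_agent: Agent = lambda prompt: f"[agent] {prompt}"

service = AlertService(agent=stub_agent)
```

Because the agent is injected rather than hard-coded, the same service can be unit-tested with a stub and deployed with a real model behind it.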
AI and Machine Learning (20%): - Working with foundation models and large language models (LLMs) like Gemini, LLaMa, Claude, and OpenAI models in Azure.
- Applying techniques such as transfer learning, fine-tuning, and RAG for extensible model deployment and environment alignment.
- Ensuring models are secure, unbiased, and properly wrapped within a platform.
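Of the techniques listed above, RAG is the most amenable to a compact sketch. The toy example below is illustrative only: retrieval here is naive keyword overlap over an in-memory list, whereas a production pipeline would use embeddings and a vector store; the corpus strings and function names are invented.

```python
# Minimal RAG sketch (assumed example, not a production design):
# retrieve relevant documents, then prepend them to the model prompt.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared lowercase tokens with the query."""
    q_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user question with retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented security-flavored corpus for illustration.
corpus = [
    "Failed logins above threshold trigger an anomaly alert.",
    "Kubernetes pods are redeployed via the Harness pipeline.",
    "Phishing emails are quarantined by the mail gateway.",
]

prompt = build_prompt("What triggers an anomaly alert?", corpus)
```

The resulting `prompt` would then be sent to the foundation model, grounding its answer in environment-specific documents rather than its pre-training data alone.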
Cloud Deployment (10%): - Deploying models and services to cloud platforms: primarily Azure, with some in GCP. Their GCP environment and Vertex are used largely for research and POC development.
Leah Internal Notes: - Proficiency in Java and C# preferred; the majority of Wells is in Java, but cyber largely uses C#/.NET. They have some Python and are open to seeing candidates with any of these, but candidates need to be open to learning other programming languages. Ideal would be Go and Rust for real-time detection in defensive cyber, but they are not there yet; these would be nice-to-have skills.
- Experience with Azure or GCP cloud environments, including general knowledge of containers and Kubernetes.
- CI/CD Pipelines: Familiarity with tools like Harness or Jenkins for building and deploying applications would be a plus; at minimum, candidates should understand CI/CD fundamentals.