Our client is seeking a Senior AI Security Engineer for a one-year contract. The role is hybrid, with three days a week onsite in lower Manhattan. This engineer will help secure and ensure the responsible use of the Project's Artificial Intelligence (AI) capabilities, testing and hardening the AI ecosystem through well-designed frameworks that protect systems and data from emerging cyber-attacks.
TASKS:
- Design, implement, and execute test approaches for GenAI systems (e.g., chatbots) to identify security flaws, particularly those impacting the confidentiality, integrity, or availability of information.
- Perform various types of tests, such as functional, regression, performance, and usability testing, to evaluate the behavior and performance of AI algorithms and models.
- Create, implement, and execute test plans and strategies for evaluating AI systems, including defining test objectives, selecting suitable testing methods, and identifying test scenarios.
- Document test methods, results, and recommendations in clear, concise reports for stakeholders.
- Perform security assessments, including creating, updating, and maintaining threat models and the security integration of GenAI platforms. Ensure that security design and controls are consistent with OTI's security architecture principles.
- Design security reference architectures and implement/configure security controls with an emphasis on GenAI technologies.
- Provide AI security architecture and design guidance as well as conduct full-stack architecture reviews of software for GenAI systems and platforms.
- Serve as a subject matter expert on information security for GenAI systems and applications in cloud/vendor and on-prem environments.
- Discuss AI/ML concepts proficiently with data science and ML teams to identify and develop solutions for security issues.
- Collaborate with engineering teams to perform advanced security analysis on complex GenAI systems, identifying gaps and contributing to design solutions and security requirements.
- Identify and document defects, irregularities, or inconsistencies in AI systems, and work closely with developers to rectify and resolve them.
- Ensure the quality, consistency, and relevance of data used for training and testing AI models (including collecting, preprocessing, and validating data).
- Assess AI systems for ethical considerations and potential biases to ensure they follow ethical standards and encourage inclusivity and diversity.
- Collaborate with diverse teams, including developers, data scientists, and domain experts, to understand requirements, validate assumptions, and align testing efforts with project goals.
- Conduct research to identify vulnerabilities and potential failures in AI systems.
- Design and implement mitigations, detections, and protections to enhance the security and reliability of AI systems.
- Perform model input and output security work, including prompt-injection testing and security assurance.
MANDATORY SKILLS/EXPERIENCE:
• Bachelor's degree in computer science, electrical or computer engineering, statistics,
econometrics, or a related field, or equivalent work experience
• 12+ years of hands-on experience in cybersecurity or information security.
• 4+ years of programming experience with demonstrated advanced skills in Python and the
standard ML stack (TensorFlow/PyTorch, NumPy, Pandas, etc.)
• 4+ years of experience with Natural Language Processing (NLP) and Large Language Models
(LLM) desired
• 4+ years of experience working in cloud environments (Azure, AWS, GCP)
• Demonstrated proficiency with fundamental AI/ML concepts and technologies, including ML,
deep learning, NLP, and computer vision.
• Demonstrated ability (expertise preferred) in attacking GenAI products and platforms.
• Demonstrated recent experience with large language models.
• Demonstrated experience using AI testing frameworks and tools such as TensorFlow,
PyTorch, or Keras
• Demonstrated ability to write test scripts, automate test cases, and analyze test results using
programming languages and testing frameworks listed above.
• Demonstrated ability to identify and document defects, irregularities, or inconsistencies in AI
systems and to work closely with developers to rectify and resolve them.
• Ability to work independently to learn new technologies, methods, processes,
frameworks/platforms, and systems.
• Excellent written and verbal communication skills to articulate challenging technical concepts
to both lay and expert audiences.
• Ability to stay updated on the latest developments, trends, and best practices in both software
testing and artificial intelligence.
DESIRABLE SKILLS/EXPERIENCE:
• Background in designing and implementing security mitigations and protections and/or
publications in the space
• Participated or currently participating in CTF/GRT/AI red-teaming events and/or bug bounties,
or developing or contributing to OSS projects.
• Understanding of ML lifecycle and MLOps.
• Ability to perform various types of tests, such as functional, regression, performance, and
usability testing, to evaluate the behavior and performance of AI algorithms and models
• Ability to ensure the quality, consistency, and relevance of data used for training and testing AI
models (including collecting, preprocessing, and validating data)
• Ability to assess AI systems for ethical considerations and potential biases to ensure they
follow ethical standards and encourage inclusivity and diversity
• Ability to work in, and provide technical leadership to, cross-functional teams to develop and
implement AI/ML solutions, including capabilities that leverage LLM technology
• Highly flexible/willing to learn new technologies