Lead Software Engineer with Security Clearance

Herndon, Virginia

Country Intelligence Group
Apply for this Job
Lead Software Engineer - Moon 2111

Country Intelligence Group is seeking a Full-Time Software Engineer to support our client in the design, implementation, and optimization of enterprise-wide collaboration network architecture, advanced data systems, and mission-driven software solutions. The selected candidate will play a critical role in leading the integration of complex data environments, graph-based architectures, and cloud-native platforms while ensuring compliance with federal security and governance policies. This role involves close collaboration with stakeholder communities, engineering teams, and mission partners to deliver holistic, scalable technical solutions aligned with Sponsor objectives. The ideal candidate will possess deep technical experience in software engineering, data integration, and systems architecture, with an emphasis on performance optimization, maintainability, and security.

Tasks Performed:
• Design, review, and implement collaboration network architecture, including graphing tool recommendations, data flows, data integration tools, and user interfaces.
• Develop operation and maintenance documentation to include user support, developer support, tool integration, and data requirements/integration standard operating procedures.
• Design, development, and implementation of software systems/services solutions based on requirements analysis.
• Perform operational feasibility evaluations and recommend improvements.
• Develop and maintain system and solution code, architecture, and security documentation.
• Assist with software licensing and purchase efforts.
• Serve as the focal point (POC) for integrating Sponsor community best practices already in use for data and solution approaches, developing new recommendations as directed, and defining detailed engineering requirements for implementing new analytics, databases, and models.
• Engage with communities of practice, stakeholders, and engineering teams to apply technical expertise on the mission data to develop holistic scalable solutions.
• Lead the implementation of technical standards for the data graphing tool and its associated system integrations to ensure information security requirements are met in an efficiently scalable manner that enables measurement and monitoring.

Education, Experience and Qualifications:
• Experience designing cloud-native architectures using cloud services such as AWS, Google Cloud, IBM Cloud, and Oracle Cloud.
• Experience designing and operating big data systems.
• Experience building and optimizing performance of large-scale graph databases (tens of billions of edges) using DynamoDB or new enhanced capabilities.
• Experience developing and operating graph traversal capabilities built upon Apache Gremlin or new enhanced capabilities.
• Experience developing and operating NoSQL solutions to complex big data applications.
• Experience in data modeling for performance, partition sharding, record/event aggregation workflows, stream processing, and metrics gathering.
• Experience designing and operating large-scale serverless geospatial indexes built with GeoMesa.
• Experience with partition and sort key design and implementation to ensure consistent performance.
• Experience with aggregation operations to de-duplicate records on continuous data feeds.
• Subject matter expertise in migrating from relational databases to NoSQL.
• Experience building and operating high-performance data processing pipelines using Lambda, Step Functions, and PySpark.
• Experience building high-quality user interfaces/user experiences with the React framework and WebGL.
• Experience designing and operating large scale graph databases using Apache Cassandra.
• Experience performing in-depth technical analysis of large-scale graph databases to develop implementation strategies for search optimizations.
• Experience developing technical capabilities for processing, persistence and search of datasets that are collected or maintained using standards common in the community.
• Experience facilitating engineering discussions across teams representing multiple stakeholders to develop and execute implementation strategies that meet mission needs.
• Experience developing Machine Learning Operations (MLOps) pipelines for large scale applications.
• Experience maintaining configuration of software using configuration management resources such as GitHub.
• Experience designing, building, and operating big data systems, including persistence, partitioning, and indexing, at a scale of trillions of records/events.
• Experience with Apache NiFi (Niagara Files) applications or new enhanced capabilities.
• Experience developing and operating Kubernetes infrastructure.
• Experience supporting engineering efforts that contribute to the delivery of capabilities such as datasets and functionality such as communications and geospatial workflows.
• Experience implementing DevSecOps and agile development in production environments.
• Experience with agile software development and testing.
• Experience with federal security, regulatory and compliance requirements and security accreditation package development.
• Experience with data security and governance using centralized security controls such as LDAP, data encryption, and auditing of data access.
• Experience with specialized technologies that are optimized for the particular use of the data, such as relational databases, a NoSQL database (Cassandra), or object storage.
• Experience with Apache TinkerPop, Gremlin, and/or JanusGraph to design, develop, implement, and maintain systems.
• Knowledge of graph databases to design, develop, implement, and maintain systems.
• Experience with C or C++ to write interfaces.
• Experience with databases and stores including Postgres, MariaDB, ELK, MinIO, AWS S3, Neo4j, MongoDB, and other NoSQL solutions.
• Experience with Python (pypi libraries).
• Experience with operating systems including CentOS 7 and Rocky Linux 8.
• Experience with orchestration technologies including Kubernetes, Docker, Docker Compose, and Docker Swarm.
• Experience with development tools including VS Code, GitLab, JupyterHub/Jupyter notebooks, and MATLAB.
• Experience in large collaboration and development environments.
• Experience with data types including unstructured, structured, or semi-structured data such as CSV, JSON, JSONL, AVRO, Protocol Buffers, Parquet, etc.
• Experience with designing cloud-native architectures using cloud services. (Preferred)
• Experience designing and operating big data systems within a policy and regulatory environment. (Preferred)
• Experience developing and operating graph traversal capabilities built upon Apache Gremlin. (Preferred)
• Experience building and operating high-performance data processing pipelines using Lambda, Step Functions, and PySpark on EMR infrastructure. (Preferred)
• Experience working with enterprise services used for Data Management, including the enterprise catalog service (and associated APIs), and Policy Decision Points (PDPs). (Preferred)
• Experience developing Machine Learning Operations (MLOps) pipelines for large-scale applications in the environment. (Preferred)
• Experience and understanding of IT Service Management and common SLA measurements. (Preferred)
• Experience presenting solutions and requirements to diverse audiences. (Preferred)
• Experience working with container orchestration technologies such as AWS ECS, AWS Fargate, and Kubernetes or other enhanced capabilities available. (Preferred)
• Experience in managing large operational cloud environments spanning multiple tenants using Multi-Account management, AWS Well Architected Best Practices, and AWS Organization Units/Service Control Policies (OU/SCP). (Preferred)
• Experience with micro-services such as building decoupled systems, utilizing RESTful endpoints and lightweight systems. (Preferred)
• Experience in total systems perspectives, including a technical understanding of systems and applications relationships, dependencies, and requirements of hardware and software components. (Preferred)
• Experience consulting with customers to determine present and future user needs. (Preferred)
• Experience maintaining frequent contact with customers, ensuring traceability within program documents, and understanding the overall computing environment and architecture. (Preferred)
• Certifications including AWS Certified Solutions Architect, AWS Machine Learning certification(s), Agile certification, Azure, Security+, GSEC, or CCNA. (Preferred)

Other Job Requirements:
• Active Top Secret/SCI w/Full Scope Polygraph.
• U.S. Citizenship required; must pass a background check.
• Location: Herndon, VA.
Date Posted: 14 May 2025