Who we are
Picogrid envisions a future where every system, from the smallest sensors to next-generation fighter jets, can collaborate autonomously to create a safer and more prosperous world for all.
Today, we are at the forefront of American defense technology, developing a unified platform that integrates fragmented technologies, such as sensors, cameras, radar, and drones, into sophisticated mission systems.
Our technology is deployed globally, supporting customers including the U.S. Army, U.S. Air Force, CAL FIRE, PG&E, U.S. Fish and Wildlife, and many others.
About this Role
Picogrid is seeking an experienced Senior Data Engineer to design, build, and own our next-generation, low-latency data ingestion and aggregation system. In this role, you will architect a robust data ingestion pipeline capable of handling diverse data types, from time-series telemetry to large blob storage, and integrate an automation workflow for downstream processing. You will work in Go, design solutions for cloud, edge, and hybrid environments, and collaborate with teams using Kubernetes and AWS. The ideal candidate is passionate about data infrastructure, has a strong understanding of data governance and compliance practices, and is excited about building scalable solutions from the ground up.
Key Responsibilities
- Data Pipeline Architecture & Development:
- Design and build a highly extensible, scalable data ingestion pipeline to capture and process data from thousands of sensors, IoT devices, and external sources.
- Ensure the pipeline accommodates different data types (e.g., time-series telemetry, blob storage) and is built to seamlessly onboard new data sources.
- Architect a secure data platform that meets stringent US government/DoD compliance requirements and is designed for third-party developers to build additional solutions.
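To give a concrete flavor of the extensibility described above, here is a minimal Go sketch of an ingestion abstraction that treats time-series telemetry and blob references uniformly; every type and name here is illustrative, not Picogrid's actual code:

```go
package ingest

import (
	"context"
	"time"
)

// Payload is anything the pipeline can carry. The concrete types
// below are hypothetical examples of the two broad data shapes.
type Payload interface{ Kind() string }

// TelemetryPoint is a single time-series sample from a sensor.
type TelemetryPoint struct {
	SensorID string
	At       time.Time
	Value    float64
}

func (TelemetryPoint) Kind() string { return "telemetry" }

// BlobRef points at a large object (e.g., camera footage) kept in
// object storage rather than carried inline through the pipeline.
type BlobRef struct {
	Bucket, Key string
	Size        int64
}

func (BlobRef) Kind() string { return "blob" }

// Source is the contract a new data source implements to be
// onboarded; the pipeline stays agnostic to where data comes from.
type Source interface {
	Name() string
	Stream(ctx context.Context, out chan<- Payload) error
}

// Sink routes each payload to storage optimized for its kind
// (a time-series database for telemetry, object storage for blobs).
type Sink interface {
	Write(ctx context.Context, p Payload) error
}

// Run fans a source into a sink until the stream ends or the
// context is cancelled. A production version would add batching,
// backpressure, and retries.
func Run(ctx context.Context, src Source, sink Sink) error {
	out := make(chan Payload, 1024)
	go func() {
		defer close(out)
		_ = src.Stream(ctx, out)
	}()
	for p := range out {
		if err := sink.Write(ctx, p); err != nil {
			return err
		}
	}
	return ctx.Err()
}
```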
- Workflow & Automation:
- Develop and implement a workflow automation pipeline that triggers actions based on ingested data, ensuring timely data processing and integration into downstream systems.
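A rule-based trigger model is one way to realize this; the following Go sketch (with hypothetical event and rule types) shows the general shape:

```go
package workflow

import (
	"context"
	"log"
)

// Event is a minimal view of an ingested payload that rules match on.
type Event struct {
	Kind   string            // e.g., "telemetry", "blob"
	Source string            // originating sensor or device
	Attrs  map[string]string // free-form metadata
}

// Rule pairs a predicate with the action to run when it matches.
type Rule struct {
	Name   string
	Match  func(Event) bool
	Action func(context.Context, Event) error
}

// Dispatch runs every matching rule for each incoming event. In a
// real system, actions would be queued and retried, not run inline.
func Dispatch(ctx context.Context, events <-chan Event, rules []Rule) {
	for ev := range events {
		for _, r := range rules {
			if r.Match(ev) {
				if err := r.Action(ctx, ev); err != nil {
					log.Printf("rule %s failed: %v", r.Name, err)
				}
			}
		}
	}
}
```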
- Third-Party Platform Enablement:
- Build and maintain a platform that serves as a foundation for internal/external partners and third-party developers, ensuring it's well-documented, modular, and secure.
- Provide API endpoints, SDKs, and integration guidelines to facilitate third-party solution development.
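From a third-party developer's perspective, integration against such a platform might look roughly like this; the SDK import path, client constructor, and method names below are invented for illustration and are not a published Picogrid SDK:

```go
package main

import (
	"context"
	"fmt"
	"log"

	// Hypothetical SDK import path, for illustration only.
	sdk "example.com/picogrid-sdk-go"
)

func main() {
	ctx := context.Background()

	// Authenticate with an API token scoped to the partner's tenancy.
	client, err := sdk.NewClient(sdk.WithToken("PARTNER_API_TOKEN"))
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe to telemetry from one device and react to each sample.
	sub, err := client.SubscribeTelemetry(ctx, sdk.Filter{DeviceID: "sensor-042"})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()

	for point := range sub.Points() {
		fmt.Printf("%s @ %s = %.2f\n", point.DeviceID, point.At, point.Value)
	}
}
```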
- Software Development:
- Write and maintain efficient, robust, and clean code using Go, Python, and TypeScript.
- Collaborate on code reviews and implement best practices in software development and testing.
- Cloud & Hybrid Architecture:
- Architect solutions that operate effectively in cloud, edge, or hybrid environments using AWS and other relevant platforms.
- Leverage Kubernetes for container orchestration and management of scalable microservices.
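For example, Go services deployed on Kubernetes conventionally expose HTTP endpoints for the orchestrator's liveness and readiness probes; a minimal sketch (the /healthz and /readyz paths are a common convention, not a requirement):

```go
package main

import (
	"net/http"
	"sync/atomic"
)

func main() {
	// ready flips to true once dependencies (DB, queues) are reachable.
	var ready atomic.Bool

	mux := http.NewServeMux()

	// Liveness: the process is up and serving requests.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: safe to receive traffic from the Service.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	ready.Store(true) // in practice, set only after startup checks pass
	_ = http.ListenAndServe(":8080", mux)
}
```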
- Data Governance & Compliance:
- Integrate data governance practices into the pipeline design, ensuring data integrity, quality, and compliance with relevant regulations.
- Stay up to date with compliance requirements and implement necessary controls in data handling and storage.
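One common pattern, sketched below in Go with hypothetical field names, is to seal every ingested payload in an envelope carrying the provenance and integrity metadata that audit and lineage controls rely on:

```go
package governance

import (
	"crypto/sha256"
	"encoding/hex"
	"time"
)

// Envelope wraps raw data with the provenance and integrity fields
// that governance and audit controls typically require.
type Envelope struct {
	SourceID   string    // originating device or feed
	ReceivedAt time.Time // ingestion timestamp
	SHA256     string    // integrity checksum of the raw bytes
	Raw        []byte
}

// Seal computes the checksum and stamps provenance at ingestion time,
// so downstream consumers can verify the data has not been altered.
func Seal(sourceID string, raw []byte) Envelope {
	sum := sha256.Sum256(raw)
	return Envelope{
		SourceID:   sourceID,
		ReceivedAt: time.Now().UTC(),
		SHA256:     hex.EncodeToString(sum[:]),
		Raw:        raw,
	}
}
```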
- End-to-End Ownership:
- Take complete ownership of the data ingestion and aggregation system, from concept and design through to implementation, monitoring, and ongoing improvements.
Required Skills & Qualifications
- Technical Proficiency:
- Strong experience in designing and building data pipelines and ETL/ELT processes.
- Technical expertise in one or more of the following stacks: Go, Node.js, Java, or Python.
- Hands-on experience with AWS services (e.g., S3, Athena, Glue, Lambda, Redshift) and cloud-based infrastructure.
- Working knowledge of container orchestration (e.g., Kubernetes, Docker Compose) and microservices architecture.
- Experience in managing and provisioning new data infrastructure with IaC tools such as Terraform.
- Experience in building scalable solutions in cloud, edge, or hybrid environments.
- Data Handling:
- Expertise in ingesting and processing high volumes of diverse data, including real-time sensor and IoT data.
- Familiarity with managing different storage systems optimized for various data types (e.g., time-series databases, blob storage).
- Data Governance & Compliance:
- Awareness of data governance practices, including data quality, lineage, and security.
- Understanding of compliance frameworks (e.g., NIST SP 800-53 Rev. 5, FedRAMP, CMMC) and how they impact data storage and processing.
- Soft Skills:
- Strong problem-solving skills with the ability to troubleshoot complex data pipelines.
- Excellent communication skills and the ability to work collaboratively across teams.
- Ability to take initiative and work independently with minimal supervision.
Preferred Qualifications
- Experience with workflow orchestration tools (e.g., Apache Airflow, AWS Step Functions).
- Exposure to additional data engineering tools and frameworks (e.g., Apache Spark).
- Prior experience in scaling data systems for early-stage products transitioning to enterprise-grade solutions.
- Experience working with government or defense-related projects.
Export Control Requirements
To conform to U.S. Government export regulations, applicants must be a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (i.e., green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, or be eligible to obtain the required authorizations from the U.S. Department of State.