This is a highly confidential search and will require a signed NDA to disclose the company name. The role is based in Los Angeles, CA.
About Eleven Recruiting
We are a specialized technology staffing agency supporting professional and financial services companies. Why do we stand out in technology staffing? We listen and act as advisors for our candidates on how they can best add value, find interesting projects, and pave a path for career advancement. We advocate for best pay, diversity in tech, and best job-fit for every candidate we place.
Our client is a management firm that runs the investments, initiatives, and operations for a visionary collective committed to shaping the future. Its reach is vast: overseeing diverse assets, funding transformative projects, and building programs that redefine science, philanthropy, and innovation. With a team of over 800 employees, this organization not only manages resources but also drives meaningful change. From advancing groundbreaking ocean research to revolutionizing grant and investment strategies, every decision is purposeful.
This firm is seeking a Senior Data Engineer to architect and build cloud-native pipelines (batch, streaming, and GenAI-enabled) that turn structured and unstructured data from finance, HR, philanthropy, ERP/CRM, documents, and logs into trusted, self-service insights. Leveraging modern data stacks and BI tools, you will drive advanced analytics, forecasting, and real-time dashboards while mentoring engineers and partnering closely with senior stakeholders.
Responsibilities:
- Architect and develop a lakehouse stack and data management platform using tools such as Redshift or BigQuery, Airbyte, and Airflow, or similar.
- Design real-time/streaming and batch pipelines that support event-driven analytics and near-instant insights, using tools such as Kafka and Spark, or similar.
- Build resilient ELT/ETL flows for relational data, semi-structured events, PDFs, images, and log streams, with automated testing, lineage, and governance.
- Embed AI in routine workflows: document intelligence, prompt-driven data-quality checks, automated documentation, AI code development, and conversational data exploration.
- Deliver self-service semantic layers and pixel-perfect dashboards in Tableau, Looker, or Superset, empowering stakeholders to explore data in real time.
- Partner with Finance, HR, and Philanthropy to build predictive models, projections, and scenario analyses that guide budgeting, portfolio strategy, and grant outcomes.
- Gather requirements, translate business logic into elegant data models, and evangelize best practices across onshore/offshore teams.
- Set coding standards, review designs, and champion security, privacy, and cost-aware architecture.
Qualifications:
- Bachelor's or Master's in Computer Science (or equivalent experience).
- 8+ years designing and operating data platforms spanning structured and unstructured (documents, logs) workloads.
- Production expertise in data storage and processing pipelines with tools such as Airflow, Airbyte, Python, advanced SQL, cloud warehouses (Redshift or BigQuery), and real-time streaming (Kafka, Kinesis, or Pub/Sub) plus distributed processing on Spark.
- Mastery of dimensional and wide-table data modeling, and of performance tuning.
- Demonstrated use of GenAI/LLMs for document processing, conversational analytics, or code generation.
- Hands-on experience building predictive models, time-series forecasts, scenario projections, and financial data analyses.
- Working knowledge of CI/CD (GitHub), containerization (Kubernetes/Helm), and infrastructure-as-code.
- Clear, concise communicator able to align technical architecture with executive strategy and inspire cross-functional teams.