Data Engineer

Seattle, Washington

Diverse Lynx
Data Engineer (Java/Python, Spark SQL, Hadoop, Hive, Databricks, AWS, GCP or Azure)
Seattle, WA
Contract Role

Onsite from Day 1

Essential Skills:
  • Software development experience in big data technologies: Databricks, Hadoop, Hive, Spark (PySpark)
  • Proficiency in data processing using technologies like Spark Streaming and Spark SQL (see the brief sketch below)
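
A minimal, hypothetical PySpark sketch of the kind of Spark SQL processing referenced above; the table, column names, and storage paths are illustrative assumptions, not details from this posting.

    # Illustrative only: names and paths below are assumptions, not from the job description.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-daily-sales").getOrCreate()

    # Read a hypothetical structured source landed by an upstream job.
    orders = spark.read.parquet("s3://example-bucket/orders/")

    # Typical Spark SQL-style transformation: daily sales per store.
    daily_sales = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("store_id", "order_date")
        .agg(F.sum("amount").alias("total_sales"),
             F.count("*").alias("order_count"))
    )

    # Write back out, partitioned for downstream consumers.
    daily_sales.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/marts/daily_sales/"
    )
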
Summary of Key Responsibilities and essential job functions include, but are not limited to, the following:
  • Lead large-scale, complex, cross-functional projects and build the technical roadmap for the WFM Data Services platform
  • Lead and review design artifacts
  • Build and own the automation and monitoring frameworks that provide stakeholders with reliable, accurate, easy-to-understand metrics and operational KPIs for data pipeline quality (a brief illustrative sketch follows this list)
  • Execute proofs of concept on new technologies and tools to select the best tools and solutions
  • Support business objectives by collaborating with business partners to identify opportunities and drive resolution
  • Communicate status and issues to senior Starbucks leadership and stakeholders
  • Direct the project team and cross-functional teams on all technical aspects of the projects
  • Lead the engineering team to build and support real-time, highly available data, data pipeline, and technology capabilities
  • Translate strategic requirements into business requirements to ensure solutions meet business needs
  • Define and implement data retention policies and procedures
  • Define and implement data governance policies and procedures
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability
  • Enable the team to pursue insights and applied breakthroughs while driving solutions to Starbucks scale
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of structured and unstructured data sources using big data technologies
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Perform root cause analysis to identify permanent resolutions to software or business process issues
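
As a rough illustration of the data pipeline quality metrics and KPIs mentioned above, the following hypothetical PySpark sketch computes a few basic checks (row count, null-key rate, duplicate keys); the dataset path and column names are assumptions, not taken from this posting, and a real framework would publish these figures to a dashboard or alerting system rather than print them.

    # Illustrative only: path and column names are assumptions, not from the job description.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-pipeline-quality-metrics").getOrCreate()

    df = spark.read.parquet("s3://example-bucket/marts/daily_sales/")

    # Basic quality signals a monitoring framework might track per pipeline run.
    row_count = df.count()
    null_keys = df.filter(F.col("store_id").isNull()).count()
    duplicate_key_groups = (
        df.groupBy("store_id", "order_date").count().filter(F.col("count") > 1).count()
    )

    # A real framework would push these metrics to a dashboard or alerting system.
    print({
        "row_count": row_count,
        "null_key_rate": (null_keys / row_count) if row_count else 0.0,
        "duplicate_key_groups": duplicate_key_groups,
    })
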
Basic Qualifications:
  • 5+ years of experience with object-oriented/object-function scripting languages: Python, Java, etc.
  • 3+ years of experience leading development of large-scale cloud-based services on platforms like AWS, GCP, or Azure, and developing and operating cloud-based distributed systems
  • Experience building and optimizing data pipelines, architectures, and data sets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Strong computer science fundamentals in data structures, algorithm design, problem solving, and complexity
  • Working knowledge of message queuing, stream processing, and highly scalable "big data" systems

Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.
Date Posted: 09 May 2025