Data Pipeline Engineer

Chandler, Arizona

Harnham
Title: Data Pipeline Engineer

Location: Chandler, AZ 85225 - Fully On-Site

Overview of Role:

As a Data Pipeline Engineer, you will play a crucial role in architecting, implementing, and scaling the data initiatives that drive our business forward. We are looking for a passionate individual eager to learn and work with cutting-edge technologies in a dynamic environment. This multifaceted position blends data engineering, governance, and integration to enhance our data capabilities.

Company Description:

Join our client, an exciting startup focused on strategic investments aimed at capital growth. As they expand their operations, they seek a dedicated and knowledgeable team member who shares their vision and enthusiasm.

Role Description:
  • Design and implement robust data pipelines using technologies such as Apache Spark, Hadoop, and Kafka.
  • Utilize AWS or Azure services (e.g., EC2, RDS, S3, Lambda, Azure Data Lake) to facilitate efficient data handling and processing.
  • Develop and enhance data models and storage solutions (SQL, NoSQL) to guarantee data quality and accessibility for both operational and analytical applications.
  • Automate data workflows with ETL tools and frameworks (e.g., Apache Airflow, Talend) to ensure timely data availability and integration.
  • Collaborate with data scientists, providing necessary infrastructure and tools for complex analytical models using Python or R.
  • Implement best practices for data governance and security compliance, focusing on encryption, masking, and access controls within a cloud environment.
Skills and Experience:
  • Bachelor's degree in Computer Science, MIS, or related field.
  • Strong background in ETL/ELT pipeline development, specifically with dbt.
  • Extensive experience with PostgreSQL and the ability to discuss it in depth.
  • Proficient in coding with Python and SQL.
  • Familiarity with cloud computing environments (AWS, Azure, GCP) and Data/ML platforms (Databricks, Spark).
  • Experience with NoSQL databases such as Apache Cassandra or MongoDB.
  • Knowledge of common Python data engineering libraries and frameworks, including scikit-learn, pandas, NumPy, Airflow, Kafka, and Spark.
Benefits:
  • Health, Dental, and Vision Insurance.
  • Details of additional benefits to be determined.
Please note: Candidates must be authorized to work in the United States to be considered at this time.

Date Posted: 23 April 2025