Sr Data Engineer

Pittsburgh, Pennsylvania

TDK SensEI

This position is based in our Pittsburgh, PA office. Please apply only if you are located there or willing to relocate.

At TDK SensEI, we are transforming how industrial customers utilize and interact with sensor data. We specialize in developing advanced AI solutions capable of running directly on edge devices. By processing data locally, TDK SensEI enhances real-time decision-making, privacy, security, and cost efficiency. Our offerings include automated machine learning tools, AI-powered condition-based monitoring systems, and various sensor devices optimized for low latency and power consumption. Collaborating with leading global companies, we empower teams to effortlessly devise and implement machine learning solutions for industrial applications, all without the need for coding.

We are seeking a Senior Data Engineer to join our team and play a pivotal role in designing, implementing, and maintaining scalable data solutions. As a Senior Data Engineer, you will be responsible for building and optimizing our data pipelines and architecture, ensuring the seamless flow of data across our systems. You will work closely with Machine Learning Engineers and Software Engineers to support their data needs and contribute to the development of innovative data-driven solutions.

As a Senior Data Engineer, you will:

Design, develop, and maintain scalable data pipelines and architectures

Collaborate with Machine Learning Engineers and Software Engineers to understand data requirements and deliver solutions

Implement data governance and security best practices to ensure data integrity and compliance

Optimize data workflows for performance and cost-efficiency

Monitor and troubleshoot data pipeline issues to ensure high availability and reliability

Stay up-to-date with emerging data engineering technologies and best practices

Skills & Requirements:

At least 5 years of industry experience as a Data Engineer, with a strong background in building and maintaining data infrastructure

Expert coding ability and fluency in Python or another object-oriented language

Strong understanding of data modeling, ETL/ELT processes, and data lake & warehouse concepts

Experience with big data technologies such as Apache Spark and Kafka

Familiarity with data governance, security, and compliance best practices

Excellent problem-solving skills and the ability to work independently and collaboratively with cross-functional teams

US Work Authorization required

Bonus Qualifications:

Extensive experience with AWS services, including but not limited to S3, Lambda, IoT Core, and Greengrass

Experience with AWS SageMaker and its integration with data systems

Experience with CI/CD pipelines and DevOps practices

Experience with Docker and containerization

Experience with SQL and NoSQL databases

Date Posted: 17 April 2025