Data Engineer

Newark, New Jersey

Seven Seven Software
Qualifications:

Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience

Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises

Programming experience with Python, shell scripting, and SQL

Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, and SM

Solid experience implementing solutions on AWS-based data lakes

Good experience with AWS services: API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, and Elasticsearch

Serverless application development using AWS Lambda

Experience in AWS data lake/data warehouse/business analytics

Experience in system analysis, design, development, and implementation of data ingestion pipelines on AWS

Knowledge of ETL/ELT

Experience delivering end-to-end data solutions (ingestion, storage, integration, processing, access) on AWS

Ability to architect and implement a CI/CD strategy for the EDP

Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred)

Migrate data from traditional relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift

Migrate data from APIs to AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift

Implement POCs for new technologies or tools under consideration for the EDP and onboard them for real use cases

AWS Solutions Architect or AWS Developer Certification preferred

Good understanding of lakehouse/data cloud architecture

Responsibilities:

Design, build, and maintain efficient, reusable, and reliable architecture and code

Build reliable and robust data ingestion pipelines (within AWS, on-premises to AWS, etc.)

Ensure the best possible performance and quality of high-scale data engineering projects

Participate in the architecture and system design discussions

Independently perform hands-on development and unit testing of applications

Collaborate with the development team to build individual components into complex enterprise web systems

Work in a team environment with product, production operations, QE/QA, and cross-functional teams to deliver projects throughout the full software development lifecycle

Identify and resolve performance issues

Keep up to date with new technologies and their implementation

Participate in code reviews to ensure standards and best practices are met

Date Posted: 19 May 2025