Job Title: Data Engineer
Location: Columbus, OH (Hybrid)
Duration: 6-month contract-to-hire
Job Description:
Supplier notes:
Must have:
- Teradata and DBMS knowledge
- Cloud knowledge, preferably AWS
- ETL and data pipeline knowledge
- CI/CD and data warehouse concepts
- Java and Spark
Nice to have:
- Informatica or Ab Initio
- PostgreSQL knowledge
- Python
- Snowflake knowledge
Job responsibilities
" Execute software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
" Write secure and high-quality code and maintains algorithms that run synchronously with appropriate systems.
" Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
" Apply knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
" Apply technical troubleshooting to break down solutions and solve technical problems of basic complexity.
" Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
" Proactively identify hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
" Contribute to software engineering communities of practice and events that explore new and emerging technologies.
" Add to team culture of diversity, equity, inclusion, and respect.
Required qualifications, capabilities, and skills
- 4 to 7 years of Spark-on-Cloud development experience
- 4 to 7 years of strong SQL skills; Teradata preferred, but experience with any other RDBMS is acceptable
- Proven experience understanding requirements related to the extraction, transformation, and loading (ETL) of data using Spark on Cloud
- Formal training or certification in software engineering concepts and 3+ years of applied experience
- Ability to independently design, build, test, and deploy code; should be able to lead by example and guide the team with their technical expertise
- Ability to identify risks/issues for the project and manage them accordingly
- Hands-on development experience and in-depth knowledge of Java/Python, microservices, containers/Kubernetes, Spark, and SQL
- Hands-on practical experience in system design, application development, testing, and operational stability
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Proficiency in coding in one or more programming languages
- Experience across the whole Software Development Life Cycle
" Proven understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security
" Proven knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
Preferred qualifications, capabilities, and skills
- Knowledge of data warehousing concepts
- Experience with Agile project methodology
- Knowledge of or experience with ETL technologies such as Informatica or Ab Initio
- People management skills preferred but not mandatory
Date Posted: 16 May 2024