Data Engineer

Plano, Texas

HexaQuEST Global
The Treasury Data Platform is a collection of tools and processes that manage and interact with data to orchestrate, source, transform, and make data available to consumers. This role requires an experienced and highly skilled software engineer to take part in the design and implementation of strategic data services for the Treasury group, standardizing our capabilities and establishing similar patterns across the broader organization. As a data engineer, the candidate will be expected to help the team craft data solutions that meet business and enterprise requirements using PySpark, Python, Oracle Exadata, and emerging data technologies, helping to orchestrate a metadata-driven data pipeline that implements ETL automation.
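As a rough illustration of the metadata-driven ETL pattern described above (a minimal sketch only, not the team's actual design; the metadata structure, paths, and column expressions are hypothetical), a PySpark job might read a per-dataset metadata record and run the corresponding extract-transform-load step:

```python
# Minimal sketch of a metadata-driven ETL step in PySpark.
# The pipeline metadata (source, transform, target) is hypothetical and
# would normally come from a config store rather than be hard-coded.
from pyspark.sql import SparkSession

PIPELINE_METADATA = [
    {
        "name": "daily_positions",
        "source": {"format": "csv", "path": "/data/raw/positions",
                   "options": {"header": "true"}},
        "transform": ["trade_date", "account_id",
                      "CAST(notional AS DOUBLE) AS notional"],
        "target": {"format": "parquet", "path": "/data/curated/positions",
                   "mode": "overwrite"},
    },
]

def run_step(spark, step):
    """Extract, transform, and load one dataset as described by its metadata."""
    src, tgt = step["source"], step["target"]
    df = spark.read.format(src["format"]).options(**src.get("options", {})).load(src["path"])
    df = df.selectExpr(*step["transform"])  # column-level transforms driven by metadata
    df.write.mode(tgt["mode"]).format(tgt["format"]).save(tgt["path"])

if __name__ == "__main__":
    spark = SparkSession.builder.appName("treasury-metadata-etl").getOrCreate()
    for step in PIPELINE_METADATA:
        run_step(spark, step)
    spark.stop()
```

In practice the metadata would also cover scheduling, lineage, and data-quality checks, but the same pattern of reading a declarative step definition and executing it applies.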

This is an individual contributor role and typically requires 10+ years of applicable experience. This role will require you to:

Develop, modify, and adopt tools and processes to support self-service data pipeline management.

Drive adoption of data tools to modernize existing ETL frameworks and processes, implementing them with a strategic mindset and in-depth knowledge of development tools and languages.

Collaborate with business and technology partners across the organization to assess data needs and prioritize adoption.

Identify additional strategic opportunities to evolve the data engineering practice.

10+ years of experience in data engineering or business intelligence.

Good understanding of the Spark/PySpark framework.

3+ years of experience with Python (e.g., Pandas, DataFrames) and its use in data processing solutions.

Minimum of 3 to 5 years of practical working experience with relational databases, preferably Oracle Exadata.

Familiarity with Data Warehouse / Data Mart / Business Intelligence concepts.

Hands-on experience with schema design and data modeling.

Strong practical working experience with Unix scripting in at least one of Python, Perl, or shell (bash or zsh).

Experience with performance tuning data transformations across large data sets.

Experience with CI/CD processes and tools (e.g. Ansible, Jenkins).

Design and development experience with modern technologies such as API management, REST API integration, containers, and microservices.

Banking / Capital Markets / Accounting domain knowledge.

Experience implementing ETL with PySpark.

Practical working experience using ETL tools, preferably Informatica.

Experience or familiarity with orchestration tools, such as Airflow (see the scheduling sketch after this list).

Experience designing and building data warehouses and knowledge of the data flows involved.

Software development in an Agile environment.

Working experience with Git, Jira, Confluence.

Excellent written and oral communication skills.

Willingness to take on problems outside of current skillset and experience.
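For the PySpark ETL and Airflow orchestration items above, the sketch below is a minimal, illustrative example only (assuming Airflow 2.x; the DAG id, script path, and spark-submit settings are hypothetical) of how a daily PySpark ETL run could be scheduled:

```python
# Hypothetical Airflow 2.x DAG that schedules a daily PySpark ETL run.
# The DAG id, script path, and spark-submit settings are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="treasury_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # Airflow 2.x; newer releases also accept `schedule=`
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_pyspark_etl",
        # Submit the (hypothetical) metadata-driven ETL script to the cluster.
        bash_command="spark-submit --master yarn /opt/etl/treasury_metadata_etl.py",
    )
```

A production DAG would add retries, alerting, and data-quality checks, but the orchestration pattern is the same.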

Date Posted: 01 May 2024