Data Engineer

Detroit, Michigan

Apollo ITS
Apply for this Job
Job Title: Data Engineer
Location: Metro Detroit, MI-Hybrid
Duration: 12 Months
Note:
  • Must be LOCAL to Metro Detroit - HYBRID 3 days in office in Ann Arbor.
  • Interview: 20 minutes with the manager, then 1 hour with the manager and 1-2 team members (ETL scenarios and a SQL assessment)
Team/Project:
  • Will be working with the HR team on the UKG software (vendor). They have a BA who interfaces with the vendor, but it is always helpful to have a technical person there as well. This software holds store team member data (logins, hours, time off, etc.), i.e., all the HR data for store employees.
  • Miranda's team is about 13-14 devs and some BAs
Notes from the intake we had for a previous role:
  • Main tools
    • ETL: Talend; DataStage/Informatica might work, but let's begin with Talend
    • SQL Server DB
  • Outside of that all other things are a plus
    • Cloud: Azure is the plan; Kafka (for streaming) is helpful
    • Databricks/Lakehouse (especially for mar-tech people)
    • Python/shell scripting would be nice
  • Don't see a big need for big data or IoT, given the domain
  • Communication skills are big; 2-4 years of experience is what they want
Additional Job Details
The data engineering specialist will primarily focus on development of large volume information ingestion and transformation. This position is responsible for orchestrating data interfaces into (and out of) our Enterprise Data Warehouse using Talend, SQL, Python and other data engineering solutions.
GENERAL RESPONSIBILITIES
  • Design and develop ETL (Talend), SQL, and Python based processes to perform complex data transformations.
  • Design, code, and test major data processing features, as well as work jointly with other team members to provide complex software enhancements for the enterprise data storage platforms (RDBMS, Lakehouse, No-SQL platforms)
  • Build data integration solutions to handle batch, streaming, and IoT data on ETL and big-data platforms
  • Develop and deliver changes in the Enterprise Data Warehouse according to data warehousing best practices
  • Gather requirements and construct documentation to aid in maintenance and code reuse in accordance with team processes and standards
  • Monitor scheduled jobs and improve reliability of ongoing processing
  • Monitor, measure, and enhance ways to improve system performance
  • Multi-task deliverables and manage them efficiently
  • Perform other duties as assigned
QUALIFICATIONS
  • Understanding of and hands-on experience with key technologies (SQL, ETL, data modeling, data processing)
  • Strong SQL skills with Relational DBMS technology - SQL Server
  • 2-4 years of hands-on experience with ETL tools (Talend preferred)
  • Good understanding of and expertise in relational database and data processing concepts
  • Experience handling multiple data formats (delimited files, JSON, XML, etc.)
  • Experience with data lakehouses (Databricks) and cloud technologies (Azure) a plus
  • Experience with customer and digital marketing data, including implementing Customer Data Platform and Identity Resolution solutions and integrating with an ESP (Salesforce), a plus
  • Hands-on experience designing and implementing data ingestion techniques for real-time processes (IoT, eCommerce) a plus
  • Development experience in a big-data environment (Spark, Kafka, message queues) a plus
  • Experience with shell scripting / Python a plus
  • Strong communication skills (oral and written)
  • Good analytical and problem-solving skills
  • Knowledge of CRM, MDM, HR Reporting and Business Intelligence a plus
  • Candidate must be thorough, detail-oriented, and a team player
  • Able to work on multiple priorities in a deadline-driven environment
TOP MUST HAVES:
  • ETL (Talend)
  • SQL
  • Cloud (Azure) and Kafka are helpful
Date Posted: 26 March 2025