Data Platform Engineer

Miami, Florida

Apolis
Role: Data Platform Engineer
Work location: Miami, FL, USA (onsite)
Rate: $70/hr


Job Description:
Essential Duties & Responsibilities:

• Maintain and optimize the data pipeline architecture, leveraging Kafka for efficient data streaming and processing.

• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud big data technologies.

• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

• Set up new users and grant them access to the data platforms.

• Manage and maintain production deployments, including ETL job deployments and patches to the data warehouse.

• Monitor production ETL jobs.

• Employ the latest security protocols, ensuring that we follow Information Security standards.

• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

• Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.

• Work with data and analytics experts to strive for greater functionality in data systems.

• Continuously automate workloads and strive to deliver more timely, accurate, and complete business analytics.

Qualifications:

• Bachelor's degree in an area of specialty required.

• 3+ years of experience in a data/cloud engineering role

• Experience with relational SQL and NoSQL databases

• Experience with cloud services/providers: AWS, Azure, etc.

• Experience with a scripting language: Python, R, etc.

• Proficiency in Kafka for data streaming and processing.

• Experience with *NIX (Unix/Linux) distributions

• Experience building and optimizing big data pipelines, architectures, and data sets.

• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

• Strong analytical skills for working with unstructured datasets.

• Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.

• Strong organizational skills.

• Experience supporting and working with cross-functional teams in a dynamic environment.

Knowledge & Skills:

• Knowledge of technical concepts, practices, and procedures in computer science or engineering within a particular field

• Proficiency in Python, Scala, or SQL.

• Expertise in Apache Spark, Hadoop, or Apache Kafka.

• Cloud experience in Azure (preferred) or AWS.

• Deployment and management of Databricks clusters.

• Mastery of Azure Data Factory (ADF) for data integration.

• Expertise with Databricks workspace, runtime, and Databricks Connect.

• Understanding of Databricks Unity Catalog.

• Knowledge of Delta Live Tables and Delta Engine.

• Familiarity with Delta Lake architecture.

• Experience with Kafka.

• Strong SQL skills.

• Able to communicate and implement technical solutions.

• Proven ability to collaborate with technical peers.

• Capable of working independently and as part of a team.

• Ability to assist and guide junior staff as necessary.

• Demonstrates creativity along with strong analytical and problem-solving skills.

• Strong command of methodologies, tools, best practices, and processes within the specific area of responsibility
Date Posted: 02 May 2025