Thompson First Group is currently seeking a qualified Data Engineer for a client in Dallas, TX. This position is a full-time, permanent role. Ideal candidates will have at least one year of data engineering experience.
Job Summary:
Responsible for optimizing, automating, and maintaining our data pipelines and ensuring seamless data operations across platforms. Work closely with business stakeholders, analysts, and development teams to guarantee that data is accurate, accessible, and aligned with business objectives.
Responsibilities:
Data Automation & Integration:
- Migrate ownership of data automation processes from third-party vendors and optimize them in-house.
- Develop and maintain ETL/ELT pipelines to ensure seamless data movement and transformation.
- Quickly integrate changes in business processes into existing data workflows with minimal disruption.
Data Accuracy & Quality Control:
- Monitor and troubleshoot data flows to identify, diagnose, and resolve errors efficiently.
- Ensure consistent and accurate data across all platforms.
- Enforce data governance policies and implement data quality checks to maintain data accuracy and integrity.
Data Documentation & Oversight:
- Document data sources, transformations, and flow processes to ensure transparency and maintain institutional knowledge.
- Create and maintain data dictionaries, process documentation, and technical specifications.
Cross-Platform Consistency:
- Validate that business metrics and calculations are consistent across all platforms and reporting systems.
- Collaborate with analysts and stakeholders to standardize data definitions and ensure consistency across dashboards and reports.
Data Support & Stakeholder Collaboration:
- Provide timely and accurate data support to analysts and business stakeholders.
- Collaborate with cross-functional teams to understand business requirements and develop data solutions that align with organizational goals.
Data Optimization & Performance:
- Optimize data storage, processing, and querying for maximum efficiency and scalability.
- Evaluate and refine data models, query performance, and data infrastructure for improved efficiency.
Technical Skills and Experience:
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data storage solutions.
- Knowledge of programming languages such as Python, R, or Java for data manipulation and automation.
- Familiarity with version control systems (e.g., Git) and Agile methodologies.
- Experience with data governance frameworks and compliance standards.
- Bachelor's degree in Computer Science, Engineering, Statistics, or another related quantitative field.
- Strong SQL experience.
- 1-3 years of experience working with modern ETL tools.
Behavioral Competencies:
- Self-starter.
- Strong attention to detail and a passion for maintaining high-quality data.
- Ability to work independently while collaborating effectively with cross-functional teams.
- Adaptability to changing business requirements and ability to pivot quickly.
Additional Information:
- Position Type: HQ (up to 1 day per week remote)
- Supervisory responsibilities: None
- Travel requirement: Less than 10%
Compensation: $75,000 per year