Apply for this Job
Insight Global is seeking an Offshore Senior Data Engineer to join the Data Engineering team. As part of a major data migration project from on-premises SQL to Databricks, you will play a crucial role in designing, building, and optimizing data pipelines using Databricks, Python, PySpark, and SQL. You will also be responsible for evaluating the existing SQL environment and converting SQL code to Databricks SQL and PySpark to improve performance and scalability, conducting testing to ensure data accuracy and performance, and deploying new data pipelines and models in the Databricks environment. This is an exciting opportunity to work on an innovative project and contribute to the transformation of the organization's data infrastructure.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to .
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: .
Required Skills & Experience
- 5+ years of experience working as a Data Engineer, enhancing data processing and data analytics capabilities.
- Experience with designing, building, and optimizing data pipelines using Databricks, Python, PySpark, and SQL.
- Experience designing and implementing scalable ETL pipelines using Databricks, Python, PySpark, and SQL.
- Experience converting existing SQL code, including stored procedures and ETL scripts, to Databricks SQL and PySpark.
- Thorough knowledge of the data workflow: data ingestion, data processing, data analysis, and visualization.
- Must be able to work PST or CST hours and occasionally provide after-hours support.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
Date Posted: 13 April 2025