About the job Lead Data Engineer
Key Skills: Snowflake, SQL, Python, Spark, AWS Glue, Big Data concepts
Must have:
- 10+ years of relevant experience in data engineering and delivery.
- 10+ years of relevant work experience with Big Data concepts, including cloud implementations.
- Strong experience with SQL, Python, and PySpark.
- Good understanding of data ingestion and data processing frameworks.
- Good experience with Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
- Good aptitude, strong problem-solving and analytical skills, and the ability to take ownership as appropriate.
- Able to code, debug, performance-tune, and deploy applications to the production environment.
- Experience working in an Agile methodology.
Good to have:
- Experience with DevOps tools (Jenkins, Git, etc.) and practices, including continuous integration and delivery (CI/CD) pipelines.
- Experience with cloud implementations, data migration, Data Vault 2.0, etc.
Requirements:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer, with a focus on AWS and Snowflake.
- Strong understanding of data warehousing concepts and best practices.
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
- Experience in the insurance industry, preferably with knowledge of claims and loss processes.
- Proficiency in SQL, Python, and other relevant programming languages.
- Strong problem-solving skills and attention to detail.
- Ability to work independently and as part of a team in a fast-paced environment.
Preferred Qualifications:
- Experience with data modeling and ETL processes.
- Familiarity with data governance and data security practices.
- Certification in AWS or Snowflake is a plus.