Role: Data Engineer
Experience: 3-5 yrs
Notice Period: Immediate
Work Mode: Remote
Job Description
Requirements:
- 3-5 years of experience in data engineering, data warehousing, and business intelligence solutions.
- Hands-on experience with Azure Synapse Analytics, Azure Data Factory, Data Lake, Azure SQL, and Databricks.
- Proficiency in writing optimized SQL queries and Spark development (PySpark, Spark SQL).
- Experience with hybrid cloud deployments and integration between on-premises and cloud environments.
- Strong understanding of data engineering best practices, such as code modularity, documentation, and version control.
- Knowledge of data and analytics concepts, including dimensional modeling, ETL, reporting tools, data governance, and data warehousing.
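The dimensional-modeling and ETL concepts listed above can be illustrated with a minimal, library-free Python sketch (in practice this would run in PySpark or Azure Data Factory); the `raw_sales`, `dim_product`, and `fact_sales` names are illustrative, not from the role:

```python
# Minimal sketch of a dimensional-modeling (star schema) ETL step.
# Raw source rows (extract); names and values are illustrative only.
raw_sales = [
    {"order_id": 1, "product": "Widget", "category": "Tools", "amount": 9.99},
    {"order_id": 2, "product": "Gadget", "category": "Tools", "amount": 24.50},
    {"order_id": 3, "product": "Widget", "category": "Tools", "amount": 9.99},
]

# Transform: build a product dimension with surrogate keys.
dim_product = {}
for row in raw_sales:
    key = (row["product"], row["category"])
    if key not in dim_product:
        dim_product[key] = {
            "product_key": len(dim_product) + 1,
            "product": row["product"],
            "category": row["category"],
        }

# Load: build the fact table, replacing natural attributes with surrogate keys.
fact_sales = [
    {
        "order_id": row["order_id"],
        "product_key": dim_product[(row["product"], row["category"])]["product_key"],
        "amount": row["amount"],
    }
    for row in raw_sales
]

print(len(dim_product), len(fact_sales))  # 2 distinct products, 3 fact rows
```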
Responsibilities:
- Design, implement, and support data warehousing and business intelligence solutions using Azure Synapse and Microsoft Fabric.
- Develop scalable and efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python.
- Implement ETL (Extract, Transform, Load) processes to extract data from diverse sources, transform it into suitable formats, and load it into data warehouses or analytical systems.
- Write optimized SQL queries on Azure Synapse Analytics (dedicated and serverless resources).
- Troubleshoot and resolve complex issues related to Spark core internals, Spark SQL, Structured Streaming, and Delta.
- Monitor and fine-tune data pipelines and processing workflows to enhance performance and efficiency.
- Ensure data security and compliance with data privacy regulations.
- Apply Medallion and Lambda architecture patterns when designing data platforms.
- Build and maintain streaming data pipelines using Kafka and related streaming technologies.
- Collaborate with business stakeholders to gather requirements and create comprehensive technical solutions.
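The Medallion (bronze/silver/gold) layering mentioned above can be sketched with plain Python standing in for PySpark/Delta Lake; the layer contents and field names here are illustrative assumptions, not part of the role:

```python
# Hedged sketch of the Medallion pattern: bronze (raw) -> silver (clean) -> gold (aggregated).

# Bronze: raw ingested records, kept as-is (may contain duplicates and bad rows).
bronze = [
    {"id": "1", "value": "10"},
    {"id": "1", "value": "10"},   # duplicate
    {"id": "2", "value": "bad"},  # unparseable value
    {"id": "3", "value": "7"},
]

# Silver: cleanse and deduplicate, casting types and dropping invalid rows.
seen, silver = set(), []
for row in bronze:
    try:
        rec = {"id": int(row["id"]), "value": int(row["value"])}
    except ValueError:
        continue  # skip rows that fail validation
    if rec["id"] not in seen:
        seen.add(rec["id"])
        silver.append(rec)

# Gold: business-level aggregate ready for reporting.
gold = {"row_count": len(silver), "total_value": sum(r["value"] for r in silver)}
print(gold)  # {'row_count': 2, 'total_value': 17}
```

In a production pipeline each layer would typically be a Delta table written by a PySpark notebook, with the same validate-then-aggregate flow.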
Interested candidates can share their resumes to