Experience working in a Google environment and building applications and integrations using tools such as Replicator (built on Apache Beam) is preferred.

Responsibilities
- Design, develop, and maintain data pipelines that integrate data from various cloud platforms (e.g., Google Cloud) and systems.
- Build integration points between data in one environment and software in another to ensure seamless data flow and interoperability.
- Collaborate with cross-functional teams to understand data requirements and ensure data quality and integrity.
Qualifications and Experience
- Expertise in programming languages such as Java is mandatory.
- Proficiency with SQL and ETL tools (e.g., Apache NiFi, Talend, Informatica).
- Experience with data processing technologies and platforms (e.g., Apache Beam, Dataflow, Hadoop, Spark); building both streaming and batch pipelines is mandatory (see the sketch after this list).
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Pub/Sub).
- Knowledge of API design principles and best practices for performance and scalability.
- Familiarity with cloud platforms (GCP) and associated data services.
- Experience building scalable solutions in large distributed systems.
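To give a concrete sense of the streaming and batch pipeline work this role involves, here is a minimal Apache Beam sketch in Java. It is an illustrative sketch only, not part of the role description; the bucket paths and class name are hypothetical placeholders.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class LineCountPipeline {
  public static void main(String[] args) {
    // Parse runner options from the command line (DirectRunner locally,
    // DataflowRunner on Google Cloud).
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLines", TextIO.read().from("gs://example-bucket/input/*.txt")) // hypothetical path
     .apply("CountLines", Count.globally())
     .apply("Format", MapElements.into(TypeDescriptors.strings())
                                 .via((Long count) -> "line count: " + count))
     .apply("WriteResult", TextIO.write().to("gs://example-bucket/output/result")); // hypothetical path

    // For a streaming variant, the TextIO source could be swapped for
    // PubsubIO.readStrings().fromSubscription(...) plus a windowing transform.
    p.run().waitUntilFinish();
  }
}
```

The same pipeline code runs locally with the DirectRunner or on Google Cloud by passing --runner=DataflowRunner, which is how Beam lets one codebase cover both batch and streaming execution.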
Date Posted: 06 April 2025