Position: Data Engineer
Location: 100% Remote
Duration: 6+ Months
Experience: 8+ Years
Technologies We Use:
- AWS Serverless Services Stack (S3, Athena, Lambda, SQS, SNS, Step Functions, Redshift, Glue, etc.)
- Python
- SQL
- Bitbucket/GitHub
- MSSQL Server
- Salesforce
Your Role:
- Execute on a shared technical architectural direction, long-term vision, and guiding principles for data and platforms, and ensure alignment with the enterprise technology strategy.
- Design, build, and modify AWS data integration pipelines to modernize and harden process models for extracting data from operational platforms into the data lake.
- Build and manage data models (relational, dimensional, and NoSQL) to support a standardized processing environment. Develop a deep understanding of how dozens of data sources can be combined into a 360-degree view.
- Provide support for and own enterprise integrations (e.g., Salesforce, AWS Cognito, enterprise APIs, Google Analytics, ecommerce).
- Develop integration solutions in the incubation phase and build prototypes as necessary to validate architecture approaches.
- Develop and maintain SQL queries to support aggregations used for reporting and marketing automation.
- Maintain high standards of process documentation. Define the non-functional requirements against which the solution is designed, managed, and delivered.
- Partner with architecture and engineering leaders to drive alignment on the right balance between consistency and flexibility in our cloud adoption approach.
- Mentor, educate and train colleagues as requested.
Qualifications
Required Experience:
- 3+ years of experience architecting and engineering AWS cloud-based data solutions.
- 7+ years of experience in Data Engineering for mid-size to large corporations.
- Experience supporting and working with data platforms.
- Extensive Python or C coding experience applied to data processing.
- Excellent SQL skills.
- Experience in data modeling (relational, dimensional, and NoSQL).
- Experience building data integration pipelines for transactional and analytical applications.
- Experience with cloud-native data and parallel data processing architectures (Data Lake architectures, MPP database engines, serverless data processing pipelines, Spark, etc.)
- Experience consuming data from various sources (databases, file sources, REST services, SOAP services, web scraping, etc.) and in various formats (CSV, TSV, JSON, XML, HTML, unstructured text, etc.) using Python or C.
- Experience executing DevOps methodologies and Continuous Integration/Continuous Delivery (CI/CD).
- Experience with Agile (Scrum, Kanban, etc.) or lean development, and the ability to discuss workflows for different software development processes.
- Detail- and results-oriented, with a strong product and customer focus.
- Strong analytical and problem-solving skills.
- The ability to work in a team environment.
- Excellent verbal and written communication skills with the ability to talk to technical and non-technical people, including executive management.
- Drive to learn and keep pace with the latest advances in the field, and the ability to rapidly grasp new technologies to support the environment and contribute to project deliverables.
What We Look For:
- Experience with products in cloud deployment.
- Experience building and implementing scalable applications that use modern design patterns that leverage Cloud platforms.
- Experience leveraging domain-driven design principles to support legacy re-platforming efforts.
- Experience designing message-based interaction models to support distributed data across the enterprise.
- Excellent problem-solving skills.
- Strong development skills - must be able to work "hands-on."
- A strong team player who enjoys working in a fast-paced atmosphere.
- The ability to manage multiple priorities, commitments, and projects, and to organize effectively.
- Self-motivated and passionate about what you do.
- Strong written and verbal communication skills.