Job Title: Senior Data Engineer
Location: Beaverton, OR
Duration: 24 months (130 W, 5D)

Responsibilities:
• Python programmers/developers with extensive hands-on experience in the data engineering space.
• Willingness to quickly learn and adapt.
• Experience in designing and implementing data pipelines, data curation, data modeling, and data solutions.
• Strong understanding of different types of data and the data lifecycle.
• Design, develop, and launch extremely efficient and reliable data pipelines using Python frameworks to move data and to provide intuitive analytics to our partner teams.
• Collaborate with other engineers and Data Scientists to deliver the best solutions for the client.
• Diagnose and solve issues in our existing data pipelines and envision and build their successors.
Required Qualifications:
• Bachelor's degree in Computer Science or equivalent work experience.
• 8+ years of experience in IT.
• 6+ years of proficiency in Python, specifically for data processing, including libraries such as Pandas, NumPy, PySpark, PyOdbc, PyMsSQL, Requests, Boto3, SimpleClient, and Json.
• 4+ years of experience with Data Warehouse technologies such as Databricks and Snowflake.
• 4+ years of strong SQL skills (query performance, stored procedures, triggers, schema design) and knowledge of one or more RDBMS and NoSQL databases such as MSSQL/MySQL and DynamoDB/MongoDB/Redis.
• 4+ years of experience designing, developing, and managing REST APIs.
• 2+ years of strong AWS skills, including AWS Data Exchange, Athena, CloudFormation, Lambda, S3, AWS Console, IAM, STS, EC2, and EMR.
• 2+ years with ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
• 1+ year with Hadoop and Hive.
• Excellent verbal communication skills.
• Knowledge of DevOps practices and Git for agile planning and code repository management.
Additional Requirements:
- Bachelor's degree in Computer Science preferred, not required
- Databricks certification nice to have, not required
- 5-7 years' experience in a data engineering role
- Top skills include the following:
- Python
- Databricks
- Snowflake
- PySpark
- SQL
- Soft skills required:
- Excellent verbal and written communication skills (frequent stakeholder engagement)
- Someone who can come up with suggestions, not just an "order taker"
- AWS expertise and experience building data pipelines are also preferred