Data Engineer
Digital Asset Investment Firm

We are a leading digital asset investment firm that operates to institutional-grade standards, integrating business areas such as Alpha Strategies, Trading, and Asset Management. Founded by experienced professionals with deep expertise in finance, blockchain, and technology, we share a common passion for digital assets. Our core values are rooted in openness, connectivity, collaboration, and strong partnerships throughout the digital asset ecosystem. People and technology are central to our mission.
We are looking for an experienced Data Engineer to join our team. In this role, you'll play a critical part in developing and managing our data infrastructure. The position requires strong technical expertise and the ability to work closely with quantitative researchers to process, analyze, and manage the large datasets that fuel our quantitative investment strategies.
Key Responsibilities:
- Collaborate with quantitative researchers to understand their data needs
- Enhance the performance and usability of our petabyte-scale data lake, built with Python
- Create high-performance, real-time event-driven datasets using both live and historical market data
- Oversee the development, testing, and deployment of machine learning-based trading models
- Integrate external APIs and third-party data sources for structured and unstructured data ingestion
- Build, refine, and optimize data pipelines for efficient ETL processes that support research, analysis, forecasting, and execution
- Implement automated measures to ensure the integrity, accuracy, and consistency of data inputs and outputs (a minimal sketch of such a check follows this list)
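
To give a flavor of this kind of work, here is a minimal sketch of a data-quality gate built with Apache Arrow: it filters rows that fail a simple validity rule before persisting a batch to Parquet. The tick schema, file path, and the price > 0 rule are illustrative assumptions, not a description of our actual pipeline.

    import pyarrow as pa
    import pyarrow.compute as pc
    import pyarrow.parquet as pq

    # Hypothetical tick schema, for illustration only.
    schema = pa.schema([
        ("ts", pa.timestamp("us")),
        ("symbol", pa.string()),
        ("price", pa.float64()),
        ("size", pa.int64()),
    ])

    def validate_and_write(batch: pa.Table, path: str) -> None:
        """Drop rows failing the price > 0 check, then write the rest to Parquet."""
        clean = batch.filter(pc.greater(batch["price"], 0.0))
        dropped = batch.num_rows - clean.num_rows
        if dropped:
            print(f"dropped {dropped} invalid rows")
        pq.write_table(clean, path)

    # Toy batch: the second row fails validation and is dropped.
    table = pa.table(
        {
            "ts": [1_700_000_000_000_000, 1_700_000_000_000_500],
            "symbol": ["BTC-USD", "ETH-USD"],
            "price": [42_000.5, -1.0],
            "size": [3, 10],
        },
        schema=schema,
    )
    validate_and_write(table, "ticks.parquet")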
Skills and Qualifications:
- At least 3 years of experience in a similar role, preferably within a quantitative hedge fund or proprietary trading firm
- Solid understanding and hands-on experience with L2 and L3 market data
- Bachelor's degree in Computer Science or a related discipline
- Strong programming skills in Python and Rust, with experience using Linux and Docker
- Familiarity with open-source data tools like Apache Arrow, and distributed computing tools such as Ray or Dask
- Proven experience in designing and optimizing data pipelines, data modeling, and ETL processes
- Expertise in building event-driven applications using tools like Protobuf, Kafka, and Schema Registry (see the consumer sketch after this list)
- Familiarity with AWS and its data services
- Strong collaboration skills, with the ability to work closely with quantitative researchers and translate their needs into technical solutions
- Excellent analytical and problem-solving abilities to process large datasets and identify actionable insights
- Highly detail-oriented and methodical in your work
- Able to thrive in a fast-moving, dynamic work environment and adapt to new technologies
- Excellent written and verbal communication skills
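
For a sense of the event-driven side, the sketch below shows a minimal Kafka consumer loop in Python using the confluent-kafka client. The broker address, topic name, and group id are placeholders, and a production pipeline would decode Protobuf payloads against a Schema Registry rather than print raw bytes.

    from confluent_kafka import Consumer

    # Broker, group id, and topic are placeholders, not our actual setup.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "research-feed",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["market-data.trades"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # block up to 1s for the next event
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            # A real pipeline would deserialize msg.value() with a Protobuf
            # deserializer tied to the Schema Registry; here we print raw bytes.
            print(msg.topic(), msg.partition(), msg.offset(), msg.value()[:32])
    finally:
        consumer.close()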