Research Engineer

San Francisco, California

Anthropic Limited
You want to build large-scale ML systems from the ground up. You care about making safe, steerable, trustworthy systems. As a Research Engineer, you'll touch all parts of our code and infrastructure, whether that's making the cluster more reliable for our big jobs, improving throughput and efficiency, running and designing scientific experiments, or improving our dev tooling. You're excited to write code when you understand the research context and, more broadly, why it's important.

Note: This is an "evergreen" role that we keep open on an ongoing basis. We receive many applications for this position, and you may not hear back from us directly if we do not currently have an open role on any of our teams that matches your skills and experience. We encourage you to apply despite this, as we are continually evaluating for top talent to join our team. You are also welcome to reapply as you gain more experience, but we suggest only reapplying once per year. We may also put up separate, team-specific job postings. In those cases, the teams will give preference to candidates who apply to the team-specific postings, so if you are interested in a specific team, please make sure to check for team-specific job postings.

You may be a good fit if you:
  • Have significant software engineering experience
  • Are results-oriented, with a bias towards flexibility and impact
  • Pick up slack, even if it goes outside your job description
  • Enjoy pair programming (we love to pair)
  • Want to learn more about machine learning research
  • Care about the societal impacts of your work
Strong candidates may also have experience with:
  • High performance, large-scale ML systems
  • GPUs, Kubernetes, PyTorch, or OS internals
  • Language modeling with transformers
  • Reinforcement learning
  • Large-scale ETL
Representative projects:
  • Optimizing the throughput of a new attention mechanism
  • Comparing the compute efficiency of two Transformer variants
  • Making a Wikipedia dataset in a format models can easily consume
  • Scaling a distributed training job to thousands of GPUs
  • Writing a design doc for fault tolerance strategies
  • Creating an interactive visualization of attention between tokens in a language model
Date Posted: 23 May 2024