Data Scientist

Pittsburgh, Pennsylvania

Clarium Managed Services

Overview:

Clarium is seeking a highly skilled and analytical Data Scientist with strong expertise in predictive modeling, data analysis, and reporting. The ideal candidate will play a critical role in analyzing performance data and supporting the scalability and efficiency of our OpenShift Container Platform (OCP) and legacy application environments. If you're passionate about data-driven insights and performance engineering, we invite you to apply.

Key Responsibilities:

  • Maintain and enhance predictive models for performance engineering and capacity forecasting across OpenShift and legacy systems.
  • Analyze production usage data to identify trends, anomalies, and capacity constraints.
  • Collect and consolidate performance data from various sources (monitoring tools, logs, telemetry systems, etc.).
  • Compare real-world usage with test environment simulations to refine forecasting models.
  • Develop and maintain dashboards and visual reports on capacity headroom and performance metrics.
  • Monitor infrastructure performance (CPU, memory, disk, etc.) and proactively identify optimization opportunities.
  • Collaborate with engineering, DevOps, and business stakeholders to align capacity plans with future demand.
  • Provide data-driven insights and recommendations to ensure optimal system performance and scalability.
  • Document capacity planning strategies, models, and best practices.

Must-Have Skills:

  • 7+ years of experience in data analysis, predictive modeling, or performance engineering.
  • Strong hands-on experience with OpenShift Container Platform (OCP) infrastructure.
  • Solid background in data analysis and performance reporting.
  • Excellent analytical thinking and problem-solving skills.
  • Strong communication and collaboration skills.
  • High attention to detail and accuracy.

Preferred Qualifications:

  • Experience with containerization platforms (e.g., Docker, Kubernetes).
  • Familiarity with cloud platforms (AWS, Azure, or Google Cloud).
  • Experience with monitoring/logging tools (e.g., Prometheus, Grafana, ELK).
  • Proficiency in data visualization tools (e.g., Tableau, Power BI).
  • Working knowledge of scripting languages (e.g., Python, Bash).
  • Exposure to performance testing tools (e.g., JMeter, LoadRunner, Gatling).
  • Understanding of DevOps methodologies and ITIL practices.
  • Experience with agile project management tools and methodologies.
  • Familiarity with business intelligence or data warehousing concepts.

Date Posted: 02 May 2025
Apply for this Job