Work for one of our portfolio companies

MLOps Engineer

PicCollage

Taipei City, Taiwan
Posted on Oct 1, 2024
As an MLOps Engineer, you will play a key role in designing, implementing, and maintaining robust machine learning platforms and data pipelines, ensuring smooth deployment, scaling, and monitoring of models in production. You will drive automation to accelerate the development, evaluation, and integration of machine learning models, improving collaboration and overall efficiency.

In addition to optimizing production environments, you will act as a bridge between machine learning developers and software engineers, ensuring seamless integration of ML systems into applications. You will also share best practices for MLOps and have the opportunity to work on high-impact projects that reach millions of users, as well as help bring innovative new applications to market.

Responsibilities:

  • Design, implement, and maintain machine learning platforms and data pipelines, ensuring seamless deployment, scaling, and monitoring of models in production environments.
  • Set up monitoring systems for deployed models and track key metrics.
  • Apply and share software engineering best practices within the context of machine learning.
  • Collaborate with ML developers to ensure model performance is maintained in production and work with software engineers to integrate ML systems into the broader application stack.
  • Accelerate machine learning development, evaluation, and integration speed through automation of workflows, tools, and processes to enhance collaboration and efficiency.

Qualifications:

  • Strong programming skills in Python.
  • Proficiency with containerization tools (Docker, Kubernetes) and cloud platforms (GCP, AWS, Azure; expertise in at least one).
  • Experience working with backend servers and APIs (e.g., FastAPI, Django, or similar frameworks).
  • Experience with machine learning frameworks (PyTorch, TensorFlow).
  • Experience with MLOps tools (Kubeflow, MLflow, TFX).
  • Experience with monitoring tools (Prometheus, Grafana) and logging frameworks.
  • Knowledge of data engineering concepts (ETL pipelines, data lakes, data warehouses).