We only consider candidates who submit a CV. To make sure we receive yours, use the "Send CV" button on the right or email [email protected]
Tasks:
- Hiring and leading the Data Engineering and MLOps team;
- Developing scalable and reliable ML infrastructure components to facilitate the deployment of ML models;
- Cooperating closely with the ML team, core platform developers, and architects for maximum impact across the organization;
- Serving as a critical resource for implementing the research team's ideas and bringing them to market.
Must-haves:
- 4+ years of experience building scalable and reliable ML infrastructure components and data ingestion and processing pipelines;
- Experience deploying containerized ML APIs to the public cloud (Azure, AWS, or Google Cloud) using Docker and an orchestration platform such as Kubernetes or OpenShift;
- Experience with ML lifecycle platforms (MLflow, Kubeflow);
- Experience in designing data processing pipelines using Kafka, Elasticsearch, and Kibana;
- Knowledge of logging and monitoring tools such as Grafana, Prometheus, or Splunk;
- Experience with Python;
- Intermediate level of English.
Nice-to-haves:
- Experience with serverless platforms such as AWS Lambda, Google Cloud Functions, Nuclio, or Knative;
- Knowledge of deployment management tools, such as Terraform or Helm;
- Experience with Data Science tools in an AWS environment;
- Experience with progressive delivery techniques such as A/B testing, canary deployments, and multi-armed bandits;
- Experience managing Linux systems (the customer uses Ubuntu/Debian on bare-metal servers and Amazon Linux on EKS and EC2 instances);
- Experience with a bare-metal Kubernetes cluster.