Andersen, an international IT company, invites an experienced Data Engineer to join its team and work with a US company.
The project involves creating new solutions, such as intelligent personal assistants for enterprise engineers. These are SaaS solutions that work with data and knowledge sources, analyzing them and applying them to solve specific problems.
The customer is a company that provides the financial sector and other business areas with analytical services, including the data, expertise, and technology needed for day-to-day decision-making.
Tech stack on the project: Python, Kafka, JSON, Protocol Buffers, Apache Avro, AWS, Google Cloud Platform, Apache Airflow, Luigi, Apache Oozie, Apache NiFi, Apache Beam, Make, Bash, command-line tools.
Tasks:
- Designing and building scalable, reliable data processing and transformation pipelines (designing ETL systems for NLP/ML/DL and other projects);
- Owning Data Engineering on projects in Deep Learning, Natural Language Processing, Data Capture, Information Retrieval, and other areas;
- Working together with Data Scientists, ML Engineers, and developers to build intelligent capabilities for the company’s products;
- Establishing robust designs and best practices in Data Engineering.
Must-haves:
- 3+ years of experience as a Data Engineer;
- 1+ years of hands-on experience building data processing systems with Kafka;
- Experience with various data formats (JSON, Protocol Buffers, Avro, etc.) and schema validation tooling;
- Experience with cloud platforms (AWS, GCP, etc.);
- Experience with data processing automation tools, schedulers, and pipelines (Apache Airflow, Luigi, Apache Oozie, Apache NiFi, Apache Beam, Make, etc.);
- Experience with Linux (Bash, command-line tools);
- Strong coding and software engineering skills;
- 2+ years of Python programming experience;
- Level of English – Intermediate.
Nice-to-haves:
- Experience with Big Data processing tools (Spark, etc.);
- Experience with search systems (Elasticsearch, Apache Lucene, Apache Solr);
- Experience with RDF or OWL;
- Experience with Big Data processing and analytics stack in AWS: EMR, S3, EC2, Athena, Kinesis, Lambda, QuickSight, etc.;
- Experience in building distributed high-volume data services;
- MS degree in Computer Science, Data Science, Statistics, or a related field;
- Experience in projects with Deep Learning, Natural Language Processing, or Information Retrieval;
- Experience in ML.
Why this job would be interesting to you:
- Andersen works with businesses such as Samsung, Johnson & Johnson, Ryanair, Europcar, TUI, Verivox, Media Markt, and Shypple. This project is just the beginning: working with us means reliability and long-term prospects;
- We have been strengthening our expertise since 2007. During this time, we have formed excellent teams with streamlined processes, where you can learn something new from your colleagues every day and enjoy your work;
- We welcome specialists from every part of the world;
- Salaries at Andersen are pegged to the US dollar, and our employees receive a benefits package and an extensive set of bonuses;
- There are many ways to grow and develop at our company. You can advance as a specialist or as a manager, and your efforts will be properly rewarded;
- Our employees have access to the Andersen Knowledge Base, where they can take courses on the art of negotiation, project management, Machine Learning and Data Analysis, DevOps practices, programming languages, cloud services, and more.
We invite you to join our team!
Contacts: [email protected]
We only consider candidates who submit a CV. To make sure we receive your CV, please use the "Send CV" function.
https://people.andersenlab.com/