Amaris is an independent, international Technologies and Management Consulting Group. Created in 2007, Amaris is already established in over 50 countries and supports more than 850 clients worldwide throughout their projects' lifecycles. Our expertise covers five areas of innovation: Business and Management, Information Technologies, Engineering and High Technologies, Telecommunications, and Biotech and Pharma. With more than 65 offices across the world, the Group offers proximity support to its clients in all their locations, and many opportunities for international careers for its employees. In 2019, Amaris aims to reach a turnover of 350 million euros and 6,500 employees, and to grow its workforce with an anticipated 2,000 further job openings. We expect to triple our workforce within the next few years and reach a leading international position in independent consulting.
To sustain the growth of its activities in Switzerland, Amaris Consulting is hiring a Data Engineer for its Data Science Center of Excellence.
5+ years of relevant experience in Data Engineering.
As a Data Engineer, your main responsibility will be the data management of the data & analytics platform: data ingestion, data integration, data processing, data quality management, data cleansing and wrangling, and ensuring full integration into IT environments and analytics solutions.
Develop, construct, test and maintain data architectures such as databases, data warehouses and large-scale data processing systems
Design and develop data pipelines/systems for data modelling, mining and production
Ensure the data architecture is in place to support the routine and ad-hoc requirements of the data analytics team, stakeholders and the business
Leverage a variety of programming languages and data crawling/processing tools to make raw data clean and highly available for use in descriptive and predictive modelling
Recommend and implement ways to improve data quality, reliability, flexibility and efficiency
Ensure data assets and data catalogs are organized and stored in an efficient way so that information is easy to access and retrieve
3+ years of production-level experience with Python or Scala (Apache Spark)
Experience designing and implementing Big Data solutions on Azure, GCP or AWS
Experience with ELT, particularly in Big Data environments
Experience with a data catalog solution
Experience with traditional relational database systems (SQL Server, Oracle, MySQL or MariaDB)
Experience with NoSQL databases such as MongoDB, Cassandra or HBase
Strong written and verbal communication skills
Knowledge of Agile project management
Experience with Elasticsearch, Logstash and Kibana (the ELK stack)
Experience with Kafka or Flume