Big Data Engineer with Analytics required to work on a project within a major bank in Switzerland. Our client is looking for an experienced implementer of productive, business-rule- and metadata-driven Big Data solutions using Apache Hadoop and Spark.
Responsibilities
Interact with the architecture team to refine requirements
Work within the project team to implement solutions on the existing Hadoop platform
Work with Platform Engineers to ensure dependent components are provisioned
Provide input in defining the design/architecture of the envisaged solution
Implement business rules for streamlining data feed(s)
Implement a rule-based framework to abstract complex technical implementation into reusable, generic components (see the sketch below)
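For illustration, the sketch below shows what a minimal rule-based, metadata-driven validation step might look like in PySpark. All rule definitions, column names and paths are hypothetical, invented for this example rather than taken from the client's platform.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("rule-driven-feed").getOrCreate()

# Business rules expressed as data, so onboarding a new feed means adding
# configuration rather than writing new code (hypothetical rules).
rules = [
    {"column": "trade_id", "check": "not_null"},
    {"column": "notional", "check": "positive"},
]

df = spark.read.parquet("/data/landing/trades")  # hypothetical landing path

for rule in rules:
    col = F.col(rule["column"])
    if rule["check"] == "not_null":
        df = df.filter(col.isNotNull())
    elif rule["check"] == "positive":
        df = df.filter(col > 0)

df.write.mode("overwrite").parquet("/data/curated/trades")  # hypothetical target

In a production framework the rules list would be loaded from a metadata store and each check would be a pluggable, reusable component.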
Required
Proven experience implementing solutions to process large amounts of data in a Hadoop ecosystem, utilising Apache Spark
Experience implementing generic components for the ingestion, validation, and structuring of disparate data sources into a distributed data platform
Experience implementing complex event processing patterns
Strong programming and scripting skills; e.g. Java, Scala, Python, R
Strong understanding of Hadoop technologies; e.g. MapReduce, HDFS, HBase, Hive, Sqoop, Flume, Kafka
Software Engineering background, with experience using common SDLC tools and practices for agile, ideally including Continuous Delivery
Experience in BRMS driven solutions
Experience with master data management (MDM) including ontology curation and data cleansing
Experience with end-user notebook tools; e.g. Jupyter, Zeppelin
Ability to operate in a global team and coordinate activities with remote resources
Capable of documenting solutions so they can be re-used
Skills in exploratory statistics to relate to data analysts and statisticians
Desirable
Macroeconomic domain knowledge
Experience in econometrics
Experience with data visualisation tools, e.g. Tableau
Start date: April 2018; Duration: 3 months, initially
Are you interested in this challenging position? Benjamin Wenk, Talent Resourcer, looks forward to receiving your complete profile. Please send an e-mail to benjamin.wenk@coopers.ch
Find more jobs at: www.coopers.ch
Coopers Group AG | Seestrasse 72b | CH-6052 Hergiswil NW | Tel. +41 41 632 43 50
Robert Stanislawek: Senior Consultant
+41 41 632 43 50
07/04/2018
Full time
As part of our growth strategy, we are looking for an experienced BI specialist keen to develop further towards Qlik front ends and to offer our customers additional planning functionality based on Jedox. In your future position as a BI Consultant, you will be part of our "BI Consulting Services" team and work on a variety of projects.
Your areas of responsibility
Advising customers on a wide range of BI topics in combination with various source and peripheral systems (e.g. SAP)
Using a coordinated approach, you determine the information needs from our customers' perspective
You are responsible for the consulting, conception, planning and development of BI solutions based on Qlik and Jedox, as well as their ongoing support
Independent project management and coordination with external service providers and customers
Our requirements
Degree in business informatics or computer science (HFP, FH, TS, university, MSc, etc.)
Specialist knowledge and methodical project experience in the field of Business Intelligence
Experience with Qlik and/or Jedox products as well as other BI tools (e.g. OLAP-based)
Experience with relational database systems and good SQL skills
Practical experience with SAP BI and BW desirable
You understand complex interrelationships and have skills in the analysis, conception and design of data models
You want to work independently and proactively, and in a team with a service and customer focus
Very good German and good English skills
Start date: immediately or by arrangement
Our offer
A varied challenge in an exciting, modern field with a strong, experienced team. You will have the opportunity to take on project responsibility and lead projects to a successful conclusion within the team. Exchange among employees is actively practised and encouraged across disciplines. With us, you can make a difference!
Have we sparked your interest? Then we look forward to receiving your complete online application!
15/03/2018
Full time
For our client, a pharmaceutical company in Basel, we are looking for a Senior Data Scientist.
Background:
We are looking for a Senior Data Scientist. You will be part of the Pharma BI & MDM (Business Intelligence and Master Data Management) Data Science Lab based in Kaiseraugst. The team consists of data scientists and computing experts in Poland and Basel. The aim is to apply Data Science in various business areas (Logistics, Finance, Commercial, Manufacturing, PHC, etc.) in order to generate value for the company. As a senior data scientist, you will be responsible for driving the Data Science experiments assigned to you, as well as supervising and coaching other team members and their experiments.
Tasks & Responsibilities:
- Drive the execution of data science experiments based on business requirements and data
- Engage with customers, e.g. business-facing IT
- Perform high-quality, timely and accurate analyses using appropriate state-of-the-art methodologies, tools and resources
- As a senior, support and coach team members on the data science experiments they are working on
- Contribute to the design and use of statistical models, algorithms, tools, etc.
- Support the evaluation of the designed models
- Train team members on the design and scientific background of statistical models, algorithms, tools, etc. (coaching some team members in Poland, ADMD)
- Contribute to, and integrate into, the established processes of the Data Science Lab
Must Haves:
- M.S. or advanced degree (PhD) in a data-science-related field (e.g. statistics, mathematics, computer science, epidemiology, health economics, outcomes research)
- 8+ years' experience with the major data science modelling methods and machine learning
- Substantial experience applying statistical modelling, machine learning, and exploratory and confirmatory data analysis to mid- to large-volume data sets (see the sketch after this list)
- Substantial experience with R and/or Python
- Experience with relational data sources (SQL) and visualization tools (Tableau)
- Experience using big data platforms/tools such as Hadoop, Spark, etc.
- Good verbal and written communication skills in English
- Coaching/leading experience of sub-teams
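Purely as an illustration of the modelling workflow described above, here is a minimal Python sketch: an exploratory model fit plus a confirmatory cross-validation check on synthetic data. Nothing here reflects the client's actual data or models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)    # confirmatory 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")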
Nice to Haves:
Knowledge of Tableau, Shiny-RStudio and D3.js would be highly preferred
Spark is a big plus
German
Are you interested in this challenging position? Alessia Biassoli, Talent Resourcer, looks forward to receiving your complete profile. Please send an e-mail to alessia.biassoli@coopers.ch
Find more jobs at: www.coopers.ch
Coopers Group AG | Seestrasse 72b | CH-6052 Hergiswil NW | Tel. +41 41 632 43 50
Alessia Biassoli: Talent Resourcer
+41 41 632 43 50
18/04/2018
Full time
For our client, a pharmaceutical company in Basel, we are looking for a Data Analyst (60 to 80%)
Background: The environment is Pharma/Clinical Operations IT. There are several decentralized processes (silos) using partially unstructured, non-scientific business process data of sub-optimal quality. To improve this situation, a first step is an as-is analysis of the data processes and their relations. In many cases a manual approach with several steps, i.e. extracting, analyzing, documenting and visualizing the data, will be necessary. This phase can be regarded as preparation for step 2, which will implement a system that supports an impact-of-change/what-if analysis for decommissioning legacy systems (data, processes, applications).
Tasks & Responsibilities:
- Access diverse data sources (e.g. Access DB, Oracle, documents) and document the data architecture
- Extract and clean data, mostly unstructured and of poor quality (e.g. by intermediate extraction into files)
- Document and visualize the current data architecture
- Document the as-is situation: extract, understand, document and visualize data and processes (silos)
- Understand and document/visualize the relationships between processes and data
- Data cleaning and data consolidation (e.g. Python/R/SAS)
- General data analytics and data crunching tasks
- Programming of algorithms to clean and validate data sets (e.g. Python/R/SAS); see the sketch after this list
- Programming of a front end for data visualization (business objects, business intelligence)
- Data architecture: conception of the database architecture; conception, design and programming of an Enterprise Service Bus infrastructure to extract data, model it and visualize it, with the aim of gaining insights into the given processes
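As a sketch of the cleaning and validation tasks listed above, assuming a hypothetical intermediate extraction file with invented column names (not taken from the client's systems):

import pandas as pd

df = pd.read_csv("extract.csv")  # hypothetical intermediate extraction file

df["study_id"] = df["study_id"].str.strip().str.upper()  # normalise identifiers
df = df.drop_duplicates(subset="study_id")               # consolidate duplicates

# Document rows that fail a simple validation rule instead of silently dropping them.
invalid = df[df["start_date"].isna()]
invalid.to_csv("invalid_rows.csv", index=False)

df.dropna(subset=["start_date"]).to_csv("consolidated.csv", index=False)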
Must Haves:
- 5+ years of experience in data analysis and data/process visualization (e.g. Gephi, Spotfire or any other social network analysis/neural network visualization tool), as well as dashboarding and reporting conception and development (Business Objects, QlikView, Tableau or any other business intelligence visualization tool)
- Strong skills in data analysis languages (e.g. Python/R/SAS), database access and basic data querying (SQL)
- 5+ years of experience in data mining, data crunching and clean-up of large data sets
- Practical working experience with Excel for data source access, data consolidation and cleaning
- Very good written and spoken English skills
- Documentation skills (precise working)
Nice to Haves:
Experience in ERP implementation (from mapping of requirements to technical specs to BO)
Experienced in Data/Solution/Application management, preferably at implementation/migration stages
Experience with intelligent business process management (iBPM) software
Degree in IT or Data Science
Knowledge of accessing diverse data sources using Excel
Experience in Clinical Trials processes
Understanding of point to point system connections, data migration and data warehouse concepts
Experience in Machine Learning
Are you interested in this challenging position? Alessia Biassoli, Talent Resourcer, looks forward to receiving your complete profile. Please send an e-mail to alessia.biassoli@coopers.ch
Find more jobs at: www.coopers.ch
Coopers Group AG | Seestrasse 72b | CH-6052 Hergiswil NW | Tel. +41 41 632 43 50
Alessia Biassoli: Talent Resourcer
+41 41 632 43 50
18/04/2018
Full time
Industry: ERP, Risk, CRM, HCM, Data & Analytics, Security, Infrastructure & Development, Finance & Business Advisory Services
Salary: CHF 900 - CHF 950 per day
My well-recognised financial client is seeking an experienced Big Data Engineer/Architect to lead the implementation of solutions on a greenfield POC project!
You will be an experienced implementer of productive, business-rule- and metadata-driven Big Data solutions using Apache Hadoop and Spark. The successful candidate should have demonstrable experience of implementing similar solutions, and be capable of working independently as well as comfortable working within an agile project team. The ability to relate complex data problems to business users, and vice versa, is crucial.
The Job:
Interact with the architecture team to refine requirements
Work within the project team to implement solutions on the existing Hadoop platform
Work with Platform Engineers to ensure dependent components are provisioned
Provide input in defining the design/architecture of the envisaged solution
Implement business rules for streamlining data feed(s)
Implement a rule-based framework to abstract complex technical implementation into reusable, generic components
We are looking for candidates with:
Proven experience implementing solutions to process large amounts of data in a Hadoop ecosystem, utilising Apache Spark
Experience implementing generic components for the ingestion, validation, and structuring of disparate data sources into a distributed data platform
Experience implementing complex event processing patterns
Strong programming and scripting skills; e.g. Java, Scala, Python or R
Strong understanding of Hadoop technologies; e.g. MapReduce, HDFS, HBase, Hive, Sqoop, Flume, Kafka
Software Engineering background, with experience using common SDLC tools and practices for agile, ideally including Continuous Delivery
Experience in BRMS driven solutions
Experience with master data management (MDM) including ontology curation and data cleansing
Experience with end-user notebook tools; e.g. Jupyter, Zeppelin
Ability to operate in a global team and coordinate activities with remote resources
Capable of documenting solutions so they can be re-used
Skills in exploratory statistics to relate to data analysts and statisticians
Nice to have:
Economics or Macroeconomic domain knowledge
Experience in econometrics
Experience with data visualisation tools, eg Tableau
Lawrence Harvey is a preferred supplier for this client! Apply now with an up-to-date CV to have an interview slot arranged.
Lawrence Harvey is acting as an Employment Business in regard to this position.
Visit our website www.lawrenceharvey.com and follow us on Twitter for all live vacancies @lawharveyjobs
Contact Name: Tom Francis
Contact Email: t.francis@lawrenceharvey.com
14/04/2018
Full time
Salary: CHF 77 - CHF 87 per hour
Sector: Big Data
Experis is the global leader in professional resourcing and project-based workforce solutions. Our suite of services ranges from interim and permanent recruitment to managed services and consulting, enabling businesses to achieve their goals. We accelerate organisational growth by attracting, assessing and placing specialised professional talent.
For our international client based in Basel, Experis is currently looking for a Data Engineer for a contracting position until the end of September.
Tasks & Responsibilities:
Collect all existing physical data models and specifications, and document them in a unified format.
Derive logical models for both the source systems and the target system, and document them such that business experts can provide input.
Capture input from business experts and create specifications of transformations which can be easily implemented or are directly machine-readable (see the sketch after this list).
Keep track of data migration from legacy systems to the new solution.
Specify tests and acceptance criteria which allow efficient verification and validation of the migration.
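To illustrate what a directly machine-readable transformation specification could look like, here is a minimal Python sketch; all field names, tables and transforms are hypothetical:

# A mapping spec that business experts can review and a loader can execute.
spec = {
    "source": "legacy_registrations",  # hypothetical legacy table
    "target": "registration_mgmt",     # hypothetical target table
    "mappings": [
        {"from": "REG_NO", "to": "registration_number", "transform": "strip"},
        {"from": "CTRY", "to": "country_code", "transform": "upper"},
    ],
}

transforms = {"strip": str.strip, "upper": str.upper}

def apply_spec(row, spec):
    """Apply each column mapping in the spec to one source row."""
    return {m["to"]: transforms[m["transform"]](row[m["from"]])
            for m in spec["mappings"]}

print(apply_spec({"REG_NO": " 123 ", "CTRY": "ch"}, spec))
# -> {'registration_number': '123', 'country_code': 'CH'}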
For this role, you bring the following qualifications:
Mandatory Skills and Experience:
Minimum of Master's Degree (Computer Science, Mathematics, Statistics)
Extensive expertise in logical and physical data modeling
Proven experience with ETL and migration tools
Advanced degree in Applied Mathematics, Business Analytics, Statistics, Machine Learning, Computer Science or related fields is a plus
Good analytical and logical thinking skills
10+ years' experience as a Data Engineer or equivalent
5+ years of experience in working in regulated industries, preferably the pharmaceutical industry
Regulatory domain knowledge (specifically submission and registration management)
Experience gathering and analyzing migration requirements
Excellent understanding and knowledge of general IT infrastructure technology, systems and management processes
Experience with compliance requirements (e.g. SOX, GxP / CSV, E-compliance, Records Management, Privacy)
Excellent verbal and written communication skills in English
Experience of working in global teams
Preferred Skills and Constraints:
Experience working in Data migration projects within Pharmaceutical Regulatory domain is an added advantage
Interested in this opportunity? Kindly send us your CV today through the link in the advert. However, should you have any questions, please contact Stéphanie Vogt on +41 61 282 22 16.
Even though this position may not be the perfect fit for you, please reach out to us, as we have hundreds of open positions at Experis IT across Switzerland.
Check out all of Experis' job openings at www.experis.ch or visit my personal page https://www.experis.ch/stephanie-vogt and connect to me on www.linkedin.com/in/stephanie-vogt2
This role is managed by:
Stéphanie Vogt: Recruiter
+41 61 282 22 16
stephanie.vogt@experis.ch
09/04/2018
Full time
Big Data Engineer - Analytics wanted for our Basel-based client in the financial sector.
Your experience/skills:
Software Engineering background with experience in SDLC tools and practices for agile, ideally including Continuous Delivery
Proven experience implementing solutions to process large amounts of data in a Hadoop ecosystem, utilising Apache Spark, as well as generic components for the ingestion, validation, and structuring of disparate data sources, and complex event processing patterns (see the sketch after this list)
Strong programming and scripting skills in Java, Scala, Python and R
Profound knowledge of Hadoop technologies, such as MapReduce, HDFS, HBase, Hive, Sqoop, Flume and Kafka
Experience with end-user notebook tools such as Jupyter and Zeppelin, as well as BRMS-driven solutions and master data management (MDM)
Languages: fluent English both written and spoken
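For illustration only, the sketch below shows a simple event-processing pattern in PySpark Structured Streaming: a windowed count over a Kafka stream. Broker, topic and schema are hypothetical, and a genuine complex-event-processing solution would match event sequences rather than just count them.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("cep-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "payments")                   # hypothetical topic
          .load())

# Count events per key in one-minute windows over the Kafka-provided timestamp.
counts = (events
          .withColumn("key", F.col("key").cast("string"))
          .groupBy(F.window(F.col("timestamp"), "1 minute"), F.col("key"))
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()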
Your tasks:
Implementing solutions on the existing Hadoop platform in collaboration with a global team
Liaising with Platform Engineers to ensure dependent components are provisioned
Providing input in defining the design and architecture of the envisaged solution
Implementing business rules for streamlining data feeds and a rule-based framework to abstract complex technical implementation into reusable, generic components
Interacting with the architecture team to refine requirements
Start: 05/2018; Duration: 3 months+
Does this sound like an interesting and challenging opportunity to you? Then take the next step by sending us your CV as a Word document and a contact telephone number.
Due to work permit restrictions we can unfortunately only consider applications from EU or Swiss citizens as well as current work-permit holders for Switzerland.
Going the extra mile…
New to Switzerland? In case of successful placement, we support you with:
All administrative questions
Finding an apartment
Health and social insurance
Work permit and much more
09/04/2018
Full time
Data Engineer - Migration wanted for our Basel-based client in the pharmaceutical industry.
Your experience/skills:
Master's degree in Computer Science, Mathematics or another relevant field, with a minimum of 10 years' work experience as a Data Engineer or equivalent
5+ years' practical experience in regulated industries, preferably the pharmaceutical industry, along with expertise in logical and physical data modelling
Extensive knowledge of general IT infrastructure technology, systems and management processes as well as familiarity with ETL and migration tools
Ability to gather and analyze compliance and migration requirements (SOX, GxP / CSV, E-Compliance, Records Management, Privacy)
Languages: fluent English both written and spoken
Your tasks:
Supporting the data sourcing and migration work stream of a new system which redefines operations in regulatory affairs
Gathering all existing physical data models and API specifications, and documenting them in a unified format
Determining logical models for both the source systems and target system and documenting such that business experts can provide input
Capturing input from business experts and creating specifications of transformations which can be easily implemented or are directly machine-readable
Keeping track of data migration from legacy systems to the new solution, as well as specifying tests and acceptance criteria which allow efficient verification and validation of the migration (see the sketch below)
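As a sketch of what an automated acceptance check for such a migration might look like, assuming hypothetical file and column names (a real suite would be driven by the agreed acceptance criteria):

import pandas as pd

legacy = pd.read_csv("legacy_export.csv")  # hypothetical extract from the old system
target = pd.read_csv("target_export.csv")  # hypothetical extract from the new system

assert len(legacy) == len(target), "row counts differ after migration"

for column in ["registration_number", "country_code"]:  # hypothetical key columns
    legacy_sum = legacy[column].astype(str).map(hash).sum()
    target_sum = target[column].astype(str).map(hash).sum()
    assert legacy_sum == target_sum, f"checksum mismatch in column {column}"

print("migration acceptance checks passed")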
Start: ASAP; Duration: 5 months+
Does this sound like an interesting and challenging opportunity to you? Then take the next step by sending us your CV as a Word document and a contact telephone number.
Due to work permit restrictions we can unfortunately only consider applications from EU or Swiss citizens as well as current work-permit holders for Switzerland.
Going the extra mile…
New to Switzerland? In case of successful placement, we support you with:
All administrative questions
Finding an apartment
Health and social insurance
Work permit and much more
09/04/2018
Full time
Industry: Data & Analytics
Salary: CHF 700 - CHF 750 per day
Big Data Engineer - Basel, Switzerland - 3 months rolling
I am seeking an experienced Big Data Engineer to join a financial client in Basel, Switzerland. You need to be an experienced implementer of productive, business-rule- and metadata-driven Big Data solutions, using Apache Hadoop and Spark.
The successful Big Data Engineer should have demonstrable experience of implementing similar solutions, and be capable of working independently as well as comfortable working within an agile project team. The ability to relate complex data problems to business users, and vice versa, is crucial.
Responsibilities
Interact with the architecture team to refine requirements
Work within the project team to implement solutions on the existing Hadoop platform
Work with Platform Engineers to ensure dependent components are provisioned
Provide input in defining the design/architecture of the envisaged solution
Implement business rules for streamlining data feed(s)
Implement a rule-based framework to abstract complex technical implementation into reusable, generic components
Required
Proven experience implementing solutions to process large amounts of data in a Hadoop ecosystem, utilising Apache Spark
Experience implementing generic components for the ingestion, validation, and structuring of disparate data sources into a distributed data platform
Experience implementing complex event processing patterns
Strong programming and scripting skills; e.g. Java, Scala, Python, R
Strong understanding of Hadoop technologies; e.g. MapReduce, HDFS, HBase, Hive, Sqoop, Flume, Kafka
Software Engineering background, with experience using common SDLC tools and practices for agile, ideally including Continuous Delivery
Experience in BRMS driven solutions
Experience with master data management (MDM) including ontology curation and data cleansing
Experience with end-user notebook tools; e.g. Jupyter, Zeppelin
Lawrence Harvey is acting as an Employment Business in regard to this position. Visit our website www.lawrenceharvey.com and follow us on Twitter for all live vacancies @lawharveyjobs
Contact Name: William Wakefield
Email: w.wakefield@lawrenceharvey.com
07/04/2018
Full time