Site Reliability Engineer – Big Data Service

Full time · Data Science · Big Data · Statistics · DevOps

Job Description

Your role
Are you passionate about technology and data? Do you want to help implement the strategy for Global Big Data environments at an international firm? Are you motivated to work in a complex, diverse, and global setting where ideas are valued and effort is appreciated?
We are looking for a Site Reliability Engineer to join our team and help us to:
  • implement migrations, upgrades, patches, and deployments across our Big Data environments
  • assist a large user community with technical support
  • ensure “lights-on” support services are up and running for all environments
  • help define and build automation routines to improve the service
  • create and manage existing and new projects
  • collaborate with cross-functional teams such as engineering, hardware, platform services, and operations
  • provide occasional support on weekends and evenings

Your team

You will be working in the Big Data SRE team in Zurich, focusing on reliability and efficiency in collaboration with our Global Big Data Service team. The Big Data Service is a Center of Excellence across the firm for Databricks, Cloudera and other large-scale data platforms. We enable storage and reporting solutions for critical lines of business, such as Wealth Management, Investment Banking, Risk and Finance, Human Resources, and other mission-critical technology service teams. Teamwork is pivotal to our success. We offer flexibility in the workplace and equal opportunities for all team members.

Your expertise

  • hands-on experience with Big Data technologies and managing large-scale clusters such as Hadoop, Spark, Cloudera, Kafka, Hive, and Impala; in short, Big Data is not a foreign concept to you
  • skilled with Linux/Unix and distributed computing, and knowledgeable about networking
  • basic knowledge of Azure Databricks and the willingness to engage with and skill up in this product for SRE-related work in a shift-left model
  • ideally AZ-104 certified, or an equivalent level of Azure knowledge
  • experience working in a DevOps team, practicing agile methodologies, and the ability to work independently
  • bachelor's degree in a relevant discipline, or equivalent experience
  • excellent analytical and problem-solving skills
  • strong communication skills, capable of collaborating effectively with seasoned Big Data experts as well as guiding individuals new to Big Data technologies
  • advanced level in scripting languages such as Python, Bash, or similar
  • ability to troubleshoot and optimize Big Data applications and software

About us

UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors.
We have a presence in all major financial centers in more than 50 countries.

Join us

At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs.
From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact?

Disclaimer / Policy statements

UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.