Senior Data Engineer
Job Description
Our client is a European R&D center. They are a new electric mobility technology and solutions brand aiming to satisfy the global demand for premium electric vehicles. Their work comes to life in products and services from world-leading brands.
Description of the assignment
You will be an important member of the department, supporting data platform development and operations. The team follows CI/CD best practices to create, test, deliver, and deploy data products. You will play a crucial role not only in operating the data platform and deploying the latest data products together with the infrastructure team, but also in performing proper verification after deployment to minimize the risk of system disruptions, data loss, and security vulnerabilities, ensuring business continuity and user satisfaction.
As a senior data engineer, you will work with the data produced and consumed by our client in the European region. Your expertise will guide and assist stakeholders in accessing the datasets they need to perform their work in a secure manner. This role demands knowledge of data warehousing concepts and ETL processes, along with a basic understanding of the AWS and Azure cloud platforms and software testing.
Responsibilities and Deliveries
Design, develop, and maintain scalable batch and stream data pipelines to ingest data into the platform from various sources, and use tools to automate and orchestrate those pipelines
Develop and implement data processing workflows for cleansing, deduplication, and transformation to prepare data for data services
Operate and maintain the data platform together with the infrastructure team.
Perform data platform upgrade testing, including regression, security, and functional tests.
Implement checks and measures to ensure the security, accuracy, completeness, and consistency of data
Set up tools to monitor pipeline performance, data quality, and system health
Diagnose and resolve issues with data pipelines, storage systems, or integrations
Collaborate with stakeholders to define data models and ensure the data meets their needs
Provide technical guidance and data access support to data consumers, and address technical challenges.
Document data pipelines, architectures, and workflows to ensure clarity and ease of maintenance
Design and maintain data models for reporting and analytics in BI tools
Develop simple dashboards with Power BI.
Qualifications and skills required for the role
You must be a team player.
B.Sc. or M.Sc. in Computer Science, Engineering, Information Technology, or equivalent work experience
Be interested in big data technologies and eager to learn new ones.
Good knowledge of data warehousing concepts, ETL processes, DevOps, CI/CD, and software testing.
Expertise in at least one programming language such as SQL or Python.
Working experience with AWS (Redshift, EMR, S3) and Azure (Data Factory, Power BI service) data analytics services; Databricks experience is preferred.
Proficient with data processing frameworks such as Flink or Spark
Be able to use Power BI for data analytics and visualization.
Ability to manage the data platform with Infrastructure as Code (IaC) is a merit.
Personal attributes
You are the kind of person who thrives in an environment where not everything is set, and who gets inspired by defining and implementing solutions. You are pragmatic, self-driven, curious, and flexible, with a “we’ll find a way” attitude.
Excellent communication skills
Ability to drive an independent product stream within the context of a broader team project
Comfortable with navigating ambiguous and evolving situations
Proactive self-starter with the ability to manage multiple tasks effectively
A team-oriented approach and a flexible mindset
Fluent in English, with Chinese as a plus
The team is located in Lindholmen, Göteborg, but works with teams and stakeholders all over the world.
IT/SW requirements:
AWS & Azure, Databricks