Data Engineer - Analytics/Modelling

Location: Birmingham/London, UK

Mode: Hybrid

Responsibilities

  • Lead the design and implementation of AWS-based data products that leverage reusable datasets.
  • Collaborate on creating scalable data solutions using a range of new and emerging technologies from the AWS platform.
  • Demonstrate AWS data expertise when communicating with stakeholders and translating requirements into technical data solutions.
  • Manage both real-time and batch data pipelines; our technology stack includes Kafka, AWS Kinesis, Redshift, and DBT.
  • Design and model data workflows from ingestion to presentation, ensuring data security, privacy, and cost-effectiveness.
  • Create a showcase environment to demonstrate data engineering best practices and cost-effective solutions on AWS.
  • Build a framework for stakeholders with low data fluency that enables easy access to data insights and supports informed decision-making.

Requirements

  • Expertise in the full data lifecycle: project setup, data pipeline design, data modelling and serving, testing, deployment, monitoring, and maintenance.
  • Strong background in cloud-based data architectures (SaaS, PaaS, IaaS).
  • Proven engineering skills, with experience in Python, SQL, Spark, and DBT or similar frameworks for large-scale data processing.
  • Deep knowledge of AWS services relevant to data engineering, including AWS Glue, Amazon EMR, Amazon S3, and Amazon Redshift.
  • Experience with Infrastructure-as-Code (IaC) using Terraform or AWS CloudFormation.
  • Proven ability to design and optimise data models to address data quality and performance issues.
  • Excellent communication and collaboration skills to work effectively with stakeholders across various teams.
  • Ability to create user-friendly data interfaces and visualisations that cater to stakeholders with varying levels of data literacy.