Senior Data Engineer

Details

  • Work Location Type:
    Hybrid
  • Office:
  • Type of Employment:
    Full Time Permanent
  • Reference Number:
    TEC2343

About Us

At FDJ UNITED, we don't just follow the game, we reinvent it.

FDJ UNITED is one of Europe’s leading betting and gaming operators, with a vast portfolio of iconic brands and a reputation for technological excellence. With more than 5,000 employees and a presence in around fifteen regulated markets, the Group offers a diversified, responsible range of games, both under exclusive rights and open to competition. We set new standards, proving that entertainment and safety can go hand in hand. Here, you’ll work alongside a team of passionate individuals dedicated to delivering the best and safest entertaining experiences for our customers every day.

We’re looking for bold people who are eager to succeed and ready to level-up the game. If you thrive on innovation, embrace challenges, and want to make a real impact at all levels, FDJ UNITED is your playing field.

Join us in shaping the future of gaming. Are you ready to LEVEL-UP THE GAME?


The role

As a Senior Data Engineer, you will play a pivotal role in designing, implementing, and optimising robust data solutions to support business-critical analytics and decision-making processes. This role requires expertise in building and managing data pipelines, working with data warehouse platforms like Redshift and Oracle, and delivering end-to-end data engineering projects.

You will collaborate with cross-functional teams to ensure seamless integration of data solutions, delivering high-quality analytics and reporting capabilities while leveraging modern DevOps practices. Your contributions will directly influence Kindred's ability to harness data for strategic insights and innovation.

 

What you will do

Support AI Initiatives:
•    Work with programming languages such as Python, Java, and Scala to support data pipelines and data lineage.
•    Embed AI into Data Workflows: Integrate LLM-based capabilities into existing data engineering processes to deliver contextual insights, enhance data lineage, and power intelligent features.
•    Develop Cutting-Edge AI Applications: Leverage Large Language Models in Python to build next-generation AI solutions, including semantic search, vector stores, and Retrieval-Augmented Generation pipelines.

Build and Optimise Data Pipelines:
•    Design, develop, and maintain scalable data pipelines using Java Spring Boot, Python, and orchestration tools like Apache Airflow.
•    Implement and optimise ETL/ELT processes for efficient handling of large-scale data flows.
•    Leverage Kafka for streaming, and the Redshift and Oracle data warehouses, to build performant data models and pipelines.

Cloud Infrastructure and DevOps:
•    Architect and manage data solutions on AWS, including services like Redshift, S3, EC2, and Lambda.
•    Collaborate with DevOps teams to implement CI/CD pipelines, containerised environments using Docker and Kubernetes, and infrastructure-as-code tools like Terraform.

Innovation and Continuous Learning:
•    Stay updated on emerging technologies in data engineering, cloud computing, and DevOps to future-proof our data solutions.
•    Share knowledge through mentoring, documentation, and internal training sessions to foster a learning culture.

 

Skills and experience required: 

Core Technical Skills:
•    Expertise in Java and Python: Proficiency in both languages is essential.
•    Cloud Data Engineering Experience: A strong background in cloud platforms, particularly AWS.
•    Proficiency in Apache Spark: Experience with big data processing frameworks like Apache Spark is required.
•    Experience with Apache Airflow: Competence in workflow orchestration tools such as Apache Airflow is necessary.
•    Familiarity with Spring Framework: Knowledge of Java frameworks like Spring and Spring Boot is beneficial.
•    Cloud Proficiency: Proven expertise with AWS services, including Redshift, S3, EC2, Lambda, and Kafka.

DevOps and Automation:
•    Experience with containerisation (Docker) and orchestration (Kubernetes).
•    Proficiency in CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI/CD.
•    Familiarity with infrastructure-as-code tools like Terraform or Ansible.

Desired Qualities:
•    Experience with other programming languages like Scala or .NET.
•    Familiarity with Power BI and Microsoft Fabric technologies.
•    Knowledge of data governance and compliance best practices.

Our Way Of Working

Our world is hybrid.

A career is not a sprint. It’s a marathon. One of the perks of joining us is that we value you as a person first. Our hybrid world allows you to focus on your goals and responsibilities and lets you self-organise to improve your deliveries and get the work done in your own way.

Application Process

We believe talent knows no boundaries. Our hiring process focuses solely on your skills, experience, and potential to contribute to our team. We welcome applicants from all backgrounds and evaluate each candidate on merit, regardless of personal characteristics such as age, gender, origin, religion, sexual orientation, neurodiversity, or disability.

 
 

Location

Kindred House, 17-25 Hartfield Road, Wimbledon, London, United Kingdom, SW19 3SE

Benefits

Well-being allowance
Learning and development opportunities
Inclusion networks
Charity days
Long service awards
Private medical insurance
Life assurance and income protection
Employee Assistance Programme
Pension

Meet the recruiter

Lisa Begum

lisa.begum@kindredgroup.com
