At FDJ UNITED, we don't just follow the game, we reinvent it.
FDJ UNITED is one of Europe’s leading betting and gaming operators, with a vast portfolio of iconic brands and a reputation for technological excellence. With more than 5,000 employees and a presence in around fifteen regulated markets, the Group offers a diversified, responsible range of games, both under exclusive rights and open to competition. We set new standards, proving that entertainment and safety can go hand in hand. Here, you’ll work alongside a team of passionate individuals dedicated to delivering the best and safest entertaining experiences for our customers every day.
We’re looking for bold people who are eager to succeed and ready to level-up the game. If you thrive on innovation, embrace challenges, and want to make a real impact at all levels, FDJ UNITED is your playing field.
Join us in shaping the future of gaming. Are you ready to LEVEL-UP THE GAME?
At FDJ UNITED, data and analytics are crucial to our growth and innovation as we expand into new markets. We are seeking a Cloud Data Engineer to join our Automation and Insights (A&I) team. In this role, you will support the development and maintenance of our clickstream data architecture on AWS, helping to ensure high-quality datasets that accurately reflect customer behaviour across our digital platforms.
You will work with cutting-edge technologies, including Snowplow for event tracking, Apache Iceberg as our central data lake, dbt Spark for scalable data transformations, and ClickHouse for high-performance analytics, alongside Open Metadata for data governance.
Working closely with Python Developers and Analysts, you'll help enable the creation of self-serve analytics dashboards that empower teams across the business to generate actionable insights and improve decision-making. If you're passionate about growing your skills in cloud data engineering and eager to learn how data shapes customer understanding, we invite you to start your career journey with us at FDJ UNITED.
What you'll do:
- Design and implement dbt data models that support decision-making, marketing insights, product reporting, and ad-hoc analysis across the business.
- Build and maintain schemas to enable automated data processing workflows, leveraging Apache Iceberg as the central data lake.
- Develop and optimize data transformations with dbt Spark, ensuring models are reliable, reusable, and well-documented.
- Enhance self-serve analytics by developing well-structured, performant ClickHouse tables that serve as the foundation for business dashboards and reporting.
- Define and maintain a semantic layer that standardizes business metric definitions, enabling consistency for natural language querying tools and self-serve analytics.
- Collaborate on exploratory projects, including proof-of-concept generative AI initiatives, applying engineering expertise to evaluate and integrate emerging technologies.
- Document and share best practices for data transformation and modeling, promoting effective use of Snowplow, dbt, and related tools across the organization.
- Stay curious and proactive in learning about advancements in data engineering, cloud platforms, and AI technologies, bringing fresh ideas to the team.
What we're looking for:
- Practical experience with data transformations, including working with SQL and Python to manipulate and prepare datasets.
- Exposure to data modelling concepts and some hands-on experience building models in dbt, with an eagerness to strengthen these skills.
- Interest in learning about clickstream data and event-driven architectures, with some familiarity considered a plus.
- Familiarity with cloud platforms such as AWS, and a willingness to deepen knowledge in DevOps and automation practices.
- Awareness of governance frameworks and data catalogues, with an interest in supporting documentation and metadata management.
- Strong working knowledge of SQL and confidence in writing efficient queries.
- Solid programming skills in Python.
- A proactive, growth-oriented mindset with the ability to adapt to ambiguity and learn through iteration.
- Excellent communication skills for working with senior engineers and analysts, translating business requirements into technical specifications for data models.
Nice to have:
- dbt certification and hands-on experience developing production-grade dbt models.
- 2+ years of experience with Airflow or other orchestration frameworks.
- Experience with modern data stack components such as Spark, Kafka, or Iceberg.
- Experience working with large-scale columnar databases (e.g., ClickHouse, Redshift, BigQuery) and performance tuning for analytical workloads.
Ensure that you adhere to the Governance, Risk & Compliance (GRC) obligations for your role.
Reach out to the Compliance Teams if you are unsure of any of your compliance obligations or if the requirements are unclear.
Our world is hybrid.
A career is not a sprint. It’s a marathon. One of the perks of joining us is that we value you as a person first. Our hybrid world allows you to focus on your goals and responsibilities and lets you self-organise, improve your deliverables, and get the work done in your own way.
We believe talent knows no boundaries. Our hiring process focuses solely on your skills, experience, and potential to contribute to our team. We welcome applicants from all backgrounds and evaluate each candidate based on merit, regardless of personal characteristics such as age, gender, origin, religion, sexual orientation, neurodiversity, or disability.