At FDJ UNITED, we don't just follow the game; we reinvent it.
FDJ UNITED is one of Europe's leading betting and gaming operators, with a vast portfolio of iconic brands and a reputation for technological excellence. With more than 5,000 employees and a presence in around fifteen regulated markets, the Group offers a diversified, responsible range of games, both under exclusive rights and open to competition. We set new standards, proving that entertainment and safety can go hand in hand. Here, you'll work alongside a team of passionate individuals dedicated to delivering the best and safest entertainment experiences for our customers every day.
We’re looking for bold people who are eager to succeed and ready to level-up the game. If you thrive on innovation, embrace challenges, and want to make a real impact at all levels, FDJ UNITED is your playing field.
Join us in shaping the future of gaming. Are you ready to LEVEL-UP THE GAME?
As a Senior AI Solutions Engineer, you'll design, build, and run AI solutions that make a real difference in day-to-day decision-making across the business. This is a hands-on engineering role focused on shipping AI into production, not prototypes.
You'll work across Python + Java microservices, LLM/RAG systems, vector search, and data pipelines, deploying to AWS (incl. Bedrock) and Azure (Azure AI Foundry). You'll partner closely with the Lead AI Solutions Engineer, platform engineers, analysts, and data teams to deliver scalable capabilities that are secure, observable, and maintainable.
What you'll do:
Design and implement RAG-powered services (assistants, chat experiences, semantic search) using modern LLM patterns
Improve retrieval quality through embeddings, metadata enrichment, ranking strategies, and evaluation feedback loops
Build modular components that can be reused across multiple use cases and domains
Build and maintain backend services and APIs using Python (FastAPI/LangChain/Hugging Face) and Java
Create clean service boundaries, versioned APIs, and secure integration patterns for enterprise environments
Produce high-quality documentation and maintain an engineering standard that scales beyond one team
Build and operate pipelines for ingestion, embedding generation, chunking strategies, and metadata processing
Orchestrate ETL/ELT workflows using Airflow for batch and near-real-time use cases
Ensure governance, security, and privacy requirements are met (and provable)
Deploy solutions across AWS and Azure, using CI/CD and IaC to keep releases safe and repeatable
Containerise and run workloads with Docker and Kubernetes, working with Platform Engineering on Kindred Cloud
Build with production realities in mind: logging, monitoring, failure handling, scalability, and cost controls
Implement and optimise vector search using PGVector / ChromaDB, including indexing strategies and query performance
Work with Sentence Transformers / OpenAI embeddings and similarity techniques (e.g., cosine similarity) to improve precision/recall
Work across teams to align on design choices, integration patterns, and shared reusable components
Mentor others through reviews, pairing, and knowledge-sharing sessions
Bring pragmatic innovation: test new approaches, keep what works, and productise it
What success looks like:
AI features move from idea → production with measurable adoption and value
RAG systems deliver relevant, trustworthy outputs with clear performance indicators
Services are secure, observable, and operationally stable (not fragile demos)
Engineers and stakeholders trust the platform and can build on it without reinventing the wheel
What you bring:
5+ years in backend engineering, data engineering, or AI/ML integration roles
Strong hands-on skills in Python and solid experience with Java (or deep JVM ecosystem experience)
Practical experience building with LLMs, embeddings, semantic search, and RAG-style architectures
Experience with vector databases (PGVector/ChromaDB or similar) and retrieval optimisation
Strong delivery habits: CI/CD, Docker, Kubernetes, and Infrastructure as Code (Terraform/CloudFormation)
Cloud experience across AWS (EC2, S3, Lambda, Bedrock, CodePipeline, etc.) and/or Azure AI Foundry
Comfortable working with stakeholders, ambiguity, and trade-offs: you can turn fuzzy problems into shipped outcomes
Nice to have:
Experience fine-tuning or adapting models for domain use cases
Experience building internal developer platforms or reusable AI components
Experience with evaluation/observability for GenAI systems (quality, latency, cost, drift, safety)
Prior experience in regulated environments or with identity/security integrations (SSO, IAM)
Why join us:
Work on real AI products used across the organisation — not proofs of concept
Modern stack: LLMs, RAG, vector search, Kubernetes, multi-cloud
Strong cross-functional collaboration with data, platform, and product teams
Space to innovate, improve foundations, and build capabilities that scale
Our world is hybrid.
A career is not a sprint; it's a marathon. One of the perks of joining us is that we value you as a person first. Our hybrid world allows you to focus on your goals and responsibilities, and gives you the freedom to organise your work and deliver in your own way.
We believe talent knows no boundaries. Our hiring process focuses solely on your skills, experience, and potential to contribute to our team. We welcome applicants from all backgrounds and evaluate each candidate on merit, regardless of personal characteristics such as age, gender, origin, religion, sexual orientation, neurodiversity, or disability.