It started out mundane. Fawzi was looking for a place to live. Jacob had a house.
They got to talking—about projects, about interests, about problems they’d been thinking about.
Jacob had been wrestling with a specific research question for months:
Why do legal language models confidently generate completely wrong answers?
Not big structural failures. Subtle mistakes. Fictional case citations. Minor details that don’t exist.
Logical reasoning that drifts into hallucination territory without the model ever noticing.
Fawzi brought a different perspective. He’s deep in the legal world—finishing his Global Law degree at
Tilburg University, connected to practicing attorneys across the US, Middle East, and Europe through his
family’s international law firm network. He understood what lawyers actually need and what real legal practice
looks like.
As they played around with the problem together, something clicked:
The issue wasn’t the models. It was the data.
Legal AI was being trained on static legal archives—case databases, statutory records,
research repositories. Useful for legal research, but completely wrong for training systems
to have conversations about law. And critically, the training data had almost no examples of
common mistakes, subtle errors, or the kinds of hallucinations legal professionals deal with regularly.
It was like training a doctor on textbook anatomy but never showing them what an actual human body looks like,
or what a misdiagnosis looks like.
They realized you could fix this by:
- Working directly with practicing lawyers to understand real legal conversations
- Building datasets specifically for conversational AI (not just research)
- Deliberately including wrong answers to teach models what to avoid
- Validating everything with legal professionals who understood the stakes
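The piece doesn't spell out a schema, but the third idea, deliberately including wrong answers, is often encoded as preference-style records that pair an attorney-validated answer with a labeled hallucination. The sketch below is purely illustrative: the field names, the placeholder answers, and the `error_type` taxonomy are assumptions, not Entropy Partners' actual format.

```python
import json

# Hypothetical training record pairing a validated answer ("chosen")
# with a deliberately wrong one ("rejected"). All field names and
# content are illustrative placeholders, not a real schema or real case law.
record = {
    "prompt": "Client asks: which case governs this contract dispute?",
    # Attorney-reviewed answer with a verified citation (placeholder text).
    "chosen": "Answer grounded in a real, attorney-verified case citation.",
    # Plausible-sounding but fabricated citation, labeled so the model
    # learns what a hallucination looks like.
    "rejected": "Answer citing a fictional case that does not exist.",
    # Assumed error taxonomy tag for analysis and curriculum design.
    "error_type": "fictional_case_citation",
    "validated_by": "practicing_attorney",
}

# Records like this are typically stored one-per-line (JSON Lines).
print(json.dumps(record))
```

A preference pair like this is the shape expected by common preference-tuning setups (e.g. DPO-style trainers), which is one reason error examples are valuable: the model is trained to prefer the grounded answer over the hallucinated one, not merely to imitate correct text.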
So in 2025, they started Entropy Partners.
Jacob brought the AI expertise. He'd just graduated in Cognitive Science & Artificial Intelligence
from Tilburg University and is now doing a Master's at Utrecht. He understands how to structure data
so AI systems can actually learn the right things from it.
Fawzi brought the legal network and domain understanding. A Global Law student in his final year at Tilburg,
he has deep connections to practicing attorneys across multiple jurisdictions. He knew who to talk to
and what questions to ask.
Together, they decided to build a new category: legal training datasets engineered specifically
for conversational AI, created in collaboration with real attorneys, and deliberately designed
to reduce hallucination through strategic error integration.