How I Broke My AI (and what I learnt)
- Marc May
- 3 days ago
- 3 min read
It worked. Until it didn’t.
When I first started using AI at work, it felt like I had discovered a cheat code. Drafts improved, summaries made sense, and I suddenly looked far more efficient than I probably was.

Encouraged by early success, I kept adding more information. More documents. More detail. At some point, the AI stopped helping and started politely nodding while very clearly losing the plot.
The day I gave it too much to read
The breaking point came with a very familiar legal instinct: “I’ll just give it everything.” I uploaded contracts, background notes, previous advice, and a long chain of emails that probably should have been summarised years ago. Then I added detailed instructions, just to be safe.

The result looked fine at first glance, but something was off. Key points were missed. Nuances disappeared. It felt like handing a junior lawyer a lever-arch file the size of a small suitcase and expecting brilliance by lunchtime.
What the AI actually sees
This is where I learnt about the context window. When I say “AI” here, I’m really talking about large language models, the tools most of us use at work. And they don’t see everything. They only work with what fits inside their context window, a fixed limit measured in tokens (roughly fragments of words).

Your prompt, earlier messages in the chat, and any documents you upload all sit there together. That’s the full picture the AI can actively reason with at that moment.

Its training sits quietly in the background. Think of it as a lawyer’s law degree and years of experience. The context window, by contrast, is the physical case file open on the desk. Once that file turns into a leaning tower of annexes, things start slipping through. Not because the lawyer forgot the law, but because no one can work properly with half the file falling off the table.
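If it helps to make that concrete, here is a rough sketch of the accounting a chat tool does behind the scenes. The limit, the helper names, and the four-characters-per-token rule of thumb are all illustrative assumptions, not any vendor’s real figures:

```python
CONTEXT_LIMIT = 128_000  # assumed window size in tokens; real limits vary by model

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: about 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def context_usage(prompt: str, history: list[str], documents: list[str]) -> int:
    """Everything 'on the desk' counts against the same budget."""
    pieces = [prompt, *history, *documents]
    return sum(estimate_tokens(p) for p in pieces)

used = context_usage(
    prompt="Summarise the indemnity clauses in the attached contract.",
    history=["...every earlier question and answer in this chat..."],
    documents=["...a 90-page contract...", "...a week-long email thread..."],
)
print(f"Roughly {used:,} of {CONTEXT_LIMIT:,} tokens in use")
```

The point is simply that the prompt, the chat history, and the uploads all draw on the same budget. There is no separate pile for “background material”.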
Why more information made things worse
My mistake was assuming that more information would lead to better answers. A very legal assumption. So I kept adding documents “just in case”, layering instructions on top of instructions, and clarifying points that probably didn’t need clarifying at all. The problem is that once the context window gets overloaded, the AI starts making trade-offs. Some details are quietly dropped. Others are oversimplified. The output still looks confident, just less accurate. It’s like asking a colleague to review a contract while simultaneously briefing them on three other matters, forwarding a week-long email thread, and expecting a perfect answer by close of business. Something has to give, and it’s rarely the unimportant bits.
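If you are curious how “quietly dropped” actually happens, many chat tools trim an overfull window starting from the oldest messages. The sketch below is a simplified version of that common strategy, not how any specific product works, and it shows exactly why instructions set early in a long chat tend to vanish first:

```python
def estimate_tokens(text: str) -> int:
    """Same rough four-characters-per-token heuristic as above."""
    return max(1, len(text) // 4)

def trim_to_fit(messages: list[str], limit: int) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget.

    A simplified 'sliding window': real products vary (some summarise
    instead of dropping), but the effect is similar. Early material
    quietly disappears first.
    """
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > limit:
        kept.pop(0)  # the constraint you set at the start goes first
    return kept
```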
Red flags I now watch for
After breaking things a few times, I started noticing patterns. When the context window is struggling, the signs are usually subtle but consistent:
- The AI confidently answers a question I didn’t ask.
- It forgets constraints I clearly set earlier in the chat.
- The analysis of long documents suddenly becomes vague, generic, or oddly short.
- It contradicts itself across answers, often without realising it.
- Early responses are sharp; later ones feel like they were written after a very long day.
None of these would immediately raise a red flag on their own. Together, they’re usually a sign that the desk is too full and something important has fallen on the floor.
What I do differently now
These days, I’m much more deliberate. If a conversation starts getting messy, I stop and reset. A new chat, a clean prompt, and a short summary of what actually matters usually beats trying to rescue a bloated thread.

I also break work into smaller pieces. Instead of uploading everything at once, I ask the AI to focus on one document or issue at a time. It turns out that, much like junior lawyers, AI performs better with clear instructions and a manageable workload.

Most importantly, I’ve stopped treating AI like an all-knowing assistant and started treating it like a very fast colleague with a limited attention span. If I wouldn’t dump a suitcase of documents on someone’s desk and expect miracles, I no longer do it to AI.
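For what it’s worth, the one-document-at-a-time habit is easy to express in code. The `ask_model` function below is a hypothetical stand-in for whatever chat tool or API you actually use; the point is the shape of the loop, one document and one focused question per call, rather than everything at once:

```python
def ask_model(prompt: str) -> str:
    """Stand-in for your actual chat tool or API call (purely hypothetical)."""
    return f"[model's answer to a {len(prompt)}-character prompt]"

def review_one_at_a_time(documents: dict[str, str], question: str) -> dict[str, str]:
    """One document, one focused prompt, one answer; then a clean desk."""
    answers = {}
    for name, text in documents.items():
        prompt = (
            f"Consider only the document below and answer this question:\n"
            f"{question}\n\n--- {name} ---\n{text}"
        )
        answers[name] = ask_model(prompt)  # each call starts fresh
    return answers

answers = review_one_at_a_time(
    {"services_agreement.pdf": "...", "background_notes.docx": "..."},
    "What are the termination rights?",
)
```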
It’s not (just) about smarter prompts
Prompting still matters, of course. But it only works inside the context window. If the right information isn’t there, no clever wording will save it.

Understanding this didn’t suddenly turn me into a technical expert. It just helped me stop blaming my prompts and start paying attention to how much I was asking the AI to hold at once.

Once I did that, the tool became far more predictable, far more useful, and a lot less frustrating. Which, in legal work, already counts as a win.