Last night I was at an ACCA event. The room was full of CFOs and senior finance leaders — exactly the people I most want to hear from.

The consensus? AI is coming. But it doesn't have context. It can't make judgement calls. It doesn't know your organisation.

And here's the thing: they're right.

If you open a generic AI assistant and ask it to draft your board paper, write your variance commentary, or advise on your reserves policy — you'll get something that sounds plausible and means nothing. It doesn't know your chart of accounts. It doesn't know your restricted funds. It doesn't know how your Finance Committee Chair prefers to receive a risk summary. It doesn't know you.

So the CFOs in that room have diagnosed the problem accurately. What they haven't seen yet is the solution.

I Didn't Ask AI to Be Clever. I Gave It My Context.

About six months ago, I started learning how to use Claude Code — Anthropic's command-line AI tool — properly. Not as a chatbot. Not as a search engine. As a work partner that lives inside a persistent, structured vault of everything I know and everything I've built.

The approach comes from the Multiply Academy methodology. The core idea is deceptively simple: AI doesn't need to be smarter. It needs to know more about your specific situation. You build the context. The AI applies its capability to that context. The combination is transformative in a way that neither part is on its own.

So that's what I built.

What's in the Vault

I use Obsidian as my knowledge base: a folder structure on my laptop that holds everything connected to my work. This shared brain contains:

  • My brand guidelines: exact hex codes, typography rules, logo conventions
  • My full career history, CV, professional biography, and published articles
  • My voice and values — how I communicate, what I believe, my two-challenge framework for finance leaders thinking about AI
  • My client pipeline, proposals, and discovery call notes
  • My automation catalogue: detailed specs for every finance automation I've built and tested
  • My expertise — years of accumulated sector knowledge, curated research, regulatory reference materials, and deep domain intelligence
  • A memory system: structured files that Claude reads at the start of every session, so it knows who I am and how I like to work

When I open Claude Code now, it doesn't start from zero. It starts from all of that.
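To make that concrete, here is a sketch of what such a vault might look like. The folder names and the rule wording are illustrative only, my own shorthand rather than a prescribed Multiply Academy or Claude Code layout; the point is the shape, not the labels.

```
vault/
├── brand/          hex codes, typography rules, logo conventions
├── career/         CV, biography, published articles
├── voice/          tone guidelines, values, frameworks
├── pipeline/       proposals, discovery call notes
├── automations/    specs for each finance automation built and tested
├── research/       sector knowledge, regulatory reference materials
└── memory/
    └── session-rules.md    read at the start of every session, e.g.:
                            "Read career/cv.md before assessing any
                             job or client opportunity."
                            "Draft client-facing documents in the
                             voice defined in voice/."
```

The memory folder is what stops every session starting from zero: the rules tell the AI which context to load before it does anything else.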

What That Actually Means in Practice

The difference isn't subtle.

When I ask for help drafting a proposal, it drafts in my voice, using my brand colours, referencing my actual automations by name, and positioning them correctly for the client I'm writing for. When I ask for a LinkedIn post, it writes in my register — personal, specific, never claiming client results I don't yet have. When I ask it to assess a job opportunity, it reads my CV first without being asked, because a memory rule tells it to.

I'm not describing a future state. This is how I worked today. It's how I worked yesterday.

Tasks that used to take hours now take minutes — not because the thinking disappeared, but because the context was already loaded and the AI could apply it immediately. I make the judgement calls. The AI does the labour.

So What About Those Judgement Calls?

The CFOs in that room were worried that AI would try to make decisions it isn't equipped to make. That's a fair concern for generic AI. It's not a concern for a well-configured shared brain.

In my setup, the AI doesn't decide what to write in my annual report — it drafts, and I review and approve. It doesn't decide whether to accept a client engagement — it surfaces the analysis, and I make the call. It doesn't decide whether a particular LinkedIn comment is too sales-y — it drafts to a brief, and I edit before I post.

The human judgement is still there. What AI has removed is the two hours of mechanical groundwork that used to precede every judgement call.

The Honest Version

I haven't done this perfectly. The vault took months to build properly. The memory system required real thought about what future-me would need to know. There were sessions early on where the AI gave me something generic because I hadn't given it the context it needed.

But the methodology from Multiply Academy gave me a framework for thinking about it — what to put in, how to structure it, how to make the context retrievable and useful rather than just voluminous. And the results since getting the structure right have been significant.

What This Means for Finance Directors

The "no context" objection is correct. It is also solvable. The question isn't whether AI can understand your organisation. The question is whether you're prepared to give it the material it needs to do so.

The finance leaders who solve that problem first — who build the shared brain, encode the organisational knowledge, and integrate AI into how they actually work — are going to have a structural productivity advantage over those who are still waiting for AI to figure it out by itself.

It won't.

You have to build the context. But once you do, the judgement calls get a lot easier, because you're no longer doing everything else at the same time.