Teaching AI to be your Second Brain's librarian
My brain is wired for databases, so handwritten notes have never worked for me. Paper doesn’t let me search, reorganize, or build connections between ideas. But this same wiring comes with a curse: I tend to overcomplicate my note-taking system over time. I’ve tried a lot of different apps, and each promised to be the system that would finally help me organize my thoughts and remember what matters. None of them stuck, and the problem wasn’t the tools; it was me. I’d capture information enthusiastically, then never look at it again. Though if I’m being honest, the real explanation might be simpler: I’m probably just too lazy.
This year, something changed. I’ve been working heavily with AI for coding, using tools like Claude Code and Cursor that can read, understand, and modify files. Somewhere along the way, a thought kept nagging at me: what if I could use these same tools with my notes? I’d seen blog posts and YouTube videos of people doing exactly this, but my vault was too complicated. Years of overengineering had left it riddled with dynamic content from Dataview plugin queries (the kind of stuff that looks like gibberish to an AI reading raw markdown files). Then I read Steph Ango’s article about how he uses Obsidian, and suddenly I had a blueprint. His approach of minimal folders, heavy linking, and markdown everything wasn’t just good note-taking practice, it was exactly the structure an AI assistant could navigate.
The Markdown Foundation
Before I explain the AI part, I need to talk about file formats. This might seem like a detour, but it’s actually the foundation of everything that follows.
Markdown isn’t just durable for humans, it’s also the native language of AI coding tools. This became clear to me at work, where we’ve been discussing how to turn our software documentation and product specs into powerful resources for AI. The more structured and plain-text your docs are, the more useful they become as context for AI assistants. Suddenly, writing good markdown documentation isn’t just about human readability, it’s about creating an interface that AI can leverage.
When I started using tools like Cursor I realized my markdown vault isn’t just a collection of notes, it’s an API. Every file is readable, every link is parseable, every piece of structure I create becomes something the AI can understand and work with.
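To make the “every link is parseable” point concrete, here is a minimal sketch of walking a vault and extracting an Obsidian-style link graph. The vault path, and the assumption that links use `[[wiki-link]]` syntax, are illustrative; this is not the author’s actual tooling.

```python
"""Sketch: reading a markdown vault as a link graph.
Assumes a folder of .md files using Obsidian-style [[wiki-links]]."""
import re
from pathlib import Path

# Capture the link target, stopping before any "|alias" or "#heading" suffix.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_link_graph(vault: Path) -> dict[str, set[str]]:
    """Map each note (by filename stem) to the set of notes it links to."""
    graph: dict[str, set[str]] = {}
    for note in vault.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        graph[note.stem] = {m.strip() for m in WIKILINK.findall(text)}
    return graph
```

Anything an AI tool does with a vault starts from exactly this kind of traversal: plain files in, structure out, no API integration required.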
This only works because markdown is plain text. If my notes were locked in a proprietary database, I’d need to set up API integrations or MCP servers just to give the AI access. The choice of format, which seemed like a minor technical decision, turned out to be the enabling constraint for everything that followed.
Building a Persistent Memory
What really clicked for me was discovering instruction files like CLAUDE.md or .cursorrules. These are just markdown files that define conventions and system instructions for the AI. Suddenly I realized I could set very specific rules for how the AI should interact with my vault (naming conventions, folder structure, linking patterns) and keep it all in a file that’s easy to maintain and version control. The vault isn’t just storage anymore; it’s an interface between my mind and an AI assistant.
This is fundamentally different from using ChatGPT or Claude in a browser, where conversations are stateless: you provide context in the prompt, get a response, and start fresh next time. But when AI has access to your notes, every conversation builds on everything you’ve written before.
The vault becomes a persistent context layer. I started thinking about it this way after watching a keynote at Lisbon AI 2025 where the presenter demonstrated a memory layer for AI agents. It clicked: my notes could be that memory layer. The AI doesn’t just know things in general, it knows things about me specifically: my projects, my areas of focus, the people I work with, the decisions I’ve made. Not because I told it in a single conversation, but because it’s all written down in files it can access.
The Librarian Metaphor
Here’s where I want to be careful about expectations. AI in my PKM system isn’t doing my thinking for me, generating insights I couldn’t have myself, or replacing the hard work of connecting ideas and forming opinions. What it’s doing is the work of a librarian.
Think about what a good librarian does: they know where everything is, they can find that article you vaguely remember reading last year, they maintain the organizational system so things stay findable, and they notice when a book is miscategorized and fix it. They suggest connections: “If you’re interested in this, you might also want to look at that.”
This is exactly what I’ve trained my AI assistant to do. I wrote an instruction file that explains my organizational philosophy, my naming conventions, my folder structure, and my linking habits. When the AI operates in my vault, it follows these rules.
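For a sense of what such an instruction file can look like, here is a hypothetical excerpt. The specific rules below are invented for illustration, not my actual conventions:

```markdown
# Vault conventions

## Naming
- Note titles are plain phrases; no dates in filenames.
- Daily notes live in `Daily/` and are named `YYYY-MM-DD.md`.

## Organization
- Keep folders to a minimum; prefer links over hierarchy.
- Reference material goes in `References/`; personal analyses stay separate.

## Linking
- Use [[wiki-links]] whenever another note's topic is mentioned.
- Never delete a note; move it to `Archive/` and ask for confirmation first.
```

Because the file is plain markdown in the vault itself, it gets versioned, edited, and refined like any other note.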
The other day, I asked it to help me reorganize my References folder. Within minutes, it had analyzed each file, understood what belonged where based on my stated philosophy, and moved things accordingly. It found product notes mixed with my personal analyses, reference material in the wrong location, and edge cases I hadn’t considered. It asked clarifying questions when the rules were ambiguous. It did quickly what would have taken me hours of tedious file management.
But here’s the key: I made the decisions and the AI executed them. I explained my philosophy; the AI applied it consistently. This is librarian work, essential and valuable, and something I would never do myself because it’s boring.
Two Modes of AI Collaboration
As I’ve built this system, I’ve realized there are two distinct ways AI participates in my knowledge management:
Interactive mode is what I described above. I open a terminal, start a conversation with the AI, and ask it to do things: review my notes, create a new file, find connections, reorganize a folder. This is real-time collaboration where I have a goal and the AI helps me achieve it.
Automated mode is different. This is AI working in the background, without my involvement, to enrich my vault with context I’d never manually record.
Every night, a script runs on my home server that fetches my activity from various platforms: the conversations I had at work, the pull requests I created, the tasks I completed, the money I spent. It formats all of this into markdown and commits it to my daily note, so by the time I wake up, yesterday’s note is already populated with a record of what I did.
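The shape of that nightly job can be sketched as below. The fetcher functions, the vault layout, and the commit message format are all placeholders for whatever APIs and paths your own setup uses; the real script is specific to my accounts and server.

```python
"""Sketch of a nightly enrichment job: fetch yesterday's activity,
format it as markdown, and commit it to the daily note.
fetch_pull_requests() and fetch_completed_tasks() are hypothetical stubs."""
import datetime
import subprocess
from pathlib import Path

def fetch_pull_requests(day: datetime.date) -> list[str]:
    return []  # placeholder: call your Git forge's API here

def fetch_completed_tasks(day: datetime.date) -> list[str]:
    return []  # placeholder: call your task manager's API here

def write_daily_note(vault: Path, day: datetime.date) -> Path:
    """Render the day's activity as markdown sections in Daily/YYYY-MM-DD.md."""
    sections = {
        "Pull requests": fetch_pull_requests(day),
        "Completed tasks": fetch_completed_tasks(day),
    }
    lines = [f"# {day.isoformat()}", ""]
    for title, items in sections.items():
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items or ["(none)"])
        lines.append("")
    note = vault / "Daily" / f"{day.isoformat()}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text("\n".join(lines), encoding="utf-8")
    return note

def commit_note(vault: Path, note: Path) -> None:
    """Version the generated note so the vault stays the source of truth."""
    subprocess.run(["git", "-C", str(vault), "add", str(note)], check=True)
    subprocess.run(
        ["git", "-C", str(vault), "commit", "-m", f"daily note {note.stem}"],
        check=True,
    )
```

Run from cron or a systemd timer with yesterday’s date, this is all it takes for the note to be waiting in the morning.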
I call these daily notes “read-only” because I don’t write in them. They’re generated artifacts whose purpose isn’t to be read directly but to provide context. When I ask the AI to help me prepare for a meeting, it can see what I was discussing last week. When I ask for a weekly summary, it has the raw material to synthesize. When I’m trying to remember when I made a certain decision, it’s searchable.
This is what turns the AI from a chat assistant into something that actually knows my context. The daily notes are the feed, the AI is the processor, and together they create a system that knows more about my activities than I consciously remember.
No Vendor Lock-In
I want to end with a design principle that I think is crucial for anyone building a system like this: avoid vendor lock-in.
The AI landscape is evolving rapidly, and today’s best model might be tomorrow’s second choice. Pricing changes, capabilities change, new players emerge, and if your entire system depends on one provider, you’re vulnerable.
I recently switched from Claude Code (Anthropic’s tool) to OpenCode (an open-source alternative). Don’t get me wrong: I love Claude’s models, and they’ve been my most-used for coding this year by far. But OpenCode supports multiple AI providers, so I can use Claude for complex tasks, a cheaper model for simple queries, and free models for experimentation. Same vault, same instruction files, same commands, just different brains as needed.
This works because my vault is the source of truth, not the AI tool. For my instruction files, I followed the conventions from AGENTS.md, an open format for guiding coding agents that’s already used by thousands of open-source projects and supported by tools from OpenAI, Google, Cursor, and many others. The daily notes are just markdown that any AI can process, and the organizational philosophy lives in my files, not in some provider’s proprietary system.
The AI is a lens through which I view and manage my notes. I can swap lenses without changing the underlying collection.
The takeaway is simple: if you’re using a markdown-based note system, you already have the foundation for AI collaboration. Your notes are readable by machines, and the question is just what you want those machines to do with them.