Building My Personal AI Utilization System
Motivation
AI tools are powerful, but using them individually quickly becomes messy.
A typical workflow looks like this:
ChatGPT → Perplexity → NotebookLM → Drive → repeat
Over time:
- context becomes fragmented
- knowledge spreads across tools
- switching between AIs creates friction
- long-term work loses structure
The limitation is not the capability of AI.
The real problem is the lack of architecture.
Goal
Design a system where multiple AI tools work together instead of being used independently.
This system should support:
- studying university courses
- managing research materials
- writing long documents
- building software projects
- organizing knowledge over time
The focus is practical workflow, not theoretical design.
Key Idea
Assign clear roles to different tools.
Search → Perplexity
Store → Zotero
Analyze → NotebookLM
Organize → Gemini
Design → Claude
Build → ChatGPT / Codex
Code State → GitHub
Memory → Google Drive
Control → gws CLI
Each tool does one job well.
Philosophy
- AI assists thinking — it does not replace it
- AI is a cognitive tool
- Working systems > perfect systems
Practical Constraints
AI workflow tooling is still immature.
- no universal standard
- fragmented ecosystems
- high setup cost
And a common trap:
You spend more time building the system than using it.
This architecture is designed to avoid that.
Core Architecture
This system is built around one simple idea:
Google Drive = shared memory
Gemini CLI = memory operator (Librarian)
Architecture Overview
This is not a linear pipeline.
It is an input/output system centered on memory.
Diagram
INPUTS
PDFs / notes / docs / course files / drafts
|
v
[ Gemini CLI (via gws CLI) ]
- classify
- rename
- route
- update memory
- extract dates
|
v
[ Google Drive Memory Layer ]
|
+------------+------------+
| |
v v
[ Knowledge Tools ] [ Execution Tools ]
- NotebookLM - Claude
- Perplexity - ChatGPT / Codex
- Zotero (manual) - Claude Code
| |
v v
Analysis / Search Design / Build / Code
\ /
\ /
+---------+-----------+
|
v
OUTPUTS
Google Drive / GitHub
How It Works
1. Input → Gemini
All raw inputs go through Gemini CLI.
- lecture files
- PDFs
- notes
- drafts
Gemini:
- reads content
- renames files
- classifies
- places them correctly
- updates context
2. Google Drive = Memory
Drive stores:
- documents
- course materials
- project files
- AI context
This is what gives the system continuity.
3. AI Tools Consume Context
Different tools read from the same memory.
Knowledge side
- NotebookLM → document analysis
- Perplexity → search
- Zotero → paper archive (manual, source of truth)
Execution side
- Claude → reasoning / design
- ChatGPT / Codex → implementation
- Claude Code → repo interaction
4. Output → System
Outputs are written back:
- Drive → knowledge
- GitHub → code
Minimal Setup
You do NOT need everything.
Minimal Working System
Google Drive
Gemini CLI
gws CLI
This alone gives you:
- file organization
- persistent memory
- reusable context
Optional Tools
Perplexity → search
NotebookLM → analysis
Claude → reasoning
ChatGPT / Codex → coding
GitHub → code state
Zotero → research archive
Add gradually.
Key Insight
The architecture matters more than the tools.
Google Drive Folder Architecture
AI_OS/
├── University/
├── Projects/
└── Librarian/
Librarian (Global System Context)
Librarian/
architecture.md
routing.md
decisions.md
Defines:
- system structure
- tool routing rules
- design decisions
librarian_memory (Local Context)
Each course/project has its own:
librarian_memory/
Standard Files
librarian_memory/
current_task.md
brief.md
handoff.md
current_task.md
Tracks active work.
Working on:
Exercise B
Next:
Proof by induction
Deadline:
March 16
brief.md
Stores key facts.
Course: CAS2101
Midterm: April 21
Final: June 16
handoff.md
Maintains continuity.
Completed:
1–3
Remaining:
4–5
Notes:
proof structure is important
Why This Matters
Without this:
every AI session resets
With this:
persistent cross-AI memory
🔒 Librarian Control Policy (Critical)
The following folders are AI-managed:
Librarian/
*/librarian_memory/
Rule
Do NOT edit these manually.
Instead:
"Update current_task.md to reflect new task"
Why
These folders are:
system memory + AI context layer
Manual edits can break consistency.
Principle
User controls intent. Gemini controls memory.
Gemini = Librarian
Gemini manages:
- file organization
- naming
- routing
- context updates
- summaries
- schedule extraction
Example Workflows
1. Course Material Automation
Download → Gemini → auto organize + context update
2. Project Knowledge Management
Code → GitHub
Docs → Drive (Gemini managed)
3. Research Workflow
Perplexity → Zotero → NotebookLM → Claude
Zotero = original source
4. Schedule Automation
Document → Gemini → Calendar + memory update
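The date-extraction half of this workflow can be approximated with a regex over the memory files. This is a simplified sketch under an assumed line format ("Label: Month Day", as in the brief.md example); `extract_dates` is a hypothetical name, and Gemini's actual extraction is content-aware rather than pattern-based.

```python
import re

# Matches lines like "Midterm: April 21" or "Final: June 16".
# Lines without a "Month Day" value (e.g. "Course: CAS2101") are ignored.
DATE_LINE = re.compile(r"^(\w+):\s+(\w+ \d{1,2})$", re.MULTILINE)

def extract_dates(text: str) -> dict:
    """Return {event: date} pairs found in a memory file."""
    return {label: date for label, date in DATE_LINE.findall(text)}
```

The extracted pairs can then be pushed to a calendar and mirrored back into brief.md so the schedule lives in memory, not just in the calendar app.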
Flexibility & Evolution
This system is loosely coupled.
You can:
- replace tools
- simplify structure
- customize workflows
Context is Portable
All important state lives in:
Drive + markdown files
So you can:
move context → change tools → continue work
Key Idea
Tools are replaceable. Context is persistent.
Interoperability (MCP)
This system aligns with emerging standards like:
Model Context Protocol (MCP)
Which enables:
- shared context
- tool interoperability
- external system integration
Environment Setup (Guideline)
Identity Layer
Google
GitHub
OpenAI
Anthropic
Perplexity
👉 Use Google as unified login
Development
Git
VS Code
Node / Python
Terminal
Core Integration
gcloud CLI
gws CLI
Gemini CLI
Google Drive
Setup Principle
Do not copy blindly. Adapt to your environment.
Quick Start
1. Create folder structure
2. Add librarian_memory files
3. Install Gemini CLI + gws
4. Run your first file through Gemini
Adapting the System
You don’t have to build this alone.
Use AI
Give this article or repo to an AI
→ ask it to adapt the system
Example
"Adapt this to my Mac + Python workflow"
"Simplify this to minimal setup"
Why This Works
Because the system is:
modular + loosely coupled + context-driven
Conclusion
AI tools alone create fragmentation.
This system turns them into:
a structured, persistent working environment
Result:
- shared context
- organized knowledge
- reduced friction