Building an AI-Native Business Operating Layer
Public notes on enterprise AI memory, semantic ingestion, business knowledge graphs, and personal agents for work.
Why this exists
I am experimenting with whether business software can be rebuilt around AI-native memory and context instead of static screens, forms, and manually maintained data structures. This site records that exploration in the open: what I am trying to build, why it might matter, and what remains unsolved.
The domain is unusual (tourbignellze.top is not a corporate brand), and that is intentional: this site is a public lab notebook, not a polished SaaS landing page.
The problem with current business software
Most operational knowledge never lives where the software expects it. Decisions sit in email threads. Commitments are spoken in meetings and lost in transcripts. Risks surface in tickets or side conversations. Context is scattered across documents, CRM notes, and people’s heads.
Traditional systems capture what someone bothered to type into a form. Everything else is external. When teams ask “what did we agree to?” or “why did we choose this?”, they search inboxes and calendars—not the ERP.
The thesis
AI-native business software should be built around memory, context, and agents, not only around screens and forms. The model is not a chat feature bolted onto last decade’s UI. It is part of how information enters the system, how knowledge is represented, and how people retrieve and act on it.
Incoming information—emails, meeting transcripts, support tickets, documents, ERP and CRM exports, external sources—would be interpreted rather than merely stored. Useful facts, events, signals, and relationships would be extracted and integrated into a central model. Raw sources would remain available for audit. Distilled passages would be indexed for retrieval without discarding ground truth.
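A minimal sketch of what "interpreted rather than merely stored" could mean in practice. All names here are illustrative, not taken from the actual codebase: an extracted fact carries a citation anchor back into the immutable raw source, so the distilled form never replaces ground truth.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractedFact:
    """One distilled fact, always anchored to its immutable raw source."""
    fact: str          # distilled statement
    source_id: str     # id of the raw document or transcript (never mutated)
    char_start: int    # citation anchor: character span in the raw text
    char_end: int
    confidence: float  # extraction confidence; low values route to human review

# The raw source is kept verbatim; only an index entry is derived from it.
raw = "Meeting notes: we agreed to move the Acme renewal to Q3, pending legal review."

fact = ExtractedFact(
    fact="Acme renewal moved to Q3",
    source_id="transcript-2024-05-14",
    char_start=raw.index("we agreed"),
    char_end=raw.index("Q3") + 2,
    confidence=0.82,
)

# The anchor lets any answer cite the exact ground-truth passage:
evidence = raw[fact.char_start:fact.char_end]
```

The point of the frozen dataclass and the character span is that distillation is additive: you can always walk from a retrieved fact back to the passage that justifies it.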
What I am exploring
The architecture I am prototyping separates concerns deliberately:
- Semantic ingestion — interpret messy business inputs; preserve raw sources; extract evidence and structured objects.
- Semantic distillation — remove noise while keeping signal; separate evidence passages from broader context; avoid one-line summaries that erase detail.
- Vector and graph retrieval — vectors for semantic similarity; a business knowledge graph for entities, relationships, and temporal change.
- Access-controlled agents — individualized to users and tasks; retrieval filtered by policy before context reaches the model.
- Human validation — still required for sensitive, ambiguous, or critical data; automation should expose uncertainty, not hide it.
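The separation of concerns above can be sketched as a pipeline of small, independently testable stages. Everything here is a toy (function names, the keyword-based "distillation", and the in-memory store are all my stand-ins), but it shows the ordering that matters: access control filters before anything reaches a model.

```python
def ingest(raw_text: str, source_id: str) -> dict:
    """Semantic ingestion: preserve the raw source and attach provenance."""
    return {"raw": raw_text, "source_id": source_id}

def distill(doc: dict) -> dict:
    """Semantic distillation: drop noise, keep evidence passages.
    (A real system would use a model; this toy keeps sentences with signal words.)"""
    evidence = [s.strip() for s in doc["raw"].split(".")
                if "agreed" in s or "risk" in s]
    return {**doc, "evidence": evidence}

def index(doc: dict, store: list) -> None:
    """Index distilled passages for retrieval; raw text stays available for audit."""
    for passage in doc["evidence"]:
        store.append({"source_id": doc["source_id"], "passage": passage})

def retrieve(query: str, store: list, allowed_sources: set) -> list:
    """Policy first: filter by permissions BEFORE matching, so nothing the
    caller cannot see ever enters the model's context."""
    return [e for e in store
            if e["source_id"] in allowed_sources
            and query.lower() in e["passage"].lower()]

store: list = []
doc = ingest("We agreed to ship in June. The weather was nice. "
             "A supplier risk was raised.", source_id="doc-001")
index(distill(doc), store)
hits = retrieve("risk", store, allowed_sources={"doc-001"})
```

Note that `retrieve` with an empty `allowed_sources` returns nothing regardless of the query: the permission check is deterministic, not a prompt instruction.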
In the current proof-of-concept, PostgreSQL holds authoritative records—users, projects, permissions, raw text, citation anchors, audit events. A graph memory layer supports hybrid retrieval but is verified against the database before anything is shown to an agent.
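One way that verification step can look (a sketch under stated assumptions: `sqlite3` stands in for PostgreSQL, and the table and column names are mine, not the prototype's). The graph layer proposes candidate passages; the authoritative database decides what the agent may actually see.

```python
import sqlite3

# Authoritative store (PostgreSQL in the prototype; sqlite3 as a stand-in).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE passages (id TEXT PRIMARY KEY, project_id TEXT, text TEXT);
    CREATE TABLE permissions (user_id TEXT, project_id TEXT);
    INSERT INTO passages VALUES ('p1', 'proj-a', 'Renewal agreed for Q3');
    INSERT INTO passages VALUES ('p2', 'proj-b', 'Vendor risk flagged');
    INSERT INTO permissions VALUES ('alice', 'proj-a');
""")

def verify_candidates(user_id: str, candidate_ids: list) -> list:
    """Graph memory proposes; the database disposes. Only passages that
    still exist in the authoritative store AND belong to a project the
    user can access are returned; stale or out-of-scope ids are dropped."""
    if not candidate_ids:
        return []
    placeholders = ",".join("?" for _ in candidate_ids)
    return db.execute(
        f"""SELECT p.id, p.text
            FROM passages p
            JOIN permissions perm ON perm.project_id = p.project_id
            WHERE perm.user_id = ? AND p.id IN ({placeholders})""",
        [user_id, *candidate_ids],
    ).fetchall()

# Suppose hybrid retrieval returned p1, p2, and an id that no longer exists:
visible = verify_candidates("alice", ["p1", "p2", "p-deleted"])
```

The design choice is that the index is never the authority: a stale graph entry or an over-broad vector match fails the join and simply disappears before the agent sees it.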
Why this is hard
Several problems are easy to describe and difficult to implement well:
- Permissions — project boundaries, roles, and future finer scopes must gate retrieval deterministically.
- Source traceability — answers should cite passages tied to immutable sources, not paraphrase from memory.
- Contradictions — sources disagree; the system should surface conflict rather than merge incompatible facts.
- Temporal facts — relationships and statuses change; “true now” differs from “true in March.”
- Stale knowledge — old decisions linger in indexes unless supersession is modeled explicitly.
- Confidence and governance — not everything extracted by a model should be treated as fact; policy and review still matter.
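Two of these problems, temporal facts and stale knowledge, can be made concrete with explicit supersession. A toy sketch (field and function names are mine): a fact knows when it was asserted and which newer fact, if any, replaces it, so "true now" and "true in March" become different queries.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Fact:
    statement: str
    valid_from: date
    superseded_by: "Fact | None" = None  # newer fact that replaces this one

def as_of(facts: list, when: date) -> list:
    """Facts that were true at `when`: already asserted, not yet superseded."""
    return [f for f in facts
            if f.valid_from <= when
            and (f.superseded_by is None or f.superseded_by.valid_from > when)]

march = Fact("Launch planned for March", valid_from=date(2024, 1, 10))
june = Fact("Launch moved to June", valid_from=date(2024, 4, 2))
march.superseded_by = june  # supersession is modeled explicitly, not implied

facts = [march, june]
true_in_february = as_of(facts, date(2024, 2, 1))  # the March plan only
true_in_may = as_of(facts, date(2024, 5, 1))       # the June plan only
```

Without the `superseded_by` link, both statements would linger side by side in an index and a retriever would have no principled way to prefer one; with it, staleness is a property you can query rather than a bug you discover.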
Current state
The project is in research and proof-of-concept stage. A local monorepo implements manual file upload, an ingestion worker, semantic distillation, structured memory extraction, graph ingestion, and project-scoped retrieval with cited answers. There are no live connectors, no production deployment, and no claim of enterprise readiness.
That scope is deliberate: prove the pipeline on realistic messy inputs before pretending integrations and governance are solved.
What this could become
If the architecture holds, the long-term shape is an AI-native layer that helps people access, update, and act on business knowledge—possibly an alternative to parts of traditional ERP and operations workflows, especially where work is conversational and contextual rather than transactional.
Personal agents could answer questions with citations, flag risks that crossed projects, and prepare briefings from governed memory rather than from whatever happened to be pasted into chat that morning.
What I am not claiming
- I am not claiming traditional ERP disappears overnight.
- I am not claiming large language models can safely manage enterprise data without governance, access control, and human oversight.
- I am not claiming the architecture is solved—only that it is worth exploring seriously.
- I am not sharing full implementation details on this site; this is the narrative layer.
Latest research notes
View all notes →
Why Access Control Cannot Be an Afterthought
Enterprise AI must filter data before retrieval. Prompt instructions are not security controls, and retrieval indexes must not be the authority for permissions.
Semantic Ingestion as the Missing Layer
Business inputs arrive messy and continuous. A semantic ingestion layer turns them into governed memory without losing ground truth.
Why Enterprise AI Needs Memory, Not Just Chat
Chat interfaces are easy to demo but insufficient for work. The hard part is persistent, governed, traceable business memory.