
How I Run Parts of calvinkennedy.com with a Governed AI System

A case study on using OpenClaw and Portarium to handle bounded business workflows on calvinkennedy.com without pretending the whole business is autonomous.

Tags: Case Study · AI Agents · Governance · Consulting

The useful version of the agent story is not “AI runs my business.”

The useful version is narrower and more defensible:

I am using a governed agent system to run selected workflows on calvinkennedy.com, and I am treating my own business as the first proving ground before I sell that pattern to anyone else.

That is the case study.

The stack is split deliberately:

  • OpenClaw is the operator
  • Portarium is the governance layer
  • calvinkennedy.com is the live business surface where the workflows become real

The distinction matters because most agent stories collapse at exactly this point. They jump from “the model can do something interesting” to “the business should trust it with a whole workflow.” I am trying to close that gap the boring way: bounded scope, explicit policy, audit trails, and human approval on anything that can actually hurt me.

What is live right now

This is not a hypothetical architecture deck. There are already real business workflows running through the governed path.

Today, the live slice looks like this:

  • contact-form inquiries from calvinkennedy.com are forwarded into OpenClaw
  • OpenClaw classifies the inquiry into a service line such as consulting, tutoring, or AI workflow coaching
  • OpenClaw drafts a reply instead of sending one
  • OpenClaw creates a beads issue with the lead context and the draft so I can review it
  • workflow and policy events are written to a persistent JSONL audit trail

That means the system is already doing real work on a real business surface:

  • intake
  • classification
  • summary
  • draft preparation
  • internal work routing

What it is not doing:

  • sending outbound emails
  • publishing public content automatically
  • changing production configuration
  • taking any action that should bypass explicit review

That line is the whole point.

Why I built the operating model this way

Most agent failures are not model failures first. They are workflow failures.

The common pattern looks like this:

  1. a team sees a strong demo
  2. they wire the model into a messy workflow
  3. they skip policy, validation, or approval boundaries
  4. the system eventually surprises them in exactly the place where surprise is expensive

I do not want calvinkennedy.com to become that story.

So the architecture is split into roles.

OpenClaw: the operator

OpenClaw receives triggers and does bounded work:

  • intake handling
  • inquiry classification
  • draft-reply generation
  • weekly digest drafting
  • beads preparation

It is allowed to be useful. It is not allowed to decide its own safety model.
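To make "bounded work" concrete, here is a minimal sketch of what a classification step like this could look like. It is illustrative only: the article does not describe OpenClaw's actual classification logic (which presumably uses the model itself), and the function name, keyword lists, and the needs-review fallback are all my assumptions.

```python
# Hypothetical sketch of inquiry classification into a service line.
# Keyword lists and names are illustrative, not the real OpenClaw logic.

KEYWORDS = {
    "consulting": ["audit", "architecture", "engagement", "consult"],
    "tutoring": ["tutor", "student", "lesson", "homework"],
    "ai-workflow-coaching": ["agent", "workflow", "automation", "llm"],
}

def classify_service_line(message: str) -> str:
    """Return the best-matching service line, or a review bucket."""
    text = message.lower()
    scores = {
        line: sum(1 for kw in words if kw in text)
        for line, words in KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # On zero signal, route to a human-review bucket instead of guessing.
    return best if scores[best] > 0 else "needs-review"
```

The design choice worth noting is the fallback: a bounded operator should prefer "I don't know, a human should look" over a confident wrong answer.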

Portarium: the governance layer

Portarium validates what OpenClaw is trying to do and decides whether the action is:

  • auto
  • assisted
  • human-approve
  • manual-only

That means “can the model do this?” is not the only question.

The more important question is:

“Should this workflow be allowed to happen in this context, at this risk level, with this amount of visibility?”

That is what Portarium exists to answer.
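A tiered policy check of this shape can be sketched in a few lines. The four tier names come from the article; the action-to-tier mapping and the function names are assumptions, not Portarium's real API.

```python
# Hypothetical sketch of a Portarium-style tier decision. Tier names come
# from the article; the mapping and function names are assumptions.

from enum import Enum

class Tier(str, Enum):
    AUTO = "auto"
    ASSISTED = "assisted"
    HUMAN_APPROVE = "human-approve"
    MANUAL_ONLY = "manual-only"

# Each action is mapped to the least-privileged tier allowed to run it.
POLICY = {
    "classify_inquiry": Tier.AUTO,
    "draft_reply": Tier.ASSISTED,
    "send_email": Tier.HUMAN_APPROVE,
    "change_dns": Tier.MANUAL_ONLY,
}

def decide(action: str) -> Tier:
    # Unknown actions fail closed: manual-only, never auto.
    return POLICY.get(action, Tier.MANUAL_ONLY)
```

The important property is the default: an action the policy has never seen should land in the most restrictive tier, not the most permissive one.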

The workflow I trust first: inbound inquiry handling

The first serious business workflow is not content generation or deployment automation. It is lead handling.

That was the correct first slice for four reasons:

  • the inputs are already structured
  • the commercial value is obvious
  • the error cost is manageable if sending stays manual
  • the workflow is easy to audit

Here is the current path in plain English:

  1. someone submits the contact form on calvinkennedy.com
  2. the site validates the payload and forwards it to OpenClaw with a request ID
  3. OpenClaw infers the likely service line and runs the inquiry through the governed workflow
  4. the system drafts a reply and creates an internal beads item instead of firing off an email
  5. I review the context, the draft, and the next step manually

That removes low-leverage admin work without pretending responsibility disappeared.

This is the exact line I care about in agent systems:

  • let the system absorb repetitive workflow load
  • keep accountability explicit where the consequences are real
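Step 2 of the path above, the site-side validation and hand-off, could be wired roughly like this. The field names and the shape of the forwarded payload are assumptions for illustration; only the request-ID idea comes from the article.

```python
# Hypothetical sketch of the site-side step: validate the contact-form
# payload and stamp it with a request ID before handing it to the operator.
# Field names are illustrative assumptions.

import uuid

REQUIRED_FIELDS = ("name", "email", "message")

def prepare_inquiry(payload: dict) -> dict:
    """Validate a contact-form payload and attach a request ID."""
    missing = [f for f in REQUIRED_FIELDS if not payload.get(f)]
    if missing:
        raise ValueError(f"rejected: missing fields {missing}")
    return {
        # The request ID lets audit records be correlated end to end.
        "request_id": str(uuid.uuid4()),
        **{f: payload[f] for f in REQUIRED_FIELDS},
    }
```

Rejecting malformed payloads at the edge keeps garbage out of the governed workflow, and the request ID is what ties a lead, its draft, and its audit records together later.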

What the audit trail changes

Without auditability, an agent system is just a sequence of opaque guesses.

OpenClaw now writes persistent JSONL audit records for two things:

  • Portarium policy decisions
  • OpenClaw lifecycle and tool-result events

That gives me a record of:

  • what workflow was running
  • what service line it thought it was handling
  • what was allowed automatically
  • what was blocked or escalated
  • what tool execution succeeded or failed
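An append-only JSONL audit writer of the kind described is small enough to sketch in full. The field names below are assumptions; the one-JSON-object-per-line format is the standard JSONL convention the article refers to.

```python
# Hypothetical sketch of an append-only JSONL audit writer: one JSON
# object per line, appended, never rewritten. Field names are assumptions.

import json
import time
from pathlib import Path

def append_audit_event(path: Path, event: dict) -> None:
    """Append a single audit record as one JSON line."""
    record = {"ts": time.time(), **event}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only JSONL is a deliberately boring choice: it survives crashes mid-write better than a rewritten file, and it can be grepped or replayed line by line when a workflow misfires.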

This matters more than it sounds.

If a lead draft is poor, if a classification is wrong, or if a workflow misfires, I need to know whether the failure was:

  • bad classification
  • weak prompting
  • wrong policy tier
  • bad tool execution
  • missing context

Without that visibility, you do not have operations. You have vibes.

What I still do not trust the system to do

This is the part most agent marketing skips.

The case study only has value if the boundaries are explicit.

Right now, I do not trust the system to:

  • send outbound communication on my behalf
  • publish content without review
  • make registrar, payment, or purchase decisions
  • perform destructive infrastructure actions
  • quietly mutate the business in ways that are hard to unwind

Those actions stay behind approval or manual-only boundaries because the cost of being wrong is not “the draft was weak.” The cost is trust, money, or production damage.

That is why I think the phrase “autonomous business” is usually too sloppy to be useful.

What I am building is not autonomy theater. It is governed business operations.

The business value is already clear

Even at this early stage, the value proposition is straightforward.

The system already helps with:

  • faster first-pass handling of inbound leads
  • cleaner classification across consulting, tutoring, and AI workflow coaching
  • less manual reconstruction when I switch contexts
  • better internal records of what happened in a workflow
  • a stronger foundation for follow-up, reminders, and future CRM-style state

This is exactly why I think most teams should start with one bounded workflow instead of a big autonomy pitch.

The early win is not “the agent is amazing.”

The early win is:

“A task I used to rebuild from scratch now arrives pre-structured, pre-routed, and easier to review.”

That is operational leverage.

What comes next

The next layer is not more surface-level magic. It is stronger business state.

The current follow-on work is:

  • durable lead-state persistence
  • better follow-up coverage
  • richer weekly digest workflows
  • more visible approval and review paths

Later, content assistance and publishing support can expand, but only once the approval model is mature enough to make public-facing changes safe.

That sequencing matters.

If I cannot trust the system on one narrow workflow, I have no business pretending it should manage five.

Why this is a consulting proof asset

This is the strongest reason to build the system on my own stack first.

I do not want to sell governed AI systems as a theory.

I want to be able to say:

  • I used this pattern on my own business first
  • I know which parts should be automated early
  • I know where the approval boundaries belong
  • I know what needs logging because I had to debug it myself
  • I know the difference between a flashy agent and a workflow that actually survives production

That is a much stronger consulting story than generic AI transformation language.

The pitch is not:

“Look at my autonomous agent.”

The pitch is:

“I built a governed system that runs selected business workflows on my own stack. If you have one workflow that is still brittle, manual, or hard to trust, I can help you harden that too.”

The real thesis

The important part of agent infrastructure is not the model acting alone.

It is the operating layer around the model:

  • policy
  • validation
  • review
  • auditability
  • explicit boundaries

That is what makes OpenClaw + Portarium useful.

And that is why calvinkennedy.com matters to me as more than a personal site. It is the place where the architecture has to survive real use, real stakes, and real operational constraints.

That is a better test than a demo.

If you are trying to make one workflow reliable enough to trust, start with the workflow, not the slogan.

Newsletter

Short notes on building AI agents in production.

One email when something worth sharing ships. No fluff, no daily cadence, no recycled growth-thread noise.

Primary use: consulting updates, governed AI workflow lessons, and major project writeups.