AI agents are ready to do real work. The problem is control.
Mr. Alpaca helps small teams define data boundaries, permission rules, and human approval gates before agents touch email, files, CRM, code, or customer operations.
Clear operating rules before AI access expands.
AI adoption is no longer about better prompts.
Small teams are moving from AI chat to AI action. Agents may soon read files, draft emails, update CRM records, summarize meetings, modify code, publish content, or trigger workflows.
That changes the operating question. It is no longer about prompting, models, or productivity tips:
- What can the agent read?
- What can it draft?
- What can it send?
- What can it update?
- What can it delete?
- What must stay blocked?
- When does a human approve?
- Who owns the rule?
The problem is not AI usage itself. The problem is uncontrolled access. The answer is not to ban AI agents — it is to design their operating boundaries before they expand.
- 01 An agent can draft an email — but should it send it? Drafting is a useful, low-risk action. Sending on your behalf is a different risk class entirely.
- 02 It can read a spreadsheet — but should it see every customer row? Read access without scope means agents see far more data than the workflow actually needs.
- 03 It can search Drive — but should it access contracts and payroll files? Folder-level boundaries are usually missing on day one of agent adoption.
- 04 It can update CRM — but should it change customer records without approval? Write access to systems of record without an approval gate is where small teams get into trouble first.
- 05 It can write code — but should it see production secrets? Coding agents that touch repos can pick up environment variables and credentials no one meant to expose.
- 06 It can summarize meetings — but should it process confidential transcripts? Meeting bots and summarizers route conversation content to vendors that may not have been reviewed.
We design the safety layer between your team and your AI agents.
Map the workflows your team wants agents to handle.
Sales follow-ups, drafting, internal reporting, customer support triage, scheduling, content generation — workflow by workflow, written down, before any agent touches a real system.
Classify what agents can see, redact, or never touch.
A practical map of which kinds of data should be allowed in agent context, which should be redacted first, which should be restricted to specific systems, and which should stay out of AI workflows entirely.
Define read / write / send / delete / update by surface.
Per-system permission models: what an agent can read, what it can write, what it can send, what it can update, and what it should never delete — across email, files, CRM, code, and internal tools.
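A per-surface permission model like this can be written down as plain data before any tooling exists. A minimal sketch of the idea, with default-deny behavior; the surface names, actions, and allowances here are illustrative, not drawn from any real deployment:

```python
# Illustrative per-surface permission model: each agent-facing system
# lists the actions an agent may take without further review.
# Surfaces and allowed actions below are invented examples.
PERMISSIONS = {
    "email": {"read", "draft"},        # never "send" or "delete"
    "files": {"read"},                 # scoped read only
    "crm":   {"read", "draft_update"}, # no direct writes to records
    "code":  {"read", "write"},        # repo access, no secrets
}

def is_allowed(surface: str, action: str) -> bool:
    """Default-deny: any action not explicitly listed is blocked."""
    return action in PERMISSIONS.get(surface, set())

print(is_allowed("email", "draft"))  # True
print(is_allowed("email", "send"))   # False
```

The point of the sketch is the default-deny shape: an unknown surface or an unlisted action is blocked unless someone deliberately adds it.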
Decide when humans must approve before an agent acts.
Clear rules for when a person must sign off before an agent sends, publishes, updates, imports, deletes, or escalates — so agents move work forward without quietly committing the team to actions no one reviewed.
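An approval gate of this kind reduces to a simple routing rule: certain action types always go to a human queue before execution. A minimal sketch, with the action names taken from the list above but the function and queue names invented for illustration:

```python
# Illustrative approval gate: actions in this set always require a
# human sign-off before an agent may execute them. The routing
# labels are invented for this sketch.
APPROVAL_REQUIRED = {"send", "publish", "update", "import", "delete", "escalate"}

def next_step(action: str) -> str:
    """Route an agent action straight through or to a human queue."""
    if action in APPROVAL_REQUIRED:
        return "queue_for_human_approval"
    return "execute"

print(next_step("draft"))   # execute
print(next_step("delete"))  # queue_for_human_approval
```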
Identify the AI surfaces already in use.
Personal AI accounts, unmanaged tools, browser extensions, meeting bots, coding assistants, and automation tools — surfaced into a simple inventory leadership can actually act on.
Create practical rules your team can follow.
A written agent SOP, a redaction checklist, a vendor review note, and a short approval matrix — sized for a small team, not an enterprise security program.
Built for small teams of 1–100 people moving from AI chat to AI actions.
Professional services
Founder-led firms whose work touches client confidentiality every day.
Boutique law firms
Small practices balancing privilege and confidentiality with AI-assisted drafting, research, and intake.
Accounting & bookkeeping
Firms handling client financials, returns, and books, where agents may begin touching reconciliation work.
Consulting firms
Boutique consultancies whose deliverables and client interviews increasingly pass through AI agents.
Marketing agencies
Creative and marketing studios connecting agents to inboxes, CMS, newsletters, and client comms.
Small SaaS & software
Engineering teams where Cursor, Copilot, and coding agents touch repos, fixtures, and customer data.
Operations leads
COOs and ops managers who own internal policy and now also own "what agents can actually do here."
AI-adopting SMEs
B2B service companies and small teams where agent adoption has scaled faster than process.
Start with an AI Data Hygiene Snapshot.
Before giving AI agents access to real systems, find out how your team already uses AI, what data may be exposed, which tools are unmanaged, and which workflows need basic rules.
The Snapshot is the entry diagnostic step inside the broader Agent Workflow Safety & Governance service. It produces a written, human-reviewed read on where you are today — so the governance work that follows is grounded in facts rather than assumptions.
Mini Snapshot (US$399) for a fast first read; Full Snapshot (from US$1,200) for teams that handle client or customer data and want a 30-day plan. Both are written deliverables with human review before delivery.
What the entry diagnostic actually looks like.
This is a fictional sample Snapshot excerpt designed to show the report structure, finding style, and level of detail. "Northbridge Legal Studio" is not a real firm. The findings and figures below are illustrative and not based on any real client or real confidential documents. The Snapshot sits at the front of the broader Agent Workflow Safety & Governance service — it produces the read on current AI usage that informs everything downstream.
The Snapshot is not an auto-generated AI report. It is a structured operational review.
The Mini AI Data Hygiene Snapshot is not an auto-generated AI report. It is produced through a structured operational review process — AI may assist drafting, but final delivery requires human QA. Every step has a defined input, a defined output, and a human in the loop before sign-off.
Structured intake
You complete a structured intake questionnaire about tools, accounts, agent surfaces, workflows, and where AI already touches client or customer work. No sensitive documents required.
Intake review
We review the intake for completeness, sensitive material, and missing context — and ask short follow-up questions only when the answer would change the findings.
Mapping to fixed risk dimensions
Your answers are mapped against a fixed set of AI workflow risk dimensions — tool usage, data exposure, permission patterns, approval gaps, vendor posture, and review practices — applied the same way for every team.
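"Applied the same way for every team" can be pictured as a fixed rubric: the same six dimensions, the same scale, the same band thresholds for every intake. A sketch under assumed scoring rules; the dimension names follow the text, but the 0-3 scale, weights, and band cutoffs are invented for illustration:

```python
# Illustrative fixed rubric: every team is scored on the same six
# dimensions (0-3 each) and banded by the same thresholds. The
# scale and cutoffs here are invented, not the real scoring model.
DIMENSIONS = [
    "tool_usage", "data_exposure", "permission_patterns",
    "approval_gaps", "vendor_posture", "review_practices",
]

def risk_band(scores: dict) -> str:
    """Sum the per-dimension scores and map the total to a band."""
    total = sum(scores.get(d, 0) for d in DIMENSIONS)
    if total <= 4:
        return "low"
    if total <= 10:
        return "moderate"
    return "elevated"

print(risk_band({"tool_usage": 1, "data_exposure": 2}))  # low
```

The fixed dimension list is what makes findings comparable across teams: two intakes with the same answers land in the same band.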
Controlled draft generation
A draft report is generated using a controlled report framework — fixed sections, fixed scoring model, fixed finding format. The framework constrains tone, structure, and what claims are allowed.
Human review & QA
A human reviewer checks the draft for unsupported claims, overstatements, missing disclaimers, scope creep, and practical usefulness — and rewrites anything that does not pass.
Delivery after QA
The final PDF is delivered only after human QA has signed off. If governance setup is the right next step, we plan it from the Snapshot's findings rather than starting again.
Four steps. From a free self-check to a full Agent Workflow Safety & Governance Setup.
The AI Data Hygiene Snapshot is the entry diagnostic, not the destination. It sits inside the broader Agent Workflow Safety & Governance service. The Self-Check gives a preliminary signal; the Mini Snapshot is the entry diagnostic; the Full Snapshot goes deeper; the Governance Setup is a governance design engagement built on the Snapshot's findings.
Free AI Risk Self-Check
A 5-minute first signal on where your team's AI usage stands today.
- 5-minute self-check
- Basic risk band
- Top likely risk areas
- Recommended next step
- No human review
- No custom written report
Mini AI Data Hygiene Snapshot
A lightweight, human-reviewed written read for 1–25 person teams.
- Short intake
- 5–8 page human-reviewed report
- 3–5 priority findings
- Provisional risk band
- Stop / restrict list
- Safe-to-continue uses
- Basic redaction checklist
- 7-day action list
Full AI Data Hygiene Snapshot
A deeper diagnostic with a 30-day plan, for teams handling client or customer data.
- Full intake review
- Operational risk score
- 6–10 findings
- AI tool inventory
- Data classification
- Prompt redaction checklist
- Draft AI usage policy
- Human review matrix
- 30-day remediation plan
- Client-facing AI use statement (when relevant)
Agent Workflow Safety & Governance Setup
Projects typically start around US$3,000 after a completed Snapshot. Final scope depends on systems, workflows, and approval needs.
Higher-level governance design — operating boundaries, approval gates, and SOPs in writing.
- Agent-ready workflow map
- Permission model
- Approval gates
- Operating SOPs
- Internal staff guidance
- Governance documents
- Handoff plan
- Working sessions with leadership
All prices in USD · Fixed-scope diagnostics · Custom scope for governance setup
Final scope may depend on team size, data sensitivity, workflow complexity, and agent-connected systems.
Clarity matters more than coverage. Here is what the work is not.
We work in a narrow band: the operational governance of how a small team safely connects AI agents to real workflows. For everything else, we will tell you who to talk to.
- Not legal advice
- Not cybersecurity certification
- Not penetration testing
- Not vulnerability scanning
- Not a SOC 2 assessment
- Not an ISO 27001 audit
- Not a GDPR / HIPAA / CCPA compliance opinion
- Not a breach investigation
- Not employee surveillance
- Not a review of actual confidential documents
- Not a review of employee AI chat histories
- Not a production system review
- Not a full vendor risk assessment
- Not a guarantee that AI use is safe
The questions teams ask before they start.
What does "Agent Workflow Safety & Governance" actually mean?
It means designing the operating boundaries for AI agents before they touch real systems. Concretely: which workflows agents handle, which data they can see, what they're allowed to read or write, and which actions need human approval. Agents are increasingly able to do real work — sending emails, updating CRM records, drafting newsletters, modifying code. Governance is the layer that decides what they should and shouldn't do on your behalf.
Do you need access to our systems or our API keys?
No. We do not request passwords, API keys, production access, or authenticated sessions in your environment. The work is built on structured intake answers and redacted workflow descriptions that your team controls. The whole point is to design agent boundaries — not to install ourselves inside them.
Why isn't this just a cybersecurity audit?
Cybersecurity audits look at network, infrastructure, vulnerabilities, and certifications. We look at the operational layer above that: what work agents are doing, which data they touch, and which actions need a human in the loop. A team can have strong cybersecurity and still hand an agent the keys to its inbox without rules. The two layers complement each other; we do not replace cybersecurity professionals and will say so.
What kinds of agent actions do you actually map?
Read, write, send, delete, update, and access — across the surfaces an agent might touch. Examples: reading Gmail threads, drafting replies, sending email, reading Drive folders, updating CRM records, modifying code in a repo, drafting Slack messages, broadcasting to channels, drafting newsletters, sending to subscriber lists. Each action gets a permission decision and, where appropriate, a human approval gate.
Do we need to upload client documents or chat histories?
No. We only need general descriptions and redacted workflow examples. Employee AI chat histories, full client documents, privileged materials, and confidential raw datasets are explicitly out of scope. If a question would normally require seeing sensitive content, we describe what would need to be confirmed and your team confirms it on their side.
Where does the AI Data Hygiene Snapshot fit in?
The Snapshot is the entry diagnostic. Before designing agent boundaries, it helps to know how the team already uses AI, which tools are unmanaged, and which workflows need basic rules right away. Many teams stop at the Snapshot and run with the 30-day plan themselves. Others use the Snapshot as the front door to a full Agent Workflow Safety & Governance Setup.
Is this legal advice or a compliance certification?
No. It is operational AI workflow safety and governance guidance. It is not legal advice, not a cybersecurity audit, not a SOC 2 or ISO 27001 assessment, and not a GDPR / HIPAA / CCPA compliance opinion. For those, we recommend qualified legal, privacy, or security professionals.
Is this suitable for law firms, accounting firms, or agencies?
Yes — especially if your team is starting to connect agents to inboxes, CRMs, or client-facing tools without clear data rules. Many of our typical buyers come from boutique law firms, accounting and bookkeeping firms, consulting firms, marketing agencies, and small SaaS teams.
Where are you based?
Alpaca Data Lab is a Taiwan-based independent AI workflow governance studio serving English-speaking small teams. The service is designed to be remote, structured, and low-risk — delivered over questionnaires, working sessions, and shared documents.
Can we move from Snapshot to Governance Setup later?
Yes. The Snapshot is the entry diagnostic. Teams that want to operationalize agent workflows — with permission models, approval gates, SOPs, and governance documents — can move into the Agent Workflow Safety & Governance Setup engagement at any time, using the Snapshot's findings as the starting point.
Ready to make your workflows agent-ready?
Start by finding where your current AI usage is safe, where it is risky, and what needs human approval before agents get more access. The Self-Check is free; the Snapshot starts at US$399; the full Agent Workflow Safety & Governance Setup is custom-scoped from the Snapshot's findings.