AI work standards library
This library defines enforceable standards for AI-assisted work artifacts.
What is an AI work standard
An AI work standard is a defined control set for how a work artifact is scoped, reviewed, approved, released, and evidenced when AI is part of the drafting process.[1][8] It governs decisions made by people during creation and release, not system internals.[1][10]
The standard applies to the artifact lifecycle:
- eligibility and scope
- required checks
- accountable review
- retained evidence
- corrective actions
- measured outcomes
Closed-loop model
Foryn uses a closed-loop model for governance: Control sets what must happen, Work enforces those requirements while the artifact is being produced, and Evidence records what actually happened, then feeds it back into standard updates.[1][2]
- Control defines the standard.
- Work enforces it in workflow.
- Evidence proves what happened.
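As a concrete illustration, the three planes can be modeled as two record types plus one enforcement function. This is a minimal Python sketch; the names (ControlRule, EvidenceEvent, work_gate) are illustrative assumptions, not part of the standard.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative types; the standard does not prescribe a schema.

@dataclass
class ControlRule:
    """Control plane: what must happen for an artifact class."""
    artifact_class: str
    required_checks: list[str]
    required_approver_role: str

@dataclass
class EvidenceEvent:
    """Evidence plane: what actually happened, tied to a revision."""
    artifact_id: str
    revision: str
    action: str        # e.g. "check_completed", "approved", "released"
    actor: str
    timestamp: datetime

def work_gate(rule: ControlRule, completed_checks: set[str],
              approver_role: str) -> bool:
    """Work plane: enforce the rule while the artifact is produced."""
    return (set(rule.required_checks) <= completed_checks
            and approver_role == rule.required_approver_role)
```

Evidence events emitted at the gate are the raw material for the standard updates the loop feeds on.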
Table of contents
- Scope and eligibility
  Defines which artifacts are in scope, who owns decisions, and when governed workflow is mandatory.
- Artifact classification and release
  Defines artifact classes, release thresholds, and release authority by impact level.
- Input governance and data handling
  Defines input constraints, sensitive data handling, and preprocessing controls before drafting.
- Work instructions and required checks
  Defines required work instructions and quality checks before a draft can move forward.
- Human review, approvals, and accountability
  Defines reviewer roles, approval rules, and accountability for release decisions.
- Evidence capture and audit readiness
  Defines what evidence must be retained and how to prepare audit-ready records.
- Exceptions, incidents, and corrective actions
  Defines exception handling, incident response triggers, and corrective action requirements.
- Measurement and improvement cycle
  Defines governance metrics, review cadence, and standard update mechanics.
How to use this library
Leaders can use this library as an operating baseline.
- Adopt: select the pages that define mandatory controls for your artifact types.
- Review: assign control owners and reviewer owners for each requirement.
- Measure: track required metrics monthly and open corrective actions when thresholds fail.
- Improve: update control definitions from evidence, not opinions.
Definition
A governed AI work artifact is any deliverable where AI-assisted drafting could change business, legal, regulatory, financial, workforce, or customer outcomes.[1][3] A standard is valid only if requirements are enforceable during work and testable after release.[1][9] Evidence is required to show who decided what, when, and on which revision.[4][7]
Thesis
Most governance failures come from ambiguous work controls, not missing policy text. Effective governance starts with enforceable artifact rules, mandatory human review behavior, and retained decision evidence. These pages provide the minimum control baseline leaders can audit and improve.[1][4][8]
What the standard requires
- Define in-scope artifact categories and required governance level per category.[1][8]
- Assign accountable owner and approving authority for each governed artifact class.[4][10]
- Require structured input constraints before drafting begins.[1][6]
- Require explicit review confirmation before release for in-scope artifacts.[2][3]
- Block release when mandatory checks or approvals are incomplete (see the release-gate sketch after this list).[4][9]
- Retain revision, review, and release evidence for the defined retention period.[4][7]
- Define incident triggers and response actions for governance failures.[5][12]
- Track governance KPIs and corrective action closure rates monthly.[1][11]
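The blocking requirement can be enforced mechanically rather than by convention. The sketch below assumes a hypothetical ReleaseRequest record; the check names and approver role are placeholders that your classification rules would supply.

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseRequest:
    artifact_id: str
    revision: str
    completed_checks: set[str] = field(default_factory=set)
    approvals: set[str] = field(default_factory=set)  # roles that signed off

# Placeholder values; real sets come from the artifact's classification.
REQUIRED_CHECKS = {"classification", "input_constraints", "review_confirmation"}
REQUIRED_APPROVER_ROLES = {"accountable_owner"}

def can_release(req: ReleaseRequest) -> tuple[bool, list[str]]:
    """Return (allowed, blocking_reasons). Release is blocked, not
    merely warned, when any mandatory check or approval is missing."""
    reasons = []
    missing_checks = REQUIRED_CHECKS - req.completed_checks
    if missing_checks:
        reasons.append(f"incomplete checks: {sorted(missing_checks)}")
    missing_approvals = REQUIRED_APPROVER_ROLES - req.approvals
    if missing_approvals:
        reasons.append(f"missing approvals: {sorted(missing_approvals)}")
    return (not reasons, reasons)
```

Returning the blocking reasons, not just a boolean, gives the Work plane a message to surface and the Evidence plane a record to retain.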
Reviewer checklist
- Verify the artifact is mapped to an approved in-scope category.
- Verify the accountable owner is assigned and active.
- Verify required input constraints are present and complete.
- Verify required checks were completed for the current revision.
- Verify required approver role signed off before release.
- Verify release conditions match artifact classification.
- Verify required evidence records are stored and retrievable.
- Verify exception handling path was not used without authorization.
- Verify incident log entry exists for any control failure.
- Verify metrics and control outcomes are included in monthly review.
Evidence to retain
- Artifact classification record with control level.
- Named accountable owner and approver assignment record.
- Revision history with timestamps and editor identity.
- Completed checklist record for mandatory checks.
- Review confirmation event tied to released revision.
- Release decision record with authority and timestamp.
- Exception record with justification and expiration.
- Incident ticket and corrective action tracking record.
- Monthly metrics snapshot and governance review minutes.
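One way to keep release decision records uniform and retrievable is a fixed, immutable record shape. A sketch follows; the seven-year retention default is an assumption standing in for whatever period your standard actually defines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class ReleaseDecisionRecord:
    """Immutable release decision evidence (illustrative shape)."""
    artifact_id: str
    revision: str          # the exact revision that was released
    decided_by: str        # named individual, not a shared account
    authority_role: str    # role that held release authority
    decided_at: datetime
    retain_until: datetime

def new_release_record(artifact_id: str, revision: str, decided_by: str,
                       authority_role: str,
                       retention_days: int = 7 * 365) -> ReleaseDecisionRecord:
    # Retention default is an assumption; use the period your standard sets.
    now = datetime.now(timezone.utc)
    return ReleaseDecisionRecord(artifact_id, revision, decided_by,
                                 authority_role, now,
                                 now + timedelta(days=retention_days))
```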
Common failure modes
- Scope drift: teams release artifacts that were never classified.
- Reviewer substitution: sign-off is performed by unauthorized roles.
- Checklist bypass: mandatory checks are marked complete without evidence.
- Evidence gaps: release occurs but decision record is missing.
- Exception misuse: temporary bypass becomes persistent behavior.
- Metrics blindness: teams do not track control failure trends.
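Two of these failure modes, checklist bypass and evidence gaps, can be detected mechanically from retained records. A minimal sketch over a hypothetical release-record shape:

```python
def find_control_failures(releases: list[dict]) -> list[str]:
    """Flag checks marked complete without backing evidence, and
    releases that shipped without a decision record."""
    failures = []
    for rel in releases:
        evidence = rel.get("evidence", {})  # hypothetical record shape
        for check in rel.get("checks_marked_complete", []):
            if check not in evidence:
                failures.append(
                    f"{rel['artifact_id']}: checklist bypass on '{check}'")
        if "release_decision" not in evidence:
            failures.append(
                f"{rel['artifact_id']}: missing release decision record")
    return failures
```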
Practical enterprise scenarios
- Context: HR policy memo, high urgency due to a legal deadline.
  Decision point: Can the memo be released the same day?
  Required checks: classification, policy alignment, legal review complete.
  Required approver: HR director plus legal reviewer.
  Evidence captured: revision log, legal review record, release approval record.
  Plane mapping: Control defines the approval pair, Work enforces the gate, Evidence stores the release trail.
- Context: Procurement vendor notice, medium urgency.
  Decision point: Is self-review enough for release?
  Required checks: template compliance, factual verification, audience check.
  Required approver: procurement manager.
  Evidence captured: checklist completion, manager sign-off, final version hash.
  Plane mapping: Control sets the manager requirement, Work enforces the assignment, Evidence records the sign-off.
- Context: Operations incident update to customers, high urgency.
  Decision point: Can the exception path allow an immediate send?
  Required checks: incident classification, approved exception policy, post-release review scheduled.
  Required approver: on-call incident commander.
  Evidence captured: exception authorization, release timestamp, post-incident corrective action ticket.
  Plane mapping: Control defines the exception policy, Work enforces the conditional path, Evidence captures the rationale.
- Context: Internal process guide, low urgency.
  Decision point: Is manager review mandatory?
  Required checks: classification as internal low impact, self-review checklist complete.
  Required approver: none beyond the named author, if policy allows.
  Evidence captured: classification record, self-review confirmation, release event.
  Plane mapping: Control defines the low-impact rule, Work enforces the checklist, Evidence logs the self-review.
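The four scenarios reduce to a classification table that the Work plane can enforce directly. The class names, check names, and approver roles below are illustrative, distilled from the scenarios above:

```python
# Hypothetical classification table; every name here is an assumption.
ARTIFACT_CLASSES = {
    "hr_policy_memo": {
        "impact": "high",
        "required_checks": ["classification", "policy_alignment",
                            "legal_review"],
        "required_approvers": ["hr_director", "legal_reviewer"],
    },
    "procurement_vendor_notice": {
        "impact": "medium",
        "required_checks": ["template_compliance", "factual_verification",
                            "audience_check"],
        "required_approvers": ["procurement_manager"],
    },
    "customer_incident_update": {
        "impact": "high",
        "required_checks": ["incident_classification", "exception_policy",
                            "post_release_review_scheduled"],
        "required_approvers": ["incident_commander"],
    },
    "internal_process_guide": {
        "impact": "low",
        "required_checks": ["classification", "self_review_checklist"],
        "required_approvers": [],  # author self-release if policy allows
    },
}
```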
Plane mapping
- Control Plane: Define artifact classes, review thresholds, approval authority, retention, and metrics.
- Work Plane: Enforce mandatory steps, checks, role rules, and release gates during artifact production.
- Evidence Plane: Record revisions, decisions, approvals, exceptions, incidents, and metric outcomes.
FAQs
What is governed here, the model or the artifact?
The artifact lifecycle is governed. The standard controls human decisions before release.[1][10]
Can low-risk artifacts use lighter review?
Yes, if classification rules define lighter review and evidence requirements. The rule must be explicit and auditable.[1][11]
Do standards apply only to external communications?
No. Internal artifacts can also create risk. Scope rules should include internal high-impact artifacts.[1][3]
What makes a requirement enforceable?
It has a clear condition, a required actor, and observable evidence. If it cannot be verified, it is not enforceable.[4][7]
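In practice, that triple (condition, required actor, observable evidence) can be captured as a single rule object. A sketch with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Requirement:
    """A requirement is enforceable only if all three parts are present."""
    condition: str                           # when the rule applies
    required_actor: str                      # role that must act
    evidence_check: Callable[[dict], bool]   # how compliance is verified

# Illustrative instance: legal review on HR policy memos.
legal_review = Requirement(
    condition="artifact_class == 'hr_policy_memo'",
    required_actor="legal_reviewer",
    evidence_check=lambda record: record.get("legal_review_complete") is True,
)
```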
How often should standards be updated?
Set a fixed governance review cadence and update when evidence shows control gaps. Quarterly is common for enterprise programs.[1][2]
References
1. NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), https://www.nist.gov/itl/ai-risk-management-framework
2. NIST, AI RMF Playbook, https://airc.nist.gov/airmf-resources/playbook/
3. NIST, AI 600-1, Generative Artificial Intelligence Profile, https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
4. NIST, SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations, https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final
5. NIST, SP 800-61 Rev. 2, Computer Security Incident Handling Guide, https://csrc.nist.gov/pubs/sp/800/61/r2/final
6. NIST, SP 800-122, Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), https://csrc.nist.gov/pubs/sp/800/122/final
7. NIST, SP 800-86, Guide to Integrating Forensic Techniques into Incident Response, https://csrc.nist.gov/pubs/sp/800/86/final
8. ISO/IEC 42001:2023, https://www.iso.org/standard/81230.html
9. ISO/IEC 23894:2023, https://www.iso.org/standard/77304.html
10. ISO/IEC 38507:2022, https://www.iso.org/standard/56641.html
11. ISO 31000:2018, https://www.iso.org/iso-31000-risk-management.html
12. Regulation (EU) 2024/1689 (AI Act), https://eur-lex.europa.eu/eli/reg/2024/1689/oj