Key takeaways
- Classification is a shared language for product, marketing, sales, and compliance—make it explicit and living.
- Use a layered model: Market & Monetization → Data & Architecture → Trust & Performance.
- Anchor pricing to your unit of value (seat, event, token, transaction, outcome) to avoid adoption friction.
- Pick one primary category for H1/hero. Treat secondary use cases as bundles or add-ons to protect topical authority.
- Revisit quarterly or after any material change in pricing, autonomy, deployment, or target ICP.
What “classification” means—and why it matters
Classification is a concise definition of what you are and how you should be evaluated: the job you solve, who buys and uses you, how AI shows up in your product, how it’s deployed and integrated, and the trust envelope you operate in. When explicit, it drives:
- SEO & content: topical map, clusters, and internal links reflect your true primary category.
- GTM & sales: ICP targeting, demo narrative, proof points, and competitive framing.
- Roadmap & pricing: features and monetization tied to the unit of value customers feel.
- Risk & governance: clarity on autonomy, explainability, and buyer compliance needs.
The 12-factor classification framework
Group the criteria into three layers. Select what’s true today (not aspirational). The result becomes your canonical classification line.
Layer A — Market & Monetization Fit
- Primary Use Case / JTBD: e.g., support deflection, sales enablement, forecasting, code review, FP&A, HR workflows.
- Buyer Persona & ICP: who signs vs. who uses; SMB/mid/enterprise; business vs. technical buyer.
- Industry Focus: horizontal (cross-industry) or vertical (e.g., healthcare, fintech, retail); note domain constraints.
- Pricing Model & Unit of Value: seat, usage (tokens/calls/events), tiered hybrid, or outcomes-based; align to perceived value.
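To make the pricing choice concrete, here is a minimal sketch of a seat-plus-usage hybrid bill. The seat price, tier boundaries, and per-event rates are illustrative assumptions, not benchmarks; the point is that the metered unit should be the one customers feel.

```python
# Illustrative seat + usage hybrid: all rates and tiers are hypothetical.
USAGE_TIERS = [  # (events covered up to this ceiling, price per event)
    (100_000, 0.0010),
    (1_000_000, 0.0007),
    (float("inf"), 0.0004),
]

def monthly_bill(seats: int, events: int, seat_price: float = 49.0) -> float:
    """Seat fee plus tiered usage; each tier prices only the events inside it."""
    total = seats * seat_price
    floor = 0
    for ceiling, rate in USAGE_TIERS:
        in_tier = max(0, min(events, ceiling) - floor)
        total += in_tier * rate
        floor = ceiling
        if events <= ceiling:
            break
    return round(total, 2)

print(monthly_bill(seats=20, events=250_000))
# 20*49 + 100k*0.001 + 150k*0.0007 = 1185.0
```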
Layer B — Data & Architecture
- AI Capability Level: descriptive → predictive → prescriptive → generative (or multi-modal). State the dominant mode.
- Autonomy & Human-in-the-Loop: assistive (suggest), semi-autonomous (approve), autonomous (act); define guardrails for each (see the sketch after this list).
- Data Modality & Source: text, tabular, image/video, audio, events; first-party, third-party, or synthetic; batch vs. streaming.
- Deployment & Integration: multi-tenant cloud by default, private cloud/VPC/on-prem for regulated buyers, or edge; plus APIs, SDKs, and connectors.
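As a sketch of what "define guardrails" can look like in code, here is a hypothetical autonomy policy (referenced from the Autonomy bullet above). Every name, level, and threshold is an illustrative assumption, not a standard.

```python
# Hypothetical autonomy policy; names, levels, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class AutonomyPolicy:
    level: str               # "assistive" | "semi_autonomous" | "autonomous"
    confidence_floor: float  # below this, always route to a human
    requires_approval: bool  # human sign-off before any action

POLICIES = {
    "suggest": AutonomyPolicy("assistive", confidence_floor=1.0, requires_approval=True),
    "approve": AutonomyPolicy("semi_autonomous", confidence_floor=0.80, requires_approval=True),
    "act": AutonomyPolicy("autonomous", confidence_floor=0.95, requires_approval=False),
}

def route(confidence: float, policy: AutonomyPolicy) -> str:
    """Decide whether the AI only suggests, queues for approval, or acts."""
    if policy.level == "assistive" or confidence < policy.confidence_floor:
        return "suggest_to_human"
    if policy.requires_approval:
        return "queue_for_approval"
    return "execute_with_audit_log"

print(route(0.90, POLICIES["approve"]))  # queue_for_approval
print(route(0.90, POLICIES["act"]))      # suggest_to_human (below the 0.95 floor)
```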
Layer C — Trust, Governance & Performance
- Security & Privacy: tenant isolation, encryption, secrets management, data retention, incident response.
- Compliance & Policy: map to regimes (SOC 2, ISO 27001, HIPAA, GDPR, etc.) and your AI risk controls.
- Explainability & Observability: evaluation harnesses, prompt/model versioning, bias/robustness tests, lineage, audit logs (a minimal harness sketch follows this list).
- SLAs & Performance: availability, latency targets, quality KPIs, and graceful fallbacks (human takeover).
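For the evaluation-harness idea above, here is a minimal sketch: a golden set of inputs scored against a versioned model. The scoring rule (substring match) and version tags are illustrative assumptions; real harnesses typically use richer graders.

```python
# Minimal evaluation-harness sketch: golden examples scored against a versioned model.
GOLDEN_SET = [
    {"input": "How do I reset my password?", "must_contain": "reset link"},
    {"input": "What is your refund policy?", "must_contain": "30 days"},
]

def evaluate(answer_fn, model_version: str) -> dict:
    """Run the golden set through the model and log a versioned pass rate."""
    passes = sum(
        case["must_contain"].lower() in answer_fn(case["input"]).lower()
        for case in GOLDEN_SET
    )
    return {"model_version": model_version, "pass_rate": passes / len(GOLDEN_SET)}

# Usage with a stub model; swap in your real inference call.
print(evaluate(lambda q: "We emailed you a reset link. Refunds within 30 days.", "v1.2.0"))
```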
Pro tip: If you span multiple categories, choose one primary for your H1, marketplace listing, and nav. Position others as add-ons or bundles to avoid diluted relevance signals.
Copy-ready scorecard template
Paste this into Notion/Docs and score each factor from 0-3 (0 = undefined, 3 = crisp and evidenced). Revisit quarterly.
| Criterion | Your selection | Evidence / KPI | Score (0-3) |
|---|---|---|---|
| Use Case / JTBD | ________ | Top 1–2 jobs + outcome metric | _ |
| Buyer Persona & ICP | ________ | Titles, segment, ACV band | _ |
| Industry Focus | ________ | Horizontal / named vertical | _ |
| Pricing & Unit of Value | ________ | Seat/usage/hybrid/outcomes | _ |
| AI Capability Level | ________ | Descriptive/predictive/etc. | _ |
| Autonomy Level | ________ | Assistive/approve/act + guardrails | _ |
| Data Modality & Source | ________ | Text/tabular/etc.; 1P/3P | _ |
| Deployment & Integration | ________ | Cloud/VPC/on-prem; APIs/SDKs | _ |
| Security & Privacy | ________ | Controls + attestations | _ |
| Compliance & Policy | ________ | Regimes + evidence | _ |
| Explainability & Observability | ________ | Evals, logs, lineage | _ |
| SLAs & Performance | ________ | Availability, latency, quality | _ |
Interpretation: 30–36 = market-ready and crisp; 22–29 = competitive but tighten messaging; ≤21 = revisit primary category and unit of value.
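Because the interpretation bands are simple arithmetic over twelve 0–3 scores (maximum 36), the scorecard is easy to automate; a minimal sketch, with criterion names abbreviated:

```python
# Total the twelve 0-3 factor scores and map them to the bands above.
def interpret(scores: dict[str, int]) -> str:
    assert len(scores) == 12 and all(0 <= s <= 3 for s in scores.values())
    total = sum(scores.values())  # maximum possible: 36
    if total >= 30:
        return f"{total}/36: market-ready and crisp"
    if total >= 22:
        return f"{total}/36: competitive, but tighten messaging"
    return f"{total}/36: revisit primary category and unit of value"

example = {
    "use_case": 3, "icp": 3, "industry": 2, "pricing": 2,
    "ai_level": 3, "autonomy": 2, "data": 2, "deployment": 2,
    "security": 3, "compliance": 2, "explainability": 1, "slas": 2,
}
print(interpret(example))  # 27/36: competitive, but tighten messaging
```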
Three worked examples
1) AI Support Copilot (horizontal)
- Use case: deflection & agent assist
- Persona: VP Support (buyer), agents (users), SMB–mid
- Pricing: seat + usage (messages)
- AI level: generative + retrieval; assistive
- Data: text (help center, ticket history)
- Deployment: multi-tenant; Zendesk/Freshdesk apps + APIs
- Trust: SOC 2, audit logs, response citations
- SLA: P95 latency, answer quality benchmark
2) AI Demand Forecaster (vertical: retail)
- Use case: predictive inventory and pricing
- Persona: VP Merchandising; enterprise
- Pricing: outcomes-linked per location/SKU band
- AI level: predictive → prescriptive; semi-autonomous
- Data: tabular time-series (POS, promo, weather)
- Deployment: private cloud/VPC; ERP connectors
- Trust: explainers, bias checks across stores
- SLA: MAPE (mean absolute percentage error) and stability across seasonality
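For readers new to the acronym, MAPE is the average absolute percentage gap between forecast and actuals; a minimal sketch (the guard against zero actuals is a common convention, not part of the definition):

```python
# Mean absolute percentage error: average of |actual - forecast| / |actual|, as a percent.
def mape(actuals: list[float], forecasts: list[float]) -> float:
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]  # skip zero actuals
    return 100 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

print(mape([100, 200, 400], [110, 190, 380]))  # (10% + 5% + 5%) / 3 ≈ 6.67
```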
3) AI Code Review Assistant (developer tooling)
- Use case: PR review, refactor hints, security patterns
- Persona: VPE/Staff Engineers; mid-market–enterprise
- Pricing: seat with token caps + enterprise SSO add-on
- AI level: generative + static analysis; assistive
- Data: repo-scoped; no training on customer IP
- Deployment: cloud + on-prem gateway for regulated buyers
- Trust: policy-based redaction, audit logs
- SLA: latency by file size; secure-pattern recall
Classify your product in 30 minutes (checklist)
- Define the job: write a one-line JTBD and the primary outcome metric.
- Lock the ICP: name 1–2 buyer titles, segment (SMB/mid/enterprise), and typical ACV.
- Choose primary category: the simplest, most defensible label customers already search for.
- Pick unit of value: seat, usage, or outcome—whichever best tracks perceived value.
- State AI mode & autonomy: descriptive/predictive/prescriptive/generative and assist/approve/act.
- Document data + deployment: modality, sources, and where it runs (cloud/VPC/on-prem/edge).
- List trust controls: security, compliance, explainability, observability.
- Draft your canonical classification line: “[Primary category] for [ICP] that [job/outcome], priced by [unit of value], [AI mode], [autonomy], deployed [model], with [key trust control].”
- Reflect in SEO: set H1, revise nav labels, update clusters and internal links to mirror the primary category.
- Review quarterly: rerun the scorecard; adjust if pricing/autonomy/ICP shifts.
Example canonical line: “AI support copilot for SMB service teams that deflects repetitive tickets, priced by messages, powered by generative + retrieval AI, assistive with human approval, deployed as multi-tenant SaaS, with SOC 2 and audit logs.”
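If you keep the classification as structured fields (in a CMS, a doc, or a repo), the canonical line can be assembled rather than hand-edited; a minimal sketch using the example's values:

```python
# Assemble the canonical classification line from structured fields.
fields = {
    "category": "AI support copilot",
    "icp": "SMB service teams",
    "job": "deflects repetitive tickets",
    "unit_of_value": "messages",
    "ai_mode": "generative + retrieval AI",
    "autonomy": "assistive with human approval",
    "deployment": "multi-tenant SaaS",
    "trust": "SOC 2 and audit logs",
}

template = (
    "{category} for {icp} that {job}, priced by {unit_of_value}, "
    "powered by {ai_mode}, {autonomy}, deployed as {deployment}, with {trust}."
)
print(template.format(**fields))
```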
Common mistakes (and quick fixes)
- AI-washing: claiming “AI-native” when automation is rule-based. Fix: state the real AI mode and where it drives outcomes.
- Unit-value mismatch: charging per seat when value is API volume. Fix: align pricing metric to felt value.
- Persona sprawl: “for everyone” messaging. Fix: pick one buyer and one primary user; build proof around them.
- Multi-category dilution: mixing labels across pages. Fix: set a primary category and reflect it everywhere.
- Hidden risk posture: no clarity on autonomy, data handling, or controls. Fix: publish a simple trust page and link to it.
FAQs
What are the essential AI SaaS product classification criteria?
Twelve factors across three layers: market & monetization (use case, persona, industry, pricing), data & architecture (AI mode, autonomy, data, deployment), and trust & performance (security, compliance, explainability, SLAs).
How does classification impact SEO?
It dictates your topical map, internal linking, and semantic signals. A single primary category concentrates authority and improves relevance for competitive queries.
Horizontal vs. vertical—which ranks faster?
Vertical products often win niche, high-intent long-tails; horizontal tools can build broad authority. Choose based on ICP concentration and proof you can publish.
How often should I revisit my classification?
Quarterly, or after any change in pricing, autonomy, deployment, or ICP. Use the scorecard to keep it objective.
Glossary
- JTBD: “Jobs-to-be-Done”; the fundamental task users hire your product to accomplish.
- Unit of value: the metric customers equate with value (seat, token, call, event, outcome).
- Autonomy level: assistive, semi-autonomous, autonomous—how independently AI acts.
- Topical map: the cluster of pages and internal links that signal your primary category.