OAIRA
AI Integration

AI-Powered Research Intelligence

Built for teams who want AI acceleration without losing control of their data or methodology.

OAIRA integrates AI across every stage of the research lifecycle: from survey design and pre-deployment validation to autonomous interviewing, deep research synthesis, and streaming report generation. The AI handles methodology; you keep full editorial control.

AI Capabilities

OAIRA is AI-enabled across ten distinct workflows, each addressing a real bottleneck in the research cycle — from design to delivery.

1

Conversational Survey Design

Describe your research goal in natural language. OAIRA scores your goal against eight professional methodologies and recommends the best fit — then walks you through a methodology-specific guided workflow to collect the context needed for generation.

"We want to understand why enterprise customers churn in the first 90 days." → Recommended: Jobs-to-be-Done → 6-step guided workflow → complete survey generated

The eight methodologies are: Jobs-to-be-Done, User Journey Mapping, Gap Analysis, Hypothesis Testing, Comparative Analysis, Sentiment and Opinion, Audience Segmentation, and Exploratory Discovery. Each has its own step engine, generators, and analysis pipeline.
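As a rough sketch of the "score then recommend" shape of this step (the real engine is LLM-driven; the keyword profiles and scoring below are purely illustrative assumptions):

```typescript
// Hypothetical sketch: rank a research goal against methodology keyword
// profiles. Illustrates the score-then-recommend flow, not OAIRA's internals.
type Methodology =
  | "jobs-to-be-done" | "user-journey" | "gap-analysis" | "hypothesis-testing"
  | "comparative" | "sentiment" | "segmentation" | "exploratory";

const profiles: Record<Methodology, string[]> = {
  "jobs-to-be-done": ["why", "churn", "switch", "hire", "outcome"],
  "user-journey": ["onboarding", "stage", "journey", "touchpoint"],
  "gap-analysis": ["importance", "satisfaction", "gap"],
  "hypothesis-testing": ["hypothesis", "test", "validate"],
  comparative: ["compare", "versus", "alternative"],
  sentiment: ["feel", "opinion", "sentiment"],
  segmentation: ["segment", "audience", "persona"],
  exploratory: ["explore", "discover", "open-ended"],
};

// Return methodologies ranked by keyword overlap with the stated goal.
function recommend(goal: string): { methodology: Methodology; score: number }[] {
  const text = goal.toLowerCase();
  return (Object.keys(profiles) as Methodology[])
    .map((m) => ({
      methodology: m,
      score: profiles[m].filter((k) => text.includes(k)).length,
    }))
    .sort((a, b) => b.score - a.score);
}
```

Running the churn example from above through this toy scorer would surface Jobs-to-be-Done first, mirroring the recommendation flow.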

2

Simulation — Validate Before You Deploy

Run your survey against a pool of AI personas (10–500) before sending it to real respondents. Each persona is called with its full profile and the survey's branching logic; responses are stored with the same schema as real submissions.

  • Divergence detection: synthetic vs real response distribution compared automatically
  • Cost transparency: estimated and actual token usage shown before and after
  • Full analytics pipeline runs on simulated data — same charts, same methodology analysis
  • Reusable persona pools: simulate against the same audience repeatedly
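One way to picture divergence detection is as a distance between answer distributions. The sketch below uses total variation distance; OAIRA's actual metric is not specified here, so treat this as an illustration of the comparison, not the implementation:

```typescript
// Illustrative only: compare synthetic vs real answer distributions for one
// question using total variation distance (0 = identical, 1 = disjoint).
function distribution(answers: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  for (const [k, v] of counts) counts.set(k, v / answers.length);
  return counts;
}

function divergence(synthetic: string[], real: string[]): number {
  const p = distribution(synthetic);
  const q = distribution(real);
  const keys = new Set([...p.keys(), ...q.keys()]);
  let d = 0;
  for (const k of keys) d += Math.abs((p.get(k) ?? 0) - (q.get(k) ?? 0));
  return d / 2; // total variation distance
}
```

A question whose divergence exceeds a chosen threshold would be flagged for review before trusting the simulation.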

3

Autonomous AI Interviewer

Deploy a conversational AI agent that conducts open-ended qualitative interviews against your research brief. The agent decides in real time whether to probe deeper, transition to the next topic, or wrap up — based on coverage tracking across your question set.

  • Adaptive probing: if a respondent answers vaguely, the agent asks for an example
  • Configurable style: structured, balanced, exploratory, or deep dive
  • Configurable persona: UX researcher, academic, journalist, casual
  • Structured extraction: conversational transcript converted to structured survey responses

Public interview URL auto-generated. Voice mode available in lab.
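The probe / transition / wrap-up decision can be sketched as a function of coverage and answer depth. The state fields and thresholds below are assumptions for illustration; the real agent reasons with a model, not a word count:

```typescript
// Hedged sketch of the interviewer's per-turn decision, assuming a simple
// coverage model and a crude "depth" signal for the latest answer.
type Decision = "probe" | "transition" | "wrap-up";

interface TurnState {
  topicsCovered: number;   // brief topics discussed with sufficient depth
  topicsTotal: number;
  lastAnswerWords: number; // stand-in depth signal (illustrative only)
}

function nextMove(s: TurnState): Decision {
  if (s.topicsCovered >= s.topicsTotal) return "wrap-up";
  // A very short answer suggests probing for a concrete example first.
  if (s.lastAnswerWords < 15) return "probe";
  return "transition";
}
```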

4

Deep Research Pipeline (8 Phases)

An orchestrated multi-phase AI workflow for synthesizing findings from uploaded documents. Upload source files; the pipeline extracts, models, synthesizes, validates, and finalizes a confidence-scored research artifact.

1. Planning
2. Discovery
3. Triage
4. Extraction
5. Modeling
6. Synthesis
7. Validation
8. Finalization
  • Semantic search via pgvector — retrieves relevant chunks across 10+ documents
  • Citation tracking — every finding traces back to source document(s)
  • Contradiction detection — claims that differ across sources are flagged
  • Streaming chat — ask questions while research is in progress
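The eight phases above run as an ordered sequence over shared run state. A minimal sketch of that orchestration shape (retries, persistence, and the actual per-phase work are omitted; the handler signature is an assumption):

```typescript
// Sketch of a strictly sequential phase runner using the phase names above.
type Phase =
  | "planning" | "discovery" | "triage" | "extraction"
  | "modeling" | "synthesis" | "validation" | "finalization";

const PHASES: Phase[] = [
  "planning", "discovery", "triage", "extraction",
  "modeling", "synthesis", "validation", "finalization",
];

interface RunState {
  completed: Phase[];
  artifacts: Record<string, unknown>; // each phase reads/writes shared output
}

function runPipeline(handlers: Record<Phase, (s: RunState) => void>): RunState {
  const state: RunState = { completed: [], artifacts: {} };
  for (const phase of PHASES) {
    handlers[phase](state);      // phase does its work against shared state
    state.completed.push(phase); // strict in-order progression
  }
  return state;
}
```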

5

AI-Authored Branching Logic

Survey branching is rule-based and deterministic — each branch condition is evaluated at response time, not by a live model call. The AI's role is to generate those rules during survey creation, encoding methodological intent as skip conditions.

JTBD: "If satisfaction ≥ 8, skip the switching intent section — high satisfaction respondents won't have meaningful switching data."

Branching hints are embedded in question metadata during generation. Users can also define rules visually in the builder.
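Deterministic evaluation of a rule like the JTBD example above might look like this. The rule schema (`questionId`, `operator`, `value`, `skipToSection`) is a hypothetical shape for illustration, not OAIRA's stored metadata format:

```typescript
// Sketch: evaluate branching rules against answers at response time.
// No model call happens here — the rules were authored at creation time.
interface BranchRule {
  questionId: string;
  operator: ">=" | "<=" | "==";
  value: number;
  skipToSection: string;
}

function resolveSkip(
  rules: BranchRule[],
  answers: Record<string, number>,
): string | null {
  for (const r of rules) {
    const a = answers[r.questionId];
    if (a === undefined) continue;
    const hit =
      (r.operator === ">=" && a >= r.value) ||
      (r.operator === "<=" && a <= r.value) ||
      (r.operator === "==" && a === r.value);
    if (hit) return r.skipToSection;
  }
  return null;
}
```

With a rule of "satisfaction ≥ 8 skips the switching section", a 9 skips ahead and a 5 continues normally.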

6

Methodology-Specific Analysis

As responses arrive, the analysis engine computes methodology-appropriate metrics automatically. No configuration needed — the methodology selected at creation time determines which analysis runs.

  • JTBD: Ulwick opportunity scores (importance × satisfaction gap)
  • User Journey: Friction rates per stage (% rating ≤ 4)
  • Gap Analysis: Importance–satisfaction gap pairs ranked by priority tier
  • Exploratory: Keyword frequency with stop-word filtering, top 8 themes

AI analytics chat lets you ask natural language questions against live data: "What's the biggest friction point in the onboarding stage?"
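Two of the metrics above can be sketched concretely. The opportunity score below follows the classic Ulwick formulation (importance + max(importance − satisfaction, 0)); OAIRA's exact weighting may differ, so treat both functions as illustrations:

```typescript
// Illustrative metric implementations; not necessarily OAIRA's exact weightings.
function opportunityScore(importance: number, satisfaction: number): number {
  // Classic Ulwick form on 0-10 scales: reward important, underserved outcomes.
  return importance + Math.max(importance - satisfaction, 0);
}

function frictionRate(stageRatings: number[]): number {
  // User Journey friction: share of respondents rating the stage 4 or below.
  const frictions = stageRatings.filter((r) => r <= 4).length;
  return stageRatings.length ? frictions / stageRatings.length : 0;
}
```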

7

Streaming Report Generation

Generate research reports section by section with streaming output — you see the text being written in real time. Claude synthesizes quantitative and qualitative data into coherent narratives with embedded charts.

Export to PDF, PowerPoint, Excel, HTML, or DOCX. Reports include charts rendered to static images for offline formats.
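On the consuming side, report text arrives as prefixed protocol lines (the `0:` text deltas noted under Technical Details). A minimal sketch of reassembling those deltas into the final section text, assuming one `type:payload` pair per line:

```typescript
// Sketch: assemble streamed report text from data stream protocol lines.
// "0:" lines carry JSON-encoded text deltas; other types are skipped here.
function assembleText(lines: string[]): string {
  let out = "";
  for (const line of lines) {
    const sep = line.indexOf(":");
    const type = line.slice(0, sep);
    const payload = line.slice(sep + 1);
    if (type === "0") out += JSON.parse(payload); // text delta
  }
  return out;
}
```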

8

Assessment and Teaching Mode

Surveys can operate as scored assessments. Each question carries a correct answer, scoring weight, and one or more follow-on actions that fire when response conditions are met — all matching conditions fire, not just the first.

  • ai_generate: Claude generates a personalized explanation based on the specific answer given
  • lesson: Show an instructional text card
  • video / resource: Link to video or external learning material
  • function_call: Trigger a webhook with respondent data

Session summaries show per-respondent scores, correct/incorrect breakdowns, and aggregate pass/fail rates.
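The "all matching conditions fire" behaviour can be sketched directly, using the action types listed above (the condition shape is an illustrative assumption):

```typescript
// Sketch: fire every follow-on action whose condition matches the response —
// not just the first. Action type names come from the feature list above.
interface FollowOn {
  type: "ai_generate" | "lesson" | "video" | "resource" | "function_call";
  when: (answer: string, correct: boolean) => boolean;
}

function fire(actions: FollowOn[], answer: string, correct: boolean): string[] {
  return actions.filter((a) => a.when(answer, correct)).map((a) => a.type);
}
```

An incorrect answer with two remediation actions configured triggers both, e.g. a personalized explanation and a lesson card together.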

9

Dashboard Intelligence

The admin home surfaces AI-generated research briefings across all active surveys, simulations, and research runs — without requiring you to navigate into each one. Low response rates, simulation divergences, and stalled pipelines are surfaced automatically as concise briefings.

10

API, CLI and MCP Integration

The full platform surface is accessible programmatically. Connect OAIRA's AI capabilities to your existing workflows via REST, terminal, or AI agent.

  • Trigger survey generation from your product, CI pipeline, or Slack
  • Run simulations programmatically and pull results into your data warehouse
  • Expose OAIRA as tools in Claude Desktop, Cursor, or VS Code via MCP at /api/mcp
  • Autonomous agent workflows: brief → select survey → simulate → analyse → report, unattended
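As a sketch of what a programmatic call might look like, the snippet below builds a request to trigger a simulation. The endpoint path and payload fields are hypothetical — consult the OpenAPI spec at /api/docs for the real schema; only the bearer token auth matches the Security notes below:

```typescript
// Hypothetical request builder for triggering a simulation over REST.
// Path and payload fields are illustrative, not OAIRA's documented API.
interface ApiRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildSimulationRequest(
  baseUrl: string,
  token: string,
  surveyId: string,
  personaCount: number,
): ApiRequest {
  return {
    url: `${baseUrl}/api/surveys/${surveyId}/simulations`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // bearer token auth, per Security notes
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ personaCount }),
  };
}
```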

Your Control, AI's Speed

Data Sovereignty

Your survey data stays in your Supabase instance. Row-level security policies enforce per-organisation isolation at the database layer. No third-party storage, no vendor lock-in.

LLM Vendor Preference

OAIRA uses Anthropic Claude Sonnet as its primary model. A vendor preference setting in admin allows switching the model used for research chat and deep research pipelines. OpenAI is used for image generation, embeddings, and TTS.

Human-in-the-Loop

Every AI-generated artifact is editable. Review, refine, or override any survey, question, report, or analysis output. AI accelerates; you approve.

Technical Details

AI Stack: Anthropic Claude Sonnet 4 for generation, analysis, and interviewing. Vercel AI SDK for streaming (data stream protocol: 0: text, 9: tool calls, a: tool results, d: finish). OpenAI for embeddings, image generation, and TTS.

Architecture: Next.js 16 App Router with Edge Runtime for low-latency responses. Supabase PostgreSQL with RLS and pgvector for semantic search. TypeScript strict mode with Zod validation on all inputs.

Security: Bearer token authentication, per-organisation row-level security policies, environment-based secrets, Sentry error tracking, Pino structured logging with per-request UUIDs.

Integration Points: REST API (OpenAPI 3.0 at /api/docs), CLI (@worksona/oaira-cli), HTTP MCP server (/api/mcp).