Free Tool · Full Guide · 3 Steps

Health Storyboard AI —
One Word In. Full Production Package Out.

A free, self-hostable AI tool that takes any health object — a fruit, vegetable, supplement — and produces a complete viral short-form video production package: topic ideas, narration script, Pixar-style character brief, and a scene-by-scene storyboard with image and animation prompts. Built with Next.js 14 and Groq.

TrafCodes · Next.js 14 · TypeScript · Groq AI · Open Source · Free

📥 Download Health Storyboard AI — Full Source Code

Next.js 14 · TypeScript · Tailwind · Groq SDK · Deploy to Vercel in 5 minutes

Download Free · Live Demo ↗

What Is Health Storyboard AI?

Health Storyboard AI is a Next.js web app that automates the entire pre-production pipeline for health content videos. You type a single word — a health object like "Spinach" or "Orange" — and the AI produces everything you need to go from blank page to storyboard-ready production package.

Under the hood it uses the Groq SDK with llama-3.3-70b-versatile for ultra-fast inference. Every generation — from topic ideas all the way to the full storyboard — completes in seconds, not minutes. The entire pipeline makes exactly three structured API calls, one per step.


One word in. Script, character brief, and scene-by-scene storyboard out. Zero blank-page problem.

How It Works — 3 Steps

The tool is a linear 3-step wizard. Each step produces output that feeds directly into the next — forming a coherent, consistent production pipeline from idea to fully-prompted storyboard.

Step 1 — Topic Selection
Type a health object → choose a viral topic
Enter a health object (fruit, vegetable, supplement). The AI generates 10 viral short-video topic ideas written in first-person from the object's perspective. You pick one — it becomes the creative brief for the entire production. An optional Affiliate CTA toggle adds a monetization hook to the narration.
Step 2 — Script & Character
Selected topic → narration + Pixar character brief
The moment a topic is selected, a single API call produces three outputs at once: a 5–6 paragraph narration (TTS-ready), a character image prompt for Midjourney or DALL·E, and a structured character sheet JSON for AI video tools that support character locking (Kling AI, Runway Gen-3).
Step 3 — Storyboard
Character + narration → full scene storyboard
The narration is split into 1–5 scenes (user chooses). Each scene gets: a narration excerpt, a cinematic text-to-image prompt, and an image-to-video animation prompt with camera motion and lip-sync instructions. All scene prompts are built on the same character from Step 2, keeping the visuals consistent across every frame.

Step 1 — Topic Generation

The user types a health object and clicks Generate Ideas. A request fires to POST /api/generate with type: "topics". The model is prompted to act as a viral health content creator and responds with a clean JSON array — no backticks, no preamble, no extra text.
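The prompt assembly for this step can be sketched in TypeScript. This is only an illustration — the function name and the exact prompt wording are assumptions, not the project's actual code:

```typescript
// Hypothetical sketch of how the Step 1 request might be turned into
// chat messages. Names and wording here are illustrative assumptions.
interface TopicsRequest {
  type: "topics";
  input: string; // health object, e.g. "Spinach"
  cta: boolean;  // affiliate CTA toggle
}

function buildTopicsMessages(req: TopicsRequest) {
  const system =
    "You are a viral health content creator. " +
    "Respond with a clean JSON array only — no backticks, no preamble, no extra text.";
  const user =
    `Generate 10 viral short-video topic ideas about "${req.input}", ` +
    `written in first person from the object's perspective.` +
    (req.cta ? " Leave room for an affiliate call-to-action." : "");
  return [
    { role: "system" as const, content: system },
    { role: "user" as const, content: user },
  ];
}
```

The messages array is then passed to the Groq SDK's chat completion call with the llama-3.3-70b-versatile model.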

Request

JSON · Request — topics
// POST /api/generate — Step 1
{
  "type": "topics",
  "input": "Spinach",  // health object entered by user
  "cta": false         // affiliate CTA toggle
}

Response

JSON · Response — 10 topic ideas
// Response — array of 10 topic ideas
[
  {
    "title": "I Was Ignored for Years — Until Science Proved I'm a Superfood",
    "hook": "Spinach reveals the shocking truth about its own overlooked power"
  },
  {
    "title": "What Happens to Your Blood 10 Minutes After Eating Me",
    "hook": "..."
  },
  // ...8 more topics
]

Each topic object contains a title (the full video concept with hook) and a hook (one-sentence preview). Topics are displayed as a selectable list. Once the user picks one and clicks Create Script →, the selection is passed to Step 2 and a script generation request fires automatically.
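On the client, the topic list can be handled with a small typed parser. A minimal sketch, assuming a Topic shape that matches the response above:

```typescript
// Shape of one topic idea as returned by Step 1 (matches the sample response).
interface Topic {
  title: string; // full video concept with hook
  hook: string;  // one-sentence preview
}

// Parse the raw response body and drop anything that isn't a valid topic.
function parseTopics(raw: string): Topic[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("Expected a JSON array of topics");
  return data.filter(
    (t): t is Topic => typeof t?.title === "string" && typeof t?.hook === "string"
  );
}
```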

Tip: The more specific the health object, the more distinctive the output. "Purple sweet potato" produces far richer character designs and stronger emotional hooks than just "vegetable".

Step 2 — Script & Character Generation

This is the core of the engine. A single API call produces three interdependent outputs. The model is prompted to act as a viral health video scriptwriter and must respond with valid JSON only — no markdown fences, no preamble. The server runs the response through a stripJson() utility before parsing to handle any accidental formatting noise.
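The project's stripJson() implementation isn't reproduced here, but a utility like it typically trims markdown fences and any stray text around the outermost JSON value. A minimal sketch of the idea (an illustration, not the actual code):

```typescript
// Minimal sketch of a stripJson-style utility: removes markdown fences and
// any preamble/trailing text around the outermost JSON object or array.
// This is an illustrative assumption, not the project's implementation.
function stripJson(raw: string): string {
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  const start = cleaned.search(/[{[]/);
  const end = Math.max(cleaned.lastIndexOf("}"), cleaned.lastIndexOf("]"));
  if (start === -1 || end === -1) return cleaned; // nothing JSON-like found
  return cleaned.slice(start, end + 1);
}
```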

Request

JSON · Request — script
// POST /api/generate — Step 2
{
  "type": "script",
  "input": "Spinach",
  "topic": "I Was Ignored for Years — Until Science Proved I'm a Superfood",
  "cta": false
}

Response — three outputs in one call

JSON · Response — narration + character prompt + character sheet
{
  "narration": "I am Spinach. For decades, cartoon heroes credited me with giving them strength...",
  "characterPrompt": "Pixar-style anthropomorphic spinach leaf, deep emerald green, golden veins, large expressive eyes, confident hero pose, tiny cape, cinematic rim lighting, 4K...",
  "characterSheet": {
    "character_design_sheet": {
      "subject": "Spinach Hero",
      "reference": "Pixar / Inside Out visual style",
      "identity_retention": {
        "mode": "Strict Facial consistency mode"
      },
      "instructions": {
        "description": "Leaf body with face, deep green, golden vein highlights, large warm eyes, small mouth, confident expression..."
      }
    }
  }
}
  • 01
    narration

    5–6 short paragraphs written in first-person voice from the health object's perspective. Hook-first, short-form pacing. Paste directly into a TTS tool or hand to a voice actor. If the CTA toggle is on, the narration ends with an affiliate call-to-action that feels authentic because it stays in character.

  • 02
    characterPrompt

    A dense, Pixar-style image generation prompt in English. Covers appearance, expression, pose, lighting, and cinematic qualities. Ready to paste into Midjourney, DALL·E, or any image model. Optimized for multi-generation consistency.

  • 03
    characterSheet (JSON)

    A structured production schema following the character_design_sheet format. Contains subject, reference (visual style), identity_retention.mode: "Strict Facial consistency mode", and a detailed instructions.description. Paste the full sheet into Kling AI's character reference field for frame-to-frame identity locking.
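The full Step 2 response can be mirrored as a TypeScript type. The field names follow the sample response above; the interfaces themselves are an illustrative convenience, not part of the shipped codebase:

```typescript
// TypeScript mirror of the character_design_sheet schema shown above.
// Field names follow the sample response; the interfaces are illustrative.
interface CharacterSheet {
  character_design_sheet: {
    subject: string;                       // e.g. "Spinach Hero"
    reference: string;                     // visual style reference
    identity_retention: { mode: string };  // e.g. "Strict Facial consistency mode"
    instructions: { description: string }; // detailed appearance description
  };
}

interface ScriptResponse {
  narration: string;       // 5–6 TTS-ready paragraphs
  characterPrompt: string; // Midjourney / DALL·E image prompt
  characterSheet: CharacterSheet;
}

// Convenience accessor: the sheet section that Kling AI's character
// reference field expects.
function sheetForKling(res: ScriptResponse): CharacterSheet["character_design_sheet"] {
  return res.characterSheet.character_design_sheet;
}
```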

Hook-First Voice

The opening sentence is always bold and scroll-stopping — engineered for short-form video pacing.

Single Character DNA

The same character description flows from prompt to sheet — visual consistency is locked across all scenes.

Machine-Readable Output

Clean JSON that plugs directly into AI video pipelines. No reformatting or manual editing required.

Step 3 — Storyboard Generation

From Step 2, the user selects a scene count (1–5) and clicks Generate N Scenes. The API receives the full narration and the character prompt from Step 2, then splits the narration into N naturally flowing scenes — each with a distinct setting (inside the body, nature, kitchen, gym, and so on).
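In the app, the model itself performs this split (and varies each scene's setting), but the basic grouping idea can be sketched as a pure function — a naive illustration only:

```typescript
// Illustrative sketch: evenly group narration paragraphs into N scenes.
// The real split is done by the model, which also assigns each scene a
// distinct setting; this only demonstrates the grouping.
function splitNarration(narration: string, sceneCount: number): string[] {
  const paragraphs = narration
    .split(/\n\s*\n/)
    .map((p) => p.trim())
    .filter(Boolean);
  const n = Math.min(Math.max(sceneCount, 1), 5); // clamp to the 1–5 range
  const perScene = Math.ceil(paragraphs.length / n);
  const scenes: string[] = [];
  for (let i = 0; i < paragraphs.length; i += perScene) {
    scenes.push(paragraphs.slice(i, i + perScene).join("\n\n"));
  }
  return scenes;
}
```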

Response — one object per scene

JSON · Response — storyboard scenes
// Response — array of N scene objects
[
  {
    "scene": 1,
    "sceneNarration": "I am Spinach. For decades, cartoon heroes credited me...",
    "textToImage": "Pixar-style spinach hero standing in a dark corner of a supermarket, dramatic low-key lighting, dust particles, other vegetables ignoring him, cinematic 16:9, deep emerald tones, ultra-detailed 4K render...",
    "imageToVideo": "Character speaks directly to camera with lip-sync. Slow dolly zoom in. Dust particles drift. Ambient market sounds fade in. Hard cut to black."
  },
  // ...scenes 2–5
]

Each scene is displayed as a card with the narration excerpt at the top and a split panel below: text-to-image prompt on the left, image-to-video prompt on the right. Every panel has an individual Copy button. The character from Step 2 is injected into every scene prompt, so visual identity stays locked across the full storyboard.
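The character-injection step can be sketched with a typed helper. The Scene shape matches the response above; injectCharacter is an assumption about how the prompts are combined, not the project's actual code:

```typescript
// Shape of one storyboard scene (matches the sample response above).
interface Scene {
  scene: number;
  sceneNarration: string;
  textToImage: string;
  imageToVideo: string;
}

// Illustrative helper: prepend the Step 2 character prompt to every
// scene's image prompt so the same character appears in each frame.
// An assumption about the mechanism, not the shipped implementation.
function injectCharacter(scenes: Scene[], characterPrompt: string): Scene[] {
  return scenes.map((s) => ({
    ...s,
    textToImage: `${characterPrompt}. ${s.textToImage}`,
  }));
}
```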

Recommended workflow: Generate a 3-scene draft first. Copy the character prompt from Step 2 into Midjourney to produce a reference image. If the character looks right, go back and generate the full 5-scene storyboard — catching visual issues early saves time downstream.

Health Storyboard AI vs. Doing It Manually

Here is how this tool compares against writing a health video script manually or using a generic AI chat tool like ChatGPT.

Capability | Health Storyboard AI | Manual | Generic AI Chat
First-person health object voice | ✓ Built-in | ✗ Must craft manually | ✗ Needs custom prompting
Character prompt + sheet in one call | ✓ Single API call | ✗ Separate effort | ✗ Multiple sessions
Midjourney / DALL·E-ready output | ✓ Paste directly | ✗ Requires adaptation | ✗ Not optimised
Character locking for Kling AI | ✓ JSON schema included | ✗ Not available | ✗ Not available
Full scene storyboard | ✓ Up to 5 scenes | 60–120 minutes | Multiple manual prompts
Total time — idea to storyboard | ✓ Under 30 seconds | 2–4 hours | 30–60 minutes
Affiliate CTA toggle | ✓ One click | Manual insertion | Re-prompt required

Setup — Deploy in 5 Minutes

This is a standard Next.js 14 project. Deploy to Vercel for free. The only environment variable you need is a Groq API key, which is free at console.groq.com — no credit card required.

  • 1
    Download & install

    Download the ZIP below, extract it, then run npm install inside the project folder.

  • 2
    Set your environment variable

    Create a .env.local file and add:
    GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxx
    Get a free key at console.groq.com/keys.

  • 3
    Run locally or deploy to Vercel

    Local: npm run dev → open localhost:3000
    Vercel: push to GitHub → import at vercel.com → add the GROQ_API_KEY env var → Deploy.

Tech stack: Next.js 14 (App Router) · TypeScript · Tailwind CSS · Groq SDK (llama-3.3-70b-versatile) · Skeleton loading states · Per-panel copy buttons · Fully responsive.

Download & Deploy Now

Full source code, free. Deploy to Vercel in 5 minutes with one free Groq API key. No subscription, no watermark, no usage limits.

Download Free · Live Demo ↗