cinematic-ui
Health: Passed
- License — MIT
- Description — Repository has a description
- Active repo — Last push today
- Community trust — 18 GitHub stars
Code: Passed
- Code scan — Scanned 4 files during light audit, no dangerous patterns found
Permissions: Passed
- Permissions — No dangerous permissions requested
This skill provides a reasoning framework for AI coding assistants, prompting them to design websites using narrative, composition, and lighting techniques inspired by specific films and directors. Rather than functioning as a simple template library, it guides the AI through a structured, creative workflow.
Security Assessment
Overall Risk: Low. The automated code scan reviewed 4 files and found no dangerous patterns, hardcoded secrets, or requests for excessive permissions. This tool appears to be a collection of prompt instructions and framework guidelines (with dedicated compatibility for popular AI platforms like Cursor, Copilot, and Gemini) rather than executable software. It does not execute shell commands, make external network requests, or access sensitive local data.
Quality Assessment
The project is in excellent health and actively maintained, with its most recent push occurring today. It is fully open-source under the standard MIT license, allowing for broad commercial and private use. Community trust is currently low-to-moderate with 18 GitHub stars, which is expected for a niche, creative design tool. However, the repository is highly polished, featuring comprehensive documentation and broad multilingual support, which reflects a strong commitment from the developer.
Verdict
Safe to use.
A reasoning-first cinematic web design skill. Makes the AI think like a film director — research a real film, extract its visual language, translate it into page narrative and composition. Not a style picker. A director's workflow.
Cinematic UI
Turn director and film research into premium website rhythm, space, light, and composition.
繁體中文 · 简体中文 · 日本語 · Español · Français · Deutsch · 한국어 · Português · Tiếng Việt
Why This Is Different
Most AI design skills are lookup tables.
They give the AI a database of 67 styles, 161 color palettes, 50 component patterns — and the AI picks from the menu. The result is technically clean, occasionally polished, and reliably forgettable. The AI is not thinking. It is shopping.
Cinematic UI is a thinking framework, not a materials library.
Instead of handing the AI a menu, this skill forces it to work the way a real film director does:
- Research a specific director and a specific film — lighting logic, framing discipline, scene rhythm, material sense
- Extract what makes that film's visual language work at a structural level
- Develop an original visual thesis for the site: what kind of scene is this page? what composition is irreplaceable here?
- Justify every layout decision against that film language
- Only then formalize CSS, motion, and implementation
The difference in output is not a matter of degree. It is a different category of result.
A lookup table produces a site that looks like a site. A director's workflow produces a site that feels like it was made by someone who had a point of view.
The AI is not a shopkeeper picking items off shelves.
The AI is the director. The film is the brief. The website is the production.
What It Is
cinematic-ui is a cross-agent reasoning skill built around a single fixed mechanism:
- Choose a director + specific film
- If web access is available, research that director and film — cinematography, lighting, rhythm, material, scene control
- Translate that film language into a web-executable narrative and layout system
- Define one distinct scene and one irreplaceable composition per major page role
- Only then formalize HTML / CSS / JS implementation
The film is not a spec sheet. The film is research input.
The executable workflow starts when those observations are translated into decisions.md, storyboard.md, compiled-spec.md, and implementation.
Problems It Solves
| Problem | What It Looks Like | How This Skill Solves It |
|---|---|---|
| Pacing | Sections are technically valid but feel like a slide deck, not a directed site | Director narrative templates replace the default Hero → Features → Stats → CTA flow |
| Space | Components exist but visual weight, distance, hierarchy, and pressure are weak | Forces a Signature Composition per page — no fallback to default grid |
| Light | Surface glow only — no real lighting logic, shadow behavior, or material control | Colors sourced from actual film scenes, paired with background-techniques library |
| Premium feel | Clean but not expensive, not restrained, not directed enough | Premium Calibration checklist forces explicit "what we will NOT do" decisions |
| Uniqueness | After multiple demos, hero posture, section rhythm, and geometry start repeating | Demo Uniqueness Protocol: history audit + Shell-ban list before every new project |
Core Features
| Feature | Description |
|---|---|
| Reasoning-first | AI must develop an original visual thesis — not select from a preset library |
| Director-first | Emotional direction comes from a real film, not generic luxury branding |
| Research-first | When web access is available, research the director and film before locking Phase 1 |
| Start questionnaire gate | Every invocation must complete the opening questionnaire before Phase 1 begins |
| Storyboard-first | Write decisions.md, storyboard.md, compiled-spec.md before any frontend code |
| Demo Uniqueness Protocol | Audits prior outputs and enforces a Shell-ban list to prevent repeated shells across projects |
| Anti-grid fallback | Grid is allowed as invisible alignment infrastructure — never as the visible default composition |
| Sub-agent friendly | Film research, niche research, page scenes, spec slices, and verification can be delegated while one lead agent maintains final coherence |
Workflow
| Phase | Main Work | Output Artifact |
|---|---|---|
| Phase 1 — Decisions | Complete start questionnaire, choose director + film, run uniqueness audit, research if web access available | decisions.md |
| Phase 2 — Storyboard | Define site-wide cinematic grammar, write scene thesis per page role, lock signature composition per page | storyboard.md |
| Phase 3 — Compiled Spec | Extract camera / interaction / composition / texture / typography per storyboard — shared system last | compiled-spec.md |
| Phase 4 — Build & Verify | Implement from spec, add reduced-motion + responsive, validate against anti-garbage rules | HTML / CSS / JS |
Phase 2 internal order (non-negotiable):
Site-wide cinematic grammar → per-page scene thesis → per-page signature composition → shared system
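As an illustrative sketch only (the official format lives in references/output-templates.md), a Phase 1 decisions.md might look like this — the director, film, and shell names below are hypothetical examples, not part of the skill:

```markdown
# decisions.md — Phase 1 (illustrative sketch)

## Director + Film
- Director: Wong Kar-wai
- Film: In the Mood for Love (2000)

## Demo Uniqueness Audit
- Prior demos checked: 2
- Shell-ban list: centered-hero-with-gradient, three-card-feature-row

## Research Notes (web access available)
- Lighting: low-key tungsten interiors, saturated reds and greens
- Rhythm: slow push-ins, repeated corridor framings
- Material: wallpaper patterns, smoke, rain-slick surfaces
```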
Supported Platforms
Claude Code and OpenAI Codex are the two primary platforms; all others are fully supported.
| Tool / Platform | Entry File | Install / Config |
|---|---|---|
| Claude Code (primary) | CLAUDE.md | ~/.claude/skills/cinematic-ui |
| Codex / ChatGPT (primary) | CODEX.md | $CODEX_HOME/skills/cinematic-ui |
| Cursor | .cursor/rules/cinematic-ui.mdc | Already in .cursor/rules/ — works on clone |
| Windsurf | .windsurf/rules/cinematic-ui.md | Already in .windsurf/rules/ — works on clone |
| GitHub Copilot | .github/copilot-instructions.md | Already in .github/ — works on clone |
| Gemini / Antigravity | GEMINI.md | Read at project startup |
| Cross-tool shared | AGENTS.md | Universal reference for any agent |
Installation
Claude Code
Windows:
git clone https://github.com/akseolabs-seo/cinematic-ui "$env:USERPROFILE\.claude\skills\cinematic-ui"
macOS / Linux:
git clone https://github.com/akseolabs-seo/cinematic-ui ~/.claude/skills/cinematic-ui
Then invoke with /cinematic-ui inside Claude Code.
Codex / ChatGPT
git clone https://github.com/akseolabs-seo/cinematic-ui $CODEX_HOME/skills/cinematic-ui
Cursor / Windsurf / GitHub Copilot
git clone https://github.com/akseolabs-seo/cinematic-ui
The .cursor/rules/, .windsurf/rules/, and .github/copilot-instructions.md files are already in place. No additional config needed — the rules activate as soon as the repo is present in your project.
Any other tool
Point the tool at this repo or copy the relevant entry file. See the platform table above.
Suggested Prompt Pattern
Use cinematic-ui to build a homepage.
Pick the director and film yourself.
If web access is available, research the director and film first.
Run the Demo Uniqueness Protocol.
Do not reuse shells from previous demos.
Optimize for a great single-page result before building a shared system.
References Library
All reference data lives in references/, organized by phase. Load only what the current phase needs — do not read the entire library at once.
Core Rule Files
| File | Purpose |
|---|---|
| references/library-index.md | Which files to read per phase — start here |
| references/premium-calibration.md | Self-check after director brief: restraint and premium quality |
| references/anti-garbage.md | Common AI design degradation patterns — run in Phase 3 and Phase 4 |
| references/anti-convergence.md | Hash-based selection to prevent repeated shells across demos or pages |
| references/implementation-guardrails.md | Phase 3–4 rules: JS effect list, Entrance Map, Phase 3 checklist, Punch Up / Pull Back |
| references/reference-protocol.md | How to decompose a reference site without copying it |
| references/output-templates.md | Standard format templates for each phase artifact |
Data Libraries (Phase 1–3)
| File | Contents |
|---|---|
| references/data/directors-200.md | 200+ directors by genre, with signature films and visual style descriptions |
| references/data/hero-archetypes.md | 30 hero skeleton options |
| references/data/narrative-beats.md | 25 narrative beats + 18 director arc templates |
| references/data/section-functions.md | 50 functional section types |
| references/data/section-archetypes.md | 91+ section skeleton options |
| references/data/dna-index.tsv | Design DNA index of 1,486 sites — searchable by mood, type, motion |
| references/data/design-dna-db.txt | Deep site-level DNA data (load only on index hit) |
| references/data/camera-shots-50.md | 55 entrance and reveal behaviors with CSS |
| references/data/interaction-effects-50.md | 55+ hover / click / scroll interactions (includes JS-required variants) |
| references/data/compositions.md | 80 layout compositions and grid logics |
| references/data/visual-elements.md | 40 visual decoration elements |
| references/data/background-techniques.md | 50+ hero background and atmosphere layer techniques |
| references/data/typography-cinema.md | 40+ text performance and hierarchy treatments |
| references/data/color-grades.md | 40+ film palette to UI token translations |
| references/data/font-moods.md | 30+ font pairings by tone |
| references/data/textures.md | 30+ grain / grid / dust / scan line surface techniques |
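Since dna-index.tsv is described as searchable, a shell one-liner is the natural way to query it. The exact column layout of the real file is an assumption here, so this sketch builds a tiny stand-in sample; point grep at references/data/dna-index.tsv and adjust the field numbers for actual use:

```shell
# Build a tiny stand-in sample (the real index is references/data/dna-index.tsv;
# its column order — site, mood, motion — is an assumption for this sketch).
printf 'site\tmood\tmotion\nexample-a.com\tnoir\tslow-pan\nexample-b.io\tpastel\tstatic\n' > /tmp/dna-sample.tsv

# Filter rows whose mood mentions "noir", keep the site and mood columns.
grep -i 'noir' /tmp/dna-sample.tsv | cut -f1,2
```

This prints the matching row as `example-a.com	noir`; the same grep-then-cut pattern works on the full index once the real column positions are confirmed.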
Repository Structure
cinematic-ui/
├── SKILL.md ← main skill logic (primary entry for all agents)
├── skill.json ← universal skill manifest
├── directors-library.md ← legacy compatibility file
│
├── CLAUDE.md ← Claude Code
├── AGENTS.md ← cross-tool shared reference
├── CODEX.md ← Codex / ChatGPT
├── GEMINI.md ← Gemini / Antigravity
│
├── .cursor/
│ └── rules/
│ └── cinematic-ui.mdc ← Cursor rules (auto-loaded)
│
├── .windsurf/
│ └── rules/
│ └── cinematic-ui.md ← Windsurf rules (auto-loaded)
│
├── .github/
│ ├── copilot-instructions.md ← GitHub Copilot (auto-loaded)
│ ├── PULL_REQUEST_TEMPLATE.md
│ └── ISSUE_TEMPLATE/
│
├── agents/
│ └── openai.yaml ← OpenAI skill metadata
│
├── docs/
│ └── banner.svg
│
├── CHANGELOG.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
├── SECURITY.md
├── LICENSE
│
└── references/
├── library-index.md
├── premium-calibration.md
├── anti-garbage.md
├── anti-convergence.md
├── implementation-guardrails.md
├── reference-protocol.md
├── output-templates.md
└── data/ ← 18 design data libraries (~600KB total)
Follow
For updates and more AI / design workflow content, follow the repository.
Contributing
Please read CONTRIBUTING.md before opening a PR.
License
MIT. See LICENSE.