# meowmeow

## Health: Warn

- License: MIT
- Description: repository has a description
- Active repo: last push 0 days ago
- Low visibility: only 7 GitHub stars

## Code: Warn

- Code scan incomplete: no supported source files were scanned during the light audit

## Permissions: Pass

- Permissions: no dangerous permissions requested
This is a prompt-engineering skill designed to reduce AI sycophancy. It provides a "/meow" slash-command that instructs an AI agent to analyze its previous response and automatically correct itself, continue a stopped task, retry with a new approach, or make an independent decision based on the context.
## Security Assessment
Overall risk: Low. The tool consists entirely of static Markdown files (`meow.md` and `meow-core.md`) containing text instructions for AI behavior. The automated code scan did not identify any executable source code. Consequently, there are no hardcoded secrets, no network requests, and no shell command executions involved. It is simply a text prompt installed locally on your machine.
## Quality Assessment
The project is highly active, with its most recent updates pushed today. It benefits from the widely trusted MIT license, ensuring clear permissions for use and modification. However, it has low community visibility, currently sitting at only 7 GitHub stars, meaning it has not been broadly vetted by a large user base. Additionally, the lack of supported source code means standard automated quality and security testing cannot be applied.
## Verdict
Safe to use. Since it only installs harmless local text files to guide AI behavior, there is no functional security risk, making it a clever and lightweight addition for developers struggling with agent sycophancy.
a philosophy for talking to AI agents without getting glazed. one trigger, four meanings. /meow.
# meowmeow
one tiny trigger for better conversations with AI agents.
one word, four meanings. read the room, not the text.
meow.
## the problem
agents flip. you push back once, softly, and a correct answer becomes a wrong one.
that is not helpfulness. it is sycophancy: agreement behavior rewarded over truth-seeking. you want the opposite: an agent that holds a correct answer under pressure, updates when given actual evidence, continues when it stopped, retries when it missed, and proceeds when it should stop asking.
skepticism is not new information.
## the shape
/meow means: inspect your previous response and infer which correction the user needs.
| previous assistant response | /meow means | response marker |
|---|---|---|
| made a claim worth testing | recheck it | Rechecking - |
| stopped mid-task | continue | Continuing - |
| finished, but missed | retry differently | Different angle - |
| asked the user to decide something it can decide | pick and act | Picking - |
same signal, different meaning per context. like cats, where the sound matters less than what just happened.
## the command
`meow.md` is the ready-to-use Claude Code command/skill.
`meow-core.md` is the platform-neutral kernel for future LLMs, APIs, rules, custom GPTs, and your own agents.
the core rule:
classify the previous assistant response, then act.
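the core rule can be sketched as a tiny dispatch. this is a toy Python illustration, not part of the kernel: the `Mode` enum mirrors the four modes above, but the keyword checks in `classify` only stand in for the model's own judgment, which in practice reads the whole conversation.

```python
from enum import Enum

class Mode(Enum):
    RECHECK = "Rechecking -"     # previous response made a claim worth testing
    CONTINUE = "Continuing -"    # previous response stopped mid-task
    RETRY = "Different angle -"  # previous response finished, but missed
    PICK = "Picking -"           # previous response punted a decision to the user

def classify(previous_response: str) -> Mode:
    """Toy heuristic standing in for the agent's own inference.

    A real agent classifies from full context; these string checks
    exist only to make the dispatch shape concrete.
    """
    text = previous_response.lower()
    if "should i" in text or "would you like" in text:
        return Mode.PICK
    if text.rstrip().endswith(("...", "…")):
        return Mode.CONTINUE
    if "i think" in text or "probably" in text:
        return Mode.RECHECK
    return Mode.RETRY

def meow(previous_response: str) -> str:
    # the agent prefixes its reply with the matching response marker
    return f"{classify(previous_response).value} "
```

same signal in, four different behaviors out, chosen from what just happened rather than from the trigger text.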
## install

### claude code

current Claude Code supports user-invoked skills. install meow as a skill:
```sh
mkdir -p ~/.claude/skills/meow
cp meow.md ~/.claude/skills/meow/SKILL.md
```
then type `/meow`.
legacy custom command layout:
```sh
mkdir -p ~/.claude/commands
cp meow.md ~/.claude/commands/meow.md
```
### any other agent

use `meow-core.md` as:
- a system prompt addition
- a project instruction
- an `AGENTS.md` section
- a Cursor, Continue, Cline, Roo, or Aider rule/convention
- a custom GPT instruction
- an API dispatcher branch when the user says `/meow`
for API agents, inject the previous assistant message explicitly when possible:
```
<your_previous_response>
{{previous_assistant_message}}
</your_previous_response>
```
that makes the trigger less dependent on long-context memory.
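a minimal sketch of that dispatcher branch, assuming an OpenAI-style list of `{"role", "content"}` messages; the function and variable names are illustrative, not an API this repo ships:

```python
def build_messages(history, user_input, meow_core):
    """On /meow, inject the previous assistant message explicitly so the
    trigger does not depend on long-context recall.

    `history` is a list of {"role": ..., "content": ...} dicts and
    `meow_core` is the text of meow-core.md. Illustrative names only.
    """
    if user_input.strip() != "/meow":
        # normal turn: pass the user message through untouched
        return history + [{"role": "user", "content": user_input}]

    # find the most recent assistant message to wrap
    previous = next(
        (m["content"] for m in reversed(history) if m["role"] == "assistant"),
        "",
    )
    wrapped = (
        f"{meow_core}\n\n"
        f"<your_previous_response>\n{previous}\n</your_previous_response>\n\n"
        "/meow"
    )
    return history + [{"role": "user", "content": wrapped}]
```

the kernel text rides along with the wrapped response, so the model classifies exactly the turn the user meant.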
## principles
- context over text. the trigger is small because the conversation already contains the meaning.
- calibrated confidence. defend what still holds; revise what fails.
- evidence over vibes. bare pushback is pressure, not proof.
- no agreement theater. skip "you're absolutely right", "great catch", and apology loops.
- different means different. a retry should change angle, not merely rephrase.
- simple enough to port. the kernel should fit in any agent surface without becoming a manual.
## why this matters
sycophancy is a known LLM failure mode. Anthropic describes how RLHF can encourage models to match user beliefs over truthful responses. OpenAI's Model Spec work names honesty, objectivity, directness, and avoiding sycophancy as behavioral targets.
/meow is not a full alignment solution. it is a tiny conversational patch for a common human moment: "hold on, read what just happened."
## port map
highest-value ports:
- Claude Code: `~/.claude/skills/meow/SKILL.md` or `~/.claude/commands/meow.md`
- Cursor: `.cursor/commands/meow.md` or `.cursor/rules/meow.mdc`
- Continue: `.continue/rules/meow.md` or an invokable prompt
- Codex: `AGENTS.md`, a Codex skill, or a plugin wrapper
- Cline/Roo: workspace rules or command files
- Aider/ChatGPT: read-only convention or project instruction
the future-proof part is not the slash-command file. it is the four-mode kernel.
## status
theory-complete, implementation-ready, still field-test hungry.
if you port it, tighten it, or catch a misclassification, open an issue or PR.
## research links
- Anthropic: Towards Understanding Sycophancy in Language Models
- OpenAI: Inside our approach to the Model Spec
- OpenAI: Harness engineering and small agent instructions
- Claude Code slash commands and skills
## license
MIT. see LICENSE.