caveman

skill
Security Audit
Warn
Health Warn
  • License - License: MIT
  • Description - Repository has a description
  • Active repo - Last push 0 days ago
  • Low visibility - Only 7 GitHub stars
Code Pass
  • Code scan - Scanned 2 files during light audit, no dangerous patterns found
Permissions Pass
  • Permissions - No dangerous permissions requested
Purpose
This is a Claude Code skill that modifies the AI's output style to use extremely concise, "caveman" language. It is designed to reduce LLM token usage and costs by up to 75% while retaining full technical accuracy and normal code generation.

Security Assessment
Overall Risk: Low
The codebase is lightweight, consisting of only 2 files. The automated code scan passed with no dangerous patterns detected, and the tool does not request any dangerous system permissions. It does not appear to access sensitive data, execute arbitrary shell commands, or make external network requests. There are no hardcoded secrets to worry about. The tool functions purely as a prompt-engineering wrapper to change Claude's conversational style.

Quality Assessment
The project is very new and currently has low community visibility with only 7 GitHub stars, meaning it has not been broadly battle-tested. However, it is under active development (last pushed 0 days ago) and does come with a standard MIT license, which is permissive and safe for commercial and private use. Given its extremely simple and highly specific scope, the lack of a massive community is not a significant drawback, as there is very little surface area for complex bugs.

Verdict
Safe to use: It is a highly minimal, read-only prompt modification tool that requires practically zero trust to install and use securely.
SUMMARY

🪨 why use many token when few token do trick - Claude Code skill that cuts ~75% of tokens by talking like caveman

README.md

caveman

why use many token when few token do trick

Install • Before/After • Why


A Claude Code skill that makes Claude talk like a caveman - cutting ~75% of tokens while keeping full technical accuracy.

Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.

Before / After

๐Ÿ—ฃ๏ธ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🪨 Caveman Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."
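The re-render logic behind that answer can be sketched without React at all: a minimal simulation of the `Object.is`-based shallow prop comparison that `React.memo` performs (the type and function names here are illustrative, not React's internals):

```typescript
// Sketch: why inline object props defeat shallow prop comparison.
type Style = { color: string };

function sameProp(a: Style, b: Style): boolean {
  return Object.is(a, b); // reference equality, not deep equality
}

// Inline object: a fresh reference is created on every render
const renderA = { color: "red" };
const renderB = { color: "red" };
console.log(sameProp(renderA, renderB)); // false -> React re-renders

// useMemo-style fix: one reference reused across renders
const memoized = { color: "red" };
console.log(sameProp(memoized, memoized)); // true -> render skipped
```

Both objects are structurally identical; only the reused reference passes the shallow check, which is exactly why `useMemo` stops the re-render.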

๐Ÿ—ฃ๏ธ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🪨 Caveman Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"

Same fix. 75% less word. Brain still big.
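A hedged sketch of the kind of off-by-one the caveman reply points at: a token whose expiry timestamp equals "now" should already be rejected (function names and the epoch-seconds convention are illustrative, not from the skill's code):

```typescript
// Buggy check: token expiring at this exact second still passes
function isValidBuggy(nowSec: number, expSec: number): boolean {
  return nowSec <= expSec;
}

// Fixed check: strict comparison, so the expiry moment means expired
function isValidFixed(nowSec: number, expSec: number): boolean {
  return nowSec < expSec;
}

const exp = 1_700_000_000; // illustrative epoch-seconds expiry
console.log(isValidBuggy(exp, exp)); // true  (the bug)
console.log(isValidFixed(exp, exp)); // false (the fix)
```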

Install

npx skills add JuliusBrussee/caveman

Or with Claude Code plugin system:

claude plugin marketplace add JuliusBrussee/caveman
claude plugin install caveman@caveman

One rock. That it.

Usage

Trigger with:

  • /caveman
  • "talk like caveman"
  • "caveman mode"
  • "less tokens please"

Stop with: "stop caveman" or "normal mode"

What Caveman Do

Thing                    Caveman Do?
English explanation      🪨 Caveman smash filler words
Code blocks              ✍️ Write normal (caveman not stupid)
Technical terms          🧠 Keep exact (polymorphism stay polymorphism)
Error messages           📋 Quote exact
Git commits & PRs        ✍️ Write normal
Articles (a, an, the)    💀 Gone
Pleasantries             💀 "Sure I'd be happy to" is dead
Hedging                  💀 "It might be worth considering" extinct

Why

┌──────────────────────────────────────┐
│  TOKENS SAVED        ████████  75%   │
│  TECHNICAL ACCURACY  ████████  100%  │
│  SPEED INCREASE      ████████  ~3x   │
│  VIBES               ████████  OOG   │
└──────────────────────────────────────┘
  • Save money - 75% less token = 75% less cost on output
  • Faster response - less token to generate = speed go brrr
  • Same accuracy - all technical info kept, only fluff removed
  • Fun - every code review become comedy

How It Work

Caveman not dumb. Caveman efficient.

Normal LLM waste token on:

  • "I'd be happy to help you with that" (8 wasted tokens)
  • "The reason this is happening is because" (7 wasted tokens)
  • "I would recommend that you consider" (7 wasted tokens)
  • "Sure, let me take a look at that for you" (10 wasted tokens)

Caveman say what need saying. Then stop.
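The savings claim can be sanity-checked with a crude sketch. Real counts depend on the model's tokenizer; whitespace splitting is only a rough proxy, and the two strings below are paraphrases of the Before/After example above:

```typescript
// Rough proxy for token count: split on whitespace
function roughTokens(text: string): number {
  return text.trim().split(/\s+/).length;
}

const normal =
  "Sure! I'd be happy to help you with that. The issue is most likely " +
  "caused by your authentication middleware not validating token expiry.";
const caveman = "Bug in auth middleware. Token expiry check use < not <=.";

const saved = 1 - roughTokens(caveman) / roughTokens(normal);
console.log(`~${Math.round(saved * 100)}% fewer tokens`); // prints "~52% fewer tokens"
```

Even this crude word count lands in the same ballpark as the README's ~75% figure once greetings and hedges are stripped from longer replies.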

Star This Repo

If caveman save you mass token, mass money - leave mass star. ⭐

License

MIT - free like mass mammoth on open plain.
