useai
Health: Warn
- License — AGPL-3.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 9 GitHub stars
Code: Fail
- rm -rf — Recursive force deletion command in packages/cli/package.json
Permissions: Pass
- Permissions — No dangerous permissions requested
This tool is a local-first MCP server that records and analyzes your AI coding sessions, tracking metrics like duration, language, and prompt quality without reading your source code. Think of it as a productivity tracker, similar to Wakatime, specifically designed for AI-assisted development.
Security Assessment
The tool monitors your AI tool activity via MCP calls, though it explicitly states it does not view your underlying code. It does not request dangerous system permissions. However, the automated scan flagged a recursive force deletion command (`rm -rf`) inside a package script, which warrants caution. Additionally, the documentation mentions cloud verification for a global leaderboard, which means it does make network requests and transmit data if you opt-in to public profiles. Overall risk is assessed as Medium due to the destructive shell command flag and the telemetry/cloud synchronization capabilities.
Quality Assessment
The project is quite new and has very low community visibility, currently sitting at only 9 GitHub stars. On a positive note, it is actively maintained with a recent push, and it is released under the AGPL-3.0 license. Because of the low adoption rate, the codebase has not yet been broadly vetted by the open-source community.
Verdict
Use with caution — the tool is actively maintained, but the low community vetting, telemetry features, and flagged destructive package script mean you should review the code before deploying.
UseAI
Track your AI coding sessions with local-first analytics.
UseAI is a local-first MCP server that records how you use AI coding tools -- session duration, languages, task types, and streaks -- without ever seeing your code. Think of it as Wakatime for AI coding.
Features
- Prompt tracking -- automatically records when you start and stop using AI tools
- Clock time vs AI time -- deduped wall-clock time and total AI session time with multiplier
- Parallel sessions -- run multiple AI sessions across projects, see peak concurrency
- Streak tracking -- daily coding streaks with global leaderboard
- AI proficiency -- prompts evaluated on prompt quality, context, scope, and independence (1-5 scale)
- Local dashboard -- built-in web UI served from the daemon
- Public profile & leaderboard -- opt-in shareable profile at useai.dev with global AI proficiency rankings
- Ed25519 signed chain -- every prompt record is cryptographically sealed for tamper evidence
- Seal verification -- real-time cloud verification at session end; only verified sessions count for the leaderboard
- 30+ AI tools supported -- Claude Code, Cursor, Windsurf, VS Code, Codex, Gemini CLI, GitHub Copilot, Aider, Cline, Zed, Amazon Q, JetBrains/Junie, Goose, Roo Code, and many more
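The "clock time vs AI time" distinction above comes down to interval merging: total AI time sums every session's duration, while clock time merges overlapping intervals first so parallel sessions count the wall-clock span only once. A minimal sketch of that calculation (not the project's actual implementation):

```typescript
type Interval = { start: number; end: number }; // epoch millis

// Total AI time: sum of all session durations; overlaps count twice.
function totalAiTime(sessions: Interval[]): number {
  return sessions.reduce((sum, s) => sum + (s.end - s.start), 0);
}

// Clock time: merge overlapping intervals so parallel sessions
// only contribute the deduped wall-clock span.
function clockTime(sessions: Interval[]): number {
  const sorted = [...sessions].sort((a, b) => a.start - b.start);
  let total = 0;
  let cur: Interval | null = null;
  for (const s of sorted) {
    if (cur && s.start <= cur.end) {
      cur.end = Math.max(cur.end, s.end); // overlap: extend current span
    } else {
      if (cur) total += cur.end - cur.start;
      cur = { ...s };
    }
  }
  if (cur) total += cur.end - cur.start;
  return total;
}

// Two parallel sessions: 0-60 min and 30-90 min.
const sessions = [
  { start: 0, end: 3_600_000 },
  { start: 1_800_000, end: 5_400_000 },
];
console.log(totalAiTime(sessions) / 3_600_000); // 2 hours of AI time
console.log(clockTime(sessions) / 3_600_000);   // 1.5 hours of clock time
```

The ratio of the two (here 2 / 1.5) is the "multiplier" the feature list mentions.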
Quick Start
npx @devness/useai
This installs the MCP server and configures it for your AI tools automatically.
Manual Setup
Claude Code
claude mcp add useai -- npx -y @devness/useai
Cursor
Add to .cursor/mcp.json:
{
  "mcpServers": {
    "useai": {
      "command": "npx",
      "args": ["-y", "@devness/useai"]
    }
  }
}
VS Code
Add to your VS Code MCP settings:
{
  "mcp": {
    "servers": {
      "useai": {
        "command": "npx",
        "args": ["-y", "@devness/useai"]
      }
    }
  }
}
Windsurf
Add to your Windsurf MCP config:
{
  "mcpServers": {
    "useai": {
      "command": "npx",
      "args": ["-y", "@devness/useai"]
    }
  }
}
No API key needed. The MCP server runs locally on your machine.
How It Works
UseAI runs as an MCP (Model Context Protocol) server. When your AI tool starts a conversation, it calls useai_start. During the session, periodic useai_heartbeat calls track active time intervals. When the conversation ends, useai_end seals the session with an Ed25519 signature and a seal verification call to the cloud.
All data is written to ~/.useai/ as date-based JSONL files (e.g. 2026-04-27.jsonl).
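Since each line in a date-based file is one JSON record, the store can be read with a plain line-by-line parse. A sketch, using a hypothetical record shape (the real schema lives in `packages/types`):

```typescript
// Hypothetical record shape -- the actual fields are defined by the
// project's zod schemas in packages/types.
type SessionRecord = {
  tool: string;
  durationMs: number;
  languages: string[];
};

// Each line of a ~/.useai/YYYY-MM-DD.jsonl file is one JSON object.
function parseJsonl(contents: string): SessionRecord[] {
  return contents
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionRecord);
}

const sample =
  '{"tool":"cursor","durationMs":420000,"languages":["ts"]}\n' +
  '{"tool":"claude-code","durationMs":600000,"languages":["ts","css"]}\n';

const records = parseJsonl(sample);
console.log(records.length); // 2
console.log(records.map((r) => r.tool).join(", ")); // cursor, claude-code
```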
| MCP Tool | What it does |
|---|---|
| `useai_start` | Begin tracking a prompt |
| `useai_heartbeat` | Keep-alive during long prompts; tracks active time segments |
| `useai_end` | End prompt; records milestones, evaluation, and seal verification |
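The three-call lifecycle can be sketched as a small in-memory tracker (hypothetical names and fields; the real tools run inside the MCP server and persist to JSONL):

```typescript
type Session = {
  id: string;
  startedAt: number;
  heartbeats: number[]; // active-time markers from keep-alives
  endedAt?: number;
};

const sessions = new Map<string, Session>();

// useai_start: begin tracking a prompt.
function useaiStart(id: string, now: number): Session {
  const s: Session = { id, startedAt: now, heartbeats: [] };
  sessions.set(id, s);
  return s;
}

// useai_heartbeat: keep-alive during long prompts.
function useaiHeartbeat(id: string, now: number): void {
  sessions.get(id)?.heartbeats.push(now);
}

// useai_end: close the session (the real tool also seals and verifies here).
function useaiEnd(id: string, now: number): Session | undefined {
  const s = sessions.get(id);
  if (s) s.endedAt = now;
  return s;
}

const s = useaiStart("abc", 0);
useaiHeartbeat("abc", 30_000);
useaiHeartbeat("abc", 60_000);
useaiEnd("abc", 90_000);
console.log(s.heartbeats.length, s.endedAt); // 2 90000
```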
Daemon Mode
For tools that support HTTP-based MCP (StreamableHTTP), UseAI can run as a background daemon on 127.0.0.1:19200. This allows multiple AI tool sessions to connect concurrently to the same tracking instance:
useai serve # Start daemon + local dashboard
The setup wizard auto-configures the right mode (stdio or daemon) for each tool.
What Gets Tracked
- Which AI tool you're using (Cursor, Claude Code, etc.)
- Prompt duration and task type (coding, debugging, testing, etc.)
- Active time segments (for accurate clock time calculation)
- Programming languages used
- Files touched count
- Milestone descriptions (title, privateTitle, category, complexity)
- Project name
- Evaluation metrics (prompt quality, context, scope, independence, task outcome)
Never tracked: your code, prompts, or AI responses.
What Gets Synced
When you sync, session metadata, titles, project names, evaluation scores, and milestones are sent to the server. Private titles and project names are only visible to you as the owner -- public profiles show aggregate stats only.
Seal Verification
At the end of each prompt, a verification request is sent to the cloud with the session ID and timestamp. The server generates a unique signature -- proving the session was sealed in real-time. Only verified sessions count towards the leaderboard. If the cloud is unreachable, the session seals normally without verification.
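The Ed25519 tamper evidence described above can be illustrated with Node's built-in crypto module. This is a sketch of signing and verifying a single record, not the project's actual chain format (which is in `packages/crypto`):

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generate an Ed25519 keypair (the real keystore persists this).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Hypothetical record fields for illustration only.
const record = JSON.stringify({ sessionId: "abc", endedAt: 1714200000000 });
const payload = Buffer.from(record);

// Ed25519 has a built-in digest, so the algorithm argument is null.
const signature = sign(null, payload, privateKey);

console.log(verify(null, payload, publicKey, signature)); // true

// Any modification to the record invalidates the signature.
const tampered = Buffer.from(record.replace("abc", "xyz"));
console.log(verify(null, tampered, publicKey, signature)); // false
```

Chaining works by including the previous record's hash in each new record before signing, so deleting or reordering entries is detectable as well.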
AI Proficiency Score (APS)
The APS is a composite 0-1000 score that aggregates your performance across multiple sessions. It combines five components:
| Component | Weight | Description |
|---|---|---|
| Output | 25% | Complexity-weighted milestones completed |
| Efficiency | 25% | Complexity weight per hour of AI session time |
| Prompt Quality | 20% | Average prompt quality, context, and scope scores |
| Consistency | 15% | Active days ratio, streak, and session frequency |
| Breadth | 15% | Unique languages, AI tools, and tool leverage |
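The composite is a straightforward weighted sum of the five components scaled to 0-1000. A sketch using the weights from the table (the normalization of each raw component to [0, 1] is hypothetical):

```typescript
// Weights from the APS table above.
const WEIGHTS = {
  output: 0.25,
  efficiency: 0.25,
  promptQuality: 0.2,
  consistency: 0.15,
  breadth: 0.15,
} as const;

type Components = Record<keyof typeof WEIGHTS, number>; // each in [0, 1]

// Weighted sum scaled to the 0-1000 composite score.
function apsScore(c: Components): number {
  const weighted = (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[])
    .reduce((sum, k) => sum + WEIGHTS[k] * c[k], 0);
  return Math.round(weighted * 1000);
}

const score = apsScore({
  output: 0.8,
  efficiency: 0.6,
  promptQuality: 0.9,
  consistency: 0.5,
  breadth: 0.7,
});
// 0.8*0.25 + 0.6*0.25 + 0.9*0.2 + 0.5*0.15 + 0.7*0.15 = 0.71
console.log(score); // 710
```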
Architecture
UseAI is a modular monorepo:
packages/
types/ Pure types + zod schemas (zero deps)
crypto/ Ed25519 chain, keystore, verification
storage/ All filesystem I/O (sessions, config, paths)
scoring/ Evaluation frameworks
cloud/ Auth, sync, leaderboard API client
mcp-server/ 3 MCP tools (start/heartbeat/end) -- published as @devness/useai
daemon/ Hono HTTP server, REST API routes, autostart, sync scheduler
dashboard/ React 19 + Zustand + Tailwind SPA
tool-installer/ Install/remove MCP config for 20+ AI tools
cli/ Full CLI (Commander.js)
Tech stack: TypeScript 5.7 (strict), ESM only, pnpm workspaces, Turborepo, Hono, React 19, Vite 6, Tailwind v3, Zustand v5, Zod, MCP SDK.
Privacy
- Local-first -- data stored in ~/.useai/, processing happens on your machine
- No code transmitted -- source code, prompts, and AI responses never leave your machine
- Open source -- audit exactly what gets recorded (AGPL-3.0)
- Cryptographic chain -- Ed25519 signed hash chain for tamper evidence
- Opt-in sync -- data only leaves your machine when you choose to sync
- You own your data -- export or delete date-based JSONL files at any time
- Seal verification -- a lightweight API call at session end for leaderboard eligibility; if offline, session seals normally
CLI
useai stats # View local stats
useai sync # Sync sessions to useai.dev
useai serve # Start daemon + local dashboard
useai config # Manage settings
Links
- Website: useai.dev
- GitHub: devness-com/useai
- npm: @devness/useai
- Explore: useai.dev/explore
License
AGPL-3.0