# DAIV

Your AI-powered SWE teammate, built into your git workflow. An open-source async SWE agent for your Git platform.
DAIV integrates directly with GitLab and GitHub repositories through webhooks. No separate interface needed — you keep using your existing workflow while DAIV handles automation in the background.
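To illustrate the webhook-driven flow, here is a hypothetical sketch (not DAIV's actual code, and the payload shape is simplified from GitLab's real webhook format) of how an incoming event might be mapped to an agent action:

```python
# Hypothetical sketch of webhook dispatch (simplified payloads, not DAIV's real handlers):
# the Git platform POSTs an event, and the handler decides what the agent should do.

def dispatch(event: dict) -> str:
    """Map a simplified GitLab-style webhook payload to an agent action."""
    kind = event.get("object_kind")
    if kind == "issue" and "daiv" in event.get("labels", []):
        return "plan_issue"          # draft an implementation plan for the issue
    if kind == "note" and event.get("note", "").startswith("@daiv"):
        return "answer_comment"      # react to a mention in a discussion
    if kind == "pipeline" and event.get("status") == "failed":
        return "repair_pipeline"     # attempt to fix the failing CI job
    return "ignore"                  # anything else is not for the agent
```

In the real system the event arrives at a Django webhook endpoint and the resulting action is enqueued as an async task rather than handled inline.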
## What DAIV does
DAIV automates routine software engineering work so you can focus on creative problem-solving:
- Issue Addressing — Converts issue descriptions into working code. DAIV reads the issue, generates a plan, waits for your approval, then opens a merge/pull request with the implementation.
- Pull Request Assistant — Responds to reviewer comments, applies requested changes, and repairs failing CI/CD pipelines — all from within the merge/pull request conversation.
- Slash Commands & Skills — Invoke commands and skills directly from issues and merge requests (`/help`, `/plan`, `/code-review`, `/clone-to-topics`). Built-in skills provide planning, code review, and security audits — and you can create your own.
## Quick example
- You create an issue: "Add rate limiting to the API endpoints"
- DAIV posts a plan: Analyzes the codebase and proposes implementation steps
- You approve: Comment `@daiv proceed`
- DAIV implements: Creates a merge request with the code changes
- Reviewer asks for changes: "@daiv use Redis instead of in-memory storage"
- DAIV updates the code: Modifies the implementation and pushes
## Under the hood
DAIV's agent has access to a set of capabilities that make this possible:
- Scalable Workers — Handle more concurrent tasks by increasing worker replicas — no architecture changes needed.
- Subagents — Specialized agents for fast codebase exploration and complex multi-step tasks.
- Sandbox — Secure command execution for running tests, builds, linters, and package management inside an isolated Docker container.
- MCP Tools — External tool integrations via the Model Context Protocol, such as Sentry for error tracking.
- Monitoring — Track agent behavior with LangSmith to analyze performance and identify issues.
- LLM Providers — OpenRouter, Anthropic, OpenAI, and Google Gemini.
## Technology Stack
- Agent Framework: Deep Agents — the core agent engine powering DAIV. A general-purpose deep agent with sub-agent spawning, middleware stack, and virtual filesystem. Built on LangGraph.
- Backend Framework: Django for building robust APIs and managing database models.
- Async Tasks: Django Tasks with the `django-tasks` backend and `django-crontask` for periodic scheduling.
- Code Executor: a sandbox for securely running commands, allowing the agents to perform actions on the codebase.
- Observability: LangSmith for tracing and monitoring all the interactions between DAIV and your codebase.
- Error Handling: Sentry for tracking and analyzing errors.
## Getting Started
### Prerequisites
- Docker & Docker Compose
### Local Development Setup
Clone the repository:

```shell
git clone https://github.com/srtab/daiv.git
cd daiv
```

Run setup:

```shell
make setup
```

This creates config files from their templates (`config.secrets.env` and `config.toml`). Edit `docker/local/app/config.secrets.env` and add your API keys — at minimum one LLM provider key (Anthropic, OpenAI, Google, or OpenRouter) and `CODEBASE_GITLAB_AUTH_TOKEN` if using GitLab.

Install dependencies (optional):

We use uv to manage dependencies on DAIV.

```shell
uv sync
```

> [!NOTE]
> This will install the project dependencies into a virtual environment. Useful for running linting outside of Docker or enabling autocompletion in VSCode.

Start core services:

```shell
docker compose up --build
```

This starts the core services (db, redis, app, worker, scheduler). SSL certificates are auto-generated on first run.
- DAIV API documentation: https://localhost:8000/api/docs/
Start optional services (as needed):

```shell
docker compose --profile gitlab up    # local GitLab instance + runner
docker compose --profile sandbox up   # sandbox code executor
docker compose --profile mcp up       # MCP servers
docker compose --profile full up      # all services
```

> [!NOTE]
> Profiles can be combined: `docker compose --profile gitlab --profile sandbox up`

Run the tests (optional):

DAIV includes a comprehensive test suite. To run tests with coverage:

```shell
docker compose exec -it app bash
make test
```

Run linting (optional):

To ensure code quality:

```shell
docker compose exec -it app bash
make lint      # check for linting and formatting issues
make lint-fix  # automatically fix linting and formatting issues
```
### Optional: Local GitLab
To test DAIV with a local GitLab instance:
Start GitLab:

```shell
docker compose --profile gitlab up
```

Get the root password:

```shell
docker compose exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
```

Configure a personal access token at http://localhost:8929 (use the root user or create a new user) and add it to `docker/local/app/config.secrets.env` as `CODEBASE_GITLAB_AUTH_TOKEN`.

Create a test project in GitLab and push your testing code to it.

> [!TIP]
> You can import using a repository URL: go to `Admin Area` -> `Settings` -> `General` -> `Import and export settings` and check the `Repository by URL` option.

Set up webhooks:

```shell
docker compose exec -it app django-admin setup_webhooks
```

> [!NOTE]
> If you get the error `Invalid url given`, go to `Admin Area` -> `Settings` -> `Network` -> `Outbound requests` and check `Allow requests to the local network from webhooks and integrations`.

Test DAIV by creating an issue in your repository with the `daiv` label. DAIV will automatically present a plan to address the issue.

> [!NOTE]
> For GitHub integration, you'll need to use GitHub.com or your own GitHub Enterprise instance. Set `CODEBASE_CLIENT=github` in `docker/local/app/config.env` and configure the GitHub App credentials.
## Roadmap
- Configurable hooks — run DAIV on specific events with user-defined triggers and actions.
- Chrome extension — interact with DAIV directly from the git platform without leaving the browser.
- Custom MCP servers — user-defined MCP servers via a JSON config file following the Claude Code `.mcp.json` standard.
- Scheduled maintenance tasks — run DAIV on a cron schedule for tasks like dependency updates, security scans, or documentation drift detection.
- Team notifications — Slack, Discord, and Microsoft Teams integrations to notify teams about DAIV actions and request approvals.
- Self-hosted LLM support — enable local model inference via Ollama or vLLM for air-gapped or cost-sensitive environments.
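For the custom MCP servers item above, a Claude Code-style `.mcp.json` file looks roughly like this (a hedged sketch: the server name, command, and arguments are placeholders, and DAIV's exact config location is not yet defined by the roadmap):

```json
{
  "mcpServers": {
    "error-tracker": {
      "command": "npx",
      "args": ["-y", "some-mcp-server-package"],
      "env": { "AUTH_TOKEN": "..." }
    }
  }
}
```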
## Contributing
We welcome contributions! Whether you want to fix a bug, add a new feature, or improve documentation, please refer to the CONTRIBUTING.md file for more information.
## License
This project is licensed under the Apache 2.0 License.
## Support & Community
For questions or support, please open an issue in the GitHub repository. Contributions, suggestions, and feedback are greatly appreciated!