ai-code-interface.el


Unified Emacs interface supporting OpenAI Codex, GitHub Copilot CLI, Claude Code, Gemini CLI, Opencode, and more


[[file:./icon.png]]

* AI Code Interface

[[https://melpa.org/#/ai-code][https://melpa.org/packages/ai-code-badge.svg]]
[[https://stable.melpa.org/#/ai-code][https://stable.melpa.org/packages/ai-code-badge.svg]]
[[https://github.com/tninja/ai-code-interface.el/graphs/contributors][https://img.shields.io/github/contributors/tninja/ai-code-interface.el.svg]]

An Emacs interface for AI-assisted software development. Its purpose is to provide a uniform interface and experience across different AI backends, with context-aware AI coding actions and seamless integration with AI-driven agile development workflows.

| Feature | ai-code-interface.el | claude-code-ide.el | agent-shell | eca-emacs |
|-------------+-----------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------+------------------------------------------------------------------------------|
| Switching | Best when you move between Codex, Claude Code, Gemini CLI, Copilot CLI, and other supported backends without relearning commands | Claude Code only | Many ACP agents, but through one ACP shell model | ECA agents and models inside one ECA workspace |
| Context | Current file, visible buffers, function or region, saved repo context, prompt files, and @ file completion | Active file, selection, project data, diagnostics, and editor-aware mentions | Shell session context, project files, transcripts, screenshots, and clipboard image support | File and range references, repomap, cursor auto-context, and workspace-aware chats |
| Editor tools | Built-in Emacs MCP tools for project info, imenu, xref, and tree-sitter, plus MCP debug and config helpers | Deepest Claude-specific Emacs MCP bridge, including diagnostics and custom Emacs tools exposed to Claude | MCP server configuration is available, but the main focus is the ACP shell session | MCP-aware chat settings and tool-call handling, but centered on the ECA server |
| Actions | Task-first actions such as change code, implement TODO, explain, ask, refactor, and prompt workflows | Claude-first IDE prompting, mentions, and editor-aware operations | Shell-first agent interaction; coding style depends on the ACP agent you connect | Chat-first coding, context injection, and rewrite flows |
| Dev loop | Built-in TDD cycle, diff review, Magit helpers, and refactoring actions | Strong Claude-specific ediff flow with diagnostics-aware review | Strong queue, transcript, model-switching, and container-friendly shell workflow | Strong approvals, rewrite diff, timelines, and saved chats |
| Sessions | Multi-session per backend support / git worktree support | Multiple Claude sessions per project | Multiple ACP shells with queueing and transcripts | Multiple chats per workspace with agent and model switching |
| Visibility | Review-oriented commands, prompt files, and backend-aware session buffers | Session list, debug buffer, resume flow, and IDE diff controls | Queue view, transcripts, screenshots, and shell-level visibility | Tool-call approvals, expandable blocks, timeline view, and chat state indicators |
| Best fit | You want one Emacs UX across many AI coding tools | You want the deepest Claude Code integration in Emacs | You want ACP-native shells and transcript-heavy workflows | You want a polished server-backed chat workspace |

  • All four packages share the same broad goal: keep AI-assisted coding inside Emacs, preserve project context, and make long-running AI sessions practical. The difference is where they place the center of gravity. =claude-code-ide.el= goes deepest on Claude Code plus MCP-powered editor awareness. =agent-shell= goes deepest on ACP and shell-native agent operations. =eca-emacs= goes deepest on a dedicated server-backed chat UX.

  • =ai-code-interface.el= is different because its biggest advantage is one stable coding workflow across many AI backends. Unlike =claude-code-ide.el=, which is centered on Claude Code, =agent-shell=, which is centered on ACP shells, or =eca-emacs=, which is centered on the ECA server, =ai-code-interface.el= keeps the same menu, context gathering, prompt construction, refactoring actions, TDD helpers, git-oriented review flow, and Emacs MCP tools while the backend changes underneath. In practice, that means switching from Claude Code to Codex, Gemini CLI, Copilot CLI, or another backend does not force you to relearn how you work inside Emacs.

  • That also means this package is a good fit when you do not want to pick a single winner forever. You can keep using specialized backends when they are the right tool for a task, but still drive them from one Emacs-centered interface focused on code changes, review, and iterative development instead of one vendor-specific chat surface.

** Screenshot

[[./transient_menu.png]]

** New User Quick Start

If you are new to this package, follow this order:

  1. Read this section
  2. Run the setup in Installation
  3. Try one workflow in Typical Workflows Example
  4. Configure backend details in AI coding CLI backend

Minimal setup:

#+begin_src emacs-lisp
(use-package ai-code
  :config
  (ai-code-set-backend 'codex)
  ;; Optional: use a narrower transient menu on smaller frames
  ;; (setq ai-code-menu-layout 'two-columns)
  (global-set-key (kbd "C-c a") #'ai-code-menu))
#+end_src

First 60 seconds:

  • C-c a a: Start AI CLI session
  • C-c a c: Ask AI to change current function/region
  • C-c a q: Ask question only (no code change)
  • C-c a z: Jump back to AI session buffer

** Installation

Enable installation of packages from MELPA by adding an entry to =package-archives= after =(require 'package)= and before the call to =package-initialize= in your init.el or .emacs file:

#+begin_src emacs-lisp
(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
#+end_src

  • Use =M-x package-refresh-contents= or =M-x package-list-packages= to ensure that Emacs has fetched the MELPA package list
  • Use =M-x package-install= to install =ai-code= package
  • Import and configure =ai-code= in your init.el or .emacs file:

#+begin_src emacs-lisp
(use-package ai-code
  ;; :straight (:host github :repo "tninja/ai-code-interface.el") ;; install via straight; the MELPA setup above is then unnecessary
  :config
  ;; Use codex as the backend. Other options: 'claude-code, 'gemini, 'github-copilot-cli, 'opencode, 'grok, 'cursor, 'kiro, 'codebuddy, 'aider, 'eca, 'agent-shell, 'claude-code-ide, 'claude-code-el
  (ai-code-set-backend 'codex)
  ;; Optional: the default menu stays unchanged; use a narrower 2-column layout on smaller frames
  ;; (setq ai-code-menu-layout 'two-columns)
  ;; Enable the global keybinding for the main menu
  (global-set-key (kbd "C-c a") #'ai-code-menu)
  ;; Optional: use eat if you prefer; the default is vterm. This setting applies to
  ;; native CLI backends only. External backends such as agent-shell,
  ;; claude-code-ide.el, and claude-code.el have their own configuration.
  ;; (setq ai-code-backends-infra-terminal-backend 'eat)
  ;; Optional: enable @ file completion in comments and AI sessions
  (ai-code-prompt-filepath-completion-mode 1)
  ;; Optional: ask the AI to run tests after code changes, for a tighter build-test loop
  (setq ai-code-auto-test-type 'ask-me)
  ;; Optional: in AI session buffers, SPC in Evil normal state triggers the prompt-enter UI
  (with-eval-after-load 'evil (ai-code-backends-infra-evil-setup))
  ;; Optional: turn on auto-revert so AI code changes appear in buffers automatically
  (global-auto-revert-mode 1)
  (setq auto-revert-interval 1) ;; 1 second for faster updates
  ;; Optional: set up Magit integration for AI commands in Magit popups
  (with-eval-after-load 'magit
    (ai-code-magit-setup-transients)))
#+end_src

** Dependencies

*** Required Dependencies

  • Emacs 29.1 or later
  • org: Org-mode support
  • magit: Git integration
  • transient: For the menu system
  • vterm (default) or eat: one of these terminal emulators must be installed to support native AI coding CLI backends.

*** Optional Dependencies

  • helm: For an enhanced auto-completion experience (ai-code-input.el).
  • whisper: For speech-to-text input via ai-code-speech-to-text-input. Record audio, press RET to stop, then choose whether to insert the transcription into the current buffer, send it to the active AI coding session with editable initial input, or copy it to the clipboard.
  • gptel: For intelligent, AI-generated content headlines in the prompt file.
    • ai-code-task-use-gptel-filename: When non-nil, files created by ai-code-create-or-open-task-file or ai-code-create-file-or-dir get auto-generated filenames from GPTel
    • ai-code-notes-use-gptel-headline: When non-nil, notes created by ai-code-take-notes will have auto-generated headlines created by GPTel
    • ai-code-use-gptel-headline: When non-nil, prompts sent to the AI will have auto-generated headlines created by GPTel, providing better organization and readability in the prompt file
    • ai-code-use-gptel-classify-prompt: When non-nil, and ai-code-auto-test-type is non-nil, classify whether the current prompt is about code changes and should trigger a follow-up test run
  • flycheck: To enable the ai-code-flycheck-fix-errors-in-scope command.
  • yasnippet: For snippet support in the prompt file. A library of snippets is included.
    • abbrev + skeleton (built into Emacs) is also a good way to expand prompts. [[./etc/prompt_expand_with_abbrev_skeleton.el][Example abbrev to solve/iterate a LeetCode problem with TDD (requires ai-code-auto-test-type set to tdd)]], [[./examples/leetcode][example problem solved]]
  • projectile: For project root initialization.
  • helm-gtags: For tags creation and symbol navigation fallback from clickable session symbols.
  • python-pytest: For running python tests in the TDD workflow.
  • jest: For running JavaScript / TypeScript tests in the TDD workflow.
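
The GPTel-related options above can be enabled together; here is a minimal sketch using only the variables documented in this list:

#+begin_src emacs-lisp
;; Requires the optional gptel dependency.
;; Auto-generate headlines for prompts and notes, and filenames for task files.
(setq ai-code-use-gptel-headline t
      ai-code-notes-use-gptel-headline t
      ai-code-task-use-gptel-filename t)
#+end_src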

** Key Features

  • Transient-Driven Hub (C-c a): One keystroke opens a contextual transient menu that groups every capability (CLI control, code actions, agile workflows, utilities) so you never need to memorize scattered keybindings.
    • If the default wide layout does not fit your frame well, set ai-code-menu-layout to two-columns for a narrower menu with the same commands.
  • AI CLI Session Management: Start (a), resume (R), or jump back into (z) the active AI CLI buffer, instantly swap backends (s), upgrade them (u), install backend skills (S), edit backend configs (g), open backend agent file (G), and run prompts against the current file (|). It supports multiple sessions per project. The S entry is especially useful for installing shared skill packs such as [[https://github.com/obra/superpowers][obra/superpowers]].
  • Clickable Session Links: In AI session buffers, supported in-repo file references, nearby code symbols, and http / https URLs are underlined and clickable. File links such as src/foo.el:42, src/foo.el:42:7, src/foo.el:L42-L60, or src/foo.el#L42-L60 jump directly to the right location, nearby symbols such as setq-local, ai-code-session-link-enabled, or UserService.processRequest() trigger code navigation in the associated file, and web links open in your browser. Symbol navigation first tries xref, then helm-gtags when available, and finally falls back to searching in the opened file buffer. Set ai-code-session-link-enabled to nil if you want to disable this link rendering.
    [[./clickable_link_ai_session.png]]
  • Context-Aware Code Actions: The menu exposes dedicated entries for changing code (c), implementing TODOs (i), asking questions (q), explaining code (x), sending free-form commands (<SPC>), and refreshing AI context (@). Each command automatically captures the surrounding function, region, or clipboard contents (via C-u) to keep prompts precise.
  • Agile Development Workflows: Use the refactoring navigator (r), the guided TDD cycle (t), and the pull/review diff helper (v) to keep AI-assisted work aligned with agile best practices. Prompt authoring is first-class through quick access to the prompt file (p), build/test helper (b), and AI-assisted shell/file execution (!). In prompt files, send the current block with C-c C-c.
  • Productivity & Debugging Utilities: Initialize project navigation assets (.), investigate exceptions (e), auto-fix Flycheck issues in scope (f), copy or open file paths formatted for prompts (k, o), generate MCP inspector commands (m), capture session notes straight into Org (n), dictate prompts with speech-to-text (:), and toggle desktop notifications (N) to get alerted when AI responses are ready in background sessions.
  • Seamless Prompt Management: Open the prompt file via ai-code-open-prompt-file (stored under .ai.code.files/.ai.code.prompt.org by default), send regions with ai-code-prompt-send-block, and reuse prompt snippets via yasnippet to keep conversations organized.
  • Interactive Chat & Context Tools: Dedicated buffers hold long-running chats, automatically enriched with file paths, diffs, and history from Magit or Git commands for richer AI responses.
  • AI-Assisted Bash Commands: From Dired, shell, eshell, or vterm, run C-c a ! and type natural-language commands prefixed with : (e.g., :count lines of python code recursively); the tool generates the shell command for review and executes it in a compile buffer.

*** Typical Workflows Example

  • Changing Code: Position the cursor on a function or select a region of code. Press C-c a, then c (ai-code-code-change). Describe the change you want to make in the prompt. The AI will receive the context of the function or region and your instruction.
  • Implementing a TODO: Write a comment in your code, like ;; TODO: Implement caching for this function. Place your cursor on that line and press C-c a, then i (ai-code-implement-todo). The AI will generate the implementation based on the comment.
  • Asking a Question: Place your cursor within a function, press C-c a, then q (ai-code-ask-question), type your question, and press Enter. The question, along with context, will be sent to the AI.
  • Refactoring a Function: With the cursor in a function, press C-c a, then r (ai-code-refactor-book-method). Select a refactoring technique from the list, provide any required input (e.g., a new method name), and the prompt will be generated.
  • Automatically run tests after change: When ai-code-auto-test-type is non-nil, AI will automatically run tests after code changes and follow up on results.
  • One-prompt TDD with refactoring: Press C-c a, then t (ai-code-tdd-cycle) and choose 5. Red + Green + Blue (One prompt) to generate tests, implement code, run tests, and then refactor the changed code in one flow.
  • Reviewing a Pull Request: Press C-c a, then v (ai-code-pull-or-review-diff-file). Choose to generate a diff between two branches. The diff will be created in a new buffer, and you'll be prompted to start a review.
  • Multiple Sessions Support: Start additional AI coding sessions with C-c a a after launching the first one. Select the active session with C-c a z. Prompts from the commands above are sent to the selected session.
  • Speech to Text Input: Press C-c a, then : (ai-code-speech-to-text-input). Speak your prompt, press RET to stop recording, then choose whether to insert the transcription into the current buffer, send it to the active AI coding session after editing it in ai-code-read-string, or copy it to the clipboard.

*** Context Engineering

Context engineering is the deliberate practice of selecting, structuring, and delivering the right information to an AI model so the output is specific, accurate, and actionable. For AI-assisted programming, the model cannot read your whole codebase by default, so the quality of the result depends heavily on the clarity and relevance of the provided context (file paths, functions, regions, related files, and repo-level notes). Good context engineering reduces ambiguity, prevents irrelevant suggestions, and keeps changes aligned with the current code.

This package makes context engineering easy by automatically assembling precise context blocks and letting you curate additional context on demand:

  • Automatic file and window context: prompts can include the current file and other visible files (ai-code--get-context-files-string), so the AI sees related code without manual copying.
  • Function or region scoping: most actions capture the current function or active region, keeping requests focused (e.g., ai-code-code-change, ai-code-implement-todo, ai-code-ask-question).
  • Manual context curation: C-c a @ (ai-code-context-action) stores file paths, function anchors, or region ranges in a repo-scoped list, which is appended to prompts via ai-code--format-repo-context-info.
  • Optional clipboard context: prefix with C-u to append clipboard content to prompts for external snippets or logs.
  • @-triggered filepath completion in comments and AI sessions. Type @ to open a completion list of recent and visible repo files, then select a path to insert.
  • Prompt suffix guardrails: set ai-code-prompt-suffix to append persistent constraints to every prompt (when ai-code-use-prompt-suffix is non-nil). Example: (setq ai-code-prompt-suffix "Only use English in code file, but Reply in Simplified Chinese language").
  • Optional GPTel headline generation: set ai-code-use-gptel-headline to auto-generate prompt headings with GPTel.

Example (focused refactor with curated context):

  1. In a buffer, run C-c a @ to add the current function or selected region to stored repo context.
  2. Open another related file in a window so it is picked up by ai-code--get-context-files-string.
  3. Place the cursor in the target function and run C-c a c to request a change.
    The generated prompt will include the function/region scope, visible file list, and stored repo context entries, giving the AI exactly the surrounding information it needs.

**** Built-in Emacs MCP Tools

AI Code includes an Emacs MCP server with these built-in tools:

  • =project_info=
  • =imenu_list_symbols=
  • =xref_find_references=
  • =treesit_info=

***** Use It with an AI CLI

Set the backend to =claude-code=, =codex=, or =github-copilot-cli= and start a session with =C-c a a=. AI Code will:

  • start a local Emacs MCP HTTP server
  • register the current project/buffer as the MCP session context
  • inject a session-scoped MCP config into that agent session automatically

Today this auto-wiring is enabled for =claude-code=, =codex=, and =github-copilot-cli=. The adapter layer is designed so other backends can be added later.

If you disabled it before, enable it again with:

#+begin_src emacs-lisp
(setq ai-code-mcp-agent-enabled-backends '(claude-code codex github-copilot-cli))
#+end_src

***** Make Sure It Works

  1. Start =claude-code=, =codex=, or =github-copilot-cli= with =C-c a a= from the file or project you want to work on.
  2. Switch to the agent buffer and run =M-x ai-code-mcp-agent-show-buffer-status=.

If it shows =:backend=, =:session-id=, and a local =:server-url=, then Emacs has attached MCP to this session.

  3. Then ask the agent to use one of the Emacs tools, for example:
  • use =project_info= and tell me the current project info
  • use =imenu_list_symbols= for the current file

The status command only proves the Emacs side is wired.
The real proof is that the started agent returns your current project path, buffer context, or current-file symbols from those tools.

***** Add New MCP function

  1. Define an Emacs Lisp function.

#+begin_src emacs-lisp
(defun ai-code-mcp-current-buffer-name ()
  "Return the current buffer name for the active MCP session."
  (buffer-name (current-buffer)))
#+end_src

  2. Register it as an MCP tool before you start the AI agent session:

#+begin_src emacs-lisp
(require 'ai-code-mcp-server)

(ai-code-mcp-make-tool
 :function #'ai-code-mcp-current-buffer-name
 :name "current_buffer_name"
 :description "Return the current Emacs buffer name."
 :args nil)
#+end_src

  3. Start =claude-code=, =codex=, or =github-copilot-cli= with =C-c a a=.
    If the agent session is already running, restart it after registering the new tool.

  4. Ask the agent to use the tool, for example:

  • use =current_buffer_name= and tell me the current Emacs buffer name

  5. If you want to verify from Emacs first, run:

#+begin_src emacs-lisp
(ai-code-mcp-dispatch "tools/list")
#+end_src

If =current_buffer_name= appears in the tool list, Emacs has registered it.
If the started agent can call it and returns the expected buffer name, the tool is available inside the AI coding agent.

***** Use the MCP Core Directly from Elisp

If you want to use the MCP core without an AI CLI, you can still call it directly:

#+begin_src emacs-lisp
(require 'ai-code-mcp-server)
(ai-code-mcp-builtins-setup)
(ai-code-mcp-dispatch "tools/list")
#+end_src

To expose your own Emacs function as a tool, use =ai-code-mcp-make-tool=.

*** Harness Engineering Practice

Harness engineering is about building a reliable loop around the model, so the AI does not stop at /make a change/ but continues into /verify the change and react to the result/. In this package, the clearest example is the auto test loop.

Instead of manually telling the AI what to do after every edit, you can make test follow-up part of the workflow:

  • ai-code-auto-test-type: choose how code-change prompts should continue after the edit. You can ask the AI to run tests after the change, use TDD Red+Green, use Red+Green+Blue with refactoring, turn it off, or decide case by case with ask-me.
  • ai-code-tdd-cycle: run a guided TDD flow from the menu, including separate Red, Green, Blue stages or the combined one-prompt flows.
  • ai-code-build-or-test-project: run build/test explicitly from C-c a b when you want a direct verification step in the middle of the loop.
  • ai-code-prompt-suffix: add persistent project rules when needed, so repeated instructions such as response language, coding constraints, or test expectations do not have to be retyped in every prompt.
  • [[https://github.com/obra/superpowers][obra/superpowers]] pairs well with these features.
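
As a concrete illustration, a minimal harness configuration might look like this (the suffix string below is only an example; adjust it to your project):

#+begin_src emacs-lisp
;; Continue each code-change prompt with a test run; 'ask-me decides case by case.
;; Other documented options include TDD-style Red+Green flows, or nil to disable.
(setq ai-code-auto-test-type 'ask-me)

;; Persistent project rules appended to every prompt.
(setq ai-code-use-prompt-suffix t
      ai-code-prompt-suffix "Run the relevant tests after any code change and report the results.")
#+end_src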

The benefit is practical:

  • faster feedback after each AI code change
  • less context switching between prompting, testing, and follow-up fixes
  • more consistent AI behavior because verification is part of the workflow
  • easier to let the AI continue with the next step after a failed or passing test

This is why features such as ai-code-auto-test-type and ai-code-tdd-cycle fit the idea of harness engineering: they turn testing and follow-up into part of the system, not an afterthought in each prompt.

*** Desktop Notifications (Experimental)

When working with multiple AI sessions, it can be useful to receive desktop notifications when AI responses are complete. This is especially helpful when you prompt an AI and then switch to other tasks while waiting for the response.

**** Enabling Notifications

  • Notifications are disabled by default.
  • Press C-c a then N to toggle notifications on/off.
  • Alternatively, use M-x ai-code-notifications-toggle.
  • To enable notifications in your config:

#+begin_src emacs-lisp
(setq ai-code-notifications-enabled t)
(setq ai-code-notifications-show-on-response t)
#+end_src

**** How It Works

  • The package monitors terminal activity in AI session buffers.
  • When the terminal has been idle for ~5 seconds (configurable via ai-code-backends-infra-idle-delay), it's considered a completed response.
  • If the AI session buffer is not currently visible/focused, a desktop notification is sent.
  • Notifications are throttled to avoid spam (minimum 2 seconds between notifications).
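
If the ~5-second threshold is too eager or too slow for your backend, the idle delay can be tuned (the value below is just an example):

#+begin_src emacs-lisp
;; Seconds of terminal inactivity before a response is considered complete.
(setq ai-code-backends-infra-idle-delay 8)
#+end_src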

**** Platform Support

  • On Linux with D-Bus, native desktop notifications are used.
  • On other platforms, notifications appear in the Emacs minibuffer.

** AI coding CLI backend

*** Backend Configuration
This package acts as a generic interface that requires a backend AI assistant package to function. You can configure it to work with different backends.

  • Press C-c a to open the AI menu, then s to "Select Backend".
  • Pick one of the supported backends and the integration will switch immediately.
  • The selection updates the start/switch/send commands and, for CLI backends, the CLI used by ai-code-apply-prompt-on-current-file.
  • Press C-c a then S (ai-code-install-backend-skills) to install backend skills for the currently selected backend.
    • If a backend does not define its own installer, S falls back to prompting the active AI CLI to read a skills repository README and perform setup.
    • The default suggested repository is [[https://github.com/obra/superpowers][obra/superpowers]], which is a very useful starting point for adding reusable coding skills to supported CLIs.
    • A practical workflow is: press C-c a, then S, keep the default URL, and let the backend read the repository README and install the skills it describes.
    • [[https://github.com/obra/superpowers][obra/superpowers]] pairs well with this package's existing harness-oriented workflows, especially the ai-code-auto-test-type feedback loop, ai-code-tdd-cycle, and the ai-code-pull-or-review-diff-file code review flow.

Natively supported options:

It also supports external backends through customization of the ai-code-backends variable; currently it includes:

**** ECA (Editor Code Assistant) backend setup
Install the [[https://eca.dev/][eca]] Emacs package, which provides the =eca=, =eca-session=, =eca-chat-open=,
=eca-chat-send-prompt=, and =eca-chat--get-last-buffer= functions.
Then select the backend in your config:
#+begin_src emacs-lisp
(ai-code-set-backend 'eca)
#+end_src
Note: =ai-code-apply-prompt-on-current-file= is CLI-pipe based and is not supported
when =eca= is the active backend.

**** agent-shell backend setup
Install [[https://github.com/xenodium/agent-shell][agent-shell]] and its dependency [[https://github.com/xenodium/acp.el][acp.el]], then configure one of the ACP agent providers in agent-shell (for example Codex, Gemini, Opencode, etc.).
Select agent-shell via ai-code-select-backend (or (ai-code-set-backend 'agent-shell) in your config).
Note: ai-code-apply-prompt-on-current-file is CLI-pipe based and is not supported when agent-shell is the active backend.

**** Grok CLI setup
Install [[https://grokcli.io/][grok-cli]] and ensure the grok executable is on your PATH.
Customize ai-code-grok-cli-program or ai-code-grok-cli-program-switches if you want to
point at a different binary or pass additional flags (for example,
selecting a profile). After that, select the backend through
ai-code-select-backend or bind a helper in your config.
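
A sketch of what such a configuration might look like; the binary path and switches below are placeholders, not required values:

#+begin_src emacs-lisp
;; Hypothetical example values; adjust to your installation.
(setq ai-code-grok-cli-program "/usr/local/bin/grok")
(setq ai-code-grok-cli-program-switches '("--profile" "work")) ;; e.g., select a profile
(ai-code-set-backend 'grok)
#+end_src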

**** CodeBuddy Code CLI setup
Install CodeBuddy Code CLI via npm: =npm install -g @tencent-ai/codebuddy-code=, or via Homebrew: =brew install Tencent-CodeBuddy/tap/codebuddy-code=.
Ensure the codebuddy executable is on your PATH.
Customize ai-code-codebuddy-cli-program or ai-code-codebuddy-cli-program-switches if you want to
point at a different binary or pass additional flags. After that, select the backend through
ai-code-select-backend or bind a helper in your config.
To resume previous conversations, use the =-c= flag (handled automatically by the resume command).
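
For example, the steps above condensed into a config snippet (the program value is a placeholder; 'codebuddy is the backend symbol from the installation snippet earlier):

#+begin_src emacs-lisp
;; Hypothetical example; adjust the program path and switches as needed.
(setq ai-code-codebuddy-cli-program "codebuddy")
(ai-code-set-backend 'codebuddy)
#+end_src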

You can add other backends by customizing the ai-code-backends variable.

**** Add a new AI coding CLI backend

  • [[https://github.com/tninja/ai-code-interface.el/pull/2][This PR]] adds github-copilot-cli. It can be an example to add basic support for other AI coding CLI.

  • Open an issue and post information about the new AI coding CLI backend (e.g., Cursor CLI), at least providing the command-line name. You can also include the version-upgrade command, how to resume sessions, where the configuration files are located, and so on. We can then ask GitHub Copilot to add support based on the issue.

** [[https://github.com/tninja/aider.el/blob/main/appendix.org#be-careful-about-ai-generated-code][Why Agile development with AI?]]

** FAQ

*** Q: Using Opencode as a backend, there can be performance issues with eat.el in Doom Emacs. [[https://github.com/tninja/ai-code-interface.el/issues/9#issuecomment-3543277108][Issue]]

  • A: Per the linked issue, adjusting the Opencode configuration can help, for example:

#+begin_src json
{
"$schema": "https://opencode.ai/config.json",
"theme": "system"
}
#+end_src

*** Q: Gemini CLI response is relatively slow, how to improve?

  • A: Use the gemini-3-flash model; it is fast, has good quality (able to solve LeetCode hard problems), and is free. You can set the following in your Emacs config:

#+begin_src elisp
(setq ai-code-gemini-cli-program-switches '("--model" "gemini-3-flash-preview"))
#+end_src

*** Q: Codex CLI uses my API key instead of my ChatGPT Plus subscription and costs money. How do I fix that?

  • A: Run =codex login= to sign in with the OpenAI account that has the ChatGPT Plus subscription. After that, Codex CLI will use your ChatGPT Plus subscription automatically. To confirm, run /status inside the Codex CLI buffer.

*** Q: When test-after-change / TDD mode is enabled, AI keeps asking for tool-use approval. How can I make this smoother?

  • A: Enable auto-approval for your active AI coding CLI. For example, in Codex CLI, you can enable the following flag.

#+begin_src elisp
(setq ai-code-codex-cli-program-switches '("--full-auto"))
#+end_src

** AI Assisted Programming related books

The following books introduce how to use AI to assist programming and may be helpful to users of this package.

** Related Emacs packages

** License

Apache-2.0 License

** Contributing

Contributions, issue reports, and improvement suggestions are welcome! Please open an issue or submit a pull request on the project's GitHub repository.
