mcp-aoai-web-browsing

mcp
Security Audit
Fail
Health Pass
  • License — License: MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 31 GitHub stars
Code Fail
  • eval() — Dynamic code execution via eval() in server/browser_navigator_server.py
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool is a minimal server and client implementation that bridges the Model Context Protocol (MCP) with Azure OpenAI. It allows AI models to control a local web browser using Microsoft's Playwright framework to automate web browsing and testing tasks.

Security Assessment
The overall risk is Medium. The tool makes external network requests to the Azure OpenAI API and actively controls a web browser via Playwright. While the repository does not request dangerous local system permissions and does not contain hardcoded secrets (it correctly relies on a `.env` file for API keys), there is a significant code-level flag. The automated scan detected the use of the `eval()` function in the browser controller code. Relying on dynamic code execution can open the door to arbitrary code injection vulnerabilities, especially when handling external inputs or web data. Additionally, because it drives a local browser, there is an inherent risk of sensitive data exposure if the tool is directed to navigate to unsecured or malicious websites.
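The risk from dynamic code execution can be made concrete with a minimal sketch (the inputs here are hypothetical, not taken from the repository): `ast.literal_eval` accepts only Python literals, whereas `eval()` would execute arbitrary expressions embedded in external data.

```python
import ast

# A string that looks like harmless data...
safe_input = "{'url': 'https://example.com', 'timeout': 30000}"
# ...and one that is an executable expression (arbitrary code if eval()'d).
hostile_input = "__import__('os').getcwd()"

# ast.literal_eval parses literals only and rejects anything executable.
parsed = ast.literal_eval(safe_input)
print(parsed["timeout"])  # 30000

try:
    ast.literal_eval(hostile_input)
except ValueError:
    print("rejected")  # literal_eval refuses non-literal expressions
```

Replacing `eval()` with a restricted parser like this is one common mitigation when the input is expected to be plain data.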

Quality Assessment
The project is actively maintained, with its most recent push occurring today. It uses an industry-standard permissive license (MIT) and has clear documentation. However, community trust is currently quite low. With only 31 GitHub stars, the project has not yet been widely adopted or thoroughly vetted by a large user base.

Verdict
Use with caution: While the project is active and well-documented, the presence of `eval()` for dynamic execution and the low community adoption introduce potential security and stability concerns.
SUMMARY

A minimal Model Context Protocol 🖥️ server/client 🧑‍💻 with Azure OpenAI and 🌐 web browser control via Playwright.

README.md

MCP Server & Client implementation for using Azure OpenAI

  • A minimal server/client implementation utilizing the Model Context Protocol (MCP) and Azure OpenAI.

    1. The MCP server is built with FastMCP.
    2. Playwright is an open-source, end-to-end testing framework by Microsoft for testing modern web applications.
    3. The MCP response about tools will be converted to the OpenAI function calling format.
    4. The bridge that converts MCP server responses into the OpenAI function calling format is a customised version of the MCP-LLM Bridge implementation.
    5. To ensure a stable connection, the server object is passed directly into the bridge.
    6. The client_bridge supports both in-process and external (stdio) MCP server connections, enabling reuse by different clients (e.g., Claude Code, VS Code, custom scripts).
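The conversion in step 3 can be sketched as follows. The MCP field names (`name`, `description`, `inputSchema`) match the tool output shown later in this README; the wrapper shape follows the OpenAI function-calling format. This is an illustrative sketch, not the repository's actual bridge code.

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Wrap an MCP tool description in the OpenAI function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, so it can pass through
            # as the function's parameters.
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

mcp_tool = {
    "name": "playwright_navigate",
    "description": "Navigate to a URL.",
    "inputSchema": {
        "type": "object",
        "properties": {"url": {"title": "Url", "type": "string"}},
        "required": ["url"],
    },
}
print(mcp_tool_to_openai(mcp_tool)["function"]["name"])  # playwright_navigate
```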

Model Context Protocol (MCP)

MCP (Model Context Protocol) is an open protocol that enables secure, controlled interactions between AI applications and local or remote resources.

Official Repositories

Community Resources

Related Projects

  • FastMCP: The fast, Pythonic way to build MCP servers.
  • Chat MCP: MCP client
  • MCP-LLM Bridge: MCP implementation that enables communication between MCP servers and OpenAI-compatible LLMs

MCP Playwright

Configuration

As of the development phase in December 2024, the Python project should be initialized with 'uv'. Other dependency-management tools, such as 'pip' and 'poetry', were not yet fully supported by the MCP CLI.

  1. Rename .env.template to .env, then fill in the values in .env for Azure OpenAI:

    AZURE_OPEN_AI_ENDPOINT=
    AZURE_OPEN_AI_API_KEY=
    AZURE_OPEN_AI_DEPLOYMENT_MODEL=
    AZURE_OPEN_AI_API_VERSION=
    
  2. Install uv for python library management

    pip install uv
    uv sync
    
  3. Execute python chatgui.py

    • The sample screenshot shows the client launching a browser and navigating to the requested URL.
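Step 1's `.env` file can be loaded without extra dependencies using a few lines of standard-library code. The `load_env` helper below is illustrative (the project itself may use a library such as python-dotenv) and handles only simple `KEY=value` lines:

```python
import os

def load_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Placeholder values, matching the variable names from the template above.
sample = """
AZURE_OPEN_AI_ENDPOINT=https://example.openai.azure.com/
AZURE_OPEN_AI_API_KEY=replace-me
"""
values = load_env(sample)
os.environ.update(values)  # make the values visible to the SDK
print(values["AZURE_OPEN_AI_ENDPOINT"])
```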

Using with External Clients

The MCP server can be used by external clients (Claude Desktop, VS Code, Claude Code, etc.) via mcp.json configuration.

Claude Desktop / Claude Code

Add to your claude_desktop_config.json (Claude Desktop) or .claude/mcp.json (Claude Code):

{
  "mcpServers": {
    "browser-navigator": {
      "command": "uv",
      "args": ["run", "fastmcp", "run", "./server/browser_navigator_server.py:app"],
      "cwd": "/path/to/mcp-aoai-web-browsing",
      "env": {
        "AZURE_OPEN_AI_ENDPOINT": "...",
        "AZURE_OPEN_AI_API_KEY": "...",
        "AZURE_OPEN_AI_DEPLOYMENT_MODEL": "...",
        "AZURE_OPEN_AI_API_VERSION": "..."
      }
    }
  }
}
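A client consuming a config shaped like the one above essentially reads `mcpServers` and spawns each server's command. A minimal sketch of that parsing step (the config text is the example above inlined, not a file read from disk):

```python
import json

config_text = """
{
  "mcpServers": {
    "browser-navigator": {
      "command": "uv",
      "args": ["run", "fastmcp", "run", "./server/browser_navigator_server.py:app"],
      "env": {"AZURE_OPEN_AI_ENDPOINT": "..."}
    }
  }
}
"""

config = json.loads(config_text)
for name, server in config["mcpServers"].items():
    # The command line a client would spawn for this server.
    cmdline = [server["command"], *server.get("args", [])]
    print(name, "->", " ".join(cmdline))
```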

VS Code

Add to .vscode/mcp.json in your workspace:

{
  "servers": {
    "browser-navigator": {
      "command": "uv",
      "args": ["run", "fastmcp", "run", "./server/browser_navigator_server.py:app"],
      "cwd": "${workspaceFolder}",
      "env": {
        "AZURE_OPEN_AI_ENDPOINT": "...",
        "AZURE_OPEN_AI_API_KEY": "...",
        "AZURE_OPEN_AI_DEPLOYMENT_MODEL": "...",
        "AZURE_OPEN_AI_API_VERSION": "..."
      }
    }
  }
}

Using the Bridge Programmatically (stdio)

The client_bridge also supports connecting to external MCP servers via stdio from Python:

import asyncio

from client_bridge import BridgeConfig, MCPServerConfig, BridgeManager
from client_bridge.llm_config import get_default_llm_config

config = BridgeConfig(
    server_config=MCPServerConfig(
        command="uv",
        args=["run", "fastmcp", "run", "./server/browser_navigator_server.py:app"],
    ),
    llm_config=get_default_llm_config(),
    system_prompt="You are a helpful assistant.",
)

async def main():
    # `async with` must run inside a coroutine.
    async with BridgeManager(config) as bridge:
        response = await bridge.process_message("Navigate to https://example.com")
        print(response)

asyncio.run(main())

Using Standard OpenAI (non-Azure)

from client_bridge.llm_config import get_openai_llm_config

config = BridgeConfig(
    mcp=server,
    llm_config=get_openai_llm_config(),
)

Set environment variables:

OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-...

Direct Tool Execution

For clients that manage their own LLM loop, the bridge exposes tool metadata and direct execution:

async with BridgeManager(config) as bridge:
    tools = bridge.get_tools()  # OpenAI function calling format
    result = await bridge.execute_tool("playwright_navigate", {"url": "https://example.com"})
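The pattern behind `get_tools()` / `execute_tool()` can be sketched without the bridge itself: the client hands tool metadata to the LLM, receives a tool call, and dispatches it by name. The registry and tool body below are illustrative stand-ins, not the repository's code:

```python
import asyncio

# Illustrative tool registry; the real bridge dispatches to MCP tools.
async def navigate(url: str) -> str:
    return f"navigated to {url}"

TOOLS = {"playwright_navigate": navigate}

async def execute_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call the way a client-managed LLM loop would."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return await TOOLS[name](**arguments)

async def main():
    # Pretend the LLM returned this call in function-calling format.
    result = await execute_tool("playwright_navigate", {"url": "https://example.com"})
    print(result)  # navigated to https://example.com

asyncio.run(main())
```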

w.r.t. 'stdio'

stdio is a transport layer (raw data flow), while JSON-RPC is an application protocol (structured communication). They are distinct but often used interchangeably, e.g., "JSON-RPC over stdio" in protocols.
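The distinction can be made concrete: JSON-RPC defines the message structure, while stdio merely carries the serialized bytes. The sketch below uses `tools/call`, a real MCP method name; the framing comment reflects MCP's stdio transport, which sends one JSON message per line:

```python
import json

# JSON-RPC 2.0 defines the message structure (the application protocol)...
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "playwright_navigate", "arguments": {"url": "https://example.com"}},
}

# ...while stdio just carries the serialized bytes, one message per line.
wire = json.dumps(request) + "\n"

# The receiving side reads a line and decodes it back into a structured message.
decoded = json.loads(wire)
print(decoded["method"])  # tools/call
```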

Tool description

@self.mcp.tool()
async def playwright_navigate(url: str, timeout=30000, wait_until="load"):
    """Navigate to a URL."""
    # The docstring above provides the tool description, which may be used
    # in a mechanism similar to function calling in LLMs.

# Output (truncated)
Tool(name='playwright_navigate', description='Navigate to a URL.', inputSchema={'properties': {'url': {'title': 'Url', 'type': 'string'}, 'timeout': {'default': 30000, 'title': 'timeout', 'type': 'string'}
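How a framework like FastMCP might derive that description can be sketched with standard introspection. This is a simplified illustration, not FastMCP's actual code; `describe_tool` and `JSON_TYPES` are names invented here:

```python
import inspect
import typing

async def playwright_navigate(url: str, timeout: int = 30000, wait_until: str = "load"):
    """Navigate to a URL."""

JSON_TYPES = {str: "string", int: "integer"}

def describe_tool(fn) -> dict:
    """Build a tool description from the function's name, docstring, and signature."""
    hints = typing.get_type_hints(fn)
    properties = {}
    for name, param in inspect.signature(fn).parameters.items():
        prop = {"title": name.capitalize(), "type": JSON_TYPES.get(hints.get(name), "string")}
        if param.default is not inspect.Parameter.empty:
            prop["default"] = param.default
        properties[name] = prop
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # the docstring becomes the description
        "inputSchema": {"type": "object", "properties": properties},
    }

tool = describe_tool(playwright_navigate)
print(tool["description"])  # Navigate to a URL.
```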

Tip: uv

uv run: Run a command or script.
uv venv: Create a new virtual environment (default: '.venv').
uv add: Add a dependency to the project.
uv remove: Remove a dependency from the project.
uv sync: Sync (install) the project's dependencies into the environment.

Tip

  • taskkill command for python.exe
taskkill /IM python.exe /F
  • VS Code Python Debugger: Debugging with launch.json starts the debugger using the configuration from .vscode/launch.json.
