ai-agent-playwright-typescript-template
Health: Passed
- License — License: Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 13 GitHub stars
Code: Warning
- network request — Outbound network request in package.json
- process.env — Environment variable access in playwright.config.ts
- process.env — Environment variable access in src/pages/api/user-page.ts
Permissions: Passed
- Permissions — No dangerous permissions requested
This project is a starter template for teams looking to build scalable web test automation. It leverages Playwright and TypeScript to provide a structured testing framework that includes environment management, CI/CD patterns, and extensibility for AI agents.
Security Assessment
The overall security risk is Low. The tool does not request dangerous permissions or attempt to execute hidden shell commands, and no hardcoded secrets were found. It does make outbound network requests, which is standard behavior for a browser automation and testing tool. It also accesses environment variables to manage configurations like API routes. This is a safe practice for handling credentials, provided you populate your environment files securely and never commit them to version control.
Quality Assessment
The project has a clear permissive license (Apache-2.0), making it entirely safe for commercial and open-source use. It is actively maintained, with updates pushed as recently as today. However, being a relatively new or niche template, it currently has a low community trust footprint (13 stars). The absence of a README file is a minor drawback, as it may require developers to spend extra time navigating the codebase to understand the setup process.
Verdict
Safe to use.
AI Agent Playwright + TypeScript Automation Framework Template
Production-ready Playwright test automation starter for teams and startups: strict TypeScript, Page Object Model, environment management, interactive runner, custom HTML reporting (with optional JIRA integration), quality tooling, CI/CD patterns, and agentic (MCP-ready) extensibility.
Playwright Docs · TypeScript · ESLint · Prettier · Husky · Research articles
Table of Contents
- Why This Framework
- Features Overview
- Tech Stack
- Prerequisites
- Quick Start
- Project Structure
- Environments & Configuration
- Running Tests (Scripts)
- Interactive Custom Test Runner
- Tagging Strategy
- Page Object Model (POM)
- Configuration Hub (configuration.ts)
- Custom HTML Reporting
- Logging
- Code Quality: Prettier / ESLint / Husky
- TypeScript Configuration
- CI/CD Jenkins Pipeline
- Writing & Extending Tests
- AI Agent & Playwright MCP Integration
- Troubleshooting
- Security & Secrets
- License
- Contributor Details
Why This Framework
Instead of spending days wiring up Playwright from scratch, this template gives you:
- Opinionated yet flexible structure - Follows POM design pattern
- Unified configuration & environment variable loading
- A powerful interactive test runner that composes Playwright commands for you
- Rich custom HTML report with JIRA bug creation links & embedded artifacts
- Tag‑driven selective execution (regression, smoke, customer, internal)
- First‑class logging, helper utilities, and mock data
- CI pipeline (Jenkins) example using official Playwright docker image
Features Overview
| Area | Capability |
|---|---|
| Test Types | UI, API (E2E slot reserved for future) |
| Architecture | Page Object Model for UI/API pages |
| Environment Handling | .env per environment via dotenv + loadEnv() |
| Single Config Hub | utils/configuration.ts serves as the central config hub that defines environments, browsers, test types, tags, run modes, and JIRA constants to enable custom test execution and seamless JIRA–Jenkins integration. |
| Tagging | Regression, Smoke, Customer, Internal (via TAGS enum) |
| Custom Runner | Interactive CLI: choose env, browser(s), test type(s), tags, mode (headed / debug / ui) to enable dynamic test execution through interactive prompts. |
| Reporting | Playwright built‑in HTML + Custom consolidated HTML (donut chart, steps, JIRA integration) |
| Jenkins dashboard | Execution-details dashboard, linked to the Playwright report, with a Create Bug option that opens a JIRA bug in the configured project, auto-populated with details and links to screenshots, videos, and traces |
| JIRA Hooks | One‑click “Create Bug” buttons (IDs configurable in configuration.ts) for Jenkins dashboard |
| Logging | File + console logger (Logger class) writes to logs/automation.log |
| CI | Jenkins pipeline with Dockerized Playwright execution & HTML publish + Github actions for code-quality check and running tests |
| Code Quality | Prettier, ESLint, TypeScript strict, Husky + lint‑staged on commit |
| Trace Artifacts | Screenshots, videos & traces retained on failure |
| Agentic Ready | Integrated Playwright MCP and Playwright agents |
Tech Stack
- Playwright Test (@playwright/test)
- TypeScript (strict mode)
- Node.js (LTS recommended)
- Dotenv for environment variable loading
- Inquirer for interactive CLI test runner
- Prettier + ESLint + Husky + lint‑staged for quality gates
- Jenkins (example pipeline) / HTML publisher plugin
Prerequisites
Before you start, ensure the following:
- Install Visual Studio Code
- Install Git
- Install Node.js (LTS; download from https://nodejs.org)
- Verify:

```shell
git --version
node -v
npm -v
```
Optional: VS Code + Playwright extension.
Quick Start
Windows PowerShell examples (powershell.exe):
- Clone:

```shell
git clone https://github.com/twinklejoshi/ai-agent-playwright-typescript-template.git
cd ai-agent-playwright-typescript-template
```

- Install & provision browsers:

```shell
npm run setup
```

- (Optional) Create environment files (see below), then run tests:

```shell
npm run test        # All tests headless, default env=example
npm run test:local  # Explicit local env
```

- Launch the interactive runner:

```shell
npm run test:custom
```

- View the Playwright report after a run:

```shell
npm run test:report:playwright
```

- View the custom report:

```shell
npm run test:report:custom
```
Project Structure
├── eslint.config.mjs
├── Jenkinsfile
├── package.json
├── playwright.config.ts
├── tsconfig.json
├── environments/
│ ├── local.env (add your own)
│ ├── dev.env (add your own)
│ ├── qa.env (add your own)
│ └── example.env (sample)
├── utils/ # Cross-cutting utilities outside src
│ ├── configuration.ts # Central config file for all config (TAGS, browsers, Jira, etc.) in the project
│ ├── custom-reporter.ts # Custom HTML report generator - generate a simple report and Jenkins dashboard
│ ├── env-loader.ts # Environment loading logic
│ └── run-custom-tests.ts # CLI interactive runner
└── src/
├── pages/
│ ├── ui/ # UI POM classes
│ └── api/ # API abstraction classes
├── shared/
│ ├── mock-data/ # Test data (e.g., todo items, user prototypes)
│ ├── types/ # Reusable TS types
│ └── utils/ # Helpers (logger, local storage checks)
└── tests/
├── ui/ # UI specs & fixtures
└── api/ # API specs
└── e2e/ # (Create for end-to-end flows)
Note: Imports like @utils/configuration reference the root utils/. If you add more aliases, update tsconfig.json paths accordingly (see TypeScript Configuration).
Environments & Configuration
Located under environments/. Create one file per target: local.env, dev.env, qa.env (an example.env is provided as a fallback reference). Loader implementation (utils/env-loader.ts):
export const loadEnv = (env: string = 'example') => { /* resolves environments/<env>.env via dotenv */ };
playwright.config.ts calls:
loadEnv(process.env.NODE_ENV || 'example');
Usage pattern:
npx cross-env NODE_ENV=local playwright test --grep "@smoke"
Sample local.env:
environment=local
BASE_URL=https://local.example.com
USERNAME=test_user
PASSWORD=test_pass
Access with process.env.BASE_URL.
If a specified file is missing, adjust the default parameter or create the file to avoid silent misconfiguration.
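To make the loader's behavior concrete, here is a minimal sketch of it without the dotenv dependency (the real utils/env-loader.ts uses dotenv; this approximation is illustrative only):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Minimal stand-in for utils/env-loader.ts: resolves environments/<env>.env
// and copies KEY=value pairs into process.env (existing values win, as with dotenv).
export const loadEnv = (env: string = "example"): void => {
  const file = path.resolve("environments", `${env}.env`);
  if (!fs.existsSync(file)) {
    throw new Error(`Missing env file: ${file} - create it or adjust the default`);
  }
  for (const line of fs.readFileSync(file, "utf8").split(/\r?\n/)) {
    const match = line.match(/^\s*([\w.]+)\s*=\s*(.*?)\s*$/);
    if (match && process.env[match[1]] === undefined) {
      process.env[match[1]] = match[2];
    }
  }
};
```

Failing loudly on a missing file (rather than silently falling back) surfaces the misconfiguration described above.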
Adding New Variables
- Add the variable to each <env>.env file.
- Reference it anywhere via process.env.MY_VAR.
- For the CI Jenkins pipeline, set/inject it using credentials bindings (see Jenkinsfile).
Running Tests (Scripts)
| Script | Purpose |
|---|---|
| npm run test | All tests headless (default env) |
| npm run test:local / test:dev | Force specific environment |
| npm run test:headed | Run in headed browsers |
| npm run test:ui | Launch Playwright UI runner |
| npm run test:debug | Debug mode (slow-mo inspector) |
| npm run test:trace | Enables trace collection |
| npm run test:custom | Interactive multi-select runner |
| npm run test:report:playwright | Open last Playwright HTML report |
| npm run test:report:custom | Open custom consolidated report |
Artifacts (screenshots, videos, traces) retained only on test failure (retain-on-failure).
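The retention behavior comes from the use block in playwright.config.ts; a typical setup (illustrative, verify against your actual config) looks like:

```typescript
// Excerpt from playwright.config.ts (illustrative)
use: {
  screenshot: "only-on-failure",
  video: "retain-on-failure",
  trace: "retain-on-failure",
},
```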
Interactive Custom Test Runner
npm run test:custom
The test:custom command runs run-custom-tests.ts, an interactive tool for flexible test execution. It lets you select the environment, browser, test type, test group, and test mode, then dynamically constructs and runs the appropriate Playwright command, so custom runs require no manual configuration changes.
Usage Example
To run the custom test flow:
- Execute the script: npm run test:custom
- Follow the prompts to select your environment, browser, test type, test group, and test mode.
- The script will execute the selected tests and display the output.
For more details on how the script works, refer to Custom Test Script: run-custom-tests.ts.
Custom Test Script: run-custom-tests.ts
The run-custom-tests.ts script enables dynamic test execution through interactive prompts. It asks you to select:

- Environment ("Select an environment"): Local, Dev, or QA
- Browser ("Select the browser"): Chromium, Firefox, or WebKit
- Test Type ("Specify the type of tests"): API, UI, or E2E
- Test Group ("Filter tests by tag"): Regression or Smoke
- Test Mode: Headless or UI
Sample Command Generated by Script
If the user selects:

- Environment: QA
- Browser: Chromium
- Test Type: UI
- Test Group: Regression
- Test Mode: Default (headless)

the generated command will look like:

npx cross-env NODE_ENV=qa playwright test --project=chromium .src/tests/ui --grep "@regression"
The script dynamically builds this command, ensuring flexible and efficient test execution.
Tagging Strategy
Tags are defined in utils/configuration.ts enum TAGS:
export enum TAGS {
REGRESSION = "@regression",
SMOKE = "@smoke",
CUSTOMER = "@customer",
INTERNAL = "@internal",
}
Apply tags per test via metadata:
test("create todo", { tag: [TAGS.REGRESSION, TAGS.CUSTOMER] }, async ({ ... }) => { /* ... */ });
Filter execution using --grep "@regression" or combined with OR using pipes from custom runner.
Add new tags: extend TAGS, then insert into TEST_GROUPS for interactive selection.
Page Object Model (POM)
The Page Object Model (POM) is utilized for organizing UI, API, and end-to-end test files. Each application page or endpoint is represented by a class or module, enabling a clean separation of concerns and improving maintainability.
Core Components:
- Pages: Represents the application's UI pages or API endpoints. Contains all related elements and actions.
- Tests: Contains test scripts that validate functionality using methods from the pages.
- Utils: Provides shared helpers, mock data, constants, and utilities.
Detailed Folder Descriptions
1. shared Folder

- Purpose: Stores shared resources and logic used across all projects.
- Structure:
  - mock-data: Test data used to validate functionality. Example:

    ```typescript
    export const projectMockData: Project = {
      name: "New Project - Test 1",
      description: "New Project - Description",
      type: "Default",
      group: "new_group",
      coordinates: [],
    };
    ```

  - types: Reusable TypeScript types. Example:

    ```typescript
    export type Project = {
      name: string;
      description: string;
      type: string;
      group: string;
      coordinates: Coordinates[];
    };
    ```

  - utils: Custom logger and global helper functions (e.g., data formatters, mock generators, API utils). custom-logger.ts implements the logger that records the steps taken in each test.
2. Pages:

- ui: Classes that model individual pages/components, with methods to interact with page elements (e.g., clicking buttons, entering text, validating UI elements). Example:

  ```typescript
  class LoginPage {
    constructor(private page: Page) {}
    async enterUsername(username: string) {
      await this.page.locator("#username").fill(username);
    }
    async enterPassword(password: string) {
      await this.page.locator("#password").fill(password);
    }
    async clickLogin() {
      await this.page.locator("#loginBtn").click();
    }
  }
  ```

- api: Manages API endpoint interactions with reusable methods. Example:

  ```typescript
  class UserAPI {
    getUser(userId) {
      /* API call logic */
    }
    createUser(data) {
      /* API call logic */
    }
  }
  ```
3. Tests:

- ui: Test scripts for UI components and interactions. Example:

  ```typescript
  test("login form validation", async () => {
    await loginPage.enterUsername("user");
    await loginPage.enterPassword("");
    await loginPage.clickLogin();
    expect(await loginPage.errorText).toBe("Password is required");
  });
  ```

- e2e: End-to-end scenarios that validate workflows. Example:

  ```typescript
  test("create project", async () => {
    await loginPage.login("user", "password");
    await projectPage.createProject({ name: "New Project", type: "default" });
  });
  ```

- api: Validates API responses, status codes, and workflows. Example:

  ```typescript
  test("should fetch user details", async () => {
    const user = await userApi.getUser(1);
    expect(user.name).toBe("John Doe");
  });
  ```
Details of Pages and Tests
What Pages Will Include:
- Web Elements: Locators for UI elements or endpoints for APIs. Example (for UI): this.loginButton = page.locator('#loginBtn');
- Actions: Methods for interacting with the elements (e.g., clickLogin(), enterUsername()).
- Reusable Functions: Methods for common tasks like navigation or API requests.
What Tests Will Include:
- Scenario Definitions: Scripts to validate specific functionalities or workflows. Example: "Verify that the user can log in successfully."
- Assertions: Checks to validate expected outcomes. Example: expect(page.url()).toBe('https://example.com/dashboard');
- Setup and Teardown: Initialization and cleanup code that prepares the test environment. This can be moved to a fixtures folder: fixtures encapsulate setup/teardown, are reusable between test files, and help with grouping.
Configuration Hub (configuration.ts)
Purpose
utils/configuration.ts is the canonical source for selectable dimensions of a test run (environment, browser, test type, grouping/tag, execution mode) and external integration constants (JIRA). Centralization avoids script divergence and enables dynamic CLI building.
Exposed Structures
| Constant | Shape | Usage |
|---|---|---|
| TAGS | enum | Standard tag values used directly in test metadata (tag: field). |
| ENVIRONMENTS | Array<{ name; value }> | CLI prompt options, mapped to NODE_ENV. |
| BROWSERS | Array<{ name; value }> | Translated to repeated --project=<browser> flags. |
| TEST_TYPES | Array<{ name; value }> | Builds path segments .src/tests/<type> for selective directory runs. |
| TEST_GROUPS | Array<{ name; value }> | Values come from TAGS (used to assemble --grep). |
| MODES | Array<{ name; value }> | Appended to the final command (empty string = default headless). |
| JIRA_* constants | Numeric/string placeholders | Consumed by the custom reporter to construct create-issue URLs. |
How the Runner Uses It
In run-custom-tests.ts answers from inquirer map directly:
const browserScriptParam = answers.selectedBrowser.map(b => `--project=${b}`).join(' ');
const testTypeParams = answers.selectedTestType.map(t => `.src/tests/${t}`).join(' ');
const testGroupParam = answers.selectedTestGroup.join('|');
With these pieces the final command is assembled (mode appended last). This makes adding a new browser or tag a one-line change in configuration.ts.
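Putting those pieces together, the assembly step can be sketched as follows (the Answers shape and buildCommand name are illustrative; the real script's identifiers may differ):

```typescript
// Illustrative sketch of the command assembly in run-custom-tests.ts.
type Answers = {
  selectedEnv: string;
  selectedBrowser: string[];
  selectedTestType: string[];
  selectedTestGroup: string[];
  selectedMode: string; // "" = default headless; e.g. "--headed", "--debug", "--ui"
};

export const buildCommand = (a: Answers): string => {
  const browsers = a.selectedBrowser.map((b) => `--project=${b}`).join(" ");
  const paths = a.selectedTestType.map((t) => `.src/tests/${t}`).join(" ");
  const grep = a.selectedTestGroup.length > 0 ? `--grep "${a.selectedTestGroup.join("|")}"` : "";
  // Collapse doubled spaces left by empty segments, then trim.
  return `npx cross-env NODE_ENV=${a.selectedEnv} playwright test ${browsers} ${paths} ${grep} ${a.selectedMode}`
    .replace(/\s+/g, " ")
    .trim();
};
```

Selecting QA / Chromium / UI / Regression with the default mode reproduces the sample command shown in the runner section.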
Adding a New Dimension
- Add a new enum/array entry (e.g., TAGS.PERFORMANCE = '@performance').
- Extend TEST_GROUPS with a { name: 'Performance', value: TAGS.PERFORMANCE } entry.
- Use the tag in test titles or metadata.
- Rerun npm run test:custom; the new option appears automatically.
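As a concrete sketch, adding a Performance dimension would touch configuration.ts roughly like this (the existing entries mirror the enum shown earlier; the TEST_GROUPS shape follows the table above):

```typescript
// Sketch of configuration.ts after adding a Performance dimension.
export enum TAGS {
  REGRESSION = "@regression",
  SMOKE = "@smoke",
  CUSTOMER = "@customer",
  INTERNAL = "@internal",
  PERFORMANCE = "@performance", // new tag
}

// The runner builds its prompt choices from TEST_GROUPS,
// so the new entry shows up in `npm run test:custom` automatically.
export const TEST_GROUPS = [
  { name: "Regression", value: TAGS.REGRESSION },
  { name: "Smoke", value: TAGS.SMOKE },
  { name: "Performance", value: TAGS.PERFORMANCE }, // new group
];
```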
JIRA Integration Details
Reporter reads IDs/base URL to construct CreateIssueDetails link parameters. After providing real IDs:
| Variable | Description | Example |
|---|---|---|
| JIRA_PROJECT_ID | Project numeric ID | 10201 |
| JIRA_PROJECT_ISSUE_TYPE_ID | Issue type ID (Bug, Task) | 10004 |
| JIRA_API_BASE_URL | Base instance URL | https://example.atlassian.net |
Failure rows display a "Create Bug" button which encodes test metadata (steps, attachments paths, error) into the generated link.
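A simplified version of that link construction (the reporter's actual field encoding may differ; constant values here are the placeholders from the table above):

```typescript
// Sketch: build a pre-filled JIRA CreateIssueDetails link from test metadata.
const JIRA_API_BASE_URL = "https://example.atlassian.net";
const JIRA_PROJECT_ID = "10201";
const JIRA_PROJECT_ISSUE_TYPE_ID = "10004";

export const createBugLink = (summary: string, description: string): string => {
  const params = new URLSearchParams({
    pid: JIRA_PROJECT_ID,
    issuetype: JIRA_PROJECT_ISSUE_TYPE_ID,
    summary,
    description, // steps, artifact paths, and the error message concatenated
  });
  return `${JIRA_API_BASE_URL}/secure/CreateIssueDetails!init.jspa?${params}`;
};
```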
Environment + Path Alias Context
Imports like import { HomePage } from 'pages/ui'; rely on tsconfig.json path mapping ("*": ["./src/*"]) enabling shorthand module resolution. This reinforces portability in agent-based code generation (agents can infer domain boundaries from folder names).
Custom HTML Reporting
File: utils/custom-reporter.ts implements Playwright Reporter interface:
- Collects test results, steps, durations, artifact paths
- Normalizes failure states
- Generates donut chart (Chart.js) summarizing pass/fail/skipped counts with inline percentages
- Two report modes: simplified (local) & detailed (Jenkins) with JIRA action buttons
- One-click "Create Bug" opens a pre-filled JIRA issue creation screen via URL parameters (requires valid JIRA_PROJECT_ID, JIRA_PROJECT_ISSUE_TYPE_ID, and JIRA_API_BASE_URL values in configuration.ts)
- Provides a quick link to the underlying Playwright report per test (detailsPath)
Open Reports
npm run test:report:playwright # Native report
npm run test:report:custom # Custom report
Custom report screenshots: (images omitted)

Jenkins Dashboard: (image omitted)
Configure Output Paths
Set environment variables before run:
$env:PLAYWRIGHT_HTML_REPORT_DIR = "reports/playwright-report";
$env:CUSTOM_REPORT_DIR = "reports";
npx playwright test
Jenkins Integration
Pipeline passes Jenkins URLs to reporter so artifact links resolve inside Jenkins UI.
Logging
src/shared/utils/custom-logger.ts writes timestamped log lines to console and to logs/automation.log.
Methods: Logger.info | warn | error | debug
Use inside page objects or helpers for richer step context:
Logger.info(`Creating user id=${id}`);
Rotate / archive logs by adding a post-run step or integrating with a log collector (future enhancement).
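A minimal sketch of such a logger (the real src/shared/utils/custom-logger.ts may format lines differently):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Minimal file + console logger in the spirit of custom-logger.ts.
export class Logger {
  private static readonly LOG_FILE = path.join("logs", "automation.log");

  private static write(level: string, message: string): void {
    const line = `[${new Date().toISOString()}] [${level}] ${message}`;
    console.log(line);
    fs.mkdirSync(path.dirname(Logger.LOG_FILE), { recursive: true });
    fs.appendFileSync(Logger.LOG_FILE, line + "\n");
  }

  static info(message: string): void { Logger.write("INFO", message); }
  static warn(message: string): void { Logger.write("WARN", message); }
  static error(message: string): void { Logger.write("ERROR", message); }
  static debug(message: string): void { Logger.write("DEBUG", message); }
}
```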
Code Quality: Prettier / ESLint / Husky
Prettier
Configured with consistent formatting (printWidth 120, tabs enabled, trailing commas). Run:
npm run prettier # Check
npm run prettier:fix # Auto-fix
ESLint
TypeScript rules + Prettier integration; warnings for unused vars & any.
npm run eslint
npm run eslint:fix
Husky + lint-staged
npm run setup triggers prepare script -> installs Husky. On commit, staged JS/TS files are auto formatted & linted (lint-staged config in package.json).
TypeScript Configuration
Strict settings in tsconfig.json ensure type safety. Key options:
- strict: true, noImplicitAny: true
- baseUrl: "./" for simpler non-relative imports
- Current path mapping: "*": ["./src/*"]

If you want an alias like @utils/*, extend:

"paths": { "@utils/*": ["utils/*"], "@shared/*": ["src/shared/*"] }

Re-run the IDE TS server after changes.
CI/CD Jenkins Pipeline
Jenkinsfile shows a Docker-based pipeline using official Playwright image:
- Clean workspace safely inside ephemeral Alpine container
- Checkout repo
- Install dependencies + browsers in Playwright container
- Inject credentials (URL, USERNAME, PASSWORD) if configured in Jenkins
- Run tests with environment variables for reporter paths & Jenkins artifact URLs
- Publish custom HTML report via publishHTML
Adjust: PLAYWRIGHT_IMAGE version, add parallel stages, archive reports/.
Running Locally in Docker (Example Idea)
docker run --rm -v "$PWD":/app -w /app mcr.microsoft.com/playwright:v1.56.0-noble bash -c "npm ci && npx playwright test"
Writing & Extending Tests
Structure
- Place UI specs under src/tests/ui and use fixtures for page object provisioning.
- Place API specs under src/tests/api.
- Create src/tests/e2e for cross-cutting journeys (login + multi-page flows).
Best Practices
| Practice | Rationale |
|---|---|
| Use page object methods | Avoid selector duplication |
| Wrap logical actions in test.step | Better reporting & trace readability |
| Tag tests meaningfully | Enables selective, faster suites over time |
| Keep mock data small & realistic | Easier maintenance, fewer flaky assumptions |
| Avoid sleeps; rely on expectations | Deterministic & resilient |
Adding E2E
- Create pages for all involved UI flows
- Add a fixture that logs in / seeds data
- Write the scenario spec & apply the @regression tag
AI Agent & Playwright MCP Integration
This project is enhanced with AI-driven test assistance through the Model Context Protocol (MCP) and specialized Playwright-focused agents. These agents help you plan test coverage, generate new browser tests, and heal failing tests directly from within a connected MCP client (e.g., VS Code with MCP-enabled assistant).
What Is MCP?
The Model Context Protocol (MCP) is an open protocol that lets AI assistants connect to external tools ("servers") in a secure, structured way. In this project, an MCP server exposes Playwright automation tools so AI agents can:
- Inspect pages
- Generate locators and test code
- Run tests and analyze failures
- Explore application flows for planning test scenarios
Available AI Agents
Located in .github/chatmodes/ (planner, generator, healer). Each chatmode document describes when and how to invoke an agent.
- Planner Agent (📭 planner.chatmode.md)
- Purpose: Explore a live web app and produce structured test scenarios/test plans.
- Typical Use: Early-stage feature validation or expanding coverage.
- Capabilities: Page navigation via Playwright MCP tools, enumerating user journeys, grouping by risk & priority.
- Generator Agent (📭 generator.chatmode.md)
- Purpose: Create Playwright test code from natural language instructions.
- Typical Use: "Generate a test for the checkout flow" → returns a ready-to-run spec.
- Capabilities: DOM inspection, locator suggestions, scaffold assertions, optionally re-run test for verification.
- Healer Agent (📭 healer.chatmode.md)
- Purpose: Debug and fix failing tests by inspecting traces, logs, and page state.
- Typical Use: "Fix the failing login test" → agent pulls error, adjusts selectors or waits, revalidates.
- Capabilities: Reads failing test output, proposes patch, can regenerate selectors, suggests retries or waits only when justified.
How the Integration Works
- MCP Configuration: .vscode/mcp.json declares the Playwright MCP server dependency ("@playwright/mcp@latest"). When your MCP-enabled assistant starts, it installs/activates this server.
- Tool Exposure: The server exposes actions like opening pages, querying locators, running Playwright commands, and gathering artifacts (screenshots, traces).
- Agent Logic: Each chatmode file provides heuristics so the assistant chooses the correct agent based on your request intent.
- Test Lifecycle Tie-In: Generated or healed tests land in src/tests/... following the existing POM patterns.
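Assuming the current VS Code MCP schema, a minimal .vscode/mcp.json could look like this (verify the exact schema against your client's documentation):

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```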
Folder & File References
- .github/chatmodes/ → Agent behavior & examples.
- .vscode/mcp.json → Registers MCP servers (Playwright).
- playwright.config.ts → Standard Playwright setup; agents align with its projects & settings.
- src/pages/ & src/tests/ → Structure agents follow when generating or updating tests.
Typical Workflows
1. Planning New Coverage
Prompt: "List regression test scenarios for the dashboard at https://app.example.com/dashboard"
Agent Flow:
- Planner navigates to URL
- Enumerates key modules (e.g., charts, filters, export buttons)
- Outputs a structured test plan grouped by priority & tags (@smoke, @regression)
2. Generating a New Test
Prompt: "Create a Playwright test that logs in with user [email protected] / Pass123 and verifies the avatar displays"
Agent Flow:
- Opens login page via MCP tool
- Captures selectors for email, password, submit, avatar
- Generates a spec file (e.g., src/tests/ui/login-avatar.spec.ts) using existing pages/ui abstractions if present (or scaffolds minimal inline selectors if not)
- Optionally runs the test, returning status and a patch if adjustments are needed
3. Healing a Failing Test
Prompt: "Fix the failing test in todo.spec.ts"
Agent Flow:
- Reads failure stack & screenshot via MCP attachments
- Replays steps, identifies selector mismatch (e.g., #todo-input changed to [data-test="todo-input"])
- Suggests a patch (or applies it if auto-fix mode is allowed)
- Re-runs test; if stable, reports fix summary
Security & Safety Considerations
- Agents operate only on files inside the workspace (no external code injection).
- MCP tool calls are auditable; each action is explicit.
- Test code changes should be reviewed via version control (commit diffs) before merging.
Extending the Agent System
You can add new specialized agents (e.g., performance profiler) by:
- Creating a new chatmode file under .github/chatmodes/ (e.g., 🛠 profiler.chatmode.md).
- Defining the description, triggers, examples, and allowed MCP tools.
- (Optional) Adding custom Playwright helpers in src/shared/utils to standardize performance metrics.
Adding Custom MCP Servers
If you need additional context (e.g., Jira, analytics), extend .vscode/mcp.json with new servers following the existing JSON schema. Each server can surface APIs the agents can leverage when generating richer reports or logging bugs.
Benefits
- Faster test authoring from natural language.
- Reduced flakiness via automated healing suggestions.
- Structured planning to avoid coverage gaps.
- Consistent adherence to project POM & naming conventions.
Agent Troubleshooting
| Issue | Possible Cause | Resolution |
|---|---|---|
| Agent doesn't see pages | Page not publicly reachable or auth required | Provide test credentials or open tunnel; ensure login flow described |
| Generated selector unstable | Dynamic attribute chosen | Ask agent to regenerate using data-test attributes or nth-match fallback |
| Healer cannot patch test | Test uses outdated helper abstraction | Refactor page object; re-run healer to map stable locators |
| MCP server not loading | Missing MCP-enabled client | Ensure you are using a tool or extension that supports .vscode/mcp.json |
Quick Start (Conceptual)
- Open an MCP-enabled chat interface in VS Code.
- Ask: "Plan tests for the todo feature" → Receive structured plan.
- Ask: "Generate tests for the highest priority scenarios" → Receive spec files.
- Run with npm run test / npm run test:custom.
- If a test fails, ask: "Heal the failing test in todo.spec.ts".
NOTE: The AI agent system augments—does not replace—manual test review. Always validate critical path tests before CI integration.
Extensibility Ideas
- Add GitHub Actions workflow for cross-platform CI
- Integrate Allure or other reporters alongside custom one
- Add visual regression (Playwright screenshot comparisons)
- Implement retries & flake detection dashboards
- Introduce data factories (e.g., @faker-js/faker)
- Add parallel sharding & test splitting by tag
- Add API auth token refresh logic in a base class
- Generate accessibility scans (axe-core) as optional tests
- Wire Model Context Protocol (MCP) server to automate prompt usage directly from IDE
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Imports @utils/... fail | Missing tsconfig path mapping | Extend paths in tsconfig.json |
| Env vars undefined | Wrong NODE_ENV or missing file | Verify <env>.env exists & name matches |
| Custom report empty | Reporter not loaded | Ensure playwright.config.ts includes ./utils/custom-reporter |
| JIRA button opens malformed page | Invalid IDs/Base URL | Set real values in configuration.ts |
| Videos/traces missing | Test passed (artifacts kept only on failure) | Force failure or set video: "on" for debugging |
| Husky not running | Git hooks not installed | Run npm run setup again |
Debug mode:
npm run test:debug
Security & Secrets
- Keep secrets out of version control; use Jenkins credentials or .env files excluded by .gitignore (add the entry if missing).
- Never commit real API keys.
- For JIRA integration: store the base URL & IDs as environment variables or configuration constants with placeholder values.
License
Licensed under ISC (see LICENSE). Adapt freely; contributions welcome.
Contributor details
Twinkle Joshi
- LinkedIn: https://www.linkedin.com/in/twinkle-j-joshi/
- ResearchGate: https://www.researchgate.net/profile/Twinkle-Joshi-6/research