Overview

The AI Co-pilot is a built-in network engineering assistant inside the Ready Terminal. It understands natural-language goals, reads the live device output, proposes a step-by-step plan, and executes the approved steps through your existing SSH/Telnet session.

The co-pilot never bypasses the operator: every command is shown with its risk level and waits for explicit approval before running — unless Autopilot is enabled, in which case safe and caution steps run automatically while destructive steps still pause for a human click.

The AI Co-pilot is available to both Group Admin and NOC users wherever the Ready Terminal opens (router monitor pages, OLT live status, NOC toolbar, etc.).

What it can do

  • Diagnose problems: "Why is interface ether2 flapping?" → reads /log print, /interface monitor-traffic, proposes follow-ups based on what it sees
  • Troubleshoot customers: "Find PPPoE users with high RX traffic right now" → suggests /ppp active print stats-detail, filters the output
  • Translate vendors: "Cisco equivalent of /ip route print" → show ip route
  • Propose configuration: "Add a 10 Mbps simple queue for 192.168.10.5" → exact RouterOS commands in a plan
  • Iterative reasoning: after each step runs, the captured output is fed back so the AI can refine the next step
  • Multi-vendor: MikroTik RouterOS, Cisco IOS / IOS-XE, Huawei VRP, Juniper Junos, generic Linux

How it works

The AI Co-pilot panel sits at the bottom of the terminal window. It captures a rolling 5 KB of recent output from the active tab and sends it as context with every chat call, so the model is reasoning about the real, current state of your device — not generic web examples.
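That rolling capture can be pictured as a bounded buffer that always keeps only the newest bytes. The sketch below is a minimal Python illustration of the policy; the real panel does this in the browser, and the class name here is hypothetical:

```python
class RollingCapture:
    """Keeps only the most recent `limit` bytes of terminal output."""

    def __init__(self, limit=5 * 1024):   # 5 KB, matching the panel's window
        self.limit = limit
        self.buffer = b""

    def feed(self, chunk):
        # Append new output, then trim from the front so only the
        # newest `limit` bytes survive.
        self.buffer = (self.buffer + chunk)[-self.limit:]

    def context(self):
        # This string is what gets sent as context with every chat call.
        return self.buffer.decode("utf-8", errors="replace")

cap = RollingCapture()
cap.feed(b"x" * 6000)                     # more than the window...
cap.feed(b"interface ether2 link down\n")
assert len(cap.buffer) == 5 * 1024        # ...older bytes fell off the front
assert cap.context().endswith("link down\n")
```

Because the window is small and always current, the model reasons about the last screenful of real device output rather than stale history.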

When you send a message, the AI replies with one of two things:

  1. A clarifying question — if your goal is ambiguous (one short follow-up, no commands).
  2. A Plan — a numbered list of 1–8 single-line commands. Each step has:
    • The exact command
    • A one-line "why"
    • A risk badge: safe · caution · destructive

You can run steps one by one, run all of them, or run only the safe steps. Approved steps pipe into your existing SSH/Telnet session through the same WebSocket the rest of Ready Terminal uses — the back-end never spawns a separate shell.
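A plan step can be pictured as a small record carrying exactly the three pieces described above. This is a Python sketch; the field names are illustrative, not the actual API contract:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class PlanStep:
    command: str                                    # exact single-line command
    why: str                                        # one-line rationale
    risk: Literal["safe", "caution", "destructive"] # risk badge shown in the UI

plan = [
    PlanStep("/log print", "Check recent events for the interface", "safe"),
    PlanStep("/interface monitor-traffic ether2 once", "Confirm live traffic", "safe"),
]
assert 1 <= len(plan) <= 8                          # plans are capped at 8 steps

# "Run safe only" is just a filter over the plan:
safe_only = [step for step in plan if step.risk == "safe"]
```

Keeping each step to a single line is what lets approved commands be piped straight into the existing session one at a time.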

Risk levels

  • safe: read-only, no state changes. Examples: show, print, display, get, monitor-traffic
  • caution: changes config but is easily reverted. Examples: /interface set, /ip address add, /queue simple add, ip route ..., no <feature>
  • destructive: wipes, reboots, deletes, or disables critical services. Examples: system reset, reload, erase startup-config, format flash, rm -rf /, mkfs, dd if=, halt, shutdown

A server-side denylist re-classifies any matching command to destructive and marks it blocked, even if the model under-reports its risk. Blocked steps require typing CONFIRM in a prompt before they will run: extra friction so a malicious banner or a misaligned model can't accidentally wreck a router.
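The re-classification amounts to a pattern match that always outranks the model's own rating. Here is a hedged Python sketch with an illustrative subset of patterns; the real list lives on the back-end:

```python
import re

# Illustrative subset of the server-side denylist, not the full list.
DENYLIST = [
    r"^/system\s+reset", r"^reload\b", r"^erase\s+startup-config",
    r"^format\s+flash", r"rm\s+-rf\s+/", r"\bmkfs\b", r"\bdd\s+if=",
    r"^halt\b", r"^shutdown\b",
]

def reclassify(command, model_risk):
    """Return (risk, blocked). A denylist hit always wins over the model."""
    for pattern in DENYLIST:
        if re.search(pattern, command.strip(), re.IGNORECASE):
            return "destructive", True   # blocked: requires typed CONFIRM
    return model_risk, False             # otherwise trust the model's badge

assert reclassify("rm -rf /", "safe") == ("destructive", True)
assert reclassify("/ip route print", "safe") == ("safe", False)
```

The key property is that the check runs server-side, so nothing the model says (or is tricked into saying) can downgrade a dangerous command.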

Autopilot

The Autopilot toggle in the panel header controls automation:

  • Off (default): every step needs an explicit click. Use this while you're learning to trust the model.
  • On: safe and caution steps auto-run sequentially; destructive steps always pause for an explicit click, even in Autopilot.

The toggle is per-browser (saved in localStorage); switching it on flips a yellow indicator in the panel header so you always know which mode you're in.
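The gating rule above reduces to a single predicate, sketched here in Python (the function name and signature are illustrative):

```python
def auto_runs(risk, blocked, autopilot):
    """True if a step executes without a click, under the Autopilot rules."""
    if blocked or risk == "destructive":
        return False       # always pauses for a human, even in Autopilot
    return autopilot       # safe/caution steps auto-run only when toggled on

assert auto_runs("safe", False, autopilot=True)
assert auto_runs("caution", False, autopilot=True)
assert not auto_runs("destructive", False, autopilot=True)
assert not auto_runs("safe", False, autopilot=False)
```

Note that a denylist-blocked step never auto-runs regardless of the risk badge it carries.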

Using the Co-pilot — quick walkthrough

  1. Open a Ready Terminal to a device and connect it as usual.
  2. Click the AI Co-pilot strip at the bottom of the terminal to expand it.
  3. Type a goal in plain English — "Check why customer 192.168.5.42 cannot get an IP."
  4. The AI replies with a Plan. For each step you'll see the command, its purpose, and a risk badge.
  5. Click Run on the first step (or Run safe only to batch the read-only ones).
  6. After the command's output appears in the terminal, the AI uses that output to reason about the next step automatically — you can keep clicking through the plan or send a follow-up message.
  7. Click Clear in the panel header to wipe the conversation when you start a new task.

Configuration suggestions

The AI also writes config. Examples:

  • "Allow ping from 10.0.0.0/8 only on the input chain" → produces /ip firewall filter add ... with risk caution.
  • "Limit user john to 5 Mbps down / 1 Mbps up" → produces /queue simple add name="q-john" target=... max-limit=5M/1M.

For configuration changes the plan always ends with a verification step (e.g. /queue simple print where name="q-john") so you can confirm the change took effect.

Safety guarantees

  • Operator stays in control. The AI cannot execute anything by itself. The only path from AI → device is a click on a Run button (or the Autopilot rules above).
  • Server-side denylist. Hard-blocked patterns: /system reset, reload, erase startup-config, format flash, rm -rf /, mkfs, dd if=, fork bombs, halt, shutdown, init 0, RouterOS /file remove, /system package uninstall. These cannot be auto-run even in Autopilot.
  • Caps. 8 steps per plan, 20 messages of history per call, 20 chat requests per minute per user.
  • Audit trail. Every chat call is logged with the operator's user ID, role, target host, and how many risk overrides the server applied. Logs land in storage/logs/laravel.log under terminal_ai.chat.
  • Same sudo block as the rest of Ready Terminal — even if the AI proposes it, the WebSocket layer rejects it.
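The per-user chat cap (20 calls per minute) behaves like a sliding-window limiter. The sketch below illustrates the policy in Python; it is not the actual server code:

```python
import time
from collections import defaultdict, deque

class ChatRateLimiter:
    """Per-user sliding window: at most `limit` calls per `window` seconds.

    Illustrative only; the real limiter runs server-side.
    """

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)   # user_id -> recent call timestamps

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[user_id]
        while q and now - q[0] >= self.window:
            q.popleft()                   # drop calls older than the window
        if len(q) >= self.limit:
            return False                  # over the cap: reject this chat call
        q.append(now)
        return True

rl = ChatRateLimiter()
assert all(rl.allow("alice", now=float(i)) for i in range(20))
assert not rl.allow("alice", now=20.0)    # 21st call inside the minute
assert rl.allow("alice", now=61.0)        # oldest calls have aged out
```

A sliding window (rather than a fixed per-minute counter) avoids the burst at the boundary where a user could fire 40 calls in two seconds straddling a minute mark.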

Operator tips

  • Tell the AI the vendor explicitly when you know it (use the small dropdown next to the simpler "Ask AI" bar above the Co-pilot); it saves a turn.
  • Describe the symptom, not the command. "Customer keeps getting disconnected" gives a much better plan than "run /log print".
  • Use Autopilot for diagnostics, not for changes. Even with Autopilot on, walk through configuration plans manually the first few times you use it on a given device.
  • The AI sees what you see. If the terminal is showing a wall of errors, the AI is reading them too — you can just say "what should I do about this?"

Configuration (administrators)

The AI Co-pilot uses any OpenAI-compatible chat-completions endpoint. Set these in your .env:

TERMINAL_AI_API_KEY=sk-...
# Optional overrides:
TERMINAL_AI_MODEL=gpt-4o-mini
TERMINAL_AI_BASE_URL=https://api.openai.com/v1
TERMINAL_AI_RATE_LIMIT=20      # chat calls per user per minute
TERMINAL_AI_TIMEOUT=20         # seconds
TERMINAL_AI_MAX_TOKENS=400

The same TERMINAL_AI_BASE_URL works with OpenAI, OpenRouter, Together, Groq, Anthropic-compatible proxies, or a self-hosted llama.cpp / vLLM server. If TERMINAL_AI_API_KEY is empty, the panel renders a clean "AI is not configured" message and the rest of the terminal keeps working unchanged.
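For example, pointing the co-pilot at a self-hosted OpenAI-compatible server only means changing the base URL and model name; the values below are illustrative, not defaults:

```
TERMINAL_AI_API_KEY=local-anything      # many local servers accept any non-empty key
TERMINAL_AI_BASE_URL=http://127.0.0.1:8000/v1
TERMINAL_AI_MODEL=meta-llama/Llama-3.1-8B-Instruct
```

After editing .env, clear the config cache so the new values take effect.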