Use Cases
Real-world scenarios and workflow patterns — from single-server setups to multi-environment CI/CD pipelines.
Table of Contents
- Single-Server Setup
- Multi-Environment Aliases
- Subprocess MCP Server
- Namespace Grouping (Dotted Tools)
- Scripting and Automation
- Long-Running Operations
- Complex Argument Payloads
- Server-Initiated Elicitation
- Event-Driven Monitoring
- Server-Initiated Sampling
- Offline and Disconnected Work
- CI/CD Integration
- Profile Overlays (Curated CLI)
- Email MCP Server
- GitHub MCP Server
- Database MCP Server
- Debugging and Diagnostics
- Resource Templates
- Tab Completion and Server Completions
- Multi-User Shared Infrastructure
- Demo and Learning Mode
- Resource Subscriptions (Live Watching)
- Workspace Roots for Context-Aware Servers
- Task Lifecycle Management
- Request Cancellation
- Infrastructure Provisioning Server
- Data Pipeline Orchestration
1. Single-Server Setup
Scenario: You have one MCP server and want to interact with it from the terminal.
```bash
# Create a config
mcp2cli config init --name api \
  --app bridge \
  --transport streamable_http \
  --endpoint https://api.example.com/mcp

# Set as active
mcp2cli use api

# Discover what the server offers
mcp2cli ls

# Use tools — each is a first-class command with typed flags
mcp2cli get-user --id 123
mcp2cli create-order --product widget --quantity 5

# Read resources
mcp2cli get "orders://recent"

# Run prompts
mcp2cli summarize-order --order-id 456
```

2. Multi-Environment Aliases
Scenario: You work with dev, staging, and production servers and want each to feel like a separate CLI application.
```bash
# Create configs for each environment
mcp2cli config init --name dev \
  --transport stdio --stdio-command ./dev-server

mcp2cli config init --name staging \
  --transport streamable_http \
  --endpoint https://staging.api.example.com/mcp

mcp2cli config init --name prod \
  --transport streamable_http \
  --endpoint https://prod.api.example.com/mcp

# Create symlink aliases
mcp2cli link create --name dev
mcp2cli link create --name staging
mcp2cli link create --name prod
```
```bash
# Each alias routes to its own server with its own commands
dev ls
staging deploy --version 1.2.3
prod health-check
dev echo --message "test"
```

Each alias reads argv[0] and loads the matching config automatically. They share the same binary but feel like completely different applications.
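The aliases created by `link create` are ordinary symlinks, so you can inspect the dispatch yourself (install paths illustrative):

```bash
# An alias is just a symlink whose name selects the config
ls -l "$(command -v dev)"
# → .../bin/dev -> /usr/local/bin/mcp2cli
```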
3. Subprocess MCP Server
Scenario: You want to run a local MCP server as a subprocess — no HTTP server needed.
```bash
# Node.js MCP server
mcp2cli config init --name local \
  --transport stdio \
  --stdio-command npx \
  --stdio-args '@modelcontextprotocol/server-everything'

# Python MCP server
mcp2cli config init --name pyserver \
  --transport stdio \
  --stdio-command python \
  --stdio-args my_mcp_server.py

# Rust MCP server
mcp2cli config init --name rustserver \
  --transport stdio \
  --stdio-command ./target/release/my-mcp-server

mcp2cli use local
mcp2cli ls
```

Environment variables and working directory can be set in the config:
```yaml
server:
  transport: stdio
  stdio:
    command: node
    args: [server.js]
    cwd: /path/to/project
    env:
      NODE_ENV: development
      DATABASE_URL: "postgres://localhost/dev"
```

4. Namespace Grouping
Scenario: The MCP server uses dotted tool names for organization. mcp2cli auto-groups them into nested subcommands.
```bash
# Server exposes: email.send, email.reply, email.draft.create,
# email.draft.list, email.labels.list, email.labels.add

email --help
# COMMANDS:
#   send      Send an email
#   reply     Reply to an email
#   draft     Draft management
#     create    Create a draft
#     list      List drafts
#   labels    Label management
#     list      List labels
#     add       Add a label
#   get       Fetch a resource by URI
#   auth      Authentication
#   ...

email send --to user@example.com --subject "Hello" --body "Message"
email draft create --subject "WIP" --body "Work in progress"
email labels add --message-id msg_456 --label urgent
```

Grouping rules:
- ≥2 capabilities sharing a prefix form a subcommand group
- Dots, slashes, underscores, and hyphens are treated as separators
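For example, these rules produce mappings like the following (illustrative):

```bash
# email.send          → email send
# email.draft.create  → email draft create
# pr_review, pr.merge → pr review, pr merge   (mixed separators group the same way)
# healthcheck         → healthcheck           (no shared prefix, stays top-level)
```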
5. Scripting and Automation
Scenario: You want to use mcp2cli in shell scripts, cron jobs, or automation pipelines.
JSON output for parsing
```bash
# Get capabilities as JSON
work --json ls | jq '.data.items[].id'

# Call a tool and extract result
RESULT=$(work --json get-user --id 123 | jq -r '.data.result')
echo "User: $RESULT"

# Check auth status programmatically
AUTH_STATE=$(work --json auth status | jq -r '.data.auth_session.state')
if [ "$AUTH_STATE" != "authenticated" ]; then
  echo "Not authenticated, logging in..."
  work auth login
fi

# Parse doctor output
work --json doctor | jq '{server: .data.server, auth: .data.auth_session.state}'
```

NDJSON for streaming
```bash
work --output ndjson long-running-task --steps 10 | while IFS= read -r line; do
  echo "$line" | jq -c '{type: .type, message: .message}'
done
```

Chaining commands
```bash
# Discover, validate, then execute
work doctor && work ls && work deploy --env staging

# Batch operations
for user_id in 1 2 3 4 5; do
  work --json get-user --id "$user_id" >> users.jsonl
done

# Conditional execution
if work --json ls | jq -e '.data.items[] | select(.id == "deploy")' > /dev/null; then
  work deploy --version latest
else
  echo "Server does not have deploy capability"
fi
```

Non-blocking patterns
```bash
# Fire multiple background jobs
for dataset in sales marketing engineering; do
  work analyze --dataset "$dataset" --background
done

# Wait for all to complete
work jobs list | grep running | while read -r job_id rest; do
  work jobs wait "$job_id"
done
```

6. Long-Running Operations
Scenario: You need to run operations that take minutes or hours — analysis, data processing, batch imports.
Start a background job
```bash
work analyze-dataset --dataset q4-2025 --background
# → Job created: job-abc-123
```

Monitor progress
```bash
work jobs watch --latest    # Watch live progress events
work jobs show --latest     # Poll status
work jobs show job-abc-123
work jobs wait --latest     # Block until done
```

Manage jobs
```bash
work jobs list
work jobs cancel job-abc-123
```

Cross-session persistence
Jobs persist on disk — you can exit the terminal and check results later:
```bash
# In one terminal:
work big-import --background

# Later, in another terminal:
work jobs show --latest
# status: completed
# result: "Imported 50,000 records"
```

7. Complex Argument Payloads
Scenario: A tool requires deeply nested arguments that are tedious to type inline.
JSON-typed flags
```bash
work configure \
  --labels '["production","us-west"]' \
  --limits '{"cpu":"2","memory":"4Gi"}'
```

File-based payloads
```bash
cat > deploy-config.json << 'EOF'
{
  "environment": "production",
  "config": {
    "replicas": 5,
    "image": "myapp:2.1.0",
    "labels": ["production", "us-west"],
    "resources": { "cpu": "1", "memory": "2Gi" }
  }
}
EOF
```
```bash
work deploy --args-file deploy-config.json
```

Layered overrides
Combine argument sources — they merge with later sources winning:
```bash
work deploy \
  --args-file deploy-config.json \
  --args-json '{"config":{"replicas":10}}' \
  --environment canary
```
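Assuming the sources deep-merge (which the replicas-only override above implies), the payload the server receives for this invocation would be:

```json
{
  "environment": "canary",
  "config": {
    "replicas": 10,
    "image": "myapp:2.1.0",
    "labels": ["production", "us-west"],
    "resources": { "cpu": "1", "memory": "2Gi" }
  }
}
```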
8. Server-Initiated Elicitation

Scenario: A tool asks for additional input during execution — the server needs confirmation, credentials, or details it can’t determine upfront.
```bash
work trigger-deployment
```

The server sends an elicitation/create request. mcp2cli prompts interactively:

```text
--- elicitation request ---
Deployment requires additional confirmation:
  Target region (AWS region) [required]: us-east-1
  Confirm (yes/no) [required]: yes
  Max instances [default: 3]:
  Tags [comma-separated]: prod,web,v2
--- end elicitation ---
```

The response is sent back to the server and execution continues.
Type handling
| Schema type | Input | Result |
|---|---|---|
| boolean | yes, true, y, 1 | true |
| integer | 42 | 42 |
| number | 3.14 | 3.14 |
| array | a, b, c | ["a", "b", "c"] |
| enum | Matched by title | Corresponding const value |
| string | Any text | Used as-is |
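The collected answers travel back to the server as an elicitation result. For the session above it would look roughly like this (the content property names come from the server's requested schema and are illustrative here):

```json
{
  "action": "accept",
  "content": {
    "region": "us-east-1",
    "confirm": true,
    "max_instances": 3,
    "tags": ["prod", "web", "v2"]
  }
}
```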
9. Event-Driven Monitoring
Scenario: You want to capture runtime events (progress, job updates, auth prompts, server log messages) for monitoring or dashboards.
Multiple sinks simultaneously
```yaml
events:
  enable_stdio_events: true                               # Stderr (human-readable)
  http_endpoint: "http://monitoring:9090/events"          # HTTP webhook (POST JSON)
  local_socket_path: "/tmp/mcp2cli-events.sock"           # Unix socket (NDJSON)
  sse_endpoint: "127.0.0.1:9091"                          # SSE server
  command: "logger -t mcp2cli '${MCP_EVENT_MESSAGE}'"     # Shell command
```

All five sinks receive every event. Use stderr for development, HTTP for production alerting, sockets for local IPC, SSE for web dashboards, and command exec for custom integrations.
Server notification events
During tool calls, the server may send real-time notifications:
```text
$ work analyze-dataset --dataset q4-2025
[work] analyze-1 1/5 Loading dataset...
[work] analyze-1 2/5 Parsing records...
[work] server debug (db): Query executed in 42ms
[work] analyze-1 3/5 Computing aggregates...
[work] analyze-1 5/5 Complete
```

Progress notifications (notifications/progress), log messages (notifications/message), and capability change signals (notifications/{tools,resources,prompts}/list_changed) are all delivered through the event broker.
Command execution sink
Run arbitrary commands for each event with environment variable interpolation:
```yaml
# Desktop notification
events:
  command: "notify-send 'mcp2cli' '${MCP_EVENT_MESSAGE}'"

# Forward to webhook
events:
  command: "curl -s -X POST http://hooks/mcp -d \"${MCP_EVENT_JSON}\""

# Send to syslog
events:
  command: "logger -t mcp2cli '${MCP_EVENT_MESSAGE}'"

# Conditional: only alert on errors
events:
  command: "[ \"$MCP_EVENT_TYPE\" = 'server_log' ] && logger -p user.err '${MCP_EVENT_MESSAGE}'"
```

Available environment variables:
- `MCP_EVENT_TYPE` — event type (info, progress, server_log, job_update, auth_prompt, list_changed)
- `MCP_EVENT_JSON` — full JSON-serialized event
- `MCP_EVENT_APP_ID` — the app_id field
- `MCP_EVENT_MESSAGE` — human-readable one-line message
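For anything beyond a one-liner, point `events.command` at a script and branch on these variables. A sketch (the script name and log path are hypothetical):

```bash
#!/usr/bin/env bash
# Save as mcp-event-sink and reference it from events.command.
# All MCP_EVENT_* variables are injected by mcp2cli per event.
case "$MCP_EVENT_TYPE" in
  job_update)
    echo "$MCP_EVENT_JSON" >> "$HOME/mcp2cli-jobs.jsonl" ;;        # keep a job history
  server_log)
    logger -t "mcp2cli/$MCP_EVENT_APP_ID" "$MCP_EVENT_MESSAGE" ;;  # forward to syslog
  *)
    : ;;                                                           # ignore the rest
esac
```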
Listening to events
```bash
# Unix socket
socat UNIX-LISTEN:/tmp/mcp2cli-events.sock,fork - | jq .

# SSE
curl -N http://127.0.0.1:9091
```

Capability change detection
When the server signals that its tool/resource/prompt list has changed:
```text
$ work long-running-task
[work] server tools have changed; run 'ls' to refresh
```

A stale marker file is written so the next ls command forces a live re-discovery instead of using the cache.
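The same signal reaches every configured sink, so automation can refresh proactively. A sketch against the SSE sink configured above (the exact payload text is illustrative):

```bash
# Re-run discovery as soon as the server announces a capability change
curl -sN http://127.0.0.1:9091 | while IFS= read -r line; do
  echo "$line" | grep -q list_changed && work ls > /dev/null
done
```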
10. Server-Initiated Sampling
Scenario: A tool needs a model/AI response during execution — the server
sends sampling/createMessage to the client.
```text
$ work generate-code --spec api-spec.yaml
--- sampling request ---
The server requests a model response.
Model hint: claude-3-5-sonnet
System: You are an expert code generator
Max tokens: 2000

Messages:
  [user] Generate a REST controller for the given API spec

Your response (or 'decline' to reject): Here is the controller implementation...
--- end sampling ---

title: Code Generation
result: Generated src/controllers/api.ts
```

mcp2cli advertises sampling capability during initialization. The user acts as a human-in-the-loop model — seeing exactly what the server asks and deciding what to respond.
Declining a sampling request
Type decline or press Enter with no input to reject:
```text
Your response (or 'decline' to reject): decline
```

The server receives a JSON-RPC error (-32600) and can handle the rejection gracefully.
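On the wire, the rejection is a standard JSON-RPC error response (the id and message text here are illustrative; the -32600 code is what the server receives):

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "error": {
    "code": -32600,
    "message": "User declined sampling request"
  }
}
```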
11. Offline and Disconnected Work
Scenario: The server is temporarily unreachable but you need to see what capabilities are available.
Discovery cache fallback
```bash
# These work even when the server is down (from cache)
work ls
work ls --tools
work ls --resources
```

What works offline
- Listing capabilities (`ls`)
- Viewing job records (`jobs list`, `jobs show`)
- Auth status check (`auth status`)
- Doctor diagnostics (`doctor`)
What requires connectivity
- Calling tools
- Reading resources
- Running prompts
- Job sync (`jobs wait`, `jobs cancel`)
- Auth flows (`auth login`)
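A script can lean on this split to degrade gracefully when the server is down (a sketch built from commands shown in this guide):

```bash
# Degrade gracefully: fall back to cached discovery when unreachable
if work ping > /dev/null 2>&1; then
  work deploy --env staging
else
  echo "server unreachable, showing cached capabilities only:" >&2
  work ls
fi
```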
12. CI/CD Integration
Scenario: You want to call MCP server tools from CI/CD pipelines — GitHub Actions, GitLab CI, Jenkins.
Setup in CI
```bash
cargo install --path .

mcp2cli config init --name ci \
  --transport streamable_http \
  --endpoint "$MCP_SERVER_ENDPOINT"
mcp2cli use ci

echo "$MCP_TOKEN" | mcp2cli auth login
```

CI pipeline steps
```bash
# Validate
mcp2cli doctor

# Deploy
DEPLOY_RESULT=$(mcp2cli --json deploy \
  --version "$CI_COMMIT_SHA" \
  --environment staging)
echo "$DEPLOY_RESULT" | jq -e '.data.success' || exit 1

# Long operations
mcp2cli run-tests --suite full --background
mcp2cli jobs wait --latest

JOB_STATUS=$(mcp2cli --json jobs show --latest | jq -r '.data.status')
if [ "$JOB_STATUS" != "completed" ]; then
  echo "Tests failed"
  exit 1
fi
```

Event forwarding to CI logs
```yaml
events:
  enable_stdio_events: true   # Events appear in CI log output
```
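For GitHub Actions specifically, those steps map onto a workflow job like this (a sketch; the workflow layout and secret names are illustrative):

```yaml
name: deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install --path .
      - name: Configure mcp2cli
        run: |
          mcp2cli config init --name ci \
            --transport streamable_http \
            --endpoint "${{ secrets.MCP_SERVER_ENDPOINT }}"
          mcp2cli use ci
          echo "${{ secrets.MCP_TOKEN }}" | mcp2cli auth login
      - run: mcp2cli doctor
      - run: mcp2cli --json deploy --version "$GITHUB_SHA" --environment staging
```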
13. Profile Overlays

Scenario: You want the CLI to feel polished — renaming awkward tool names, hiding internal tools, grouping related commands.
Add a profile
```yaml
# In configs/work.yaml
profile:
  display_name: "Work Toolkit"
  aliases:
    long-running-operation: lro
    get-tiny-image: image
    annotated-message: annotate
  hide:
    - print-env
    - debug-probe
  groups:
    analysis:
      - analyze-data
      - generate-report
      - export-csv
  flags:
    echo:
      message: msg
  resource_verb: fetch
```

Result
```bash
work echo --msg hello             # Renamed flag
work lro --duration 5             # Shortened command name
work image                        # Friendly name
work analysis analyze-data ...    # Custom grouping
work fetch demo://resource/...    # Custom resource verb
```

14. Email MCP Server
Scenario: An email MCP server exposes tools for sending, reading, and managing email.
Server capabilities
- Tools: `send`, `reply`, `forward`, `archive`, `labels.add`, `labels.remove`, `draft.create`, `draft.send`
- Resources: `mail://inbox`, `mail://sent`, `mail://draft/123`
- Resource templates: `mail://search?q={query}`, `mail://message/{id}`
- Prompts: `summarize-thread`, `compose-reply`, `triage-inbox`
Usage
```bash
email send --to user@example.com --subject "Hello" --body "Message body"
email reply --thread-id th_123 --body "Thanks for the update"
email labels add --message-id msg_456 --label urgent
email draft create --subject "Draft" --body "Work in progress"

email get mail://inbox
email search --query "invoice 2026"
email message msg_789

email summarize-thread --thread-id th_123
email compose-reply --thread-id th_123 --style professional
email triage-inbox

email send --to team@example.com --subject "Batch report" --background
email jobs watch --latest
```

Dotted tool names auto-group: `labels.add` → `email labels add`, `draft.create` → `email draft create`.
15. GitHub MCP Server
Scenario: A GitHub MCP server exposes repository, issue, and PR tools.
Server capabilities
- Tools: `repos.list`, `repos.create`, `issues.list`, `issues.create`, `issues.comment`, `pr.list`, `pr.create`, `pr.review`, `pr.merge`
- Resource templates: `gh://repo/{owner}/{name}`, `gh://issue/{owner}/{name}/{number}`
- Prompts: `review-pr`, `draft-issue`
Usage
```bash
gh repos list --org my-org --limit 10
gh repos create --name new-project --private true
gh issues list --repo owner/repo --state open
gh issues create --repo owner/repo --title "Bug" --body "Details..."
gh issues comment --repo owner/repo --number 42 --body "Fixed in v2"
gh pr list --repo owner/repo --state open
gh pr create --repo owner/repo --title "Feature" --head feature-branch
gh pr review --repo owner/repo --number 15 --approve true
gh pr merge --repo owner/repo --number 15 --method squash

# Resource templates
gh repo owner/my-project
gh issue owner/my-project 42

# AI-powered prompts
gh review-pr --repo owner/repo --number 15 --focus security
gh draft-issue --title "Performance regression" --context "Load test results..."
```

16. Database MCP Server
Scenario: A database MCP server exposes query, schema inspection, and migration tools.
```bash
db query --sql "SELECT * FROM users LIMIT 10"
db query --sql "SELECT count(*) FROM orders WHERE status='pending'"

# Schema inspection (resources)
db get "schema://tables"
db get "schema://tables/users"

# Migration tools
db migrate --direction up --steps 1
db migrate --direction down --steps 1 --background
db jobs watch --latest

# AI prompts
db explain-query --sql "SELECT u.name, COUNT(o.id) FROM users u JOIN orders o..."
db suggest-index --table orders --column status
```

17. Debugging and Diagnostics
Scenario: Something isn’t working. You need to diagnose connectivity, auth, or capability issues.
Doctor
```bash
work doctor
```

Output:
```text
config: work
profile: bridge
transport: streamable_http
server: my-server 2.1.0
auth: authenticated
negotiated: protocol 2025-03-26 with 5 capability groups cached
inventory: 14 tools, 3 resources, 2 prompts cached
```

Inspect
```bash
work inspect
```

Full server capability response: protocol version, capabilities, server info.
Ping
```bash
work ping
```

Server liveness check with latency measurement.
Verbose logging
```bash
MCP2CLI_LOGGING__LEVEL=debug work echo --message test 2>debug.log
MCP2CLI_LOGGING__LEVEL=trace work doctor 2>trace.log
```

Auth debugging
```bash
work auth status
```

18. Resource Templates
Scenario: The server exposes parameterized resources (URI templates) that become first-class commands.
Single-parameter templates → positional argument
```bash
# Template: greeting/{name}
work greeting Alice
# → resources/read with URI: greeting/Alice
```

Multi-parameter templates → flags
```bash
# Template: mail://search?q={query}&folder={folder}
work mail-search --query invoice --folder inbox
# → resources/read with URI: mail://search?q=invoice&folder=inbox
```

Concrete resources via get
work get "demo://resource/readme.md"work get "demo://resource/static/document/architecture.md"19. Tab Completion and Server Completions
Scenario: You want server-assisted completions for argument values.
```bash
# Request completions for a tool argument
work complete ref/tool echo message "hel"

# Complete a prompt argument
work complete ref/prompt compose-reply style "prof"

# Set server log level
work log debug
work log info
```

20. Multi-User Shared Infrastructure
Scenario: Multiple team members use the same MCP server with shared configs via version control.
```bash
# Store configs in a git repo
git init team-mcp-configs
cd team-mcp-configs
```
```bash
cat > staging.yaml << 'EOF'
schema_version: 1
app:
  profile: bridge
server:
  display_name: Staging API
  transport: streamable_http
  endpoint: https://staging.internal.example.com/mcp
defaults:
  output: human
logging:
  level: warn
  format: pretty
  outputs: [{kind: stderr}]
auth:
  browser_open_command: null
events:
  enable_stdio_events: true
EOF
```
```bash
git add . && git commit -m "Add staging config"
```
```bash
# Point mcp2cli to the shared dir
export MCP2CLI_CONFIG_DIR=/path/to/team-mcp-configs
mcp2cli link create --name staging
staging ls
```

Each user authenticates independently — tokens are stored in the per-user data directory, not in the shared config.
21. Demo and Learning Mode
Scenario: You want to learn mcp2cli without setting up a real server.
Using the reference server
```bash
# Start the reference MCP server
npx @modelcontextprotocol/server-everything streamableHttp

# In another terminal:
mcp2cli config init --name everything \
  --transport streamable_http \
  --endpoint http://127.0.0.1:3001/mcp
mcp2cli use everything

# Or as a stdio server (no separate process needed):
mcp2cli config init --name everything-stdio \
  --transport stdio \
  --stdio-command npx \
  --stdio-args '@modelcontextprotocol/server-everything'
mcp2cli use everything-stdio

# Discover and use
mcp2cli ls
mcp2cli echo --message hello
mcp2cli add --a 5 --b 3
mcp2cli get-tiny-image
mcp2cli simple-prompt
mcp2cli complex-prompt --temperature 0.7 --style concise

# Diagnostics
mcp2cli doctor
mcp2cli inspect
mcp2cli ping

# Auth flows
mcp2cli auth login
mcp2cli auth status
mcp2cli auth logout

# Background jobs
mcp2cli long-running-operation --duration 5 --steps 3 --background
mcp2cli jobs watch --latest
```

22. Resource Subscriptions
Scenario: You want to be notified when a server resource changes — a config file, a database table, or a monitoring endpoint — without polling.
Subscribe to a resource
work subscribe "file:///project/config.yaml"work subscribe "db://tables/users"The server will send notifications/resources/updated whenever the resource
changes. Events arrive through configured sinks (stderr, webhook, socket, SSE).
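The update notification itself is minimal, carrying just the changed URI (shape per the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/updated",
  "params": { "uri": "file:///project/config.yaml" }
}
```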
Monitor in a second terminal
```bash
# With SSE event sink enabled:
curl -N http://127.0.0.1:9091
# → data: {"type":"info","message":"resource updated: file:///project/config.yaml"}
```

Unsubscribe when done
```bash
work unsubscribe "file:///project/config.yaml"
```

Automation: react to resource changes
```bash
# Watch for updates and re-read when they arrive
work subscribe "config://app/settings"
```
```bash
# In another terminal, watch the SSE stream:
curl -sN http://127.0.0.1:9091 | while IFS= read -r line; do
  if echo "$line" | grep -q "resource updated"; then
    work --json get "config://app/settings" >> config-history.jsonl
  fi
done
```

Use case: config hot-reload
```yaml
# In events config:
events:
  command: |
    [ "$MCP_EVENT_TYPE" = "info" ] && echo "$MCP_EVENT_MESSAGE" | grep -q "resource updated" && \
      systemctl reload my-service
```

23. Workspace Roots
Scenario: A code-aware MCP server needs to know which directories it should
operate on. The server sends a roots/list request, and mcp2cli responds with
the configured roots.
Configure roots
```yaml
# In configs/code.yaml
server:
  transport: stdio
  stdio:
    command: ./code-analysis-server
roots:
  - uri: "file:///home/user/project/src"
    name: "Source"
  - uri: "file:///home/user/project/tests"
    name: "Tests"
```

How it works
During tool execution, the server may request roots/list to understand the
client’s workspace. mcp2cli automatically responds with the configured roots.
The server can then scope its analysis, file search, or code generation to
those directories.
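The roots/list response mcp2cli sends mirrors the configured roots (result shape per the MCP specification):

```json
{
  "roots": [
    { "uri": "file:///home/user/project/src", "name": "Source" },
    { "uri": "file:///home/user/project/tests", "name": "Tests" }
  ]
}
```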
```bash
# The server uses roots to scope its work
code analyze --depth full
# Server internally calls roots/list → gets [src/, tests/]
# → Analyzes only those directories
```

Multiple projects
```bash
# Frontend project
mcp2cli config init --name frontend --transport stdio --stdio-command ./lsp-server
# Add roots: src/, components/, public/

# Backend project
mcp2cli config init --name backend --transport stdio --stdio-command ./lsp-server
# Add roots: cmd/, internal/, pkg/

# Each alias reports different roots
frontend analyze --scope all   # Server sees frontend roots
backend analyze --scope all    # Server sees backend roots
```

24. Task Lifecycle Management
Scenario: A server supports the MCP task system for long-running operations. You start a tool as a background task, monitor its progress, and retrieve the result later — even from a different terminal session.
Start a background task
```bash
work analyze-dataset --dataset q4-2025 --background
# → Task accepted (task-abc-123)
# Job created: job-1
```

When --background is used, mcp2cli sends _meta.task in the tool call request. If the server supports tasks, it returns a task ID immediately instead of blocking.
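Illustratively, the tool call request then carries a task marker alongside the normal arguments (the request id is arbitrary, and the contents of the task object, elided here, depend on the server's task extension):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "analyze-dataset",
    "arguments": { "dataset": "q4-2025" },
    "_meta": { "task": {} }
  }
}
```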
Check task status
```bash
work jobs show --latest
# Queries tasks/get on the server for live status:
# job-1: task-abc-123
# status: working
# message: "Processing 50,000 records..."
```

Wait for completion
```bash
work jobs wait --latest
# Polls tasks/get every 2 seconds
# When complete, calls tasks/result for the final output:
# status: completed
# result: { "records_processed": 50000, "errors": 0 }
```

Watch live status
```bash
work jobs watch --latest
# Polls every 1 second, prints each status change:
# [job-1] working — Loading dataset...
# [job-1] working — Processing records (25,000/50,000)...
# [job-1] working — Computing aggregates...
# [job-1] completed
```

Cancel a running task
```bash
work jobs cancel --latest
# Sends tasks/cancel to the server
# → task-abc-123 cancelled
```

Cross-session persistence
Tasks persist on disk. Start in one terminal, check from another:
```bash
# Terminal 1:
work train-model --epochs 100 --background

# Terminal 2 (later, even after Terminal 1 closed):
work jobs show --latest
work jobs wait --latest
```

CI/CD with tasks
```bash
# Start a long deployment as a task
work deploy --image myapp:2.0 --environment staging --background

# Poll until complete
work jobs wait --latest
STATUS=$(work --json jobs show --latest | jq -r '.data.status')
[ "$STATUS" = "completed" ] || exit 1
```

25. Request Cancellation
Scenario: You started a long-running tool call and need to abort it gracefully — notifying the server to stop work rather than just killing the connection.
Cancel from the CLI
When the server supports long-running operations, you can cancel in-flight requests:
```bash
# Start a long tool call
work process-batch --size 1000000

# Press Ctrl+C — mcp2cli sends notifications/cancelled to the server
# The server can then abort the operation gracefully
```
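On the wire this is the standard cancellation notification (the requestId and reason are illustrative):

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/cancelled",
  "params": { "requestId": 9, "reason": "user interrupted (SIGINT)" }
}
```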
Server-initiated cancellation

If the server cancels a pending request it sent to you (e.g. an elicitation or sampling request), mcp2cli handles the notifications/cancelled notification and logs the reason:

```text
[work] request cancelled by server: timeout exceeded
```

Programmatic cancellation
```bash
# Start a background job
work big-export --background
JOB_ID=$(work --json jobs list | jq -r '.data.jobs[-1].id')

# Cancel it
work jobs cancel "$JOB_ID"
# Sends tasks/cancel → notifications/cancelled to the server
```

26. Infrastructure Provisioning Server
Scenario: An infrastructure-as-code MCP server exposes provisioning, scaling, and monitoring tools. Operations can take minutes, making the task system essential.
Server capabilities
- Tools: `provision`, `scale`, `destroy`, `status`, `logs.tail`, `deploy.rollout`, `deploy.rollback`
- Resources: `infra://clusters`, `infra://cluster/{id}`, `infra://costs`
- Prompts: `incident-response`, `capacity-plan`
Usage
```bash
# Provision a new cluster (long-running → background task)
infra provision --region us-east-1 --size medium --background
infra jobs watch --latest
# → [job-1] working — Creating VPC...
# → [job-1] working — Launching instances...
# → [job-1] working — Configuring load balancer...
# → [job-1] completed

# Check cluster status
infra get "infra://cluster/cls-789"

# Scale up (quick operation)
infra scale --cluster cls-789 --replicas 5

# Rolling deployment
infra deploy rollout --cluster cls-789 --image myapp:3.0 --background
infra jobs watch --latest

# Emergency rollback
infra deploy rollback --cluster cls-789

# Subscribe to cost alerts
infra subscribe "infra://costs"
# → Receive notifications when costs exceed thresholds

# AI-assisted incident response
infra incident-response --cluster cls-789 --symptoms "high latency, 5xx errors"
```

27. Data Pipeline Orchestration
Scenario: A data engineering MCP server orchestrates ETL pipelines, transformations, and quality checks. Multiple long-running jobs run in parallel.
Server capabilities
- Tools: `pipeline.run`, `pipeline.status`, `transform`, `validate`, `export`
- Resources: `data://datasets`, `data://schema/{dataset}`
- Prompts: `generate-transform`, `diagnose-quality`
Usage
```bash
# Start multiple pipelines in parallel
data pipeline run --name ingest-sales --background
data pipeline run --name ingest-marketing --background
data pipeline run --name ingest-support --background

# Watch all jobs
data jobs list
# ID  Status     Name
# 1   working    ingest-sales
# 2   working    ingest-marketing
# 3   completed  ingest-support

# Wait for a specific job
data jobs wait 1

# Check schema
data get "data://schema/sales"

# Validate data quality
data validate --dataset sales --rules strict --background
data jobs watch --latest

# Subscribe to dataset changes for a downstream trigger
data subscribe "data://datasets/sales"

# Use server-side roots to scope operations
# Config:
#   roots:
#     - uri: "file:///data/warehouse"
#       name: "Data Warehouse"
data transform --input sales --output sales_enriched
```
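The parallel runs compose into a simple fan-out/fan-in script (a sketch using only commands shown above; the jq path mirrors the --json shapes used earlier):

```bash
#!/usr/bin/env bash
# Fan out: start every pipeline as a background job
for name in ingest-sales ingest-marketing ingest-support; do
  data pipeline run --name "$name" --background
done

# Fan in: wait for every recorded job to finish
data --json jobs list | jq -r '.data.jobs[].id' | while read -r id; do
  data jobs wait "$id"
done
```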