MCP Servers GitHub Sync Workflow

Iteratively fetch MCP servers from the registry API and sync them to a private GitHub repository

Workflow Information

ID: mcp_servers_sync_workflow

Namespace: mcp_sync

Version: 1

Created: 2025-08-06

Updated: 2025-08-06

Tasks: 6

Inputs
Name          Type     Required  Default
github_token  string   Required  <redacted>
target_repo   string   Required  Ampcome/mcp_servers
page_size     integer  Optional  10
update_mode   boolean  Optional  false
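
Scripts consume these inputs through ${...} template resolution: each placeholder is substituted with a literal value before the Python executes. A sketch of what a script effectively sees after resolution, using the defaults above (illustrative, not engine output):

    # Post-resolution view of the workflow inputs (illustrative values
    # taken from the defaults in the table above).
    page_size = 10                       # from ${page_size}
    target_repo = "Ampcome/mcp_servers"  # from ${target_repo}
    update_mode = False                  # from ${update_mode}
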
Outputs
Name             Type     Source                         Description
sync_summary     object   final_summary.summary          Complete sync operation summary
total_processed  integer  final_summary.total_processed  Total number of servers processed
Tasks
get_total_count
script

Fetch total count of MCP servers from registry API
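
The count comes from a probe request with limit=1; the response fields relied on (totalCount, page) are inferred from the script itself rather than from published API docs. A standalone sketch:

    import requests

    # Probe the registry with the smallest possible page to read the count.
    # Field names are inferred from the workflow script, not from API docs.
    resp = requests.get(
        "https://api.mcpservers.com/api/v1/mcp/registry",
        params={"page": 1, "limit": 1},
        headers={"Accept": "application/json", "User-Agent": "MCP-Sync-Workflow"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json().get("totalCount", 0))  # e.g. 291, the workflow's fallback value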

calculate_pages
script

Calculate how many pages are needed based on total count and page size
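
With the fallback count of 291 servers and the default page size of 10, that is ceil(291 / 10) = 30 pages, comfortably under the loop's 35-iteration cap:

    import math

    total_count = 291  # fallback count hard-coded in get_total_count
    page_size = 10     # default page_size input
    print(math.ceil(total_count / page_size))  # 30 pages (max_iterations is 35)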

check_existing_repos
script

Get list of existing folders in target GitHub repository

process_existing
script

Extract existing folder names for comparison

process_pages_loop
loop

Iteratively process each page of MCP servers

Loop Configuration
Type: for
Max Iterations: 35
Iterator Variable: page_number
State Variables: total_failed, total_skipped, failed_servers, skipped_servers, total_processed, total_successful, processed_servers
Loop Flow (2 steps):
1. Fetch Page Data (script)
2. Process Page Servers (script; reads and updates the loop state, as sketched below)
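
Each iteration reads the running totals from an injected loop_state mapping and publishes changes by printing a __STATE_UPDATES__ line, mirroring the __OUTPUTS__ convention every task uses. A minimal sketch of that round trip, assuming the engine injects loop_state and parses the sentinel lines exactly as the YAML source below implies:

    import json

    # Assumed to be injected by the workflow engine at the start of an iteration.
    loop_state = {"total_processed": 20, "processed_servers": []}

    # Read running totals, falling back to the declared initial values.
    total_processed = int(loop_state.get("total_processed", 0))
    processed_servers = loop_state.get("processed_servers", [])

    # ...process one page of ten servers...
    total_processed += 10
    processed_servers.append("example-server -> example_server")  # hypothetical entry

    # Publish per-iteration outputs and accumulated state as sentinel lines.
    print(f"__OUTPUTS__ {json.dumps({'page_successful': 10})}")
    print(f"__STATE_UPDATES__ {json.dumps({'total_processed': total_processed, 'processed_servers': processed_servers})}")
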
final_summary
script

Create comprehensive summary of sync operation

YAML Source
id: mcp_servers_sync_workflow
name: MCP Servers GitHub Sync Workflow
tasks:
- id: get_total_count
  name: Get MCP Registry Total Count
  type: script
  script: "import requests\nimport json\n\n# Make API call to get total count\nurl\
    \ = \"https://api.mcpservers.com/api/v1/mcp/registry\"\nparams = {\"page\": 1,\
    \ \"limit\": 1}\nheaders = {\n    \"Accept\": \"application/json\",\n    \"User-Agent\"\
    : \"MCP-Sync-Workflow\"\n}\n\ntry:\n    print(\"\U0001F50D Fetching total count\
    \ from MCP registry API...\")\n    response = requests.get(url, params=params,\
    \ headers=headers, timeout=30)\n    response.raise_for_status()\n    \n    data\
    \ = response.json()\n    total_count = data.get(\"totalCount\", 0)\n    current_page\
    \ = data.get(\"page\", 1)\n    \n    print(f\"\u2705 API Response successful\"\
    )\n    print(f\"\U0001F4CA Total MCP servers available: {total_count}\")\n   \
    \ print(f\"\U0001F4C4 Current page: {current_page}\")\n    \n    outputs = {\n\
    \        \"totalCount\": total_count,\n        \"page\": current_page,\n     \
    \   \"status\": \"success\"\n    }\n    \nexcept requests.exceptions.RequestException\
    \ as e:\n    print(f\"\u274C API request failed: {e}\")\n    # Fallback to known\
    \ count\n    outputs = {\n        \"totalCount\": 291,\n        \"page\": 1,\n\
    \        \"status\": \"fallback\"\n    }\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\"\
    )\n"
  description: Fetch total count of MCP servers from registry API
  timeout_seconds: 30
- id: calculate_pages
  name: Calculate Total Pages
  type: script
  script: "import math\n\n# Get total count and page size\ncount_result = ${get_total_count}\n\
    total_count = count_result.get(\"totalCount\", 0)\npage_size = ${page_size}\n\n\
    # Calculate total pages (ceiling division)\ntotal_pages = math.ceil(total_count\
    \ / page_size) if total_count > 0 else 0\n\nprint(f\"\U0001F4CA Total MCP servers:\
    \ {total_count}\")\nprint(f\"\U0001F4C4 Page size: {page_size}\")\nprint(f\"\U0001F4CB\
    \ Total pages needed: {total_pages}\")\n\n# Prepare outputs\noutputs = {\n   \
    \ \"total_count\": total_count,\n    \"page_size\": page_size,\n    \"total_pages\"\
    : total_pages\n}\n\nimport json\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\"\
    )\n"
  depends_on:
  - get_total_count
  description: Calculate how many pages are needed based on total count and page size
  timeout_seconds: 10
- id: check_existing_repos
  name: Check Existing GitHub Repositories
  type: script
  script: "import requests\nimport json\n\ngithub_token = \"${github_token}\"\ntarget_repo\
    \ = \"${target_repo}\"\n\nurl = f\"https://api.github.com/repos/{target_repo}/contents\"\
    \nheaders = {\n    \"Authorization\": f\"token {github_token}\",\n    \"Accept\"\
    : \"application/vnd.github.v3+json\",\n    \"User-Agent\": \"MCP-Sync-Workflow\"\
    \n}\n\ntry:\n    print(f\"\U0001F50D Checking existing folders in {target_repo}...\"\
    )\n    response = requests.get(url, headers=headers, timeout=30)\n    \n    if\
    \ response.status_code == 200:\n        contents = response.json()\n        #\
    \ Extract only what we need to avoid null values\n        clean_contents = []\n\
    \        for item in contents:\n            if item.get(\"type\") == \"dir\":\n\
    \                clean_contents.append({\n                    \"name\": item.get(\"\
    name\", \"\"),\n                    \"type\": \"dir\"\n                })\n  \
    \      \n        print(f\"\u2705 Successfully fetched repository contents\")\n\
    \        print(f\"\U0001F4C1 Found {len(clean_contents)} directories\")\n    \
    \    \n        outputs = {\n            \"contents\": clean_contents,\n      \
    \      \"status\": \"success\",\n            \"repo_exists\": 1\n        }\n \
    \   elif response.status_code == 404:\n        print(\"\u2139\uFE0F Repository\
    \ appears to be empty or private\")\n        outputs = {\n            \"contents\"\
    : [],\n            \"status\": \"empty\",\n            \"repo_exists\": 0\n  \
    \      }\n    else:\n        print(f\"\u26A0\uFE0F Unexpected status code: {response.status_code}\"\
    )\n        outputs = {\n            \"contents\": [],\n            \"status\"\
    : \"error\",\n            \"repo_exists\": 0\n        }\n        \nexcept requests.exceptions.RequestException\
    \ as e:\n    print(f\"\u274C Failed to check repository: {e}\")\n    outputs =\
    \ {\n        \"contents\": [],\n        \"status\": \"error\",\n        \"repo_exists\"\
    : 0\n    }\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\")\n"
  depends_on:
  - calculate_pages
  description: Get list of existing folders in target GitHub repository
  timeout_seconds: 30
- id: process_existing
  name: Process Existing Folders List
  type: script
  script: "import json\n\n# Get existing repo contents (now clean without null values)\n\
    existing_result = ${check_existing_repos}\ncontents = existing_result.get(\"contents\"\
    , [])\n\n# Extract folder names (already filtered for directories only)\nexisting_folders\
    \ = []\nfor item in contents:\n    folder_name = item.get(\"name\", \"\")\n  \
    \  if folder_name:\n        existing_folders.append(folder_name)\n\nprint(f\"\U0001F4C1\
    \ Found {len(existing_folders)} existing folders:\")\nfor folder in sorted(existing_folders):\n\
    \    print(f\"  - {folder}\")\n\noutputs = {\n    \"existing_folders\": existing_folders,\n\
    \    \"existing_count\": len(existing_folders)\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\"\
    )\n"
  depends_on:
  - check_existing_repos
  description: Extract existing folder names for comparison
  timeout_seconds: 10
- id: process_pages_loop
  name: Process MCP Servers Pages
  type: loop
  loop_type: for
  depends_on:
  - process_existing
  - calculate_pages
  loop_tasks:
  - id: fetch_page_data
    name: Fetch Page Data
    type: script
    script: "import requests\nimport json\n\n# Access loop iteration variable (automatically\
      \ injected)\npage_number = ${page_number}\n# Access workflow inputs via template\
      \ resolution  \npage_size = ${page_size}\n\nurl = \"https://api.mcpservers.com/api/v1/mcp/registry\"\
      \nparams = {\"page\": page_number, \"limit\": page_size}\nheaders = {\n    \"\
      Accept\": \"application/json\",\n    \"User-Agent\": \"MCP-Sync-Workflow\"\n\
      }\n\ntry:\n    print(f\"\U0001F50D Fetching page {page_number} with {page_size}\
      \ servers...\")\n    response = requests.get(url, params=params, headers=headers,\
      \ timeout=30)\n    response.raise_for_status()\n    \n    data = response.json()\n\
      \    servers = data.get(\"data\", [])\n    current_page = data.get(\"page\"\
      , page_number)\n    \n    print(f\"\u2705 Successfully fetched {len(servers)}\
      \ servers from page {current_page}\")\n    \n    # Check if we've reached the\
      \ end of available data\n    if len(servers) == 0:\n        print(f\"\U0001F4C4\
      \ No more servers available, reached end of data at page {current_page}\")\n\
      \        outputs = {\n            \"servers\": [],\n            \"current_page\"\
      : current_page,\n            \"servers_count\": 0,\n            \"status\":\
      \ \"no_more_data\"  # Special status to indicate end of data\n        }\n  \
      \  else:\n        outputs = {\n            \"servers\": servers,\n         \
      \   \"current_page\": current_page,\n            \"servers_count\": len(servers),\n\
      \            \"status\": \"success\"\n        }\n    \nexcept requests.exceptions.RequestException\
      \ as e:\n    print(f\"\u274C Failed to fetch page {page_number}: {e}\")\n  \
      \  outputs = {\n        \"servers\": [],\n        \"current_page\": page_number,\n\
      \        \"servers_count\": 0,\n        \"status\": \"error\"\n    }\n\nprint(f\"\
      __OUTPUTS__ {json.dumps(outputs)}\")\n"
    description: Fetch MCP servers for current page
    timeout_seconds: 30
  - id: process_page_servers
    name: Process Page Servers
    type: script
    script: "import json\nimport requests\nimport time\nfrom urllib.parse import urlparse\n\
      import base64\n\n# Access flattened outputs from previous loop task (fetch_page_data)\n\
      page_data = inputs.get('servers', [])\ncurrent_page = inputs.get('current_page',\
      \ ${page_number})\n\n# Access workflow inputs via template resolution\ngithub_token\
      \ = \"${github_token}\"\ntarget_repo = \"${target_repo}\"\nupdate_mode = ${update_mode}\n\
      \n# Access existing folders from workflow task (external dependency)\nprocess_existing_result\
      \ = ${process_existing}\nexisting_folders = set(process_existing_result.get('existing_folders',\
      \ []))\n\n# Access state variables - ensure they are integers\ntotal_processed\
      \ = int(loop_state.get('total_processed', 0))\ntotal_successful = int(loop_state.get('total_successful',\
      \ 0))\ntotal_failed = int(loop_state.get('total_failed', 0))\ntotal_skipped\
      \ = int(loop_state.get('total_skipped', 0))\nprocessed_servers = loop_state.get('processed_servers',\
      \ [])\nfailed_servers = loop_state.get('failed_servers', [])\nskipped_servers\
      \ = loop_state.get('skipped_servers', [])\n\n# Get fetch status to check if\
      \ we should continue\nfetch_status = inputs.get('status', 'unknown')\n\nprint(f\"\
      \U0001F504 Processing page {current_page} with {len(page_data)} servers\")\n\
      print(f\"\U0001F4CA Fetch status: {fetch_status}\")\n\n# Skip processing if\
      \ no more data available\nif fetch_status == \"no_more_data\" or len(page_data)\
      \ == 0:\n    print(f\"\u23ED\uFE0F Skipping page {current_page} - no more servers\
      \ to process\")\n    outputs = {\n        \"page\": current_page,\n        \"\
      servers_in_page\": 0,\n        \"page_successful\": 0,\n        \"page_failed\"\
      : 0,\n        \"page_skipped\": 0,\n        \"status\": \"no_more_data\"\n \
      \   }\n    print(f\"__OUTPUTS__ {json.dumps(outputs)}\")\n    # Don't update\
      \ loop state when no data to process\n\n# Debug: Print available data\nprint(f\"\
      \U0001F50D DEBUG - Available inputs keys: {list(inputs.keys())}\")\nprint(f\"\
      \U0001F50D DEBUG - Loop state keys: {list(loop_state.keys())}\")\nprint(f\"\U0001F50D\
      \ DEBUG - Page data type: {type(page_data)}, length: {len(page_data)}\")\nprint(f\"\
      \U0001F50D DEBUG - Existing folders count: {len(existing_folders)}\")\nprint(f\"\
      \U0001F50D DEBUG - GitHub token present: {'yes' if github_token else 'no'}\"\
      )\nprint(f\"\U0001F50D DEBUG - Target repo: {target_repo}\")\n\n# GitHub API\
      \ helper functions (from github_api_mcp_sync.py)\ndef get_repo_contents(owner,\
      \ repo, path=None, ref='main'):\n    url = f\"https://api.github.com/repos/{owner}/{repo}/contents/{path\
      \ or ''}\"\n    params = {'ref': ref}\n    headers = {\n        'Authorization':\
      \ f'token {github_token}',\n        'Accept': 'application/vnd.github.v3+json',\n\
      \        'User-Agent': 'MCP-Sync-Workflow'\n    }\n    \n    response = requests.get(url,\
      \ headers=headers, params=params)\n    if response.status_code == 200:\n   \
      \     return response.json()\n    elif response.status_code == 404:\n      \
      \  # Try master branch if main doesn't exist\n        params = {'ref': 'master'}\n\
      \        response = requests.get(url, headers=headers, params=params)\n    \
      \    if response.status_code == 200:\n            return response.json()\n \
      \   return None\n\ndef download_file_content(download_url):\n    response =\
      \ requests.get(download_url)\n    if response.status_code == 200:\n        return\
      \ response.content\n    return None\n\ndef create_or_update_file(path, content,\
      \ message, sha=None):\n    url = f\"https://api.github.com/repos/{target_repo}/contents/{path}\"\
      \n    headers = {\n        'Authorization': f'token {github_token}',\n     \
      \   'Accept': 'application/vnd.github.v3+json',\n        'User-Agent': 'MCP-Sync-Workflow'\n\
      \    }\n    \n    # Encode content to base64\n    if isinstance(content, str):\n\
      \        content = content.encode('utf-8')\n    content_b64 = base64.b64encode(content).decode('utf-8')\n\
      \    \n    data = {\n        'message': message,\n        'content': content_b64\n\
      \    }\n    \n    if sha:\n        data['sha'] = sha\n        \n    response\
      \ = requests.put(url, headers=headers, json=data)\n    return response.status_code\
      \ in [200, 201], response.json()\n\ndef get_file_sha(path):\n    url = f\"https://api.github.com/repos/{target_repo}/contents/{path}\"\
      \n    headers = {\n        'Authorization': f'token {github_token}',\n     \
      \   'Accept': 'application/vnd.github.v3+json',\n        'User-Agent': 'MCP-Sync-Workflow'\n\
      \    }\n    \n    response = requests.get(url, headers=headers)\n    if response.status_code\
      \ == 200:\n        return response.json().get('sha')\n    return None\n\ndef\
      \ copy_repository_contents(source_owner, source_repo, target_folder):\n    print(f\"\
      \  \U0001F4E5 Fetching contents of {source_owner}/{source_repo}...\")\n    \n\
      \    def copy_directory(path=None, target_path=None):\n        contents = get_repo_contents(source_owner,\
      \ source_repo, path)\n        if not contents:\n            print(f\"    Failed\
      \ to fetch contents for path: {path}\")\n            return False\n        \
      \    \n        success_count = 0\n        total_count = len(contents) if isinstance(contents,\
      \ list) else 1\n        \n        # Handle single file response\n        if\
      \ not isinstance(contents, list):\n            contents = [contents]\n     \
      \   \n        for item in contents:\n            item_name = item['name']\n\
      \            item_path = f\"{path}/{item_name}\" if path else item_name\n  \
      \          target_item_path = f\"{target_path}/{item_name}\" if target_path\
      \ else item_name\n            \n            if item['type'] == 'file':\n   \
      \             print(f\"    Copying file: {item_path}\")\n                \n\
      \                # Download file content\n                file_content = download_file_content(item['download_url'])\n\
      \                if file_content:\n                    # Check if file exists\
      \ in target\n                    existing_sha = get_file_sha(target_item_path)\n\
      \                    \n                    # Create or update file\n       \
      \             success, result = create_or_update_file(\n                   \
      \     target_item_path,\n                        file_content,\n           \
      \             f\"Add/update {item_path} from {source_owner}/{source_repo}\"\
      ,\n                        existing_sha\n                    )\n           \
      \         \n                    if success:\n                        success_count\
      \ += 1\n                    else:\n                        print(f\"    Failed\
      \ to create/update {target_item_path}: {result}\")\n                else:\n\
      \                    print(f\"    Failed to download {item_path}\")\n      \
      \              \n            elif item['type'] == 'dir':\n                print(f\"\
      \    Processing directory: {item_path}\")\n                # Recursively copy\
      \ directory contents\n                if copy_directory(item_path, target_item_path):\n\
      \                    success_count += 1\n            \n            # Rate limiting\
      \ - be nice to GitHub API\n            time.sleep(0.1)\n        \n        return\
      \ success_count == total_count\n    \n    return copy_directory(target_path=target_folder)\n\
      \n# Process each server in current page (from github_api_mcp_sync.py logic)\n\
      page_successful = 0\npage_failed = 0\npage_skipped = 0\n\nfor i, server in enumerate(page_data,\
      \ 1):\n    name = server.get(\"name\", \"Unknown\")\n    github_url = server.get(\"\
      githubUrl\") or server.get(\"github_url\", \"\")\n    \n    if not github_url:\n\
      \        print(f\"  [{i}/{len(page_data)}] Skipping {name} - No GitHub URL\"\
      )\n        failed_servers.append(f\"{name}: No GitHub URL found\")\n       \
      \ page_failed += 1\n        continue\n    \n    print(f\"  [{i}/{len(page_data)}]\
      \ Processing {name}...\")\n    \n    # Parse GitHub URL\n    parsed = urlparse(github_url)\n\
      \    path_parts = parsed.path.strip('/').split('/')\n    if len(path_parts)\
      \ >= 2:\n        owner = path_parts[0]\n        repo = path_parts[1]\n    else:\n\
      \        print(f\"    \u274C Invalid GitHub URL: {github_url}\")\n        failed_servers.append(f\"\
      {name}: Invalid GitHub URL\")\n        page_failed += 1\n        continue\n\
      \    \n    # Create safe folder name\n    folder_name = name.lower().replace(\"\
      \ \", \"_\").replace(\"-\", \"_\")\n    \n    # Check if already exists\n  \
      \  if folder_name in existing_folders:\n        if update_mode:\n          \
      \  print(f\"    \U0001F504 Updating existing folder: {folder_name}\")\n    \
      \        success = copy_repository_contents(owner, repo, folder_name)\n    \
      \        if success:\n                processed_servers.append(f\"{name} ->\
      \ {folder_name} (updated)\")\n                page_successful += 1\n       \
      \     else:\n                failed_servers.append(f\"{name}: Failed to update\
      \ via API\")\n                page_failed += 1\n        else:\n            print(f\"\
      \    \u27A1\uFE0F Skipping {name} - folder exists\")\n            skipped_servers.append(f\"\
      {name} -> {folder_name}\")\n            page_skipped += 1\n    else:\n     \
      \   print(f\"    \u2795 Creating new folder: {folder_name}\")\n        success\
      \ = copy_repository_contents(owner, repo, folder_name)\n        if success:\n\
      \            processed_servers.append(f\"{name} -> {folder_name}\")\n      \
      \      existing_folders.add(folder_name)  # Update cache\n            page_successful\
      \ += 1\n        else:\n            failed_servers.append(f\"{name}: Failed to\
      \ sync via API\")\n            page_failed += 1\n    \n    # Rate limiting between\
      \ servers\n    time.sleep(1)\n\n# Update state variables\nstate_updates = {}\n\
      state_updates['total_processed'] = total_processed + len(page_data)\nstate_updates['total_successful']\
      \ = total_successful + page_successful\nstate_updates['total_failed'] = total_failed\
      \ + page_failed  \nstate_updates['total_skipped'] = total_skipped + page_skipped\n\
      state_updates['processed_servers'] = processed_servers\nstate_updates['failed_servers']\
      \ = failed_servers\nstate_updates['skipped_servers'] = skipped_servers\n\n#\
      \ Output page results\noutputs = {\n    \"page\": current_page,\n    \"servers_in_page\"\
      : len(page_data),\n    \"page_successful\": page_successful,\n    \"page_failed\"\
      : page_failed,\n    \"page_skipped\": page_skipped\n}\n\nprint(f\"\U0001F4CA\
      \ Page {current_page} Summary:\")\nprint(f\"  \u2705 Successful: {page_successful}\"\
      )\nprint(f\"  \u274C Failed: {page_failed}\")\nprint(f\"  \u27A1\uFE0F Skipped:\
      \ {page_skipped}\")\nprint(f\"  \U0001F4CB Total so far: {state_updates['total_processed']}\
      \ processed\")\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\")\nprint(f\"__STATE_UPDATES__\
      \ {json.dumps(state_updates)}\")\n"
    depends_on:
    - fetch_page_data
    description: Process all servers in current page
    timeout_seconds: 900
  description: Iteratively process each page of MCP servers
  max_iterations: 35
  state_variables:
    total_failed: 0
    total_skipped: 0
    failed_servers: []
    skipped_servers: []
    total_processed: 0
    total_successful: 0
    processed_servers: []
  timeout_seconds: 5400
  iteration_variable: page_number
- id: final_summary
  name: Generate Final Summary
  type: script
  script: "import json\nfrom datetime import datetime\n\n# Get loop results using\
    \ template resolution (outside loop context)\nloop_results = ${process_pages_loop}\n\
    calculate_results = ${calculate_pages}\n\n# Extract final state\nfinal_state =\
    \ loop_results.get(\"final_state\", {})\niterations_completed = loop_results.get(\"\
    iterations_completed\", 0)\n\n# Get totals\ntotal_processed = final_state.get(\"\
    total_processed\", 0)\ntotal_successful = final_state.get(\"total_successful\"\
    , 0) \ntotal_failed = final_state.get(\"total_failed\", 0)\ntotal_skipped = final_state.get(\"\
    total_skipped\", 0)\n\nprocessed_servers = final_state.get(\"processed_servers\"\
    , [])\nfailed_servers = final_state.get(\"failed_servers\", [])\nskipped_servers\
    \ = final_state.get(\"skipped_servers\", [])\n\n# Calculate metrics\nsuccess_rate\
    \ = (total_successful / total_processed * 100) if total_processed > 0 else 0\n\
    \n# Create comprehensive summary\nsummary = {\n    \"execution_summary\": {\n\
    \        \"workflow_id\": \"${workflow_id}\",\n        \"execution_id\": \"${execution_id}\"\
    ,\n        \"completed_at\": datetime.utcnow().isoformat(),\n        \"total_servers_available\"\
    : calculate_results.get(\"total_count\", 0),\n        \"pages_processed\": iterations_completed,\n\
    \        \"total_pages_planned\": calculate_results.get(\"total_pages\", 0)\n\
    \    },\n    \"processing_results\": {\n        \"total_processed\": total_processed,\n\
    \        \"successful\": total_successful,\n        \"failed\": total_failed,\n\
    \        \"skipped\": total_skipped,\n        \"success_rate_percent\": round(success_rate,\
    \ 2)\n    },\n    \"detailed_results\": {\n        \"successful_syncs\": processed_servers,\n\
    \        \"failed_syncs\": failed_servers,\n        \"skipped_syncs\": skipped_servers\n\
    \    }\n}\n\nprint(\"\U0001F389 MCP SERVERS SYNC COMPLETED!\")\nprint(\"=\" *\
    \ 60)\nprint(f\"\U0001F4CA FINAL SUMMARY:\")\nprint(f\"  \u2022 Total servers\
    \ available: {calculate_results.get('total_count', 0)}\")\nprint(f\"  \u2022 Pages\
    \ processed: {iterations_completed}/{calculate_results.get('total_pages', 0)}\"\
    )\nprint(f\"  \u2022 Servers processed: {total_processed}\")\nprint(f\"  \u2022\
    \ \u2705 Successful: {total_successful}\")\nprint(f\"  \u2022 \u274C Failed: {total_failed}\"\
    )\nprint(f\"  \u2022 \u27A1\uFE0F Skipped: {total_skipped}\")\nprint(f\"  \u2022\
    \ \U0001F4C8 Success rate: {success_rate:.1f}%\")\nprint(\"=\" * 60)\n\nif processed_servers:\n\
    \    print(f\"\u2705 SUCCESSFULLY SYNCED ({len(processed_servers)}):\")\n    for\
    \ server in processed_servers[:10]:  # Show first 10\n        print(f\"  \u2022\
    \ {server}\")\n    if len(processed_servers) > 10:\n        print(f\"  ... and\
    \ {len(processed_servers) - 10} more\")\n\nif failed_servers:\n    print(f\"\u274C\
    \ FAILED SYNCS ({len(failed_servers)}):\")\n    for server in failed_servers[:5]:\
    \  # Show first 5\n        print(f\"  \u2022 {server}\")\n    if len(failed_servers)\
    \ > 5:\n        print(f\"  ... and {len(failed_servers) - 5} more\")\n\noutputs\
    \ = {\n    \"summary\": summary,\n    \"total_processed\": total_processed,\n\
    \    \"success_rate\": success_rate,\n    \"pages_completed\": iterations_completed\n\
    }\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\")"
  depends_on:
  - process_pages_loop
  - calculate_pages
  description: Create comprehensive summary of sync operation
  timeout_seconds: 30
inputs:
- name: github_token
  type: string
  default: '<redacted>'
  required: true
  description: GitHub access token for private repository access
- name: target_repo
  type: string
  default: Ampcome/mcp_servers
  required: true
  description: Target GitHub repository (owner/repo format)
- name: page_size
  type: integer
  default: 10
  required: false
  description: Number of servers to fetch per page
- name: update_mode
  type: boolean
  default: false
  required: false
  description: Whether to update existing repositories
outputs:
  sync_summary:
    type: object
    source: final_summary.summary
    description: Complete sync operation summary
  total_processed:
    type: integer
    source: final_summary.total_processed
    description: Total number of servers processed
version: 1
namespace: mcp_sync
description: Iteratively fetch MCP servers from the registry API and sync them to a private GitHub repository
timeout_seconds: 7200
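
Every task script reports results by printing sentinel-prefixed JSON lines (__OUTPUTS__, plus __STATE_UPDATES__ inside the loop). The engine's consumer side is not part of this source; a parser consistent with the convention might look like the following sketch (not the engine's actual implementation):

    import json

    def parse_sentinels(stdout: str):
        """Extract the last __OUTPUTS__ / __STATE_UPDATES__ payloads from a
        task's stdout; the convention is inferred from the scripts above."""
        outputs, state_updates = {}, {}
        for line in stdout.splitlines():
            if line.startswith("__OUTPUTS__ "):
                outputs = json.loads(line[len("__OUTPUTS__ "):])
            elif line.startswith("__STATE_UPDATES__ "):
                state_updates = json.loads(line[len("__STATE_UPDATES__ "):])
        return outputs, state_updates

    outs, state = parse_sentinels('__OUTPUTS__ {"status": "success"}')
    print(outs)  # {'status': 'success'}
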
Executions

Execution ID  Status  Started              Duration
6a6d57c7...   FAILED  2025-08-16 04:00:00  N/A
0478abe9...   FAILED  2025-08-16 03:00:00  N/A
a3659ada...   FAILED  2025-08-16 02:00:00  N/A
59e8f572...   FAILED  2025-08-16 01:00:00  N/A
9b6595b9...   FAILED  2025-08-16 00:00:00  N/A
88673285...   FAILED  2025-08-15 23:00:00  N/A
1c430385...   FAILED  2025-08-15 22:00:00  N/A
b93eca13...   FAILED  2025-08-15 21:00:00  N/A
3e14cc9c...   FAILED  2025-08-15 20:00:00  N/A
cd79671d...   FAILED  2025-08-15 19:00:00  N/A
3ad48288...   FAILED  2025-08-15 18:00:00  N/A
e97f7ee9...   FAILED  2025-08-15 17:00:00  N/A
b4a2c620...   FAILED  2025-08-15 16:00:00  N/A
c07eda3c...   FAILED  2025-08-15 15:00:00  N/A
13629bb6...   FAILED  2025-08-15 14:00:00  N/A
85051f27...   FAILED  2025-08-15 13:00:00  N/A
745770f0...   FAILED  2025-08-15 12:00:00  N/A
7746d9f3...   FAILED  2025-08-15 11:00:00  N/A
ac55979a...   FAILED  2025-08-15 10:00:00  N/A
13b32f79...   FAILED  2025-08-15 09:00:00  N/A