Technical SEO Audit
Comprehensive technical SEO audit workflow using DataForSEO for website analysis, ranking assessment, backlink profiling, and optimization recommendations
Workflow Information
ID: technical_seo_audit
Namespace: default
Version: 2.0
Created: 2025-07-06
Updated: 2025-07-07
Tasks: 13
Inputs
| Name | Type | Required | Default |
|---|---|---|---|
| target_domain | string | Optional | example.com |
| audit_depth | string | Optional | standard |
| dataforseo_login | string | Optional | None |
| dataforseo_password | string | Optional | None |
| location | string | Optional | United States |
| language | string | Optional | en |
| crawl_pages | integer | Optional | 500 |
| competitor_count | integer | Optional | 5 |
| include_backlinks | string | Optional | yes |
| urgency | string | Optional | medium |
Outputs
| Name | Type | Description |
|---|---|---|
| report_data | object | Complete audit report data |
| audit_status | string | Overall audit completion status |
| recommendations | string | AI-generated recommendations |
| performance_metrics | object | Key performance indicators |
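
The report_data output follows the structure assembled by the generate_report task. A sketch of its shape, with illustrative placeholder values rather than real audit results:

```python
# Illustrative shape of report_data as built by generate_report;
# all values below are placeholders.
example_report_data = {
    "audit_summary": {
        "domain": "example.com",
        "audit_id": "audit_example.com_20250101_000000",
        "timestamp": "2025-01-01T00:00:00",
        "audit_depth": "standard",
        "audit_path": "standard",
        "location": "United States",
        "language": "en",
    },
    "performance_metrics": {
        "ranking_keywords": 0,
        "top_10_rankings": 0,
        "average_position": 0,
        "referring_domains": "N/A",
        "total_backlinks": "N/A",
    },
    "crawl_summary": {},       # output of whichever crawl task ran
    "ai_recommendations": "",  # text produced by ai_seo_analysis
    "execution_details": {
        "route_taken": "standard",
        "completion_time": "2025-01-01T00:00:00",
    },
}
```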
Tasks
- audit_strategy_router (conditional_router): Route to the appropriate audit strategy based on inputs. Router type: condition; default route: standard_path. A sketch of the routing logic follows this list.
- initialize_audit (script): Initialize Technical Audit. Validates the domain and builds the DataForSEO auth header.
- premium_crawl_task (script): Premium Website Crawl (Live API)
- comprehensive_crawl_task (script): Comprehensive Website Crawl
- standard_crawl_task (script): Standard Website Crawl
- quick_audit_task (script): Quick SEO Check
- analyze_serp_rankings (script): Analyze Current SERP Rankings
- backlink_router (conditional_router): Route based on backlink analysis preference. Router type: condition; default route: skip_backlinks_path.
- analyze_backlinks (script): Analyze Backlink Profile
- skip_backlinks (script): Skip Backlink Analysis
- wait_for_crawls (script): Wait for Crawl Completion. Synchronization point before AI analysis.
- ai_seo_analysis (ai_agent): AI-Powered SEO Analysis
- generate_report (script): Generate Final SEO Audit Report
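
The conditional routers evaluate their conditions in order and take the first route that matches, falling back to the declared default. A minimal Python sketch of how audit_strategy_router resolves; the engine's actual evaluator may differ, but the conditions mirror the YAML source below:

```python
def resolve_audit_route(audit_depth: str, urgency: str, crawl_pages: int) -> str:
    """First matching condition wins; otherwise default_route applies."""
    if audit_depth == "comprehensive" and urgency == "high":
        return "premium_path"        # premium_audit
    if audit_depth == "comprehensive":
        return "comprehensive_path"  # comprehensive_audit
    if audit_depth == "quick":
        return "quick_path"          # quick_audit
    if crawl_pages > 5000:
        return "enterprise_path"     # enterprise_audit
    return "standard_path"           # default_route

assert resolve_audit_route("comprehensive", "high", 500) == "premium_path"
assert resolve_audit_route("standard", "medium", 500) == "standard_path"
```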
Triggers
- Manual Trigger: Manual Audit Trigger
- Schedule Trigger: Scheduled Monthly Audit (cron 0 2 1 * *, UTC: runs at 02:00 on the first of each month)
- Webhook Trigger: API Webhook Trigger (POST /webhook/seo-audit). An example call follows this list.
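
A minimal example of firing the webhook trigger with Python's requests. The host and bearer token are placeholders; the domain and depth fields map to target_domain and audit_depth per the trigger's input mapping:

```python
import requests

# Placeholder host and token; the trigger expects a JSON POST with
# bearer authentication (see the trigger config in the YAML source).
resp = requests.post(
    "https://your-workflow-host/webhook/seo-audit",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={"domain": "example.com", "depth": "standard"},
)
print(resp.status_code, resp.text)
```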
YAML Source
id: technical_seo_audit_advanced
name: Advanced Technical SEO Audit with Dynamic Routing
tasks:
- id: audit_strategy_router
type: conditional_router
conditions:
- name: premium_audit
route: premium_path
condition: ${audit_depth} == 'comprehensive' && ${urgency} == 'high'
- name: comprehensive_audit
route: comprehensive_path
condition: ${audit_depth} == 'comprehensive'
- name: quick_audit
route: quick_path
condition: ${audit_depth} == 'quick'
- name: enterprise_audit
route: enterprise_path
condition: ${crawl_pages} > 5000
description: Route to appropriate audit strategy based on inputs
default_route: standard_path
- id: initialize_audit
name: Initialize Technical Audit
type: script
script: "import json\nimport base64\nfrom datetime import datetime\n\nprint(\"\U0001F680\
\ Initializing Technical SEO Audit\")\nprint(\"=\" * 50)\n\n# Use template resolution\
\ for inputs\ndomain = '''${target_domain}'''.strip()\naudit_depth = '''${audit_depth}'''\n\
urgency = '''${urgency}'''\n\n# Validate domain format\nif not domain or '/' in\
\ domain or 'http' in domain:\n print(\"\u274C Invalid domain format\")\n \
\ print('__OUTPUTS__ {\"status\": \"error\", \"message\": \"Invalid domain format.\
\ Use format: example.com\"}')\n exit(1)\n\n# Create auth header for DataForSEO\n\
login = '''${dataforseo_login}'''\npassword = '''${dataforseo_password}'''\n\n\
if not login or not password:\n print(\"\u274C Missing DataForSEO credentials\"\
)\n print('__OUTPUTS__ {\"status\": \"error\", \"message\": \"DataForSEO credentials\
\ required\"}')\n exit(1)\n\ncredentials = f\"{login}:{password}\"\nauth_header\
\ = base64.b64encode(credentials.encode()).decode()\n\n# Initialize audit data\n\
audit_data = {\n \"domain\": domain,\n \"audit_id\": f\"audit_{domain}_{datetime.now().strftime('%Y%m%d_%H%M%S')}\"\
,\n \"start_time\": datetime.now().isoformat(),\n \"auth_header\": auth_header,\n\
\ \"location\": '''${location}''',\n \"language\": '''${language}''',\n\
\ \"audit_depth\": audit_depth,\n \"urgency\": urgency,\n \"route_context\"\
: '''${_route_context}'''\n}\n\nprint(f\"\u2705 Audit initialized for: {domain}\"\
)\nprint(f\"\U0001F4CA Audit depth: {audit_depth}\")\nprint(f\"\u26A1 Urgency:\
\ {urgency}\")\nprint(f\"\U0001F4CD Location: {audit_data['location']}\")\n\n\
print(f\"__OUTPUTS__ {json.dumps(audit_data)}\")\n"
depends_on:
- audit_strategy_router
- id: premium_crawl_task
name: Premium Website Crawl (Live API)
type: script
script: "import requests\nimport json\nimport time\nimport base64\n\nprint(\"\U0001F48E\
\ Starting PREMIUM crawl with live API...\")\n\n# Get initialization data with\
\ error handling\ninit_data_str = '''${initialize_audit}'''\n\nif not init_data_str.startswith('UNRESOLVED_'):\n\
\ try:\n init_data = json.loads(init_data_str)\n auth_header\
\ = init_data['auth_header']\n except:\n # Fallback\n login =\
\ '''${dataforseo_login}'''\n password = '''${dataforseo_password}'''\n\
\ credentials = f\"{login}:{password}\"\n auth_header = base64.b64encode(credentials.encode()).decode()\n\
else:\n # Direct auth creation\n login = '''${dataforseo_login}'''\n \
\ password = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\
\n auth_header = base64.b64encode(credentials.encode()).decode()\n\ndomain\
\ = '''${target_domain}'''\nmax_pages = int('''${crawl_pages}''')\n\n# Use Live\
\ On-Page API for premium audit\nurl = \"https://api.dataforseo.com/v3/on_page/instant_pages\"\
\n\nheaders = {\n \"Authorization\": f\"Basic {auth_header}\",\n \"Content-Type\"\
: \"application/json\"\n}\n\n# Get instant page analysis\npayload = [{\n \"\
url\": f\"https://{domain}\",\n \"enable_javascript\": True,\n \"custom_js\"\
: \"window.scrollTo(0, document.body.scrollHeight);\"\n}]\n\ntry:\n response\
\ = requests.post(url, headers=headers, json=payload)\n result = response.json()\n\
\ \n crawl_data = {\n \"status\": \"premium_analysis\",\n \
\ \"method\": \"live_api\",\n \"pages_analyzed\": 1,\n \"instant_insights\"\
: []\n }\n \n if result.get(\"status_code\") == 20000:\n page_data\
\ = result[\"tasks\"][0][\"result\"][0]\n crawl_data[\"instant_insights\"\
] = {\n \"onpage_score\": page_data.get(\"onpage_score\", 0),\n \
\ \"meta\": page_data.get(\"meta\", {}),\n \"page_timing\"\
: page_data.get(\"page_timing\", {}),\n \"checks\": page_data.get(\"\
checks\", {})\n }\n \n print(f\"\u2705 Premium crawl completed\"\
)\n print(f\"__OUTPUTS__ {json.dumps(crawl_data)}\")\nexcept Exception as e:\n\
\ print(f\"\u274C Error: {str(e)}\")\n print('__OUTPUTS__ {\"status\": \"\
error\", \"message\": \"Premium crawl failed\"}')\n"
depends_on:
- initialize_audit
execute_on_routes:
- premium_path
- id: comprehensive_crawl_task
name: Comprehensive Website Crawl
type: script
script: "import requests\nimport json\nimport time\n\nprint(\"\U0001F50D Starting\
\ COMPREHENSIVE crawl...\")\n\n# Get initialization data\ninit_data = json.loads('''${initialize_audit}''')\n\
auth_header = init_data['auth_header']\ndomain = '''${target_domain}'''\nmax_pages\
\ = int('''${crawl_pages}''')\n\n# Start comprehensive crawl task\nurl = \"https://api.dataforseo.com/v3/on_page/task_post\"\
\n\nheaders = {\n \"Authorization\": f\"Basic {auth_header}\",\n \"Content-Type\"\
: \"application/json\"\n}\n\npayload = [{\n \"target\": domain,\n \"max_crawl_pages\"\
: max_pages,\n \"crawl_delay\": 2,\n \"enable_javascript\": True,\n \"\
enable_browser_rendering\": True,\n \"load_resources\": True,\n \"custom_js\"\
: \"window.scrollTo(0, document.body.scrollHeight);\",\n \"tag\": \"comprehensive_audit\"\
\n}]\n\nresponse = requests.post(url, headers=headers, json=payload)\nresult =\
\ response.json()\n\nif result.get(\"status_code\") == 20000:\n task_id = result[\"\
tasks\"][0][\"id\"]\n print(f\"\u2705 Crawl task created: {task_id}\")\n \
\ print(f'__OUTPUTS__ {{\"status\": \"started\", \"task_id\": \"{task_id}\",\
\ \"max_pages\": {max_pages}}}')\nelse:\n print(f\"\u274C Failed to start crawl\"\
)\n print(f'__OUTPUTS__ {{\"status\": \"error\", \"message\": \"{result.get(\"\
status_message\", \"Unknown error\")}\"}}')\n"
depends_on:
- initialize_audit
execute_on_routes:
- comprehensive_path
- enterprise_path
- id: standard_crawl_task
name: Standard Website Crawl
type: script
script: "import requests\nimport json\nimport base64\n\nprint(\"\U0001F4CA Starting\
\ STANDARD crawl...\")\n\n# Get initialization data with error handling\ninit_data_str\
\ = '''${initialize_audit}'''\n\nif not init_data_str.startswith('UNRESOLVED_'):\n\
\ try:\n init_data = json.loads(init_data_str)\n auth_header\
\ = init_data['auth_header']\n except:\n login = '''${dataforseo_login}'''\n\
\ password = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\
\n auth_header = base64.b64encode(credentials.encode()).decode()\nelse:\n\
\ login = '''${dataforseo_login}'''\n password = '''${dataforseo_password}'''\n\
\ credentials = f\"{login}:{password}\"\n auth_header = base64.b64encode(credentials.encode()).decode()\n\
\ndomain = '''${target_domain}'''\n\n# Use standard crawl with fewer pages\nmax_pages\
\ = min(int('''${crawl_pages}'''), 100)\n\nurl = \"https://api.dataforseo.com/v3/on_page/task_post\"\
\n\nheaders = {\n \"Authorization\": f\"Basic {auth_header}\",\n \"Content-Type\"\
: \"application/json\"\n}\n\npayload = [{\n \"target\": domain,\n \"max_crawl_pages\"\
: max_pages,\n \"crawl_delay\": 3,\n \"tag\": \"standard_audit\"\n}]\n\n\
try:\n response = requests.post(url, headers=headers, json=payload)\n result\
\ = response.json()\n \n if result.get(\"status_code\") == 20000:\n \
\ task_id = result[\"tasks\"][0][\"id\"]\n print(f\"\u2705 Standard\
\ crawl started: {task_id}\")\n print(f'__OUTPUTS__ {{\"status\": \"started\"\
, \"task_id\": \"{task_id}\", \"max_pages\": {max_pages}}}')\n else:\n \
\ print(f'__OUTPUTS__ {{\"status\": \"error\"}}')\nexcept Exception as e:\n\
\ print(f\"\u274C Error: {str(e)}\")\n print('__OUTPUTS__ {\"status\": \"\
error\"}')\n"
depends_on:
- initialize_audit
execute_on_routes:
- standard_path
- id: quick_audit_task
name: Quick SEO Check
type: script
script: "import requests\nimport json\nimport base64\n\nprint(\"\u26A1 Running QUICK\
\ audit...\")\n\n# Get initialization data with error handling\ninit_data_str\
\ = '''${initialize_audit}'''\n\nif not init_data_str.startswith('UNRESOLVED_'):\n\
\ try:\n init_data = json.loads(init_data_str)\n auth_header\
\ = init_data['auth_header']\n except:\n login = '''${dataforseo_login}'''\n\
\ password = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\
\n auth_header = base64.b64encode(credentials.encode()).decode()\nelse:\n\
\ login = '''${dataforseo_login}'''\n password = '''${dataforseo_password}'''\n\
\ credentials = f\"{login}:{password}\"\n auth_header = base64.b64encode(credentials.encode()).decode()\n\
\ndomain = '''${target_domain}'''\n\n# Quick check with instant pages\nurl = \"\
https://api.dataforseo.com/v3/on_page/instant_pages\"\n\nheaders = {\n \"Authorization\"\
: f\"Basic {auth_header}\",\n \"Content-Type\": \"application/json\"\n}\n\n\
# Check homepage only for quick audit\npayload = [{\n \"url\": f\"https://{domain}\"\
,\n \"enable_javascript\": False\n}]\n\ntry:\n response = requests.post(url,\
\ headers=headers, json=payload)\n result = response.json()\n \n quick_results\
\ = {\"status\": \"quick_check\", \"issues\": []}\n \n if result.get(\"\
status_code\") == 20000:\n page = result[\"tasks\"][0][\"result\"][0]\n\
\ \n # Quick checks\n if not page.get(\"title\"):\n \
\ quick_results[\"issues\"].append(\"Missing title tag\")\n if not\
\ page.get(\"description\"):\n quick_results[\"issues\"].append(\"\
Missing meta description\")\n if page.get(\"onpage_score\", 100) < 80:\n\
\ quick_results[\"issues\"].append(f\"Low on-page score: {page.get('onpage_score')}\"\
)\n \n quick_results[\"onpage_score\"] = page.get(\"onpage_score\"\
, 0)\n \n print(f\"\u2705 Quick audit completed\")\n print(f\"__OUTPUTS__\
\ {json.dumps(quick_results)}\")\nexcept Exception as e:\n print(f\"\u274C\
\ Error: {str(e)}\")\n print('__OUTPUTS__ {\"status\": \"error\", \"issues\"\
: []}')\n"
depends_on:
- initialize_audit
execute_on_routes:
- quick_path
- id: analyze_serp_rankings
name: Analyze Current SERP Rankings
type: script
script: "import requests\nimport json\nimport base64\n\nprint(\"\U0001F4C8 Analyzing\
\ SERP rankings...\")\n\n# Get initialization data with error handling\ninit_data_str\
\ = '''${initialize_audit}'''\n\nif not init_data_str.startswith('UNRESOLVED_'):\n\
\ try:\n init_data = json.loads(init_data_str)\n auth_header\
\ = init_data['auth_header']\n except:\n login = '''${dataforseo_login}'''\n\
\ password = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\
\n auth_header = base64.b64encode(credentials.encode()).decode()\nelse:\n\
\ login = '''${dataforseo_login}'''\n password = '''${dataforseo_password}'''\n\
\ credentials = f\"{login}:{password}\"\n auth_header = base64.b64encode(credentials.encode()).decode()\n\
\ndomain = '''${target_domain}'''\nlocation = '''${location}'''\nlanguage = '''${language}'''\n\
\n# DataForSEO Labs - Ranked Keywords\nurl = \"https://api.dataforseo.com/v3/dataforseo_labs/google/ranked_keywords/live\"\
\n\nheaders = {\n \"Authorization\": f\"Basic {auth_header}\",\n \"Content-Type\"\
: \"application/json\"\n}\n\npayload = [{\n \"target\": domain,\n \"location_name\"\
: location,\n \"language_code\": language,\n \"limit\": 1000,\n \"order_by\"\
: [\"keyword_data.keyword_info.search_volume,desc\"]\n}]\n\ntry:\n response\
\ = requests.post(url, headers=headers, json=payload)\n result = response.json()\n\
\ \n ranking_data = {\n \"total_keywords\": 0,\n \"top_10\"\
: 0,\n \"top_20\": 0,\n \"top_50\": 0,\n \"avg_position\"\
: 0\n }\n \n if result.get(\"status_code\") == 20000 and result.get(\"\
tasks\") and result[\"tasks\"][0].get(\"result\"):\n items = result[\"\
tasks\"][0][\"result\"][0].get(\"items\", [])\n \n ranking_data[\"\
total_keywords\"] = len(items)\n positions = []\n \n for\
\ item in items:\n pos = item[\"ranked_serp_element\"][\"serp_item\"\
][\"rank_group\"]\n positions.append(pos)\n \n \
\ if pos <= 10:\n ranking_data[\"top_10\"] += 1\n \
\ if pos <= 20:\n ranking_data[\"top_20\"] += 1\n if\
\ pos <= 50:\n ranking_data[\"top_50\"] += 1\n \n \
\ if positions:\n ranking_data[\"avg_position\"] = round(sum(positions)\
\ / len(positions), 1)\n \n print(f\"\u2705 Found {ranking_data['total_keywords']}\
\ ranking keywords\")\n print(f\"__OUTPUTS__ {json.dumps(ranking_data)}\")\n\
except Exception as e:\n print(f\"\u274C Error: {str(e)}\")\n print('__OUTPUTS__\
\ {\"total_keywords\": 0, \"top_10\": 0, \"top_20\": 0, \"top_50\": 0, \"avg_position\"\
: 0}')\n"
depends_on:
- initialize_audit
- id: backlink_router
type: conditional_router
conditions:
- name: include_backlinks
route: analyze_backlinks_path
condition: ${include_backlinks} == 'yes'
depends_on:
- initialize_audit
description: Route based on backlink analysis preference
default_route: skip_backlinks_path
- id: analyze_backlinks
name: Analyze Backlink Profile
type: script
script: "import requests\nimport json\nimport base64\n\nprint(\"\U0001F517 Analyzing\
\ backlink profile...\")\n\n# Get init data with proper error handling\ninit_data_str\
\ = '''${initialize_audit}'''\n\n# Check if the template was resolved\nif init_data_str.startswith('UNRESOLVED_'):\n\
\ print(\"\u274C Could not access initialization data\")\n # Fallback: create\
\ auth header from inputs\n login = '''${dataforseo_login}'''\n password\
\ = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\n\
\ auth_header = base64.b64encode(credentials.encode()).decode()\nelse:\n \
\ try:\n init_data = json.loads(init_data_str)\n auth_header =\
\ init_data['auth_header']\n except:\n print(\"\u274C Error parsing\
\ init data, using fallback\")\n login = '''${dataforseo_login}'''\n \
\ password = '''${dataforseo_password}'''\n credentials = f\"{login}:{password}\"\
\n auth_header = base64.b64encode(credentials.encode()).decode()\n\ndomain\
\ = '''${target_domain}'''\n\n# Backlinks summary\nurl = \"https://api.dataforseo.com/v3/backlinks/summary/live\"\
\n\nheaders = {\n \"Authorization\": f\"Basic {auth_header}\",\n \"Content-Type\"\
: \"application/json\"\n}\n\npayload = [{\n \"target\": domain,\n \"include_subdomains\"\
: True\n}]\n\ntry:\n response = requests.post(url, headers=headers, json=payload)\n\
\ result = response.json()\n \n backlink_data = {\"analyzed\": True}\n\
\ \n if result.get(\"status_code\") == 20000 and result.get(\"tasks\") and\
\ result[\"tasks\"][0].get(\"result\"):\n summary = result[\"tasks\"][0][\"\
result\"][0]\n \n backlink_data.update({\n \"total_backlinks\"\
: summary.get(\"backlinks\", 0),\n \"referring_domains\": summary.get(\"\
referring_domains\", 0),\n \"referring_ips\": summary.get(\"referring_ips\"\
, 0),\n \"broken_backlinks\": summary.get(\"broken_backlinks\", 0)\n\
\ })\n \n print(f\"\u2705 Found {backlink_data['referring_domains']}\
\ referring domains\")\n else:\n print(f\"\u26A0\uFE0F API returned\
\ status: {result.get('status_code')}\")\n backlink_data[\"error\"] = result.get(\"\
status_message\", \"Unknown error\")\n \n print(f\"__OUTPUTS__ {json.dumps(backlink_data)}\"\
)\n \nexcept Exception as e:\n print(f\"\u274C Error calling API: {str(e)}\"\
)\n print('__OUTPUTS__ {\"analyzed\": false, \"error\": \"API call failed\"\
}')\n"
depends_on:
- backlink_router
execute_on_routes:
- analyze_backlinks_path
- id: skip_backlinks
name: Skip Backlink Analysis
type: script
script: "import json\nprint(\"\u23ED\uFE0F Skipping backlink analysis as requested\"\
)\nprint('__OUTPUTS__ {\"analyzed\": false, \"reason\": \"skipped_by_user\"}')\n"
depends_on:
- backlink_router
execute_on_routes:
- skip_backlinks_path
- id: wait_for_crawls
name: Wait for Crawl Completion
type: script
script: "import json\n\nprint(\"\u23F3 Waiting for all tasks to complete...\")\n\
\n# This is a synchronization point to ensure all tasks complete\n# before proceeding\
\ to AI analysis\n\noutputs = {\n \"status\": \"ready_for_analysis\",\n \
\ \"crawl_tasks_completed\": True\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(outputs)}\"\
)\n"
depends_on:
- premium_crawl_task
- comprehensive_crawl_task
- standard_crawl_task
- quick_audit_task
- analyze_serp_rankings
- analyze_backlinks
- skip_backlinks
- id: ai_seo_analysis
name: AI-Powered SEO Analysis
type: ai_agent
config:
streaming: true
input_format: text
output_format: text
agent_type: seo_specialist
depends_on:
- wait_for_crawls
user_message: "Analyze the SEO audit results for domain: ${target_domain}\n\nAudit\
\ Configuration:\n- Audit Depth: ${audit_depth}\n- Location: ${location}\n\nSERP\
\ Rankings Data:\n${analyze_serp_rankings}\n\nCrawl Results (based on route taken):\n\
Premium: ${premium_crawl_task}\nComprehensive: ${comprehensive_crawl_task}\nStandard:\
\ ${standard_crawl_task}\nQuick: ${quick_audit_task}\n\nBacklink Analysis:\nAnalyze:\
\ ${analyze_backlinks}\nSkip: ${skip_backlinks}\n\nBased on the available data\
\ (some fields may show UNRESOLVED if not executed), provide:\n\n1. **Current\
\ Performance Summary**\n - Overall health score (1-100)\n - Key strengths\n\
\ - Critical weaknesses\n\n2. **Immediate Actions** (Can do this week)\n -\
\ List 3-5 specific fixes\n - Expected impact of each\n\n3. **Strategic Recommendations**\
\ (1-3 month plan)\n - Growth opportunities\n - Competitive positioning\n\
\ - Content strategy\n\n4. **Technical Priorities**\n - Performance optimizations\n\
\ - Structure improvements\n\nFormat your response with clear headings and bullet\
\ points.\nBe specific and actionable in your recommendations.\nNote: Ignore any\
\ UNRESOLVED values - these are tasks that didn't run based on the selected route.\n"
system_message: 'You are an expert SEO analyst specializing in technical audits
and strategic recommendations.
Analyze the provided data and deliver actionable insights based on the audit depth
and findings.
Consider the route taken and adjust your analysis accordingly:
- Premium/Comprehensive: Provide detailed technical analysis
- Standard: Focus on key improvements
- Quick: Highlight critical issues only
'
model_client_id: seo_analyst
- id: generate_report
name: Generate Final SEO Audit Report
type: script
script: "import json\nfrom datetime import datetime\n\nprint(\"\U0001F4C4 Generating\
\ final audit report...\")\n\n# Helper function to safely parse JSON\ndef safe_json_parse(data_str,\
\ default=None):\n if not data_str or data_str.startswith('UNRESOLVED_'):\n\
\ return default or {}\n try:\n return json.loads(data_str)\n\
\ except:\n return default or {}\n\n# Collect all data using template\
\ resolution\ndomain = '''${target_domain}'''\naudit_depth = '''${audit_depth}'''\n\
init_data = safe_json_parse('''${initialize_audit}''')\nranking_data = safe_json_parse('''${analyze_serp_rankings}''')\n\
\n# Handle conditional data\nbacklink_analyze = '''${analyze_backlinks}'''\nbacklink_skip\
\ = '''${skip_backlinks}'''\n\nif backlink_analyze and not backlink_analyze.startswith('UNRESOLVED_'):\n\
\ backlink_data = safe_json_parse(backlink_analyze)\nelif backlink_skip and\
\ not backlink_skip.startswith('UNRESOLVED_'):\n backlink_data = safe_json_parse(backlink_skip)\n\
else:\n backlink_data = {\"analyzed\": False, \"reason\": \"not_executed\"\
}\n\n# Handle crawl results from different paths\ncrawl_data = {}\npremium_result\
\ = safe_json_parse('''${premium_crawl_task}''')\ncomprehensive_result = safe_json_parse('''${comprehensive_crawl_task}''')\n\
standard_result = safe_json_parse('''${standard_crawl_task}''')\nquick_result\
\ = safe_json_parse('''${quick_audit_task}''')\n\n# Determine which crawl was\
\ executed\nif premium_result:\n crawl_data = premium_result\n audit_path\
\ = \"premium\"\nelif comprehensive_result.get('task_id'):\n crawl_data = comprehensive_result\n\
\ audit_path = \"comprehensive\"\nelif standard_result.get('task_id'):\n \
\ crawl_data = standard_result\n audit_path = \"standard\"\nelif quick_result:\n\
\ crawl_data = quick_result\n audit_path = \"quick\"\nelse:\n audit_path\
\ = \"unknown\"\n\nai_insights = '''${ai_seo_analysis}'''\n\n# Create comprehensive\
\ report\nreport = {\n \"audit_summary\": {\n \"domain\": domain,\n\
\ \"audit_id\": init_data.get('audit_id', 'unknown'),\n \"timestamp\"\
: datetime.now().isoformat(),\n \"audit_depth\": audit_depth,\n \
\ \"audit_path\": audit_path,\n \"location\": init_data.get('location',\
\ 'United States'),\n \"language\": init_data.get('language', 'en')\n \
\ },\n \"performance_metrics\": {\n \"ranking_keywords\": ranking_data.get('total_keywords',\
\ 0),\n \"top_10_rankings\": ranking_data.get('top_10', 0),\n \"\
average_position\": ranking_data.get('avg_position', 0),\n \"referring_domains\"\
: backlink_data.get('referring_domains', 'N/A'),\n \"total_backlinks\"\
: backlink_data.get('total_backlinks', 'N/A')\n },\n \"crawl_summary\":\
\ crawl_data,\n \"ai_recommendations\": ai_insights,\n \"execution_details\"\
: {\n \"route_taken\": audit_path,\n \"completion_time\": datetime.now().isoformat()\n\
\ }\n}\n\n# Save report\nreport_json = json.dumps(report, indent=2)\nwith open('/tmp/seo_audit_report.json',\
\ 'w') as f:\n f.write(report_json)\n\nprint(f\"\u2705 Audit completed successfully!\"\
)\nprint(f\"\U0001F4CA Path taken: {audit_path}\")\nprint(f\"\U0001F4C8 Keywords\
\ found: {ranking_data.get('total_keywords', 0)}\")\nprint(f\"\U0001F517 Backlinks\
\ analyzed: {'Yes' if backlink_data.get('analyzed') else 'No'}\")\n\nprint(f\"\
__OUTPUTS__ {json.dumps(report)}\")\n"
depends_on:
- ai_seo_analysis
inputs:
- name: target_domain
type: string
default: example.com
description: Domain to audit (without https:// or www.)
- name: audit_depth
type: string
default: standard
description: Depth of audit (quick, standard, comprehensive)
- name: dataforseo_login
type: string
default: ''
description: DataForSEO API login
- name: dataforseo_password
type: string
default: ''
description: DataForSEO API password
- name: location
type: string
default: United States
description: Target location for SERP data
- name: language
type: string
default: en
description: Language code for analysis
- name: crawl_pages
type: integer
default: 500
description: Maximum number of pages to crawl
- name: competitor_count
type: integer
default: 5
description: Number of competitors to analyze
- name: include_backlinks
type: string
default: 'yes'
description: Include detailed backlink analysis (yes/no)
- name: urgency
type: string
default: medium
description: Analysis urgency (low, medium, high)
outputs:
report_data:
type: object
description: Complete audit report data
audit_status:
type: string
description: Overall audit completion status
recommendations:
type: string
description: AI-generated recommendations
performance_metrics:
type: object
description: Key performance indicators
version: '2.0'
triggers:
- name: Manual Audit Trigger
type: manual
description: Manually trigger a technical SEO audit
- name: Scheduled Monthly Audit
type: schedule
config:
cron: 0 2 1 * *
timezone: UTC
description: Run monthly technical SEO audit
- name: API Webhook Trigger
path: /webhook/seo-audit
type: webhook
config:
method: POST
headers:
- name: Content-Type
value: application/json
required: true
authentication: bearer
description: Trigger audit via webhook
input_mapping:
- webhook_field: domain
workflow_input: target_domain
- webhook_field: depth
workflow_input: audit_depth
tenant_id: 0572fa8d-d7c3-47db-9219-ef40b80d42b7
description: Comprehensive technical SEO audit with intelligent routing, DataForSEO
API integration, and AI-powered insights
model_clients:
- id: seo_analyst
config:
model: gpt-4o-mini
api_key: ${OPENAI_API_KEY}
max_tokens: 2000
temperature: 0.3
provider: openai
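
Throughout the scripts above, tasks pass data forward by printing a line prefixed with __OUTPUTS__ followed by JSON; downstream tasks receive those values through ${task_id} template substitution, which arrives with an UNRESOLVED_ prefix when the upstream task was skipped by routing. A condensed sketch of both halves of that convention (the exact unresolved string shown is an assumption; the scripts only test the prefix):

```python
import json

# Producer side: emit task outputs as a tagged JSON line.
outputs = {"status": "started", "task_id": "12345", "max_pages": 100}
print(f"__OUTPUTS__ {json.dumps(outputs)}")

# Consumer side: parse a resolved ${task_id} value defensively,
# as generate_report does with safe_json_parse.
def safe_json_parse(data_str, default=None):
    if not data_str or data_str.startswith("UNRESOLVED_"):
        return default or {}
    try:
        return json.loads(data_str)
    except (json.JSONDecodeError, TypeError):
        return default or {}

skipped = "UNRESOLVED_standard_crawl_task"  # assumed form for a skipped task
print(safe_json_parse(skipped, {"status": "not_executed"}))
```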
Executions

| Execution ID | Status | Started | Duration |
|---|---|---|---|
| 20642772... | COMPLETED | 2025-07-07 01:22:13 | N/A |
| 9b7e0f4d... | COMPLETED | 2025-07-07 00:44:58 | N/A |
| 21b451e8... | COMPLETED | 2025-07-07 00:42:53 | N/A |
| e78e1d0e... | COMPLETED | 2025-07-06 03:13:46 | N/A |
| b184c12c... | COMPLETED | 2025-07-06 03:07:59 | N/A |
| 16bd610e... | COMPLETED | 2025-07-06 03:03:52 | N/A |
| c91930a1... | COMPLETED | 2025-07-06 02:50:18 | N/A |
| 01ab3fc7... | COMPLETED | 2025-07-06 02:46:34 | N/A |