Pharmeasy Autonomous Funnel Intelligence System
Autonomous funnel intelligence system that bridges CleverTap and internal dashboards for real-time optimization
Workflow Information
ID: pharmeasy_funnel_intelligence
Namespace: default
Version: N/A
Created: 2025-07-28
Updated: 2025-07-28
Tasks: 12
Inputs
| Name | Type | Required | Default |
|---|---|---|---|
| trigger_data | string | Required | None |
| analysis_type | string | Required | sales_drop |
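
As a rough illustration, an execution might be started with inputs along these lines. The JSON structure inside trigger_data is hypothetical; only the input names and the analysis_type default come from the workflow definition.

```python
# Hypothetical invocation inputs for this workflow; the trigger_data payload
# shape is illustrative, not prescribed by the definition.
workflow_inputs = {
    "trigger_data": (
        '{"metric": "completed_orders", "current": 1240, "baseline": 1810, '
        '"window": "2025-07-28T06:00Z/2025-07-28T12:00Z", "region": "all"}'
    ),
    "analysis_type": "sales_drop",  # sales_drop | sales_spike | churn_prevention | customer_journey
}
```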
Outputs
No outputs defined
Tasks
| Task | Type | Description |
|---|---|---|
| funnel_intelligence_master | ai_agent | No description |
| analysis_router | conditional_router | Condition-based router; default route: error_handler |
| sales_drop_workflow | parallel | No description |
| clevertap_data_fetch | ai_agent | No description |
| internal_data_fetch | ai_agent | No description |
| anomaly_analysis | ai_agent | No description |
| alert_generation | ai_agent | No description |
| recovery_campaign | ai_agent | No description |
| churn_prevention_workflow | ai_agent | No description |
| crosssell_optimizer | ai_agent | No description |
| error_handler | script | No description |
| workflow_completion | script | No description |
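
The branch selection performed by analysis_router can be sketched in plain Python. This is a simplified stand-in for the engine's contains() evaluation, not part of the workflow itself:

```python
def route_analysis(analysis_type: str) -> str:
    """Mirror of the analysis_router conditions in the YAML source below."""
    routes = [
        ("sales_drop", "sales_drop_workflow"),
        ("sales_spike", "sales_spike_workflow"),
        ("churn_prevention", "churn_prevention_workflow"),
        ("customer_journey", "customer_journey_workflow"),
    ]
    for keyword, route in routes:
        if keyword in analysis_type:  # contains("${analysis_type}", keyword)
            return route
    return "error_handler"  # default_route


assert route_analysis("sales_drop") == "sales_drop_workflow"
assert route_analysis("unknown") == "error_handler"
```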
YAML Source
id: pharmeasy_funnel_intelligence
name: Pharmeasy Autonomous Funnel Intelligence System
tasks:
- id: funnel_intelligence_master
name: Funnel Intelligence Master
type: ai_agent
config:
tools:
- detect_anomaly
- fetch_clevertap_data
- fetch_internal_data
- analyze_root_cause
- trigger_campaign
system_message: '# Funnel Intelligence Master Agent
You are the master orchestrator of Pharmeasy''s Autonomous Funnel Intelligence
System.
Your role is to implement the Perception-Reasoning-Action loop for real-time
funnel optimization.
## Core Responsibilities:
1. **Perception**: Detect anomalies and significant changes in funnel metrics
2. **Reasoning**: Coordinate multi-hypothesis testing across data silos
3. **Action**: Trigger appropriate recovery campaigns and optimizations
## Workflow Coordination:
Based on the analysis_type, you will:
- sales_drop: Investigate root causes of sales drops
- sales_spike: Analyze positive trends for growth opportunities
- churn_prevention: Identify and prevent customer churn
- customer_journey: Optimize dynamic customer journeys
## Multi-Hypothesis Testing Framework:
For any anomaly, test these hypotheses:
1. User Behavior Changes (CleverTap data)
2. Operational Issues (Internal dashboard data)
3. Competitive Actions
4. Seasonal/External Factors
5. Payment/Technical Issues
6. Inventory/Supply Chain
7. Marketing Campaign Effects
8. Product Quality/Reviews
9. Price Sensitivity
10. Regional Variations
## Output Format:
Provide structured analysis with:
- Detected anomaly details
- Hypothesis testing results
- Root cause identification
- Recommended actions
- Campaign triggers needed
'
model_client_id: openai_gpt4
user_message: 'Analyze the following trigger:
${trigger_data}
Analysis type requested: ${analysis_type}
'
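  # Note: downstream tasks reference this agent's output via ${funnel_intelligence_master}
  # (see clevertap_data_fetch, internal_data_fetch and anomaly_analysis below), while
  # ${trigger_data} and ${analysis_type} above interpolate the workflow inputs.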
- id: analysis_router
type: conditional_router
conditions:
- name: sales_drop_detected
route: sales_drop_workflow
condition: contains("${analysis_type}", "sales_drop")
- name: sales_spike_detected
route: sales_spike_workflow
condition: contains("${analysis_type}", "sales_spike")
- name: churn_analysis_needed
route: churn_prevention_workflow
condition: contains("${analysis_type}", "churn_prevention")
- name: journey_optimization_needed
route: customer_journey_workflow
condition: contains("${analysis_type}", "customer_journey")
depends_on:
- funnel_intelligence_master
default_route: error_handler
- id: sales_drop_workflow
type: parallel
tasks:
- clevertap_data_fetch
- internal_data_fetch
depends_on:
- analysis_router
execute_on_routes:
- sales_drop_detected
- id: clevertap_data_fetch
name: CleverTap Integration Agent
type: ai_agent
config:
tools:
- clevertap_event_api
- create_user_segment
system_message: "# CleverTap Integration Agent\n\nYou are responsible for fetching\
\ and analyzing user behavior data from CleverTap.\n\n## Your Tasks:\n1. Fetch\
\ relevant user events for the anomaly period\n2. Analyze behavior patterns:\n\
\ - Cart abandonment rates\n - Session duration changes\n - Feature usage\
\ patterns\n - Campaign engagement metrics\n3. Identify significant deviations\n\
\n## Focus Areas:\n- User funnel drop-off points\n- Conversion rate changes\n\
- Engagement metric shifts\n- Campaign performance\n\nReturn structured data\
\ with:\n- Key behavioral changes\n- Affected user segments\n- Temporal patterns\n\
- Correlation with anomaly\n"
model_client_id: openai_gpt4_turbo
user_message: 'Fetch CleverTap data for anomaly analysis:
${funnel_intelligence_master}
'
- id: internal_data_fetch
name: Internal Dashboard Integration Agent
type: ai_agent
config:
tools:
- internal_sales_api
- internal_inventory_api
- internal_payment_api
system_message: '# Internal Dashboard Integration Agent
You are responsible for extracting operational data from Pharmeasy''s internal
dashboards.
## Data Sources:
1. Sales Dashboard - Revenue, order volumes, AOV
2. Inventory Dashboard - Stock levels, availability
3. Payment Dashboard - Transaction success rates
4. Logistics Dashboard - Delivery performance
## Analysis Requirements:
- Compare metrics against historical baselines
- Identify operational bottlenecks
- Check for system/technical issues
- Analyze regional variations
Return structured data with:
- Operational metrics deviations
- System health indicators
- Regional performance data
- Correlation with anomaly timeline
'
model_client_id: openai_gpt4_turbo
user_message: 'Fetch internal operational data for anomaly analysis:
${funnel_intelligence_master}
'
- id: anomaly_analysis
name: Anomaly Detection & Diagnostics Agent
type: ai_agent
config:
tools:
- anomaly_detector
- statistical_analyzer
system_message: "# Anomaly Detection & Diagnostics Agent\n\nYou perform deep multi-hypothesis\
\ testing to identify root causes.\n\n## Analysis Framework:\n1. **Data Synthesis**:\n\
\ - Combine CleverTap behavioral data\n - Merge with internal operational\
\ data\n - Create unified timeline\n\n2. **Hypothesis Testing**:\n Test\
\ each hypothesis with supporting evidence:\n - H1: User behavior shift\n\
\ - H2: Technical/payment issues\n - H3: Inventory problems\n - H4: Competitive\
\ actions\n - H5: External factors\n - H6: Marketing campaign effects\n\
\ - H7: Regional issues\n - H8: Product quality concerns\n - H9: Price\
\ sensitivity\n - H10: Seasonal patterns\n\n3. **Root Cause Determination**:\n\
\ - Score each hypothesis (0-100)\n - Identify primary and secondary causes\n\
\ - Quantify impact contribution\n\n## Output Requirements:\nProvide:\n- Root\
\ cause analysis with confidence scores\n- Impact quantification\n- Recovery\
\ recommendations\n- Priority actions\n"
model_client_id: anthropic_claude3
depends_on:
- clevertap_data_fetch
- internal_data_fetch
user_message: 'Analyze anomaly with data:
CleverTap Data: ${clevertap_data_fetch}
Internal Data: ${internal_data_fetch}
Master Context: ${funnel_intelligence_master}
'
- id: alert_generation
name: Alert Generator
type: ai_agent
config:
output_format: json
system_message: '# Alert Generation Agent
Generate actionable alerts based on root cause analysis.
## Alert Components:
1. **Severity Level**: Critical/High/Medium/Low
2. **Affected Metrics**: Specific KPIs impacted
3. **Root Cause Summary**: Clear explanation
4. **Business Impact**: Revenue/user impact
5. **Recommended Actions**: Prioritized steps
## Alert Channels:
- Slack: #pharmeasy-alerts
- Email: stakeholders
- Dashboard: Real-time updates
- SMS: Critical alerts only
Format alerts for maximum clarity and actionability.
'
model_client_id: openai_gpt4
depends_on:
- anomaly_analysis
user_message: 'Generate alert based on analysis:
${anomaly_analysis}
'
- id: recovery_campaign
name: Campaign Trigger Agent
type: ai_agent
config:
tools:
- trigger_campaign
- create_user_segment
system_message: "# Recovery Campaign Agent\n\nDesign and trigger recovery campaigns\
\ based on root cause analysis.\n\n## Campaign Types:\n1. **User Re-engagement**:\n\
\ - Cart abandonment recovery\n - Lapsed user win-back\n - Feature adoption\
\ campaigns\n\n2. **Promotional Recovery**:\n - Targeted discounts\n - Free\
\ delivery offers\n - Bundle promotions\n\n3. **Communication Campaigns**:\n\
\ - Service status updates\n - Inventory notifications\n - Payment issue\
\ resolution\n\n## Campaign Design:\n- Segment: Affected users\n- Channel: Push/Email/SMS/WhatsApp\n\
- Message: Personalized content\n- Timing: Optimal delivery\n- Success Metrics:\
\ Define KPIs\n"
model_client_id: openai_gpt4_turbo
depends_on:
- anomaly_analysis
- alert_generation
user_message: 'Design recovery campaign based on:
Analysis: ${anomaly_analysis}
Alert: ${alert_generation}
'
- id: churn_prevention_workflow
name: Retention & Churn Prevention Agent
type: ai_agent
config:
tools:
- churn_predictor
- clevertap_event_api
- trigger_campaign
system_message: "# Retention & Churn Prevention Agent\n\nPredict and prevent customer\
\ churn through behavioral analysis.\n\n## Churn Indicators:\n1. **Behavioral\
\ Signals**:\n - Decreasing order frequency\n - Reduced app engagement\n\
\ - Cart abandonment increase\n - Feature usage decline\n\n2. **Transaction\
\ Patterns**:\n - Order value reduction\n - Category switching\n - Payment\
\ failures\n - Support ticket increase\n\n3. **Engagement Metrics**:\n -\
\ Email/push open rates\n - Campaign responsiveness\n - NPS score decline\n\
\ - Review sentiment\n\n## Intervention Strategies:\n- Personalized retention\
\ offers\n- Chronic patient programs\n- VIP customer benefits\n- Proactive support\
\ outreach\n\n## Special Focus:\nChronic patients requiring regular medication\
\ refills\n"
model_client_id: anthropic_claude3
depends_on:
- analysis_router
user_message: 'Analyze churn risk and design prevention:
${trigger_data}
'
execute_on_routes:
- churn_analysis_needed
- id: crosssell_optimizer
name: Cross-sell Optimization Agent
type: ai_agent
config:
tools:
- clevertap_event_api
- internal_sales_api
- create_user_segment
- trigger_campaign
system_message: "# Cross-sell Optimization Agent\n\nIdentify and capitalize on\
\ cross-funnel opportunities.\n\n## Cross-sell Analysis:\n1. **Category Affinity**:\n\
\ - Medicine \u2192 Wellness products\n - Diagnostics \u2192 Preventive\
\ care\n - Chronic care \u2192 Health devices\n\n2. **Behavioral Patterns**:\n\
\ - Purchase sequences\n - Browse behavior\n - Cart combinations\n -\
\ Seasonal patterns\n\n3. **Opportunity Identification**:\n - Complementary\
\ products\n - Upgrade paths\n - Bundle opportunities\n - Subscription\
\ conversions\n\n## Campaign Creation:\n- Personalized recommendations\n- Bundle\
\ offers\n- Category introductions\n- Educational content\n"
model_client_id: openai_gpt4_turbo
depends_on:
- analysis_router
- id: error_handler
type: script
script: 'print("__OUTPUTS__ {\"status\": \"ERROR\", \"message\": \"Unhandled analysis
type: ${analysis_type}\"}")
'
depends_on:
- analysis_router
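  # Illustrative output of error_handler: for an unhandled analysis_type such as "foo",
  # the script prints roughly: __OUTPUTS__ {"status": "ERROR", "message": "Unhandled analysis type: foo"}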
- id: workflow_completion
type: script
script: "import json\nfrom datetime import datetime\n\nresult = {\n \"status\"\
: \"COMPLETED\",\n \"timestamp\": datetime.now().isoformat(),\n \"analysis_type\"\
: \"${analysis_type}\",\n \"actions_taken\": \"Recovery campaign triggered successfully\"\
\n}\nprint(f\"__OUTPUTS__ {json.dumps(result)}\")\n"
depends_on:
- recovery_campaign
- churn_prevention_workflow
- crosssell_optimizer
tools:
- name: clevertap_event_api
function: "async function clevertap_event_api(api_key, event_name, from_date, to_date,\
\ limit = 1000) {\n const axios = require('axios');\n \n const headers = {\n\
\ 'X-CleverTap-Account-Id': '${CLEVERTAP_ACCOUNT_ID}',\n 'X-CleverTap-Passcode':\
\ api_key,\n 'Content-Type': 'application/json'\n };\n \n const payload\
\ = {\n event_name: event_name,\n from: from_date,\n to: to_date,\n \
\ common_profile_properties: {\n limit: limit\n }\n };\n \n try {\n\
\ const response = await axios.post(\n 'https://api.clevertap.com/1/events.json',\n\
\ payload,\n { headers: headers }\n );\n \n return {\n \
\ success: true,\n data: response.data,\n event_count: response.data.count,\n\
\ events: response.data.records\n };\n } catch (error) {\n return\
\ {\n success: false,\n error: error.message\n };\n }\n}\n"
parameters:
limit: int = 1000
api_key: str
to_date: str
from_date: str
event_name: str
description: Fetch user events from CleverTap
- name: create_user_segment
function: "async function create_user_segment(segment_name, criteria, api_key) {\n\
\ const axios = require('axios');\n \n const headers = {\n 'X-CleverTap-Account-Id':\
\ '${CLEVERTAP_ACCOUNT_ID}',\n 'X-CleverTap-Passcode': api_key,\n 'Content-Type':\
\ 'application/json'\n };\n \n const payload = {\n name: segment_name,\n\
\ segment_query: criteria,\n description: `Auto-generated segment for ${segment_name}`\n\
\ };\n \n try {\n const response = await axios.post(\n 'https://api.clevertap.com/1/segments/create.json',\n\
\ payload,\n { headers: headers }\n );\n \n return {\n \
\ success: true,\n segment_id: response.data.id,\n status: response.data.status\n\
\ };\n } catch (error) {\n return {\n success: false,\n error:\
\ error.message\n };\n }\n}\n"
parameters:
api_key: str
criteria: dict
segment_name: str
description: Create a user segment in CleverTap
- name: trigger_campaign
function: "async function trigger_campaign(campaign_id, segment_id, channel, message,\
\ api_key) {\n const axios = require('axios');\n \n const headers = {\n \
\ 'X-CleverTap-Account-Id': '${CLEVERTAP_ACCOUNT_ID}',\n 'X-CleverTap-Passcode':\
\ api_key,\n 'Content-Type': 'application/json'\n };\n \n const payload\
\ = {\n campaign_id: campaign_id,\n segment_id: segment_id,\n channel:\
\ channel,\n message_content: message,\n when: \"now\"\n };\n \n try\
\ {\n const response = await axios.post(\n 'https://api.clevertap.com/1/campaigns/trigger.json',\n\
\ payload,\n { headers: headers }\n );\n \n return {\n \
\ success: true,\n campaign_status: response.data.status,\n estimated_reach:\
\ response.data.estimated_reach\n };\n } catch (error) {\n return {\n \
\ success: false,\n error: error.message\n };\n }\n}\n"
parameters:
api_key: str
channel: str
message: dict
segment_id: str
campaign_id: str
description: Trigger a campaign in CleverTap
- name: internal_sales_api
function: "async function internal_sales_api(metric, region, time_range, granularity\
\ = \"hourly\") {\n const axios = require('axios');\n \n const headers = {\n\
\ 'Authorization': 'Bearer ${INTERNAL_API_KEY}',\n 'Content-Type': 'application/json'\n\
\ };\n \n const params = {\n metric: metric,\n region: region || 'all',\n\
\ time_range: time_range,\n granularity: granularity\n };\n \n try {\n\
\ const response = await axios.get(\n '${INTERNAL_API_BASE_URL}/api/v1/sales',\n\
\ { headers: headers, params: params }\n );\n \n return {\n \
\ success: true,\n data: response.data.metrics,\n period: response.data.period,\n\
\ aggregated_value: response.data.total\n };\n } catch (error) {\n \
\ return {\n success: false,\n error: error.message\n };\n }\n\
}\n"
parameters:
metric: str
region: str | null = null
time_range: str
granularity: str = "hourly"
description: Fetch sales data from internal dashboard
- name: internal_inventory_api
function: "async function internal_inventory_api(category, sku, region) {\n //\
\ Implementation for inventory API\n // Similar structure to sales API\n return\
\ {\n success: true,\n stock_levels: {},\n out_of_stock: [],\n low_stock:\
\ []\n };\n}\n"
parameters:
sku: str | null = null
region: str | null = null
category: str | null = null
description: Check inventory levels from internal system
- name: internal_payment_api
function: "async function internal_payment_api(time_range, payment_method) {\n \
\ // Implementation for payment API\n return {\n success: true,\n success_rate:\
\ 0.0,\n failure_reasons: {},\n affected_transactions: 0\n };\n}\n"
parameters:
time_range: str
payment_method: str | null = null
description: Get payment success rates and issues
- name: anomaly_detector
function: "def anomaly_detector(metric: str, data: list, sensitivity: float = 2.0):\n\
\ import numpy as np\n from scipy import stats\n \n # Convert to numpy\
\ array\n values = np.array([d['value'] for d in data])\n timestamps = [d['timestamp']\
\ for d in data]\n \n # Calculate z-scores\n z_scores = np.abs(stats.zscore(values))\n\
\ \n # Identify anomalies\n anomaly_indices = np.where(z_scores > sensitivity)[0]\n\
\ \n anomalies = []\n for idx in anomaly_indices:\n anomalies.append({\n\
\ 'timestamp': timestamps[idx],\n 'value': values[idx],\n\
\ 'z_score': z_scores[idx],\n 'deviation': values[idx] -\
\ np.mean(values)\n })\n \n return {\n 'metric': metric,\n\
\ 'anomalies_detected': len(anomalies),\n 'anomalies': anomalies,\n\
\ 'mean': np.mean(values),\n 'std': np.std(values),\n 'sensitivity_used':\
\ sensitivity\n }\n"
parameters:
data: list
metric: str
sensitivity: float = 2.0
description: Detect anomalies in metrics using statistical analysis
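    # Illustrative call (hypothetical data):
    #   anomaly_detector("orders_per_hour",
    #                    [{"timestamp": "2025-07-28T10:00Z", "value": 1240}, ...],
    #                    sensitivity=2.0)
    # flags the points whose |z-score| against the series mean exceeds the sensitivity
    # and reports each point's absolute deviation.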
- name: churn_predictor
function: "def churn_predictor(user_id: str, behavioral_data: dict):\n # Simplified\
\ churn prediction logic\n # In production, this would use ML models\n \n\
\ risk_score = 0\n risk_factors = []\n \n # Check order frequency\
\ decline\n if behavioral_data.get('order_frequency_change', 0) < -30:\n \
\ risk_score += 30\n risk_factors.append('Significant order frequency\
\ decline')\n \n # Check engagement metrics\n if behavioral_data.get('app_opens_change',\
\ 0) < -50:\n risk_score += 25\n risk_factors.append('App engagement\
\ dropped')\n \n # Check cart abandonment\n if behavioral_data.get('cart_abandonment_rate',\
\ 0) > 0.7:\n risk_score += 20\n risk_factors.append('High cart\
\ abandonment rate')\n \n # Check customer lifetime value trend\n if\
\ behavioral_data.get('clv_trend', 0) < -20:\n risk_score += 25\n \
\ risk_factors.append('Declining customer lifetime value')\n \n # Determine\
\ risk level\n if risk_score >= 70:\n risk_level = 'HIGH'\n elif\
\ risk_score >= 40:\n risk_level = 'MEDIUM'\n else:\n risk_level\
\ = 'LOW'\n \n return {\n 'user_id': user_id,\n 'churn_probability':\
\ risk_score / 100,\n 'risk_level': risk_level,\n 'risk_factors':\
\ risk_factors,\n 'recommended_actions': get_retention_recommendations(risk_level,\
\ risk_factors)\n }\n\ndef get_retention_recommendations(risk_level, risk_factors):\n\
\ recommendations = []\n \n if risk_level == 'HIGH':\n recommendations.extend([\n\
\ 'Send personalized retention offer immediately',\n 'Trigger\
\ win-back campaign',\n 'Assign to customer success team'\n \
\ ])\n elif risk_level == 'MEDIUM':\n recommendations.extend([\n \
\ 'Send engagement campaign',\n 'Offer loyalty program benefits',\n\
\ 'Provide personalized recommendations'\n ])\n \n return\
\ recommendations\n"
parameters:
user_id: str
behavioral_data: dict
description: Predict customer churn probability
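    # Worked scoring example (hypothetical inputs): behavioral_data of
    #   {"order_frequency_change": -40, "app_opens_change": -60,
    #    "cart_abandonment_rate": 0.8, "clv_trend": -25}
    # accumulates 30 + 25 + 20 + 25 = 100 points, giving churn_probability 1.0
    # and risk_level HIGH.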
- name: statistical_analyzer
function: "def statistical_analyzer(data1: list, data2: list = None, test_type:\
\ str = \"correlation\"):\n import numpy as np\n from scipy import stats\n\
\ \n if test_type == \"correlation\" and data2:\n correlation, p_value\
\ = stats.pearsonr(data1, data2)\n return {\n 'test_type': 'correlation',\n\
\ 'correlation_coefficient': correlation,\n 'p_value': p_value,\n\
\ 'significant': p_value < 0.05\n }\n elif test_type == \"\
trend\":\n x = np.arange(len(data1))\n slope, intercept, r_value,\
\ p_value, std_err = stats.linregress(x, data1)\n return {\n \
\ 'test_type': 'trend',\n 'slope': slope,\n 'r_squared':\
\ r_value**2,\n 'p_value': p_value,\n 'trend_direction':\
\ 'increasing' if slope > 0 else 'decreasing'\n }\n \n"
parameters:
data1: list
data2: list | null = null
test_type: str = "correlation"
description: Perform statistical analysis on data
- name: detect_anomaly
function: "def detect_anomaly(metrics: dict, threshold: float = 0.2):\n anomalies\
\ = []\n \n for metric_name, metric_data in metrics.items():\n current\
\ = metric_data['current']\n baseline = metric_data['baseline']\n \
\ \n change = (current - baseline) / baseline if baseline != 0 else 0\n\
\ \n if abs(change) > threshold:\n anomalies.append({\n\
\ 'metric': metric_name,\n 'current_value': current,\n\
\ 'baseline_value': baseline,\n 'change_percentage':\
\ change * 100,\n 'severity': 'high' if abs(change) > 0.5 else\
\ 'medium'\n })\n \n return {\n 'anomalies_detected':\
\ len(anomalies) > 0,\n 'anomaly_count': len(anomalies),\n 'anomalies':\
\ anomalies\n }\n"
parameters:
metrics: dict
threshold: float = 0.2
description: Master anomaly detection orchestrator
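    # Worked example (hypothetical figures): with threshold 0.2, a metric at 1240 against
    # a baseline of 1810 changes by (1240 - 1810) / 1810 ≈ -31.5%, so it is reported as an
    # anomaly with severity "medium" (severity becomes "high" only when |change| > 0.5).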
- name: fetch_clevertap_data
function: "async function fetch_clevertap_data(event_type, time_period) {\n //\
\ Wrapper function that calls clevertap_event_api\n // with appropriate date\
\ parsing\n return await clevertap_event_api(\n '${CLEVERTAP_API_KEY}',\n\
\ event_type,\n time_period.from,\n time_period.to,\n 1000\n );\n\
}\n"
parameters:
event_type: str
time_period: dict
description: Wrapper for fetching CleverTap data
- name: fetch_internal_data
function: "async function fetch_internal_data(metric_type, time_period) {\n //\
\ Wrapper that aggregates data from multiple internal APIs\n const salesData\
\ = await internal_sales_api(metric_type, null, time_period);\n const inventoryData\
\ = await internal_inventory_api(null, null, null);\n const paymentData = await\
\ internal_payment_api(time_period, null);\n \n return {\n sales: salesData,\n\
\ inventory: inventoryData,\n payments: paymentData\n };\n}\n"
parameters:
metric_type: str
time_period: str
description: Wrapper for fetching internal dashboard data
- name: analyze_root_cause
function: "def analyze_root_cause(clevertap_data: dict, internal_data: dict, anomaly_data:\
\ dict):\n # Multi-hypothesis testing logic\n hypotheses = {\n 'user_behavior':\
\ 0,\n 'technical_issues': 0,\n 'inventory_problems': 0,\n \
\ 'competitive_factors': 0,\n 'external_events': 0,\n 'payment_issues':\
\ 0,\n 'marketing_effects': 0,\n 'product_quality': 0,\n \
\ 'price_sensitivity': 0,\n 'regional_factors': 0\n }\n \n # Score\
\ each hypothesis based on data\n # This is simplified - real implementation\
\ would be more sophisticated\n \n # Find highest scoring hypotheses\n \
\ sorted_hypotheses = sorted(\n hypotheses.items(),\n key=lambda\
\ x: x[1],\n reverse=True\n )\n \n return {\n 'primary_cause':\
\ sorted_hypotheses[0],\n 'secondary_causes': sorted_hypotheses[1:3],\n\
\ 'confidence_scores': hypotheses,\n 'recommended_actions': []\n\
\ }\n"
parameters:
anomaly_data: dict
internal_data: dict
clevertap_data: dict
description: Analyze root cause from multiple data sources
inputs:
- name: trigger_data
type: string
required: true
description: Trigger data containing anomaly detection results or scheduled trigger
info
- name: analysis_type
type: string
default: sales_drop
required: true
description: Type of analysis to perform (sales_drop, sales_spike, churn_prevention,
customer_journey)
description: Autonomous funnel intelligence system that bridges CleverTap and internal
dashboards for real-time optimization
retry_policy:
max_attempts: 3
initial_interval: 5
maximum_interval: 60
backoff_coefficient: 2.0
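  # With these settings (Temporal-style retry semantics) each task is attempted at most
  # 3 times; the retry interval starts at 5 seconds and doubles per attempt (5s, then 10s),
  # capped at 60 seconds.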
model_clients:
- id: openai_gpt4
config:
model: gpt-4
api_key: ${OPENAI_API_KEY}
max_tokens: 4000
temperature: 0.3
provider: openai
- id: openai_gpt4_turbo
config:
model: gpt-4-turbo
api_key: ${OPENAI_API_KEY}
max_tokens: 4000
temperature: 0.5
provider: openai
- id: anthropic_claude3
config:
model: claude-3-opus-20240229
api_key: ${ANTHROPIC_API_KEY}
max_tokens: 4000
temperature: 0.3
provider: anthropic
execution_mode: temporal
timeout_seconds: 1800