Pharmeasy Sales Drop Diagnostic Workflow
Automated root cause analysis and recovery
Workflow Information
ID: pharmeasy_sales_drop_diagnostic
Namespace: pharmeasy
Version: 1.0
Created: 2025-07-28
Updated: 2025-07-28
Tasks: 6
Inputs
| Name | Type | Required | Default |
|---|---|---|---|
| clevertap_account_id | string | Required | None |
| clevertap_passcode | string | Required | None |
| internal_api_key | string | Required | None |
| sales_drop_percentage | float | Required | None |
| affected_region | string | Required | None |
| affected_category | string | Required | None |
| lookback_hours | integer | Optional | 24 |
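For reference, a minimal inputs payload might look like the following (values are illustrative placeholders; the exact invocation format depends on the workflow runner):

```yaml
inputs:
  clevertap_account_id: "ACCT-XXXXXX"
  clevertap_passcode: "PASSCODE-XXXXXX"
  internal_api_key: "INTERNAL-KEY-XXXXXX"
  sales_drop_percentage: 22.5
  affected_region: "Mumbai"
  affected_category: "medicines"
  lookback_hours: 24   # optional; defaults to 24
```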
Outputs
| Name | Type | Source | Description |
|---|---|---|---|
| internal_data | object | fetch_internal_data | Operational data from internal systems |
| sales_anomaly | object | detect_anomaly | Detected sales anomaly details |
| clevertap_data | object | fetch_clevertap_data | User behavior data from CleverTap |
| recovery_campaign | object | trigger_recovery_campaign | Recovery campaign execution results |
| alert_notification | object | generate_alert | Alert generated for stakeholders |
| root_cause_analysis | object | analyze_root_cause | Multi-hypothesis root cause analysis results |
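Each script task publishes its result by printing a single JSON line prefixed with `__OUTPUTS__` (visible in the scripts in the YAML source below). A runner-side parser for that convention could be sketched as follows; this is an assumed reconstruction of the convention from the scripts, not the runner's actual implementation:

```python
import json

def extract_outputs(stdout: str):
    """Return the JSON payload from the last __OUTPUTS__ line, or None if absent."""
    payload = None
    for line in stdout.splitlines():
        if line.startswith("__OUTPUTS__"):
            # Strip the marker and parse the remainder as JSON
            payload = json.loads(line[len("__OUTPUTS__"):].strip())
    return payload

demo = 'Anomaly detection complete.\n__OUTPUTS__ {"severity": "HIGH", "actual_drop": 17.2}\n'
result = extract_outputs(demo)
```

Downstream tasks then reference the parsed object through template interpolation such as `${detect_anomaly}`.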
Tasks
| Task | Type | Description |
|---|---|---|
| detect_anomaly | script | Step 1 - Detect and quantify the sales anomaly |
| fetch_clevertap_data | script | Step 2A - Fetch user behavior data from CleverTap (parallel) |
| fetch_internal_data | script | Step 2B - Fetch operational data from internal systems (parallel) |
| analyze_root_cause | script | Step 3 - Analyze root cause using multi-hypothesis testing |
| generate_alert | ai_agent | Step 4 - Generate intelligent alerts for stakeholders |
| trigger_recovery_campaign | script | Step 5 - Trigger automated recovery campaigns |
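The anomaly math at the heart of Step 1 can be sketched as a standalone function. This is a simplification of the embedded `detect_anomaly` script (which also generates simulated time-series data); the 6-hour window, drop thresholds, and z-score formula mirror the script:

```python
import math

def score_anomaly(hourly_sales, recent_hours=6):
    """Compare the most recent window against the prior baseline.

    hourly_sales is ordered newest-first, matching the workflow script.
    """
    recent = hourly_sales[:recent_hours]
    historical = hourly_sales[recent_hours:]
    recent_avg = sum(recent) / len(recent)
    historical_avg = sum(historical) / len(historical)
    actual_drop = (historical_avg - recent_avg) / historical_avg * 100

    # Z-score of the recent average against the full series (population std dev)
    mean = sum(hourly_sales) / len(hourly_sales)
    std = math.sqrt(sum((x - mean) ** 2 for x in hourly_sales) / len(hourly_sales))
    z = (recent_avg - mean) / std if std > 0 else 0.0

    severity = "CRITICAL" if actual_drop > 20 else "HIGH" if actual_drop > 15 else "MODERATE"
    return {"actual_drop": round(actual_drop, 1), "z_score": round(z, 2), "severity": severity}

# Example: 6 depressed hours followed by 18 normal hours (newest first)
result = score_anomaly([75_000] * 6 + [100_000] * 18)
# → {'actual_drop': 25.0, 'z_score': -1.73, 'severity': 'CRITICAL'}
```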
YAML Source
id: pharmeasy_sales_drop_diagnostic
name: Pharmeasy Sales Drop Diagnostic Workflow
tasks:
- id: detect_anomaly
name: Detect Sales Anomaly
type: script
script: "import json\nfrom datetime import datetime, timedelta\nimport random\n\
import math\n\nprint(\"\U0001F6A8 SALES DROP DIAGNOSTIC WORKFLOW INITIATED\")\n\
print(\"=\" * 60)\nprint(\"\")\n\n# Input parameters\nsales_drop = float(\"${sales_drop_percentage}\"\
)\nregion = \"${affected_region}\"\ncategory = \"${affected_category}\"\nlookback_hours\
\ = int(\"${lookback_hours}\")\n\nprint(f\"\U0001F4C9 Anomaly Detection Parameters:\"\
)\nprint(f\" Sales Drop: {sales_drop:.1f}%\")\nprint(f\" Region: {region}\"\
)\nprint(f\" Category: {category}\")\nprint(f\" Analysis Window: {lookback_hours}\
\ hours\")\nprint(\"\")\n\n# Generate time series data for anomaly context\ncurrent_time\
\ = datetime.now()\ntime_series = []\n\nfor i in range(lookback_hours):\n hour_ago\
\ = current_time - timedelta(hours=i)\n if i < 6: # Recent hours show the\
\ drop\n hourly_sales = random.randint(80000, 120000) * (1 - sales_drop/100)\n\
\ else: # Normal sales before the drop\n hourly_sales = random.randint(100000,\
\ 150000)\n \n time_series.append({\n \"timestamp\": hour_ago.isoformat(),\n\
\ \"sales\": int(hourly_sales),\n \"orders\": int(hourly_sales /\
\ random.randint(500, 700))\n })\n\n# Calculate statistics\nrecent_avg = sum(ts['sales']\
\ for ts in time_series[:6]) / 6\nhistorical_avg = sum(ts['sales'] for ts in time_series[6:])\
\ / (len(time_series) - 6)\nactual_drop = (historical_avg - recent_avg) / historical_avg\
\ * 100\n\n# Calculate z-score\nall_sales = [ts['sales'] for ts in time_series]\n\
mean_sales = sum(all_sales) / len(all_sales)\nstd_dev = math.sqrt(sum((x - mean_sales)\
\ ** 2 for x in all_sales) / len(all_sales))\nz_score = (recent_avg - mean_sales)\
\ / std_dev if std_dev > 0 else 0\n\nprint(\"\U0001F4CA Anomaly Analysis Results:\"\
)\nprint(f\" Historical Average: \u20B9{historical_avg:,.0f}/hour\")\nprint(f\"\
\ Current Average: \u20B9{recent_avg:,.0f}/hour\")\nprint(f\" Actual Drop:\
\ {actual_drop:.1f}%\")\nprint(f\" Z-Score: {z_score:.2f}\")\nprint(f\" Severity:\
\ {'CRITICAL' if actual_drop > 20 else 'HIGH' if actual_drop > 15 else 'MODERATE'}\"\
)\n\nanomaly_result = {\n \"timestamp\": current_time.isoformat(),\n \"\
region\": region,\n \"category\": category,\n \"reported_drop\": sales_drop,\n\
\ \"actual_drop\": round(actual_drop, 1),\n \"z_score\": round(z_score,\
\ 2),\n \"severity\": \"CRITICAL\" if actual_drop > 20 else \"HIGH\" if actual_drop\
\ > 15 else \"MODERATE\",\n \"time_series_data\": time_series,\n \"recent_avg_sales\"\
: recent_avg,\n \"historical_avg_sales\": historical_avg,\n \"detection_confidence\"\
: 0.95 if abs(z_score) > 2 else 0.85\n}\n\nprint(\"\")\nprint(\"\u2705 Anomaly\
\ detection complete. Proceeding to data collection...\")\nprint(f\"__OUTPUTS__\
\ {json.dumps(anomaly_result)}\")\n"
description: Step 1 - Detect and quantify the sales anomaly
timeout_seconds: 30
- id: fetch_clevertap_data
name: Fetch CleverTap Behavioral Data
type: script
script: "import json\nimport time\nimport random\nfrom datetime import datetime,\
\ timedelta\n\nprint(\"\")\nprint(\"\U0001F4F1 FETCHING CLEVERTAP DATA\")\nprint(\"\
-\" * 40)\n\nanomaly_data = json.loads('${detect_anomaly}')\nregion = anomaly_data['region']\n\
category = anomaly_data['category']\nlookback = int(\"${lookback_hours}\")\n\n\
print(f\"\U0001F50D Querying CleverTap for {region} - {category}\")\nprint(\"\
\ Events: app_open, search, product_view, add_to_cart, checkout, purchase\"\
)\n\n# Simulate API call delay\ntime.sleep(1)\n\n# Generate behavioral data showing\
\ funnel drop patterns\nclevertap_data = {\n \"query_params\": {\n \"\
region\": region,\n \"category\": category,\n \"time_range\": f\"\
last_{lookback}_hours\"\n },\n \"funnel_data\": {\n \"app_opens\"\
: random.randint(25000, 35000),\n \"searches\": random.randint(18000, 25000),\n\
\ \"product_views\": random.randint(15000, 20000),\n \"add_to_cart\"\
: random.randint(8000, 12000),\n \"checkout_initiated\": random.randint(4000,\
\ 6000),\n \"purchases\": random.randint(2500, 4000)\n },\n \"conversion_rates\"\
: {\n \"search_to_view\": 0.82,\n \"view_to_cart\": 0.45,\n \
\ \"cart_to_checkout\": 0.48, # Lower than normal\n \"checkout_to_purchase\"\
: 0.65 # Lower than normal\n },\n \"user_segments\": {\n \"new_users\"\
: {\n \"count\": random.randint(3000, 5000),\n \"conversion_rate\"\
: 0.05\n },\n \"returning_users\": {\n \"count\": random.randint(15000,\
\ 20000),\n \"conversion_rate\": 0.12\n },\n \"chronic_patients\"\
: {\n \"count\": random.randint(8000, 10000),\n \"conversion_rate\"\
: 0.08 # Lower than normal\n }\n },\n \"device_breakdown\": {\n\
\ \"android\": 0.75,\n \"ios\": 0.20,\n \"web\": 0.05\n \
\ },\n \"cart_abandonment\": {\n \"rate\": 0.52, # Higher than normal\n\
\ \"avg_cart_value\": random.randint(450, 650),\n \"top_abandoned_products\"\
: [\n \"Metformin 500mg\",\n \"Amlodipine 5mg\",\n \
\ \"Paracetamol 650mg\"\n ]\n },\n \"session_metrics\": {\n\
\ \"avg_duration_seconds\": 245,\n \"pages_per_session\": 4.2,\n\
\ \"bounce_rate\": 0.32\n }\n}\n\nprint(\"\u2713 CleverTap data retrieved\
\ successfully\")\nprint(f\" - Total events: {sum(clevertap_data['funnel_data'].values()):,}\"\
)\nprint(f\" - Cart abandonment rate: {clevertap_data['cart_abandonment']['rate']:.1%}\"\
)\nprint(f\" - Chronic patient conversion: {clevertap_data['user_segments']['chronic_patients']['conversion_rate']:.1%}\"\
)\n\nprint(f\"__OUTPUTS__ {json.dumps(clevertap_data)}\")\n"
parallel: true
depends_on:
- detect_anomaly
description: Step 2A - Fetch user behavior data from CleverTap (parallel)
timeout_seconds: 45
- id: fetch_internal_data
name: Fetch Internal Operational Data
type: script
script: "import json\nimport time\nimport random\nfrom datetime import datetime\n\
\nprint(\"\")\nprint(\"\U0001F3E2 FETCHING INTERNAL DASHBOARD DATA\")\nprint(\"\
-\" * 40)\n\nanomaly_data = json.loads('${detect_anomaly}')\nregion = anomaly_data['region']\n\
category = anomaly_data['category']\n\nprint(f\"\U0001F50D Querying internal systems\
\ for {region} - {category}\")\nprint(\" Systems: Inventory, Payment, Logistics,\
\ Pricing\")\n\n# Simulate API call delay\ntime.sleep(1)\n\n# Generate operational\
\ data that could explain the sales drop\ninternal_data = {\n \"query_params\"\
: {\n \"region\": region,\n \"category\": category,\n \"\
timestamp\": datetime.now().isoformat()\n },\n \"inventory_status\": {\n\
\ \"total_skus\": 2500,\n \"in_stock_skus\": 2100,\n \"stockout_skus\"\
: 400, # Higher than normal\n \"critical_stockouts\": 85, # Key products\
\ out of stock\n \"top_stockout_products\": [\n {\"name\": \"\
Metformin 500mg\", \"demand\": \"HIGH\", \"days_out\": 3},\n {\"name\"\
: \"Insulin Glargine\", \"demand\": \"CRITICAL\", \"days_out\": 2},\n \
\ {\"name\": \"Losartan 50mg\", \"demand\": \"HIGH\", \"days_out\": 1}\n \
\ ],\n \"stockout_impact_percentage\": 18.5\n },\n \"payment_gateway_status\"\
: {\n \"success_rate\": 0.89, # Lower than normal 95%\n \"failed_transactions\"\
: 450,\n \"gateway_errors\": {\n \"timeout\": 180,\n \
\ \"authentication_failed\": 120,\n \"insufficient_funds\": 150\n\
\ },\n \"affected_payment_methods\": [\"credit_card\", \"debit_card\"\
],\n \"avg_processing_time_ms\": 4500 # Higher than normal\n },\n \
\ \"logistics_metrics\": {\n \"on_time_delivery_rate\": 0.82, # Lower\
\ than normal\n \"avg_delivery_time_hours\": 36,\n \"delayed_orders\"\
: 850,\n \"delivery_partner_issues\": {\n \"partner_1\": {\"\
name\": \"BlueDart\", \"performance\": 0.78},\n \"partner_2\": {\"\
name\": \"Delhivery\", \"performance\": 0.85}\n }\n },\n \"pricing_data\"\
: {\n \"recent_price_changes\": 12,\n \"avg_price_increase_percentage\"\
: 8.5,\n \"competitor_price_comparison\": {\n \"status\": \"\
HIGHER\",\n \"difference_percentage\": 5.2\n }\n },\n \
\ \"customer_service\": {\n \"complaint_volume_increase\": 35, # 35% increase\n\
\ \"top_complaints\": [\n \"Product not available\",\n \
\ \"Payment failure\",\n \"Delayed delivery\"\n ],\n \
\ \"resolution_rate\": 0.72\n }\n}\n\nprint(\"\u2713 Internal data retrieved\
\ successfully\")\nprint(f\" - Stockout SKUs: {internal_data['inventory_status']['stockout_skus']}\"\
)\nprint(f\" - Payment success rate: {internal_data['payment_gateway_status']['success_rate']:.1%}\"\
)\nprint(f\" - On-time delivery: {internal_data['logistics_metrics']['on_time_delivery_rate']:.1%}\"\
)\n\nprint(f\"__OUTPUTS__ {json.dumps(internal_data)}\")\n"
parallel: true
depends_on:
- detect_anomaly
description: Step 2B - Fetch operational data from internal systems (parallel)
timeout_seconds: 45
- id: analyze_root_cause
name: Multi-Hypothesis Root Cause Analysis
type: script
script: "import json\nfrom datetime import datetime\n\nprint(\"\")\nprint(\"\U0001F52C\
\ MULTI-HYPOTHESIS ROOT CAUSE ANALYSIS\")\nprint(\"=\" * 50)\nprint(\"\")\n\n\
# Parse all data sources\nanomaly_data = json.loads('${detect_anomaly}')\nclevertap_data\
\ = json.loads('${fetch_clevertap_data}')\ninternal_data = json.loads('${fetch_internal_data}')\n\
\nprint(\"\U0001F4CA Analyzing data from multiple sources...\")\nprint(f\" Sales\
\ Drop: {anomaly_data['actual_drop']:.1f}%\")\nprint(f\" Data Sources: CleverTap\
\ + Internal Systems\")\nprint(\"\")\n\n# Define and test hypotheses\nhypotheses\
\ = {\n \"inventory_stockouts\": {\n \"description\": \"Critical product\
\ stockouts causing lost sales\",\n \"evidence\": [],\n \"score\"\
: 0.0,\n \"impact\": \"HIGH\"\n },\n \"payment_gateway_issues\":\
\ {\n \"description\": \"Payment failures preventing order completion\"\
,\n \"evidence\": [],\n \"score\": 0.0,\n \"impact\": \"\
MEDIUM\"\n },\n \"user_experience_friction\": {\n \"description\"\
: \"High cart abandonment due to UX issues\",\n \"evidence\": [],\n \
\ \"score\": 0.0,\n \"impact\": \"MEDIUM\"\n },\n \"delivery_delays\"\
: {\n \"description\": \"Poor delivery performance affecting repeat purchases\"\
,\n \"evidence\": [],\n \"score\": 0.0,\n \"impact\": \"\
MEDIUM\"\n },\n \"price_sensitivity\": {\n \"description\": \"Recent\
\ price increases causing customer churn\",\n \"evidence\": [],\n \
\ \"score\": 0.0,\n \"impact\": \"LOW\"\n }\n}\n\n# Test Hypothesis\
\ 1: Inventory Stockouts\nstockout_impact = internal_data['inventory_status']['stockout_impact_percentage']\n\
if stockout_impact > 15:\n hypotheses[\"inventory_stockouts\"][\"score\"] =\
\ 0.85\n hypotheses[\"inventory_stockouts\"][\"evidence\"].append(\n \
\ f\"Stockout impact: {stockout_impact:.1f}% of potential sales\"\n )\n \
\ hypotheses[\"inventory_stockouts\"][\"evidence\"].append(\n f\"Critical\
\ items out of stock: {internal_data['inventory_status']['critical_stockouts']}\"\
\n )\n\n# Test Hypothesis 2: Payment Gateway Issues\npayment_success = internal_data['payment_gateway_status']['success_rate']\n\
if payment_success < 0.92:\n hypotheses[\"payment_gateway_issues\"][\"score\"\
] = 0.70\n hypotheses[\"payment_gateway_issues\"][\"evidence\"].append(\n \
\ f\"Payment success rate: {payment_success:.1%} (below 92% threshold)\"\
\n )\n hypotheses[\"payment_gateway_issues\"][\"evidence\"].append(\n \
\ f\"Failed transactions: {internal_data['payment_gateway_status']['failed_transactions']}\"\
\n )\n\n# Test Hypothesis 3: User Experience Friction\ncart_abandonment = clevertap_data['cart_abandonment']['rate']\n\
checkout_conversion = clevertap_data['conversion_rates']['checkout_to_purchase']\n\
if cart_abandonment > 0.45 and checkout_conversion < 0.70:\n hypotheses[\"\
user_experience_friction\"][\"score\"] = 0.75\n hypotheses[\"user_experience_friction\"\
][\"evidence\"].append(\n f\"Cart abandonment rate: {cart_abandonment:.1%}\"\
\n )\n hypotheses[\"user_experience_friction\"][\"evidence\"].append(\n\
\ f\"Low checkout conversion: {checkout_conversion:.1%}\"\n )\n\n# Test\
\ Hypothesis 4: Delivery Delays\non_time_delivery = internal_data['logistics_metrics']['on_time_delivery_rate']\n\
if on_time_delivery < 0.85:\n hypotheses[\"delivery_delays\"][\"score\"] =\
\ 0.60\n hypotheses[\"delivery_delays\"][\"evidence\"].append(\n f\"\
On-time delivery: {on_time_delivery:.1%}\"\n )\n hypotheses[\"delivery_delays\"\
][\"evidence\"].append(\n f\"Delayed orders: {internal_data['logistics_metrics']['delayed_orders']}\"\
\n )\n\n# Test Hypothesis 5: Price Sensitivity\nif internal_data['pricing_data']['avg_price_increase_percentage']\
\ > 5:\n hypotheses[\"price_sensitivity\"][\"score\"] = 0.45\n hypotheses[\"\
price_sensitivity\"][\"evidence\"].append(\n f\"Recent price increase:\
\ {internal_data['pricing_data']['avg_price_increase_percentage']:.1f}%\"\n \
\ )\n\n# Rank hypotheses by score\nranked_hypotheses = sorted(\n hypotheses.items(),\
\ \n key=lambda x: x[1]['score'], \n reverse=True\n)\n\nprint(\"\U0001F3AF\
\ Hypothesis Testing Results:\")\nprint(\"\")\nfor rank, (hypothesis, data) in\
\ enumerate(ranked_hypotheses[:3], 1):\n print(f\"{rank}. {hypothesis.replace('_',\
\ ' ').title()}\")\n print(f\" Score: {data['score']:.2f}\")\n print(f\"\
\ Impact: {data['impact']}\")\n for evidence in data['evidence']:\n \
\ print(f\" - {evidence}\")\n print(\"\")\n\n# Determine primary root cause\n\
primary_cause = ranked_hypotheses[0][0]\nconfidence = \"HIGH\" if ranked_hypotheses[0][1]['score']\
\ > 0.7 else \"MEDIUM\"\n\n# Generate recommendations\nrecommendations = []\n\
if primary_cause == \"inventory_stockouts\":\n recommendations = [\n \
\ \"Immediately notify affected customers about stockouts\",\n \"Activate\
\ substitute product recommendations\",\n \"Expedite replenishment for\
\ critical SKUs\",\n \"Launch 'back in stock' notification campaign\"\n\
\ ]\nelif primary_cause == \"payment_gateway_issues\":\n recommendations\
\ = [\n \"Switch to backup payment gateway\",\n \"Promote COD as\
\ alternative payment method\",\n \"Send payment retry notifications\"\
,\n \"Offer payment assistance through customer support\"\n ]\nelif\
\ primary_cause == \"user_experience_friction\":\n recommendations = [\n \
\ \"Simplify checkout process immediately\",\n \"Launch cart recovery\
\ campaign with incentives\",\n \"A/B test one-click checkout\",\n \
\ \"Reduce form fields in checkout\"\n ]\n\nroot_cause_result = {\n \"\
timestamp\": datetime.now().isoformat(),\n \"hypotheses_tested\": len(hypotheses),\n\
\ \"hypothesis_results\": dict(ranked_hypotheses),\n \"primary_root_cause\"\
: primary_cause,\n \"confidence_level\": confidence,\n \"contributing_factors\"\
: [h[0] for h in ranked_hypotheses if h[1]['score'] > 0.5],\n \"recommendations\"\
: recommendations,\n \"estimated_recovery_time\": \"4-6 hours\" if primary_cause\
\ in [\"payment_gateway_issues\", \"user_experience_friction\"] else \"24-48 hours\"\
\n}\n\nprint(\"\u2705 Root cause analysis complete.\")\nprint(f\" Primary Cause:\
\ {primary_cause.replace('_', ' ').title()}\")\nprint(f\" Confidence: {confidence}\"\
)\n\nprint(f\"__OUTPUTS__ {json.dumps(root_cause_result)}\")\n"
depends_on:
- fetch_clevertap_data
- fetch_internal_data
description: Step 3 - Analyze root cause using multi-hypothesis testing
timeout_seconds: 60
- id: generate_alert
name: Generate Alert Notification
type: ai_agent
config:
model_client_id: alert_generator
depends_on:
- analyze_root_cause
description: Step 4 - Generate intelligent alerts for stakeholders
user_message: 'Generate alert for this sales drop incident:
Anomaly Details:
- Region: ${detect_anomaly.region}
- Category: ${detect_anomaly.category}
- Sales Drop: ${detect_anomaly.actual_drop}%
- Severity: ${detect_anomaly.severity}
Root Cause Analysis:
- Primary Cause: ${analyze_root_cause.primary_root_cause}
- Confidence: ${analyze_root_cause.confidence_level}
- Contributing Factors: ${analyze_root_cause.contributing_factors}
Create a clear, actionable alert for immediate distribution.
'
system_message: "You are an expert at creating concise, actionable alerts for business\
\ stakeholders.\n\nGenerate an alert notification in JSON format:\n{\n \"alert_title\"\
: \"Clear, urgent title\",\n \"severity\": \"CRITICAL|HIGH|MEDIUM\",\n \"summary\"\
: \"2-3 sentence executive summary\",\n \"key_metrics\": {\n \"sales_impact\"\
: \"percentage and amount\",\n \"affected_users\": \"number\",\n \"recovery_eta\"\
: \"timeframe\"\n },\n \"root_cause_summary\": \"One sentence explanation\"\
,\n \"immediate_actions\": [\"action1\", \"action2\", \"action3\"],\n \"stakeholders_to_notify\"\
: [\"role1\", \"role2\"],\n \"escalation_required\": true/false\n}\n"
- id: trigger_recovery_campaign
name: Execute Recovery Campaign
type: script
script: "import json\nimport time\nimport random\nfrom datetime import datetime\n\
\nprint(\"\")\nprint(\"\U0001F680 EXECUTING RECOVERY CAMPAIGNS\")\nprint(\"=\"\
\ * 50)\nprint(\"\")\n\n# Parse previous results\nroot_cause_data = json.loads('${analyze_root_cause}')\n\
alert_data = json.loads('${generate_alert}')\nprimary_cause = root_cause_data['primary_root_cause']\n\
\nprint(f\"\U0001F4E2 Launching targeted campaigns for: {primary_cause.replace('_',\
\ ' ').title()}\")\nprint(\"\")\n\ncampaigns_executed = []\n\nif primary_cause\
\ == \"inventory_stockouts\":\n print(\"\U0001F514 Campaign 1: Stockout Notification\
\ & Alternatives\")\n campaign1 = {\n \"name\": \"Critical Stockout\
\ Alert Campaign\",\n \"type\": \"multi_channel\",\n \"channels\"\
: [\"push_notification\", \"in_app\", \"email\"],\n \"segment\": \"users_with_oos_items_in_cart\"\
,\n \"message_template\": \"Important: {product} is currently out of stock.\
\ Similar alternatives available!\",\n \"cta\": \"View Alternatives\",\n\
\ \"personalization\": \"product_based\"\n }\n \n # Simulate campaign\
\ execution\n time.sleep(0.5)\n campaign1[\"execution_stats\"] = {\n \
\ \"targeted_users\": random.randint(3000, 5000),\n \"messages_sent\"\
: random.randint(2800, 4800),\n \"opens\": random.randint(1500, 2500),\n\
\ \"clicks\": random.randint(800, 1200),\n \"conversions\": random.randint(400,\
\ 600)\n }\n campaigns_executed.append(campaign1)\n print(\" \u2713\
\ Notifications sent to affected users\")\n \n print(\"\")\n print(\"\
\U0001F48A Campaign 2: Substitute Product Recommendations\")\n campaign2 =\
\ {\n \"name\": \"Smart Alternative Suggestions\",\n \"type\": \"\
personalized_recommendations\",\n \"algorithm\": \"collaborative_filtering\"\
,\n \"display_locations\": [\"home_page\", \"search_results\", \"cart_page\"\
]\n }\n campaign2[\"execution_stats\"] = {\n \"recommendations_shown\"\
: random.randint(8000, 12000),\n \"clicks\": random.randint(2000, 3000),\n\
\ \"add_to_cart\": random.randint(800, 1200),\n \"purchases\": random.randint(400,\
\ 700)\n }\n campaigns_executed.append(campaign2)\n print(\" \u2713\
\ Alternative products activated\")\n \nelif primary_cause == \"payment_gateway_issues\"\
:\n print(\"\U0001F4B3 Campaign 1: Payment Retry Assistance\")\n campaign1\
\ = {\n \"name\": \"Payment Recovery Campaign\",\n \"type\": \"\
targeted_sms_email\",\n \"segment\": \"failed_payment_last_24h\",\n \
\ \"offers\": [\"COD available\", \"Try alternate payment method\", \"10%\
\ off on retry\"]\n }\n campaign1[\"execution_stats\"] = {\n \"targeted_users\"\
: random.randint(400, 600),\n \"messages_sent\": random.randint(380, 580),\n\
\ \"payment_retries\": random.randint(150, 250),\n \"successful_orders\"\
: random.randint(100, 180)\n }\n campaigns_executed.append(campaign1)\n\
\ print(\" \u2713 Payment assistance campaign launched\")\n \nelif primary_cause\
\ == \"user_experience_friction\":\n print(\"\U0001F6D2 Campaign 1: Cart Recovery\
\ with Incentives\")\n campaign1 = {\n \"name\": \"Abandoned Cart Win-back\"\
,\n \"type\": \"progressive_discount\",\n \"discount_tiers\": [\"\
5% after 2 hours\", \"10% after 6 hours\", \"15% after 24 hours\"],\n \"\
segment\": \"cart_abandoners\",\n \"urgency_messaging\": True\n }\n\
\ campaign1[\"execution_stats\"] = {\n \"targeted_users\": random.randint(5000,\
\ 8000),\n \"emails_sent\": random.randint(4800, 7800),\n \"cart_recoveries\"\
: random.randint(800, 1200),\n \"revenue_recovered\": random.randint(400000,\
\ 600000)\n }\n campaigns_executed.append(campaign1)\n print(\" \u2713\
\ Cart recovery campaign activated\")\n\n# Calculate total impact\ntotal_conversions\
\ = sum(\n c[\"execution_stats\"].get(\"conversions\", 0) + \n c[\"execution_stats\"\
].get(\"successful_orders\", 0) + \n c[\"execution_stats\"].get(\"purchases\"\
, 0) \n for c in campaigns_executed\n)\n\nestimated_revenue_recovery = total_conversions\
\ * random.randint(500, 800)\n\nprint(\"\")\nprint(\"\U0001F4CA Campaign Execution\
\ Summary:\")\nprint(f\" Campaigns Launched: {len(campaigns_executed)}\")\n\
print(f\" Total Conversions: {total_conversions:,}\")\nprint(f\" Estimated\
\ Revenue Recovery: \u20B9{estimated_revenue_recovery:,}\")\nprint(f\" Recovery\
\ ETA: {root_cause_data['estimated_recovery_time']}\")\n\nrecovery_result = {\n\
\ \"timestamp\": datetime.now().isoformat(),\n \"primary_intervention\"\
: primary_cause,\n \"campaigns_executed\": campaigns_executed,\n \"total_campaigns\"\
: len(campaigns_executed),\n \"total_conversions\": total_conversions,\n \
\ \"estimated_revenue_recovery\": estimated_revenue_recovery,\n \"recovery_status\"\
: \"IN_PROGRESS\",\n \"next_checkpoint\": \"2_hours\",\n \"success_metrics\"\
: {\n \"target_recovery_percentage\": 70,\n \"current_recovery_percentage\"\
: random.randint(25, 35),\n \"confidence_interval\": \"\xB15%\"\n }\n\
}\n\nprint(\"\")\nprint(\"\u2705 Recovery campaigns executed successfully.\")\n\
print(\" Monitoring performance and ready for optimization...\")\n\nprint(f\"\
__OUTPUTS__ {json.dumps(recovery_result)}\")\n"
depends_on:
- generate_alert
description: Step 5 - Trigger automated recovery campaigns
timeout_seconds: 90
inputs:
- name: clevertap_account_id
type: string
required: true
description: CleverTap Account ID
- name: clevertap_passcode
type: string
required: true
description: CleverTap API Passcode
- name: internal_api_key
type: string
required: true
description: Internal Dashboard API Key
- name: sales_drop_percentage
type: float
required: true
description: Percentage drop in sales that triggered this workflow
- name: affected_region
type: string
required: true
description: Region experiencing the sales drop
- name: affected_category
type: string
required: true
description: Category with sales drop (medicines, diagnostics, wellness)
- name: lookback_hours
type: integer
default: 24
description: Hours to look back for analysis
outputs:
internal_data:
type: object
source: fetch_internal_data
description: Operational data from internal systems
sales_anomaly:
type: object
source: detect_anomaly
description: Detected sales anomaly details
clevertap_data:
type: object
source: fetch_clevertap_data
description: User behavior data from CleverTap
recovery_campaign:
type: object
source: trigger_recovery_campaign
description: Recovery campaign execution results
alert_notification:
type: object
source: generate_alert
description: Alert generated for stakeholders
root_cause_analysis:
type: object
source: analyze_root_cause
description: Multi-hypothesis root cause analysis results
version: 1.0
namespace: pharmeasy
description: Pharmeasy Sales Drop Diagnostic Workflow - Automated root cause analysis
and recovery
model_clients:
- id: diagnostic_analyst
config:
model: gpt-4o-mini
api_key: ${OPENAI_API_KEY}
max_tokens: 2000
temperature: 0.1
provider: openai
- id: alert_generator
config:
model: gpt-4o-mini
api_key: ${OPENAI_API_KEY}
max_tokens: 1500
temperature: 0.3
provider: openai
timeout_seconds: 300
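For anyone extending Step 3, the hypothesis-testing pattern in `analyze_root_cause` reduces to scoring each candidate cause against a threshold check and sorting by score. A simplified sketch, using the same metric names and thresholds as the embedded script (the full script tests five hypotheses and attaches evidence strings; three are shown here):

```python
def rank_hypotheses(metrics):
    """Score candidate root causes from operational metrics and rank by score."""
    scores = {
        "inventory_stockouts": 0.0,
        "payment_gateway_issues": 0.0,
        "user_experience_friction": 0.0,
    }
    if metrics["stockout_impact_percentage"] > 15:
        scores["inventory_stockouts"] = 0.85
    if metrics["payment_success_rate"] < 0.92:
        scores["payment_gateway_issues"] = 0.70
    if metrics["cart_abandonment_rate"] > 0.45 and metrics["checkout_to_purchase"] < 0.70:
        scores["user_experience_friction"] = 0.75

    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    primary, top_score = ranked[0]
    confidence = "HIGH" if top_score > 0.7 else "MEDIUM"
    return primary, confidence, ranked

# Metric values taken from the simulated data in the YAML source above
primary, confidence, ranked = rank_hypotheses({
    "stockout_impact_percentage": 18.5,
    "payment_success_rate": 0.89,
    "cart_abandonment_rate": 0.52,
    "checkout_to_purchase": 0.65,
})
# → primary == "inventory_stockouts", confidence == "HIGH"
```

Adding a new hypothesis means adding one entry to the score table and one threshold check; the ranking, primary-cause selection, and downstream campaign dispatch in `trigger_recovery_campaign` key off the winning name.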