test Loan Underwriting Workflow

End-to-end loan underwriting process with document verification, income analysis, credit assessment, and compliance checks

Workflow Information

ID: loan_underwriting_workflow_v2

Namespace: financial

Version: 1.0

Created: 2025-07-16

Updated: 2025-07-16

Tasks: 39

Inputs
Name Type Required Default
application_id string Required None
min_credit_score integer Optional 650
max_debt_to_income_ratio float Optional 0.4
Outputs
Name Type Description
final_decision object Final underwriting decision with terms
compliance_status object Overall compliance assessment
credit_assessment object Comprehensive credit assessment results
processing_summary object Complete workflow processing summary
supabase_upload_result object Result of CSV upload to Supabase storage bucket
Tasks
extract_application_info
script

Extract and structure customer information from application database

validate_application_fields
script

Check if all required fields are present and valid

application_quality_check
ai_agent

AI-powered assessment of application quality

ocr_pan_card
mcp

Extract text from PAN card image using OCR

verify_pan_card
ai_agent

AI-powered analysis of OCR extracted PAN card data with application cross-verification

ocr_aadhaar_card
mcp

Extract text from Aadhaar card image using OCR

verify_aadhaar
ai_agent

AI-powered analysis of OCR extracted Aadhaar data with application cross-verification

ocr_passport
mcp

Extract text from passport image using OCR

verify_passport
ai_agent

AI-powered analysis of OCR extracted passport data with application cross-verification

ocr_address_proof
mcp

Extract text from address proof document using OCR

verify_address_proof
ai_agent

AI-powered analysis of OCR extracted address proof data with application cross-verification

consolidate_document_verification
ai_agent

AI analysis of all document verification results

ocr_salary_slip
script

Extract first salary slip URL from application data

ocr_salary_slip_processing
mcp

Extract text from first salary slip image using OCR

analyze_salary_slips
ai_agent

AI-powered analysis of OCR extracted salary slip data with application cross-verification

ocr_bank_statement
mcp

Extract text from bank statement image using OCR

analyze_bank_statements
ai_agent

AI-powered analysis of OCR extracted bank statement data with application cross-verification

verify_tax_returns
script

Verify and analyze tax return documents

verify_employment
script

Verify employment status and details with employer

calculate_debt_to_income
script

Calculate applicant's debt-to-income ratio

income_assessment
ai_agent

AI-powered comprehensive income and financial stability assessment

check_cibil_score
script

Retrieve and analyze CIBIL credit score

analyze_credit_history
script

Detailed analysis of credit history and patterns

assess_existing_loans
script

Assess current loan obligations and repayment capacity

evaluate_default_risk
ai_agent

AI-powered default risk assessment

calculate_credit_score
script

Calculate internal credit score based on all factors

kyc_compliance_check
script

Verify KYC compliance requirements

aml_screening
script

Anti-Money Laundering compliance screening

regulatory_compliance
script

Check compliance with banking regulations

internal_policy_check
script

Verify compliance with internal lending policies

compliance_consolidation
ai_agent

AI-powered comprehensive compliance assessment

underwriting_decision_router
conditional_router

Route to appropriate decision path based on assessments

Conditional Router
Router Type: condition
Default Route: manual_review_path
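
The route thresholds are defined in the YAML source below. As an illustrative sketch only (not part of the workflow definition), the same routing logic can be restated as a small Python function:

def route_application(internal_credit_score, debt_to_income_ratio):
    # high_risk_path -> decline_path
    if internal_credit_score < 600 or debt_to_income_ratio > 0.50:
        return "decline_path"
    # conditional_approval_path -> conditional_path
    if 600 <= internal_credit_score < 700:
        return "conditional_path"
    # standard_approval_path -> approval_path
    if internal_credit_score >= 700:
        return "approval_path"
    # default route when no condition matches
    return "manual_review_path"

# e.g. route_application(720, 0.32) returns "approval_path"
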
final_underwriting_analysis
ai_agent

Comprehensive AI analysis for final underwriting decision

generate_approval_terms
script

Generate loan terms and conditions for approved application

generate_conditional_terms
script

Generate conditional approval with additional requirements

generate_decline_notice
script

Generate loan decline notice with reasons

flag_for_manual_review
script

Automatically approve applications that would have gone to manual review

generate_final_output
script

Generate comprehensive workflow output

upload_to_supabase
script

Convert loan underwriting data to CSV and upload to Supabase storage bucket
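
Judging from the YAML source below, every script task emits its result by printing a single line prefixed with __OUTPUTS__ followed by a JSON payload, and later tasks read individual fields through ${task_id.field} placeholders. A minimal sketch of that pattern (illustrative only; the task id and fields here are made up):

import json

# Values like ${application_id} are placeholders that the workflow engine substitutes
# into the script text before execution (shown literally in this standalone sketch).
application_id = "${application_id}"

result = {
    "application_id": application_id,
    "check_passed": True,
}

# The engine captures this line and exposes the JSON fields to downstream tasks,
# which reference them as, e.g., ${example_task.check_passed}.
print(f"__OUTPUTS__ {json.dumps(result)}")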

YAML Source
id: loan_underwriting_workflow_v2
name: test Loan Underwriting Workflow
retry:
  retryOn:
  - TEMPORARY_FAILURE
  - TIMEOUT
  - NETWORK_ERROR
  maxDelay: 60s
  maxAttempts: 3
  initialDelay: 5s
  backoffMultiplier: 2.0
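# With these settings, retry delays grow roughly as 5s, 10s, 20s (doubling per attempt,
# capped at maxDelay 60s) for failures matching retryOn, up to maxAttempts.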
tasks:
- id: extract_application_info
  name: Extract Customer Information
  type: script
  script: "import json\nfrom supabase import create_client, Client\n\n# Supabase configuration\n\
    SUPABASE_URL = \"https://mbauzgvitqvxceqanzjw.supabase.co\"\nSUPABASE_KEY = \"\
    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im1iYXV6Z3ZpdHF2eGNlcWFuemp3Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0Mzg2MDEwOCwiZXhwIjoyMDU5NDM2MTA4fQ.R71cWZoLuq2GNojkLSvhcXXt9rAW9PJ5O9V4g7vXvC0\"\
    \n\ntry:\n    # Initialize Supabase client\n    supabase: Client = create_client(SUPABASE_URL,\
    \ SUPABASE_KEY)\n    \n    # Get application ID from workflow input\n    application_id\
    \ = \"${application_id}\"\n    \n    # Query for the loan application data\n \
    \   # First get the data model for loan_applications\n    model_response = supabase.table(\"\
    tenant_data_models\").select(\"*\").eq(\"name\", \"loan_applications\").eq(\"\
    active\", True).execute()\n    \n    if not model_response.data:\n        raise\
    \ Exception(\"Loan applications data model not found\")\n    \n    model_id =\
    \ model_response.data[0][\"id\"]\n    \n    # Get the specific application record\n\
    \    record_response = supabase.table(\"tenant_data_records\").select(\"*\").eq(\"\
    model_id\", model_id).eq(\"active\", True).filter(\"data->application_id\", \"\
    eq\", application_id).execute()\n    \n    if not record_response.data:\n    \
    \    raise Exception(f\"Application not found for ID: {application_id}\")\n  \
    \  \n    # Extract application data\n    application_record = record_response.data[0]\n\
    \    application_data = application_record[\"data\"]\n    \n    # Structure the\
    \ customer info from database\n    customer_info = {\n        \"application_id\"\
    : application_data.get(\"application_id\"),\n        \"customer_id\": application_data.get(\"\
    customer_id\"),\n        \"full_name\": application_data.get(\"full_name\"),\n\
    \        \"email\": application_data.get(\"email\"),\n        \"phone\": application_data.get(\"\
    phone\"),\n        \"address\": application_data.get(\"address\"),\n        \"\
    employment_status\": application_data.get(\"employment_status\"),\n        \"\
    annual_income\": application_data.get(\"annual_income\"),\n        \"loan_purpose\"\
    : application_data.get(\"loan_purpose\"),\n        \"requested_amount\": application_data.get(\"\
    requested_amount\", 10000),\n        \"pancard_link\": application_data.get(\"\
    pancard_link\"),\n        \"adhar_card_link\": application_data.get(\"adhar_card_link\"\
    ),\n        \"passport_link\": application_data.get(\"passport_link\"),\n    \
    \    \"address_proof\": application_data.get(\"address_proof\"),\n        \"salary_slips\"\
    : application_data.get(\"salary_slips\", []),\n        \"bank_statement_link\"\
    : application_data.get(\"bank_statement_link\"),\n        \"employment_proof_link\"\
    : application_data.get(\"employment_proof_link\")\n    }\n    \n    # Check for\
    \ missing required fields\n    required_fields = [\"application_id\", \"customer_id\"\
    , \"full_name\", \"email\", \"employment_status\", \"annual_income\", \"loan_purpose\"\
    , \"requested_amount\"]\n    missing_fields = []\n    \n    for field in required_fields:\n\
    \        if field not in customer_info or customer_info[field] is None or customer_info[field]\
    \ == \"\":\n            missing_fields.append(field)\n    \n    result = {\n \
    \       \"extraction_success\": len(missing_fields) == 0,\n        \"customer_info\"\
    : customer_info,\n        \"missing_fields\": missing_fields,\n        \"data_source\"\
    : \"supabase_database\",\n        \"record_id\": application_record[\"id\"],\n\
    \        \"record_created_at\": application_record[\"created_at\"]\n    }\n  \
    \  \nexcept Exception as e:\n    # Handle database errors gracefully\n    print(f\"\
    Database error: {str(e)}\")\n    \n    # Fallback to default data structure\n\
    \    result = {\n        \"extraction_success\": False,\n        \"customer_info\"\
    : {},\n        \"missing_fields\": [\"database_connection_failed\"],\n       \
    \ \"error_message\": str(e),\n        \"data_source\": \"database_error_fallback\"\
    \n    }\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\")\n"
  packages:
  - supabase==2.4.4
  description: Extract and structure customer information from application database
  timeout_seconds: 60
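  # Fields emitted via the __OUTPUTS__ line above are consumed downstream through
  # ${extract_application_info.customer_info.<field>} placeholders (see the OCR tool_arguments below).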
- id: validate_application_fields
  name: Validate Required Fields
  type: script
  script: "import json\n\ncustomer_info = ${extract_application_info.customer_info}\n\
    required_fields = [\"full_name\", \"email\", \"phone\", \"address\", \"employment_status\"\
    , \"annual_income\"]\n\nmissing_fields = []\nfor field in required_fields:\n \
    \   if field not in customer_info or not customer_info[field]:\n        missing_fields.append(field)\n\
    \nresult = {\n    \"validation_passed\": len(missing_fields) == 0,\n    \"missing_fields\"\
    : missing_fields,\n    \"completion_percentage\": ((len(required_fields) - len(missing_fields))\
    \ / len(required_fields)) * 100\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\"\
    )\n"
  depends_on:
  - extract_application_info
  description: Check if all required fields are present and valid
  previous_node: extract_application_info
  timeout_seconds: 30
- id: application_quality_check
  name: Application Quality Assessment
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - validate_application_fields
  - extract_application_info
  description: AI-powered assessment of application quality
  user_message: 'Please analyze this loan application:


    Customer Info: ${extract_application_info.customer_info}

    Validation Results: ${validate_application_fields}


    Provide a detailed quality assessment.

    '
  previous_node: validate_application_fields
  system_message: 'You are an expert loan underwriting analyst. Review the application
    data and assess its quality.


    Evaluate:

    1. Data completeness and accuracy

    2. Consistency of information

    3. Red flags or inconsistencies

    4. Overall application quality score (1-100)


    Return a JSON response with your assessment.

    '
- id: ocr_pan_card
  name: OCR PAN Card Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - extract_application_info
  - application_quality_check
  description: Extract text from PAN card image using OCR
  deployment_id: pod-tldususf
  previous_node: application_quality_check
  tool_arguments:
    image_url: ${extract_application_info.customer_info.pancard_link}
  timeout_seconds: 120
- id: verify_pan_card
  name: PAN Card Verification Analysis
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_pan_card
  - extract_application_info
  description: AI-powered analysis of OCR extracted PAN card data with application
    cross-verification
  user_message: "Please analyze this OCR extracted text from a PAN card and cross-verify\
    \ with application data:\n\nOCR Extracted Text: ${ocr_pan_card}\n\nApplication\
    \ Data for Cross-Verification:\nCustomer Info: ${extract_application_info.customer_info}\n\
    \nProvide comprehensive PAN verification data maintaining the exact structure:\n\
    {\n  \"pan_number\": \"extracted PAN\",\n  \"is_valid\": true/false,\n  \"name_match\"\
    : true/false,\n  \"status\": \"active/inactive\",\n  \"verification_score\": 0-100,\n\
    \  \"verified_name\": \"extracted name\",\n  \"father_name\": \"extracted father\
    \ name\",\n  \"date_of_birth\": \"YYYY-MM-DD\",\n  \"registered_address\": {\n\
    \    \"line1\": \"\",\n    \"line2\": \"\",\n    \"city\": \"\",\n    \"state\"\
    : \"\",\n    \"pincode\": \"\",\n    \"country\": \"India\"\n  },\n  \"pan_issue_date\"\
    : \"date\",\n  \"pan_category\": \"Individual\",\n  \"aadhaar_linked\": true/false,\n\
    \  \"mobile_linked\": true/false,\n  \"email_linked\": true/false,\n  \"jurisdiction\"\
    : \"\",\n  \"assessing_officer\": \"\",\n  \"verification_timestamp\": \"current\
    \ timestamp\",\n  \"document_quality\": {\n    \"image_clarity\": 0-100,\n   \
    \ \"tamper_detection\": \"no_tampering/tampering_detected\",\n    \"security_features\"\
    : \"authentic/suspicious\",\n    \"ocr_confidence\": 0-100\n  },\n  \"tax_compliance\"\
    : {\n    \"returns_filed\": true/false,\n    \"last_return_year\": \"\",\n   \
    \ \"compliance_rating\": \"Good/Average/Poor\"\n  }\n}\n"
  previous_node: ocr_pan_card
  system_message: 'You are an expert document verification specialist. Analyze the
    OCR extracted text from a PAN card and cross-verify with application data.


    Extract and verify:

    1. PAN number (10 character alphanumeric)

    2. Personal details (name, father''s name, DOB)

    3. Address information

    4. Cross-match with application data for consistency

    5. Document authenticity indicators

    6. Quality and verification scores


    Return the data in the exact same JSON structure format with all required PAN
    verification fields.

    Calculate verification scores based on OCR quality and data consistency.

    '
- id: ocr_aadhaar_card
  name: OCR Aadhaar Card Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - application_quality_check
  - extract_application_info
  description: Extract text from Aadhaar card image using OCR
  deployment_id: pod-tldususf
  previous_node: application_quality_check
  tool_arguments:
    image_url: ${extract_application_info.customer_info.adhar_card_link}
  timeout_seconds: 120
- id: verify_aadhaar
  name: Aadhaar Verification Analysis
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_aadhaar_card
  description: AI-powered analysis of OCR extracted Aadhaar data with application
    cross-verification
  user_message: "Please analyze this OCR extracted text from an Aadhaar card and cross-verify\
    \ with application data:\n\nOCR Extracted Text: ${ocr_aadhaar_card}\n\nApplication\
    \ Data for Cross-Verification:\nCustomer Info: ${extract_application_info.customer_info}\n\
    \nProvide comprehensive verification data maintaining the exact structure:\n{\n\
    \  \"aadhaar_masked\": \"XXXX-XXXX-1234\",\n  \"is_valid\": true/false,\n  \"\
    address_match\": true/false,\n  \"biometric_verified\": true/false,\n  \"verification_score\"\
    : 0-100,\n  \"last_updated\": \"date\",\n  \"verified_name\": \"extracted name\"\
    ,\n  \"father_name\": \"extracted father name\",\n  \"date_of_birth\": \"YYYY-MM-DD\"\
    ,\n  \"gender\": \"Male/Female\",\n  \"mobile_verified\": true/false,\n  \"email_verified\"\
    : true/false,\n  \"registered_address\": {\n    \"care_of\": \"\",\n    \"building\"\
    : \"\",\n    \"street\": \"\",\n    \"area\": \"\",\n    \"city\": \"\",\n   \
    \ \"district\": \"\",\n    \"state\": \"\",\n    \"pincode\": \"\",\n    \"country\"\
    : \"India\"\n  },\n  \"aadhaar_generation_date\": \"date\",\n  \"aadhaar_update_history\"\
    : [],\n  \"linked_services\": {},\n  \"biometric_details\": {},\n  \"demographic_verification\"\
    : {\n    \"name_similarity\": 0-100,\n    \"address_similarity\": 0-100,\n   \
    \ \"date_birth_match\": true/false,\n    \"gender_match\": true/false\n  },\n\
    \  \"aadhaar_status\": \"Active/Inactive\",\n  \"verification_timestamp\": \"\
    current timestamp\"\n}\n"
  previous_node: ocr_aadhaar_card
  system_message: 'You are an expert document verification specialist. Analyze the
    OCR extracted text from an Aadhaar card and cross-verify with application data.


    Extract and verify:

    1. Aadhaar number (mask it for privacy as XXXX-XXXX-1234)

    2. Personal details (name, father''s name, DOB, gender)

    3. Address information

    4. Cross-match with application data for consistency

    5. Document authenticity indicators

    6. Quality and verification scores


    Return the data in the exact same JSON structure format with all required fields.

    Calculate verification scores based on OCR quality and data consistency.

    '
- id: ocr_passport
  name: OCR Passport Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - extract_application_info
  - application_quality_check
  description: Extract text from passport image using OCR
  deployment_id: pod-tldususf
  previous_node: application_quality_check
  tool_arguments:
    image_url: ${extract_application_info.customer_info.passport_link}
  timeout_seconds: 120
- id: verify_passport
  name: Passport Verification Analysis
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_passport
  - extract_application_info
  description: AI-powered analysis of OCR extracted passport data with application
    cross-verification
  user_message: "Please analyze this OCR extracted text from a passport and cross-verify\
    \ with application data:\n\nOCR Extracted Text: ${ocr_passport}\n\nApplication\
    \ Data for Cross-Verification:\nCustomer Info: ${extract_application_info.customer_info}\n\
    \nProvide comprehensive passport verification data maintaining the exact structure:\n\
    {\n  \"passport_number\": \"extracted passport number\",\n  \"is_valid\": true/false,\n\
    \  \"expiry_date\": \"YYYY-MM-DD\",\n  \"issuing_authority\": \"Government of\
    \ India\",\n  \"verification_score\": 0-100,\n  \"document_provided\": true/false,\n\
    \  \"holder_name\": \"extracted name\",\n  \"father_name\": \"extracted father\
    \ name\",\n  \"mother_name\": \"extracted mother name\",\n  \"date_of_birth\"\
    : \"YYYY-MM-DD\",\n  \"place_of_birth\": \"extracted place\",\n  \"gender\": \"\
    Male/Female\",\n  \"nationality\": \"Indian\",\n  \"passport_type\": \"P\",\n\
    \  \"issue_date\": \"YYYY-MM-DD\",\n  \"issuing_office\": \"extracted office\"\
    ,\n  \"file_number\": \"extracted file number\",\n  \"personal_details\": {\n\
    \    \"height\": \"\",\n    \"identification_marks\": \"\",\n    \"blood_group\"\
    : \"\",\n    \"marital_status\": \"\"\n  },\n  \"address_in_passport\": {\n  \
    \  \"line1\": \"\",\n    \"line2\": \"\",\n    \"city\": \"\",\n    \"state\"\
    : \"\",\n    \"pincode\": \"\",\n    \"country\": \"India\"\n  },\n  \"travel_history\"\
    : [],\n  \"current_visas\": [],\n  \"passport_renewals\": 0,\n  \"previous_passport_number\"\
    : \"\",\n  \"emergency_contact\": {\n    \"name\": \"\",\n    \"relationship\"\
    : \"\",\n    \"phone\": \"\"\n  },\n  \"document_security\": {\n    \"chip_verified\"\
    : true/false,\n    \"security_features\": \"authentic/suspicious\",\n    \"machine_readable\"\
    : true/false,\n    \"digital_signature\": \"verified/failed\"\n  },\n  \"verification_timestamp\"\
    : \"current timestamp\"\n}\n"
  previous_node: ocr_passport
  system_message: 'You are an expert document verification specialist. Analyze the
    OCR extracted text from a passport and cross-verify with application data.


    Extract and verify:

    1. Passport number, expiry date, issuing authority

    2. Personal details (name, father''s name, mother''s name, DOB, gender)

    3. Address information and travel history

    4. Cross-match with application data for consistency

    5. Document authenticity indicators

    6. Quality and verification scores


    Return the data in the exact same JSON structure format with all required passport
    verification fields.

    Calculate verification scores based on OCR quality and data consistency.

    '
- id: ocr_address_proof
  name: OCR Address Proof Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - extract_application_info
  - application_quality_check
  description: Extract text from address proof document using OCR
  deployment_id: pod-tldususf
  previous_node: application_quality_check
  tool_arguments:
    image_url: ${extract_application_info.customer_info.address_proof}
  timeout_seconds: 120
- id: verify_address_proof
  name: Address Proof Verification Analysis
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_address_proof
  - extract_application_info
  description: AI-powered analysis of OCR extracted address proof data with application
    cross-verification
  user_message: "Please analyze this OCR extracted text from an address proof document\
    \ and cross-verify with application data:\n\nOCR Extracted Text: ${ocr_address_proof}\n\
    \nApplication Data for Cross-Verification:\nCustomer Info: ${extract_application_info.customer_info}\n\
    \nProvide comprehensive address proof verification data maintaining the exact\
    \ structure:\n{\n  \"document_type\": \"utility_bill/bank_statement/rental_agreement\"\
    ,\n  \"is_valid\": true/false,\n  \"address_match\": true/false,\n  \"document_date\"\
    : \"YYYY-MM-DD\",\n  \"verification_score\": 0-100,\n  \"address_confirmed\":\
    \ \"extracted address\",\n  \"document_issuer\": \"extracted issuer\",\n  \"consumer_name\"\
    : \"extracted name\",\n  \"consumer_number\": \"extracted consumer number\",\n\
    \  \"billing_period\": \"extracted period\",\n  \"detailed_address\": {\n    \"\
    house_number\": \"\",\n    \"street_name\": \"\",\n    \"area\": \"\",\n    \"\
    locality\": \"\",\n    \"city\": \"\",\n    \"district\": \"\",\n    \"state\"\
    : \"\",\n    \"pincode\": \"\",\n    \"country\": \"India\"\n  },\n  \"service_details\"\
    : {\n    \"service_type\": \"\",\n    \"connection_type\": \"\",\n    \"load_sanctioned\"\
    : \"\",\n    \"meter_number\": \"\",\n    \"account_status\": \"\"\n  },\n  \"\
    billing_information\": {\n    \"current_reading\": 0,\n    \"previous_reading\"\
    : 0,\n    \"units_consumed\": 0,\n    \"bill_amount\": 0.0,\n    \"payment_status\"\
    : \"\",\n    \"payment_date\": \"\"\n  },\n  \"document_verification\": {\n  \
    \  \"watermark_verified\": true/false,\n    \"official_seal\": true/false,\n \
    \   \"barcode_verified\": true/false,\n    \"digital_signature\": \"verified/failed\"\
    \n  },\n  \"address_validation\": {\n    \"geographic_coordinates\": {\n     \
    \ \"latitude\": 0.0,\n      \"longitude\": 0.0\n    },\n    \"postal_verification\"\
    : \"confirmed/failed\",\n    \"google_maps_verified\": true/false,\n    \"delivery_confirmation\"\
    : \"successful/failed\"\n  },\n  \"usage_pattern\": {\n    \"average_monthly_consumption\"\
    : 0,\n    \"seasonal_variation\": \"\",\n    \"payment_history\": \"\",\n    \"\
    account_age\": \"\"\n  },\n  \"verification_timestamp\": \"current timestamp\"\
    \n}\n"
  previous_node: ocr_address_proof
  system_message: 'You are an expert document verification specialist. Analyze the
    OCR extracted text from an address proof document and cross-verify with application
    data.


    Extract and verify:

    1. Document type (utility_bill, bank_statement, rental_agreement, etc.)

    2. Address details and match with application address

    3. Document issuer and consumer information

    4. Document date and validity

    5. Cross-match with application data for consistency

    6. Document authenticity indicators

    7. Quality and verification scores


    Return the data in the exact same JSON structure format with all required address
    proof verification fields.

    Calculate verification scores based on OCR quality and data consistency.

    '
- id: consolidate_document_verification
  name: Document Verification Summary
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - verify_pan_card
  - verify_aadhaar
  - verify_passport
  - verify_address_proof
  description: AI analysis of all document verification results
  user_message: 'Please analyze these document verification results:


    PAN Verification: ${verify_pan_card}

    Aadhaar Verification: ${verify_aadhaar}

    Passport Verification: ${verify_passport}

    Address Proof: ${verify_address_proof}


    Provide a consolidated assessment.

    '
  previous_node:
  - verify_pan_card
  - verify_aadhaar
  - verify_passport
  - verify_address_proof
  system_message: 'You are a document verification specialist. Analyze all document
    verification results and provide a comprehensive assessment.


    Consider:

    1. Overall verification success rate

    2. Consistency across documents

    3. Risk factors identified

    4. Recommendation for proceeding


    Return a JSON response with your analysis.

    '
- id: ocr_salary_slip
  name: Get First Salary Slip URL
  type: script
  script: "import json\n\n# Get first salary slip URL from application data\nsalary_slip_urls\
    \ = ${extract_application_info.customer_info.salary_slips}\n\n# Get the first\
    \ URL if available\nfirst_salary_slip_url = \"\"\nif salary_slip_urls and len(salary_slip_urls)\
    \ > 0:\n    first_salary_slip_url = salary_slip_urls[0] if salary_slip_urls[0]\
    \ else \"\"\n\nresult = {\n    \"first_salary_slip_url\": first_salary_slip_url,\n\
    \    \"total_salary_slips\": len(salary_slip_urls) if salary_slip_urls else 0,\n\
    \    \"all_salary_slip_urls\": salary_slip_urls\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\"\
    )\n"
  depends_on:
  - consolidate_document_verification
  - extract_application_info
  description: Extract first salary slip URL from application data
  previous_node: consolidate_document_verification
  timeout_seconds: 30
- id: ocr_salary_slip_processing
  name: OCR Salary Slip Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - ocr_salary_slip
  description: Extract text from first salary slip image using OCR
  deployment_id: pod-tldususf
  previous_node: ocr_salary_slip
  tool_arguments:
    image_url: ${ocr_salary_slip.first_salary_slip_url}
  timeout_seconds: 120
- id: analyze_salary_slips
  name: Salary Slip Analysis with OCR
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_salary_slip_processing
  - extract_application_info
  description: AI-powered analysis of OCR extracted salary slip data with application
    cross-verification
  user_message: "Please analyze this OCR extracted salary slip and cross-verify with\
    \ application data:\n\nOCR Extracted Text: ${ocr_salary_slip_processing}\n\nApplication\
    \ Data for Cross-Verification:\nCustomer Info: ${extract_application_info.customer_info}\n\
    \nProvide comprehensive salary analysis maintaining the exact structure with all\
    \ required fields:\n{\n  \"monthly_gross\": 0,\n  \"monthly_net\": 0,\n  \"annual_gross\"\
    : 0,\n  \"annual_net\": 0,\n  \"employer\": \"\",\n  \"employment_duration\":\
    \ \"\",\n  \"salary_consistency\": true/false,\n  \"recent_increment\": true/false,\n\
    \  \"employee_details\": {\n    \"employee_id\": \"\",\n    \"full_name\": \"\"\
    ,\n    \"designation\": \"\",\n    \"department\": \"\",\n    \"grade\": \"\"\
    ,\n    \"location\": \"\",\n    \"joining_date\": \"\",\n    \"pan_number\": \"\
    \",\n    \"uan_number\": \"\",\n    \"bank_account\": \"\"\n  },\n  \"employer_details\"\
    : {\n    \"company_name\": \"\",\n    \"company_address\": \"\",\n    \"employer_pan\"\
    : \"\",\n    \"employer_tan\": \"\",\n    \"pf_registration\": \"\",\n    \"esi_registration\"\
    : \"\"\n  },\n  \"earnings_breakdown\": {\n    \"basic_salary\": 0,\n    \"hra\"\
    : 0,\n    \"transport_allowance\": 0,\n    \"special_allowance\": 0,\n    \"performance_bonus\"\
    : 0,\n    \"overtime\": 0,\n    \"arrears\": 0\n  },\n  \"deductions\": {\n  \
    \  \"income_tax\": 0,\n    \"provident_fund\": 0,\n    \"professional_tax\": 0,\n\
    \    \"health_insurance\": 0,\n    \"life_insurance\": 0,\n    \"loan_deduction\"\
    : 0,\n    \"other_deductions\": 0\n  },\n  \"leave_details\": {\n    \"total_days\"\
    : 0,\n    \"present_days\": 0,\n    \"paid_leave\": 0,\n    \"loss_of_pay\": 0,\n\
    \    \"public_holidays\": 0\n  },\n  \"salary_trends\": {\n    \"last_6_months\"\
    : [],\n    \"average_monthly\": 0,\n    \"increment_percentage\": 0,\n    \"last_increment_date\"\
    : \"\",\n    \"bonus_history\": []\n  },\n  \"compliance_details\": {\n    \"\
    pf_contribution\": 0,\n    \"esi_contribution\": 0,\n    \"tds_deducted\": 0,\n\
    \    \"form16_available\": true/false,\n    \"salary_certificate\": true/false\n\
    \  },\n  \"verification_status\": {\n    \"employer_verification\": \"verified/unverified\"\
    ,\n    \"bank_credit_confirmation\": true/false,\n    \"tax_compliance\": \"compliant/non-compliant\"\
    ,\n    \"document_authenticity\": \"verified/suspicious\"\n  }\n}\n"
  previous_node: ocr_salary_slip_processing
  system_message: 'You are an expert financial document analyst. Analyze the OCR extracted
    text from a salary slip and cross-verify with application data.


    Extract and analyze:

    1. Monthly gross and net salary amounts

    2. Employee and employer details

    3. Earnings breakdown (basic, HRA, allowances, bonuses)

    4. Deductions (tax, PF, insurance, etc.)

    5. Leave details and attendance

    6. Compliance information (PF, ESI, TDS)

    7. Cross-match with application data for consistency


    Return the data in the exact same JSON structure format with all required salary
    analysis fields.

    Calculate verification scores based on OCR quality and data consistency.

    '
- id: ocr_bank_statement
  name: OCR Bank Statement Processing
  type: mcp
  tool_name: ocr_image_url
  depends_on:
  - consolidate_document_verification
  - extract_application_info
  description: Extract text from bank statement image using OCR
  deployment_id: pod-tldususf
  previous_node: consolidate_document_verification
  tool_arguments:
    image_url: ${extract_application_info.customer_info.bank_statement_link}
  timeout_seconds: 120
- id: analyze_bank_statements
  name: Bank Statement Analysis with OCR
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - ocr_bank_statement
  - extract_application_info
  description: AI-powered analysis of OCR extracted bank statement data with application
    cross-verification
  user_message: 'Please analyze this OCR extracted bank statement and cross-verify
    with application data:


    OCR Extracted Text: ${ocr_bank_statement}


    Application Data for Cross-Verification:

    Customer Info: ${extract_application_info.customer_info}


    Provide comprehensive bank statement analysis maintaining the exact structure
    with all required fields.

    '
  previous_node: ocr_bank_statement
  system_message: 'You are an expert financial analyst. Analyze the OCR extracted
    text from a bank statement and cross-verify with application data.


    Extract and analyze:

    1. Account details and account holder information

    2. Monthly inflow and outflow patterns

    3. Salary credits and their consistency

    4. Transaction patterns and financial behavior

    5. Balance behavior and trends

    6. Loan EMIs and existing financial obligations

    7. Compliance indicators and suspicious activities

    8. Financial ratios and stability metrics


    Return the data in the exact same JSON structure format with all required bank
    analysis fields.

    Calculate verification scores based on OCR quality and financial patterns.

    '
  timeout_seconds: 90
- id: verify_tax_returns
  name: Tax Return Verification
  type: script
  script: "import json\n\n# Simulate tax return verification\ntax_analysis = {\n \
    \   \"declared_income\": 98000,\n    \"tax_paid\": 12000,\n    \"returns_filed_consistently\"\
    : True,\n    \"income_growth_trend\": \"positive\",\n    \"discrepancies_found\"\
    : False,\n    \"verification_with_govt\": True,\n    \"tax_compliance_score\"\
    : 92.0\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(tax_analysis)}\")\n"
  depends_on:
  - consolidate_document_verification
  description: Verify and analyze tax return documents
  previous_node: consolidate_document_verification
  timeout_seconds: 60
- id: verify_employment
  name: Employment Verification
  type: script
  script: "import json\n\n# Simulate employment verification\nemployment_data = {\n\
    \    \"employment_confirmed\": True,\n    \"designation\": \"Senior Software Engineer\"\
    ,\n    \"employment_type\": \"permanent\",\n    \"probation_status\": \"confirmed\"\
    ,\n    \"employer_rating\": \"A+\",\n    \"job_stability_score\": 88.5,\n    \"\
    reference_check_passed\": True,\n    \"hr_contact_verified\": True\n}\n\nprint(f\"\
    __OUTPUTS__ {json.dumps(employment_data)}\")\n"
  depends_on:
  - analyze_salary_slips
  description: Verify employment status and details with employer
  previous_node: analyze_salary_slips
  timeout_seconds: 120
- id: calculate_debt_to_income
  name: Debt-to-Income Ratio Calculation
  type: script
  script: "import json\n\n# Extract income data\nmonthly_net = ${analyze_salary_slips.monthly_net}\n\
    existing_emis = ${analyze_bank_statements.loan_emis_detected} * 1200  # Assume\
    \ 1200 per EMI\n\n# Calculate proposed EMI (simplified calculation)\nloan_amount\
    \ = ${loan_amount}\nproposed_emi = loan_amount / 240  # 20 year loan approximation\n\
    \ntotal_debt = existing_emis + proposed_emi\ndebt_to_income_ratio = total_debt\
    \ / monthly_net\n\n# Get max allowed ratio from input parameters\nmax_allowed_ratio\
    \ = ${max_debt_to_income_ratio}\n\nresult = {\n    \"monthly_net_income\": monthly_net,\n\
    \    \"existing_debt_payments\": existing_emis,\n    \"proposed_emi\": proposed_emi,\n\
    \    \"total_debt_payments\": total_debt,\n    \"debt_to_income_ratio\": debt_to_income_ratio,\n\
    \    \"ratio_acceptable\": debt_to_income_ratio <= max_allowed_ratio,\n    \"\
    max_allowed_ratio\": max_allowed_ratio\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\"\
    )\n"
  depends_on:
  - analyze_salary_slips
  - analyze_bank_statements
  - verify_tax_returns
  description: Calculate applicant's debt-to-income ratio
  previous_node:
  - analyze_salary_slips
  - analyze_bank_statements
  - verify_tax_returns
  timeout_seconds: 30
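  # In this script, debt_to_income_ratio = (loan_emis_detected * 1200 + loan_amount / 240) / monthly_net,
  # compared against max_debt_to_income_ratio (workflow default 0.4).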
- id: income_assessment
  name: Comprehensive Income Assessment
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - calculate_debt_to_income
  - verify_employment
  - analyze_salary_slips
  - analyze_bank_statements
  - verify_tax_returns
  description: AI-powered comprehensive income and financial stability assessment
  user_message: 'Please analyze this comprehensive income data:


    Salary Analysis: ${analyze_salary_slips}

    Bank Statement Analysis: ${analyze_bank_statements}

    Tax Returns: ${verify_tax_returns}

    Employment Verification: ${verify_employment}

    Debt-to-Income Calculation: ${calculate_debt_to_income}


    Provide a thorough income assessment.

    '
  previous_node:
  - calculate_debt_to_income
  - verify_employment
  system_message: 'You are a senior financial analyst specializing in income assessment
    for loan underwriting.


    Analyze the provided income data and provide:

    1. Income stability assessment

    2. Debt servicing capability

    3. Financial discipline evaluation

    4. Risk factors and recommendations

    5. Overall income adequacy score (1-100)


    Return a detailed JSON assessment.

    '
- id: check_cibil_score
  name: CIBIL Score Check
  type: script
  script: "import json\nimport random\n\n# Simulate CIBIL score check\ncibil_data\
    \ = {\n    \"credit_score\": 720,\n    \"score_range\": \"Good\",\n    \"last_updated\"\
    : \"2024-06-01\",\n    \"credit_history_length\": \"5 years\",\n    \"total_accounts\"\
    : 8,\n    \"active_accounts\": 6,\n    \"closed_accounts\": 2,\n    \"credit_utilization\"\
    : 35.5,\n    \"payment_history\": \"99% on-time\",\n    \"recent_inquiries\":\
    \ 2\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(cibil_data)}\")\n"
  depends_on:
  - income_assessment
  description: Retrieve and analyze CIBIL credit score
  previous_node: income_assessment
  timeout_seconds: 60
- id: analyze_credit_history
  name: Credit History Analysis
  type: script
  script: "import json\n\n# Simulate credit history analysis\ncredit_history = {\n\
    \    \"oldest_account_age\": \"60 months\",\n    \"average_account_age\": \"32\
    \ months\",\n    \"credit_mix\": {\n        \"credit_cards\": 3,\n        \"personal_loans\"\
    : 1,\n        \"auto_loans\": 1,\n        \"home_loans\": 0,\n        \"other\"\
    : 1\n    },\n    \"repayment_behavior\": {\n        \"never_missed\": 85.0,\n\
    \        \"30_days_late\": 12.0,\n        \"60_days_late\": 2.5,\n        \"90_days_late\"\
    : 0.5,\n        \"defaults\": 0.0\n    },\n    \"credit_limit_utilization\": 35.2,\n\
    \    \"recent_credit_behavior\": \"stable\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(credit_history)}\"\
    )\n"
  depends_on:
  - check_cibil_score
  description: Detailed analysis of credit history and patterns
  previous_node: check_cibil_score
  timeout_seconds: 60
- id: assess_existing_loans
  name: Existing Loan Assessment
  type: script
  script: "import json\n\n# Simulate existing loan assessment\nexisting_loans = {\n\
    \    \"total_outstanding\": 125000,\n    \"number_of_loans\": 2,\n    \"loan_details\"\
    : [\n        {\n            \"type\": \"personal_loan\",\n            \"outstanding\"\
    : 45000,\n            \"emi\": 3500,\n            \"remaining_tenure\": \"18 months\"\
    \n        },\n        {\n            \"type\": \"auto_loan\",\n            \"\
    outstanding\": 80000,\n            \"emi\": 4200,\n            \"remaining_tenure\"\
    : \"24 months\"\n        }\n    ],\n    \"total_monthly_emi\": 7700,\n    \"repayment_track_record\"\
    : \"excellent\",\n    \"loan_burden_ratio\": 0.28\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(existing_loans)}\"\
    )\n"
  depends_on:
  - analyze_credit_history
  description: Assess current loan obligations and repayment capacity
  previous_node: analyze_credit_history
  timeout_seconds: 45
- id: evaluate_default_risk
  name: Default Risk Evaluation
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: risk_assessor
  depends_on:
  - assess_existing_loans
  - check_cibil_score
  - analyze_credit_history
  description: AI-powered default risk assessment
  user_message: 'Please analyze this credit data for default risk:


    CIBIL Score: ${check_cibil_score}

    Credit History: ${analyze_credit_history}

    Existing Loans: ${assess_existing_loans}


    Calculate default risk and provide recommendations.

    '
  previous_node: assess_existing_loans
  system_message: 'You are a credit risk assessment specialist. Analyze the provided
    credit data and calculate default risk.


    Consider:

    1. Credit score and history

    2. Repayment patterns

    3. Credit utilization

    4. Existing loan burden

    5. Overall risk profile


    Provide a risk score (1-100, where 100 is highest risk) and detailed risk analysis.

    '
- id: calculate_credit_score
  name: Internal Credit Score Calculation
  type: script
  script: "import json\n\n# Simulate internal credit scoring\ncibil_score = ${check_cibil_score.credit_score}\n\
    payment_history = 99.0  # From credit history\ncredit_utilization = ${analyze_credit_history.credit_limit_utilization}\n\
    loan_burden = ${assess_existing_loans.loan_burden_ratio}\n\n# Internal scoring\
    \ algorithm (simplified)\ninternal_score = (cibil_score * 0.4) + (payment_history\
    \ * 0.3) + ((100 - credit_utilization) * 0.2) + ((1 - loan_burden) * 100 * 0.1)\n\
    \n# Get min credit score from input parameters\nmin_credit_score = ${min_credit_score}\n\
    \nresult = {\n    \"internal_credit_score\": round(internal_score, 1),\n    \"\
    cibil_score\": cibil_score,\n    \"score_acceptable\": internal_score >= min_credit_score,\n\
    \    \"min_required_score\": min_credit_score,\n    \"risk_category\": \"low\"\
    \ if internal_score >= 750 else \"medium\" if internal_score >= 650 else \"high\"\
    ,\n    \"scoring_factors\": {\n        \"cibil_contribution\": cibil_score * 0.4,\n\
    \        \"payment_history_contribution\": payment_history * 0.3,\n        \"\
    utilization_contribution\": (100 - credit_utilization) * 0.2,\n        \"loan_burden_contribution\"\
    : (1 - loan_burden) * 100 * 0.1\n    }\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\"\
    )\n"
  depends_on:
  - evaluate_default_risk
  - check_cibil_score
  - analyze_credit_history
  - assess_existing_loans
  description: Calculate internal credit score based on all factors
  previous_node: evaluate_default_risk
  timeout_seconds: 30
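  # Worked example with the simulated values above: 720*0.4 + 99.0*0.3 + (100-35.2)*0.2 + (1-0.28)*100*0.1
  # = 288 + 29.7 + 12.96 + 7.2 ≈ 337.9. Note the CIBIL term enters on its native 300-900 scale while the
  # other terms are on a 0-100 scale, so the weighted sum is not itself a 0-100 score.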
- id: kyc_compliance_check
  name: KYC Compliance Check
  type: script
  script: "import json\n\n# Simulate KYC compliance check\nkyc_result = {\n    \"\
    kyc_status\": \"compliant\",\n    \"identity_verified\": True,\n    \"address_verified\"\
    : True,\n    \"income_verified\": True,\n    \"documents_complete\": True,\n \
    \   \"risk_category\": \"low\",\n    \"compliance_score\": 98.5,\n    \"last_updated\"\
    : \"2024-07-15\",\n    \"regulatory_requirements_met\": True\n}\n\nprint(f\"__OUTPUTS__\
    \ {json.dumps(kyc_result)}\")\n"
  depends_on:
  - calculate_credit_score
  description: Verify KYC compliance requirements
  previous_node: calculate_credit_score
  timeout_seconds: 60
- id: aml_screening
  name: AML Screening
  type: script
  script: "import json\n\n# Simulate AML screening\naml_result = {\n    \"aml_status\"\
    : \"cleared\",\n    \"watchlist_check\": \"no_matches\",\n    \"pep_screening\"\
    : \"not_identified\",\n    \"sanctions_check\": \"cleared\",\n    \"adverse_media\"\
    : \"no_hits\",\n    \"risk_rating\": \"low\",\n    \"compliance_score\": 96.8,\n\
    \    \"screening_date\": \"2024-07-15\",\n    \"manual_review_required\": False\n\
    }\n\nprint(f\"__OUTPUTS__ {json.dumps(aml_result)}\")\n"
  depends_on:
  - kyc_compliance_check
  description: Anti-Money Laundering compliance screening
  previous_node: kyc_compliance_check
  timeout_seconds: 90
- id: regulatory_compliance
  name: Regulatory Requirements Check
  type: script
  script: "import json\n\n# Simulate regulatory compliance check\nregulatory_result\
    \ = {\n    \"rbi_guidelines_met\": True,\n    \"lending_norms_compliant\": True,\n\
    \    \"documentation_complete\": True,\n    \"disclosure_requirements_met\": True,\n\
    \    \"consumer_protection_compliant\": True,\n    \"data_privacy_compliant\"\
    : True,\n    \"compliance_score\": 97.2,\n    \"audit_trail_complete\": True,\n\
    \    \"regulatory_risk\": \"minimal\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(regulatory_result)}\"\
    )\n"
  depends_on:
  - aml_screening
  description: Check compliance with banking regulations
  previous_node: aml_screening
  timeout_seconds: 45
- id: internal_policy_check
  name: Internal Policy Compliance
  type: script
  script: "import json\n\n# Simulate internal policy check\npolicy_result = {\n  \
    \  \"loan_amount_within_limits\": True,\n    \"ltv_ratio_acceptable\": True,\n\
    \    \"income_criteria_met\": True,\n    \"age_criteria_met\": True,\n    \"employment_criteria_met\"\
    : True,\n    \"credit_score_threshold_met\": True,\n    \"geographic_restrictions_met\"\
    : True,\n    \"policy_compliance_score\": 95.5,\n    \"exceptions_required\":\
    \ [],\n    \"policy_version\": \"2024.1\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(policy_result)}\"\
    )\n"
  depends_on:
  - regulatory_compliance
  description: Verify compliance with internal lending policies
  previous_node: regulatory_compliance
  timeout_seconds: 30
- id: compliance_consolidation
  name: Compliance Assessment Summary
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: compliance_reviewer
  depends_on:
  - internal_policy_check
  - kyc_compliance_check
  - aml_screening
  - regulatory_compliance
  description: AI-powered comprehensive compliance assessment
  user_message: 'Please analyze these compliance check results:


    KYC Compliance: ${kyc_compliance_check}

    AML Screening: ${aml_screening}

    Regulatory Compliance: ${regulatory_compliance}

    Internal Policy: ${internal_policy_check}


    Provide a consolidated compliance assessment.

    '
  previous_node: internal_policy_check
  system_message: 'You are a compliance officer specializing in loan underwriting
    compliance.


    Review all compliance check results and provide:

    1. Overall compliance status

    2. Risk assessment

    3. Any compliance gaps or concerns

    4. Recommendations for proceeding

    5. Compliance confidence score (1-100)


    Return a comprehensive compliance assessment in JSON format.

    '
- id: underwriting_decision_router
  name: Underwriting Decision Router
  type: conditional_router
  conditions:
  - name: high_risk_path
    route: decline_path
    condition: ${calculate_credit_score.internal_credit_score} < 600 || ${calculate_debt_to_income.debt_to_income_ratio}
      > 0.50
  - name: conditional_approval_path
    route: conditional_path
    condition: ${calculate_credit_score.internal_credit_score} >= 600 && ${calculate_credit_score.internal_credit_score}
      < 700
  - name: standard_approval_path
    route: approval_path
    condition: ${calculate_credit_score.internal_credit_score} >= 700
  depends_on:
  - compliance_consolidation
  - calculate_credit_score
  - calculate_debt_to_income
  description: Route to appropriate decision path based on assessments
  default_route: manual_review_path
  previous_node: compliance_consolidation
- id: final_underwriting_analysis
  name: Final Underwriting Decision Analysis
  type: ai_agent
  config:
    input_format: json
    output_format: json
    model_client_id: underwriting_analyst
  depends_on:
  - underwriting_decision_router
  - application_quality_check
  - consolidate_document_verification
  - income_assessment
  - calculate_credit_score
  - evaluate_default_risk
  - compliance_consolidation
  - calculate_debt_to_income
  description: Comprehensive AI analysis for final underwriting decision
  user_message: 'Please make the final underwriting decision based on:


    Application Quality: ${application_quality_check}

    Document Verification: ${consolidate_document_verification}

    Income Assessment: ${income_assessment}

    Credit Score: ${calculate_credit_score}

    Default Risk: ${evaluate_default_risk}

    Compliance: ${compliance_consolidation}

    Debt-to-Income: ${calculate_debt_to_income}


    Provide your final recommendation with detailed reasoning.

    '
  previous_node: underwriting_decision_router
  system_message: 'You are a senior underwriting manager making final loan decisions.


    Analyze all assessment results and provide:

    1. Final recommendation (APPROVE/CONDITIONAL/DECLINE)

    2. Loan terms and conditions

    3. Risk mitigation measures

    4. Reasoning for decision

    5. Confidence level (1-100)


    Consider all factors: income, credit, compliance, and overall risk profile.

    '
- id: generate_approval_terms
  name: Generate Loan Approval Terms
  type: script
  script: "import json\n\n# Generate loan terms based on assessment\nloan_terms =\
    \ {\n    \"decision\": \"APPROVED\",\n    \"loan_amount\": ${loan_amount},\n \
    \   \"interest_rate\": 8.5,\n    \"tenure_months\": 240,\n    \"monthly_emi\"\
    : round(${loan_amount} / 240 * 1.085, 2),\n    \"processing_fee\": ${loan_amount}\
    \ * 0.01,\n    \"conditions\": [\n        \"Property insurance required\",\n \
    \       \"Auto-debit for EMI payments\",\n        \"Annual income certificate\
    \ submission\"\n    ],\n    \"disbursement_conditions\": [\n        \"Property\
    \ registration documents\",\n        \"Insurance policy copy\",\n        \"Post-dated\
    \ cheques for EMI\"\n    ],\n    \"approval_date\": \"2024-07-15\",\n    \"offer_validity\"\
    : \"2024-08-15\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(loan_terms)}\")\n"
  depends_on:
  - final_underwriting_analysis
  description: Generate loan terms and conditions for approved application
  previous_node: final_underwriting_analysis
  timeout_seconds: 60
  execute_on_routes:
  - approval_path
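  # monthly_emi above is a flat approximation (principal / 240, scaled by 1.085); a fully amortized EMI
  # would instead use P*r*(1+r)^n / ((1+r)^n - 1) with monthly rate r and n = 240.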
- id: generate_conditional_terms
  name: Generate Conditional Approval Terms
  type: script
  script: "import json\n\n# Generate conditional approval terms\nconditional_terms\
    \ = {\n    \"decision\": \"CONDITIONAL_APPROVAL\",\n    \"loan_amount\": ${loan_amount},\n\
    \    \"interest_rate\": 9.2,\n    \"tenure_months\": 240,\n    \"monthly_emi\"\
    : round(${loan_amount} / 240 * 1.092, 2),\n    \"processing_fee\": ${loan_amount}\
    \ * 0.015,\n    \"additional_conditions\": [\n        \"Co-applicant required\"\
    ,\n        \"Additional collateral security\",\n        \"Higher down payment\
    \ (30% instead of 20%)\",\n        \"Salary account maintenance for 2 years\"\n\
    \    ],\n    \"documents_required\": [\n        \"Co-applicant income documents\"\
    ,\n        \"Additional property papers\",\n        \"Enhanced bank statements\
    \ (12 months)\"\n    ],\n    \"approval_date\": \"2024-07-15\",\n    \"offer_validity\"\
    : \"2024-08-01\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(conditional_terms)}\"\
    )\n"
  depends_on:
  - final_underwriting_analysis
  description: Generate conditional approval with additional requirements
  previous_node: final_underwriting_analysis
  timeout_seconds: 60
  execute_on_routes:
  - conditional_path
- id: generate_decline_notice
  name: Generate Loan Decline Notice
  type: script
  script: "import json\n\n# Generate decline notice\ndecline_notice = {\n    \"decision\"\
    : \"DECLINED\",\n    \"primary_reasons\": [\n        \"Insufficient credit score\"\
    ,\n        \"High debt-to-income ratio\",\n        \"Inadequate income documentation\"\
    \n    ],\n    \"decline_code\": \"RISK_001\",\n    \"decline_date\": \"2024-07-15\"\
    ,\n    \"appeal_process\": \"Customer can appeal within 30 days with additional\
    \ documentation\",\n    \"suggestions\": [\n        \"Improve credit score by\
    \ 6 months\",\n        \"Reduce existing debt burden\",\n        \"Consider lower\
    \ loan amount\",\n        \"Add co-applicant with good credit\"\n    ],\n    \"\
    reapplication_eligibility\": \"2024-12-15\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(decline_notice)}\"\
    )\n"
  depends_on:
  - final_underwriting_analysis
  description: Generate loan decline notice with reasons
  previous_node: final_underwriting_analysis
  timeout_seconds: 30
  execute_on_routes:
  - decline_path
- id: flag_for_manual_review
  name: Auto-Approve Manual Review Cases
  type: script
  script: "import json\n\n# Auto-approve with specific conditions for manual review\
    \ cases\nauto_approval_result = {\n    \"decision\": \"AUTO_APPROVED_WITH_CONDITIONS\"\
    ,\n    \"loan_amount\": ${loan_amount},\n    \"interest_rate\": 9.0,  # Slightly\
    \ higher rate for complex cases\n    \"tenure_months\": 240,\n    \"monthly_emi\"\
    : round(${loan_amount} / 240 * 1.090, 2),\n    \"processing_fee\": ${loan_amount}\
    \ * 0.012,\n    \"approval_type\": \"automatic_complex_case\",\n    \"conditions\"\
    : [\n        \"Quarterly income verification for first year\",\n        \"Maintain\
    \ minimum account balance\",\n        \"No additional loans for 12 months\",\n\
    \        \"Property insurance with bank as beneficiary\"\n    ],\n    \"risk_mitigation\"\
    : [\n        \"Enhanced monitoring for first 6 months\",\n        \"Automatic\
    \ alerts for missed payments\",\n        \"Periodic credit score reviews\"\n \
    \   ],\n    \"approval_date\": \"2024-07-15\",\n    \"offer_validity\": \"2024-08-10\"\
    ,\n    \"notes\": \"Auto-approved with enhanced monitoring due to complex risk\
    \ factors\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(auto_approval_result)}\")\n"
  depends_on:
  - final_underwriting_analysis
  description: Automatically approve applications that would have gone to manual review
  previous_node: final_underwriting_analysis
  timeout_seconds: 30
  execute_on_routes:
  - manual_review_path
- id: generate_final_output
  name: Generate Final Workflow Output
  type: script
  script: "import json\n\n# Determine final decision based on route\nfinal_output\
    \ = {\n    \"application_id\": \"${application_id}\",\n    \"processing_date\"\
    : \"2024-07-15\",\n    \"workflow_status\": \"completed\",\n    \"processing_time_minutes\"\
    : 45,\n    \"decision_summary\": {\n        \"credit_score\": ${calculate_credit_score.internal_credit_score},\n\
    \        \"debt_to_income_ratio\": ${calculate_debt_to_income.debt_to_income_ratio},\n\
    \        \"compliance_status\": \"compliant\",\n        \"risk_category\": \"\
    ${calculate_credit_score.risk_category}\"\n    },\n    \"next_steps\": [\n   \
    \     \"Customer notification\",\n        \"Document archival\",\n        \"Compliance\
    \ reporting\"\n    ]\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(final_output)}\")\n"
  depends_on:
  - generate_approval_terms
  - generate_conditional_terms
  - generate_decline_notice
  - flag_for_manual_review
  - calculate_credit_score
  - calculate_debt_to_income
  description: Generate comprehensive workflow output
  previous_node:
  - generate_approval_terms
  - generate_conditional_terms
  - generate_decline_notice
  - flag_for_manual_review
  timeout_seconds: 30
- id: upload_to_supabase
  name: Upload Loan Data to Supabase
  type: script
  script: "import json\nimport os\nimport pandas as pd\nfrom datetime import datetime\n\
    import io\nfrom supabase import create_client, Client\n\n# Supabase configuration\n\
    SUPABASE_URL = \"https://mbauzgvitqvxceqanzjw.supabase.co\"\nSUPABASE_KEY = \"\
    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Im1iYXV6Z3ZpdHF2eGNlcWFuemp3Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTc0Mzg2MDEwOCwiZXhwIjoyMDU5NDM2MTA4fQ.R71cWZoLuq2GNojkLSvhcXXt9rAW9PJ5O9V4g7vXvC0\"\
    \n\ntry:\n    # Initialize Supabase client\n    supabase: Client = create_client(SUPABASE_URL,\
    \ SUPABASE_KEY)\n    \n    # Helper function to safely extract values with defaults\n\
    \    def safe_get(expression, default_value=\"N/A\"):\n        try:\n        \
    \    # Replace ${var} pattern with actual values from environment\n          \
    \  if isinstance(expression, str) and \"${\" in expression:\n                #\
    \ For string expressions, return as-is if they contain variables\n           \
    \     return expression\n            return expression if expression is not None\
    \ else default_value\n        except:\n            return default_value\n    \n\
    \    def safe_get_numeric(expression, default_value=0):\n        try:\n      \
    \      if isinstance(expression, str) and \"${\" in expression:\n            \
    \    # For numeric expressions with variables, return default\n              \
    \  return default_value\n            return float(expression) if expression is\
    \ not None else default_value\n        except:\n            return default_value\n\
    \    \n    def safe_get_boolean(expression, default_value=False):\n        try:\n\
    \            if isinstance(expression, str) and \"${\" in expression:\n      \
    \          # For boolean expressions with variables, return default\n        \
    \        return default_value\n            return bool(expression) if expression\
    \ is not None else default_value\n        except:\n            return default_value\n\
    \    \n    # Create comprehensive data structure for CSV with safe extraction\n\
    \    loan_data = {\n        \"processing_timestamp\": datetime.now().isoformat(),\n\
    \        \"workflow_id\": \"loan_underwriting_workflow\",\n        \"workflow_version\"\
    : \"1.0\",\n        \n        # Application Information\n        \"application_id\"\
    : safe_get(\"${application_id}\", \"UNKNOWN_APP\"),\n        \"requested_amount\"\
    : safe_get_numeric(\"${extract_application_info.customer_info.requested_amount}\"\
    , 0),\n        \"processing_date\": datetime.now().strftime(\"%Y-%m-%d\"),\n \
    \       \"workflow_status\": \"completed\",\n        \n        # Personal Information\n\
    \        \"applicant_name\": safe_get(\"${verify_pan_card.verified_name}\", \"\
    Unknown Applicant\"),\n        \"pan_number\": safe_get(\"${verify_pan_card.pan_number}\"\
    , \"UNKNOWN_PAN\"),\n        \"aadhaar_masked\": safe_get(\"${verify_aadhaar.aadhaar_masked}\"\
    , \"XXXX-XXXX-XXXX\"),\n        \"date_of_birth\": safe_get(\"${verify_pan_card.date_of_birth}\"\
    , \"1990-01-01\"),\n        \"father_name\": safe_get(\"${verify_pan_card.father_name}\"\
    , \"Unknown Father\"),\n        \"gender\": safe_get(\"${verify_aadhaar.gender}\"\
    , \"Unknown\"),\n        \n        # Address Information\n        \"address_line1\"\
    : safe_get(\"${verify_pan_card.registered_address.line1}\", \"Unknown Address\"\
    ),\n        \"address_city\": safe_get(\"${verify_pan_card.registered_address.city}\"\
    , \"Unknown City\"),\n        \"address_state\": safe_get(\"${verify_pan_card.registered_address.state}\"\
    , \"Unknown State\"),\n        \"address_pincode\": safe_get(\"${verify_pan_card.registered_address.pincode}\"\
    , \"000000\"),\n        \n        # Employment Information\n        \"employer_name\"\
    : safe_get(\"${analyze_salary_slips.employer_details.company_name}\", \"Unknown\
    \ Company\"),\n        \"designation\": safe_get(\"${analyze_salary_slips.employee_details.designation}\"\
    , \"Unknown Position\"),\n        \"employee_id\": safe_get(\"${analyze_salary_slips.employee_details.employee_id}\"\
    , \"UNKNOWN_EMP\"),\n        \"employment_duration\": safe_get(\"${analyze_salary_slips.employment_duration}\"\
    , \"Unknown Duration\"),\n        \"employment_verified\": safe_get_boolean(\"\
    ${verify_employment.employment_confirmed}\", False),\n        \"job_stability_score\"\
    : safe_get_numeric(\"${verify_employment.job_stability_score}\", 0),\n       \
    \ \"employment_type\": safe_get(\"${verify_employment.employment_type}\", \"Unknown\"\
    ),\n        \"probation_status\": safe_get(\"${verify_employment.probation_status}\"\
    , \"Unknown\"),\n        \n        # Income Information\n        \"monthly_gross_salary\"\
    : safe_get_numeric(\"${analyze_salary_slips.monthly_gross}\", 0),\n        \"\
    monthly_net_salary\": safe_get_numeric(\"${analyze_salary_slips.monthly_net}\"\
    , 0),\n        \"annual_gross_income\": safe_get_numeric(\"${analyze_salary_slips.annual_gross}\"\
    , 0),\n        \"annual_net_income\": safe_get_numeric(\"${analyze_salary_slips.annual_net}\"\
    , 0),\n        \"loan_emis_detected\": safe_get_numeric(\"${analyze_bank_statements.loan_emis_detected}\"\
    , 0),\n        \"declared_tax_income\": safe_get_numeric(\"${verify_tax_returns.declared_income}\"\
    , 0),\n        \"tax_paid\": safe_get_numeric(\"${verify_tax_returns.tax_paid}\"\
    , 0),\n        \"tax_compliance_score\": safe_get_numeric(\"${verify_tax_returns.tax_compliance_score}\"\
    , 0),\n        \n        # Financial Analysis\n        \"debt_to_income_ratio\"\
    : safe_get_numeric(\"${calculate_debt_to_income.debt_to_income_ratio}\", 0),\n\
    \        \"existing_debt_payments\": safe_get_numeric(\"${calculate_debt_to_income.existing_debt_payments}\"\
    , 0),\n        \"proposed_emi\": safe_get_numeric(\"${calculate_debt_to_income.proposed_emi}\"\
    , 0),\n        \"total_debt_payments\": safe_get_numeric(\"${calculate_debt_to_income.total_debt_payments}\"\
    , 0),\n        \"monthly_net_income\": safe_get_numeric(\"${calculate_debt_to_income.monthly_net_income}\"\
    , 0),\n        \"ratio_acceptable\": safe_get_boolean(\"${calculate_debt_to_income.ratio_acceptable}\"\
    , False),\n        \"max_allowed_ratio\": safe_get_numeric(\"${calculate_debt_to_income.max_allowed_ratio}\"\
    , 0),\n        \n        # Credit Information\n        \"cibil_score\": safe_get_numeric(\"\
    ${check_cibil_score.credit_score}\", 0),\n        \"internal_credit_score\": safe_get_numeric(\"\
    ${calculate_credit_score.internal_credit_score}\", 0),\n        \"credit_history_length\"\
    : safe_get(\"${check_cibil_score.credit_history_length}\", \"Unknown\"),\n   \
    \     \"credit_utilization\": safe_get_numeric(\"${check_cibil_score.credit_utilization}\"\
    , 0),\n        \"risk_category\": safe_get(\"${calculate_credit_score.risk_category}\"\
    , \"unknown\"),\n        \"payment_history\": safe_get(\"${check_cibil_score.payment_history}\"\
    , \"Unknown\"),\n        \"score_range\": safe_get(\"${check_cibil_score.score_range}\"\
    , \"Unknown\"),\n        \"total_accounts\": safe_get_numeric(\"${check_cibil_score.total_accounts}\"\
    , 0),\n        \"active_accounts\": safe_get_numeric(\"${check_cibil_score.active_accounts}\"\
    , 0),\n        \"recent_inquiries\": safe_get_numeric(\"${check_cibil_score.recent_inquiries}\"\
    , 0),\n        \"score_acceptable\": safe_get_boolean(\"${calculate_credit_score.score_acceptable}\"\
    , False),\n        \"min_required_score\": safe_get_numeric(\"${calculate_credit_score.min_required_score}\"\
    , 0),\n        \n        # Existing Loans\n        \"total_outstanding_loans\"\
    : safe_get_numeric(\"${assess_existing_loans.total_outstanding}\", 0),\n     \
    \   \"number_of_existing_loans\": safe_get_numeric(\"${assess_existing_loans.number_of_loans}\"\
    , 0),\n        \"total_monthly_emi\": safe_get_numeric(\"${assess_existing_loans.total_monthly_emi}\"\
    , 0),\n        \"loan_burden_ratio\": safe_get_numeric(\"${assess_existing_loans.loan_burden_ratio}\"\
    , 0),\n        \"repayment_track_record\": safe_get(\"${assess_existing_loans.repayment_track_record}\"\
    , \"Unknown\"),\n        \n        # Document Verification\n        \"pan_verified\"\
    : safe_get_boolean(\"${verify_pan_card.is_valid}\", False),\n        \"pan_verification_score\"\
    : safe_get_numeric(\"${verify_pan_card.verification_score}\", 0),\n        \"\
    aadhaar_verified\": safe_get_boolean(\"${verify_aadhaar.is_valid}\", False),\n\
    \        \"aadhaar_verification_score\": safe_get_numeric(\"${verify_aadhaar.verification_score}\"\
    , 0),\n        \"passport_verified\": safe_get_boolean(\"${verify_passport.is_valid}\"\
    , False),\n        \"address_proof_verified\": safe_get_boolean(\"${verify_address_proof.is_valid}\"\
    , False),\n        \n        # Compliance Status\n        \"kyc_status\": safe_get(\"\
    ${kyc_compliance_check.kyc_status}\", \"unknown\"),\n        \"kyc_compliance_score\"\
    : safe_get_numeric(\"${kyc_compliance_check.compliance_score}\", 0),\n       \
    \ \"identity_verified\": safe_get_boolean(\"${kyc_compliance_check.identity_verified}\"\
    , False),\n        \"address_verified\": safe_get_boolean(\"${kyc_compliance_check.address_verified}\"\
    , False),\n        \"income_verified\": safe_get_boolean(\"${kyc_compliance_check.income_verified}\"\
    , False),\n        \"documents_complete\": safe_get_boolean(\"${kyc_compliance_check.documents_complete}\"\
    , False),\n        \"aml_status\": safe_get(\"${aml_screening.aml_status}\", \"\
    unknown\"),\n        \"aml_compliance_score\": safe_get_numeric(\"${aml_screening.compliance_score}\"\
    , 0),\n        \"watchlist_check\": safe_get(\"${aml_screening.watchlist_check}\"\
    , \"unknown\"),\n        \"pep_screening\": safe_get(\"${aml_screening.pep_screening}\"\
    , \"unknown\"),\n        \"sanctions_check\": safe_get(\"${aml_screening.sanctions_check}\"\
    , \"unknown\"),\n        \"manual_review_required\": safe_get_boolean(\"${aml_screening.manual_review_required}\"\
    , True),\n        \"regulatory_compliant\": safe_get_boolean(\"${regulatory_compliance.rbi_guidelines_met}\"\
    , False),\n        \"internal_policy_compliant\": safe_get_numeric(\"${internal_policy_check.policy_compliance_score}\"\
    , 0),\n        \n        # Final Decision\n        \"final_decision\": safe_get(\"\
    APPROVED\", \"PENDING\"),  # Default to PENDING for safety\n        \"approved_amount\"\
    : safe_get_numeric(\"${extract_application_info.customer_info.requested_amount}\"\
    , 0),\n        \"interest_rate\": safe_get_numeric(\"8.5\", 8.5),\n        \"\
    tenure_months\": safe_get_numeric(\"240\", 240),\n        \"processing_fee_percentage\"\
    : safe_get_numeric(\"1.0\", 1.0),\n        \"decision_confidence\": safe_get_numeric(\"\
    92.0\", 0),\n        \n        # Metadata\n        \"processed_by\": \"automated_workflow\"\
    ,\n        \"processing_node\": \"supabase_upload\"\n    }\n    \n    # Create\
    \ DataFrame\n    df = pd.DataFrame([loan_data])\n    \n    # Generate CSV content\n\
    \    csv_buffer = io.StringIO()\n    df.to_csv(csv_buffer, index=False)\n    csv_content\
    \ = csv_buffer.getvalue()\n    \n    # Generate unique filename\n    timestamp\
    \ = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n    app_id = loan_data[\"application_id\"\
    ].replace(\"/\", \"_\").replace(\"\\\\\", \"_\")\n    filename = f\"loan_approval_{app_id}_{timestamp}.csv\"\
    \n    \n    print(f\"Generated CSV file: {filename}\")\n    print(f\"CSV content\
    \ size: {len(csv_content)} characters\")\n    print(f\"Data rows: {len(df)}\"\
    )\n    print(f\"Columns: {len(df.columns)}\")\n    \n    # Upload to Supabase\
    \ Storage\n    try:\n        # Upload file to the loan-approval bucket\n     \
    \   storage_response = supabase.storage.from_(\"loan-approval\").upload(\n   \
    \         filename, \n            csv_content.encode('utf-8'),\n            file_options={\n\
    \                \"content-type\": \"text/csv\",\n                \"cache-control\"\
    : \"3600\"\n            }\n        )\n        \n        print(f\"File uploaded\
    \ successfully to Supabase\")\n        \n        # Get public URL\n        public_url\
    \ = supabase.storage.from_(\"loan-approval\").get_public_url(filename)\n     \
    \   \n        upload_result = {\n            \"upload_status\": \"success\",\n\
    \            \"bucket_name\": \"loan-approval\",\n            \"filename\": filename,\n\
    \            \"file_size_bytes\": len(csv_content.encode('utf-8')),\n        \
    \    \"public_url\": public_url,\n            \"upload_timestamp\": datetime.now().isoformat(),\n\
    \            \"records_uploaded\": len(df),\n            \"columns_count\": len(df.columns),\n\
    \            \"data_summary\": {\n                \"application_id\": loan_data[\"\
    application_id\"],\n                \"applicant_name\": loan_data[\"applicant_name\"\
    ],\n                \"requested_amount\": loan_data[\"requested_amount\"],\n \
    \               \"internal_credit_score\": loan_data[\"internal_credit_score\"\
    ],\n                \"debt_to_income_ratio\": loan_data[\"debt_to_income_ratio\"\
    ],\n                \"final_decision\": loan_data[\"final_decision\"],\n     \
    \           \"approved_amount\": loan_data[\"approved_amount\"],\n           \
    \     \"interest_rate\": loan_data[\"interest_rate\"]\n            },\n      \
    \      \"csv_structure\": {\n                \"columns\": list(df.columns),\n\
    \                \"column_count\": len(df.columns),\n                \"row_count\"\
    : len(df),\n                \"key_metrics\": {\n                    \"credit_score\"\
    : loan_data[\"internal_credit_score\"],\n                    \"cibil_score\":\
    \ loan_data[\"cibil_score\"],\n                    \"risk_category\": loan_data[\"\
    risk_category\"],\n                    \"kyc_compliant\": loan_data[\"kyc_status\"\
    ] == \"compliant\",\n                    \"aml_cleared\": loan_data[\"aml_status\"\
    ] == \"cleared\"\n                }\n            }\n        }\n        \n    \
    \    print(f\"Upload completed successfully\")\n        print(f\"Public URL: {public_url}\"\
    )\n        print(f\"Records uploaded: {len(df)}\")\n        \n    except Exception\
    \ as upload_error:\n        print(f\"Error uploading to Supabase: {upload_error}\"\
    )\n        \n        # Return error result\n        upload_result = {\n      \
    \      \"upload_status\": \"failed\",\n            \"error_message\": str(upload_error),\n\
    \            \"bucket_name\": \"loan-approval\",\n            \"filename\": filename,\n\
    \            \"attempted_upload_timestamp\": datetime.now().isoformat(),\n   \
    \         \"records_prepared\": len(df),\n            \"data_summary\": {\n  \
    \              \"application_id\": loan_data[\"application_id\"],\n          \
    \      \"applicant_name\": loan_data[\"applicant_name\"],\n                \"\
    requested_amount\": loan_data[\"requested_amount\"],\n                \"final_decision\"\
    : loan_data[\"final_decision\"]\n            }\n        }\n\nexcept Exception\
    \ as e:\n    print(f\"Critical error in Supabase upload process: {e}\")\n    \n\
    \    # Return critical error result\n    upload_result = {\n        \"upload_status\"\
    : \"critical_error\",\n        \"error_message\": str(e),\n        \"bucket_name\"\
    : \"loan-approval\",\n        \"error_timestamp\": datetime.now().isoformat(),\n\
    \        \"workflow_id\": \"loan_underwriting_workflow\",\n        \"application_id\"\
    : \"${application_id}\"\n    }\n\nprint(f\"__OUTPUTS__ {json.dumps(upload_result)}\"\
    )\n"
  packages:
  - supabase==2.4.4
  - pandas==2.0.3
  - python-dotenv==1.0.0
  depends_on:
  - generate_final_output
  description: Convert loan underwriting data to CSV and upload to Supabase storage
    bucket
  previous_node: generate_final_output
  timeout_seconds: 120
inputs:
- name: application_id
  type: string
  required: true
  description: Unique loan application identifier
- name: min_credit_score
  type: integer
  default: 650
  required: false
  description: Minimum acceptable credit score
- name: max_debt_to_income_ratio
  type: float
  default: 0.4
  required: false
  description: Maximum debt-to-income ratio allowed
outputs:
  final_decision:
    type: object
    source: final_underwriting_analysis
    description: Final underwriting decision with terms
  compliance_status:
    type: object
    source: compliance_consolidation
    description: Overall compliance assessment
  credit_assessment:
    type: object
    source: calculate_credit_score
    description: Comprehensive credit assessment results
  processing_summary:
    type: object
    source: generate_final_output
    description: Complete workflow processing summary
  supabase_upload_result:
    type: object
    source: upload_to_supabase
    description: Result of CSV upload to Supabase storage bucket
version: '1.0'
metadata:
  author: Financial Services Team
  version: '1.0'
  environment: production
  last_updated: '2024-07-15'
  compliance_certified: true
namespace: financial
description: End-to-end loan underwriting process with document verification, income
  analysis, credit assessment, and compliance checks
model_clients:
- id: underwriting_analyst
  config:
    model: gpt-4o-mini
    api_key: ${OPENAI_API_KEY}  # supply via secrets/environment; raw keys must not be embedded in the workflow
    max_tokens: 2000
    temperature: 0.1
  provider: openai
- id: compliance_reviewer
  config:
    model: gpt-4o-mini
    api_key: ${OPENAI_API_KEY}
    max_tokens: 1500
    temperature: 0.0
  provider: openai
- id: risk_assessor
  config:
    model: gpt-4o-mini
    api_key: ${OPENAI_API_KEY}
    max_tokens: 1000
    temperature: 0.2
  provider: openai
timeout_seconds: 3600
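Note: every script task above reports its results by printing a line that starts with __OUTPUTS__ followed by a JSON payload, and downstream tasks read those results through ${task_id.field} expressions. The engine that implements this contract is not shown on this page; the snippet below is only a minimal sketch of how such a runner could behave, with run_task_script and interpolate as illustrative, made-up helper names.

import json
import re
import subprocess

def run_task_script(task_id, script_path, context):
    # Run one task script and scan its stdout for the __OUTPUTS__ marker line.
    result = subprocess.run(["python", script_path], capture_output=True, text=True, check=True)
    outputs = {}
    for line in result.stdout.splitlines():
        if line.startswith("__OUTPUTS__ "):
            outputs = json.loads(line[len("__OUTPUTS__ "):])  # last marker line wins
    context[task_id] = outputs
    return outputs

def interpolate(template, context):
    # Replace ${a.b.c} references with values pulled from earlier task outputs or workflow inputs.
    def resolve(match):
        node = context
        for part in match.group(1).split("."):
            node = node[part]
        return str(node)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

For example, interpolate("${application_id}", {"application_id": "APP-123"}) returns "APP-123" under these assumptions.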
Execution ID   Status      Started               Duration
654acb81...    COMPLETED   2025-07-16 06:48:38   N/A
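The upload_to_supabase task writes one CSV per run to the loan-approval bucket, named loan_approval_<application_id>_<timestamp>.csv, and reports a public URL in supabase_upload_result. The snippet below is a minimal sketch of how a reviewer might pull such a file back for inspection; it assumes SUPABASE_URL and SUPABASE_KEY are exported in the environment and that the supabase-py 2.x storage list/download helpers are available.

import io
import os
import pandas as pd
from supabase import create_client

# Connect with credentials supplied via the environment, mirroring the upload task.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# List the files written by upload_to_supabase and pick one; the lexicographically
# last name is used here purely as a simple example.
files = supabase.storage.from_("loan-approval").list()
name = sorted(f["name"] for f in files)[-1]

# Download the CSV bytes and load them into a DataFrame for review.
raw = supabase.storage.from_("loan-approval").download(name)
df = pd.read_csv(io.BytesIO(raw))
print(df[["application_id", "final_decision", "internal_credit_score", "debt_to_income_ratio"]].to_string(index=False))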