Comprehensive Recruitment Pipeline

End-to-end candidate sourcing, screening, matching, assessment, scheduling, and evaluation workflow

Workflow Information

ID: recruitment_pipeline_v1

Namespace: default

Version: 1.0.0

Created: 2025-07-09

Updated: 2025-07-09

Tasks: 14

Inputs
Name Type Required Default
job_requisition_id string Required None
job_title string Required None
job_description string Required None
required_skills array Required []
experience_years number Required 0
location string Required None
salary_range string Optional None
sourcing_channels array Optional ['linkedin', 'indeed', 'internal_database', 'referrals']
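
For reference, a minimal input payload matching this table might look like the following (values are illustrative; how the payload is submitted depends on how the workflow engine is invoked):

example_inputs = {
    "job_requisition_id": "REQ-2025-0042",                   # required
    "job_title": "Senior Backend Engineer",                  # required
    "job_description": "Own the design and operation of our API platform.",  # required
    "required_skills": ["Python", "AWS", "PostgreSQL"],      # required
    "experience_years": 5,                                    # required
    "location": "Remote",                                     # required
    "salary_range": "$140,000-$170,000",                      # optional
    # optional; defaults to ['linkedin', 'indeed', 'internal_database', 'referrals']
    "sourcing_channels": ["linkedin", "referrals"],
}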
Outputs
Name Type Source Description
hired_candidates string prepare_offer_packages.offer_packages List of candidates who received offers
pipeline_summary string generate_recruitment_analytics Summary of the entire recruitment pipeline
recruitment_metrics string generate_recruitment_analytics.recruitment_analytics Complete recruitment analytics and metrics
Tasks
parse_job_requirements
ai_agent

Parses the job title, description, skills, experience, and location into structured JSON requirements (core technical skills, soft skills, education, domain knowledge, responsibilities, ideal candidate profile).

source_candidates
script

Sources candidates from the configured channels and combines them with the parsed requirements; the current script generates mock candidate data in place of real LinkedIn/Indeed integrations.

screen_candidates
ai_agent

Screens each sourced candidate against the requirements, scoring skills match, experience relevance, location compatibility, and overall fit, with an advance/maybe/reject recommendation.

filter_qualified_candidates
script

Splits candidates into qualified, maybe, and rejected groups from the AI screening results, falling back to a simple rule-based score when no structured results are available; qualified candidates plus the top five maybes advance.

match_and_rank
ai_agent

Performs detailed matching and ranking of the advancing candidates, covering skills, cultural fit, growth potential, salary alignment, availability, and interview focus areas.

generate_assessments
script

Builds personalized technical, behavioral, and domain assessments for the top ten ranked candidates, with unique assessment IDs, links, and a seven-day expiry.

send_assessment_invites
script

Sends assessment invitations (simulated) with links, subject lines, and reminder scheduling.

collect_assessment_results
script

Collects (simulated) assessment completions, computes weighted overall scores, and flags high performers.

prepare_interview_schedule
script

Schedules three interview rounds (technical screen, system design, behavioral) per selected candidate against a small interviewer pool.

send_interview_invites
script

Sends interview invitations with calendar links, confirmation requirements, and deadlines.

collect_interview_feedback
script

Collects per-round interviewer ratings (simulated), averages them per candidate, and derives hire/maybe/no-hire recommendations.

generate_final_recommendations
ai_agent

Produces final hiring recommendations with confidence levels, salary and onboarding guidance, team placement suggestions, and a ranked top three.

prepare_offer_packages
script

Builds offer packages (compensation, start date, conditions, offer letters) for up to three recommended hires.

generate_recruitment_analytics
script

Aggregates funnel, conversion, time, quality, cost, and channel metrics into a recruitment analytics report.
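
All of the script tasks share the same I/O contract, visible throughout the YAML source below: each task reads upstream results from environment variables named <task_id>.output (JSON strings) and publishes its own result by printing one line prefixed with __OUTPUTS__ followed by a JSON payload. A minimal sketch of that contract (the helper names here are illustrative, not part of the workflow engine):

import json
import os


def read_upstream(task_id: str) -> dict:
    # Upstream task results are exposed as JSON strings in environment
    # variables keyed by "<task_id>.output" (see the scripts below).
    return json.loads(os.environ.get(f"{task_id}.output", "{}"))


def publish(result: dict) -> None:
    # Script tasks hand results back by printing a single line of the form
    # "__OUTPUTS__ <json>", which the engine presumably parses from stdout.
    print(f"__OUTPUTS__ {json.dumps(result)}")


if __name__ == "__main__":
    requirements = read_upstream("parse_job_requirements")
    publish({"received_requirements": bool(requirements)})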

YAML Source
id: recruitment_pipeline_v1
name: Comprehensive Recruitment Pipeline
tasks:
- id: parse_job_requirements
  type: ai_agent
  prompt: 'Parse the following job details and extract structured requirements:


    Job Title: ${job_title}

    Description: ${job_description}

    Required Skills: ${required_skills}

    Experience: ${experience_years} years

    Location: ${location}


    Extract and structure:

    1. Core technical skills

    2. Soft skills requirements

    3. Education requirements

    4. Industry/domain knowledge

    5. Key responsibilities

    6. Ideal candidate profile


    Format as JSON for automated processing.

    '
  agent_type: analyst
  model_client_id: gpt4_screener
  timeout_seconds: 60
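# source_candidates reads the parsed requirements from the
# "parse_job_requirements.output" environment variable and, for demonstration,
# fabricates 20 mock candidates spread across the configured sourcing channels.
# The pinned requests/beautifulsoup4 dependencies are only exercised once the
# mock block is replaced with real LinkedIn/Indeed integrations.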
- id: source_candidates
  type: script
  script: "import json\nimport os\nimport requests\nfrom datetime import datetime\n\
    \n# Parse job requirements\nrequirements = json.loads(os.environ.get('parse_job_requirements.output',\
    \ '{}'))\nchannels = json.loads(os.environ.get('sourcing_channels', '[]'))\n\n\
    # Simulate candidate sourcing from various channels\n# In production, this would\
    \ integrate with LinkedIn API, Indeed API, etc.\ncandidates = []\n\n# Mock candidate\
    \ data for demonstration\nmock_candidates = [\n    {\n        \"id\": f\"CAND-{i:04d}\"\
    ,\n        \"name\": f\"Candidate {i}\",\n        \"email\": f\"candidate{i}@example.com\"\
    ,\n        \"phone\": f\"+1-555-{i:04d}\",\n        \"source\": channels[i % len(channels)]\
    \ if channels else \"direct\",\n        \"resume_url\": f\"https://storage.example.com/resumes/candidate_{i}.pdf\"\
    ,\n        \"linkedin_url\": f\"https://linkedin.com/in/candidate{i}\",\n    \
    \    \"years_experience\": 3 + (i % 10),\n        \"skills\": [\"Python\", \"\
    JavaScript\", \"React\", \"Node.js\", \"AWS\"][:(i % 5) + 1],\n        \"education\"\
    : [\"BS Computer Science\", \"MS Software Engineering\", \"BS Information Systems\"\
    ][i % 3],\n        \"current_company\": f\"Company {chr(65 + (i % 5))}\",\n  \
    \      \"current_title\": [\"Software Engineer\", \"Senior Developer\", \"Tech\
    \ Lead\", \"Full Stack Developer\"][i % 4],\n        \"location\": [\"San Francisco,\
    \ CA\", \"New York, NY\", \"Remote\", \"Austin, TX\"][i % 4],\n        \"sourced_at\"\
    : datetime.utcnow().isoformat()\n    }\n    for i in range(1, 21)  # Source 20\
    \ candidates\n]\n\nresult = {\n    \"total_sourced\": len(mock_candidates),\n\
    \    \"by_channel\": {ch: sum(1 for c in mock_candidates if c['source'] == ch)\
    \ for ch in set(c['source'] for c in mock_candidates)},\n    \"candidates\": mock_candidates,\n\
    \    \"requirements\": requirements\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(result)}\"\
    )\n"
  depends_on:
  - parse_job_requirements
  requirements:
  - requests==2.31.0
  - beautifulsoup4==4.12.2
  retry_policy:
    max_attempts: 3
    initial_interval: 5
- id: screen_candidates
  type: ai_agent
  prompt: 'Screen the following candidates against job requirements:


    Requirements: ${parse_job_requirements}

    Candidates: ${source_candidates.candidates}


    For each candidate, evaluate:

    1. Skills match (0-100)

    2. Experience relevance (0-100)

    3. Location compatibility

    4. Overall fit score (0-100)

    5. Red flags or concerns

    6. Strengths and highlights


    Return a JSON array with screening results for each candidate.

    Include recommendation: "advance", "maybe", or "reject"

    '
  agent_type: screener
  depends_on:
  - source_candidates
  model_client_id: gpt4_screener
  timeout_seconds: 120
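# filter_qualified_candidates uses the AI screening results when they are a
# well-formed list; otherwise it falls back to a simple rule-based score
# (experience plus Python/JavaScript skills). Qualified candidates and the top
# five "maybe" candidates advance to matching.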
- id: filter_qualified_candidates
  type: script
  script: "import json\nimport os\n\n# Get screening results\nscreening_data = os.environ.get('screen_candidates.output',\
    \ '{}')\ntry:\n    screening_results = json.loads(screening_data)\n    if isinstance(screening_results,\
    \ dict) and 'screening_results' in screening_results:\n        screening_results\
    \ = screening_results['screening_results']\nexcept:\n    screening_results = []\n\
    \n# Get original candidates\nsource_data = json.loads(os.environ.get('source_candidates.output',\
    \ '{}'))\nall_candidates = source_data.get('candidates', [])\n\n# Filter candidates\
    \ based on screening\nqualified = []\nmaybe_qualified = []\nrejected = []\n\n\
    # Create a simple screening if AI didn't provide detailed results\nif not screening_results\
    \ or not isinstance(screening_results, list):\n    # Simple rule-based screening\n\
    \    for candidate in all_candidates:\n        score = 50  # Base score\n    \
    \    if candidate.get('years_experience', 0) >= 3:\n            score += 20\n\
    \        if any(skill in candidate.get('skills', []) for skill in ['Python', 'JavaScript']):\n\
    \            score += 30\n        \n        if score >= 70:\n            qualified.append({\n\
    \                **candidate,\n                'screening_score': score,\n   \
    \             'recommendation': 'advance'\n            })\n        elif score\
    \ >= 50:\n            maybe_qualified.append({\n                **candidate,\n\
    \                'screening_score': score,\n                'recommendation':\
    \ 'maybe'\n            })\n        else:\n            rejected.append({\n    \
    \            **candidate,\n                'screening_score': score,\n       \
    \         'recommendation': 'reject'\n            })\nelse:\n    # Use AI screening\
    \ results\n    for i, candidate in enumerate(all_candidates):\n        if i <\
    \ len(screening_results):\n            result = screening_results[i]\n       \
    \     candidate['screening_score'] = result.get('overall_fit_score', 50)\n   \
    \         candidate['recommendation'] = result.get('recommendation', 'maybe')\n\
    \            \n            if candidate['recommendation'] == 'advance':\n    \
    \            qualified.append(candidate)\n            elif candidate['recommendation']\
    \ == 'maybe':\n                maybe_qualified.append(candidate)\n           \
    \ else:\n                rejected.append(candidate)\n\noutput = {\n    \"qualified_count\"\
    : len(qualified),\n    \"maybe_count\": len(maybe_qualified),\n    \"rejected_count\"\
    : len(rejected),\n    \"qualified_candidates\": qualified,\n    \"maybe_qualified\"\
    : maybe_qualified,\n    \"screening_complete\": True,\n    \"next_phase_candidates\"\
    : qualified + maybe_qualified[:5]  # Include top 5 maybes\n}\n\nprint(f\"__OUTPUTS__\
    \ {json.dumps(output)}\")\n"
  depends_on:
  - screen_candidates
- id: match_and_rank
  type: ai_agent
  prompt: 'Perform detailed matching and ranking for qualified candidates:


    Job Requirements: ${parse_job_requirements}

    Candidates: ${filter_qualified_candidates.next_phase_candidates}


    For each candidate, provide:

    1. Detailed skills matching analysis

    2. Cultural fit assessment

    3. Growth potential score

    4. Salary expectations alignment

    5. Availability and start date considerations

    6. Overall ranking (1-10)

    7. Specific interview focus areas


    Return as JSON with candidates sorted by rank.

    '
  agent_type: matcher
  depends_on:
  - filter_qualified_candidates
  model_client_id: gpt4_matcher
  timeout_seconds: 120
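# generate_assessments builds a personalized assessment for the top ten matched
# candidates: a technical section when coding skills appear in the requirements,
# a behavioral section for everyone, and a domain case study when domain
# knowledge is required. Assessment IDs are short MD5 hashes and links expire
# after seven days.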
- id: generate_assessments
  type: script
  script: "import json\nimport os\nimport hashlib\nfrom datetime import datetime,\
    \ timedelta\n\n# Get matched candidates\nmatch_data = os.environ.get('match_and_rank.output',\
    \ '{}')\ntry:\n    matched_candidates = json.loads(match_data)\n    if isinstance(matched_candidates,\
    \ dict):\n        matched_candidates = matched_candidates.get('candidates', [])\n\
    except:\n    matched_candidates = []\n\n# Get job requirements\nrequirements =\
    \ json.loads(os.environ.get('parse_job_requirements.output', '{}'))\n\n# Generate\
    \ personalized assessments\nassessments = []\n\nfor candidate in matched_candidates[:10]:\
    \  # Top 10 candidates\n    # Create unique assessment ID\n    assessment_id =\
    \ hashlib.md5(f\"{candidate.get('id', '')}_{datetime.utcnow().isoformat()}\".encode()).hexdigest()[:12]\n\
    \    \n    # Determine assessment types based on role\n    assessment_types =\
    \ []\n    \n    # Technical assessment\n    if any(skill in str(requirements)\
    \ for skill in ['Python', 'JavaScript', 'Java', 'coding']):\n        assessment_types.append({\n\
    \            \"type\": \"technical\",\n            \"duration_minutes\": 90,\n\
    \            \"topics\": [\"algorithms\", \"data structures\", \"system design\"\
    , \"coding\"],\n            \"difficulty\": \"medium\"\n        })\n    \n   \
    \ # Behavioral assessment\n    assessment_types.append({\n        \"type\": \"\
    behavioral\",\n        \"duration_minutes\": 45,\n        \"topics\": [\"teamwork\"\
    , \"problem-solving\", \"communication\", \"leadership\"],\n        \"questions_count\"\
    : 10\n    })\n    \n    # Domain-specific assessment\n    if \"domain_knowledge\"\
    \ in str(requirements):\n        assessment_types.append({\n            \"type\"\
    : \"domain\",\n            \"duration_minutes\": 60,\n            \"topics\":\
    \ [\"industry knowledge\", \"best practices\", \"tools and technologies\"],\n\
    \            \"format\": \"case study\"\n        })\n    \n    assessment = {\n\
    \        \"assessment_id\": assessment_id,\n        \"candidate_id\": candidate.get('id'),\n\
    \        \"candidate_email\": candidate.get('email'),\n        \"assessment_types\"\
    : assessment_types,\n        \"total_duration_minutes\": sum(a['duration_minutes']\
    \ for a in assessment_types),\n        \"created_at\": datetime.utcnow().isoformat(),\n\
    \        \"expires_at\": (datetime.utcnow() + timedelta(days=7)).isoformat(),\n\
    \        \"assessment_link\": f\"https://assessments.example.com/{assessment_id}\"\
    ,\n        \"instructions\": \"Complete all sections within 7 days. Each section\
    \ is timed separately.\"\n    }\n    \n    assessments.append(assessment)\n\n\
    output = {\n    \"assessments_generated\": len(assessments),\n    \"assessments\"\
    : assessments,\n    \"ready_to_send\": True\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(output)}\"\
    )\n"
  depends_on:
  - match_and_rank
- id: send_assessment_invites
  type: script
  script: "import json\nimport os\nfrom datetime import datetime\n\n# Get assessment\
    \ data\nassessment_data = json.loads(os.environ.get('generate_assessments.output',\
    \ '{}'))\nassessments = assessment_data.get('assessments', [])\n\n# Simulate sending\
    \ emails (in production, use email MCP server)\nsent_invites = []\n\nfor assessment\
    \ in assessments:\n    # Create invitation record\n    invite = {\n        \"\
    candidate_id\": assessment['candidate_id'],\n        \"candidate_email\": assessment['candidate_email'],\n\
    \        \"assessment_id\": assessment['assessment_id'],\n        \"assessment_link\"\
    : assessment['assessment_link'],\n        \"sent_at\": datetime.utcnow().isoformat(),\n\
    \        \"email_subject\": f\"Assessment Invitation - {os.environ.get('job_title',\
    \ 'Position')}\",\n        \"email_status\": \"sent\",  # In production, track\
    \ actual delivery\n        \"reminder_scheduled\": (datetime.utcnow().isoformat(),\
    \ \"3_days\")\n    }\n    \n    sent_invites.append(invite)\n\noutput = {\n  \
    \  \"invitations_sent\": len(sent_invites),\n    \"invitations\": sent_invites,\n\
    \    \"next_step\": \"monitor_assessments\"\n}\n\nprint(f\"__OUTPUTS__ {json.dumps(output)}\"\
    )\n"
  depends_on:
  - generate_assessments
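# collect_assessment_results simulates an ~80% completion rate and scores each
# completed assessment; the overall score is a weighted average (40% technical,
# 30% behavioral, 30% domain), and candidates scoring 80+ are tracked as high
# performers. In production this step would poll the assessment platform.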
- id: collect_assessment_results
  type: script
  script: "import json\nimport os\nimport random\nfrom datetime import datetime\n\n\
    # Get invitation data\ninvite_data = json.loads(os.environ.get('send_assessment_invites.output',\
    \ '{}'))\ninvitations = invite_data.get('invitations', [])\n\n# Simulate assessment\
    \ completion (in production, this would poll assessment platform)\nassessment_results\
    \ = []\n\nfor invite in invitations:\n    # Simulate 80% completion rate\n   \
    \ if random.random() < 0.8:\n        # Generate mock scores\n        technical_score\
    \ = random.randint(60, 95) if random.random() < 0.7 else random.randint(40, 59)\n\
    \        behavioral_score = random.randint(65, 98)\n        domain_score = random.randint(55,\
    \ 92)\n        \n        result = {\n            \"candidate_id\": invite['candidate_id'],\n\
    \            \"assessment_id\": invite['assessment_id'],\n            \"completed_at\"\
    : datetime.utcnow().isoformat(),\n            \"scores\": {\n                \"\
    technical\": technical_score,\n                \"behavioral\": behavioral_score,\n\
    \                \"domain\": domain_score,\n                \"overall\": round((technical_score\
    \ * 0.4 + behavioral_score * 0.3 + domain_score * 0.3), 1)\n            },\n \
    \           \"time_taken_minutes\": random.randint(90, 180),\n            \"flagged_issues\"\
    : [] if random.random() < 0.9 else [\"possible_assistance_detected\"],\n     \
    \       \"recommendation\": \"proceed\" if technical_score >= 60 and behavioral_score\
    \ >= 65 else \"review_needed\"\n        }\n        \n        assessment_results.append(result)\n\
    \noutput = {\n    \"total_invited\": len(invitations),\n    \"completed\": len(assessment_results),\n\
    \    \"completion_rate\": round(len(assessment_results) / len(invitations) * 100,\
    \ 1) if invitations else 0,\n    \"results\": assessment_results,\n    \"high_performers\"\
    : [r for r in assessment_results if r['scores']['overall'] >= 80]\n}\n\nprint(f\"\
    __OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - send_assessment_invites
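# prepare_interview_schedule selects high performers plus up to five additional
# candidates scoring 70+, then books three rounds per candidate (technical
# screen, system design, behavioral) from a small interviewer pool, at three
# interviews per day starting three days out.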
- id: prepare_interview_schedule
  type: script
  script: "import json\nimport os\nfrom datetime import datetime, timedelta\nimport\
    \ random\n\n# Get assessment results\nresults_data = json.loads(os.environ.get('collect_assessment_results.output',\
    \ '{}'))\nhigh_performers = results_data.get('high_performers', [])\nall_results\
    \ = results_data.get('results', [])\n\n# Select candidates for interviews\ninterview_candidates\
    \ = high_performers + [r for r in all_results if r not in high_performers and\
    \ r['scores']['overall'] >= 70][:5]\n\n# Generate interview slots\ninterview_schedules\
    \ = []\nstart_date = datetime.utcnow() + timedelta(days=3)  # Start interviews\
    \ in 3 days\n\n# Define interviewer pool\ninterviewers = [\n    {\"id\": \"INT001\"\
    , \"name\": \"Sarah Chen\", \"role\": \"Engineering Manager\", \"expertise\":\
    \ [\"technical\", \"leadership\"]},\n    {\"id\": \"INT002\", \"name\": \"Michael\
    \ Roberts\", \"role\": \"Senior Engineer\", \"expertise\": [\"technical\", \"\
    system design\"]},\n    {\"id\": \"INT003\", \"name\": \"Emily Johnson\", \"role\"\
    : \"HR Director\", \"expertise\": [\"behavioral\", \"culture fit\"]},\n    {\"\
    id\": \"INT004\", \"name\": \"David Kim\", \"role\": \"Tech Lead\", \"expertise\"\
    : [\"technical\", \"team dynamics\"]}\n]\n\nfor idx, candidate in enumerate(interview_candidates):\n\
    \    # Schedule multiple rounds\n    rounds = []\n    current_date = start_date\
    \ + timedelta(days=idx // 3)  # 3 interviews per day\n    \n    # Round 1: Technical\
    \ Screen (45 min)\n    rounds.append({\n        \"round\": 1,\n        \"type\"\
    : \"technical_screen\",\n        \"date\": current_date.strftime(\"%Y-%m-%d\"\
    ),\n        \"time\": f\"{9 + (idx % 3) * 2}:00\",\n        \"duration_minutes\"\
    : 45,\n        \"interviewer\": random.choice([i for i in interviewers if \"technical\"\
    \ in i[\"expertise\"]]),\n        \"format\": \"video_call\",\n        \"focus_areas\"\
    : [\"coding\", \"problem solving\", \"technical knowledge\"]\n    })\n    \n \
    \   # Round 2: System Design (60 min)\n    rounds.append({\n        \"round\"\
    : 2,\n        \"type\": \"system_design\",\n        \"date\": (current_date +\
    \ timedelta(days=2)).strftime(\"%Y-%m-%d\"),\n        \"time\": f\"{10 + (idx\
    \ % 3) * 2}:00\",\n        \"duration_minutes\": 60,\n        \"interviewer\"\
    : random.choice([i for i in interviewers if \"system design\" in str(i[\"expertise\"\
    ])]),\n        \"format\": \"video_call\",\n        \"focus_areas\": [\"architecture\"\
    , \"scalability\", \"trade-offs\"]\n    })\n    \n    # Round 3: Behavioral/Culture\
    \ Fit (45 min)\n    rounds.append({\n        \"round\": 3,\n        \"type\":\
    \ \"behavioral\",\n        \"date\": (current_date + timedelta(days=4)).strftime(\"\
    %Y-%m-%d\"),\n        \"time\": f\"{14 + (idx % 3)}:00\",\n        \"duration_minutes\"\
    : 45,\n        \"interviewer\": random.choice([i for i in interviewers if \"behavioral\"\
    \ in str(i[\"expertise\"])]),\n        \"format\": \"video_call\",\n        \"\
    focus_areas\": [\"teamwork\", \"communication\", \"culture fit\", \"career goals\"\
    ]\n    })\n    \n    schedule = {\n        \"candidate_id\": candidate['candidate_id'],\n\
    \        \"assessment_score\": candidate['scores']['overall'],\n        \"interview_rounds\"\
    : rounds,\n        \"total_rounds\": len(rounds),\n        \"calendar_link\":\
    \ f\"https://calendar.example.com/schedule/{candidate['candidate_id']}\",\n  \
    \      \"status\": \"scheduled\",\n        \"special_accommodations\": None\n\
    \    }\n    \n    interview_schedules.append(schedule)\n\noutput = {\n    \"interviews_scheduled\"\
    : len(interview_schedules),\n    \"schedules\": interview_schedules,\n    \"earliest_interview\"\
    : start_date.strftime(\"%Y-%m-%d\"),\n    \"interview_period_days\": 7\n}\n\n\
    print(f\"__OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - collect_assessment_results
- id: send_interview_invites
  type: script
  script: "import json\nimport os\nfrom datetime import datetime\n\n# Get schedule\
    \ data\nschedule_data = json.loads(os.environ.get('prepare_interview_schedule.output',\
    \ '{}'))\nschedules = schedule_data.get('schedules', [])\n\n# Get candidate data\
    \ for email addresses\ncandidates_data = json.loads(os.environ.get('filter_qualified_candidates.output',\
    \ '{}'))\nall_candidates = candidates_data.get('qualified_candidates', []) + candidates_data.get('maybe_qualified',\
    \ [])\ncandidate_map = {c['id']: c for c in all_candidates}\n\n# Send interview\
    \ invitations\nsent_invites = []\n\nfor schedule in schedules:\n    candidate\
    \ = candidate_map.get(schedule['candidate_id'], {})\n    \n    invite = {\n  \
    \      \"candidate_id\": schedule['candidate_id'],\n        \"candidate_email\"\
    : candidate.get('email', 'unknown@example.com'),\n        \"candidate_name\":\
    \ candidate.get('name', 'Candidate'),\n        \"interview_rounds\": schedule['interview_rounds'],\n\
    \        \"calendar_link\": schedule['calendar_link'],\n        \"sent_at\": datetime.utcnow().isoformat(),\n\
    \        \"email_subject\": f\"Interview Invitation - {os.environ.get('job_title',\
    \ 'Position')}\",\n        \"confirmation_required\": True,\n        \"confirmation_deadline\"\
    : schedule['interview_rounds'][0]['date']\n    }\n    \n    sent_invites.append(invite)\n\
    \noutput = {\n    \"invitations_sent\": len(sent_invites),\n    \"interview_invitations\"\
    : sent_invites,\n    \"awaiting_confirmation\": len(sent_invites)\n}\n\nprint(f\"\
    __OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - prepare_interview_schedule
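# collect_interview_feedback simulates per-round interviewer ratings, averages
# them per candidate, and derives a final recommendation: "hire" when the first
# two rounds are both yes/strong_yes, "maybe" when any round is positive, and
# "no_hire" otherwise.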
- id: collect_interview_feedback
  type: script
  script: "import json\nimport os\nimport random\nfrom datetime import datetime\n\n\
    # Get interview schedule\ninvite_data = json.loads(os.environ.get('send_interview_invites.output',\
    \ '{}'))\ninvitations = invite_data.get('interview_invitations', [])\n\n# Simulate\
    \ interview feedback collection\ninterview_feedback = []\n\nfor invite in invitations:\n\
    \    candidate_feedback = {\n        \"candidate_id\": invite['candidate_id'],\n\
    \        \"candidate_name\": invite['candidate_name'],\n        \"rounds_feedback\"\
    : []\n    }\n    \n    for round_info in invite['interview_rounds']:\n       \
    \ # Generate realistic feedback\n        technical_rating = random.randint(3,\
    \ 5) if random.random() < 0.7 else random.randint(2, 3)\n        communication_rating\
    \ = random.randint(3, 5)\n        problem_solving_rating = random.randint(3, 5)\
    \ if random.random() < 0.6 else random.randint(2, 3)\n        culture_fit_rating\
    \ = random.randint(3, 5)\n        \n        feedback = {\n            \"round\"\
    : round_info['round'],\n            \"type\": round_info['type'],\n          \
    \  \"interviewer\": round_info['interviewer']['name'],\n            \"date\":\
    \ round_info['date'],\n            \"ratings\": {\n                \"technical_skills\"\
    : technical_rating,\n                \"communication\": communication_rating,\n\
    \                \"problem_solving\": problem_solving_rating,\n              \
    \  \"culture_fit\": culture_fit_rating,\n                \"overall\": round((technical_rating\
    \ + communication_rating + problem_solving_rating + culture_fit_rating) / 4, 1)\n\
    \            },\n            \"strengths\": [\n                \"Strong technical\
    \ foundation\",\n                \"Excellent communication skills\",\n       \
    \         \"Good problem-solving approach\"\n            ][:random.randint(1,\
    \ 3)],\n            \"concerns\": [] if random.random() < 0.7 else [\"Needs more\
    \ experience with cloud platforms\"],\n            \"recommendation\": \"strong_yes\"\
    \ if technical_rating >= 4 and culture_fit_rating >= 4 else (\n              \
    \  \"yes\" if technical_rating >= 3 and culture_fit_rating >= 3 else \"no\"\n\
    \            ),\n            \"notes\": \"Candidate showed good understanding\
    \ of core concepts and communicated ideas clearly.\"\n        }\n        \n  \
    \      candidate_feedback[\"rounds_feedback\"].append(feedback)\n    \n    # Calculate\
    \ aggregate scores\n    all_ratings = [r[\"ratings\"][\"overall\"] for r in candidate_feedback[\"\
    rounds_feedback\"]]\n    candidate_feedback[\"average_rating\"] = round(sum(all_ratings)\
    \ / len(all_ratings), 2)\n    candidate_feedback[\"final_recommendation\"] = (\n\
    \        \"hire\" if all(r[\"recommendation\"] in [\"strong_yes\", \"yes\"] for\
    \ r in candidate_feedback[\"rounds_feedback\"][:2])\n        else \"maybe\" if\
    \ any(r[\"recommendation\"] in [\"strong_yes\", \"yes\"] for r in candidate_feedback[\"\
    rounds_feedback\"])\n        else \"no_hire\"\n    )\n    \n    interview_feedback.append(candidate_feedback)\n\
    \noutput = {\n    \"feedback_collected\": len(interview_feedback),\n    \"interview_feedback\"\
    : interview_feedback,\n    \"recommended_hires\": [f for f in interview_feedback\
    \ if f[\"final_recommendation\"] == \"hire\"],\n    \"maybe_hires\": [f for f\
    \ in interview_feedback if f[\"final_recommendation\"] == \"maybe\"]\n}\n\nprint(f\"\
    __OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - send_interview_invites
- id: generate_final_recommendations
  type: ai_agent
  prompt: 'Generate final hiring recommendations based on complete candidate data:


    Interview Feedback: ${collect_interview_feedback}

    Assessment Results: ${collect_assessment_results}

    Job Requirements: ${parse_job_requirements}


    For each recommended candidate, provide:

    1. Overall hiring recommendation with confidence level

    2. Salary recommendation based on market and candidate level

    3. Start date considerations

    4. Onboarding recommendations

    5. Team placement suggestions

    6. Growth potential analysis

    7. Any conditions or concerns to address


    Rank candidates and provide top 3 recommendations with detailed justification.

    '
  agent_type: evaluator
  depends_on:
  - collect_interview_feedback
  - collect_assessment_results
  model_client_id: gpt4_evaluator
  timeout_seconds: 120
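# prepare_offer_packages builds offers for up to three recommended hires. The
# AI recommendations are parsed defensively (JSON extracted via regex if the
# agent returned prose), base salary steps down $5,000 per rank, the signing
# bonus scales with rank, and offers expire seven days after preparation.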
- id: prepare_offer_packages
  type: script
  script: "import json\nimport os\nfrom datetime import datetime, timedelta\n\n# Get\
    \ recommendations\nrecommendations = os.environ.get('generate_final_recommendations.output',\
    \ '{}')\ntry:\n    final_recs = json.loads(recommendations)\n    if isinstance(final_recs,\
    \ str):\n        # If it's a string, try to extract JSON from it\n        import\
    \ re\n        json_match = re.search(r'\\{.*\\}', final_recs, re.DOTALL)\n   \
    \     if json_match:\n            final_recs = json.loads(json_match.group())\n\
    \        else:\n            final_recs = {\"top_candidates\": []}\nexcept:\n \
    \   final_recs = {\"top_candidates\": []}\n\n# Get feedback data\nfeedback_data\
    \ = json.loads(os.environ.get('collect_interview_feedback.output', '{}'))\nrecommended_hires\
    \ = feedback_data.get('recommended_hires', [])\n\n# Prepare offer packages for\
    \ top candidates\noffer_packages = []\n\nfor idx, candidate in enumerate(recommended_hires[:3]):\
    \  # Top 3 candidates\n    # Calculate offer details\n    base_salary = 120000\
    \ + (idx * -5000)  # Decreasing by rank\n    signing_bonus = 15000 if idx == 0\
    \ else (10000 if idx == 1 else 5000)\n    \n    offer = {\n        \"candidate_id\"\
    : candidate['candidate_id'],\n        \"candidate_name\": candidate['candidate_name'],\n\
    \        \"offer_id\": f\"OFFER-{datetime.utcnow().strftime('%Y%m%d')}-{idx+1:03d}\"\
    ,\n        \"position\": os.environ.get('job_title', 'Software Engineer'),\n \
    \       \"level\": \"Senior\" if candidate['average_rating'] >= 4.5 else \"Mid-Level\"\
    ,\n        \"compensation\": {\n            \"base_salary\": base_salary,\n  \
    \          \"signing_bonus\": signing_bonus,\n            \"equity\": \"Stock\
    \ options per company policy\",\n            \"benefits\": \"Full medical, dental,\
    \ vision, 401k matching\"\n        },\n        \"start_date\": (datetime.utcnow()\
    \ + timedelta(days=30)).strftime(\"%Y-%m-%d\"),\n        \"offer_expiry\": (datetime.utcnow()\
    \ + timedelta(days=7)).strftime(\"%Y-%m-%d\"),\n        \"conditions\": [\n  \
    \          \"Background check clearance\",\n            \"Reference verification\"\
    ,\n            \"Proof of eligibility to work\"\n        ],\n        \"offer_letter_url\"\
    : f\"https://offers.example.com/{candidate['candidate_id']}/offer.pdf\",\n   \
    \     \"status\": \"prepared\",\n        \"prepared_at\": datetime.utcnow().isoformat()\n\
    \    }\n    \n    offer_packages.append(offer)\n\noutput = {\n    \"offers_prepared\"\
    : len(offer_packages),\n    \"offer_packages\": offer_packages,\n    \"ready_to_send\"\
    : True,\n    \"total_compensation_budget\": sum(o['compensation']['base_salary']\
    \ + o['compensation']['signing_bonus'] for o in offer_packages)\n}\n\nprint(f\"\
    __OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - generate_final_recommendations
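# generate_recruitment_analytics aggregates the full funnel (sourced, screened,
# assessed, interviewed, offered), stage-to-stage conversion rates, simulated
# time metrics, quality averages, and compensation totals into a single report.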
- id: generate_recruitment_analytics
  type: script
  script: "import json\nimport os\nfrom datetime import datetime\n\n# Gather all data\n\
    sourcing_data = json.loads(os.environ.get('source_candidates.output', '{}'))\n\
    screening_data = json.loads(os.environ.get('filter_qualified_candidates.output',\
    \ '{}'))\nassessment_data = json.loads(os.environ.get('collect_assessment_results.output',\
    \ '{}'))\ninterview_data = json.loads(os.environ.get('collect_interview_feedback.output',\
    \ '{}'))\noffer_data = json.loads(os.environ.get('prepare_offer_packages.output',\
    \ '{}'))\n\n# Calculate funnel metrics\nanalytics = {\n    \"recruitment_funnel\"\
    : {\n        \"sourced\": sourcing_data.get('total_sourced', 0),\n        \"screened\"\
    : screening_data.get('qualified_count', 0) + screening_data.get('maybe_count',\
    \ 0),\n        \"assessed\": assessment_data.get('completed', 0),\n        \"\
    interviewed\": interview_data.get('feedback_collected', 0),\n        \"offered\"\
    : offer_data.get('offers_prepared', 0)\n    },\n    \"conversion_rates\": {\n\
    \        \"source_to_screen\": round((screening_data.get('qualified_count', 0)\
    \ + screening_data.get('maybe_count', 0)) / max(sourcing_data.get('total_sourced',\
    \ 1), 1) * 100, 1),\n        \"screen_to_assess\": round(assessment_data.get('completed',\
    \ 0) / max(screening_data.get('qualified_count', 1), 1) * 100, 1),\n        \"\
    assess_to_interview\": round(interview_data.get('feedback_collected', 0) / max(assessment_data.get('completed',\
    \ 1), 1) * 100, 1),\n        \"interview_to_offer\": round(offer_data.get('offers_prepared',\
    \ 0) / max(interview_data.get('feedback_collected', 1), 1) * 100, 1),\n      \
    \  \"overall\": round(offer_data.get('offers_prepared', 0) / max(sourcing_data.get('total_sourced',\
    \ 1), 1) * 100, 1)\n    },\n    \"time_metrics\": {\n        \"total_pipeline_days\"\
    : 14,  # Simulated\n        \"sourcing_days\": 2,\n        \"screening_days\"\
    : 1,\n        \"assessment_days\": 5,\n        \"interview_days\": 5,\n      \
    \  \"offer_days\": 1\n    },\n    \"quality_metrics\": {\n        \"avg_assessment_score\"\
    : round(sum(r['scores']['overall'] for r in assessment_data.get('results', []))\
    \ / max(len(assessment_data.get('results', [])), 1), 1) if assessment_data.get('results')\
    \ else 0,\n        \"avg_interview_rating\": round(sum(f['average_rating'] for\
    \ f in interview_data.get('interview_feedback', [])) / max(len(interview_data.get('interview_feedback',\
    \ [])), 1), 1) if interview_data.get('interview_feedback') else 0,\n        \"\
    high_performer_ratio\": round(len(assessment_data.get('high_performers', []))\
    \ / max(assessment_data.get('completed', 1), 1) * 100, 1)\n    },\n    \"cost_metrics\"\
    : {\n        \"total_compensation_budget\": offer_data.get('total_compensation_budget',\
    \ 0),\n        \"cost_per_hire\": round(offer_data.get('total_compensation_budget',\
    \ 0) / max(offer_data.get('offers_prepared', 1), 1), 0),\n        \"sourcing_channels_roi\"\
    : sourcing_data.get('by_channel', {})\n    },\n    \"diversity_metrics\": {\n\
    \        \"note\": \"Diversity data should be collected separately in compliance\
    \ with local regulations\"\n    },\n    \"recommendations\": [\n        \"Continue\
    \ using current sourcing channels with good conversion rates\",\n        \"Consider\
    \ automating initial screening to reduce time-to-hire\",\n        \"Assessment\
    \ completion rate is good - maintain current process\",\n        \"Interview-to-offer\
    \ ratio suggests good candidate quality\"\n    ]\n}\n\noutput = {\n    \"analytics_generated\"\
    : True,\n    \"report_generated_at\": datetime.utcnow().isoformat(),\n    \"recruitment_analytics\"\
    : analytics,\n    \"dashboard_url\": \"https://analytics.example.com/recruitment/dashboard\"\
    \n}\n\nprint(f\"__OUTPUTS__ {json.dumps(output)}\")\n"
  depends_on:
  - prepare_offer_packages
  - collect_interview_feedback
  - filter_qualified_candidates
inputs:
- name: job_requisition_id
  type: string
  required: true
  description: Unique identifier for the job requisition
- name: job_title
  type: string
  required: true
  description: Title of the position
- name: job_description
  type: string
  required: true
  description: Full job description with requirements
- name: required_skills
  type: array
  default: []
  required: true
  description: List of required skills and qualifications
- name: experience_years
  type: number
  default: 0
  required: true
  description: Minimum years of experience required
- name: location
  type: string
  required: true
  description: Job location or remote status
- name: salary_range
  type: string
  required: false
  description: Salary range for the position
- name: sourcing_channels
  type: array
  default:
  - linkedin
  - indeed
  - internal_database
  - referrals
  description: Channels to source candidates from
outputs:
  hired_candidates:
    source: prepare_offer_packages.offer_packages
    description: List of candidates who received offers
  pipeline_summary:
    source: generate_recruitment_analytics
    description: Summary of the entire recruitment pipeline
  recruitment_metrics:
    source: generate_recruitment_analytics.recruitment_analytics
    description: Complete recruitment analytics and metrics
version: 1.0.0
description: End-to-end candidate sourcing, screening, matching, assessment, scheduling,
  and evaluation workflow
model_clients:
- id: gpt4_screener
  config:
    model: gpt-4o-mini
    api_key: ${env.OPENAI_API_KEY}
    max_tokens: 2000
    temperature: 0.3
  provider: openai
- id: gpt4_matcher
  config:
    model: gpt-4o-mini
    api_key: ${env.OPENAI_API_KEY}
    max_tokens: 1500
    temperature: 0.5
  provider: openai
- id: gpt4_evaluator
  config:
    model: gpt-4o-mini
    api_key: ${env.OPENAI_API_KEY}
    max_tokens: 3000
    temperature: 0.2
  provider: openai
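
The same environment-variable contract makes it possible to smoke-test a script task outside the engine. A rough sketch for source_candidates, assuming its script body has been saved to a local source_candidates.py (a hypothetical file; the engine's own handling of the __OUTPUTS__ line may differ):

import json
import os
import subprocess
import sys

# Provide the upstream values the source_candidates script reads from its environment.
env = dict(os.environ)
env["parse_job_requirements.output"] = json.dumps({"core_skills": ["Python", "AWS"]})
env["sourcing_channels"] = json.dumps(["linkedin", "referrals"])

# Run the extracted task body and capture its stdout.
proc = subprocess.run(
    [sys.executable, "source_candidates.py"],
    env=env,
    capture_output=True,
    text=True,
    check=True,
)

# The task's result is the JSON payload that follows the __OUTPUTS__ marker.
for line in proc.stdout.splitlines():
    if line.startswith("__OUTPUTS__"):
        result = json.loads(line[len("__OUTPUTS__"):])
        print(result["total_sourced"], result["by_channel"])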
