NEW_job_bot

Built with Python, FastAPI, PostgreSQL, and Google Gemini.

NEW_job_bot is an advanced, AI-powered job application automation system designed to eliminate the manual drudgery of the job hunt. By combining intelligent scraping with state-of-the-art LLMs, it discovers, evaluates, and prepares tailored applications for you.

🚀 Value Proposition

Stop spending hours manually searching for jobs and rewriting the same cover letter. NEW_job_bot allows you to:

  • Automate Discovery: Continuously scrape LinkedIn for new roles matching your criteria.
  • Intelligent Evaluation: Use Google Gemini to instantly assess whether a job is a good fit for your background.
  • Precision Tailoring: Automatically generate a bespoke CV and cover letter for every application, highlighting the skills most relevant to that role.
  • Centralized Control: Manage your entire pipeline from a clean, modern FastAPI dashboard.

🏗️ Architecture Overview

The system is built as a set of modular services coordinated by a central orchestrator.

graph TD
    A[LinkedIn] -- Selenium --> B(Scraper Service)
    B --> C[PostgreSQL Database]
    C <--> D(Orchestrator)
    D <--> E[Google Gemini AI]
    D --> F(Application Flow)
    F -- Jinja2 + WeasyPrint --> G(Tailored PDF CV/CL)
    H[FastAPI Dashboard] <--> C
    H -- Trigger --> D

  • Scraper Service: Uses Selenium to interact with LinkedIn and extract job details.
  • Orchestrator: The "brain" of the system, managing background tasks for discovery and application generation.
  • AI Service: Leverages Gemini (via google-genai) for text extraction, fit analysis, and document tailoring.
  • Persistence: A robust PostgreSQL database using SQLModel (SQLAlchemy + Pydantic) for schema management and data integrity.
  • Web UI: A FastAPI-based dashboard providing real-time status updates and manual triggers for the bot's actions.
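
The orchestrator's role can be sketched with stdlib asyncio alone. This is a minimal, illustrative model of the hand-off between discovery and application generation; the function names (discover_jobs, generate_application), the queue-based coordination, and the job IDs are assumptions for the sketch, not the project's actual API.

```python
import asyncio

async def discover_jobs(queue: asyncio.Queue) -> None:
    """Stand-in scraper: push newly found job IDs onto the queue."""
    for job_id in (101, 102):
        await queue.put(job_id)
    await queue.put(None)  # sentinel: discovery finished

async def generate_application(job_id: int) -> str:
    """Stand-in AI step: return a tailored-document label for the job."""
    await asyncio.sleep(0)  # yield control, as a real API call would
    return f"application-for-{job_id}"

async def orchestrate() -> list[str]:
    """Consume discovered jobs and run the application step for each."""
    queue: asyncio.Queue = asyncio.Queue()
    producer = asyncio.create_task(discover_jobs(queue))
    results = []
    while (job_id := await queue.get()) is not None:
        results.append(await generate_application(job_id))
    await producer
    return results

print(asyncio.run(orchestrate()))
# → ['application-for-101', 'application-for-102']
```

In the real system, the producer would be the Selenium scraper writing to PostgreSQL and the consumer would call Gemini; the queue here only stands in for that database hand-off.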

🛠️ Setup Instructions

Prerequisites

  • Python 3.13+
  • uv (an extremely fast Python package and project manager)
  • PostgreSQL instance
  • Google AI API Key (Gemini)

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd NEW_job_bot
    
  2. Install dependencies:

    uv sync
    
  3. Configure the environment: Copy the example environment file and fill in your credentials:

    cp .env.example .env
    # Edit .env with your Google API Key, Database URL, and LinkedIn credentials
    
  4. Initialize the database: The database tables are automatically created on first run via the FastAPI startup event.
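
According to the README, the project creates its tables via SQLModel against PostgreSQL during FastAPI startup. The stdlib sqlite3 sketch below only illustrates the same "create tables if they don't exist yet" pattern that makes first-run initialization safe to repeat; the table and column names are illustrative assumptions.

```python
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    """Create the schema if it is missing; a no-op if it already exists."""
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS job (
            id INTEGER PRIMARY KEY,
            title TEXT NOT NULL,
            fit_score REAL,
            status TEXT DEFAULT 'discovered'
        )
        """
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
init_db(conn)
init_db(conn)  # safe to call again: IF NOT EXISTS makes it idempotent
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # → ['job']
```

Idempotent initialization is what lets the app run the same startup hook on every boot without clobbering existing data.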


🚦 Usage Instructions

Starting the Dashboard

Launch the FastAPI server to access the web interface and API:

uv run uvicorn src.bot.api.main:app --reload

The dashboard will be available at http://localhost:8000.

Core Features

  • Dashboard: View all scraped jobs, their "fit score," and current application status.
  • Manual Discovery: Trigger a new job crawl from the dashboard or via POST /jobs/discover.
  • Application Flow: Initiate the full AI tailoring process for a specific job with one click.
  • PDF Generation: Download the AI-generated CV and Cover Letter directly from the application details page.

API Entry Points

Method  Path                     Description
GET     /                        Web dashboard
POST    /jobs/discover           Start background job scraping
POST    /jobs/{id}/apply         Start AI-driven application tailoring
GET     /applications/{id}/cv    Download the tailored CV PDF
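
As a sketch, the discovery endpoint can be called from a script with nothing but the stdlib. The base URL matches the local uvicorn default shown above; whether the endpoint expects a JSON body is an assumption. The request is only constructed here, not sent.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_discover_request() -> urllib.request.Request:
    """Build (but do not send) a POST to the job-discovery endpoint."""
    body = json.dumps({}).encode()  # empty JSON body is an assumption
    return urllib.request.Request(
        f"{BASE_URL}/jobs/discover",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_discover_request()
print(req.get_method(), req.full_url)
# → POST http://localhost:8000/jobs/discover
# To actually trigger a crawl (server must be running):
#   with urllib.request.urlopen(req) as resp:
#       print(resp.status)
```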

📄 License

No license specified.