# NEW_job_bot
NEW_job_bot is an advanced, AI-powered job application automation system designed to eliminate the manual drudgery of the job hunt. By combining intelligent scraping with state-of-the-art LLMs, it discovers, evaluates, and prepares tailored applications for you.
## 🚀 Value Proposition
Stop spending hours manually searching for jobs and rewriting the same cover letter. NEW_job_bot allows you to:
- Automate Discovery: Continuously scrape LinkedIn for new roles matching your criteria.
- Intelligent Evaluation: Use Google Gemini to instantly determine if a job is a good fit for your background.
- Precision Tailoring: Automatically generate a bespoke CV and Cover Letter for every application, specifically highlighting the most relevant skills for that specific role.
- Centralized Control: Manage your entire pipeline from a clean, modern FastAPI dashboard.
## 🏗️ Architecture Overview
The system is built as a set of modular services coordinated by a central orchestrator.
```mermaid
graph TD
    A[LinkedIn] -- Selenium --> B(Scraper Service)
    B --> C[PostgreSQL Database]
    C <--> D(Orchestrator)
    D <--> E[Google Gemini AI]
    D --> F(Application Flow)
    F -- Jinja2 + WeasyPrint --> G(Tailored PDF CV/CL)
    H[FastAPI Dashboard] <--> C
    H -- Trigger --> D
```
- Scraper Service: Uses Selenium to interact with LinkedIn and extract job details.
- Orchestrator: The "brain" of the system, managing background tasks for discovery and application generation.
- AI Service: Leverages Gemini (via `google-genai`) for text extraction, fit analysis, and document tailoring.
- Persistence: A robust PostgreSQL database using `SQLModel` (SQLAlchemy + Pydantic) for schema management and data integrity.
- Web UI: A FastAPI-based dashboard providing real-time status updates and manual triggers for the bot's actions.
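The orchestrator's role of coordinating background discovery and evaluation tasks can be sketched with a small concurrent dispatch loop. This is a minimal illustration using stdlib `asyncio` only; the names `evaluate_fit` and `run_orchestrator`, and the keyword-based scoring, are assumptions for the sketch, not the repo's actual API (the real system calls Gemini here).

```python
import asyncio

async def evaluate_fit(job: dict) -> dict:
    """Placeholder for the Gemini fit analysis (hypothetical name).

    The real AI service would send the job text to the model; here we
    fake a score so the dispatch pattern is runnable on its own.
    """
    await asyncio.sleep(0)  # stand-in for the async AI call
    job["fit_score"] = 0.8 if "python" in job["title"].lower() else 0.3
    return job

async def run_orchestrator(pending_jobs: list[dict]) -> list[dict]:
    # Fan all pending jobs out to concurrent evaluation tasks and
    # collect the results in the original order.
    return await asyncio.gather(*(evaluate_fit(j) for j in pending_jobs))

jobs = [{"title": "Senior Python Engineer"}, {"title": "Graphic Designer"}]
results = asyncio.run(run_orchestrator(jobs))
```

The same fan-out pattern extends naturally to the application-generation flow, where each selected job gets its own tailoring task.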
## 🛠️ Setup Instructions

### Prerequisites
- Python 3.13+
- uv (the extremely fast Python package manager)
- PostgreSQL instance
- Google AI API Key (Gemini)
### Installation
1. Clone the repository:

   ```shell
   git clone <repository-url>
   cd NEW_job_bot
   ```

2. Install dependencies:

   ```shell
   uv sync
   ```

3. Configure the environment. Copy the example environment file and fill in your credentials:

   ```shell
   cp .env.example .env
   # Edit .env with your Google API Key, Database URL, and LinkedIn credentials
   ```

4. Initialize the database. The database tables are created automatically on first run via the FastAPI startup event.
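For step 3, the bot presumably reads its credentials from the `.env` file at startup. A minimal sketch of that loading step, using only the standard library: the variable names (`GOOGLE_API_KEY`, `DATABASE_URL`) are assumptions here; check `.env.example` in the repo for the names actually expected.

```python
import os

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse KEY=VALUE lines from a .env-style file (sketch, not the repo's loader)."""
    values: dict[str, str] = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks and comments
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env file; fall back to the process environment
    return values

# Process environment takes precedence over the file, a common convention.
config = {**load_env(), **os.environ}
```

In practice the project may use a library such as `python-dotenv` or Pydantic settings for this; the sketch only shows what the configuration step accomplishes.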
## 🚦 Usage Instructions

### Starting the Dashboard
Launch the FastAPI server to access the web interface and API:
```shell
uv run uvicorn src.bot.api.main:app --reload
```
The dashboard will be available at http://localhost:8000.
### Core Features
- Dashboard: View all scraped jobs, their "fit score," and current application status.
- Manual Discovery: Trigger a new job crawl from the dashboard or via `POST /jobs/discover`.
- Application Flow: Initiate the full AI tailoring process for a specific job with one click.
- PDF Generation: Download the AI-generated CV and Cover Letter directly from the application details page.
### API Entry Points
| Method | Path | Description |
|---|---|---|
| GET | `/` | Web Dashboard |
| POST | `/jobs/discover` | Start background job scraping |
| POST | `/jobs/{id}/apply` | Start AI-driven application tailoring |
| GET | `/applications/{id}/cv` | Download the tailored CV PDF |
## 📄 License
No license specified.