Merge db-migrations: Add Flask-Migrate support and clean up old migration system

This commit is contained in:
2025-07-13 12:17:20 +02:00
parent 7140aeba41
commit 1500b2cf88
65 changed files with 2153 additions and 7881 deletions

205
DEBUGGING_MIGRATIONS.md Normal file
View File

@@ -0,0 +1,205 @@
# Debugging Migration Issues in Docker
## Quick Solutions
### Container Exits Immediately
Use one of these approaches:
1. **Debug Mode (Recommended)**
```bash
docker-compose down
DEBUG_MODE=true docker-compose up
```
2. **Skip Migrations Temporarily**
```bash
docker-compose down
SKIP_MIGRATIONS=true docker-compose up
```
3. **Use Debug Compose File**
```bash
docker-compose -f docker-compose.debug.yml up
docker exec -it timetrack_web_1 bash
```
## Debug Entrypoint
The `debug_entrypoint.sh` script keeps the container running and prints diagnostic info:
```bash
# In docker-compose.yml, change:
command: ["./startup_postgres.sh"]
# To:
entrypoint: ["./debug_entrypoint.sh"]
# Then:
docker-compose up -d
docker exec -it <container_name> bash
```
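The script itself isn't reproduced in this guide; a minimal sketch of what such an entrypoint can look like (assuming the same `DATABASE_URL`/`FLASK_APP` variables used elsewhere in this guide):
```bash
#!/bin/bash
# Minimal sketch of a debug entrypoint - not the literal debug_entrypoint.sh
echo "=== Debug entrypoint: keeping container alive ==="
echo "DATABASE_URL=${DATABASE_URL:-<not set>}"
echo "FLASK_APP=${FLASK_APP:-<not set>}"
flask db current || echo "flask db current failed (see error above)"
# Keep the container running so you can 'docker exec' into it
tail -f /dev/null
```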
## Safe Startup Script
`startup_postgres_safe.sh` has three modes:
1. **Normal Mode**: Exits on migration failure (default)
2. **Debug Mode**: Continues running even if migrations fail
```bash
DEBUG_MODE=true docker-compose up
```
3. **Skip Mode**: Skips migrations entirely
```bash
SKIP_MIGRATIONS=true docker-compose up
```
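The exact contents of `startup_postgres_safe.sh` aren't shown here, but the three modes amount to a small branch around `flask db upgrade`; a rough sketch:
```bash
#!/bin/bash
# Sketch of the three modes above - not the literal startup_postgres_safe.sh
if [ "${SKIP_MIGRATIONS}" = "true" ]; then
    echo "Skipping migrations (SKIP_MIGRATIONS=true)"
elif ! flask db upgrade; then
    if [ "${DEBUG_MODE}" = "true" ]; then
        echo "Migrations failed - continuing anyway (DEBUG_MODE=true)"
    else
        echo "Migrations failed - exiting"
        exit 1
    fi
fi
exec python app.py  # assumed app start command; adjust to your deployment
```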
## Common Debugging Steps
### 1. Get Into the Container
```bash
# If container keeps exiting, use debug compose:
docker-compose -f docker-compose.debug.yml up -d web
docker exec -it timetrack_web_1 bash
# Or modify your docker-compose.yml:
# Add: stdin_open: true
# Add: tty: true
# Change: entrypoint: ["/bin/bash"]
```
### 2. Manual Migration Setup
```bash
# Inside container:
export FLASK_APP=app.py
# Check what's wrong
python diagnose_migrations.py
# Initialize migrations
python docker_migrate_init.py
# Fix revision issues
python fix_revision_mismatch.py
```
### 3. Database Connection Issues
```bash
# Test connection
python -c "from app import app, db; app.app_context().push(); db.engine.execute('SELECT 1')"
# Check environment
echo $DATABASE_URL
echo $POSTGRES_HOST
```
### 4. Reset Everything
```bash
# Inside container:
rm -rf migrations
python docker_migrate_init.py
flask db stamp head # For existing DB
flask db upgrade # For new DB
```
## Docker Compose Examples
### Development with Auto-Restart
```yaml
services:
  web:
    environment:
      - DEBUG_MODE=true
    restart: unless-stopped  # Auto-restart on failure
```
### Interactive Debugging
```yaml
services:
  web:
    entrypoint: ["/app/debug_entrypoint.sh"]
    stdin_open: true
    tty: true
```
### Skip Migrations for Testing
```yaml
services:
  web:
    environment:
      - SKIP_MIGRATIONS=true
```
## Environment Variables
- `DEBUG_MODE=true` - Continue running even if migrations fail
- `SKIP_MIGRATIONS=true` - Skip all migration steps
- `FLASK_APP=app.py` - Required for Flask-Migrate
- `DATABASE_URL` - PostgreSQL connection string
## Step-by-Step Troubleshooting
1. **Container won't start?**
```bash
# Use debug compose
docker-compose -f docker-compose.debug.yml up
```
2. **Migration fails?**
```bash
# Get into container
docker exec -it <container> bash
# Run diagnostics
python diagnose_migrations.py
```
3. **Revision mismatch?**
```bash
# Quick fix
./quick_fix_revision.sh
# Or manual fix
flask db stamp <revision>
```
4. **Can't initialize migrations?**
```bash
# Check database connection first
python -c "from app import app; print(app.config['SQLALCHEMY_DATABASE_URI'])"
# Then initialize
python docker_migrate_init.py
```
## Tips
1. **Always use volumes** for migrations directory in development
2. **Check logs carefully** - the error is usually clear
3. **Don't generate migrations in production containers** - ship pre-tested migration files in the image
4. **Use DEBUG_MODE** during development for easier troubleshooting
5. **Test locally first** before deploying to production
## Recovery Commands
If everything is broken:
```bash
# 1. Start with debug entrypoint
docker-compose -f docker-compose.debug.yml up -d web
# 2. Get into container
docker exec -it timetrack_web_1 bash
# 3. Reset migrations
rm -rf migrations
python docker_migrate_init.py
# 4. Mark as current (existing DB) or create tables (new DB)
flask db stamp head # Existing
flask db upgrade # New
# 5. Test the app
python app.py # Run in debug mode
# 6. If working, update docker-compose.yml and restart normally
```

189
DOCKER_MIGRATIONS_GUIDE.md Normal file
View File

@@ -0,0 +1,189 @@
# Flask-Migrate in Docker Deployments
## Overview
Docker containers typically don't include Git repositories, so we can't use Git commands to extract historical schemas. This guide explains how to use Flask-Migrate in Docker environments.
## Initial Setup (First Deployment)
When deploying with Flask-Migrate for the first time:
### Automatic Setup (via startup scripts)
The `startup.sh` and `startup_postgres.sh` scripts now automatically handle migration initialization:
1. **For existing databases with data:**
- Creates a baseline migration from current models
- Stamps the database as current (no changes applied)
- Ready for future migrations
2. **For empty databases:**
- Creates a baseline migration from current models
- Applies it to create all tables
- Ready for future migrations
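A simplified sketch of that branching (the real startup scripts may differ in detail):
```bash
# Simplified sketch - not the literal startup.sh / startup_postgres.sh
export FLASK_APP=app.py
if [ ! -d migrations ]; then
    python docker_migrate_init.py   # create the baseline migration
    # Existing database with data:
    flask db stamp head             # mark the baseline as applied, change nothing
    # Empty database (instead of the stamp above):
    # flask db upgrade              # create all tables from the baseline
else
    flask db upgrade                # apply any new migrations
fi
```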
### Manual Setup
If you need to set up manually:
```bash
# Inside your Docker container
python docker_migrate_init.py
# For existing database with tables:
flask db stamp head
# For new empty database:
flask db upgrade
```
## Creating New Migrations
After initial setup, create new migrations normally:
```bash
# 1. Make changes to your models
# 2. Generate migration
flask db migrate -m "Add user preferences"
# 3. Review the generated migration
cat migrations/versions/*.py
# 4. Apply the migration
flask db upgrade
```
## Helper Script
The `docker_migrate_init.py` script creates a `migrate.sh` helper:
```bash
# Check current migration status
./migrate.sh status
# Apply pending migrations
./migrate.sh apply
# Create new migration
./migrate.sh create "Add company settings"
# Mark database as current (existing DBs)
./migrate.sh mark-current
```
## Docker Compose Example
```yaml
version: '3.8'
services:
  web:
    build: .
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/timetrack
      - FLASK_APP=app.py
    volumes:
      # Persist migrations between container restarts
      - ./migrations:/app/migrations
    depends_on:
      - db
    command: ./startup_postgres.sh
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=timetrack
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:
```
## Important Notes
### 1. Migrations Directory
- The `migrations/` directory should be persisted between deployments
- Either use a volume mount or include it in your Docker image
- Don't regenerate migrations on each deployment
### 2. Environment Variables
Always set these in your Docker environment:
```bash
FLASK_APP=app.py
DATABASE_URL=your_database_url
```
### 3. Production Workflow
1. **Development**: Create and test migrations locally
2. **Commit**: Add migration files to Git
3. **Build**: Include migrations in Docker image
4. **Deploy**: Startup script applies migrations automatically
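A concrete end-to-end example of that workflow (migration message and image tag are illustrative):
```bash
# 1. Development: create and test the migration locally
flask db migrate -m "Add invoice table"
flask db upgrade

# 2. Commit: ship the migration file together with the model change
git add models/ migrations/versions/
git commit -m "Add invoice table migration"

# 3. Build: the migrations/ directory is baked into the image
docker build -t timetrack:latest .

# 4. Deploy: the startup script runs 'flask db upgrade' on boot
docker-compose up -d
```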
### 4. Rollback Strategy
To rollback a migration:
```bash
# Inside container
flask db downgrade # Go back one migration
flask db downgrade -2 # Go back two migrations
```
## Troubleshooting
### "No Git repository found"
This is expected in Docker. Use `docker_migrate_init.py` instead of the Git-based scripts.
### "Can't locate revision"
Your database references a migration that doesn't exist:
```bash
# Reset to current state
python docker_migrate_init.py
flask db stamp head
```
### Migration conflicts after deployment
If migrations were created in different environments:
```bash
# Merge migrations
flask db merge -m "Merge production and development"
flask db upgrade
```
## Best Practices
1. **Always test migrations** in a staging environment first
2. **Back up your database** before applying migrations in production
3. **Include migrations in your Docker image** for consistency
4. **Don't generate migrations in production** - only apply pre-tested ones
5. **Monitor the startup logs** to ensure migrations apply successfully
## Migration State in Different Scenarios
### Scenario 1: Fresh deployment, empty database
- Startup script runs `docker_migrate_init.py`
- Creates baseline migration
- Applies it to create all tables
### Scenario 2: Existing database, first Flask-Migrate setup
- Startup script runs `docker_migrate_init.py`
- Creates baseline migration matching current schema
- Stamps database as current (no changes)
### Scenario 3: Subsequent deployments with new migrations
- Startup script detects `migrations/` exists
- Runs `flask db upgrade` to apply new migrations
### Scenario 4: Container restart (no new code)
- Startup script detects `migrations/` exists
- Runs `flask db upgrade` (no-op if already current)
This approach ensures migrations work correctly in all Docker deployment scenarios!

223
FLASK_MIGRATE_GUIDE.md Normal file
View File

@@ -0,0 +1,223 @@
# Flask-Migrate Migration Guide
## Overview
TimeTrack has been refactored to use Flask-Migrate (which wraps Alembic) for database migrations instead of manual SQL scripts. This provides automatic migration generation, version control, and rollback capabilities.
**IMPORTANT**: The baseline for Flask-Migrate is set at git commit `4214e88d18fce7a9c75927753b8d4e9222771e14`. All schema changes after this commit need to be recreated as Flask-Migrate migrations.
**For Docker Deployments**: See `DOCKER_MIGRATIONS_GUIDE.md` for Docker-specific instructions (no Git required).
## Migration from Old System
### For Existing Deployments
If you have an existing database with the old migration system:
```bash
# 1. Install new dependencies
pip install -r requirements.txt
# 2. Establish baseline from commit 4214e88
python simple_baseline_4214e88.py
# Note: Use simple_baseline_4214e88.py as it handles the models.py transition correctly
# 3. Mark your database as being at the baseline
flask db stamp head
# 4. Apply any post-baseline migrations
# Review migrations_old/postgres_only_migration.py for changes after 4214e88
# Create new migrations for each feature:
flask db migrate -m "Add company updated_at column"
flask db migrate -m "Add user 2FA columns"
flask db migrate -m "Add company invitation table"
# etc...
```
### For New Deployments
```bash
# 1. Install dependencies
pip install -r requirements.txt
# 2. Initialize and create database
python manage_migrations.py init
python manage_migrations.py apply
```
## Daily Usage
### Creating Migrations
When you modify models (add columns, tables, etc.):
```bash
# Generate migration automatically
flask db migrate -m "Add user avatar field"
# Or use the helper script
python manage_migrations.py create -m "Add user avatar field"
```
**Always review the generated migration** in `migrations/versions/` before applying!
### Applying Migrations
```bash
# Apply all pending migrations
flask db upgrade
# Or use the helper script
python manage_migrations.py apply
```
### Rolling Back
```bash
# Rollback one migration
flask db downgrade
# Or use the helper script
python manage_migrations.py rollback
```
### Viewing Status
```bash
# Current migration version
flask db current
# Migration history
flask db history
# Or use the helper script
python manage_migrations.py history
```
## Important Considerations
### 1. PostgreSQL Enums
Flask-Migrate may not perfectly handle PostgreSQL enum types. When adding new enum values:
```python
# In the migration file, you may need to add:
from alembic import op
import sqlalchemy as sa
def upgrade():
    # Add new enum value
    op.execute("ALTER TYPE taskstatus ADD VALUE 'NEW_STATUS'")
```
### 2. Data Migrations
For complex data transformations, add custom code to migration files:
```python
def upgrade():
    # Schema changes
    op.add_column('user', sa.Column('new_field', sa.String()))
    # Data migration (parameterized to avoid SQL injection; "user" quoted for PostgreSQL)
    connection = op.get_bind()
    result = connection.execute(sa.text('SELECT id, old_field FROM "user"'))
    for row in result:
        connection.execute(
            sa.text('UPDATE "user" SET new_field = :value WHERE id = :id'),
            {"value": process(row.old_field), "id": row.id},  # process() is your transformation
        )
```
### 3. Production Deployments
The startup scripts have been updated to automatically run migrations:
```bash
# startup_postgres.sh now includes:
flask db upgrade
```
### 4. Development Workflow
1. Pull latest code
2. Run `flask db upgrade` to apply any new migrations
3. Make your model changes
4. Run `flask db migrate -m "Description"`
5. Review the generated migration
6. Test with `flask db upgrade`
7. Commit both model changes and migration file
## Troubleshooting
### "Target database is not up to date"
```bash
# Check current version
flask db current
# Force upgrade
flask db stamp head # Mark as latest without running
flask db upgrade # Apply any pending
```
### "Can't locate revision"
Your database revision doesn't match any migration file. This typically happens after switching branches.
```bash
# See all migrations
flask db history
# Stamp to a specific revision
flask db stamp <revision_id>
```
### Migration Conflicts
When multiple developers create migrations:
1. Merge the migration files carefully
2. Update the `down_revision` in the newer migration
3. Test thoroughly
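For example, if a pull leaves two heads (the revision IDs below are made up), you can either edit `down_revision` in the newer file by hand or let Alembic generate a merge migration:
```bash
flask db heads
# -> abc123 (head), def456 (head)   # two heads after the pull (example IDs)

# Option A: edit the newer migration so its down_revision points at the other head,
# then apply it:
flask db upgrade

# Option B: let Alembic create a merge migration instead:
flask db merge -m "Merge development branches" abc123 def456
flask db upgrade
```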
## Best Practices
1. **One migration per feature** - Don't bundle unrelated changes
2. **Descriptive messages** - Use clear migration messages
3. **Review before applying** - Always check generated SQL
4. **Test rollbacks** - Ensure downgrade() works
5. **Backup before major migrations** - Especially in production
## Migration File Structure
```
migrations/
├── README.md # Quick reference
├── alembic.ini # Alembic configuration
├── env.py # Migration environment
├── script.py.mako # Migration template
└── versions/ # Migration files
    ├── 001_initial_migration.py
    ├── 002_add_user_avatars.py
    └── ...
```
## Helper Scripts
- `manage_migrations.py` - Simplified migration management
- `migrate_to_alembic.py` - One-time transition from old system
- `init_migrations.py` - Quick initialization script
## Environment Variables
```bash
# Required for migrations
export FLASK_APP=app.py
export DATABASE_URL=postgresql://user:pass@host/db
```
## References
- [Flask-Migrate Documentation](https://flask-migrate.readthedocs.io/)
- [Alembic Documentation](https://alembic.sqlalchemy.org/)
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/)

224
FLASK_MIGRATE_TROUBLESHOOTING.md Normal file
View File

@@ -0,0 +1,224 @@
# Flask-Migrate Troubleshooting Guide
## Common Issues and Solutions
### 0. Baseline Script Fails - "Could not extract models/"
**Error**: When running `establish_baseline_4214e88.py`:
```
⚠️ Could not extract models/__init__.py
⚠️ Could not extract models/base.py
```
**Cause**: Commit 4214e88 uses a single `models.py` file, not the modular `models/` directory.
**Solution**:
```bash
# For local development with Git:
python simple_baseline_4214e88.py
# For Docker deployments (no Git):
python docker_migrate_init.py
```
### 1. "Target database is not up to date"
**Error**: When running `flask db migrate`, you get:
```
ERROR [flask_migrate] Target database is not up to date.
```
**Solution**:
```bash
# Apply pending migrations first
flask db upgrade
# Then create new migration
flask db migrate -m "Your changes"
```
### 2. "No changes in schema detected"
**Possible Causes**:
1. No actual model changes were made
2. Model not imported in `models/__init__.py`
3. Database already has the changes
**Solutions**:
```bash
# Check what Flask-Migrate sees
flask db compare
# Force detection by editing a model slightly
# (add a comment, save, then remove it)
# Check current state
python diagnose_migrations.py
```
### 3. After First Migration, Second One Fails
**This is the most common issue!**
After creating the baseline migration, you must apply it before creating new ones:
```bash
# Sequence:
flask db migrate -m "Initial migration" # Works ✓
flask db migrate -m "Add new column" # Fails ✗
# Fix:
flask db upgrade # Apply first migration
flask db migrate -m "Add new column" # Now works ✓
```
### 4. Import Errors
**Error**: `ModuleNotFoundError` or `ImportError`
**Solution**:
```bash
# Ensure FLASK_APP is set
export FLASK_APP=app.py
# Check imports
python -c "from app import app, db; print('OK')"
```
### 5. PostgreSQL Enum Issues
**Error**: Cannot add new enum value in migration
**Solution**: Edit the generated migration file:
```python
def upgrade():
    # Instead of using Enum type directly
    # Use raw SQL for PostgreSQL enums
    op.execute("ALTER TYPE taskstatus ADD VALUE IF NOT EXISTS 'NEW_VALUE'")
```
### 6. Migration Conflicts After Git Pull
**Error**: Conflicting migration heads
**Solution**:
```bash
# Merge the migrations
flask db merge -m "Merge migrations"
# Then upgrade
flask db upgrade
```
## Quick Diagnostic Commands
```bash
# Run full diagnostics
python diagnose_migrations.py
# Fix sequence issues
python fix_migration_sequence.py
# Check current state
flask db current # Current DB revision
flask db heads # Latest file revision
flask db history # All migrations
# Compare DB with models
flask db compare # Shows differences
```
## Best Practices to Avoid Issues
1. **Always upgrade before new migrations**:
```bash
flask db upgrade
flask db migrate -m "New changes"
```
2. **Review generated migrations**:
- Check `migrations/versions/` folder
- Look for DROP commands you didn't intend
3. **Test on development first**:
```bash
# Test the migration
flask db upgrade
# Test rollback
flask db downgrade
```
4. **Handle enums carefully**:
- PostgreSQL enums need special handling
- Consider using String columns instead
5. **Commit migrations with code**:
- Always commit migration files with model changes
- This keeps database and code in sync
## Revision Mismatch Errors
### "Can't locate revision identified by 'xxxxx'"
This means your database thinks it's at a revision that doesn't exist in your migration files.
**Quick Fix**:
```bash
# Run the automated fix
./quick_fix_revision.sh
# Or manually:
# 1. Find your latest migration
ls migrations/versions/*.py
# 2. Get the revision from the file
grep "revision = " migrations/versions/latest_file.py
# 3. Stamp database to that revision
flask db stamp <revision_id>
```
**Detailed Diagnosis**:
```bash
python fix_revision_mismatch.py
```
## Emergency Fixes
### Reset Migration State (Development Only!)
```bash
# Remove migrations and start over
rm -rf migrations
python establish_baseline_4214e88.py
flask db stamp head
```
### Force Database to Current State
```bash
# Mark database as up-to-date without running migrations
flask db stamp head
# Or stamp to specific revision
flask db stamp <revision_id>
```
### Manual Migration Edit
Sometimes you need to edit the generated migration:
1. Generate migration: `flask db migrate -m "Changes"`
2. Edit file in `migrations/versions/`
3. Test with: `flask db upgrade`
4. Test rollback: `flask db downgrade`
## Getting Help
If these solutions don't work:
1. Run diagnostics: `python diagnose_migrations.py`
2. Check the full error message
3. Look at the generated SQL: `flask db upgrade --sql`
4. Check Flask-Migrate logs in detail

View File

@@ -1,180 +0,0 @@
# Freelancer Migration Guide
This document explains the database migration for freelancer support in TimeTrack.
## Overview
The freelancer migration adds support for independent users who can register without a company token. It introduces:
1. **Account Types**: Users can be either "Company User" or "Freelancer"
2. **Personal Companies**: Freelancers automatically get their own company workspace
3. **Business Names**: Optional field for freelancers to specify their business name
## Database Changes
### User Table Changes
- `account_type` VARCHAR(20) DEFAULT 'COMPANY_USER' - Type of account
- `business_name` VARCHAR(100) - Optional business name for freelancers
- `company_id` INTEGER - Foreign key to company table (for multi-tenancy)
### Company Table Changes
- `is_personal` BOOLEAN DEFAULT 0 - Marks companies auto-created for freelancers
## Migration Options
### Option 1: Automatic Migration (Recommended)
The main migration script (`migrate_db.py`) now includes freelancer support:
```bash
python migrate_db.py
```
This will:
- Add new columns to existing tables
- Create company table if it doesn't exist
- Set default values for existing users
### Option 2: Dedicated Freelancer Migration
Use the dedicated freelancer migration script:
```bash
python migrate_freelancers.py
```
### Option 3: Manual SQL Migration
If you prefer manual control:
```sql
-- Add columns to user table
ALTER TABLE user ADD COLUMN account_type VARCHAR(20) DEFAULT 'COMPANY_USER';
ALTER TABLE user ADD COLUMN business_name VARCHAR(100);
ALTER TABLE user ADD COLUMN company_id INTEGER;
-- Create company table (if it doesn't exist)
CREATE TABLE company (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(100) UNIQUE NOT NULL,
slug VARCHAR(50) UNIQUE NOT NULL,
description TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
is_personal BOOLEAN DEFAULT 0,
is_active BOOLEAN DEFAULT 1,
max_users INTEGER DEFAULT 100
);
-- Or add column to existing company table
ALTER TABLE company ADD COLUMN is_personal BOOLEAN DEFAULT 0;
-- Update existing users
UPDATE user SET account_type = 'COMPANY_USER' WHERE account_type IS NULL;
```
## Post-Migration Steps
### For Existing Installations
1. **Create Default Company**: If you have existing users without a company, create one:
```python
# In Python/Flask shell
from models import db, Company, User
# Create default company
company = Company(
name="Default Company",
slug="default-company",
description="Default company for existing users"
)
db.session.add(company)
db.session.flush()
# Assign existing users to default company
User.query.filter_by(company_id=None).update({'company_id': company.id})
db.session.commit()
```
2. **Verify Migration**: Check that all users have a company_id:
```sql
SELECT COUNT(*) FROM user WHERE company_id IS NULL;
-- Should return 0
```
### Testing Freelancer Registration
1. Visit `/register/freelancer`
2. Register a new freelancer account
3. Verify the personal company was created
4. Test login and time tracking functionality
## New Features Available
### Freelancer Registration
- **URL**: `/register/freelancer`
- **Features**:
- No company token required
- Auto-creates personal workspace
- Optional business name field
- Immediate account activation
### Registration Options
- **Company Registration**: `/register` (existing)
- **Freelancer Registration**: `/register/freelancer` (new)
- **Login Page**: Shows both registration options
### User Experience
- Freelancers get admin privileges in their personal company
- Can create projects and track time immediately
- Personal workspace is limited to 1 user by default
- Can optionally expand to hire employees later
## Troubleshooting
### Common Issues
**Migration fails with "column already exists"**
- This is normal if you've run the migration before
- The migration script checks for existing columns
**Users missing company_id after migration**
- Run the post-migration steps above to assign a default company
**Freelancer registration fails**
- Check that the AccountType enum is imported correctly
- Verify database migration completed successfully
### Rollback (Limited)
SQLite doesn't support dropping columns, so rollback is limited:
```bash
python migrate_freelancers.py rollback
```
For full rollback, you would need to:
1. Export user data
2. Recreate tables without freelancer columns
3. Re-import data
## Verification Commands
```bash
# Verify migration applied
python migrate_freelancers.py verify
# Check table structure
sqlite3 timetrack.db ".schema user"
sqlite3 timetrack.db ".schema company"
# Check data
sqlite3 timetrack.db "SELECT account_type, COUNT(*) FROM user GROUP BY account_type;"
```
## Security Considerations
- Freelancers get unique usernames/emails globally (not per-company)
- Personal companies are limited to 1 user by default
- Freelancers have admin privileges only in their personal workspace
- Multi-tenant isolation is maintained
## Future Enhancements
- Allow freelancers to upgrade to team accounts
- Billing integration for freelancer vs company accounts
- Advanced freelancer-specific features
- Integration with invoicing systems

View File

@@ -1,174 +0,0 @@
# Project Time Logging Migration Guide
This document explains how to migrate your TimeTrack database to support the new Project Time Logging feature.
## Overview
The Project Time Logging feature adds the ability to:
- Track time against specific projects
- Manage projects with role-based access control
- Filter and report on project-based time entries
- Export data with project information
## Database Changes
### New Tables
- **`project`**: Stores project information including name, code, description, team assignment, and dates
### Modified Tables
- **`time_entry`**: Added `project_id` (foreign key) and `notes` (text) columns
- **Existing data**: All existing time entries remain unchanged and will show as "No project assigned"
## Migration Options
### Option 1: Run Main Migration Script (Recommended)
The main migration script has been updated to include project functionality:
```bash
python migrate_db.py
```
This will:
- Create the project table
- Add project_id and notes columns to time_entry
- Create 3 sample projects (if no admin user exists)
- Maintain all existing data
### Option 2: Run Project-Specific Migration
For existing installations, you can run the project-specific migration:
```bash
python migrate_projects.py
```
### Option 3: Manual Migration
If you prefer to handle the migration manually, execute these SQL commands:
```sql
-- Create project table
CREATE TABLE project (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(100) NOT NULL,
description TEXT,
code VARCHAR(20) NOT NULL UNIQUE,
is_active BOOLEAN DEFAULT 1,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
created_by_id INTEGER NOT NULL,
team_id INTEGER,
start_date DATE,
end_date DATE,
FOREIGN KEY (created_by_id) REFERENCES user (id),
FOREIGN KEY (team_id) REFERENCES team (id)
);
-- Add columns to time_entry table
ALTER TABLE time_entry ADD COLUMN project_id INTEGER;
ALTER TABLE time_entry ADD COLUMN notes TEXT;
```
## Sample Projects
The migration creates these sample projects (if admin user exists):
1. **ADMIN001** - General Administration
2. **DEV001** - Development Project
3. **SUPPORT001** - Customer Support
These can be modified or deleted after migration.
## Rollback
To rollback the project functionality (removes projects but keeps time entry columns):
```bash
python migrate_projects.py rollback
```
**Note**: Due to SQLite limitations, the `project_id` and `notes` columns cannot be removed from the `time_entry` table during rollback.
## Post-Migration Steps
1. **Verify Migration**: Check that the migration completed successfully
2. **Create Projects**: Admin/Supervisor users can create projects via the web interface
3. **Assign Teams**: Optionally assign projects to specific teams
4. **User Training**: Inform users about the new project selection feature
## Migration Verification
After running the migration, verify it worked by:
1. **Check Tables**:
```sql
.tables -- Should show 'project' table
.schema project -- Verify project table structure
.schema time_entry -- Verify project_id and notes columns
```
2. **Check Web Interface**:
- Admin/Supervisor users should see "Manage Projects" in their dropdown menu
- Time tracking interface should show project selection dropdown
- History page should have project filtering
3. **Check Sample Projects**:
```sql
SELECT * FROM project; -- Should show 3 sample projects
```
## Troubleshooting
### Migration Fails
- Ensure no active connections to the database
- Check file permissions
- Verify admin user exists in the database
### Missing Navigation Links
- Clear browser cache
- Verify user has Admin or Supervisor role
- Check that the templates have been updated
### Project Selection Not Available
- Verify migration completed successfully
- Check that active projects exist in the database
- Ensure user has permission to access projects
## Feature Access
### Admin Users
- Create, edit, delete, and manage all projects
- Access project management interface
- View all project reports
### Supervisor Users
- Create, edit, and manage projects
- Access project management interface
- View project reports
### Team Leader Users
- View team hours with project breakdown
- No project creation/management access
### Team Member Users
- Select projects when tracking time
- View personal history with project filtering
- No project management access
## File Changes
The migration affects these files:
- `migrate_db.py` - Updated main migration script
- `migrate_projects.py` - New project-specific migration
- `models.py` - Added Project model and updated TimeEntry
- `app.py` - Added project routes and updated existing routes
- Templates - Updated with project functionality
- `static/js/script.js` - Updated time tracking JavaScript
## Backup Recommendation
Before running any migration, it's recommended to backup your database:
```bash
cp timetrack.db timetrack.db.backup
```
This allows you to restore the original database if needed.

74
MIGRATION_SETUP_FINAL.md Normal file
View File

@@ -0,0 +1,74 @@
# Final Migration Setup for TimeTrack
## What's Working Now
Your migration system is now fully functional with:
1. **Flask-Migrate** - Handles database schema changes
2. **Automatic Enum Sync** - Handles PostgreSQL enum values
3. **Docker Support** - Works without Git in containers
## Essential Files to Keep
### Core Migration Files
- `migrations/` - Flask-Migrate directory (required)
- `sync_postgres_enums.py` - Auto-syncs enum values on startup
- `docker_migrate_init.py` - Initializes migrations in Docker
### Updated Startup Scripts
- `startup_postgres.sh` - Now includes enum sync
- `startup_postgres_safe.sh` - Debug version with error handling
- `startup.sh` - Updated for Flask-Migrate
### Debug Tools (Optional)
- `debug_entrypoint.sh` - For troubleshooting
- `docker-compose.debug.yml` - Debug Docker setup
### Documentation
- `FLASK_MIGRATE_GUIDE.md` - Complete guide
- `DOCKER_MIGRATIONS_GUIDE.md` - Docker-specific guide
- `POSTGRES_ENUM_GUIDE.md` - Enum handling guide
- `FLASK_MIGRATE_TROUBLESHOOTING.md` - Troubleshooting guide
## Workflow Summary
### For New Schema Changes
```bash
# 1. Modify your models
# 2. Generate migration
flask db migrate -m "Add new feature"
# 3. Review the generated file
# 4. Apply migration
flask db upgrade
```
### For New Enum Values
```python
# Just add to Python enum - sync happens automatically
class TaskStatus(enum.Enum):
NEW_STATUS = "New Status"
```
### Docker Deployment
```bash
# Everything is automatic in startup scripts:
# 1. Migrations applied
# 2. Enums synced
# 3. App starts
```
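Concretely, the relevant part of the startup sequence looks roughly like this (a sketch, not the literal script; the final app start command depends on your deployment):
```bash
export FLASK_APP=app.py
flask db upgrade                  # 1. apply pending migrations
python sync_postgres_enums.py     # 2. sync new Python enum values into the PostgreSQL enum types
python app.py                     # 3. start the app (or your production server command)
```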
## Cleanup
Run the cleanup script to remove all temporary files:
```bash
./cleanup_migration_cruft.sh
```
This removes ~20+ temporary scripts while keeping the essential ones.
## Notes
- The old migration system (`migrations_old/`) can be removed after confirming everything works
- PostgreSQL enums now support both names (TODO) and values (To Do)
- All future migrations are handled by Flask-Migrate
- Enum sync runs automatically on every startup

View File

@@ -12,6 +12,7 @@ itsdangerous = "==2.0.1"
click = "==8.0.1" click = "==8.0.1"
flask-sqlalchemy = "==2.5.1" flask-sqlalchemy = "==2.5.1"
sqlalchemy = "==1.4.23" sqlalchemy = "==1.4.23"
flask-migrate = "==3.1.0"
[dev-packages] [dev-packages]

View File

@@ -1,176 +0,0 @@
# Database Schema Changes Summary
This document summarizes all database schema changes between commit 4214e88 and the current state of the TimeTrack application.
## Architecture Changes
### 1. **Model Structure Refactoring**
- **Before**: Single monolithic `models.py` file containing all models
- **After**: Models split into domain-specific modules:
- `models/__init__.py` - Package initialization
- `models/base.py` - Base model definitions
- `models/company.py` - Company-related models
- `models/user.py` - User-related models
- `models/project.py` - Project-related models
- `models/task.py` - Task-related models
- `models/time_entry.py` - Time entry model
- `models/sprint.py` - Sprint model
- `models/team.py` - Team model
- `models/system.py` - System settings models
- `models/announcement.py` - Announcement model
- `models/dashboard.py` - Dashboard-related models
- `models/work_config.py` - Work configuration model
- `models/invitation.py` - Company invitation model
- `models/enums.py` - All enum definitions
## New Tables Added
### 1. **company_invitation** (NEW)
- Purpose: Email-based company registration invitations
- Columns:
- `id` (INTEGER, PRIMARY KEY)
- `company_id` (INTEGER, FOREIGN KEY → company.id)
- `email` (VARCHAR(120), NOT NULL)
- `token` (VARCHAR(64), UNIQUE, NOT NULL)
- `role` (VARCHAR(50), DEFAULT 'Team Member')
- `invited_by_id` (INTEGER, FOREIGN KEY → user.id)
- `created_at` (TIMESTAMP, DEFAULT CURRENT_TIMESTAMP)
- `expires_at` (TIMESTAMP, NOT NULL)
- `accepted` (BOOLEAN, DEFAULT FALSE)
- `accepted_at` (TIMESTAMP)
- `accepted_by_user_id` (INTEGER, FOREIGN KEY → user.id)
- Indexes:
- `idx_invitation_token` on token
- `idx_invitation_email` on email
- `idx_invitation_company` on company_id
- `idx_invitation_expires` on expires_at
## Modified Tables
### 1. **company**
- Added columns:
- `updated_at` (TIMESTAMP, DEFAULT CURRENT_TIMESTAMP) - NEW
### 2. **user**
- Added columns:
- `two_factor_enabled` (BOOLEAN, DEFAULT FALSE) - NEW
- `two_factor_secret` (VARCHAR(32), NULLABLE) - NEW
- `avatar_url` (VARCHAR(255), NULLABLE) - NEW
### 3. **user_preferences**
- Added columns:
- `theme` (VARCHAR(20), DEFAULT 'light')
- `language` (VARCHAR(10), DEFAULT 'en')
- `timezone` (VARCHAR(50), DEFAULT 'UTC')
- `date_format` (VARCHAR(20), DEFAULT 'YYYY-MM-DD')
- `time_format` (VARCHAR(10), DEFAULT '24h')
- `email_notifications` (BOOLEAN, DEFAULT TRUE)
- `email_daily_summary` (BOOLEAN, DEFAULT FALSE)
- `email_weekly_summary` (BOOLEAN, DEFAULT TRUE)
- `default_project_id` (INTEGER, FOREIGN KEY → project.id)
- `timer_reminder_enabled` (BOOLEAN, DEFAULT TRUE)
- `timer_reminder_interval` (INTEGER, DEFAULT 60)
- `dashboard_layout` (JSON, NULLABLE)
### 4. **user_dashboard**
- Added columns:
- `layout` (JSON, NULLABLE) - Alternative grid layout configuration
- `is_locked` (BOOLEAN, DEFAULT FALSE) - Prevent accidental changes
### 5. **company_work_config**
- Added columns:
- `standard_hours_per_day` (FLOAT, DEFAULT 8.0)
- `standard_hours_per_week` (FLOAT, DEFAULT 40.0)
- `overtime_enabled` (BOOLEAN, DEFAULT TRUE)
- `overtime_rate` (FLOAT, DEFAULT 1.5)
- `double_time_enabled` (BOOLEAN, DEFAULT FALSE)
- `double_time_threshold` (FLOAT, DEFAULT 12.0)
- `double_time_rate` (FLOAT, DEFAULT 2.0)
- `require_breaks` (BOOLEAN, DEFAULT TRUE)
- `break_duration_minutes` (INTEGER, DEFAULT 30)
- `break_after_hours` (FLOAT, DEFAULT 6.0)
- `weekly_overtime_threshold` (FLOAT, DEFAULT 40.0)
- `weekly_overtime_rate` (FLOAT, DEFAULT 1.5)
### 6. **company_settings**
- Added columns:
- `work_week_start` (INTEGER, DEFAULT 1)
- `work_days` (VARCHAR(20), DEFAULT '1,2,3,4,5')
- `allow_overlapping_entries` (BOOLEAN, DEFAULT FALSE)
- `require_project_for_time_entry` (BOOLEAN, DEFAULT TRUE)
- `allow_future_entries` (BOOLEAN, DEFAULT FALSE)
- `max_hours_per_entry` (FLOAT, DEFAULT 24.0)
- `enable_tasks` (BOOLEAN, DEFAULT TRUE)
- `enable_sprints` (BOOLEAN, DEFAULT FALSE)
- `enable_client_access` (BOOLEAN, DEFAULT FALSE)
- `notify_on_overtime` (BOOLEAN, DEFAULT TRUE)
- `overtime_threshold_daily` (FLOAT, DEFAULT 8.0)
- `overtime_threshold_weekly` (FLOAT, DEFAULT 40.0)
### 7. **dashboard_widget**
- Added columns:
- `config` (JSON) - Widget-specific configuration
- `is_visible` (BOOLEAN, DEFAULT TRUE)
## Enum Changes
### 1. **WorkRegion** enum
- Added value:
- `GERMANY = "Germany"` - NEW
### 2. **TaskStatus** enum
- Added value:
- `ARCHIVED = "Archived"` - NEW
### 3. **WidgetType** enum
- Expanded with many new widget types:
- Time Tracking: `CURRENT_TIMER`, `DAILY_SUMMARY`, `WEEKLY_CHART`, `BREAK_REMINDER`, `TIME_SUMMARY`
- Project Management: `ACTIVE_PROJECTS`, `PROJECT_PROGRESS`, `PROJECT_ACTIVITY`, `PROJECT_DEADLINES`, `PROJECT_STATUS`
- Task Management: `ASSIGNED_TASKS`, `TASK_PRIORITY`, `TASK_CALENDAR`, `UPCOMING_TASKS`, `TASK_LIST`
- Sprint: `SPRINT_OVERVIEW`, `SPRINT_BURNDOWN`, `SPRINT_PROGRESS`
- Team & Analytics: `TEAM_WORKLOAD`, `TEAM_PRESENCE`, `TEAM_ACTIVITY`
- Performance: `PRODUCTIVITY_STATS`, `TIME_DISTRIBUTION`, `PERSONAL_STATS`
- Actions: `QUICK_ACTIONS`, `RECENT_ACTIVITY`
## Migration Requirements
### PostgreSQL Migration Steps:
1. **Add company_invitation table** (migration 19)
2. **Add updated_at to company table** (migration 20)
3. **Add new columns to user table** for 2FA and avatar
4. **Add new columns to user_preferences table**
5. **Add new columns to user_dashboard table**
6. **Add new columns to company_work_config table**
7. **Add new columns to company_settings table**
8. **Add new columns to dashboard_widget table**
9. **Update enum types** for WorkRegion and TaskStatus
10. **Update WidgetType enum** with new values
### Data Migration Considerations:
1. **Default values**: All new columns have appropriate defaults
2. **Nullable fields**: Most new fields are nullable or have defaults
3. **Foreign keys**: New invitation table has proper FK constraints
4. **Indexes**: Performance indexes added for invitation lookups
5. **Enum migrations**: Need to handle enum type changes carefully in PostgreSQL
### Breaking Changes:
- None identified - all changes are additive or have defaults
### Rollback Strategy:
1. Drop new tables (company_invitation)
2. Drop new columns from existing tables
3. Revert enum changes (remove new values)
## Summary
The main changes involve:
1. Adding email invitation functionality with a new table
2. Enhancing user features with 2FA and avatars
3. Expanding dashboard and widget capabilities
4. Adding comprehensive work configuration options
5. Better tracking with updated_at timestamps
6. Regional compliance support with expanded WorkRegion enum

23
app.py
View File

@@ -1,4 +1,5 @@
from flask import Flask, render_template, request, redirect, url_for, jsonify, flash, session, g, Response, send_file, abort
from flask_migrate import Migrate
from models import db, TimeEntry, WorkConfig, User, SystemSettings, Team, Role, Project, Company, CompanyWorkConfig, CompanySettings, UserPreferences, WorkRegion, AccountType, ProjectCategory, Task, SubTask, TaskStatus, TaskPriority, TaskDependency, Sprint, SprintStatus, Announcement, SystemEvent, WidgetType, UserDashboard, DashboardWidget, WidgetTemplate, Comment, CommentVisibility, BrandingSettings, CompanyInvitation, Note, NoteFolder, NoteShare
from data_formatting import (
    format_duration, prepare_export_data, prepare_team_hours_export_data,
@@ -47,6 +48,7 @@ from routes.auth import login_required, admin_required, system_admin_required, r
# Import utility functions
from utils.auth import is_system_admin, can_access_system_settings
from security_headers import init_security
from utils.settings import get_system_setting
# Import analytics data function from export module
@@ -65,6 +67,24 @@ app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY', 'dev_key_for_timetrack')
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=7)  # Session lasts for 7 days
# Fix for HTTPS behind proxy (nginx, load balancer, etc)
# This ensures forms use https:// URLs when behind a reverse proxy
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(
    app.wsgi_app,
    x_for=1,     # Trust X-Forwarded-For
    x_proto=1,   # Trust X-Forwarded-Proto
    x_host=1,    # Trust X-Forwarded-Host
    x_prefix=1   # Trust X-Forwarded-Prefix
)
# Force HTTPS URL scheme in production
if not app.debug and os.environ.get('FORCE_HTTPS', 'false').lower() in ['true', '1', 'yes']:
    app.config['PREFERRED_URL_SCHEME'] = 'https'
# Initialize security headers
init_security(app)
# Configure Flask-Mail
app.config['MAIL_SERVER'] = os.environ.get('MAIL_SERVER', 'smtp.example.com')
app.config['MAIL_PORT'] = int(os.environ.get('MAIL_PORT') or 587)
@@ -85,6 +105,9 @@ mail = Mail(app)
# Initialize the database with the app
db.init_app(app)
# Initialize Flask-Migrate
migrate = Migrate(app, db)
# Register blueprints
app.register_blueprint(notes_bp)
app.register_blueprint(notes_download_bp)

16
apply_migration.py Normal file
View File

@@ -0,0 +1,16 @@
#!/usr/bin/env python
"""Apply database migrations with Flask-Migrate"""
from flask_migrate import upgrade
from app import app, db
if __name__ == '__main__':
    with app.app_context():
        print("Applying migrations...")
        try:
            upgrade()
            print("Migrations applied successfully!")
        except Exception as e:
            print(f"Error applying migrations: {e}")
            import traceback
            traceback.print_exc()

65
check_migration_state.py Normal file
View File

@@ -0,0 +1,65 @@
#!/usr/bin/env python
"""Check and fix migration state in the database"""
from app import app, db
from sqlalchemy import text
def check_alembic_version():
"""Check the current alembic version in the database"""
with app.app_context():
try:
# Check if alembic_version table exists
result = db.session.execute(text(
"SELECT table_name FROM information_schema.tables "
"WHERE table_schema = 'public' AND table_name = 'alembic_version'"
))
if result.rowcount == 0:
print("No alembic_version table found. This is a fresh database.")
return None
# Get current version
result = db.session.execute(text("SELECT version_num FROM alembic_version"))
row = result.fetchone()
if row:
print(f"Current migration version in database: {row[0]}")
return row[0]
else:
print("alembic_version table exists but is empty")
return None
except Exception as e:
print(f"Error checking migration state: {e}")
return None
def clean_migration_state():
"""Clean up the migration state"""
with app.app_context():
try:
print("\nCleaning migration state...")
# Drop the alembic_version table
db.session.execute(text("DROP TABLE IF EXISTS alembic_version"))
db.session.commit()
print("Migration state cleaned successfully!")
return True
except Exception as e:
print(f"Error cleaning migration state: {e}")
db.session.rollback()
return False
if __name__ == '__main__':
print("Checking migration state...")
version = check_alembic_version()
if version:
print(f"\nThe database references migration '{version}' which doesn't exist in files.")
response = input("Do you want to clean the migration state? (yes/no): ")
if response.lower() == 'yes':
if clean_migration_state():
print("\nYou can now create a fresh initial migration.")
else:
print("\nFailed to clean migration state.")
else:
print("\nNo migration issues found. You can create a fresh initial migration.")

96
clean_migration_state.py Normal file
View File

@@ -0,0 +1,96 @@
#!/usr/bin/env python
"""Clean migration state and handle orphaned tables"""
from app import app, db
from sqlalchemy import text
def get_all_tables():
"""Get all tables in the database"""
with app.app_context():
result = db.session.execute(text(
"SELECT table_name FROM information_schema.tables "
"WHERE table_schema = 'public' AND table_type = 'BASE TABLE'"
))
return [row[0] for row in result]
def check_migration_state():
"""Check current migration state"""
with app.app_context():
try:
result = db.session.execute(text("SELECT version_num FROM alembic_version"))
row = result.fetchone()
if row:
print(f"Current migration version: {row[0]}")
return row[0]
except:
print("No alembic_version table found")
return None
def clean_migration_only():
"""Clean only the migration state, keep all other tables"""
with app.app_context():
try:
print("Cleaning migration state only...")
db.session.execute(text("DELETE FROM alembic_version"))
db.session.commit()
print("Migration state cleaned successfully!")
return True
except Exception as e:
print(f"Error: {e}")
db.session.rollback()
return False
def list_orphaned_tables():
"""List tables that exist in DB but not in models"""
with app.app_context():
all_tables = get_all_tables()
# Get tables from current models
model_tables = set()
for table in db.metadata.tables.values():
model_tables.add(table.name)
# Find orphaned tables
orphaned = []
for table in all_tables:
if table not in model_tables and table != 'alembic_version':
orphaned.append(table)
return orphaned
if __name__ == '__main__':
print("=== Migration State Check ===")
# Check current state
version = check_migration_state()
# List all tables
print("\n=== Database Tables ===")
tables = get_all_tables()
for table in sorted(tables):
print(f" - {table}")
# Check for orphaned tables
orphaned = list_orphaned_tables()
if orphaned:
print("\n=== Orphaned Tables (not in current models) ===")
for table in sorted(orphaned):
print(f" - {table}")
print("\nThese tables exist in the database but are not defined in your current models.")
print("They might be from old features or previous schema versions.")
if version:
print(f"\n=== Action Required ===")
print(f"The database has migration '{version}' but no migration files exist.")
print("\nOptions:")
print("1. Clean migration state only (keeps all tables)")
print("2. Cancel and handle manually")
choice = input("\nEnter your choice (1 or 2): ")
if choice == '1':
if clean_migration_only():
print("\n✓ Migration state cleaned!")
print("You can now run: python create_migration.py")
else:
print("\nCancelled. No changes made.")

23
create_migration.py Normal file
View File

@@ -0,0 +1,23 @@
#!/usr/bin/env python
"""Create a new migration with Flask-Migrate"""
import os
import sys
from flask_migrate import migrate as _migrate
from app import app, db
if __name__ == '__main__':
    with app.app_context():
        print("Creating migration...")
        try:
            # Get migration message from command line or use default
            message = sys.argv[1] if len(sys.argv) > 1 else "Initial migration"
            # Create the migration
            _migrate(message=message)
            print(f"Migration '{message}' created successfully!")
            print("Review the migration file in migrations/versions/")
            print("To apply the migration, run: python apply_migration.py")
        except Exception as e:
            print(f"Error creating migration: {e}")
            sys.exit(1)

53
docker-compose.debug.yml Normal file
View File

@@ -0,0 +1,53 @@
version: '3.8'
# Debug version of docker-compose for troubleshooting migration issues
# Usage: docker-compose -f docker-compose.debug.yml up
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://timetrack:timetrack@db:5432/timetrack
      - FLASK_APP=app.py
      - FLASK_ENV=development
      # Debug options - uncomment as needed:
      - DEBUG_MODE=true        # Continue running even if migrations fail
      # - SKIP_MIGRATIONS=true # Skip migrations entirely
    volumes:
      - .:/app  # Mount entire directory for easy debugging
    depends_on:
      - db
    # Use debug entrypoint that keeps container running
    entrypoint: ["/app/debug_entrypoint.sh"]
    stdin_open: true  # Keep stdin open
    tty: true         # Allocate a pseudo-TTY
  web_safe:
    build: .
    ports:
      - "5001:5000"
    environment:
      - DATABASE_URL=postgresql://timetrack:timetrack@db:5432/timetrack
      - FLASK_APP=app.py
      - DEBUG_MODE=true  # Won't exit on migration failure
    volumes:
      - .:/app
    depends_on:
      - db
    command: ["/app/startup_postgres_safe.sh"]
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=timetrack
      - POSTGRES_USER=timetrack
      - POSTGRES_PASSWORD=timetrack
    ports:
      - "5432:5432"  # Expose for external debugging
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:

View File

@@ -48,7 +48,6 @@ services:
        condition: service_healthy
    volumes:
      - ${DATA_PATH:-./data}:/data
      - shared_socket:/host/shared
volumes:
  postgres_data:

231
docker_migrate_init.py Executable file
View File

@@ -0,0 +1,231 @@
#!/usr/bin/env python3
"""
Docker-friendly Flask-Migrate initialization.
No Git required - works with current schema as baseline.
"""
import os
import sys
import subprocess
import shutil
from datetime import datetime
def run_command(cmd, description, check=True):
"""Run a command and handle errors."""
print(f"\n{description}")
print(f" Command: {cmd}")
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
if result.returncode == 0:
print(f"✓ Success")
if result.stdout.strip():
print(f" {result.stdout.strip()}")
return True
else:
print(f"✗ Failed")
if result.stderr:
print(f" Error: {result.stderr}")
if check:
sys.exit(1)
return False
def check_database_connection():
"""Check if we can connect to the database."""
print("\nChecking database connection...")
try:
from app import app, db
with app.app_context():
# Try a simple query
db.engine.execute("SELECT 1")
print("✓ Database connection successful")
return True
except Exception as e:
print(f"✗ Database connection failed: {e}")
return False
def check_existing_tables():
"""Check what tables exist in the database."""
print("\nChecking existing tables...")
try:
from app import app, db
with app.app_context():
# Get table names
inspector = db.inspect(db.engine)
tables = inspector.get_table_names()
if tables:
print(f"✓ Found {len(tables)} existing tables:")
for table in sorted(tables):
if table != 'alembic_version':
print(f" - {table}")
return True
else:
print(" No tables found (empty database)")
return False
except Exception as e:
print(f"✗ Error checking tables: {e}")
return False
def main():
"""Main initialization function."""
print("=== Flask-Migrate Docker Initialization ===")
print("\nThis script will set up Flask-Migrate for your Docker deployment.")
print("It uses your CURRENT schema as the baseline (no Git required).")
# Set environment
os.environ['FLASK_APP'] = 'app.py'
# Check prerequisites
if not check_database_connection():
print("\n❌ Cannot connect to database. Check your DATABASE_URL.")
return 1
has_tables = check_existing_tables()
print("\n" + "="*50)
if has_tables:
print("SCENARIO: Existing database with tables")
print("="*50)
print("\nYour database already has tables. We'll create a baseline")
print("migration and mark it as already applied.")
else:
print("SCENARIO: Empty database")
print("="*50)
print("\nYour database is empty. We'll create a baseline")
print("migration that can be applied to create all tables.")
response = input("\nContinue? (y/N): ")
if response.lower() != 'y':
print("Aborting...")
return 1
# Step 1: Clean up any existing migrations
if os.path.exists('migrations'):
print("\n⚠️ Removing existing migrations directory...")
shutil.rmtree('migrations')
# Step 2: Initialize Flask-Migrate
print("\nInitializing Flask-Migrate...")
if not run_command("flask db init", "Creating migrations directory"):
return 1
    # Step 3: Create baseline migration
    print("\nCreating baseline migration from current models...")
    baseline_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    if not run_command(
        f'flask db migrate -m "Docker baseline migration - {baseline_date}"',
        "Generating migration"
    ):
        return 1
    # Step 4: Add documentation to the migration
    print("\nDocumenting the migration...")
    try:
        import glob
        migration_files = glob.glob('migrations/versions/*.py')
        if migration_files:
            latest = max(migration_files, key=os.path.getctime)
            with open(latest, 'r') as f:
                content = f.read()
            note = f'''"""DOCKER BASELINE MIGRATION
Generated: {baseline_date}
This migration represents the current state of your models.
It serves as the baseline for all future migrations.
For existing databases with tables:
    flask db stamp head  # Mark as current without running
For new empty databases:
    flask db upgrade  # Create all tables
DO NOT MODIFY THIS MIGRATION
"""
'''
            with open(latest, 'w') as f:
                f.write(note + content)
            print(f"✓ Documented {os.path.basename(latest)}")
    except Exception as e:
        print(f"⚠️ Could not document migration: {e}")
    # Step 5: Handle based on database state
    print("\n" + "="*50)
    print("NEXT STEPS")
    print("="*50)
    if has_tables:
        print("\nYour database already has tables. Run this command to")
        print("mark it as up-to-date WITHOUT running the migration:")
        print("\n flask db stamp head")
        print("\nThen you can create new migrations normally:")
        print(" flask db migrate -m 'Add new feature'")
        print(" flask db upgrade")
    else:
        print("\nYour database is empty. Run this command to")
        print("create all tables from the baseline migration:")
        print("\n flask db upgrade")
        print("\nThen you can create new migrations normally:")
        print(" flask db migrate -m 'Add new feature'")
        print(" flask db upgrade")
    # Create a helper script
    helper_content = f"""#!/bin/bash
# Flask-Migrate helper for Docker
# Generated: {baseline_date}
export FLASK_APP=app.py
case "$1" in
    status)
        echo "Current migration status:"
        flask db current
        ;;
    apply)
        echo "Applying pending migrations..."
        flask db upgrade
        ;;
    create)
        if [ -z "$2" ]; then
            echo "Usage: $0 create 'Migration message'"
            exit 1
        fi
        echo "Creating new migration: $2"
        flask db migrate -m "$2"
        echo "Review the migration, then run: $0 apply"
        ;;
    mark-current)
        echo "Marking database as current (no changes)..."
        flask db stamp head
        ;;
    *)
        echo "Flask-Migrate Docker Helper"
        echo "Usage:"
        echo " $0 status - Show current migration status"
        echo " $0 apply - Apply pending migrations"
        echo " $0 create 'msg' - Create new migration"
        echo " $0 mark-current - Mark DB as current (existing DBs)"
        ;;
esac
"""
    with open('migrate.sh', 'w') as f:
        f.write(helper_content)
    os.chmod('migrate.sh', 0o755)
    print("\n✓ Created migrate.sh helper script")
    print("\n✨ Initialization complete!")
    return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,34 +0,0 @@
# fly.toml app configuration file generated for timetrack-2whuug on 2025-07-01T09:27:14Z
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#
app = 'timetrack-2whuug'
primary_region = 'fra'
[build]
[http_service]
internal_port = 5000
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 0
processes = ['app']
[env]
MAIL_SERVER = "smtp.ionos.de"
MAIL_PORT = 587
MAIL_USE_TLS = 1
MAIL_USERNAME = "jens@luedicke.cloud"
MAIL_DEFAULT_SENDER = "jens@luedicke.cloud"
[mounts]
source = "timetrack_data"
destination = "/data"
[[vm]]
cpu_kind = 'shared'
cpus = 1
memory_mb = 256

27
init_db.py Normal file
View File

@@ -0,0 +1,27 @@
#!/usr/bin/env python
"""Initialize the database migrations manually"""
import os
import sys
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate, init
# Create a minimal Flask app
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', 'sqlite:////data/timetrack.db')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# Create db and migrate instances
db = SQLAlchemy(app)
migrate = Migrate(app, db)
if __name__ == '__main__':
    with app.app_context():
        print("Initializing migration repository...")
        try:
            init()
            print("Migration repository initialized successfully!")
        except Exception as e:
            print(f"Error: {e}")
            sys.exit(1)

1
migrations/README Normal file
View File

@@ -0,0 +1 @@
Single-database configuration for Flask.

View File

@@ -1,20 +0,0 @@
-- Migration to add CASCADE delete to note_link foreign keys
-- This ensures that when a note is deleted, all links to/from it are also deleted
-- For PostgreSQL
-- Drop existing foreign key constraints
ALTER TABLE note_link DROP CONSTRAINT IF EXISTS note_link_source_note_id_fkey;
ALTER TABLE note_link DROP CONSTRAINT IF EXISTS note_link_target_note_id_fkey;
-- Add new foreign key constraints with CASCADE
ALTER TABLE note_link
ADD CONSTRAINT note_link_source_note_id_fkey
FOREIGN KEY (source_note_id)
REFERENCES note(id)
ON DELETE CASCADE;
ALTER TABLE note_link
ADD CONSTRAINT note_link_target_note_id_fkey
FOREIGN KEY (target_note_id)
REFERENCES note(id)
ON DELETE CASCADE;


@@ -1,25 +0,0 @@
-- SQLite migration for cascade delete on note_link
-- SQLite doesn't support ALTER TABLE for foreign keys, so we need to recreate the table
-- Create new table with CASCADE delete
CREATE TABLE note_link_new (
id INTEGER PRIMARY KEY,
source_note_id INTEGER NOT NULL,
target_note_id INTEGER NOT NULL,
link_type VARCHAR(50) DEFAULT 'related',
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
created_by_id INTEGER NOT NULL,
FOREIGN KEY (source_note_id) REFERENCES note(id) ON DELETE CASCADE,
FOREIGN KEY (target_note_id) REFERENCES note(id) ON DELETE CASCADE,
FOREIGN KEY (created_by_id) REFERENCES user(id),
UNIQUE(source_note_id, target_note_id)
);
-- Copy data from old table
INSERT INTO note_link_new SELECT * FROM note_link;
-- Drop old table
DROP TABLE note_link;
-- Rename new table
ALTER TABLE note_link_new RENAME TO note_link;
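A sketch of applying this script with the sqlite3 CLI; the file name and database path are assumptions based on the defaults used elsewhere in this commit:

```bash
# Foreign-key enforcement is OFF by default in the sqlite3 shell, so the
# DROP TABLE / RENAME sequence above runs without constraint errors.
sqlite3 /data/timetrack.db < note_link_cascade_sqlite.sql          # hypothetical file name
sqlite3 /data/timetrack.db "PRAGMA foreign_key_list(note_link);"   # confirm ON DELETE CASCADE
```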


@@ -1,5 +0,0 @@
-- Add folder column to notes table
ALTER TABLE note ADD COLUMN IF NOT EXISTS folder VARCHAR(100);
-- Create an index on folder for faster filtering
CREATE INDEX IF NOT EXISTS idx_note_folder ON note(folder) WHERE folder IS NOT NULL;


@@ -1,17 +0,0 @@
-- Create note_folder table for tracking folders independently of notes
CREATE TABLE IF NOT EXISTS note_folder (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
path VARCHAR(500) NOT NULL,
parent_path VARCHAR(500),
description TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
created_by_id INTEGER NOT NULL REFERENCES "user"(id),
company_id INTEGER NOT NULL REFERENCES company(id),
CONSTRAINT uq_folder_path_company UNIQUE (path, company_id)
);
-- Create indexes for better performance
CREATE INDEX IF NOT EXISTS idx_note_folder_company ON note_folder(company_id);
CREATE INDEX IF NOT EXISTS idx_note_folder_parent_path ON note_folder(parent_path);
CREATE INDEX IF NOT EXISTS idx_note_folder_created_by ON note_folder(created_by_id);


@@ -1,21 +0,0 @@
-- Add note_share table for public note sharing functionality
CREATE TABLE IF NOT EXISTS note_share (
id SERIAL PRIMARY KEY,
note_id INTEGER NOT NULL REFERENCES note(id) ON DELETE CASCADE,
token VARCHAR(64) UNIQUE NOT NULL,
expires_at TIMESTAMP,
password_hash VARCHAR(255),
view_count INTEGER DEFAULT 0,
max_views INTEGER,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
created_by_id INTEGER NOT NULL REFERENCES "user"(id),
last_accessed_at TIMESTAMP
);
-- Create indexes for better performance
CREATE INDEX IF NOT EXISTS idx_note_share_token ON note_share(token);
CREATE INDEX IF NOT EXISTS idx_note_share_note_id ON note_share(note_id);
CREATE INDEX IF NOT EXISTS idx_note_share_created_by ON note_share(created_by_id);
-- Add comment
COMMENT ON TABLE note_share IS 'Public sharing links for notes with optional password protection and view limits';


@@ -1,20 +0,0 @@
-- Add time formatting and rounding preferences to user_preferences table
-- These columns support user-specific time display and rounding settings
-- Add time formatting preference (24h vs 12h)
ALTER TABLE user_preferences
ADD COLUMN IF NOT EXISTS time_format_24h BOOLEAN DEFAULT TRUE;
-- Add time rounding preference (0, 5, 10, 15, 30, 60 minutes)
ALTER TABLE user_preferences
ADD COLUMN IF NOT EXISTS time_rounding_minutes INTEGER DEFAULT 0;
-- Add rounding direction preference (false=round down, true=round to nearest)
ALTER TABLE user_preferences
ADD COLUMN IF NOT EXISTS round_to_nearest BOOLEAN DEFAULT FALSE;
-- Update existing date_format column default if needed
-- (The column should already exist, but let's ensure the default is correct)
UPDATE user_preferences
SET date_format = 'ISO'
WHERE date_format = 'YYYY-MM-DD' OR date_format IS NULL;

50
migrations/alembic.ini Normal file

@@ -0,0 +1,50 @@
# A generic, single database configuration.
[alembic]
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic,flask_migrate
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[logger_flask_migrate]
level = INFO
handlers =
qualname = flask_migrate
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

91
migrations/env.py Normal file

@@ -0,0 +1,91 @@
from __future__ import with_statement
import logging
from logging.config import fileConfig
from flask import current_app
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
config.set_main_option(
'sqlalchemy.url',
str(current_app.extensions['migrate'].db.get_engine().url).replace(
'%', '%%'))
target_metadata = current_app.extensions['migrate'].db.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url, target_metadata=target_metadata, literal_binds=True
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
# this callback is used to prevent an auto-migration from being generated
# when there are no changes to the schema
# reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
def process_revision_directives(context, revision, directives):
if getattr(config.cmd_opts, 'autogenerate', False):
script = directives[0]
if script.upgrade_ops.is_empty():
directives[:] = []
logger.info('No changes in schema detected.')
connectable = current_app.extensions['migrate'].db.get_engine()
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
process_revision_directives=process_revision_directives,
**current_app.extensions['migrate'].configure_args
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
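Because of the process_revision_directives hook above, an autogenerate run against an unchanged schema produces no revision file. A quick sketch of what that looks like:

```bash
export FLASK_APP=app.py
flask db migrate -m "noop"
# With no model changes, env.py logs "No changes in schema detected."
# and nothing new is written under migrations/versions/.
```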


@@ -1,24 +0,0 @@
# Database Migration Scripts - In Order of Execution
## Phase 1: SQLite Schema Updates (Run first)
01_migrate_db.py - Update SQLite schema with all necessary columns and tables
## Phase 2: Data Migration (Run after SQLite updates)
02_migrate_sqlite_to_postgres.py - Migrate data from updated SQLite to PostgreSQL
## Phase 3: PostgreSQL Schema Migrations (Run after data migration)
03_add_dashboard_columns.py - Add missing columns to user_dashboard table
04_add_user_preferences_columns.py - Add missing columns to user_preferences table
05_fix_task_status_enum.py - Fix task status enum values in database
06_add_archived_status.py - Add ARCHIVED status to task_status enum
07_fix_company_work_config_columns.py - Fix company work config column names
08_fix_work_region_enum.py - Fix work region enum values
09_add_germany_to_workregion.py - Add GERMANY back to work_region enum
10_add_company_settings_columns.py - Add missing columns to company_settings table
## Phase 4: Code Migrations (Run after all schema migrations)
11_fix_company_work_config_usage.py - Update code references to CompanyWorkConfig fields
12_fix_task_status_usage.py - Update code references to TaskStatus enum values
13_fix_work_region_usage.py - Update code references to WorkRegion enum values
14_fix_removed_fields.py - Handle removed fields in code
15_repair_user_roles.py - Fix user roles from string to enum values
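The numeric prefixes encode the intended execution order, so the legacy scripts can be driven by a simple loop. A hedged sketch, assuming they live under migrations/ and read the same DATABASE_URL/SQLITE_PATH variables as the scripts below:

```bash
export DATABASE_URL="postgresql://timetrack:timetrack123@localhost:5432/timetrack"  # example value
export SQLITE_PATH=/data/timetrack.db
for script in migrations/0[1-9]_*.py migrations/1[0-5]_*.py; do
    echo "== Running $script =="
    python "$script" || { echo "Failed at $script"; exit 1; }
done
```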


@@ -1,79 +0,0 @@
#!/usr/bin/env python3
"""
Summary of all model migrations to be performed
"""
import os
from pathlib import Path
def print_section(title, items):
"""Print a formatted section"""
print(f"\n{'='*60}")
print(f"📌 {title}")
print('='*60)
for item in items:
print(f" {item}")
def main():
print("🔍 Model Migration Summary")
print("="*60)
print("\nThis will update your codebase to match the refactored models.")
# CompanyWorkConfig changes
print_section("CompanyWorkConfig Field Changes", [
"✓ work_hours_per_day → standard_hours_per_day",
"✓ mandatory_break_minutes → break_duration_minutes",
"✓ break_threshold_hours → break_after_hours",
"✓ region → work_region",
"✗ REMOVED: additional_break_minutes",
"✗ REMOVED: additional_break_threshold_hours",
"✗ REMOVED: region_name (use work_region.value)",
"✗ REMOVED: created_by_id",
"+ ADDED: standard_hours_per_week, overtime_enabled, overtime_rate, etc."
])
# TaskStatus changes
print_section("TaskStatus Enum Changes", [
"✓ NOT_STARTED → TODO",
"✓ COMPLETED → DONE",
"✓ ON_HOLD → IN_REVIEW",
"+ KEPT: ARCHIVED (separate from CANCELLED)"
])
# WorkRegion changes
print_section("WorkRegion Enum Changes", [
"✓ UNITED_STATES → USA",
"✓ UNITED_KINGDOM → UK",
"✓ FRANCE → EU",
"✓ EUROPEAN_UNION → EU",
"✓ CUSTOM → OTHER",
"! KEPT: GERMANY (specific labor laws)"
])
# Files to be modified
print_section("Files That Will Be Modified", [
"Python files: app.py, routes/*.py",
"Templates: admin_company.html, admin_work_policies.html, config.html",
"JavaScript: static/js/*.js (for task status)",
"Removed field references will be commented out"
])
# Safety notes
print_section("⚠️ Important Notes", [
"BACKUP your code before running migrations",
"Removed fields will be commented with # REMOVED:",
"Review all changes after migration",
"Test thoroughly, especially:",
" - Company work policy configuration",
" - Task status transitions",
" - Regional preset selection",
"Consider implementing audit logging for created_by tracking"
])
print("\n" + "="*60)
print("🎯 To run all migrations: python migrations/run_all_migrations.py")
print("🎯 To run individually: python migrations/01_fix_company_work_config_usage.py")
print("="*60)
if __name__ == "__main__":
main()

File diff suppressed because it is too large.


@@ -1,408 +0,0 @@
#!/usr/bin/env python3
"""
SQLite to PostgreSQL Migration Script for TimeTrack
This script migrates data from SQLite to PostgreSQL database.
"""
import sqlite3
import psycopg2
import os
import sys
import logging
from datetime import datetime
from psycopg2.extras import RealDictCursor
import json
# Add parent directory to path to import app
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('migration.log'),
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
class SQLiteToPostgresMigration:
def __init__(self, sqlite_path, postgres_url):
self.sqlite_path = sqlite_path
self.postgres_url = postgres_url
self.sqlite_conn = None
self.postgres_conn = None
self.migration_stats = {}
def connect_databases(self):
"""Connect to both SQLite and PostgreSQL databases"""
try:
# Connect to SQLite
self.sqlite_conn = sqlite3.connect(self.sqlite_path)
self.sqlite_conn.row_factory = sqlite3.Row
logger.info(f"Connected to SQLite database: {self.sqlite_path}")
# Connect to PostgreSQL
self.postgres_conn = psycopg2.connect(self.postgres_url)
self.postgres_conn.autocommit = False
logger.info("Connected to PostgreSQL database")
return True
except Exception as e:
logger.error(f"Failed to connect to databases: {e}")
return False
def close_connections(self):
"""Close database connections"""
if self.sqlite_conn:
self.sqlite_conn.close()
if self.postgres_conn:
self.postgres_conn.close()
def backup_postgres(self):
"""Create a backup of existing PostgreSQL data"""
try:
with self.postgres_conn.cursor() as cursor:
# Check if tables exist and have data
cursor.execute("""
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public'
""")
tables = cursor.fetchall()
if tables:
backup_file = f"postgres_backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}.sql"
logger.info(f"Creating PostgreSQL backup: {backup_file}")
# Use pg_dump for backup
os.system(f"pg_dump '{self.postgres_url}' > {backup_file}")
logger.info(f"Backup created: {backup_file}")
return backup_file
else:
logger.info("No existing PostgreSQL tables found, skipping backup")
return None
except Exception as e:
logger.error(f"Failed to create backup: {e}")
return None
def check_sqlite_database(self):
"""Check if SQLite database exists and has data"""
if not os.path.exists(self.sqlite_path):
logger.error(f"SQLite database not found: {self.sqlite_path}")
return False
try:
cursor = self.sqlite_conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = cursor.fetchall()
if not tables:
logger.info("SQLite database is empty, nothing to migrate")
return False
logger.info(f"Found {len(tables)} tables in SQLite database")
return True
except Exception as e:
logger.error(f"Error checking SQLite database: {e}")
return False
def create_postgres_tables(self, clear_existing=False):
"""Create PostgreSQL tables using Flask-SQLAlchemy models"""
try:
# Import Flask app and create tables
from app import app, db
with app.app_context():
# Set the database URI to PostgreSQL
app.config['SQLALCHEMY_DATABASE_URI'] = self.postgres_url
if clear_existing:
logger.info("Clearing existing PostgreSQL data...")
db.drop_all()
logger.info("Dropped all existing tables")
# Create all tables
db.create_all()
logger.info("Created PostgreSQL tables")
return True
except Exception as e:
logger.error(f"Failed to create PostgreSQL tables: {e}")
return False
def migrate_table_data(self, table_name, column_mapping=None):
"""Migrate data from SQLite table to PostgreSQL"""
try:
sqlite_cursor = self.sqlite_conn.cursor()
postgres_cursor = self.postgres_conn.cursor()
# Check if table exists in SQLite
sqlite_cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table_name,))
if not sqlite_cursor.fetchone():
logger.info(f"Table {table_name} does not exist in SQLite, skipping...")
self.migration_stats[table_name] = 0
return True
# Get data from SQLite
sqlite_cursor.execute(f"SELECT * FROM {table_name}")
rows = sqlite_cursor.fetchall()
if not rows:
logger.info(f"No data found in table: {table_name}")
self.migration_stats[table_name] = 0
return True
# Get column names
column_names = [description[0] for description in sqlite_cursor.description]
# Apply column mapping if provided
if column_mapping:
column_names = [column_mapping.get(col, col) for col in column_names]
# Prepare insert statement
placeholders = ', '.join(['%s'] * len(column_names))
columns = ', '.join([f'"{col}"' for col in column_names]) # Quote column names
insert_sql = f'INSERT INTO "{table_name}" ({columns}) VALUES ({placeholders})' # Quote table name
# Convert rows to list of tuples
data_rows = []
for row in rows:
data_row = []
for i, value in enumerate(row):
col_name = column_names[i]
# Handle special data type conversions
if value is None:
data_row.append(None)
elif isinstance(value, str) and value.startswith('{"') and value.endswith('}'):
# Handle JSON strings
data_row.append(value)
elif (col_name.startswith('is_') or col_name.endswith('_enabled') or col_name in ['is_paused']) and isinstance(value, int):
# Convert integer boolean to actual boolean for PostgreSQL
data_row.append(bool(value))
elif isinstance(value, str) and value == '':
# Convert empty strings to None for PostgreSQL
data_row.append(None)
else:
data_row.append(value)
data_rows.append(tuple(data_row))
# Check if we should clear existing data first (for tables with unique constraints)
if table_name in ['company', 'team', 'user']:
postgres_cursor.execute(f'SELECT COUNT(*) FROM "{table_name}"')
existing_count = postgres_cursor.fetchone()[0]
if existing_count > 0:
logger.warning(f"Table {table_name} already has {existing_count} rows. Skipping to avoid duplicates.")
self.migration_stats[table_name] = 0
return True
# Insert data in batches
batch_size = 1000
for i in range(0, len(data_rows), batch_size):
batch = data_rows[i:i + batch_size]
try:
postgres_cursor.executemany(insert_sql, batch)
self.postgres_conn.commit()
except Exception as batch_error:
logger.error(f"Error inserting batch {i//batch_size + 1} for table {table_name}: {batch_error}")
# Try inserting rows one by one to identify problematic rows
self.postgres_conn.rollback()
for j, row in enumerate(batch):
try:
postgres_cursor.execute(insert_sql, row)
self.postgres_conn.commit()
except Exception as row_error:
logger.error(f"Error inserting row {i + j} in table {table_name}: {row_error}")
logger.error(f"Problematic row data: {row}")
self.postgres_conn.rollback()
logger.info(f"Migrated {len(rows)} rows from table: {table_name}")
self.migration_stats[table_name] = len(rows)
return True
except Exception as e:
logger.error(f"Failed to migrate table {table_name}: {e}")
self.postgres_conn.rollback()
return False
def update_sequences(self):
"""Update PostgreSQL sequences after data migration"""
try:
with self.postgres_conn.cursor() as cursor:
# Get all sequences - fix the query to properly extract sequence names
cursor.execute("""
SELECT
pg_get_serial_sequence(table_name, column_name) as sequence_name,
column_name,
table_name
FROM information_schema.columns
WHERE column_default LIKE 'nextval%'
AND table_schema = 'public'
""")
sequences = cursor.fetchall()
for seq_name, col_name, table_name in sequences:
if seq_name is None:
continue
# Get the maximum value for each sequence
cursor.execute(f'SELECT MAX("{col_name}") FROM "{table_name}"')
max_val = cursor.fetchone()[0]
if max_val is not None:
# Update sequence to start from max_val + 1 - don't quote sequence name from pg_get_serial_sequence
cursor.execute(f'ALTER SEQUENCE {seq_name} RESTART WITH {max_val + 1}')
logger.info(f"Updated sequence {seq_name} to start from {max_val + 1}")
self.postgres_conn.commit()
logger.info("Updated PostgreSQL sequences")
return True
except Exception as e:
logger.error(f"Failed to update sequences: {e}")
self.postgres_conn.rollback()
return False
def migrate_all_data(self):
"""Migrate all data from SQLite to PostgreSQL"""
# Define table migration order (respecting foreign key constraints)
migration_order = [
'company',
'team',
'project_category',
'user',
'project',
'task',
'sub_task',
'time_entry',
'work_config',
'company_work_config',
'user_preferences',
'system_settings'
]
for table_name in migration_order:
if not self.migrate_table_data(table_name):
logger.error(f"Migration failed at table: {table_name}")
return False
# Update sequences after all data is migrated
if not self.update_sequences():
logger.error("Failed to update sequences")
return False
return True
def verify_migration(self):
"""Verify that migration was successful"""
try:
sqlite_cursor = self.sqlite_conn.cursor()
postgres_cursor = self.postgres_conn.cursor()
# Get table names from SQLite
sqlite_cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
sqlite_tables = [row[0] for row in sqlite_cursor.fetchall()]
verification_results = {}
for table_name in sqlite_tables:
if table_name == 'sqlite_sequence':
continue
# Count rows in SQLite
sqlite_cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
sqlite_count = sqlite_cursor.fetchone()[0]
# Count rows in PostgreSQL
postgres_cursor.execute(f'SELECT COUNT(*) FROM "{table_name}"')
postgres_count = postgres_cursor.fetchone()[0]
verification_results[table_name] = {
'sqlite_count': sqlite_count,
'postgres_count': postgres_count,
'match': sqlite_count == postgres_count
}
if sqlite_count == postgres_count:
logger.info(f"✓ Table {table_name}: {sqlite_count} rows migrated successfully")
else:
logger.error(f"✗ Table {table_name}: SQLite={sqlite_count}, PostgreSQL={postgres_count}")
return verification_results
except Exception as e:
logger.error(f"Verification failed: {e}")
return None
def run_migration(self, clear_existing=False):
"""Run the complete migration process"""
logger.info("Starting SQLite to PostgreSQL migration...")
# Connect to databases
if not self.connect_databases():
return False
try:
# Check SQLite database
if not self.check_sqlite_database():
return False
# Create backup
backup_file = self.backup_postgres()
# Create PostgreSQL tables
if not self.create_postgres_tables(clear_existing=clear_existing):
return False
# Migrate data
if not self.migrate_all_data():
return False
# Verify migration
verification = self.verify_migration()
if verification:
logger.info("Migration verification completed")
for table, stats in verification.items():
if not stats['match']:
logger.error(f"Migration verification failed for table: {table}")
return False
logger.info("Migration completed successfully!")
logger.info(f"Migration statistics: {self.migration_stats}")
return True
except Exception as e:
logger.error(f"Migration failed: {e}")
return False
finally:
self.close_connections()
def main():
"""Main migration function"""
import argparse
parser = argparse.ArgumentParser(description='Migrate SQLite to PostgreSQL')
parser.add_argument('--clear-existing', action='store_true',
help='Clear existing PostgreSQL data before migration')
parser.add_argument('--sqlite-path', default=os.environ.get('SQLITE_PATH', '/data/timetrack.db'),
help='Path to SQLite database')
args = parser.parse_args()
# Get database paths from environment variables
sqlite_path = args.sqlite_path
postgres_url = os.environ.get('DATABASE_URL')
if not postgres_url:
logger.error("DATABASE_URL environment variable not set")
return 1
# Check if SQLite database exists
if not os.path.exists(sqlite_path):
logger.info(f"SQLite database not found at {sqlite_path}, skipping migration")
return 0
# Run migration
migration = SQLiteToPostgresMigration(sqlite_path, postgres_url)
success = migration.run_migration(clear_existing=args.clear_existing)
return 0 if success else 1
if __name__ == "__main__":
sys.exit(main())
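An invocation sketch for this script; the file name comes from the ordered list above, and the connection string is the same illustrative default used throughout:

```bash
export DATABASE_URL="postgresql://timetrack:timetrack123@localhost:5432/timetrack"  # example value
python migrations/02_migrate_sqlite_to_postgres.py \
    --sqlite-path /data/timetrack.db \
    --clear-existing          # drop_all()/create_all() on PostgreSQL before copying data
tail migration.log            # per-table row counts and any failed rows are logged here
```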


@@ -1,361 +0,0 @@
#!/usr/bin/env python3
"""
Fixed SQLite to PostgreSQL Migration Script for TimeTrack
This script properly handles empty SQLite databases and column mapping issues.
"""
import sqlite3
import psycopg2
import os
import sys
import logging
from datetime import datetime
from psycopg2.extras import RealDictCursor
import json
# Add parent directory to path to import app
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('migration.log'),
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
class SQLiteToPostgresMigration:
def __init__(self, sqlite_path, postgres_url):
self.sqlite_path = sqlite_path
self.postgres_url = postgres_url
self.sqlite_conn = None
self.postgres_conn = None
self.migration_stats = {}
# Column mapping for SQLite to PostgreSQL
self.column_mapping = {
'project': {
# Map SQLite columns to PostgreSQL columns
# Ensure company_id is properly mapped
'company_id': 'company_id',
'user_id': 'company_id' # Map user_id to company_id if needed
}
}
def connect_databases(self):
"""Connect to both SQLite and PostgreSQL databases"""
try:
# Connect to SQLite
self.sqlite_conn = sqlite3.connect(self.sqlite_path)
self.sqlite_conn.row_factory = sqlite3.Row
logger.info(f"Connected to SQLite database: {self.sqlite_path}")
# Connect to PostgreSQL
self.postgres_conn = psycopg2.connect(self.postgres_url)
self.postgres_conn.autocommit = False
logger.info("Connected to PostgreSQL database")
return True
except Exception as e:
logger.error(f"Failed to connect to databases: {e}")
return False
def close_connections(self):
"""Close database connections"""
if self.sqlite_conn:
self.sqlite_conn.close()
if self.postgres_conn:
self.postgres_conn.close()
def check_sqlite_database(self):
"""Check if SQLite database exists and has data"""
if not os.path.exists(self.sqlite_path):
logger.error(f"SQLite database not found: {self.sqlite_path}")
return False
try:
cursor = self.sqlite_conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
tables = cursor.fetchall()
if not tables:
logger.info("SQLite database is empty, nothing to migrate")
return False
logger.info(f"Found {len(tables)} tables in SQLite database")
for table in tables:
logger.info(f" - {table[0]}")
return True
except Exception as e:
logger.error(f"Error checking SQLite database: {e}")
return False
def clear_postgres_data(self):
"""Clear existing data from PostgreSQL tables that will be migrated"""
try:
with self.postgres_conn.cursor() as cursor:
# Tables to clear in reverse order of dependencies
tables_to_clear = [
'time_entry',
'sub_task',
'task',
'project',
'user',
'team',
'company',
'work_config',
'system_settings'
]
for table in tables_to_clear:
try:
cursor.execute(f'DELETE FROM "{table}"')
logger.info(f"Cleared table: {table}")
except Exception as e:
logger.warning(f"Could not clear table {table}: {e}")
self.postgres_conn.rollback()
self.postgres_conn.commit()
return True
except Exception as e:
logger.error(f"Failed to clear PostgreSQL data: {e}")
self.postgres_conn.rollback()
return False
def migrate_table_data(self, table_name):
"""Migrate data from SQLite table to PostgreSQL"""
try:
sqlite_cursor = self.sqlite_conn.cursor()
postgres_cursor = self.postgres_conn.cursor()
# Check if table exists in SQLite
sqlite_cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table_name,))
if not sqlite_cursor.fetchone():
logger.info(f"Table {table_name} does not exist in SQLite, skipping...")
self.migration_stats[table_name] = 0
return True
# Get data from SQLite
sqlite_cursor.execute(f"SELECT * FROM {table_name}")
rows = sqlite_cursor.fetchall()
if not rows:
logger.info(f"No data found in table: {table_name}")
self.migration_stats[table_name] = 0
return True
# Get column names from SQLite
column_names = [description[0] for description in sqlite_cursor.description]
logger.info(f"SQLite columns for {table_name}: {column_names}")
# Get PostgreSQL column names
postgres_cursor.execute(f"""
SELECT column_name
FROM information_schema.columns
WHERE table_name = %s
ORDER BY ordinal_position
""", (table_name,))
pg_columns = [row[0] for row in postgres_cursor.fetchall()]
logger.info(f"PostgreSQL columns for {table_name}: {pg_columns}")
# For project table, ensure company_id is properly handled
if table_name == 'project':
# Check if company_id exists in the data
for i, row in enumerate(rows):
row_dict = dict(zip(column_names, row))
if 'company_id' not in row_dict or row_dict['company_id'] is None:
# If user_id exists, use it as company_id
if 'user_id' in row_dict and row_dict['user_id'] is not None:
logger.info(f"Mapping user_id {row_dict['user_id']} to company_id for project {row_dict.get('id')}")
# Update the row data
row_list = list(row)
if 'company_id' in column_names:
company_id_idx = column_names.index('company_id')
user_id_idx = column_names.index('user_id')
row_list[company_id_idx] = row_list[user_id_idx]
else:
# Add company_id column
column_names.append('company_id')
user_id_idx = column_names.index('user_id')
row_list.append(row[user_id_idx])
rows[i] = tuple(row_list)
# Filter columns to only those that exist in PostgreSQL
valid_columns = [col for col in column_names if col in pg_columns]
column_indices = [column_names.index(col) for col in valid_columns]
# Prepare insert statement
placeholders = ', '.join(['%s'] * len(valid_columns))
columns = ', '.join([f'"{col}"' for col in valid_columns])
insert_sql = f'INSERT INTO "{table_name}" ({columns}) VALUES ({placeholders})'
# Convert rows to list of tuples with only valid columns
data_rows = []
for row in rows:
data_row = []
for i in column_indices:
value = row[i]
col_name = valid_columns[column_indices.index(i)]
# Handle special data type conversions
if value is None:
data_row.append(None)
elif isinstance(value, str) and value.startswith('{"') and value.endswith('}'):
# Handle JSON strings
data_row.append(value)
elif (col_name.startswith('is_') or col_name.endswith('_enabled') or col_name in ['is_paused']) and isinstance(value, int):
# Convert integer boolean to actual boolean for PostgreSQL
data_row.append(bool(value))
elif isinstance(value, str) and value == '':
# Convert empty strings to None for PostgreSQL
data_row.append(None)
else:
data_row.append(value)
data_rows.append(tuple(data_row))
# Insert data one by one to better handle errors
successful_inserts = 0
for i, row in enumerate(data_rows):
try:
postgres_cursor.execute(insert_sql, row)
self.postgres_conn.commit()
successful_inserts += 1
except Exception as row_error:
logger.error(f"Error inserting row {i} in table {table_name}: {row_error}")
logger.error(f"Problematic row data: {row}")
logger.error(f"Columns: {valid_columns}")
self.postgres_conn.rollback()
logger.info(f"Migrated {successful_inserts}/{len(rows)} rows from table: {table_name}")
self.migration_stats[table_name] = successful_inserts
return True
except Exception as e:
logger.error(f"Failed to migrate table {table_name}: {e}")
self.postgres_conn.rollback()
return False
def update_sequences(self):
"""Update PostgreSQL sequences after data migration"""
try:
with self.postgres_conn.cursor() as cursor:
# Get all sequences
cursor.execute("""
SELECT
pg_get_serial_sequence(table_name, column_name) as sequence_name,
column_name,
table_name
FROM information_schema.columns
WHERE column_default LIKE 'nextval%'
AND table_schema = 'public'
""")
sequences = cursor.fetchall()
for seq_name, col_name, table_name in sequences:
if seq_name is None:
continue
# Get the maximum value for each sequence
cursor.execute(f'SELECT MAX("{col_name}") FROM "{table_name}"')
max_val = cursor.fetchone()[0]
if max_val is not None:
# Update sequence to start from max_val + 1
cursor.execute(f'ALTER SEQUENCE {seq_name} RESTART WITH {max_val + 1}')
logger.info(f"Updated sequence {seq_name} to start from {max_val + 1}")
self.postgres_conn.commit()
logger.info("Updated PostgreSQL sequences")
return True
except Exception as e:
logger.error(f"Failed to update sequences: {e}")
self.postgres_conn.rollback()
return False
def run_migration(self, clear_existing=False):
"""Run the complete migration process"""
logger.info("Starting SQLite to PostgreSQL migration...")
# Connect to databases
if not self.connect_databases():
return False
try:
# Check SQLite database
if not self.check_sqlite_database():
logger.info("No data to migrate from SQLite")
return True
# Clear existing PostgreSQL data if requested
if clear_existing:
if not self.clear_postgres_data():
logger.warning("Failed to clear some PostgreSQL data, continuing anyway...")
# Define table migration order (respecting foreign key constraints)
migration_order = [
'company',
'team',
'project_category',
'user',
'project',
'task',
'sub_task',
'time_entry',
'work_config',
'company_work_config',
'user_preferences',
'system_settings'
]
# Migrate data
for table_name in migration_order:
if not self.migrate_table_data(table_name):
logger.error(f"Migration failed at table: {table_name}")
# Update sequences after all data is migrated
if not self.update_sequences():
logger.error("Failed to update sequences")
logger.info("Migration completed!")
logger.info(f"Migration statistics: {self.migration_stats}")
return True
except Exception as e:
logger.error(f"Migration failed: {e}")
return False
finally:
self.close_connections()
def main():
"""Main migration function"""
import argparse
parser = argparse.ArgumentParser(description='Migrate SQLite to PostgreSQL')
parser.add_argument('--clear-existing', action='store_true',
help='Clear existing PostgreSQL data before migration')
parser.add_argument('--sqlite-path', default=os.environ.get('SQLITE_PATH', '/data/timetrack.db'),
help='Path to SQLite database')
args = parser.parse_args()
# Get database paths from environment variables
sqlite_path = args.sqlite_path
postgres_url = os.environ.get('DATABASE_URL')
if not postgres_url:
logger.error("DATABASE_URL environment variable not set")
return 1
# Check if SQLite database exists
if not os.path.exists(sqlite_path):
logger.info(f"SQLite database not found at {sqlite_path}, skipping migration")
return 0
# Run migration
migration = SQLiteToPostgresMigration(sqlite_path, postgres_url)
success = migration.run_migration(clear_existing=args.clear_existing)
return 0 if success else 1
if __name__ == "__main__":
sys.exit(main())
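Beyond what the scripts log, a spot check by hand is straightforward. A rough sketch comparing one table between the two databases (table name and paths are examples):

```bash
sqlite3 /data/timetrack.db 'SELECT COUNT(*) FROM time_entry;'
psql "$DATABASE_URL" -c 'SELECT COUNT(*) FROM "time_entry";'
```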


@@ -1,104 +0,0 @@
#!/usr/bin/env python3
"""
Add missing columns to user_dashboard table
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def add_missing_columns():
"""Add missing columns to user_dashboard table"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
# Check if columns exist
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'user_dashboard'
AND column_name IN ('layout', 'is_locked', 'created_at', 'updated_at',
'name', 'is_default', 'layout_config', 'grid_columns',
'theme', 'auto_refresh')
""")
existing_columns = [row[0] for row in cur.fetchall()]
# Add missing columns
if 'name' not in existing_columns:
print("Adding 'name' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN name VARCHAR(100) DEFAULT 'My Dashboard'")
print("Added 'name' column")
if 'is_default' not in existing_columns:
print("Adding 'is_default' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN is_default BOOLEAN DEFAULT TRUE")
print("Added 'is_default' column")
if 'layout_config' not in existing_columns:
print("Adding 'layout_config' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN layout_config TEXT")
print("Added 'layout_config' column")
if 'grid_columns' not in existing_columns:
print("Adding 'grid_columns' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN grid_columns INTEGER DEFAULT 6")
print("Added 'grid_columns' column")
if 'theme' not in existing_columns:
print("Adding 'theme' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN theme VARCHAR(20) DEFAULT 'light'")
print("Added 'theme' column")
if 'auto_refresh' not in existing_columns:
print("Adding 'auto_refresh' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN auto_refresh INTEGER DEFAULT 300")
print("Added 'auto_refresh' column")
if 'layout' not in existing_columns:
print("Adding 'layout' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN layout JSON")
print("Added 'layout' column")
if 'is_locked' not in existing_columns:
print("Adding 'is_locked' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN is_locked BOOLEAN DEFAULT FALSE")
print("Added 'is_locked' column")
if 'created_at' not in existing_columns:
print("Adding 'created_at' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
print("Added 'created_at' column")
if 'updated_at' not in existing_columns:
print("Adding 'updated_at' column to user_dashboard table...")
cur.execute("ALTER TABLE user_dashboard ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
print("Added 'updated_at' column")
# Commit changes
conn.commit()
print("Dashboard columns migration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
add_missing_columns()
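The column scripts check information_schema before altering anything, so re-running them is safe. An example run, with script names taken from the ordered list above and an illustrative connection string:

```bash
export DATABASE_URL="postgresql://timetrack:timetrack123@localhost:5432/timetrack"  # example value
python migrations/03_add_dashboard_columns.py
python migrations/04_add_user_preferences_columns.py   # same pattern for user_preferences below
```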


@@ -1,159 +0,0 @@
#!/usr/bin/env python3
"""
Add missing columns to user_preferences table
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def add_missing_columns():
"""Add missing columns to user_preferences table"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
# Check if table exists
cur.execute("""
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_name = 'user_preferences'
)
""")
table_exists = cur.fetchone()[0]
if not table_exists:
print("user_preferences table does not exist. Creating it...")
cur.execute("""
CREATE TABLE user_preferences (
id SERIAL PRIMARY KEY,
user_id INTEGER UNIQUE NOT NULL REFERENCES "user"(id),
theme VARCHAR(20) DEFAULT 'light',
language VARCHAR(10) DEFAULT 'en',
timezone VARCHAR(50) DEFAULT 'UTC',
date_format VARCHAR(20) DEFAULT 'YYYY-MM-DD',
time_format VARCHAR(10) DEFAULT '24h',
email_notifications BOOLEAN DEFAULT TRUE,
email_daily_summary BOOLEAN DEFAULT FALSE,
email_weekly_summary BOOLEAN DEFAULT TRUE,
default_project_id INTEGER REFERENCES project(id),
timer_reminder_enabled BOOLEAN DEFAULT TRUE,
timer_reminder_interval INTEGER DEFAULT 60,
dashboard_layout JSON,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
print("Created user_preferences table")
else:
# Check which columns exist
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'user_preferences'
AND column_name IN ('theme', 'language', 'timezone', 'date_format',
'time_format', 'email_notifications', 'email_daily_summary',
'email_weekly_summary', 'default_project_id',
'timer_reminder_enabled', 'timer_reminder_interval',
'dashboard_layout', 'created_at', 'updated_at')
""")
existing_columns = [row[0] for row in cur.fetchall()]
# Add missing columns
if 'theme' not in existing_columns:
print("Adding 'theme' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN theme VARCHAR(20) DEFAULT 'light'")
print("Added 'theme' column")
if 'language' not in existing_columns:
print("Adding 'language' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN language VARCHAR(10) DEFAULT 'en'")
print("Added 'language' column")
if 'timezone' not in existing_columns:
print("Adding 'timezone' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN timezone VARCHAR(50) DEFAULT 'UTC'")
print("Added 'timezone' column")
if 'date_format' not in existing_columns:
print("Adding 'date_format' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN date_format VARCHAR(20) DEFAULT 'YYYY-MM-DD'")
print("Added 'date_format' column")
if 'time_format' not in existing_columns:
print("Adding 'time_format' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN time_format VARCHAR(10) DEFAULT '24h'")
print("Added 'time_format' column")
if 'email_notifications' not in existing_columns:
print("Adding 'email_notifications' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN email_notifications BOOLEAN DEFAULT TRUE")
print("Added 'email_notifications' column")
if 'email_daily_summary' not in existing_columns:
print("Adding 'email_daily_summary' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN email_daily_summary BOOLEAN DEFAULT FALSE")
print("Added 'email_daily_summary' column")
if 'email_weekly_summary' not in existing_columns:
print("Adding 'email_weekly_summary' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN email_weekly_summary BOOLEAN DEFAULT TRUE")
print("Added 'email_weekly_summary' column")
if 'default_project_id' not in existing_columns:
print("Adding 'default_project_id' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN default_project_id INTEGER REFERENCES project(id)")
print("Added 'default_project_id' column")
if 'timer_reminder_enabled' not in existing_columns:
print("Adding 'timer_reminder_enabled' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN timer_reminder_enabled BOOLEAN DEFAULT TRUE")
print("Added 'timer_reminder_enabled' column")
if 'timer_reminder_interval' not in existing_columns:
print("Adding 'timer_reminder_interval' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN timer_reminder_interval INTEGER DEFAULT 60")
print("Added 'timer_reminder_interval' column")
if 'dashboard_layout' not in existing_columns:
print("Adding 'dashboard_layout' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN dashboard_layout JSON")
print("Added 'dashboard_layout' column")
if 'created_at' not in existing_columns:
print("Adding 'created_at' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
print("Added 'created_at' column")
if 'updated_at' not in existing_columns:
print("Adding 'updated_at' column to user_preferences table...")
cur.execute("ALTER TABLE user_preferences ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
print("Added 'updated_at' column")
# Commit changes
conn.commit()
print("User preferences migration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
add_missing_columns()


@@ -1,244 +0,0 @@
#!/usr/bin/env python3
"""
Fix task status enum in the database to match Python enum
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def fix_task_status_enum():
"""Update task status enum in database"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
print("Starting task status enum migration...")
# First check if the enum already has the correct values
cur.execute("""
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = 'taskstatus')
ORDER BY enumsortorder
""")
current_values = [row[0] for row in cur.fetchall()]
print(f"Current enum values: {current_values}")
# Check if migration is needed
expected_values = ['TODO', 'IN_PROGRESS', 'IN_REVIEW', 'DONE', 'CANCELLED']
if all(val in current_values for val in expected_values):
print("Task status enum already has correct values. Skipping migration.")
return
# Check if task table exists and has a status column
cur.execute("""
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'task' AND column_name = 'status'
""")
if not cur.fetchone():
print("No task table or status column found. Skipping migration.")
return
# Check if temporary column already exists
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'task' AND column_name = 'status_temp'
""")
temp_exists = cur.fetchone() is not None
if not temp_exists:
# First, we need to create a temporary column to hold the data
print("1. Creating temporary column...")
cur.execute("ALTER TABLE task ADD COLUMN status_temp VARCHAR(50)")
cur.execute("ALTER TABLE sub_task ADD COLUMN status_temp VARCHAR(50)")
else:
print("1. Temporary column already exists...")
# Copy current status values to temp column with mapping
print("2. Copying and mapping status values...")
# First check what values actually exist in the database
cur.execute("SELECT DISTINCT status::text FROM task WHERE status IS NOT NULL")
existing_statuses = [row[0] for row in cur.fetchall()]
print(f" Existing status values in task table: {existing_statuses}")
# If no statuses exist, skip the mapping
if not existing_statuses:
print(" No existing status values to migrate")
else:
# Build dynamic mapping based on what exists
mapping_sql = "UPDATE task SET status_temp = CASE "
has_cases = False
if 'NOT_STARTED' in existing_statuses:
mapping_sql += "WHEN status::text = 'NOT_STARTED' THEN 'TODO' "
has_cases = True
if 'TODO' in existing_statuses:
mapping_sql += "WHEN status::text = 'TODO' THEN 'TODO' "
has_cases = True
if 'IN_PROGRESS' in existing_statuses:
mapping_sql += "WHEN status::text = 'IN_PROGRESS' THEN 'IN_PROGRESS' "
has_cases = True
if 'ON_HOLD' in existing_statuses:
mapping_sql += "WHEN status::text = 'ON_HOLD' THEN 'IN_REVIEW' "
has_cases = True
if 'IN_REVIEW' in existing_statuses:
mapping_sql += "WHEN status::text = 'IN_REVIEW' THEN 'IN_REVIEW' "
has_cases = True
if 'COMPLETED' in existing_statuses:
mapping_sql += "WHEN status::text = 'COMPLETED' THEN 'DONE' "
has_cases = True
if 'DONE' in existing_statuses:
mapping_sql += "WHEN status::text = 'DONE' THEN 'DONE' "
has_cases = True
if 'CANCELLED' in existing_statuses:
mapping_sql += "WHEN status::text = 'CANCELLED' THEN 'CANCELLED' "
has_cases = True
if 'ARCHIVED' in existing_statuses:
mapping_sql += "WHEN status::text = 'ARCHIVED' THEN 'CANCELLED' "
has_cases = True
if has_cases:
mapping_sql += "ELSE status::text END WHERE status IS NOT NULL"
cur.execute(mapping_sql)
print(f" Updated {cur.rowcount} tasks")
# Check sub_task table
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'sub_task' AND column_name = 'status'
""")
if cur.fetchone():
# Get existing subtask statuses
cur.execute("SELECT DISTINCT status::text FROM sub_task WHERE status IS NOT NULL")
existing_subtask_statuses = [row[0] for row in cur.fetchall()]
print(f" Existing status values in sub_task table: {existing_subtask_statuses}")
# If no statuses exist, skip the mapping
if not existing_subtask_statuses:
print(" No existing subtask status values to migrate")
else:
# Build dynamic mapping for subtasks
mapping_sql = "UPDATE sub_task SET status_temp = CASE "
has_cases = False
if 'NOT_STARTED' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'NOT_STARTED' THEN 'TODO' "
has_cases = True
if 'TODO' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'TODO' THEN 'TODO' "
has_cases = True
if 'IN_PROGRESS' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'IN_PROGRESS' THEN 'IN_PROGRESS' "
has_cases = True
if 'ON_HOLD' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'ON_HOLD' THEN 'IN_REVIEW' "
has_cases = True
if 'IN_REVIEW' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'IN_REVIEW' THEN 'IN_REVIEW' "
has_cases = True
if 'COMPLETED' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'COMPLETED' THEN 'DONE' "
has_cases = True
if 'DONE' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'DONE' THEN 'DONE' "
has_cases = True
if 'CANCELLED' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'CANCELLED' THEN 'CANCELLED' "
has_cases = True
if 'ARCHIVED' in existing_subtask_statuses:
mapping_sql += "WHEN status::text = 'ARCHIVED' THEN 'CANCELLED' "
has_cases = True
if has_cases:
mapping_sql += "ELSE status::text END WHERE status IS NOT NULL"
cur.execute(mapping_sql)
print(f" Updated {cur.rowcount} subtasks")
# Drop the old status columns
print("3. Dropping old status columns...")
cur.execute("ALTER TABLE task DROP COLUMN status")
cur.execute("ALTER TABLE sub_task DROP COLUMN status")
# Drop the old enum type
print("4. Dropping old enum type...")
cur.execute("DROP TYPE IF EXISTS taskstatus")
# Create new enum type with correct values
print("5. Creating new enum type...")
cur.execute("""
CREATE TYPE taskstatus AS ENUM (
'TODO',
'IN_PROGRESS',
'IN_REVIEW',
'DONE',
'CANCELLED'
)
""")
# Add new status columns with correct enum type
print("6. Adding new status columns...")
cur.execute("ALTER TABLE task ADD COLUMN status taskstatus")
cur.execute("ALTER TABLE sub_task ADD COLUMN status taskstatus")
# Copy data from temp columns to new status columns
print("7. Copying data to new columns...")
cur.execute("UPDATE task SET status = status_temp::taskstatus")
cur.execute("UPDATE sub_task SET status = status_temp::taskstatus")
# Drop temporary columns
print("8. Dropping temporary columns...")
cur.execute("ALTER TABLE task DROP COLUMN status_temp")
cur.execute("ALTER TABLE sub_task DROP COLUMN status_temp")
# Add NOT NULL constraint
print("9. Adding NOT NULL constraints...")
cur.execute("ALTER TABLE task ALTER COLUMN status SET NOT NULL")
cur.execute("ALTER TABLE sub_task ALTER COLUMN status SET NOT NULL")
# Set default value
print("10. Setting default values...")
cur.execute("ALTER TABLE task ALTER COLUMN status SET DEFAULT 'TODO'")
cur.execute("ALTER TABLE sub_task ALTER COLUMN status SET DEFAULT 'TODO'")
# Commit changes
conn.commit()
print("\nTask status enum migration completed successfully!")
# Verify the new enum values
print("\nVerifying new enum values:")
cur.execute("""
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (
SELECT oid FROM pg_type WHERE typname = 'taskstatus'
)
ORDER BY enumsortorder
""")
for row in cur.fetchall():
print(f" - {row[0]}")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
fix_task_status_enum()
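The enum rewrite can be double-checked from psql; this mirrors the verification query the script itself runs at the end:

```bash
psql "$DATABASE_URL" -c "
  SELECT enumlabel
  FROM pg_enum
  WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = 'taskstatus')
  ORDER BY enumsortorder;"
# Expected after script 05: TODO, IN_PROGRESS, IN_REVIEW, DONE, CANCELLED
# Script 06 below then appends ARCHIVED.
```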


@@ -1,77 +0,0 @@
#!/usr/bin/env python3
"""
Add ARCHIVED status back to task status enum
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def add_archived_status():
"""Add ARCHIVED status to task status enum"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
print("Adding ARCHIVED status to taskstatus enum...")
# Check if ARCHIVED already exists
cur.execute("""
SELECT EXISTS (
SELECT 1 FROM pg_enum
WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = 'taskstatus')
AND enumlabel = 'ARCHIVED'
)
""")
if cur.fetchone()[0]:
print("ARCHIVED status already exists in enum")
return
# Add ARCHIVED to the enum
cur.execute("""
ALTER TYPE taskstatus ADD VALUE IF NOT EXISTS 'ARCHIVED' AFTER 'CANCELLED'
""")
print("Successfully added ARCHIVED status to enum")
# Verify the enum values
print("\nCurrent taskstatus enum values:")
cur.execute("""
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (
SELECT oid FROM pg_type WHERE typname = 'taskstatus'
)
ORDER BY enumsortorder
""")
for row in cur.fetchall():
print(f" - {row[0]}")
# Commit changes
conn.commit()
print("\nMigration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
add_archived_status()


@@ -1,141 +0,0 @@
#!/usr/bin/env python3
"""
Fix company_work_config table columns to match model definition
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def fix_company_work_config_columns():
"""Rename and add columns to match the new model definition"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
# Check which columns exist
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'company_work_config'
""")
existing_columns = [row[0] for row in cur.fetchall()]
print(f"Existing columns: {existing_columns}")
# Rename columns if they exist with old names
if 'work_hours_per_day' in existing_columns and 'standard_hours_per_day' not in existing_columns:
print("Renaming work_hours_per_day to standard_hours_per_day...")
cur.execute("ALTER TABLE company_work_config RENAME COLUMN work_hours_per_day TO standard_hours_per_day")
# Add missing columns
if 'standard_hours_per_day' not in existing_columns and 'work_hours_per_day' not in existing_columns:
print("Adding standard_hours_per_day column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN standard_hours_per_day FLOAT DEFAULT 8.0")
if 'standard_hours_per_week' not in existing_columns:
print("Adding standard_hours_per_week column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN standard_hours_per_week FLOAT DEFAULT 40.0")
# Rename region to work_region if needed
if 'region' in existing_columns and 'work_region' not in existing_columns:
print("Renaming region to work_region...")
cur.execute("ALTER TABLE company_work_config RENAME COLUMN region TO work_region")
elif 'work_region' not in existing_columns:
print("Adding work_region column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN work_region VARCHAR(50) DEFAULT 'OTHER'")
# Add new columns that don't exist
if 'overtime_enabled' not in existing_columns:
print("Adding overtime_enabled column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN overtime_enabled BOOLEAN DEFAULT TRUE")
if 'overtime_rate' not in existing_columns:
print("Adding overtime_rate column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN overtime_rate FLOAT DEFAULT 1.5")
if 'double_time_enabled' not in existing_columns:
print("Adding double_time_enabled column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN double_time_enabled BOOLEAN DEFAULT FALSE")
if 'double_time_threshold' not in existing_columns:
print("Adding double_time_threshold column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN double_time_threshold FLOAT DEFAULT 12.0")
if 'double_time_rate' not in existing_columns:
print("Adding double_time_rate column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN double_time_rate FLOAT DEFAULT 2.0")
if 'require_breaks' not in existing_columns:
print("Adding require_breaks column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN require_breaks BOOLEAN DEFAULT TRUE")
if 'break_duration_minutes' not in existing_columns:
# Rename mandatory_break_minutes if it exists
if 'mandatory_break_minutes' in existing_columns:
print("Renaming mandatory_break_minutes to break_duration_minutes...")
cur.execute("ALTER TABLE company_work_config RENAME COLUMN mandatory_break_minutes TO break_duration_minutes")
else:
print("Adding break_duration_minutes column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN break_duration_minutes INTEGER DEFAULT 30")
if 'break_after_hours' not in existing_columns:
# Rename break_threshold_hours if it exists
if 'break_threshold_hours' in existing_columns:
print("Renaming break_threshold_hours to break_after_hours...")
cur.execute("ALTER TABLE company_work_config RENAME COLUMN break_threshold_hours TO break_after_hours")
else:
print("Adding break_after_hours column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN break_after_hours FLOAT DEFAULT 6.0")
if 'weekly_overtime_threshold' not in existing_columns:
print("Adding weekly_overtime_threshold column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN weekly_overtime_threshold FLOAT DEFAULT 40.0")
if 'weekly_overtime_rate' not in existing_columns:
print("Adding weekly_overtime_rate column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN weekly_overtime_rate FLOAT DEFAULT 1.5")
# Drop columns that are no longer needed
if 'region_name' in existing_columns:
print("Dropping region_name column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN region_name")
if 'additional_break_minutes' in existing_columns:
print("Dropping additional_break_minutes column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN additional_break_minutes")
if 'additional_break_threshold_hours' in existing_columns:
print("Dropping additional_break_threshold_hours column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN additional_break_threshold_hours")
if 'created_by_id' in existing_columns:
print("Dropping created_by_id column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN created_by_id")
# Commit changes
conn.commit()
print("\nCompany work config migration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
fix_company_work_config_columns()

View File

@@ -1,145 +0,0 @@
#!/usr/bin/env python3
"""
Fix work region enum values in the database
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def fix_work_region_enum():
"""Update work region enum values in database"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
print("Starting work region enum migration...")
# First check if work_region column is using enum type
cur.execute("""
SELECT data_type
FROM information_schema.columns
WHERE table_name = 'company_work_config'
AND column_name = 'work_region'
""")
data_type = cur.fetchone()
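# PostgreSQL reports enum-typed columns as data_type 'USER-DEFINED' in information_schema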
if data_type and data_type[0] == 'USER-DEFINED':
# It's an enum, we need to update it
print("work_region is an enum type, migrating...")
# Create temporary column
print("1. Creating temporary column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN work_region_temp VARCHAR(50)")
# Copy and map values
print("2. Copying and mapping values...")
cur.execute("""
UPDATE company_work_config SET work_region_temp = CASE
WHEN work_region::text = 'GERMANY' THEN 'EU'
WHEN work_region::text = 'DE' THEN 'EU'
WHEN work_region::text = 'UNITED_STATES' THEN 'USA'
WHEN work_region::text = 'US' THEN 'USA'
WHEN work_region::text = 'UNITED_KINGDOM' THEN 'UK'
WHEN work_region::text = 'GB' THEN 'UK'
WHEN work_region::text = 'FRANCE' THEN 'EU'
WHEN work_region::text = 'FR' THEN 'EU'
WHEN work_region::text = 'EUROPEAN_UNION' THEN 'EU'
WHEN work_region::text = 'CUSTOM' THEN 'OTHER'
ELSE COALESCE(work_region::text, 'OTHER')
END
""")
print(f" Updated {cur.rowcount} rows")
# Drop old column
print("3. Dropping old work_region column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN work_region")
# Check if enum type exists and drop it
cur.execute("""
SELECT EXISTS (
SELECT 1 FROM pg_type WHERE typname = 'workregion'
)
""")
if cur.fetchone()[0]:
print("4. Dropping old workregion enum type...")
cur.execute("DROP TYPE IF EXISTS workregion CASCADE")
# Create new enum type
print("5. Creating new workregion enum type...")
cur.execute("""
CREATE TYPE workregion AS ENUM (
'USA',
'CANADA',
'UK',
'EU',
'AUSTRALIA',
'OTHER'
)
""")
# Add new column with enum type
print("6. Adding new work_region column...")
cur.execute("ALTER TABLE company_work_config ADD COLUMN work_region workregion DEFAULT 'OTHER'")
# Copy data back
print("7. Copying data to new column...")
cur.execute("UPDATE company_work_config SET work_region = work_region_temp::workregion")
# Drop temporary column
print("8. Dropping temporary column...")
cur.execute("ALTER TABLE company_work_config DROP COLUMN work_region_temp")
else:
# It's already a varchar, just update the values
print("work_region is already a varchar, updating values...")
cur.execute("""
UPDATE company_work_config SET work_region = CASE
WHEN work_region = 'GERMANY' THEN 'EU'
WHEN work_region = 'DE' THEN 'EU'
WHEN work_region = 'UNITED_STATES' THEN 'USA'
WHEN work_region = 'US' THEN 'USA'
WHEN work_region = 'UNITED_KINGDOM' THEN 'UK'
WHEN work_region = 'GB' THEN 'UK'
WHEN work_region = 'FRANCE' THEN 'EU'
WHEN work_region = 'FR' THEN 'EU'
WHEN work_region = 'EUROPEAN_UNION' THEN 'EU'
WHEN work_region = 'CUSTOM' THEN 'OTHER'
ELSE COALESCE(work_region, 'OTHER')
END
""")
print(f"Updated {cur.rowcount} rows")
# Commit changes
conn.commit()
print("\nWork region enum migration completed successfully!")
# Verify the results
print("\nCurrent work_region values in database:")
cur.execute("SELECT DISTINCT work_region FROM company_work_config ORDER BY work_region")
for row in cur.fetchall():
print(f" - {row[0]}")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
fix_work_region_enum()

View File

@@ -1,78 +0,0 @@
#!/usr/bin/env python3
"""
Add GERMANY back to work region enum
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def add_germany_to_workregion():
"""Add GERMANY to work region enum"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
print("Adding GERMANY to workregion enum...")
# Check if GERMANY already exists
cur.execute("""
SELECT EXISTS (
SELECT 1 FROM pg_enum
WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = 'workregion')
AND enumlabel = 'GERMANY'
)
""")
if cur.fetchone()[0]:
print("GERMANY already exists in enum")
return
# Add GERMANY to the enum after UK
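# Note: on PostgreSQL versions before 12, ALTER TYPE ... ADD VALUE cannot run inside a transaction block, so this may require autocommit on older servers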
cur.execute("""
ALTER TYPE workregion ADD VALUE IF NOT EXISTS 'GERMANY' AFTER 'UK'
""")
print("Successfully added GERMANY to enum")
# Update any EU records that should be Germany based on other criteria
# For now, we'll leave existing EU records as is, but new records can choose Germany
# Verify the enum values
print("\nCurrent workregion enum values:")
cur.execute("""
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = 'workregion')
ORDER BY enumsortorder
""")
for row in cur.fetchall():
print(f" - {row[0]}")
# Commit changes
conn.commit()
print("\nMigration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
add_germany_to_workregion()

View File

@@ -1,108 +0,0 @@
#!/usr/bin/env python3
"""
Add missing columns to company_settings table
"""
import os
import psycopg2
from psycopg2 import sql
from urllib.parse import urlparse
# Get database URL from environment
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://timetrack:timetrack123@localhost:5432/timetrack')
def add_missing_columns():
"""Add missing columns to company_settings table"""
# Parse database URL
parsed = urlparse(DATABASE_URL)
# Connect to database
conn = psycopg2.connect(
host=parsed.hostname,
port=parsed.port or 5432,
user=parsed.username,
password=parsed.password,
database=parsed.path[1:] # Remove leading slash
)
try:
with conn.cursor() as cur:
# Check if table exists
cur.execute("""
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_name = 'company_settings'
)
""")
table_exists = cur.fetchone()[0]
if not table_exists:
print("company_settings table does not exist. Creating it...")
cur.execute("""
CREATE TABLE company_settings (
id SERIAL PRIMARY KEY,
company_id INTEGER UNIQUE NOT NULL REFERENCES company(id),
work_week_start INTEGER DEFAULT 1,
work_days VARCHAR(20) DEFAULT '1,2,3,4,5',
allow_overlapping_entries BOOLEAN DEFAULT FALSE,
require_project_for_time_entry BOOLEAN DEFAULT TRUE,
allow_future_entries BOOLEAN DEFAULT FALSE,
max_hours_per_entry FLOAT DEFAULT 24.0,
enable_tasks BOOLEAN DEFAULT TRUE,
enable_sprints BOOLEAN DEFAULT FALSE,
enable_client_access BOOLEAN DEFAULT FALSE,
notify_on_overtime BOOLEAN DEFAULT TRUE,
overtime_threshold_daily FLOAT DEFAULT 8.0,
overtime_threshold_weekly FLOAT DEFAULT 40.0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
print("Created company_settings table")
else:
# Check which columns exist
cur.execute("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'company_settings'
""")
existing_columns = [row[0] for row in cur.fetchall()]
print(f"Existing columns: {existing_columns}")
# Add missing columns
columns_to_add = {
'work_week_start': 'INTEGER DEFAULT 1',
'work_days': "VARCHAR(20) DEFAULT '1,2,3,4,5'",
'allow_overlapping_entries': 'BOOLEAN DEFAULT FALSE',
'require_project_for_time_entry': 'BOOLEAN DEFAULT TRUE',
'allow_future_entries': 'BOOLEAN DEFAULT FALSE',
'max_hours_per_entry': 'FLOAT DEFAULT 24.0',
'enable_tasks': 'BOOLEAN DEFAULT TRUE',
'enable_sprints': 'BOOLEAN DEFAULT FALSE',
'enable_client_access': 'BOOLEAN DEFAULT FALSE',
'notify_on_overtime': 'BOOLEAN DEFAULT TRUE',
'overtime_threshold_daily': 'FLOAT DEFAULT 8.0',
'overtime_threshold_weekly': 'FLOAT DEFAULT 40.0',
'created_at': 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP',
'updated_at': 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'
}
for column, definition in columns_to_add.items():
if column not in existing_columns:
print(f"Adding {column} column...")
cur.execute(f"ALTER TABLE company_settings ADD COLUMN {column} {definition}")
print(f"Added {column} column")
# Commit changes
conn.commit()
print("\nCompany settings migration completed successfully!")
except Exception as e:
print(f"Error during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
add_missing_columns()

View File

@@ -1,188 +0,0 @@
#!/usr/bin/env python3
"""
Fix CompanyWorkConfig field usage throughout the codebase
"""
import os
import re
from pathlib import Path
# Define old to new field mappings
FIELD_MAPPINGS = {
'work_hours_per_day': 'standard_hours_per_day',
'mandatory_break_minutes': 'break_duration_minutes',
'break_threshold_hours': 'break_after_hours',
'region': 'work_region',
}
# Fields that were removed
REMOVED_FIELDS = [
'additional_break_minutes',
'additional_break_threshold_hours',
'region_name',
'created_by_id'
]
def update_python_files():
"""Update Python files with new field names"""
python_files = [
'app.py',
'routes/company.py',
]
for filepath in python_files:
if not os.path.exists(filepath):
print(f"Skipping {filepath} - file not found")
continue
print(f"Processing {filepath}...")
with open(filepath, 'r') as f:
content = f.read()
original_content = content
# Update field references
for old_field, new_field in FIELD_MAPPINGS.items():
# Update attribute access: .old_field -> .new_field
content = re.sub(
rf'\.{old_field}\b',
f'.{new_field}',
content
)
# Update dictionary access: ['old_field'] -> ['new_field']
content = re.sub(
rf'\[[\'"]{old_field}[\'"]\]',
f"['{new_field}']",
content
)
# Update keyword arguments: old_field= -> new_field=
content = re.sub(
rf'\b{old_field}=',
f'{new_field}=',
content
)
# Handle special cases for app.py
if filepath == 'app.py':
# Update WorkRegion.GERMANY references where appropriate
content = re.sub(
r'WorkRegion\.GERMANY',
'WorkRegion.GERMANY # Note: Germany has specific labor laws',
content
)
# Handle removed fields - comment them out with explanation
for removed_field in ['additional_break_minutes', 'additional_break_threshold_hours']:
content = re.sub(
rf'^(\s*)(.*{removed_field}.*)$',
r'\1# REMOVED: \2 # This field no longer exists in the model',
content,
flags=re.MULTILINE
)
# Handle region_name specially in routes/company.py
if filepath == 'routes/company.py':
# Remove region_name assignments
content = re.sub(
r"work_config\.region_name = .*\n",
"# region_name removed - using work_region enum value instead\n",
content
)
# Fix WorkRegion.CUSTOM -> WorkRegion.OTHER
content = re.sub(
r'WorkRegion\.CUSTOM',
'WorkRegion.OTHER',
content
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def update_template_files():
"""Update template files with new field names"""
template_files = [
'templates/admin_company.html',
'templates/admin_work_policies.html',
'templates/config.html',
]
for filepath in template_files:
if not os.path.exists(filepath):
print(f"Skipping {filepath} - file not found")
continue
print(f"Processing {filepath}...")
with open(filepath, 'r') as f:
content = f.read()
original_content = content
# Update field references in templates
for old_field, new_field in FIELD_MAPPINGS.items():
# Update Jinja2 variable access: {{ obj.old_field }} -> {{ obj.new_field }}
content = re.sub(
r'(\{\{[^}]*\.)' + re.escape(old_field) + r'(\s*\}\})',
r'\1' + new_field + r'\2',
content
)
# Update form field names and IDs
content = re.sub(
rf'(name|id)=[\'"]{old_field}[\'"]',
rf'\1="{new_field}"',
content
)
# Handle region_name in templates
if 'region_name' in content:
# Replace region_name with work_region.value
content = re.sub(
r'(\{\{[^}]*\.)region_name(\s*\}\})',
r'\1work_region.value\2',
content
)
# Handle removed fields in admin_company.html
if filepath == 'templates/admin_company.html' and 'additional_break' in content:
# Remove entire config-item divs for removed fields
content = re.sub(
r'<div class="config-item">.*?additional_break.*?</div>\s*',
'',
content,
flags=re.DOTALL
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def main():
print("=== Fixing CompanyWorkConfig Field Usage ===\n")
print("1. Updating Python files...")
update_python_files()
print("\n2. Updating template files...")
update_template_files()
print("\n✅ CompanyWorkConfig migration complete!")
print("\nNote: Some fields have been removed from the model:")
print(" - additional_break_minutes")
print(" - additional_break_threshold_hours")
print(" - region_name (use work_region.value instead)")
print(" - created_by_id")
if __name__ == "__main__":
main()

View File

@@ -1,172 +0,0 @@
#!/usr/bin/env python3
"""
Fix TaskStatus enum usage throughout the codebase
"""
import os
import re
from pathlib import Path
# Define old to new status mappings
STATUS_MAPPINGS = {
'NOT_STARTED': 'TODO',
'COMPLETED': 'DONE',
'ON_HOLD': 'IN_REVIEW',
}
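# ON_HOLD is folded into IN_REVIEW because the new TaskStatus enum has no on-hold value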
def update_python_files():
"""Update Python files with new TaskStatus values"""
# Find all Python files that might use TaskStatus
python_files = []
# Add specific known files
known_files = ['app.py', 'routes/tasks.py', 'routes/tasks_api.py', 'routes/sprints.py', 'routes/sprints_api.py']
python_files.extend([f for f in known_files if os.path.exists(f)])
# Search for more Python files in routes/
if os.path.exists('routes'):
python_files.extend([str(p) for p in Path('routes').glob('*.py')])
# Remove duplicates
python_files = list(set(python_files))
for filepath in python_files:
print(f"Processing {filepath}...")
with open(filepath, 'r') as f:
content = f.read()
original_content = content
# Update TaskStatus enum references
for old_status, new_status in STATUS_MAPPINGS.items():
# Update enum access: TaskStatus.OLD_STATUS -> TaskStatus.NEW_STATUS
content = re.sub(
rf'TaskStatus\.{old_status}\b',
f'TaskStatus.{new_status}',
content
)
# Update string comparisons: == 'OLD_STATUS' -> == 'NEW_STATUS'
content = re.sub(
rf"['\"]({old_status})['\"]",
f"'{new_status}'",
content
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def update_javascript_files():
"""Update JavaScript files with new TaskStatus values"""
js_files = []
# Find all JS files
if os.path.exists('static/js'):
js_files.extend([str(p) for p in Path('static/js').glob('*.js')])
for filepath in js_files:
print(f"Processing {filepath}...")
with open(filepath, 'r') as f:
content = f.read()
original_content = content
# Update status values in JavaScript
for old_status, new_status in STATUS_MAPPINGS.items():
# Update string literals
content = re.sub(
rf"['\"]({old_status})['\"]",
f"'{new_status}'",
content
)
# Update in case statements or object keys
content = re.sub(
rf'\b{old_status}\b:',
f'{new_status}:',
content
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def update_template_files():
"""Update template files with new TaskStatus values"""
template_files = []
# Find all template files that might have task status
if os.path.exists('templates'):
template_files.extend([str(p) for p in Path('templates').glob('*.html')])
for filepath in template_files:
# Skip if file doesn't contain task-related content
with open(filepath, 'r') as f:
content = f.read()
if 'task' not in content.lower() and 'status' not in content.lower():
continue
print(f"Processing {filepath}...")
original_content = content
# Update status values in templates
for old_status, new_status in STATUS_MAPPINGS.items():
# Update in option values: value="OLD_STATUS" -> value="NEW_STATUS"
content = re.sub(
rf'value=[\'"]{old_status}[\'"]',
f'value="{new_status}"',
content
)
# Update display text (be more careful here)
if old_status == 'NOT_STARTED':
content = re.sub(r'>Not Started<', '>To Do<', content)
elif old_status == 'COMPLETED':
content = re.sub(r'>Completed<', '>Done<', content)
elif old_status == 'ON_HOLD':
content = re.sub(r'>On Hold<', '>In Review<', content)
# Update in JavaScript within templates
content = re.sub(
rf"['\"]({old_status})['\"]",
f"'{new_status}'",
content
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def main():
print("=== Fixing TaskStatus Enum Usage ===\n")
print("1. Updating Python files...")
update_python_files()
print("\n2. Updating JavaScript files...")
update_javascript_files()
print("\n3. Updating template files...")
update_template_files()
print("\n✅ TaskStatus migration complete!")
print("\nStatus mappings applied:")
for old, new in STATUS_MAPPINGS.items():
print(f" - {old}{new}")
if __name__ == "__main__":
main()

View File

@@ -1,154 +0,0 @@
#!/usr/bin/env python3
"""
Fix WorkRegion enum usage throughout the codebase
"""
import os
import re
from pathlib import Path
# Define old to new region mappings
REGION_MAPPINGS = {
'UNITED_STATES': 'USA',
'UNITED_KINGDOM': 'UK',
'FRANCE': 'EU',
'EUROPEAN_UNION': 'EU',
'CUSTOM': 'OTHER',
}
# Note: GERMANY is kept as is - it has specific labor laws
def update_python_files():
"""Update Python files with new WorkRegion values"""
python_files = []
# Add known files
known_files = ['app.py', 'routes/company.py', 'routes/system_admin.py']
python_files.extend([f for f in known_files if os.path.exists(f)])
# Search for more Python files
if os.path.exists('routes'):
python_files.extend([str(p) for p in Path('routes').glob('*.py')])
# Remove duplicates
python_files = list(set(python_files))
for filepath in python_files:
with open(filepath, 'r') as f:
content = f.read()
# Skip if no WorkRegion references
if 'WorkRegion' not in content:
continue
print(f"Processing {filepath}...")
original_content = content
# Update WorkRegion enum references
for old_region, new_region in REGION_MAPPINGS.items():
# Update enum access: WorkRegion.OLD_REGION -> WorkRegion.NEW_REGION
content = re.sub(
rf'WorkRegion\.{old_region}\b',
f'WorkRegion.{new_region}',
content
)
# Update string comparisons
content = re.sub(
rf"['\"]({old_region})['\"]",
f"'{new_region}'",
content
)
# Add comments for GERMANY usage to note it has specific laws
if 'WorkRegion.GERMANY' in content and '# Note:' not in content:
content = re.sub(
r'(WorkRegion\.GERMANY)',
r'\1 # Germany has specific labor laws beyond EU',
content,
count=1 # Only comment the first occurrence
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def update_template_files():
"""Update template files with new WorkRegion values"""
template_files = []
# Find relevant templates
if os.path.exists('templates'):
for template in Path('templates').glob('*.html'):
with open(template, 'r') as f:
if 'region' in f.read().lower():
template_files.append(str(template))
for filepath in template_files:
print(f"Processing {filepath}...")
with open(filepath, 'r') as f:
content = f.read()
original_content = content
# Update region values
for old_region, new_region in REGION_MAPPINGS.items():
# Update in option values
content = re.sub(
rf'value=[\'"]{old_region}[\'"]',
f'value="{new_region}"',
content
)
# Update display names
display_mappings = {
'UNITED_STATES': 'United States',
'United States': 'United States',
'UNITED_KINGDOM': 'United Kingdom',
'United Kingdom': 'United Kingdom',
'FRANCE': 'European Union',
'France': 'European Union',
'EUROPEAN_UNION': 'European Union',
'European Union': 'European Union',
'CUSTOM': 'Other',
'Custom': 'Other'
}
for old_display, new_display in display_mappings.items():
if old_display in ['France', 'FRANCE']:
# France is now part of EU
content = re.sub(
rf'>{old_display}<',
f'>{new_display}<',
content
)
if content != original_content:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
else:
print(f" - No changes needed in {filepath}")
def main():
print("=== Fixing WorkRegion Enum Usage ===\n")
print("1. Updating Python files...")
update_python_files()
print("\n2. Updating template files...")
update_template_files()
print("\n✅ WorkRegion migration complete!")
print("\nRegion mappings applied:")
for old, new in REGION_MAPPINGS.items():
print(f" - {old}{new}")
print("\nNote: GERMANY remains as a separate option due to specific labor laws")
if __name__ == "__main__":
main()

View File

@@ -1,227 +0,0 @@
#!/usr/bin/env python3
"""
Fix references to removed fields throughout the codebase
"""
import os
import re
from pathlib import Path
# Fields that were removed from various models
REMOVED_FIELDS = {
'created_by_id': {
'models': ['Task', 'Project', 'Sprint', 'Announcement', 'CompanyWorkConfig'],
'replacement': 'None', # or could track via audit log
'comment': 'Field removed - consider using audit log for creator tracking'
},
'region_name': {
'models': ['CompanyWorkConfig'],
'replacement': 'work_region.value',
'comment': 'Use work_region enum value instead'
},
'additional_break_minutes': {
'models': ['CompanyWorkConfig'],
'replacement': 'None',
'comment': 'Field removed - simplified break configuration'
},
'additional_break_threshold_hours': {
'models': ['CompanyWorkConfig'],
'replacement': 'None',
'comment': 'Field removed - simplified break configuration'
}
}
def update_python_files():
"""Update Python files to handle removed fields"""
python_files = []
# Get all Python files
for root, dirs, files in os.walk('.'):
# Skip virtual environments and cache
if 'venv' in root or '__pycache__' in root or '.git' in root:
continue
for file in files:
if file.endswith('.py'):
python_files.append(os.path.join(root, file))
for filepath in python_files:
# Skip migration scripts
if 'migrations/' in filepath:
continue
with open(filepath, 'r') as f:
content = f.read()
original_content = content
modified = False
for field, info in REMOVED_FIELDS.items():
if field not in content:
continue
print(f"Processing {filepath} for {field}...")
# Handle different patterns
if field == 'created_by_id':
# Comment out lines that assign created_by_id
content = re.sub(
rf'^(\s*)([^#\n]*created_by_id\s*=\s*[^,\n]+,?)(.*)$',
rf'\1# REMOVED: \2 # {info["comment"]}\3',
content,
flags=re.MULTILINE
)
# Remove from query filters
content = re.sub(
rf'\.filter_by\(created_by_id=[^)]+\)',
'.filter_by() # REMOVED: created_by_id filter',
content
)
# Remove from dictionary accesses
content = re.sub(
rf"['\"]created_by_id['\"]\s*:\s*[^,}}]+[,}}]",
'# "created_by_id" removed from model',
content
)
elif field == 'region_name':
# Replace with work_region.value
content = re.sub(
rf'\.region_name\b',
'.work_region.value',
content
)
content = re.sub(
rf"\['region_name'\]",
"['work_region'].value",
content
)
elif field in ['additional_break_minutes', 'additional_break_threshold_hours']:
# Comment out references
content = re.sub(
rf'^(\s*)([^#\n]*{field}[^#\n]*)$',
rf'\1# REMOVED: \2 # {info["comment"]}',
content,
flags=re.MULTILINE
)
if content != original_content:
modified = True
if modified:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
def update_template_files():
"""Update template files to handle removed fields"""
template_files = []
if os.path.exists('templates'):
template_files = [str(p) for p in Path('templates').glob('*.html')]
for filepath in template_files:
with open(filepath, 'r') as f:
content = f.read()
original_content = content
modified = False
for field, info in REMOVED_FIELDS.items():
if field not in content:
continue
print(f"Processing {filepath} for {field}...")
if field == 'created_by_id':
# Remove or comment out created_by references in templates
# Match {{...created_by_id...}} patterns
pattern = r'\{\{[^}]*\.created_by_id[^}]*\}\}'
content = re.sub(
pattern,
'<!-- REMOVED: created_by_id no longer available -->',
content
)
elif field == 'region_name':
# Replace with work_region.value
# Match {{...region_name...}} and replace region_name with work_region.value
pattern = r'(\{\{[^}]*\.)region_name([^}]*\}\})'
content = re.sub(
pattern,
r'\1work_region.value\2',
content
)
elif field in ['additional_break_minutes', 'additional_break_threshold_hours']:
# Remove entire form groups for these fields
pattern = r'<div[^>]*>(?:[^<]|<(?!/div))*' + re.escape(field) + r'.*?</div>\s*'
content = re.sub(
pattern,
f'<!-- REMOVED: {field} no longer in model -->\n',
content,
flags=re.DOTALL
)
if content != original_content:
modified = True
if modified:
with open(filepath, 'w') as f:
f.write(content)
print(f" ✓ Updated {filepath}")
def create_audit_log_migration():
"""Create a migration to add audit fields if needed"""
migration_content = '''#!/usr/bin/env python3
"""
Add audit log fields to replace removed created_by_id
"""
# This is a template for adding audit logging if needed
# to replace the removed created_by_id functionality
def add_audit_fields():
"""
Consider adding these fields to models that lost created_by_id:
- created_by_username (store username instead of ID)
- created_at (if not already present)
- updated_by_username
- updated_at
Or implement a separate audit log table
"""
pass
if __name__ == "__main__":
print("Consider implementing audit logging to track who created/modified records")
'''
with open('migrations/05_add_audit_fields_template.py', 'w') as f:
f.write(migration_content)
print("\n✓ Created template for audit field migration")
def main():
print("=== Fixing References to Removed Fields ===\n")
print("1. Updating Python files...")
update_python_files()
print("\n2. Updating template files...")
update_template_files()
print("\n3. Creating audit field migration template...")
create_audit_log_migration()
print("\n✅ Removed fields migration complete!")
print("\nFields handled:")
for field, info in REMOVED_FIELDS.items():
print(f" - {field}: {info['comment']}")
print("\n⚠️ Important: Review commented-out code and decide on appropriate replacements")
print(" Consider implementing audit logging for creator tracking")
if __name__ == "__main__":
main()

View File

@@ -1,67 +0,0 @@
#!/usr/bin/env python3
"""
Repair user roles from string to enum values
"""
import os
import sys
import logging
# Add parent directory to path to import app
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
try:
from app import app, db
from models import User, Role
except Exception as e:
print(f"Error importing modules: {e}")
print("This migration requires Flask app context. Skipping...")
sys.exit(0)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def repair_user_roles():
with app.app_context():
logger.info("Starting user role repair...")
# Map string role values to enum values
role_mapping = {
'Team Member': Role.TEAM_MEMBER,
'TEAM_MEMBER': Role.TEAM_MEMBER,
'Team Leader': Role.TEAM_LEADER,
'TEAM_LEADER': Role.TEAM_LEADER,
'Supervisor': Role.SUPERVISOR,
'SUPERVISOR': Role.SUPERVISOR,
'Administrator': Role.ADMIN,
'ADMIN': Role.ADMIN
}
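# Both display names ('Team Member') and enum member names ('TEAM_MEMBER') are mapped, since older data may contain either form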
users = User.query.all()
fixed_count = 0
for user in users:
original_role = user.role
# Fix role if it's a string or None
if isinstance(user.role, str):
user.role = role_mapping.get(user.role, Role.TEAM_MEMBER)
fixed_count += 1
elif user.role is None:
user.role = Role.TEAM_MEMBER
fixed_count += 1
if fixed_count > 0:
db.session.commit()
logger.info(f"Fixed roles for {fixed_count} users")
else:
logger.info("No role fixes needed")
logger.info("Role repair completed")
if __name__ == "__main__":
try:
repair_user_roles()
except Exception as e:
logger.error(f"Migration failed: {e}")
sys.exit(1)

View File

@@ -1,65 +0,0 @@
#!/usr/bin/env python3
"""
Add company invitations table for email-based registration
"""
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from flask import Flask
from models import db
from sqlalchemy import text
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def migrate():
"""Add company_invitation table"""
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', 'sqlite:////data/timetrack.db')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db.init_app(app)
with app.app_context():
try:
# Create company_invitation table
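# "user" is double-quoted in the REFERENCES clauses below because it is a reserved word in PostgreSQL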
create_table_sql = text("""
CREATE TABLE IF NOT EXISTS company_invitation (
id SERIAL PRIMARY KEY,
company_id INTEGER NOT NULL REFERENCES company(id),
email VARCHAR(120) NOT NULL,
token VARCHAR(64) UNIQUE NOT NULL,
role VARCHAR(50) DEFAULT 'Team Member',
invited_by_id INTEGER NOT NULL REFERENCES "user"(id),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
expires_at TIMESTAMP NOT NULL,
accepted BOOLEAN DEFAULT FALSE,
accepted_at TIMESTAMP,
accepted_by_user_id INTEGER REFERENCES "user"(id)
);
""")
db.session.execute(create_table_sql)
# Create indexes for better performance
db.session.execute(text("CREATE INDEX IF NOT EXISTS idx_invitation_token ON company_invitation(token);"))
db.session.execute(text("CREATE INDEX IF NOT EXISTS idx_invitation_email ON company_invitation(email);"))
db.session.execute(text("CREATE INDEX IF NOT EXISTS idx_invitation_company ON company_invitation(company_id);"))
db.session.execute(text("CREATE INDEX IF NOT EXISTS idx_invitation_expires ON company_invitation(expires_at);"))
db.session.commit()
logger.info("Successfully created company_invitation table")
return True
except Exception as e:
logger.error(f"Error creating company_invitation table: {str(e)}")
db.session.rollback()
return False
if __name__ == '__main__':
success = migrate()
sys.exit(0 if success else 1)

View File

@@ -1,94 +0,0 @@
#!/usr/bin/env python3
"""
Add updated_at column to company table
"""
import os
import sys
import logging
from datetime import datetime
# Add parent directory to path to import app
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from app import app, db
from sqlalchemy import text
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
def run_migration():
"""Add updated_at column to company table"""
with app.app_context():
try:
# Check if we're using PostgreSQL or SQLite
database_url = app.config['SQLALCHEMY_DATABASE_URI']
is_postgres = 'postgresql://' in database_url or 'postgres://' in database_url
if is_postgres:
# PostgreSQL migration
logger.info("Running PostgreSQL migration to add updated_at to company table...")
# Check if column exists
result = db.session.execute(text("""
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'company' AND column_name = 'updated_at'
"""))
if not result.fetchone():
logger.info("Adding updated_at column to company table...")
db.session.execute(text("""
ALTER TABLE company
ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
"""))
# Update existing rows to have updated_at = created_at
db.session.execute(text("""
UPDATE company
SET updated_at = created_at
WHERE updated_at IS NULL
"""))
db.session.commit()
logger.info("Successfully added updated_at column to company table")
else:
logger.info("updated_at column already exists in company table")
else:
# SQLite migration
logger.info("Running SQLite migration to add updated_at to company table...")
# For SQLite, we need to check differently
result = db.session.execute(text("PRAGMA table_info(company)"))
columns = [row[1] for row in result.fetchall()]
if 'updated_at' not in columns:
logger.info("Adding updated_at column to company table...")
db.session.execute(text("""
ALTER TABLE company
ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
"""))
# Update existing rows to have updated_at = created_at
db.session.execute(text("""
UPDATE company
SET updated_at = created_at
WHERE updated_at IS NULL
"""))
db.session.commit()
logger.info("Successfully added updated_at column to company table")
else:
logger.info("updated_at column already exists in company table")
return True
except Exception as e:
logger.error(f"Migration failed: {e}")
db.session.rollback()
return False
if __name__ == "__main__":
success = run_migration()
sys.exit(0 if success else 1)

View File

@@ -1,138 +0,0 @@
#!/usr/bin/env python3
"""
Master database migration runner
Runs all database schema migrations in the correct order
"""
import os
import sys
import subprocess
import json
from datetime import datetime
# Migration state file
MIGRATION_STATE_FILE = '/data/db_migrations_state.json'
# List of database schema migrations in order
DB_MIGRATIONS = [
'01_migrate_db.py', # SQLite schema updates (must run before data migration)
'20_add_company_updated_at.py', # Add updated_at column BEFORE data migration
'02_migrate_sqlite_to_postgres_fixed.py', # Fixed SQLite to PostgreSQL data migration
'03_add_dashboard_columns.py',
'04_add_user_preferences_columns.py',
'05_fix_task_status_enum.py',
'06_add_archived_status.py',
'07_fix_company_work_config_columns.py',
'08_fix_work_region_enum.py',
'09_add_germany_to_workregion.py',
'10_add_company_settings_columns.py',
'19_add_company_invitations.py'
]
def load_migration_state():
"""Load the migration state from file"""
if os.path.exists(MIGRATION_STATE_FILE):
try:
with open(MIGRATION_STATE_FILE, 'r') as f:
return json.load(f)
except:
return {}
return {}
def save_migration_state(state):
"""Save the migration state to file"""
os.makedirs(os.path.dirname(MIGRATION_STATE_FILE), exist_ok=True)
with open(MIGRATION_STATE_FILE, 'w') as f:
json.dump(state, f, indent=2)
def run_migration(migration_file):
"""Run a single migration script"""
script_path = os.path.join(os.path.dirname(__file__), migration_file)
if not os.path.exists(script_path):
print(f"⚠️ Migration {migration_file} not found, skipping...")
return False
print(f"\n🔄 Running migration: {migration_file}")
try:
# Run the migration script
result = subprocess.run(
[sys.executable, script_path],
capture_output=True,
text=True
)
if result.returncode == 0:
print(f"{migration_file} completed successfully")
if result.stdout:
print(result.stdout)
return True
else:
print(f"{migration_file} failed with return code {result.returncode}")
if result.stderr:
print(f"Error output: {result.stderr}")
if result.stdout:
print(f"Standard output: {result.stdout}")
return False
except Exception as e:
print(f"❌ Error running {migration_file}: {e}")
return False
def main():
"""Run all database migrations"""
print("=== Database Schema Migrations ===")
print(f"Running {len(DB_MIGRATIONS)} migrations...")
# Load migration state
state = load_migration_state()
success_count = 0
failed_count = 0
skipped_count = 0
for migration in DB_MIGRATIONS:
# Check if migration has already been run successfully
if state.get(migration, {}).get('status') == 'success':
print(f"\n⏭️ Skipping {migration} (already completed)")
skipped_count += 1
continue
# Run the migration
success = run_migration(migration)
# Update state
state[migration] = {
'status': 'success' if success else 'failed',
'timestamp': datetime.now().isoformat(),
'attempts': state.get(migration, {}).get('attempts', 0) + 1
}
if success:
success_count += 1
else:
failed_count += 1
# Don't stop on failure, continue with other migrations
print(f"⚠️ Continuing despite failure in {migration}")
# Save state after each migration
save_migration_state(state)
# Summary
print("\n" + "="*50)
print("Database Migration Summary:")
print(f"✅ Successful: {success_count}")
print(f"❌ Failed: {failed_count}")
print(f"⏭️ Skipped: {skipped_count}")
print(f"📊 Total: {len(DB_MIGRATIONS)}")
if failed_count > 0:
print("\n⚠️ Some migrations failed. Check the logs above for details.")
return 1
else:
print("\n✨ All database migrations completed successfully!")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,166 +0,0 @@
#!/usr/bin/env python3
"""
Run code migrations during startup - updates code to match model changes
"""
import os
import sys
import subprocess
from pathlib import Path
import hashlib
import json
from datetime import datetime
MIGRATION_STATE_FILE = '/data/code_migrations_state.json'
def get_migration_hash(script_path):
"""Get hash of migration script to detect changes"""
with open(script_path, 'rb') as f:
return hashlib.md5(f.read()).hexdigest()
def load_migration_state():
"""Load state of previously run migrations"""
if os.path.exists(MIGRATION_STATE_FILE):
try:
with open(MIGRATION_STATE_FILE, 'r') as f:
return json.load(f)
except:
return {}
return {}
def save_migration_state(state):
"""Save migration state"""
os.makedirs(os.path.dirname(MIGRATION_STATE_FILE), exist_ok=True)
with open(MIGRATION_STATE_FILE, 'w') as f:
json.dump(state, f, indent=2)
def should_run_migration(script_path, state):
"""Check if migration should run based on state"""
script_name = os.path.basename(script_path)
current_hash = get_migration_hash(script_path)
if script_name not in state:
return True
# Re-run if script has changed
if state[script_name].get('hash') != current_hash:
return True
# Skip if already run successfully
if state[script_name].get('status') == 'success':
return False
return True
def run_migration(script_path, state):
"""Run a single migration script"""
script_name = os.path.basename(script_path)
print(f"\n{'='*60}")
print(f"Running code migration: {script_name}")
print('='*60)
try:
result = subprocess.run(
[sys.executable, script_path],
capture_output=True,
text=True,
check=True,
timeout=300 # 5 minute timeout
)
print(result.stdout)
if result.stderr:
print("Warnings:", result.stderr)
# Update state
state[script_name] = {
'hash': get_migration_hash(script_path),
'status': 'success',
'last_run': str(datetime.now()),
'output': result.stdout[-1000:] if result.stdout else '' # Last 1000 chars
}
save_migration_state(state)
return True
except subprocess.CalledProcessError as e:
print(f"❌ Error running {script_name}:")
print(e.stdout)
print(e.stderr)
# Update state with failure
state[script_name] = {
'hash': get_migration_hash(script_path),
'status': 'failed',
'last_run': str(datetime.now()),
'error': str(e)
}
save_migration_state(state)
return False
except subprocess.TimeoutExpired:
print(f"❌ Migration {script_name} timed out!")
state[script_name] = {
'hash': get_migration_hash(script_path),
'status': 'timeout',
'last_run': str(datetime.now())
}
save_migration_state(state)
return False
def main():
"""Run all code migrations that need to be run"""
print("🔄 Checking for code migrations...")
# Get migration state
state = load_migration_state()
# Get all migration scripts
migrations_dir = Path(__file__).parent
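# Only scripts numbered 11_ through 15_ count as code migrations here; schema migrations are run by the separate DB migration runner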
migration_scripts = sorted([
str(p) for p in migrations_dir.glob('*.py')
if p.name.startswith(('11_', '12_', '13_', '14_', '15_'))
and 'template' not in p.name.lower()
])
if not migration_scripts:
print("No code migration scripts found.")
return 0
# Check which migrations need to run
to_run = []
for script in migration_scripts:
if should_run_migration(script, state):
to_run.append(script)
if not to_run:
print("✅ All code migrations are up to date.")
return 0
print(f"\n📋 Found {len(to_run)} code migrations to run:")
for script in to_run:
print(f" - {Path(script).name}")
# Run migrations
failed = []
for script in to_run:
if not run_migration(script, state):
failed.append(script)
# Continue with other migrations even if one fails
print(f"\n⚠️ Migration {Path(script).name} failed, continuing with others...")
# Summary
print("\n" + "="*60)
if failed:
print(f"⚠️ {len(failed)} code migrations failed:")
for script in failed:
print(f" - {Path(script).name}")
print("\nThe application may not work correctly.")
print("Check the logs and fix the issues.")
# Don't exit with error - let the app start anyway
return 0
else:
print("✅ All code migrations completed successfully!")
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,327 +0,0 @@
#!/usr/bin/env python3
"""
PostgreSQL-only migration script for TimeTrack
Applies all schema changes from commit 4214e88 onward
"""
import os
import sys
import psycopg2
from psycopg2.extras import RealDictCursor
import logging
from datetime import datetime
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class PostgresMigration:
def __init__(self, database_url):
self.database_url = database_url
self.conn = None
def connect(self):
"""Connect to PostgreSQL database"""
try:
self.conn = psycopg2.connect(self.database_url)
self.conn.autocommit = False
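# autocommit stays off so each call to execute_migration() commits or rolls back as a single unit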
logger.info("Connected to PostgreSQL database")
return True
except Exception as e:
logger.error(f"Failed to connect to database: {e}")
return False
def close(self):
"""Close database connection"""
if self.conn:
self.conn.close()
def execute_migration(self, name, sql_statements):
"""Execute a migration with proper error handling"""
logger.info(f"Running migration: {name}")
cursor = self.conn.cursor()
try:
for statement in sql_statements:
if statement.strip():
cursor.execute(statement)
self.conn.commit()
logger.info(f"{name} completed successfully")
return True
except Exception as e:
self.conn.rollback()
logger.error(f"{name} failed: {e}")
return False
finally:
cursor.close()
def check_column_exists(self, table_name, column_name):
"""Check if a column exists in a table"""
cursor = self.conn.cursor()
cursor.execute("""
SELECT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = %s AND column_name = %s
)
""", (table_name, column_name))
exists = cursor.fetchone()[0]
cursor.close()
return exists
def check_table_exists(self, table_name):
"""Check if a table exists"""
cursor = self.conn.cursor()
cursor.execute("""
SELECT EXISTS (
SELECT 1 FROM information_schema.tables
WHERE table_name = %s
)
""", (table_name,))
exists = cursor.fetchone()[0]
cursor.close()
return exists
def check_enum_value_exists(self, enum_name, value):
"""Check if an enum value exists"""
cursor = self.conn.cursor()
cursor.execute("""
SELECT EXISTS (
SELECT 1 FROM pg_enum
WHERE enumlabel = %s
AND enumtypid = (SELECT oid FROM pg_type WHERE typname = %s)
)
""", (value, enum_name))
exists = cursor.fetchone()[0]
cursor.close()
return exists
def run_all_migrations(self):
"""Run all migrations in order"""
if not self.connect():
return False
success = True
# 1. Add company.updated_at
if not self.check_column_exists('company', 'updated_at'):
success &= self.execute_migration("Add company.updated_at", [
"""
ALTER TABLE company
ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
""",
"""
UPDATE company SET updated_at = created_at WHERE updated_at IS NULL;
"""
])
# 2. Add user columns for 2FA and avatar
if not self.check_column_exists('user', 'two_factor_enabled'):
success &= self.execute_migration("Add user 2FA and avatar columns", [
"""
ALTER TABLE "user"
ADD COLUMN two_factor_enabled BOOLEAN DEFAULT FALSE,
ADD COLUMN two_factor_secret VARCHAR(32),
ADD COLUMN avatar_url VARCHAR(255);
"""
])
# 3. Create company_invitation table
if not self.check_table_exists('company_invitation'):
success &= self.execute_migration("Create company_invitation table", [
"""
CREATE TABLE company_invitation (
id SERIAL PRIMARY KEY,
company_id INTEGER NOT NULL REFERENCES company(id),
email VARCHAR(255) NOT NULL,
role VARCHAR(50) NOT NULL,
token VARCHAR(255) UNIQUE NOT NULL,
invited_by_id INTEGER REFERENCES "user"(id),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
expires_at TIMESTAMP NOT NULL,
used_at TIMESTAMP,
used_by_id INTEGER REFERENCES "user"(id)
);
""",
"""
CREATE INDEX idx_invitation_token ON company_invitation(token);
""",
"""
CREATE INDEX idx_invitation_company ON company_invitation(company_id);
""",
"""
CREATE INDEX idx_invitation_email ON company_invitation(email);
"""
])
# 4. Add user_preferences columns
if self.check_table_exists('user_preferences'):
columns_to_add = [
('theme', 'VARCHAR(20) DEFAULT \'light\''),
('language', 'VARCHAR(10) DEFAULT \'en\''),
('timezone', 'VARCHAR(50) DEFAULT \'UTC\''),
('date_format', 'VARCHAR(20) DEFAULT \'YYYY-MM-DD\''),
('time_format', 'VARCHAR(10) DEFAULT \'24h\''),
('week_start', 'INTEGER DEFAULT 1'),
('show_weekends', 'BOOLEAN DEFAULT TRUE'),
('compact_mode', 'BOOLEAN DEFAULT FALSE'),
('email_notifications', 'BOOLEAN DEFAULT TRUE'),
('push_notifications', 'BOOLEAN DEFAULT FALSE'),
('task_reminders', 'BOOLEAN DEFAULT TRUE'),
('daily_summary', 'BOOLEAN DEFAULT FALSE'),
('weekly_report', 'BOOLEAN DEFAULT TRUE'),
('mention_notifications', 'BOOLEAN DEFAULT TRUE'),
('task_assigned_notifications', 'BOOLEAN DEFAULT TRUE'),
('task_completed_notifications', 'BOOLEAN DEFAULT FALSE'),
('sound_enabled', 'BOOLEAN DEFAULT TRUE'),
('keyboard_shortcuts', 'BOOLEAN DEFAULT TRUE'),
('auto_start_timer', 'BOOLEAN DEFAULT FALSE'),
('idle_time_detection', 'BOOLEAN DEFAULT TRUE'),
('pomodoro_enabled', 'BOOLEAN DEFAULT FALSE'),
('pomodoro_duration', 'INTEGER DEFAULT 25'),
('pomodoro_break', 'INTEGER DEFAULT 5')
]
for col_name, col_def in columns_to_add:
if not self.check_column_exists('user_preferences', col_name):
success &= self.execute_migration(f"Add user_preferences.{col_name}", [
f'ALTER TABLE user_preferences ADD COLUMN {col_name} {col_def};'
])
# 5. Add user_dashboard columns
if self.check_table_exists('user_dashboard'):
if not self.check_column_exists('user_dashboard', 'layout'):
success &= self.execute_migration("Add user_dashboard layout columns", [
"""
ALTER TABLE user_dashboard
ADD COLUMN layout JSON DEFAULT '{}',
ADD COLUMN is_locked BOOLEAN DEFAULT FALSE;
"""
])
# 6. Add company_work_config columns
if self.check_table_exists('company_work_config'):
columns_to_add = [
('standard_hours_per_day', 'FLOAT DEFAULT 8.0'),
('standard_hours_per_week', 'FLOAT DEFAULT 40.0'),
('overtime_rate', 'FLOAT DEFAULT 1.5'),
('double_time_enabled', 'BOOLEAN DEFAULT FALSE'),
('double_time_threshold', 'FLOAT DEFAULT 12.0'),
('double_time_rate', 'FLOAT DEFAULT 2.0'),
('weekly_overtime_threshold', 'FLOAT DEFAULT 40.0'),
('weekly_overtime_rate', 'FLOAT DEFAULT 1.5'),
('created_at', 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'),
('updated_at', 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP')
]
for col_name, col_def in columns_to_add:
if not self.check_column_exists('company_work_config', col_name):
success &= self.execute_migration(f"Add company_work_config.{col_name}", [
f'ALTER TABLE company_work_config ADD COLUMN {col_name} {col_def};'
])
# 7. Add company_settings columns
if self.check_table_exists('company_settings'):
columns_to_add = [
('work_week_start', 'INTEGER DEFAULT 1'),
('work_days', 'VARCHAR(20) DEFAULT \'1,2,3,4,5\''),
('time_tracking_mode', 'VARCHAR(20) DEFAULT \'flexible\''),
('allow_manual_time', 'BOOLEAN DEFAULT TRUE'),
('require_project_selection', 'BOOLEAN DEFAULT TRUE'),
('allow_future_entries', 'BOOLEAN DEFAULT FALSE'),
('max_hours_per_entry', 'FLOAT DEFAULT 24.0'),
('min_hours_per_entry', 'FLOAT DEFAULT 0.0'),
('round_time_to', 'INTEGER DEFAULT 1'),
('auto_break_deduction', 'BOOLEAN DEFAULT FALSE'),
('allow_overlapping_entries', 'BOOLEAN DEFAULT FALSE'),
('require_daily_notes', 'BOOLEAN DEFAULT FALSE'),
('enable_tasks', 'BOOLEAN DEFAULT TRUE'),
('enable_projects', 'BOOLEAN DEFAULT TRUE'),
('enable_teams', 'BOOLEAN DEFAULT TRUE'),
('enable_reports', 'BOOLEAN DEFAULT TRUE'),
('enable_invoicing', 'BOOLEAN DEFAULT FALSE'),
('enable_client_access', 'BOOLEAN DEFAULT FALSE'),
('default_currency', 'VARCHAR(3) DEFAULT \'USD\''),
('created_at', 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP'),
('updated_at', 'TIMESTAMP DEFAULT CURRENT_TIMESTAMP')
]
for col_name, col_def in columns_to_add:
if not self.check_column_exists('company_settings', col_name):
success &= self.execute_migration(f"Add company_settings.{col_name}", [
f'ALTER TABLE company_settings ADD COLUMN {col_name} {col_def};'
])
# 8. Add dashboard_widget columns
if self.check_table_exists('dashboard_widget'):
if not self.check_column_exists('dashboard_widget', 'config'):
success &= self.execute_migration("Add dashboard_widget config columns", [
"""
ALTER TABLE dashboard_widget
ADD COLUMN config JSON DEFAULT '{}',
ADD COLUMN is_visible BOOLEAN DEFAULT TRUE;
"""
])
# 9. Update WorkRegion enum
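# Note: these ALTER TYPE ... ADD VALUE statements run inside a transaction; on PostgreSQL versions before 12 that is not allowed and may fail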
if not self.check_enum_value_exists('workregion', 'GERMANY'):
success &= self.execute_migration("Add GERMANY to WorkRegion enum", [
"""
ALTER TYPE workregion ADD VALUE IF NOT EXISTS 'GERMANY';
"""
])
# 10. Update TaskStatus enum
if not self.check_enum_value_exists('taskstatus', 'ARCHIVED'):
success &= self.execute_migration("Add ARCHIVED to TaskStatus enum", [
"""
ALTER TYPE taskstatus ADD VALUE IF NOT EXISTS 'ARCHIVED';
"""
])
# 11. Update WidgetType enum
widget_types_to_add = [
'REVENUE_CHART', 'EXPENSE_CHART', 'PROFIT_CHART', 'CASH_FLOW',
'INVOICE_STATUS', 'CLIENT_LIST', 'PROJECT_BUDGET', 'TEAM_CAPACITY',
'SPRINT_BURNDOWN', 'VELOCITY_CHART', 'BACKLOG_STATUS', 'RELEASE_TIMELINE',
'CODE_COMMITS', 'BUILD_STATUS', 'DEPLOYMENT_HISTORY', 'ERROR_RATE',
'SYSTEM_HEALTH', 'USER_ACTIVITY', 'SECURITY_ALERTS', 'AUDIT_LOG'
]
for widget_type in widget_types_to_add:
if not self.check_enum_value_exists('widgettype', widget_type):
success &= self.execute_migration(f"Add {widget_type} to WidgetType enum", [
f"ALTER TYPE widgettype ADD VALUE IF NOT EXISTS '{widget_type}';"
])
self.close()
if success:
logger.info("\n✅ All migrations completed successfully!")
else:
logger.error("\n❌ Some migrations failed. Check the logs above.")
return success
def main():
"""Main migration function"""
# Get database URL from environment
database_url = os.environ.get('DATABASE_URL')
if not database_url:
logger.error("DATABASE_URL environment variable not set")
return 1
# Run migrations
migration = PostgresMigration(database_url)
success = migration.run_all_migrations()
return 0 if success else 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,8 +0,0 @@
-- Remove unused columns from user_preferences table
-- These columns were defined in the model but never used in the application
ALTER TABLE user_preferences
DROP COLUMN IF EXISTS email_daily_summary,
DROP COLUMN IF EXISTS email_notifications,
DROP COLUMN IF EXISTS email_weekly_summary,
DROP COLUMN IF EXISTS default_project_id;

View File

@@ -1,161 +0,0 @@
#!/usr/bin/env python3
"""
PostgreSQL-only migration runner
Manages migration state and runs migrations in order
"""
import os
import sys
import json
import subprocess
from datetime import datetime
from pathlib import Path
# Migration state file
MIGRATION_STATE_FILE = '/data/postgres_migrations_state.json'
# List of PostgreSQL migrations in order
POSTGRES_MIGRATIONS = [
'postgres_only_migration.py', # Main migration from commit 4214e88 onward
'add_note_sharing.sql', # Add note sharing functionality
'remove_email_preferences.sql', # Remove unused email preference columns
'add_time_preferences.sql', # Add time formatting and rounding preferences
]
def load_migration_state():
"""Load the migration state from file"""
if os.path.exists(MIGRATION_STATE_FILE):
try:
with open(MIGRATION_STATE_FILE, 'r') as f:
return json.load(f)
except:
return {}
return {}
def save_migration_state(state):
"""Save the migration state to file"""
os.makedirs(os.path.dirname(MIGRATION_STATE_FILE), exist_ok=True)
with open(MIGRATION_STATE_FILE, 'w') as f:
json.dump(state, f, indent=2)
def run_migration(migration_file):
"""Run a single migration script"""
script_path = os.path.join(os.path.dirname(__file__), migration_file)
if not os.path.exists(script_path):
print(f"⚠️ Migration {migration_file} not found, skipping...")
return False
print(f"\n🔄 Running migration: {migration_file}")
try:
# Check if it's a SQL file
if migration_file.endswith('.sql'):
# Run SQL file using psql
# Try to parse DATABASE_URL first, fall back to individual env vars
database_url = os.environ.get('DATABASE_URL')
if database_url:
# Parse DATABASE_URL: postgresql://user:password@host:port/dbname
from urllib.parse import urlparse
parsed = urlparse(database_url)
db_host = parsed.hostname or 'db'
db_port = parsed.port or 5432
db_name = parsed.path.lstrip('/') or 'timetrack'
db_user = parsed.username or 'timetrack'
db_password = parsed.password or 'timetrack'
else:
db_host = os.environ.get('POSTGRES_HOST', 'db')
db_name = os.environ.get('POSTGRES_DB', 'timetrack')
db_user = os.environ.get('POSTGRES_USER', 'timetrack')
db_password = os.environ.get('POSTGRES_PASSWORD', 'timetrack')
result = subprocess.run(
['psql', '-h', db_host, '-U', db_user, '-d', db_name, '-f', script_path],
capture_output=True,
text=True,
env={**os.environ, 'PGPASSWORD': db_password}
)
else:
# Run Python migration script
result = subprocess.run(
[sys.executable, script_path],
capture_output=True,
text=True
)
if result.returncode == 0:
print(f"{migration_file} completed successfully")
if result.stdout:
print(result.stdout)
return True
else:
print(f"{migration_file} failed with return code {result.returncode}")
if result.stderr:
print(f"Error output: {result.stderr}")
if result.stdout:
print(f"Standard output: {result.stdout}")
return False
except Exception as e:
print(f"❌ Error running {migration_file}: {e}")
return False
def main():
"""Run all PostgreSQL migrations"""
print("=== PostgreSQL Database Migrations ===")
print(f"Running {len(POSTGRES_MIGRATIONS)} migrations...")
# Load migration state
state = load_migration_state()
success_count = 0
failed_count = 0
skipped_count = 0
for migration in POSTGRES_MIGRATIONS:
# Check if migration has already been run successfully
if state.get(migration, {}).get('status') == 'success':
print(f"\n⏭️ Skipping {migration} (already completed)")
skipped_count += 1
continue
# Run the migration
success = run_migration(migration)
# Update state
state[migration] = {
'status': 'success' if success else 'failed',
'timestamp': datetime.now().isoformat(),
'attempts': state.get(migration, {}).get('attempts', 0) + 1
}
if success:
success_count += 1
else:
failed_count += 1
# Save state after each migration
save_migration_state(state)
# Summary
print("\n" + "="*50)
print("PostgreSQL Migration Summary:")
print(f"✅ Successful: {success_count}")
print(f"❌ Failed: {failed_count}")
print(f"⏭️ Skipped: {skipped_count}")
print(f"📊 Total: {len(POSTGRES_MIGRATIONS)}")
if failed_count > 0:
print("\n⚠️ Some migrations failed. Check the logs above for details.")
return 1
else:
print("\n✨ All PostgreSQL migrations completed successfully!")
return 0
if __name__ == "__main__":
sys.exit(main())

24
migrations/script.py.mako Normal file
View File

@@ -0,0 +1,24 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

View File

@@ -0,0 +1,251 @@
"""Initial migration
Revision ID: c72667903a91
Revises:
Create Date: 2025-07-10 08:35:55.412151
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = 'c72667903a91'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
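# Autogenerate emits drop operations for tables and columns that exist in the database but are not defined in the SQLAlchemy models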
op.drop_table('kanban_card')
op.drop_table('kanban_column')
op.drop_table('kanban_board')
op.drop_index(op.f('idx_invitation_company'), table_name='company_invitation')
op.drop_index(op.f('idx_invitation_email'), table_name='company_invitation')
op.drop_index(op.f('idx_invitation_expires'), table_name='company_invitation')
op.drop_index(op.f('idx_invitation_token'), table_name='company_invitation')
op.drop_constraint(op.f('company_settings_created_by_id_fkey'), 'company_settings', type_='foreignkey')
op.drop_column('company_settings', 'allow_team_visibility_comments')
op.drop_column('company_settings', 'round_time_to')
op.drop_column('company_settings', 'time_tracking_mode')
op.drop_column('company_settings', 'enable_projects')
op.drop_column('company_settings', 'default_comment_visibility')
op.drop_column('company_settings', 'require_task_assignment')
op.drop_column('company_settings', 'enable_teams')
op.drop_column('company_settings', 'require_project_selection')
op.drop_column('company_settings', 'restrict_project_access_by_team')
op.drop_column('company_settings', 'enable_invoicing')
op.drop_column('company_settings', 'auto_break_deduction')
op.drop_column('company_settings', 'allow_task_creation_by_members')
op.drop_column('company_settings', 'created_by_id')
op.drop_column('company_settings', 'enable_reports')
op.drop_column('company_settings', 'require_daily_notes')
op.drop_column('company_settings', 'min_hours_per_entry')
op.drop_column('company_settings', 'default_currency')
op.drop_column('company_settings', 'allow_manual_time')
op.drop_constraint(op.f('uq_company_work_config'), 'company_work_config', type_='unique')
op.drop_index(op.f('idx_note_folder'), table_name='note')
op.drop_index(op.f('idx_note_folder_company'), table_name='note_folder')
op.drop_index(op.f('idx_note_folder_created_by'), table_name='note_folder')
op.drop_index(op.f('idx_note_folder_parent_path'), table_name='note_folder')
op.drop_constraint(op.f('note_link_target_note_id_fkey'), 'note_link', type_='foreignkey')
op.drop_constraint(op.f('note_link_source_note_id_fkey'), 'note_link', type_='foreignkey')
op.create_foreign_key(None, 'note_link', 'note', ['target_note_id'], ['id'], ondelete='CASCADE')
op.create_foreign_key(None, 'note_link', 'note', ['source_note_id'], ['id'], ondelete='CASCADE')
op.drop_index(op.f('idx_note_share_created_by'), table_name='note_share')
op.drop_index(op.f('idx_note_share_note_id'), table_name='note_share')
op.drop_index(op.f('idx_note_share_token'), table_name='note_share')
op.drop_table_comment(
'note_share',
existing_comment='Public sharing links for notes with optional password protection and view limits',
schema=None
)
op.alter_column('project_category', 'name',
existing_type=sa.VARCHAR(length=100),
type_=sa.String(length=50),
existing_nullable=False)
op.alter_column('project_category', 'description',
existing_type=sa.TEXT(),
type_=sa.String(length=255),
existing_nullable=True)
op.drop_constraint(op.f('project_category_created_by_id_fkey'), 'project_category', type_='foreignkey')
op.drop_column('project_category', 'icon')
op.drop_column('project_category', 'created_by_id')
op.drop_column('project_category', 'updated_at')
op.alter_column('sub_task', 'status',
existing_type=postgresql.ENUM('TODO', 'IN_PROGRESS', 'IN_REVIEW', 'DONE', 'CANCELLED', 'ARCHIVED', 'In Progress', 'To Do', 'Cancelled', 'In Review', 'Archived', 'Done', name='taskstatus'),
nullable=True,
existing_server_default=sa.text("'TODO'::taskstatus"))
op.alter_column('task', 'status',
existing_type=postgresql.ENUM('TODO', 'IN_PROGRESS', 'IN_REVIEW', 'DONE', 'CANCELLED', 'ARCHIVED', 'In Progress', 'To Do', 'Cancelled', 'In Review', 'Archived', 'Done', name='taskstatus'),
nullable=True,
existing_server_default=sa.text("'TODO'::taskstatus"))
op.drop_index(op.f('idx_user_default_dashboard'), table_name='user_dashboard')
op.drop_constraint(op.f('user_preferences_default_project_id_fkey'), 'user_preferences', type_='foreignkey')
op.drop_column('user_preferences', 'daily_summary')
op.drop_column('user_preferences', 'mention_notifications')
op.drop_column('user_preferences', 'pomodoro_duration')
op.drop_column('user_preferences', 'keyboard_shortcuts')
op.drop_column('user_preferences', 'week_start')
op.drop_column('user_preferences', 'default_project_id')
op.drop_column('user_preferences', 'sound_enabled')
op.drop_column('user_preferences', 'email_notifications')
op.drop_column('user_preferences', 'task_assigned_notifications')
op.drop_column('user_preferences', 'pomodoro_enabled')
op.drop_column('user_preferences', 'email_daily_summary')
op.drop_column('user_preferences', 'email_weekly_summary')
op.drop_column('user_preferences', 'task_reminders')
op.drop_column('user_preferences', 'auto_start_timer')
op.drop_column('user_preferences', 'weekly_report')
op.drop_column('user_preferences', 'push_notifications')
op.drop_column('user_preferences', 'compact_mode')
op.drop_column('user_preferences', 'pomodoro_break')
op.drop_column('user_preferences', 'idle_time_detection')
op.drop_column('user_preferences', 'task_completed_notifications')
op.drop_column('user_preferences', 'show_weekends')
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('user_preferences', sa.Column('show_weekends', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('task_completed_notifications', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('idle_time_detection', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('pomodoro_break', sa.INTEGER(), server_default=sa.text('5'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('compact_mode', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('push_notifications', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('weekly_report', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('auto_start_timer', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('task_reminders', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('email_weekly_summary', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('email_daily_summary', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('pomodoro_enabled', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('task_assigned_notifications', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('email_notifications', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('sound_enabled', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('default_project_id', sa.INTEGER(), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('week_start', sa.INTEGER(), server_default=sa.text('1'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('keyboard_shortcuts', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('pomodoro_duration', sa.INTEGER(), server_default=sa.text('25'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('mention_notifications', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('user_preferences', sa.Column('daily_summary', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.create_foreign_key(op.f('user_preferences_default_project_id_fkey'), 'user_preferences', 'project', ['default_project_id'], ['id'])
op.create_index(op.f('idx_user_default_dashboard'), 'user_dashboard', ['user_id', 'is_default'], unique=False)
op.alter_column('task', 'status',
existing_type=postgresql.ENUM('TODO', 'IN_PROGRESS', 'IN_REVIEW', 'DONE', 'CANCELLED', 'ARCHIVED', 'In Progress', 'To Do', 'Cancelled', 'In Review', 'Archived', 'Done', name='taskstatus'),
nullable=False,
existing_server_default=sa.text("'TODO'::taskstatus"))
op.alter_column('sub_task', 'status',
existing_type=postgresql.ENUM('TODO', 'IN_PROGRESS', 'IN_REVIEW', 'DONE', 'CANCELLED', 'ARCHIVED', 'In Progress', 'To Do', 'Cancelled', 'In Review', 'Archived', 'Done', name='taskstatus'),
nullable=False,
existing_server_default=sa.text("'TODO'::taskstatus"))
op.add_column('project_category', sa.Column('updated_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True))
op.add_column('project_category', sa.Column('created_by_id', sa.INTEGER(), autoincrement=False, nullable=False))
op.add_column('project_category', sa.Column('icon', sa.VARCHAR(length=50), autoincrement=False, nullable=True))
op.create_foreign_key(op.f('project_category_created_by_id_fkey'), 'project_category', 'user', ['created_by_id'], ['id'])
op.alter_column('project_category', 'description',
existing_type=sa.String(length=255),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('project_category', 'name',
existing_type=sa.String(length=50),
type_=sa.VARCHAR(length=100),
existing_nullable=False)
op.create_table_comment(
'note_share',
'Public sharing links for notes with optional password protection and view limits',
existing_comment=None,
schema=None
)
op.create_index(op.f('idx_note_share_token'), 'note_share', ['token'], unique=False)
op.create_index(op.f('idx_note_share_note_id'), 'note_share', ['note_id'], unique=False)
op.create_index(op.f('idx_note_share_created_by'), 'note_share', ['created_by_id'], unique=False)
op.drop_constraint(None, 'note_link', type_='foreignkey')
op.drop_constraint(None, 'note_link', type_='foreignkey')
op.create_foreign_key(op.f('note_link_source_note_id_fkey'), 'note_link', 'note', ['source_note_id'], ['id'])
op.create_foreign_key(op.f('note_link_target_note_id_fkey'), 'note_link', 'note', ['target_note_id'], ['id'])
op.create_index(op.f('idx_note_folder_parent_path'), 'note_folder', ['parent_path'], unique=False)
op.create_index(op.f('idx_note_folder_created_by'), 'note_folder', ['created_by_id'], unique=False)
op.create_index(op.f('idx_note_folder_company'), 'note_folder', ['company_id'], unique=False)
op.create_index(op.f('idx_note_folder'), 'note', ['folder'], unique=False)
op.create_unique_constraint(op.f('uq_company_work_config'), 'company_work_config', ['company_id'])
op.add_column('company_settings', sa.Column('allow_manual_time', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('default_currency', sa.VARCHAR(length=3), server_default=sa.text("'USD'::character varying"), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('min_hours_per_entry', postgresql.DOUBLE_PRECISION(precision=53), server_default=sa.text('0.0'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('require_daily_notes', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('enable_reports', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('created_by_id', sa.INTEGER(), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('allow_task_creation_by_members', sa.BOOLEAN(), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('auto_break_deduction', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('enable_invoicing', sa.BOOLEAN(), server_default=sa.text('false'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('restrict_project_access_by_team', sa.BOOLEAN(), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('require_project_selection', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('enable_teams', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('require_task_assignment', sa.BOOLEAN(), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('default_comment_visibility', postgresql.ENUM('TEAM', 'COMPANY', name='commentvisibility'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('enable_projects', sa.BOOLEAN(), server_default=sa.text('true'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('time_tracking_mode', sa.VARCHAR(length=20), server_default=sa.text("'flexible'::character varying"), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('round_time_to', sa.INTEGER(), server_default=sa.text('1'), autoincrement=False, nullable=True))
op.add_column('company_settings', sa.Column('allow_team_visibility_comments', sa.BOOLEAN(), autoincrement=False, nullable=True))
op.create_foreign_key(op.f('company_settings_created_by_id_fkey'), 'company_settings', 'user', ['created_by_id'], ['id'])
op.create_index(op.f('idx_invitation_token'), 'company_invitation', ['token'], unique=False)
op.create_index(op.f('idx_invitation_expires'), 'company_invitation', ['expires_at'], unique=False)
op.create_index(op.f('idx_invitation_email'), 'company_invitation', ['email'], unique=False)
op.create_index(op.f('idx_invitation_company'), 'company_invitation', ['company_id'], unique=False)
op.create_table('kanban_board',
sa.Column('id', sa.INTEGER(), server_default=sa.text("nextval('kanban_board_id_seq'::regclass)"), autoincrement=True, nullable=False),
sa.Column('name', sa.VARCHAR(length=100), autoincrement=False, nullable=False),
sa.Column('description', sa.TEXT(), autoincrement=False, nullable=True),
sa.Column('company_id', sa.INTEGER(), autoincrement=False, nullable=False),
sa.Column('is_active', sa.BOOLEAN(), autoincrement=False, nullable=True),
sa.Column('is_default', sa.BOOLEAN(), autoincrement=False, nullable=True),
sa.Column('created_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('updated_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('created_by_id', sa.INTEGER(), autoincrement=False, nullable=False),
sa.ForeignKeyConstraint(['company_id'], ['company.id'], name='kanban_board_company_id_fkey'),
sa.ForeignKeyConstraint(['created_by_id'], ['user.id'], name='kanban_board_created_by_id_fkey'),
sa.PrimaryKeyConstraint('id', name='kanban_board_pkey'),
sa.UniqueConstraint('company_id', 'name', name='uq_kanban_board_name_per_company'),
postgresql_ignore_search_path=False
)
op.create_table('kanban_column',
sa.Column('id', sa.INTEGER(), server_default=sa.text("nextval('kanban_column_id_seq'::regclass)"), autoincrement=True, nullable=False),
sa.Column('name', sa.VARCHAR(length=100), autoincrement=False, nullable=False),
sa.Column('description', sa.TEXT(), autoincrement=False, nullable=True),
sa.Column('position', sa.INTEGER(), autoincrement=False, nullable=False),
sa.Column('color', sa.VARCHAR(length=7), autoincrement=False, nullable=True),
sa.Column('wip_limit', sa.INTEGER(), autoincrement=False, nullable=True),
sa.Column('is_active', sa.BOOLEAN(), autoincrement=False, nullable=True),
sa.Column('board_id', sa.INTEGER(), autoincrement=False, nullable=False),
sa.Column('created_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('updated_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.ForeignKeyConstraint(['board_id'], ['kanban_board.id'], name='kanban_column_board_id_fkey'),
sa.PrimaryKeyConstraint('id', name='kanban_column_pkey'),
sa.UniqueConstraint('board_id', 'name', name='uq_kanban_column_name_per_board'),
postgresql_ignore_search_path=False
)
op.create_table('kanban_card',
sa.Column('id', sa.INTEGER(), autoincrement=True, nullable=False),
sa.Column('title', sa.VARCHAR(length=200), autoincrement=False, nullable=False),
sa.Column('description', sa.TEXT(), autoincrement=False, nullable=True),
sa.Column('position', sa.INTEGER(), autoincrement=False, nullable=False),
sa.Column('color', sa.VARCHAR(length=7), autoincrement=False, nullable=True),
sa.Column('is_active', sa.BOOLEAN(), autoincrement=False, nullable=True),
sa.Column('column_id', sa.INTEGER(), autoincrement=False, nullable=False),
sa.Column('project_id', sa.INTEGER(), autoincrement=False, nullable=True),
sa.Column('task_id', sa.INTEGER(), autoincrement=False, nullable=True),
sa.Column('assigned_to_id', sa.INTEGER(), autoincrement=False, nullable=True),
sa.Column('due_date', sa.DATE(), autoincrement=False, nullable=True),
sa.Column('completed_date', sa.DATE(), autoincrement=False, nullable=True),
sa.Column('created_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('updated_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True),
sa.Column('created_by_id', sa.INTEGER(), autoincrement=False, nullable=False),
sa.ForeignKeyConstraint(['assigned_to_id'], ['user.id'], name=op.f('kanban_card_assigned_to_id_fkey')),
sa.ForeignKeyConstraint(['column_id'], ['kanban_column.id'], name=op.f('kanban_card_column_id_fkey')),
sa.ForeignKeyConstraint(['created_by_id'], ['user.id'], name=op.f('kanban_card_created_by_id_fkey')),
sa.ForeignKeyConstraint(['project_id'], ['project.id'], name=op.f('kanban_card_project_id_fkey')),
sa.ForeignKeyConstraint(['task_id'], ['task.id'], name=op.f('kanban_card_task_id_fkey')),
sa.PrimaryKeyConstraint('id', name=op.f('kanban_card_pkey'))
)
# ### end Alembic commands ###

View File

@@ -5,7 +5,7 @@ Task-related models
from datetime import datetime
from . import db
from .enums import TaskStatus, TaskPriority, CommentVisibility, Role
from .project import Project
class Task(db.Model):
    """Task model for project management"""

File diff suppressed because it is too large

View File

@@ -13,6 +13,7 @@ numpy==1.26.4
 pandas==1.5.3
 xlsxwriter==3.1.2
 Flask-Mail==0.9.1
+Flask-Migrate==3.1.0
 psycopg2-binary==2.9.9
 markdown==3.4.4
 PyYAML==6.0.1
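
Pinning Flask-Migrate in requirements only installs the package; the `flask db` commands used by the startup scripts exist only once the app creates a `Migrate` instance. A minimal sketch of that wiring, assuming the usual module-level objects in app.py (the names and the fallback URI are assumptions, not copied from the commit):

```python
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
    'DATABASE_URL', 'sqlite:///timetrack.db'  # assumed fallback for local runs
)
db = SQLAlchemy(app)
migrate = Migrate(app, db)  # registers the `flask db` CLI group
```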

View File

@@ -163,6 +163,15 @@ def delete_project(project_id):
 # Delete all related data in the correct order
+# Delete time entries first (they reference tasks)
+# Delete by project_id
+TimeEntry.query.filter_by(project_id=project_id).delete()
+# Also delete time entries that reference tasks in this project
+TimeEntry.query.filter(TimeEntry.task_id.in_(
+    db.session.query(Task.id).filter(Task.project_id == project_id)
+)).delete(synchronize_session=False)
 # Delete comments on tasks in this project
 Comment.query.filter(Comment.task_id.in_(
     db.session.query(Task.id).filter(Task.project_id == project_id)
@@ -182,15 +191,12 @@ def delete_project(project_id):
     )
 ).delete(synchronize_session=False)
-# Delete tasks
+# Delete tasks (after all references are removed)
 Task.query.filter_by(project_id=project_id).delete()
 # Delete sprints
 Sprint.query.filter_by(project_id=project_id).delete()
-# Delete time entries
-TimeEntry.query.filter_by(project_id=project_id).delete()
 # Finally, delete the project
 project_repo.delete(project)
 db.session.commit()

40
security_headers.py Normal file
View File

@@ -0,0 +1,40 @@
"""
Security headers middleware for Flask.
Add this to ensure secure form submission and prevent security warnings.
"""
from flask import current_app, request
def add_security_headers(response):
"""Add security headers to all responses."""
# Force HTTPS for all resources
    if request.is_secure or not current_app.debug:
# Strict Transport Security - force HTTPS for 1 year
response.headers['Strict-Transport-Security'] = 'max-age=31536000; includeSubDomains'
# Content Security Policy - allow forms to submit only over HTTPS
# Adjust this based on your needs
csp = (
"default-src 'self' https:; "
"script-src 'self' 'unsafe-inline' 'unsafe-eval' https:; "
"style-src 'self' 'unsafe-inline' https:; "
"img-src 'self' data: https:; "
"font-src 'self' data: https:; "
"form-action 'self' https:; " # Forms can only submit to HTTPS
"upgrade-insecure-requests; " # Upgrade any HTTP requests to HTTPS
)
response.headers['Content-Security-Policy'] = csp
# Other security headers
response.headers['X-Content-Type-Options'] = 'nosniff'
response.headers['X-Frame-Options'] = 'SAMEORIGIN'
response.headers['X-XSS-Protection'] = '1; mode=block'
response.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin'
return response
def init_security(app):
"""Initialize security headers for the Flask app."""
app.after_request(add_security_headers)
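
A sketch of how the middleware might be wired up during app setup (assuming a module-level `app` in app.py; this call is not shown in the diff itself):

```python
from flask import Flask

from security_headers import init_security

app = Flask(__name__)
init_security(app)  # every response now passes through add_security_headers
```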

View File

@@ -11,43 +11,59 @@ while ! pg_isready -h db -p 5432 -U "$POSTGRES_USER" > /dev/null 2>&1; do
 done
 echo "PostgreSQL is ready!"
-# SQLite to PostgreSQL migration is now handled by the migration system below
-# Initialize database tables if they don't exist
-echo "Ensuring database tables exist..."
-python -c "
-from app import app, db
-with app.app_context():
-    db.create_all()
-    print('Database tables created/verified')
-"
-# Run all database schema migrations
-echo ""
-echo "=== Running Database Schema Migrations ==="
-if [ -d "migrations" ] && [ -f "migrations/run_all_db_migrations.py" ]; then
-    echo "Checking and applying database schema updates..."
-    python migrations/run_all_db_migrations.py
-    if [ $? -ne 0 ]; then
-        echo "⚠️ Some database migrations had issues, but continuing..."
-    fi
-else
-    echo "No migrations directory found, skipping database migrations..."
-fi
-# Run code migrations to update code for model changes
-echo ""
-echo "=== Running Code Migrations ==="
-echo "Code migrations temporarily disabled for debugging"
-# if [ -d "migrations" ] && [ -f "migrations/run_code_migrations.py" ]; then
-#     echo "Checking and applying code updates for model changes..."
-#     python migrations/run_code_migrations.py
-#     if [ $? -ne 0 ]; then
-#         echo "⚠️ Code migrations had issues, but continuing..."
-#     fi
-# else
-#     echo "No migrations directory found, skipping code migrations..."
-# fi
+# Run Flask-Migrate migrations
+echo ""
+echo "=== Running Database Migrations ==="
+export FLASK_APP=app.py
+# Check if migrations directory exists
+if [ -d "migrations" ]; then
+    echo "Applying database migrations..."
+    flask db upgrade
+    if [ $? -ne 0 ]; then
+        echo "❌ Migration failed! Check the logs above."
+        exit 1
+    fi
+    echo "✅ Database migrations completed successfully"
+else
+    echo "⚠️ No migrations directory found. Initializing Flask-Migrate..."
+    # Use Docker-friendly initialization (no Git required)
+    python docker_migrate_init.py
+    if [ $? -ne 0 ]; then
+        echo "❌ Migration initialization failed!"
+        exit 1
+    fi
+    # Check if database has existing tables
+    python -c "
+from app import app, db
+with app.app_context():
+    inspector = db.inspect(db.engine)
+    tables = [t for t in inspector.get_table_names() if t != 'alembic_version']
+    if tables:
+        print('has_tables')
+" > /tmp/db_check.txt
+    if grep -q "has_tables" /tmp/db_check.txt 2>/dev/null; then
+        echo "📊 Existing database detected. Marking as current..."
+        flask db stamp head
+        echo "✅ Database marked as current"
+    else
+        echo "🆕 Empty database detected. Creating tables..."
+        flask db upgrade
+        echo "✅ Database tables created"
+    fi
+    rm -f /tmp/db_check.txt
+fi
+# Legacy migration support (can be removed after full transition)
+if [ -f "migrations_old/run_all_db_migrations.py" ]; then
+    echo ""
+    echo "=== Checking Legacy Migrations ==="
+    echo "Found old migration system. Consider removing after confirming Flask-Migrate is working."
+fi
 # Start the Flask application with gunicorn
 echo ""

View File

@@ -11,26 +11,66 @@ while ! pg_isready -h db -p 5432 -U "$POSTGRES_USER" > /dev/null 2>&1; do
 done
 echo "PostgreSQL is ready!"
-# Initialize database tables if they don't exist
-echo "Ensuring database tables exist..."
-python -c "
-from app import app, db
-with app.app_context():
-    db.create_all()
-    print('Database tables created/verified')
-"
-# Run PostgreSQL-only migrations
-echo ""
-echo "=== Running PostgreSQL Migrations ==="
-if [ -f "migrations/run_postgres_migrations.py" ]; then
-    echo "Applying PostgreSQL schema updates..."
-    python migrations/run_postgres_migrations.py
-    if [ $? -ne 0 ]; then
-        echo "⚠️ Some migrations failed, but continuing..."
-    fi
-else
-    echo "PostgreSQL migration runner not found, skipping..."
-fi
+# Run Flask-Migrate migrations
+echo ""
+echo "=== Running Database Migrations ==="
+export FLASK_APP=app.py
+# Check if migrations directory exists
+if [ -d "migrations" ]; then
+    echo "Applying database migrations..."
+    flask db upgrade
+    if [ $? -ne 0 ]; then
+        echo "❌ Migration failed! Check the logs above."
+        exit 1
+    fi
+    echo "✅ Database migrations completed successfully"
+else
+    echo "⚠️ No migrations directory found. Initializing Flask-Migrate..."
+    # Use Docker-friendly initialization (no Git required)
+    python docker_migrate_init.py
+    if [ $? -ne 0 ]; then
+        echo "❌ Migration initialization failed!"
+        exit 1
+    fi
+    # Check if database has existing tables
+    python -c "
+from app import app, db
+with app.app_context():
+    inspector = db.inspect(db.engine)
+    tables = [t for t in inspector.get_table_names() if t != 'alembic_version']
+    if tables:
+        print('has_tables')
+" > /tmp/db_check.txt
+    if grep -q "has_tables" /tmp/db_check.txt 2>/dev/null; then
+        echo "📊 Existing database detected. Marking as current..."
+        flask db stamp head
+        echo "✅ Database marked as current"
+    else
+        echo "🆕 Empty database detected. Creating tables..."
+        flask db upgrade
+        echo "✅ Database tables created"
+    fi
+    rm -f /tmp/db_check.txt
+fi
+# Sync PostgreSQL enums with Python models
+echo ""
+echo "=== Syncing PostgreSQL Enums ==="
+python sync_postgres_enums.py
+if [ $? -ne 0 ]; then
+    echo "⚠️ Enum sync failed, but continuing..."
+fi
+# Legacy migration support (can be removed after full transition)
+if [ -f "migrations_old/run_postgres_migrations.py" ]; then
+    echo ""
+    echo "=== Checking Legacy Migrations ==="
+    echo "Found old migration system. Consider removing after confirming Flask-Migrate is working."
+fi
 # Start the Flask application with gunicorn

111
sync_postgres_enums.py Executable file
View File

@@ -0,0 +1,111 @@
#!/usr/bin/env python3
"""
Automatically sync PostgreSQL enums with Python models.
Run this before starting the application to ensure all enum values exist.
"""
import os
import sys
from sqlalchemy import create_engine, text
from sqlalchemy.exc import ProgrammingError
def get_enum_values_from_db(engine, enum_name):
"""Get current enum values from PostgreSQL."""
try:
result = engine.execute(text(f"""
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (SELECT oid FROM pg_type WHERE typname = :enum_name)
ORDER BY enumsortorder
"""), {"enum_name": enum_name})
return set(row[0] for row in result)
except Exception:
return set()
def sync_enum(engine, enum_name, python_enum_class):
"""Sync a PostgreSQL enum with Python enum values."""
print(f"\nSyncing {enum_name}...")
# Get current DB values
db_values = get_enum_values_from_db(engine, enum_name)
if not db_values:
print(f" ⚠️ Enum {enum_name} not found in database (might not be used)")
return
print(f" DB values: {sorted(db_values)}")
# Get Python values - BOTH name and value
python_values = set()
for item in python_enum_class:
python_values.add(item.name) # Add the NAME (what SQLAlchemy sends)
python_values.add(item.value) # Add the VALUE (for compatibility)
print(f" Python values: {sorted(python_values)}")
# Find missing values
missing_values = python_values - db_values
if not missing_values:
print(f" ✅ All values present")
return
# Add missing values
print(f" 📝 Adding missing values: {missing_values}")
for value in missing_values:
try:
# Use parameterized query for safety, but we need dynamic SQL for ALTER TYPE
# Validate that value is safe (alphanumeric, spaces, underscores only)
if not all(c.isalnum() or c in ' _-' for c in value):
print(f" ⚠️ Skipping unsafe value: {value}")
continue
engine.execute(text(f"ALTER TYPE {enum_name} ADD VALUE IF NOT EXISTS '{value}'"))
print(f" ✅ Added: {value}")
except Exception as e:
print(f" ❌ Failed to add {value}: {e}")
def main():
"""Main sync function."""
print("=== PostgreSQL Enum Sync ===")
# Get database URL
database_url = os.environ.get('DATABASE_URL')
if not database_url:
print("❌ DATABASE_URL not set")
return 1
# Create engine
engine = create_engine(database_url)
# Import enums
try:
from models.enums import TaskStatus, TaskPriority, Role, WorkRegion, SprintStatus
# Define enum mappings (db_type_name, python_enum_class)
enum_mappings = [
('taskstatus', TaskStatus),
('taskpriority', TaskPriority),
('role', Role),
('workregion', WorkRegion),
('sprintstatus', SprintStatus),
]
# Sync each enum
for db_enum_name, python_enum in enum_mappings:
sync_enum(engine, db_enum_name, python_enum)
print("\n✅ Enum sync complete!")
except Exception as e:
print(f"\n❌ Error: {e}")
import traceback
traceback.print_exc()
return 1
finally:
engine.dispose()
return 0
if __name__ == "__main__":
sys.exit(main())
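
The sync adds both the member name and the member value because SQLAlchemy sends enum names by default while older rows may still hold the display values; the enums it syncs against are assumed to look roughly like this (illustrative only, the real definitions live in models/enums.py):

```python
import enum

class TaskStatus(enum.Enum):
    TODO = 'To Do'
    IN_PROGRESS = 'In Progress'
    IN_REVIEW = 'In Review'
    DONE = 'Done'
    CANCELLED = 'Cancelled'
    ARCHIVED = 'Archived'
```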

19
test_migrate.py Normal file
View File

@@ -0,0 +1,19 @@
#!/usr/bin/env python
"""Test script to verify Flask-Migrate setup"""
from app import app, db, migrate
from flask_migrate import init, migrate as _migrate, upgrade
with app.app_context():
print("Flask app created successfully")
print(f"Database URI: {app.config['SQLALCHEMY_DATABASE_URI']}")
print(f"Migrate instance: {migrate}")
print(f"Available commands: {app.cli.commands}")
# Check if 'db' command is registered
if 'db' in app.cli.commands:
print("'db' command is registered!")
print(f"Subcommands: {list(app.cli.commands['db'].commands.keys())}")
else:
print("ERROR: 'db' command is NOT registered!")
print(f"Available commands: {list(app.cli.commands.keys())}")

View File

@@ -1,55 +0,0 @@
[uwsgi]
# Application module
wsgi-file = app.py
callable = app
pythonpath = /app
chdir = /app
# Process management
master = true
processes = 4
threads = 2
max-requests = 1000
harakiri = 30
thunder-lock = true
# UNIX Domain Socket configuration for nginx
socket = /host/shared/uwsgi.sock
chmod-socket = 666
chown-socket = www-data:www-data
# HTTP socket for direct access
http-socket = :5000
vacuum = true
# Logging
logto = /var/log/uwsgi/timetrack.log
log-maxsize = 50000000
disable-logging = false
# Memory and CPU optimization
memory-report = true
cpu-affinity = 1
reload-on-rss = 512
worker-reload-mercy = 60
# Security
no-site = true
strict = true
# Hot reload in development
py-autoreload = 1
# Buffer size
buffer-size = 32768
# Enable stats server (optional)
stats = 127.0.0.1:9191
stats-http = true
# Die on term signal
die-on-term = true
# Lazy apps for better memory usage
lazy-apps = true
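
With this uwsgi config deleted, the startup scripts hand the app to gunicorn instead. A rough gunicorn.conf.py equivalent of the removed settings might look like the sketch below (the file name and values are assumptions; the commit itself does not add such a file):

```python
# Hypothetical gunicorn.conf.py mirroring the removed uwsgi options.
bind = "0.0.0.0:5000"   # uwsgi exposed http-socket :5000
workers = 4             # uwsgi ran 4 master-managed processes
threads = 2             # with 2 threads each
timeout = 30            # roughly equivalent to harakiri = 30
max_requests = 1000     # recycle workers like max-requests
```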