Squashed commit of the following:

commit cb82580f868b629902ba96c7f09f885b7d9c24dc
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 22:42:49 2025 +0200

    Fix for postgres db migration. #5

commit 6a4505e2db1cdb2cec65e630b63535ba08c02fc4
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 22:39:58 2025 +0200

    Fix for postgres db migration. #4

commit 7d9a5bb12c591182e67d7d52f90d6b1a45260d9f
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 22:38:02 2025 +0200

    Fix for postgres db migration. #3

commit 29dbb8b62d873dfbc4901b21e637a7181d545ec7
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 22:35:08 2025 +0200

    Fix for postgres db migration. #2

commit d5afc56290d05f53e06a77366214c605d0989c1d
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 22:33:09 2025 +0200

    Fix for postgres db migration.

commit 936008fe1c56b6e699c4a45b503507b6423e15eb
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 21:46:32 2025 +0200

    Add changes for gunicorn.

commit 464c71e5102117f35d05e1504165299ffa50c70c
Author: Jens Luedicke <jens.luedicke@gmail.com>
Date:   Thu Jul 3 20:30:29 2025 +0200

    Add changes for Postgres migration.
2025-07-03 22:50:37 +02:00
parent 91abaeb433
commit 667040d7f8
11 changed files with 969 additions and 19 deletions

.env.example Normal file

@@ -0,0 +1,27 @@
# Database Configuration
POSTGRES_DB=timetrack
POSTGRES_USER=timetrack
POSTGRES_PASSWORD=timetrack_password
POSTGRES_PORT=5432
# pgAdmin Configuration
PGADMIN_EMAIL=admin@timetrack.com
PGADMIN_PASSWORD=admin
PGADMIN_PORT=5050
# TimeTrack App Configuration
TIMETRACK_PORT=5000
FLASK_ENV=production
SECRET_KEY=your-secret-key-here
DATABASE_URL=postgresql://timetrack:timetrack_password@db:5432/timetrack
# Data Path Configuration
DATA_PATH=./data
# Mail Configuration
MAIL_SERVER=smtp.example.com
MAIL_PORT=587
MAIL_USE_TLS=true
MAIL_USERNAME=your-email@example.com
MAIL_PASSWORD=your-password
MAIL_DEFAULT_SENDER=TimeTrack <noreply@timetrack.com>

Dockerfile

@@ -11,14 +11,23 @@ ENV PYTHONDONTWRITEBYTECODE=1 \
 # Install system dependencies
 RUN apt-get update && apt-get install -y --no-install-recommends \
     gcc \
+    build-essential \
+    python3-dev \
+    postgresql-client \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
 
+# Create www-data user and log directory
+RUN groupadd -r www-data && useradd -r -g www-data www-data || true
+RUN mkdir -p /var/log/uwsgi && chown -R www-data:www-data /var/log/uwsgi
+RUN mkdir -p /host/shared && chown -R www-data:www-data /host/shared
+
 # Copy requirements file first for better caching
 COPY requirements.txt .
 
 # Install Python dependencies
 RUN pip install --no-cache-dir -r requirements.txt
+RUN pip install gunicorn==21.2.0
 
 # Copy the rest of the application
 COPY . .
@@ -29,10 +38,11 @@ RUN mkdir -p /app/instance && chmod 777 /app/instance
 VOLUME /data
 RUN mkdir /data && chmod 777 /data
 
-# Expose the port the app runs on
+# Make startup script executable
+RUN chmod +x startup.sh
+
+# Expose the port the app runs on (though we'll use unix socket)
 EXPOSE 5000
 
 # Database will be created at runtime when /data volume is mounted
-# Command to run the application
-CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]
+# Use startup script for automatic migration
+CMD ["./startup.sh"]

MIGRATION.md Normal file

@@ -0,0 +1,199 @@
# TimeTrack Database Migration Guide
This guide explains how to migrate your TimeTrack application from SQLite to PostgreSQL using Docker.
## Overview
TimeTrack now supports both SQLite and PostgreSQL databases. The migration process automatically converts your existing SQLite database to PostgreSQL while preserving all your data.
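At startup the app selects the backend from the `DATABASE_URL` environment variable and falls back to the bundled SQLite file; this mirrors the `app.py` change later in this diff:
```python
import os
from flask import Flask

app = Flask(__name__)
# PostgreSQL when DATABASE_URL is set (as in docker-compose.yml),
# otherwise the legacy SQLite file.
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
    'DATABASE_URL', 'sqlite:////data/timetrack.db')
```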
## Prerequisites
- Docker and Docker Compose installed
- Existing TimeTrack SQLite database
- Basic understanding of command line operations
## Quick Start (Automatic Migration)
1. **Set up environment variables:**
```bash
cp .env.example .env
# Edit .env file with your configuration
```
2. **Start the services:**
```bash
docker-compose up -d
```
The migration will happen automatically when you first start the application with PostgreSQL configured.
## Manual Migration Process
If you prefer to control the migration process manually:
1. **Prepare your environment:**
```bash
cp .env.example .env
# Edit .env file with your database credentials
```
2. **Start PostgreSQL and pgAdmin:**
```bash
docker-compose up -d db pgadmin
```
3. **Run the migration script:**
```bash
./scripts/migrate.sh
```
4. **Start the application:**
```bash
docker-compose up -d timetrack
```
## Configuration
### Environment Variables (.env)
```env
# Database Configuration
POSTGRES_DB=timetrack
POSTGRES_USER=timetrack
POSTGRES_PASSWORD=timetrack_password
POSTGRES_PORT=5432
# pgAdmin Configuration
PGADMIN_EMAIL=admin@timetrack.com
PGADMIN_PASSWORD=admin
PGADMIN_PORT=5050
# TimeTrack App Configuration
TIMETRACK_PORT=5000
FLASK_ENV=production
SECRET_KEY=your-secret-key-here
DATABASE_URL=postgresql://timetrack:timetrack_password@db:5432/timetrack
# Mail Configuration
MAIL_SERVER=smtp.example.com
MAIL_PORT=587
MAIL_USE_TLS=true
MAIL_USERNAME=your-email@example.com
MAIL_PASSWORD=your-password
MAIL_DEFAULT_SENDER=TimeTrack <noreply@timetrack.com>
```
### SQLite Path
By default, the migration looks for your SQLite database at `/data/timetrack.db`. If your database is located elsewhere, set the `SQLITE_PATH` environment variable:
```env
SQLITE_PATH=/path/to/your/timetrack.db
```
## Migration Process Details
The migration process includes the following steps (condensed in the sketch after this list):
1. **Database Connection**: Connects to both SQLite and PostgreSQL
2. **Backup Creation**: Creates a backup of existing PostgreSQL data (if any)
3. **Schema Creation**: Creates PostgreSQL tables using SQLAlchemy models
4. **Data Migration**: Transfers all data from SQLite to PostgreSQL
5. **Sequence Updates**: Updates PostgreSQL auto-increment sequences
6. **Verification**: Verifies that all data was migrated correctly
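Condensed from `migrate_sqlite_to_postgres.py` (included in full later in this diff), the steps chain roughly like this:
```python
# Rough flow of the migration driver; see run_migration() in
# migrate_sqlite_to_postgres.py for the real error handling.
migration = SQLiteToPostgresMigration(sqlite_path, postgres_url)
if migration.connect_databases():          # 1. connect to both databases
    migration.backup_postgres()            # 2. pg_dump backup of existing data
    migration.create_postgres_tables()     # 3. schema via SQLAlchemy models
    migration.migrate_all_data()           # 4./5. copy tables, update sequences
    migration.verify_migration()           # 6. compare row counts
    migration.close_connections()
```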
### Tables Migrated
- Companies and multi-tenancy data
- Users and authentication information
- Teams and project assignments
- Projects and categories
- Tasks and subtasks
- Time entries and work logs
- Work configurations and user preferences
- System settings
## Post-Migration
After successful migration:
1. **SQLite Database**: Renamed to `.migrated` to indicate completion
2. **Backup Files**: Created with timestamp for rollback if needed
3. **Application**: Automatically uses PostgreSQL for all operations
4. **Verification**: Check `migration.log` for detailed migration results
## Accessing pgAdmin
After starting the services, you can access pgAdmin at:
- URL: http://localhost:5050
- Email: admin@timetrack.com (or your configured email)
- Password: admin (or your configured password)
**Server Connection in pgAdmin:**
- Host: db
- Port: 5432
- Database: timetrack
- Username: timetrack
- Password: timetrack_password
## Troubleshooting
### Common Issues
1. **PostgreSQL Connection Failed**
- Ensure the PostgreSQL container is running: `docker-compose ps db`
- Check connection settings in .env file
2. **SQLite Database Not Found**
- Verify the SQLite database path
- Ensure the database file is accessible from the container
3. **Migration Fails**
- Check `migration.log` for detailed error messages
- Verify PostgreSQL has sufficient permissions
- Ensure no data conflicts exist
4. **Data Verification Failed**
- Compare row counts between SQLite and PostgreSQL (see the spot-check sketch after this list)
- Check for foreign key constraint violations
- Review migration.log for specific table issues
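A minimal spot-check sketch for a single table, assuming both databases are reachable from where you run it (the table name here is just an example):
```python
# Hypothetical spot check: compare row counts for one migrated table.
import os
import sqlite3
import psycopg2

table = 'time_entry'  # any migrated table
sqlite_count = sqlite3.connect('/data/timetrack.db.migrated') \
    .execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
with psycopg2.connect(os.environ['DATABASE_URL']) as conn:
    with conn.cursor() as cur:
        cur.execute(f'SELECT COUNT(*) FROM "{table}"')
        postgres_count = cur.fetchone()[0]
print(table, sqlite_count, postgres_count,
      'OK' if sqlite_count == postgres_count else 'MISMATCH')
```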
### Manual Recovery
If migration fails, you can:
1. **Restore from backup:**
```bash
# Restore PostgreSQL from backup
docker-compose exec -T db psql -U timetrack -d timetrack < postgres_backup_TIMESTAMP.sql
```
2. **Revert to SQLite:**
```bash
# Rename migrated database back
mv /data/timetrack.db.migrated /data/timetrack.db
# Update .env to use SQLite
DATABASE_URL=sqlite:////data/timetrack.db
```
## Performance Considerations
- Migration time depends on database size
- Large databases may take several minutes
- Migration runs in batches to optimize memory usage (sketched after this list)
- All operations are logged for monitoring
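The batching lives in `migrate_table_data` in the migration script; stripped to its core it looks like this:
```python
# Core of the batched insert in migrate_table_data (batch_size = 1000):
# committing per batch keeps memory bounded on large tables.
batch_size = 1000
for i in range(0, len(data_rows), batch_size):
    batch = data_rows[i:i + batch_size]
    postgres_cursor.executemany(insert_sql, batch)
    postgres_conn.commit()
```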
## Security Notes
- Change default passwords before production use
- Use strong, unique passwords for database access
- Ensure .env file is not committed to version control
- Regularly backup your PostgreSQL database
## Support
For issues or questions about the migration process:
1. Check the migration.log file for detailed error messages
2. Review this documentation for common solutions
3. Ensure all prerequisites are met
4. Verify environment configuration

app.py

@@ -29,14 +29,14 @@ logging.basicConfig(level=logging.DEBUG)
 logger = logging.getLogger(__name__)
 
 app = Flask(__name__)
-app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////data/timetrack.db'
+app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', 'sqlite:////data/timetrack.db')
 app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
 app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY', 'dev_key_for_timetrack')
 app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=7)  # Session lasts for 7 days
 
 # Configure Flask-Mail
 app.config['MAIL_SERVER'] = os.environ.get('MAIL_SERVER', 'smtp.example.com')
-app.config['MAIL_PORT'] = int(os.environ.get('MAIL_PORT', 587))
+app.config['MAIL_PORT'] = int(os.environ.get('MAIL_PORT') or 587)
 app.config['MAIL_USE_TLS'] = os.environ.get('MAIL_USE_TLS', 'true').lower() in ['true', 'on', '1']
 app.config['MAIL_USERNAME'] = os.environ.get('MAIL_USERNAME', 'your-email@example.com')
 app.config['MAIL_PASSWORD'] = os.environ.get('MAIL_PASSWORD', 'your-password')
@@ -57,19 +57,37 @@ db.init_app(app)
 # Consolidated migration using migrate_db module
 def run_migrations():
     """Run all database migrations using the consolidated migrate_db module."""
-    try:
-        from migrate_db import run_all_migrations
-        run_all_migrations()
-        print("Database migrations completed successfully!")
-    except ImportError as e:
-        print(f"Error importing migrate_db: {e}")
-        print("Falling back to basic table creation...")
+    # Check if we're using PostgreSQL or SQLite
+    database_url = app.config['SQLALCHEMY_DATABASE_URI']
+    print(f"DEBUG: Database URL: {database_url}")
+    is_postgresql = 'postgresql://' in database_url or 'postgres://' in database_url
+    print(f"DEBUG: Is PostgreSQL: {is_postgresql}")
+    if is_postgresql:
+        print("Using PostgreSQL - skipping SQLite migrations, ensuring tables exist...")
         with app.app_context():
             db.create_all()
             init_system_settings()
-    except Exception as e:
-        print(f"Error during database migration: {e}")
-        raise
+        print("PostgreSQL setup completed successfully!")
+    else:
+        print("Using SQLite - running SQLite migrations...")
+        try:
+            from migrate_db import run_all_migrations
+            run_all_migrations()
+            print("SQLite database migrations completed successfully!")
+        except ImportError as e:
+            print(f"Error importing migrate_db: {e}")
+            print("Falling back to basic table creation...")
+            with app.app_context():
+                db.create_all()
+                init_system_settings()
+        except Exception as e:
+            print(f"Error during SQLite migration: {e}")
+            print("Falling back to basic table creation...")
+            with app.app_context():
+                db.create_all()
+                init_system_settings()
 
 def migrate_to_company_model():
     """Migrate existing data to support company model (stub - handled by migrate_db)"""

docker-compose.yml Normal file

@@ -0,0 +1,54 @@
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "${POSTGRES_PORT:-5432}:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD}
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    depends_on:
      - db
    volumes:
      - pgadmin_data:/var/lib/pgadmin

  timetrack:
    build: .
    environment:
      FLASK_ENV: ${FLASK_ENV:-production}
      SECRET_KEY: ${SECRET_KEY}
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      MAIL_SERVER: ${MAIL_SERVER}
      MAIL_PORT: ${MAIL_PORT}
      MAIL_USE_TLS: ${MAIL_USE_TLS}
      MAIL_USERNAME: ${MAIL_USERNAME}
      MAIL_PASSWORD: ${MAIL_PASSWORD}
      MAIL_DEFAULT_SENDER: ${MAIL_DEFAULT_SENDER}
    ports:
      - "${TIMETRACK_PORT:-5000}:5000"
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ${DATA_PATH:-./data}:/data
      - shared_socket:/host/shared

volumes:
  postgres_data:
  pgadmin_data:
  shared_socket:

migrate_db.py

@@ -368,6 +368,44 @@ def migrate_user_roles(cursor):
     null_roles = cursor.rowcount
     if null_roles > 0:
         print(f"Set {null_roles} NULL/invalid roles to 'Team Member'")
 
+    # Ensure all users have a company_id before creating NOT NULL constraint
+    print("Checking for users without company_id...")
+    cursor.execute("SELECT COUNT(*) FROM user WHERE company_id IS NULL")
+    null_company_count = cursor.fetchone()[0]
+    print(f"Found {null_company_count} users without company_id")
+    if null_company_count > 0:
+        print(f"Assigning {null_company_count} users to default company...")
+        # Get or create a default company
+        cursor.execute("SELECT id FROM company ORDER BY id LIMIT 1")
+        company_result = cursor.fetchone()
+        if company_result:
+            default_company_id = company_result[0]
+            print(f"Using existing company ID {default_company_id} as default")
+        else:
+            # Create a default company if none exists
+            print("No companies found, creating default company...")
+            cursor.execute("""
+                INSERT INTO company (name, slug, description, created_at, is_personal, is_active, max_users)
+                VALUES (?, ?, ?, CURRENT_TIMESTAMP, 0, 1, 100)
+            """, ("Default Company", "default-company", "Auto-created default company for migration"))
+            default_company_id = cursor.lastrowid
+            print(f"Created default company with ID {default_company_id}")
+
+        # Assign all users without company_id to the default company
+        cursor.execute("UPDATE user SET company_id = ? WHERE company_id IS NULL", (default_company_id,))
+        updated_users = cursor.rowcount
+        print(f"Assigned {updated_users} users to default company")
+
+        # Verify the fix
+        cursor.execute("SELECT COUNT(*) FROM user WHERE company_id IS NULL")
+        remaining_null = cursor.fetchone()[0]
+        print(f"After assignment, {remaining_null} users still have NULL company_id")
+    else:
+        print("All users already have company_id assigned")
+
     # Drop user_new table if it exists from previous failed migration
     cursor.execute("DROP TABLE IF EXISTS user_new")
@@ -396,10 +434,16 @@ def migrate_user_roles(cursor):
         )
     """)
 
+    # Get default company ID for any remaining NULL company_id values
+    cursor.execute("SELECT id FROM company ORDER BY id LIMIT 1")
+    company_result = cursor.fetchone()
+    default_company_id = company_result[0] if company_result else 1
+
     # Copy all data from old table to new table with validation
     cursor.execute("""
         INSERT INTO user_new
-        SELECT id, username, email, password_hash, created_at, company_id,
+        SELECT id, username, email, password_hash, created_at,
+               COALESCE(company_id, ?) as company_id,
                is_verified, verification_token, token_expiry, is_blocked,
                CASE
                    WHEN role IN (?, ?, ?, ?, ?) THEN role
@@ -412,7 +456,7 @@ def migrate_user_roles(cursor):
                END as account_type,
                business_name, two_factor_enabled, two_factor_secret
         FROM user
-    """, (Role.TEAM_MEMBER.value, Role.TEAM_LEADER.value, Role.SUPERVISOR.value,
+    """, (default_company_id, Role.TEAM_MEMBER.value, Role.TEAM_LEADER.value, Role.SUPERVISOR.value,
           Role.ADMIN.value, Role.SYSTEM_ADMIN.value, Role.TEAM_MEMBER.value,
           AccountType.COMPANY_USER.value, AccountType.FREELANCER.value,
           AccountType.COMPANY_USER.value))

migrate_sqlite_to_postgres.py Normal file

@@ -0,0 +1,396 @@
#!/usr/bin/env python3
"""
SQLite to PostgreSQL Migration Script for TimeTrack
This script migrates data from SQLite to PostgreSQL database.
"""
import sqlite3
import psycopg2
import os
import sys
import logging
from datetime import datetime
from psycopg2.extras import RealDictCursor
import json

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('migration.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)


class SQLiteToPostgresMigration:
    def __init__(self, sqlite_path, postgres_url):
        self.sqlite_path = sqlite_path
        self.postgres_url = postgres_url
        self.sqlite_conn = None
        self.postgres_conn = None
        self.migration_stats = {}

    def connect_databases(self):
        """Connect to both SQLite and PostgreSQL databases"""
        try:
            # Connect to SQLite
            self.sqlite_conn = sqlite3.connect(self.sqlite_path)
            self.sqlite_conn.row_factory = sqlite3.Row
            logger.info(f"Connected to SQLite database: {self.sqlite_path}")

            # Connect to PostgreSQL
            self.postgres_conn = psycopg2.connect(self.postgres_url)
            self.postgres_conn.autocommit = False
            logger.info("Connected to PostgreSQL database")
            return True
        except Exception as e:
            logger.error(f"Failed to connect to databases: {e}")
            return False

    def close_connections(self):
        """Close database connections"""
        if self.sqlite_conn:
            self.sqlite_conn.close()
        if self.postgres_conn:
            self.postgres_conn.close()

    def backup_postgres(self):
        """Create a backup of existing PostgreSQL data"""
        try:
            with self.postgres_conn.cursor() as cursor:
                # Check if tables exist and have data
                cursor.execute("""
                    SELECT table_name FROM information_schema.tables
                    WHERE table_schema = 'public'
                """)
                tables = cursor.fetchall()
                if tables:
                    backup_file = f"postgres_backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}.sql"
                    logger.info(f"Creating PostgreSQL backup: {backup_file}")
                    # Use pg_dump for backup
                    os.system(f"pg_dump '{self.postgres_url}' > {backup_file}")
                    logger.info(f"Backup created: {backup_file}")
                    return backup_file
                else:
                    logger.info("No existing PostgreSQL tables found, skipping backup")
                    return None
        except Exception as e:
            logger.error(f"Failed to create backup: {e}")
            return None

    def check_sqlite_database(self):
        """Check if SQLite database exists and has data"""
        if not os.path.exists(self.sqlite_path):
            logger.error(f"SQLite database not found: {self.sqlite_path}")
            return False
        try:
            cursor = self.sqlite_conn.cursor()
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
            tables = cursor.fetchall()
            if not tables:
                logger.info("SQLite database is empty, nothing to migrate")
                return False
            logger.info(f"Found {len(tables)} tables in SQLite database")
            return True
        except Exception as e:
            logger.error(f"Error checking SQLite database: {e}")
            return False

    def create_postgres_tables(self, clear_existing=False):
        """Create PostgreSQL tables using Flask-SQLAlchemy models"""
        try:
            # Import Flask app and create tables
            from app import app, db
            with app.app_context():
                # Set the database URI to PostgreSQL
                app.config['SQLALCHEMY_DATABASE_URI'] = self.postgres_url
                if clear_existing:
                    logger.info("Clearing existing PostgreSQL data...")
                    db.drop_all()
                    logger.info("Dropped all existing tables")
                # Create all tables
                db.create_all()
                logger.info("Created PostgreSQL tables")
            return True
        except Exception as e:
            logger.error(f"Failed to create PostgreSQL tables: {e}")
            return False

    def migrate_table_data(self, table_name, column_mapping=None):
        """Migrate data from SQLite table to PostgreSQL"""
        try:
            sqlite_cursor = self.sqlite_conn.cursor()
            postgres_cursor = self.postgres_conn.cursor()

            # Check if table exists in SQLite
            sqlite_cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table_name,))
            if not sqlite_cursor.fetchone():
                logger.info(f"Table {table_name} does not exist in SQLite, skipping...")
                self.migration_stats[table_name] = 0
                return True

            # Get data from SQLite
            sqlite_cursor.execute(f"SELECT * FROM {table_name}")
            rows = sqlite_cursor.fetchall()
            if not rows:
                logger.info(f"No data found in table: {table_name}")
                self.migration_stats[table_name] = 0
                return True

            # Get column names
            column_names = [description[0] for description in sqlite_cursor.description]
            # Apply column mapping if provided
            if column_mapping:
                column_names = [column_mapping.get(col, col) for col in column_names]

            # Prepare insert statement
            placeholders = ', '.join(['%s'] * len(column_names))
            columns = ', '.join([f'"{col}"' for col in column_names])  # Quote column names
            insert_sql = f'INSERT INTO "{table_name}" ({columns}) VALUES ({placeholders})'  # Quote table name

            # Convert rows to list of tuples
            data_rows = []
            for row in rows:
                data_row = []
                for i, value in enumerate(row):
                    col_name = column_names[i]
                    # Handle special data type conversions
                    if value is None:
                        data_row.append(None)
                    elif isinstance(value, str) and value.startswith('{"') and value.endswith('}'):
                        # Handle JSON strings
                        data_row.append(value)
                    elif (col_name.startswith('is_') or col_name.endswith('_enabled') or col_name in ['is_paused']) and isinstance(value, int):
                        # Convert integer boolean to actual boolean for PostgreSQL
                        data_row.append(bool(value))
                    elif isinstance(value, str) and value == '':
                        # Convert empty strings to None for PostgreSQL
                        data_row.append(None)
                    else:
                        data_row.append(value)
                data_rows.append(tuple(data_row))

            # Insert data in batches
            batch_size = 1000
            for i in range(0, len(data_rows), batch_size):
                batch = data_rows[i:i + batch_size]
                try:
                    postgres_cursor.executemany(insert_sql, batch)
                    self.postgres_conn.commit()
                except Exception as batch_error:
                    logger.error(f"Error inserting batch {i//batch_size + 1} for table {table_name}: {batch_error}")
                    # Try inserting rows one by one to identify problematic rows
                    self.postgres_conn.rollback()
                    for j, row in enumerate(batch):
                        try:
                            postgres_cursor.execute(insert_sql, row)
                            self.postgres_conn.commit()
                        except Exception as row_error:
                            logger.error(f"Error inserting row {i + j} in table {table_name}: {row_error}")
                            logger.error(f"Problematic row data: {row}")
                            self.postgres_conn.rollback()

            logger.info(f"Migrated {len(rows)} rows from table: {table_name}")
            self.migration_stats[table_name] = len(rows)
            return True
        except Exception as e:
            logger.error(f"Failed to migrate table {table_name}: {e}")
            self.postgres_conn.rollback()
            return False

    def update_sequences(self):
        """Update PostgreSQL sequences after data migration"""
        try:
            with self.postgres_conn.cursor() as cursor:
                # Get all sequences - fix the query to properly extract sequence names
                cursor.execute("""
                    SELECT
                        pg_get_serial_sequence(table_name, column_name) as sequence_name,
                        column_name,
                        table_name
                    FROM information_schema.columns
                    WHERE column_default LIKE 'nextval%'
                    AND table_schema = 'public'
                """)
                sequences = cursor.fetchall()
                for seq_name, col_name, table_name in sequences:
                    if seq_name is None:
                        continue
                    # Get the maximum value for each sequence
                    cursor.execute(f'SELECT MAX("{col_name}") FROM "{table_name}"')
                    max_val = cursor.fetchone()[0]
                    if max_val is not None:
                        # Update sequence to start from max_val + 1 - don't quote sequence name from pg_get_serial_sequence
                        cursor.execute(f'ALTER SEQUENCE {seq_name} RESTART WITH {max_val + 1}')
                        logger.info(f"Updated sequence {seq_name} to start from {max_val + 1}")
                self.postgres_conn.commit()
                logger.info("Updated PostgreSQL sequences")
                return True
        except Exception as e:
            logger.error(f"Failed to update sequences: {e}")
            self.postgres_conn.rollback()
            return False

    def migrate_all_data(self):
        """Migrate all data from SQLite to PostgreSQL"""
        # Define table migration order (respecting foreign key constraints)
        migration_order = [
            'company',
            'team',
            'project_category',
            'user',
            'project',
            'task',
            'sub_task',
            'time_entry',
            'work_config',
            'company_work_config',
            'user_preferences',
            'system_settings'
        ]
        for table_name in migration_order:
            if not self.migrate_table_data(table_name):
                logger.error(f"Migration failed at table: {table_name}")
                return False

        # Update sequences after all data is migrated
        if not self.update_sequences():
            logger.error("Failed to update sequences")
            return False
        return True

    def verify_migration(self):
        """Verify that migration was successful"""
        try:
            sqlite_cursor = self.sqlite_conn.cursor()
            postgres_cursor = self.postgres_conn.cursor()

            # Get table names from SQLite
            sqlite_cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
            sqlite_tables = [row[0] for row in sqlite_cursor.fetchall()]

            verification_results = {}
            for table_name in sqlite_tables:
                if table_name == 'sqlite_sequence':
                    continue
                # Count rows in SQLite
                sqlite_cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
                sqlite_count = sqlite_cursor.fetchone()[0]
                # Count rows in PostgreSQL
                postgres_cursor.execute(f'SELECT COUNT(*) FROM "{table_name}"')
                postgres_count = postgres_cursor.fetchone()[0]
                verification_results[table_name] = {
                    'sqlite_count': sqlite_count,
                    'postgres_count': postgres_count,
                    'match': sqlite_count == postgres_count
                }
                if sqlite_count == postgres_count:
                    logger.info(f"✓ Table {table_name}: {sqlite_count} rows migrated successfully")
                else:
                    logger.error(f"✗ Table {table_name}: SQLite={sqlite_count}, PostgreSQL={postgres_count}")
            return verification_results
        except Exception as e:
            logger.error(f"Verification failed: {e}")
            return None

    def run_migration(self, clear_existing=False):
        """Run the complete migration process"""
        logger.info("Starting SQLite to PostgreSQL migration...")

        # Connect to databases
        if not self.connect_databases():
            return False
        try:
            # Check SQLite database
            if not self.check_sqlite_database():
                return False

            # Create backup
            backup_file = self.backup_postgres()

            # Create PostgreSQL tables
            if not self.create_postgres_tables(clear_existing=clear_existing):
                return False

            # Migrate data
            if not self.migrate_all_data():
                return False

            # Verify migration
            verification = self.verify_migration()
            if verification:
                logger.info("Migration verification completed")
                for table, stats in verification.items():
                    if not stats['match']:
                        logger.error(f"Migration verification failed for table: {table}")
                        return False

            logger.info("Migration completed successfully!")
            logger.info(f"Migration statistics: {self.migration_stats}")
            return True
        except Exception as e:
            logger.error(f"Migration failed: {e}")
            return False
        finally:
            self.close_connections()


def main():
    """Main migration function"""
    import argparse
    parser = argparse.ArgumentParser(description='Migrate SQLite to PostgreSQL')
    parser.add_argument('--clear-existing', action='store_true',
                        help='Clear existing PostgreSQL data before migration')
    parser.add_argument('--sqlite-path', default=os.environ.get('SQLITE_PATH', '/data/timetrack.db'),
                        help='Path to SQLite database')
    args = parser.parse_args()

    # Get database paths from environment variables
    sqlite_path = args.sqlite_path
    postgres_url = os.environ.get('DATABASE_URL')
    if not postgres_url:
        logger.error("DATABASE_URL environment variable not set")
        return 1

    # Check if SQLite database exists
    if not os.path.exists(sqlite_path):
        logger.info(f"SQLite database not found at {sqlite_path}, skipping migration")
        return 0

    # Run migration
    migration = SQLiteToPostgresMigration(sqlite_path, postgres_url)
    success = migration.run_migration(clear_existing=args.clear_existing)
    return 0 if success else 1


if __name__ == "__main__":
    sys.exit(main())

requirements.txt

@@ -13,3 +13,4 @@ numpy==1.26.4
 pandas==1.5.3
 xlsxwriter==3.1.2
 Flask-Mail==0.9.1
+psycopg2-binary==2.9.9

scripts/migrate.sh Executable file

@@ -0,0 +1,86 @@
#!/bin/bash
# Manual migration script for TimeTrack SQLite to PostgreSQL
set -e

echo "TimeTrack Database Migration Script"
echo "==================================="

# Check if .env file exists
if [ ! -f .env ]; then
    echo "Error: .env file not found. Please create it from .env.example"
    exit 1
fi

# Load environment variables
set -a
source .env
set +a

# Check required environment variables
if [ -z "$DATABASE_URL" ]; then
    echo "Error: DATABASE_URL not set in .env file"
    exit 1
fi
if [ -z "$POSTGRES_USER" ] || [ -z "$POSTGRES_PASSWORD" ] || [ -z "$POSTGRES_DB" ]; then
    echo "Error: PostgreSQL connection variables not set in .env file"
    exit 1
fi

# Default SQLite path
SQLITE_PATH="${SQLITE_PATH:-/data/timetrack.db}"

echo "Configuration:"
echo "  SQLite DB: $SQLITE_PATH"
echo "  PostgreSQL: $DATABASE_URL"
echo ""

# Check if SQLite database exists
if [ ! -f "$SQLITE_PATH" ]; then
    echo "Error: SQLite database not found at $SQLITE_PATH"
    echo "Please ensure the database file exists or update SQLITE_PATH in .env"
    exit 1
fi

# Check if PostgreSQL is accessible (the compose service is named "db")
echo "Testing PostgreSQL connection..."
if ! docker-compose exec db pg_isready -U "$POSTGRES_USER" > /dev/null 2>&1; then
    echo "Error: Cannot connect to PostgreSQL. Please ensure docker-compose is running:"
    echo "  docker-compose up -d db"
    exit 1
fi
echo "PostgreSQL is accessible!"

# Confirm migration
echo ""
echo "This will:"
echo "1. Create a backup of your SQLite database"
echo "2. Migrate all data from SQLite to PostgreSQL"
echo "3. Verify the migration was successful"
echo ""
read -p "Do you want to proceed? (y/N): " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Migration cancelled."
    exit 0
fi

# Run migration. Guard the call directly: with `set -e`, a bare `$?` check
# after a failing command would never be reached.
echo "Starting migration..."
if docker-compose exec timetrack python migrate_sqlite_to_postgres.py; then
    echo ""
    echo "Migration completed successfully!"
    echo "Check migration.log for detailed information."
    echo ""
    echo "Your SQLite database has been backed up and the original renamed to .migrated"
    echo "You can now use PostgreSQL as your primary database."
else
    echo ""
    echo "Migration failed! Check migration.log for details."
    echo "Your original SQLite database remains unchanged."
    exit 1
fi

startup.sh Executable file

@@ -0,0 +1,60 @@
#!/bin/bash
set -e

echo "Starting TimeTrack application..."

# Wait for PostgreSQL to be ready
echo "Waiting for PostgreSQL to be ready..."
while ! pg_isready -h db -p 5432 -U "$POSTGRES_USER" > /dev/null 2>&1; do
    echo "PostgreSQL is not ready yet. Waiting..."
    sleep 2
done
echo "PostgreSQL is ready!"

# Check if SQLite database exists and has data
SQLITE_PATH="/data/timetrack.db"
if [ -f "$SQLITE_PATH" ]; then
    echo "SQLite database found at $SQLITE_PATH"
    # Check if PostgreSQL database is empty (trim whitespace from psql's -t output)
    POSTGRES_TABLE_COUNT=$(psql "$DATABASE_URL" -t -c "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public';" 2>/dev/null | tr -d '[:space:]')
    POSTGRES_TABLE_COUNT=${POSTGRES_TABLE_COUNT:-0}
    if [ "$POSTGRES_TABLE_COUNT" -eq 0 ]; then
        echo "PostgreSQL database is empty, running migration..."
        # Create a backup of SQLite database
        cp "$SQLITE_PATH" "${SQLITE_PATH}.backup.$(date +%Y%m%d_%H%M%S)"
        echo "Created SQLite backup"
        # Run migration. Guard the call directly: with `set -e`, a bare `$?`
        # check after a failing command would never be reached.
        if python migrate_sqlite_to_postgres.py; then
            echo "Migration completed successfully!"
            # Rename SQLite database to indicate it's been migrated
            mv "$SQLITE_PATH" "${SQLITE_PATH}.migrated"
            echo "SQLite database renamed to indicate migration completion"
        else
            echo "Migration failed! Check migration.log for details"
            exit 1
        fi
    else
        echo "PostgreSQL database already contains tables, skipping migration"
    fi
else
    echo "No SQLite database found, starting with fresh PostgreSQL database"
fi

# Initialize database tables if they don't exist
echo "Ensuring database tables exist..."
python -c "
from app import app, db
with app.app_context():
    db.create_all()
    print('Database tables created/verified')
"

# Start the Flask application with gunicorn
echo "Starting Flask application with gunicorn..."
exec gunicorn --bind 0.0.0.0:5000 --workers 4 --threads 2 --timeout 30 app:app

uwsgi.ini Normal file

@@ -0,0 +1,55 @@
[uwsgi]
# Application module
wsgi-file = app.py
callable = app
pythonpath = /app
chdir = /app
# Process management
master = true
processes = 4
threads = 2
max-requests = 1000
harakiri = 30
thunder-lock = true
# UNIX Domain Socket configuration for nginx
socket = /host/shared/uwsgi.sock
chmod-socket = 666
chown-socket = www-data:www-data
# HTTP socket for direct access
http-socket = :5000
vacuum = true
# Logging
logto = /var/log/uwsgi/timetrack.log
log-maxsize = 50000000
disable-logging = false
# Memory and CPU optimization
memory-report = true
cpu-affinity = 1
reload-on-rss = 512
worker-reload-mercy = 60
# Security
no-site = true
strict = true
# Hot reload in development
py-autoreload = 1
# Buffer size
buffer-size = 32768
# Enable stats server (optional)
stats = 127.0.0.1:9191
stats-http = true
# Die on term signal
die-on-term = true
# Lazy apps for better memory usage
lazy-apps = true