Troubleshooting Guide¶
This guide covers common issues you may encounter when developing, deploying, or running the pydotorg application, along with their solutions.
Setup Issues¶
Database Connection Problems¶
Symptom: “Connection refused” on startup¶
DATABASE CONNECTION FAILED
PostgreSQL is not running.
Cause: PostgreSQL is not running or not accessible on the expected port.
Solution:
# Start the infrastructure containers (PostgreSQL, Redis, etc.)
make infra-up
# Verify PostgreSQL is running
docker compose ps postgres
# Check logs if there are issues
docker compose logs postgres
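If the containers look healthy but the app still cannot connect, you can probe the port directly from Python (a quick stdlib sketch; the host and port are the compose defaults assumed above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Attempt a plain TCP connection; refused/timed out means nothing is listening
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 5432))
```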
Symptom: “password authentication failed”¶
Database authentication failed.
Cause: The DATABASE_URL credentials do not match the PostgreSQL configuration.
Solution:
Check your .env file for the correct DATABASE_URL:
DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5432/pydotorg
If using Docker, ensure the postgres container environment matches:
# docker-compose.yml
postgres:
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_DB: pydotorg
Reset the database if needed:
make infra-reset
make litestar-db-upgrade
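To sanity-check the credentials and target database in DATABASE_URL before resetting anything, the URL can be parsed with the standard library (a quick sketch, unrelated to the project code):

```python
from urllib.parse import urlsplit

url = "postgresql+asyncpg://postgres:postgres@localhost:5432/pydotorg"
parts = urlsplit(url)

# Compare these against the postgres container environment
print(parts.username, parts.hostname, parts.port, parts.path.lstrip("/"))
# postgres localhost 5432 pydotorg
```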
Symptom: “database does not exist”¶
Database does not exist.
Cause: The database has not been created or migrations have not been run.
Solution:
# Create and migrate the database
make litestar-db-upgrade
# Or reset and reseed
make db-reset
make db-seed
Redis Connection Issues¶
Symptom: “Connection refused” to Redis¶
Cause: Redis is not running or is on a different port.
Solution:
# Start Redis with infrastructure
make infra-up
# Verify Redis is running
docker compose ps redis
# Test Redis connection
docker exec -it $(docker compose ps -q redis) redis-cli ping
# Should return: PONG
Symptom: SAQ worker fails to connect¶
Could not connect to Redis at localhost:6379
Cause: The REDIS_URL in your environment does not match the running Redis instance.
Solution:
Check your .env file:
REDIS_URL=redis://localhost:6379/0
If Redis is running in Docker, ensure port mapping is correct:
redis:
  ports:
    - "6379:6379"
Port Conflicts¶
Symptom: “Address already in use” on port 8000¶
Cause: Another process is using port 8000.
Solution:
# Find what's using the port
lsof -i :8000
# Kill the process if needed
kill -9 <PID>
# Or use a different port
UV_RUN_PORT=8001 make serve
Symptom: PostgreSQL port conflict (5432)¶
Cause: Another PostgreSQL instance is running on port 5432.
Solution:
Stop the local PostgreSQL:
brew services stop postgresql   # macOS
sudo systemctl stop postgresql  # Linux
Or use a different port in docker-compose:
postgres:
  ports:
    - "5433:5432"
Then update DATABASE_URL accordingly.
Development Issues¶
Migration Errors¶
Symptom: “Target database is not up to date”¶
Cause: There are pending migrations that need to be applied.
Solution:
# Check current revision
make litestar-db-current
# Apply pending migrations
make litestar-db-upgrade
# View migration history
make litestar-db-history
Symptom: “Can’t locate revision” or migration conflicts¶
Cause: Migration files are out of sync, possibly due to branch switching.
Solution:
# Check database status
make litestar-db-check
# If stuck, downgrade and re-upgrade
make litestar-db-downgrade
make litestar-db-upgrade
# For severe conflicts, reset the database (WARNING: data loss)
make db-reset
Symptom: “No changes detected” when creating migration¶
Cause: Alembic cannot detect model changes, possibly due to import issues.
Solution:
Ensure all models are imported in src/pydotorg/db/migrations/env.py
Check that model changes are properly annotated with Mapped[]
Run with verbose output:
LITESTAR_APP=pydotorg.main:app uv run litestar database make-migrations -m "description"
Type Checking Failures¶
Symptom: ty (or mypy) errors with Pydantic models¶
Cause: Missing type annotations or incorrect Pydantic schema definitions.
Solution:
Ensure all Pydantic models have proper type hints:
from uuid import UUID

from pydantic import BaseModel

class UserSchema(BaseModel):
    id: UUID
    username: str
    email: str | None = None  # Use | None, not Optional
For SQLAlchemy models, use Mapped[]:
class User(Base):
    id: Mapped[UUID] = mapped_column(primary_key=True)
    username: Mapped[str] = mapped_column(String(150))
Run type checking:
make type-check
Symptom: TYPE_CHECKING import errors¶
Cause: Circular imports or incorrect conditional imports.
Solution:
Use from __future__ import annotations and TYPE_CHECKING properly:
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pydotorg.domains.users.models import User
def get_user() -> User: # Works due to PEP 563
...
Test Failures¶
Symptom: Tests fail with database errors¶
Cause: Test database is not set up or fixtures are missing.
Solution:
# Ensure test infrastructure is running
make infra-up
# Run unit tests only (no external deps)
make test
# Run integration tests (requires DB)
make test-integration
Symptom: “fixture not found” errors¶
Cause: Missing conftest.py or incorrect fixture scope.
Solution:
Ensure conftest.py exists in the test directory
Check fixture imports and scope:
# tests/conftest.py
import pytest

from pydotorg.main import app

@pytest.fixture(scope="function")
async def test_client():
    async with app.test_client() as client:
        yield client
Symptom: Async test failures¶
Cause: Missing pytest-asyncio configuration.
Solution:
Ensure pyproject.toml has:
[tool.pytest.ini_options]
asyncio_mode = "auto"
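With asyncio_mode = "auto", pytest-asyncio collects async def tests without per-test markers. A minimal self-contained example (the helper here is illustrative, not part of the project):

```python
import asyncio

# Illustrative async helper standing in for real application code
async def fetch_status() -> str:
    await asyncio.sleep(0)
    return "ok"

# With asyncio_mode = "auto", pytest-asyncio runs this coroutine test
# directly -- no @pytest.mark.asyncio decorator needed.
async def test_fetch_status() -> None:
    assert await fetch_status() == "ok"
```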
Runtime Issues¶
MissingGreenlet Errors¶
This was a high-priority issue that has since been fixed in the project.
Symptom: “MissingGreenlet: greenlet_spawn has not been called”¶
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called;
can't call await_only() here.
Cause: Accessing lazy-loaded relationships outside of an async context, typically in background tasks.
Solution (Applied Fix):
Add eager loading with selectinload() to queries that access relationships:
from sqlalchemy.orm import selectinload
# Before (causes MissingGreenlet):
stmt = select(Event).where(Event.id == event_id)
# After (fixed):
stmt = (
select(Event)
.where(Event.id == event_id)
.options(
selectinload(Event.venue),
selectinload(Event.occurrences),
selectinload(Event.categories),
)
)
Reference: This fix was applied to:
EventRepository.get_upcoming() and get_featured()
tasks/search.py for index_event() and index_all_events()
tasks/cache.py for warm_homepage_cache()
Session Management Problems¶
Symptom: Session not persisting across requests¶
Cause: Session cookie configuration or CSRF issues.
Solution:
Check session configuration in config.py:
session_secret_key: str = "change-me-in-production-session"
session_expire_minutes: int = 60 * 24 * 7  # 7 days
session_cookie_name: str = "session_id"
Ensure CSRF is properly configured:
# API routes should be excluded from CSRF
csrf_exclude_routes = ["/api/*", "/health"]
For development, check that cookies are being set:
curl -c cookies.txt -b cookies.txt http://localhost:8000/auth/login
Symptom: “Invalid or expired session”¶
Cause: Session has expired or secret key changed.
Solution:
Clear browser cookies
Restart the server with consistent secret keys
For Redis session store issues, flush the Redis database:
docker exec -it $(docker compose ps -q redis) redis-cli FLUSHDB
Background Task Failures¶
Symptom: SAQ tasks not executing¶
Cause: Worker not running or queue misconfiguration.
Solution:
Ensure the worker is running:
make worker
Check worker logs for errors:
# If using tmux
tmux attach -t pydotorg  # Switch to worker window
# Or check log files
tail -f logs/worker.log
Verify Redis connectivity:
docker exec -it $(docker compose ps -q redis) redis-cli
> KEYS saq:*
Symptom: Task fails with “session_maker not in context”¶
Cause: The SAQ context is not properly configured with database session.
Solution:
Ensure the worker context includes session_maker:
# tasks/worker.py
async def startup(ctx: dict) -> None:
from pydotorg.main import sqlalchemy_config
ctx["session_maker"] = sqlalchemy_config.create_session_maker()
async def shutdown(ctx: dict) -> None:
pass
saq_settings = {
"queue": queue,
"functions": get_task_functions(),
"startup": startup,
"shutdown": shutdown,
}
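The ctx handoff can be exercised without a database. Here is a minimal self-contained sketch with a stand-in for create_session_maker() (all names are illustrative):

```python
import asyncio
from contextlib import asynccontextmanager

# Stand-in for sqlalchemy_config.create_session_maker(); yields a fake session.
@asynccontextmanager
async def fake_session_maker():
    yield "session"

async def startup(ctx: dict) -> None:
    # Mirrors the worker startup hook: stash the factory on the context
    ctx["session_maker"] = fake_session_maker

async def my_task(ctx: dict) -> str:
    # Tasks open a session from the factory placed in ctx at startup
    async with ctx["session_maker"]() as session:
        return session

async def main() -> str:
    ctx: dict = {}
    await startup(ctx)
    return await my_task(ctx)

print(asyncio.run(main()))  # session
```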
Symptom: Meilisearch indexing fails¶
Cause: Meilisearch is not running or API key is incorrect.
Solution:
Start Meilisearch:
make infra-up  # Includes meilisearch
Check configuration:
# .env
MEILISEARCH_URL=http://127.0.0.1:7700
MEILISEARCH_API_KEY=  # Empty for development
Verify Meilisearch is accessible:
curl http://localhost:7700/health
# Should return: {"status":"available"}
Performance Issues¶
Slow Database Queries¶
Symptom: Page loads are slow¶
Cause: N+1 query problems or missing database indexes.
Solution:
Enable SQL logging to identify slow queries:
# In development, set in .env
DATABASE_ECHO=true
Add eager loading for relationships:
stmt = (
    select(Job)
    .options(
        selectinload(Job.job_types),
        selectinload(Job.categories),
    )
    .limit(20)
)
Ensure proper indexes exist:
class Job(Base):
    __tablename__ = "jobs"

    status: Mapped[str] = mapped_column(index=True)
    created_at: Mapped[datetime] = mapped_column(index=True)

    __table_args__ = (
        Index("ix_jobs_status_created", "status", "created_at"),
    )
Use pagination for large result sets:
@get("/jobs")
async def list_jobs(
    limit_offset: LimitOffset,  # Pagination dependency
    job_service: JobService,
) -> OffsetPagination[JobRead]:
    ...
Symptom: Connection pool exhaustion¶
QueuePool limit reached
Cause: Too many concurrent database connections.
Solution:
Adjust pool settings in config.py:
database_pool_size: int = 20  # Default connections
database_max_overflow: int = 10  # Extra connections when needed
Ensure connections are properly released:
async with session_maker() as session:
    # Work with session
    ...
# Connection automatically released
Memory Usage¶
Symptom: High memory usage in production¶
Cause: Large result sets loaded into memory or memory leaks.
Solution:
Use streaming for large queries:
async for job in await session.stream_scalars(stmt):
    yield job
Limit result set sizes:
stmt = select(BlogEntry).limit(100).offset(page * 100)
Monitor with:
# Check process memory
ps aux | grep granian
# Or use docker stats
docker stats
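For a quick in-process check (rather than ps or docker stats), the standard library's resource module reports peak RSS; note the units differ by platform (kilobytes on Linux, bytes on macOS):

```python
import resource

# Peak resident set size of the current process
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak)
```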
Deployment Issues¶
Symptom: Production validation fails¶
SECRET_KEY must be changed in prod environment
Cause: Default/insecure secrets are being used in production.
Solution:
Generate secure secrets for production:
# Generate a secure secret key
python -c "import secrets; print(secrets.token_urlsafe(32))"
Set in environment:
SECRET_KEY=your-generated-secret-key-here
SESSION_SECRET_KEY=another-generated-secret-key
CSRF_SECRET=yet-another-secret-key
Symptom: Static files not loading¶
Cause: Static files not built or path misconfiguration.
Solution:
Build frontend assets:
make assets-build # Or legacy: make css
Check static file configuration:
static_url: str = "/static"
static_dir: Path = BASE_DIR / "static"
Verify files exist:
ls -la static/css/tailwind.css
Symptom: CORS errors in production¶
Cause: CORS not configured for production domain.
Solution:
Update allowed hosts in config.py:
allowed_hosts: list[str] = [
"localhost",
"127.0.0.1",
"python.org",
"*.python.org",
"your-domain.com",
]
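Litestar performs the host matching itself; purely to illustrate the intended wildcard semantics of entries like "*.python.org", here is a small fnmatch-based sketch (not the framework's actual implementation):

```python
from fnmatch import fnmatch

allowed_hosts = ["localhost", "127.0.0.1", "python.org", "*.python.org"]

def host_allowed(host: str) -> bool:
    # A host passes if it matches any literal or wildcard pattern
    return any(fnmatch(host, pattern) for pattern in allowed_hosts)

print(host_allowed("docs.python.org"))  # True
print(host_allowed("evil.com"))         # False
```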
FAQ¶
How do I reset everything and start fresh?¶
# Stop all containers
make docker-down
make infra-down
# Clean up
make clean
docker volume prune -f
# Start fresh
make infra-up
make litestar-db-upgrade
make db-seed
make serve
How do I run the full development environment?¶
The easiest way is with tmux:
make dev-tmux
tmux attach -t pydotorg
This starts three windows:
server: Litestar web server
worker: SAQ background worker
css: TailwindCSS watcher
How do I check if my configuration is correct?¶
Start the server and check the startup banner:
make serve
The banner shows:
Environment (dev/staging/prod)
Database connection status
Redis connection
Feature flags
Any configuration warnings
How do I debug a specific route?¶
Add debug logging:
import logging

logger = logging.getLogger(__name__)

@get("/my-route")
async def my_route() -> dict:
    logger.debug("Entering my_route")
    ...
Run with debug logging:
make serve-debug
Check logs:
tail -f logs/dev.log
Why are my templates not updating?¶
Check you are running with reload enabled:
make serve  # Uses --reload flag
Clear browser cache or hard refresh (Cmd+Shift+R / Ctrl+Shift+R)
Template changes should hot-reload, but if not:
# Restart the server # Ctrl+C then: make serve
How do I test email sending locally?¶
The project uses MailDev for local email testing:
Ensure MailDev is running:
make infra-up  # Includes maildev
Access the web interface:
http://localhost:1080
Configure SMTP in .env:
SMTP_HOST=localhost
SMTP_PORT=1025
All emails sent locally will appear in the MailDev interface.
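To verify the SMTP path end to end, a short smtplib script can push a message into MailDev (host and port follow the .env values above; the function names are illustrative):

```python
import smtplib
from email.message import EmailMessage

def build_test_message() -> EmailMessage:
    # Addresses are arbitrary -- MailDev captures everything it receives
    msg = EmailMessage()
    msg["From"] = "dev@localhost"
    msg["To"] = "test@localhost"
    msg["Subject"] = "MailDev smoke test"
    msg.set_content("Hello from the local dev setup")
    return msg

def send_test_message() -> None:
    # Requires MailDev to be running (make infra-up)
    with smtplib.SMTP("localhost", 1025) as smtp:
        smtp.send_message(build_test_message())
```

Call send_test_message() and the message should appear in the web interface at http://localhost:1080.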
How do I add a new domain/feature?¶
Create the domain structure:
src/pydotorg/domains/myfeature/
|-- __init__.py
|-- models.py
|-- schemas.py
|-- repositories.py
|-- services.py
|-- controllers.py
|-- dependencies.py
Create migrations:
make litestar-db-make
Register controllers in main.py
Add tests in tests/unit/domains/myfeature/
Run CI to verify:
make ci
Getting Help¶
If you encounter an issue not covered here:
Search the existing GitHub issues for known problems
Check the Architecture Documentation
Review the API Documentation
For bugs, please file an issue with:
Steps to reproduce
Expected behavior
Actual behavior
Environment details (OS, Python version, etc.)
Relevant log output