# SEREACT Testing Guide
This document provides comprehensive information about testing the SEREACT API, including different test types, setup instructions, and best practices.
## Test Types
SEREACT uses a multi-layered testing approach to ensure reliability and maintainability:
### 1. Unit Tests
- Purpose: Test individual components in isolation
- Speed: Fast (< 1 second per test)
- Dependencies: Use mocks and stubs
- Coverage: Functions, classes, and modules
- Location: `tests/` (excluding `tests/integration/`)
### 2. Integration Tests
- Purpose: Test interactions with real external services
- Speed: Moderate (1-10 seconds per test)
- Dependencies: Real Firestore database
- Coverage: Database operations, service integrations
- Location: `tests/integration/`
### 3. End-to-End (E2E) Tests
- Purpose: Test complete user workflows
- Speed: Moderate to slow (5-30 seconds per test)
- Dependencies: Full application stack (mocked or real)
- Coverage: Complete API workflows
- Location: `tests/test_e2e.py`
## Test Structure

```
tests/
├── conftest.py                        # Global test configuration
├── test_e2e.py                        # End-to-end workflow tests
├── api/                               # API endpoint tests
│   ├── conftest.py                    # API-specific fixtures
│   ├── test_auth.py                   # Authentication tests
│   ├── test_teams.py                  # Team management tests
│   ├── test_users.py                  # User management tests
│   ├── test_images.py                 # Image management tests
│   └── test_search.py                 # Search functionality tests
├── auth/                              # Authentication module tests
├── db/                                # Database layer tests
├── integration/                       # Integration tests
│   ├── __init__.py
│   └── test_firestore_integration.py
├── models/                            # Data model tests
└── services/                          # Business logic tests
```
## Running Tests

### Prerequisites

- Virtual Environment: Ensure you're in the project's virtual environment:

  ```bash
  # Windows (Git Bash)
  source venv/Scripts/activate

  # Linux/macOS
  source venv/bin/activate
  ```

- Dependencies: Install test dependencies:

  ```bash
  pip install -r requirements.txt
  ```
### Quick Start

Use the test runner script for convenient test execution:

```bash
# Run unit tests only (fast, recommended for development)
python scripts/run_tests.py unit

# Run end-to-end tests with mocked services
python scripts/run_tests.py e2e

# Run integration tests (requires real database)
python scripts/run_tests.py integration

# Run all tests
python scripts/run_tests.py all

# Run tests with coverage report
python scripts/run_tests.py coverage
```
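For orientation, such a wrapper can be a thin shim over the pytest invocations listed in the next section. The following is a hedged sketch, not the actual contents of `scripts/run_tests.py`:

```python
import subprocess
import sys

# Illustrative sketch only; the real scripts/run_tests.py may differ.
COMMANDS = {
    "unit": ["pytest", "-m", "not integration and not e2e", "-v"],
    "e2e": ["pytest", "-m", "e2e", "-v"],
    "integration": ["pytest", "-m", "integration", "-v"],
    "all": ["pytest", "-v"],
    "coverage": ["pytest", "--cov=src", "--cov-report=html", "--cov-report=term"],
}

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "unit"
    sys.exit(subprocess.call(COMMANDS[mode]))
```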
### Direct pytest Commands

For more control, use pytest directly:

```bash
# Unit tests only
pytest -m "not integration and not e2e" -v

# End-to-end tests
pytest -m e2e -v

# Integration tests
FIRESTORE_INTEGRATION_TEST=1 pytest -m integration -v

# Specific test file
pytest tests/test_e2e.py -v

# Specific test function
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v

# Run with coverage
pytest --cov=src --cov-report=html --cov-report=term
```
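The `-m` filters above select tests by pytest marker. As an illustration (test bodies are placeholders), marking works like this:

```python
import pytest


@pytest.mark.e2e
def test_complete_team_workflow():
    """Collected by `pytest -m e2e`."""
    ...


@pytest.mark.integration
def test_firestore_roundtrip():
    """Excluded by `pytest -m "not integration and not e2e"`."""
    ...
```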
## End-to-End Test Coverage
The E2E tests cover the following complete workflows:
### 1. Team Management Workflow
- Create a new team
- Retrieve team details
- Update team information
- List all teams
- Verify team isolation
### 2. User Management Workflow
- Create admin and regular users
- Assign users to teams
- Update user roles and permissions
- List team members
- Verify user access controls
### 3. API Key Authentication Workflow
- Generate API keys for users
- Authenticate requests using API keys
- Test protected endpoints
- Manage API key lifecycle (create, use, deactivate)
- Verify authentication failures
### 4. Image Upload and Management Workflow
- Upload images with metadata
- Retrieve image details
- Update image metadata and tags
- List team images
- Download images
- Verify file handling
### 5. Search Workflow
- Text-based search by description
- Tag-based filtering
- Combined search queries
- Search result pagination
- Verify search accuracy
### 6. Multi-Team Isolation
- Create multiple teams
- Upload images to different teams
- Verify cross-team access restrictions
- Test search result isolation
- Ensure data privacy
### 7. Error Handling
- Invalid data validation
- Authentication failures
- Resource not found scenarios
- File upload errors
- Proper error responses
## Integration Test Setup

Integration tests require real external services. Follow these steps:

### 1. Firestore Setup

- Create a test database:
  - Use a separate Firestore database for testing
  - Database name should end with `-test` (e.g., `sereact-test`)
- Set environment variables:

  ```bash
  export FIRESTORE_INTEGRATION_TEST=1
  export FIRESTORE_PROJECT_ID=your-test-project
  export FIRESTORE_DATABASE_NAME=sereact-test
  export FIRESTORE_CREDENTIALS_FILE=path/to/test-credentials.json
  ```

- Run integration tests:

  ```bash
  python scripts/run_tests.py integration
  ```
### 2. Full E2E Integration Setup

For testing with real cloud services:

- Set up all services:
  - Google Cloud Storage bucket
  - Firestore database
  - Cloud Vision API
  - Pinecone vector database
- Configure environment:

  ```bash
  export E2E_INTEGRATION_TEST=1
  export GCS_BUCKET_NAME=your-test-bucket
  export VECTOR_DB_API_KEY=your-pinecone-key
  # ... other service credentials
  ```

- Run E2E integration tests:

  ```bash
  python scripts/run_tests.py e2e --with-integration
  ```
## Test Data Management

### Fixtures and Test Data

- Shared fixtures: Defined in `tests/conftest.py`
- API fixtures: Defined in `tests/api/conftest.py`
- Sample images: Generated programmatically using PIL
- Test data: Isolated per test function
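For example, a PIL-based sample-image fixture might look like this (a minimal sketch; the fixture name is hypothetical):

```python
import io

import pytest
from PIL import Image


@pytest.fixture
def sample_image_bytes():
    """Generate a small in-memory PNG for upload tests."""
    image = Image.new("RGB", (64, 64), color=(200, 30, 30))
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return buffer.getvalue()
```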
### Cleanup Strategy
- Unit tests: Automatic cleanup through mocking
- Integration tests: Manual cleanup in test teardown
- E2E tests: Resource tracking and cleanup utilities
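The E2E resource tracking mentioned above is commonly implemented as a yield fixture. A sketch, assuming a `client` test-client fixture and the REST routes shown elsewhere in this guide:

```python
import pytest


@pytest.fixture
def created_team_ids(client):
    """Collect IDs of teams created during a test, then delete them."""
    team_ids = []
    yield team_ids
    # Teardown: best-effort cleanup of everything the test registered
    for team_id in team_ids:
        client.delete(f"/api/v1/teams/{team_id}")
```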
## Best Practices

### Writing Tests

- Test naming: Use descriptive names that explain the scenario

  ```python
  def test_user_cannot_access_other_team_images(self):
      ...
  ```

- Test structure: Follow the Arrange-Act-Assert pattern

  ```python
  def test_create_team(self):
      # Arrange
      team_data = {"name": "Test Team"}

      # Act
      response = client.post("/api/v1/teams", json=team_data)

      # Assert
      assert response.status_code == 201
      assert response.json()["name"] == "Test Team"
  ```
- Test isolation: Each test should be independent
- Mock external services: Use mocks for unit tests (a sketch follows this list)
- Use fixtures: Leverage pytest fixtures for common setup
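As a hedged illustration of the mocking bullet above (the patched import path is hypothetical):

```python
from unittest.mock import patch


def test_upload_never_touches_real_storage():
    # "src.services.storage.StorageClient" is an assumed path; patch
    # wherever the code under test actually looks the client up.
    with patch("src.services.storage.StorageClient") as mock_cls:
        mock_cls.return_value.upload.return_value = "images/test.png"
        storage = mock_cls()  # stands in for the code under test
        assert storage.upload("test.png") == "images/test.png"
        storage.upload.assert_called_once_with("test.png")
```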
### Running Tests in Development

- Fast feedback loop: Run unit tests frequently

  ```bash
  pytest -m "not integration and not e2e" --tb=short
  ```

- Pre-commit testing: Run E2E tests before committing

  ```bash
  python scripts/run_tests.py e2e
  ```

- Coverage monitoring: Check test coverage regularly

  ```bash
  python scripts/run_tests.py coverage
  ```
### CI/CD Integration

For continuous integration, use different test strategies:

```yaml
# Example GitHub Actions workflow
- name: Run unit tests
  run: python scripts/run_tests.py unit

- name: Run E2E tests
  run: python scripts/run_tests.py e2e

- name: Run integration tests (if credentials available)
  run: python scripts/run_tests.py integration
  if: env.FIRESTORE_INTEGRATION_TEST == '1'
```
## Troubleshooting

### Common Issues
- Import errors: Ensure you're in the virtual environment
- Database connection: Check Firestore credentials for integration tests
- Slow tests: Use unit tests for development, integration tests for CI
- Test isolation: Clear test data between runs
### Debug Mode

Run tests with additional debugging:

```bash
# Verbose output with full tracebacks
pytest -v --tb=long

# Stop on first failure
pytest -x

# Run specific test with debugging
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v -s
```
### Performance Monitoring

Monitor test performance:

```bash
# Show slowest tests
pytest --durations=10

# Profile test execution (requires the pytest-profiling plugin)
pytest --profile
```
## Test Metrics
Track these metrics to ensure test quality:
- Coverage: Aim for >80% code coverage
- Speed: Unit tests <1s, E2E tests <30s
- Reliability: Tests should pass consistently
- Maintainability: Tests should be easy to update
## Contributing
When adding new features:
- Write tests first: Use TDD approach
- Cover all scenarios: Happy path, edge cases, error conditions
- Update documentation: Keep this guide current
- Run full test suite: Ensure no regressions
For more information about the SEREACT API architecture and features, see the main README.md.
## Running E2E Tests

### With Fresh Database

If you have a fresh database, the E2E tests will automatically run the bootstrap process:

```bash
pytest tests/test_e2e.py -v -m e2e
```
### With Existing Database

If your database already has teams and users (bootstrap completed), you need to provide an API key:

- Get an existing API key from your application or create one via the API
- Set the environment variable:

  ```bash
  export E2E_TEST_API_KEY="your-api-key-here"
  ```

- Run the tests:

  ```bash
  pytest tests/test_e2e.py -v -m e2e
  ```
### Example with API Key

```bash
# Set your API key
export E2E_TEST_API_KEY="sk_test_1234567890abcdef"

# Run E2E tests
python scripts/run_tests.py e2e
```
## Test Features

### Idempotent Tests

The E2E tests are designed to be idempotent; they can be run multiple times against the same database without conflicts:
- Unique identifiers: Each test run uses unique suffixes for all created data
- Graceful handling: Tests handle existing data gracefully
- Cleanup: Tests create isolated data that doesn't interfere with existing data
### Test Data Isolation
- Each test run creates unique teams, users, and images
- Tests use UUID-based suffixes to avoid naming conflicts
- Search tests use unique tags to find only test-created data
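For instance, the UUID-based suffixes might be generated along these lines (an illustrative helper, not necessarily the suite's exact code):

```python
import uuid


def unique_name(prefix: str) -> str:
    """Append a short random suffix so repeated runs never collide."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"


team_name = unique_name("e2e-team")                    # e.g. "e2e-team-3f9a1c2b"
user_email = f"{unique_name('e2e-user')}@example.com"  # unique per run
```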
## Test Configuration

### Environment Variables

- `E2E_TEST_API_KEY`: API key for E2E tests with an existing database
- `E2E_INTEGRATION_TEST`: Set to `1` to enable integration tests
- `TEST_DATABASE_URL`: Override database for testing (optional)
### Pytest Configuration

The `pytest.ini` file contains:
- Test markers for categorizing tests
- Async test configuration
- Warning filters
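A minimal `pytest.ini` consistent with the markers and options described in this guide might look like this (illustrative, not the exact file):

```ini
[pytest]
markers =
    e2e: end-to-end workflow tests
    integration: tests that require real external services
asyncio_mode = auto
filterwarnings =
    ignore::DeprecationWarning
```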
## Best Practices

### Writing Tests
- Use descriptive names: Test names should clearly describe what they test
- Test one thing: Each test should focus on a single workflow or feature
- Use fixtures: Leverage pytest fixtures for common setup
- Handle errors: Test both success and error scenarios
- Clean up: Ensure tests don't leave behind test data (when possible)
### Running Tests
- Run frequently: Run unit tests during development
- CI/CD integration: Ensure all tests pass before deployment
- Test environments: Use separate databases for testing
- Monitor performance: Track test execution time
## Troubleshooting

### Common Issues
"Bootstrap already completed"
- Cause: Database already has teams/users
- Solution: Set
E2E_TEST_API_KEYenvironment variable
"No existing API key found"
- Cause: No valid API key provided for existing database
- Solution: Create an API key via the API or bootstrap endpoint
"Failed to create test team"
- Cause: Insufficient permissions or API key issues
- Solution: Ensure the API key belongs to an admin user
Import errors
- Cause: Python path or dependency issues
- Solution: Ensure virtual environment is activated and dependencies installed
### Getting Help

- Check the test output for specific error messages
- Verify environment variables are set correctly
- Ensure the API server is running (for integration tests)
- Check database connectivity
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"  # quoted so YAML doesn't parse it as 3.1
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Run unit tests
        run: python scripts/run_tests.py unit
      - name: Run integration tests
        run: python scripts/run_tests.py integration
        env:
          E2E_INTEGRATION_TEST: 1
```
### Docker Testing

```dockerfile
# Test stage
FROM python:3.10-slim AS test

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
RUN python scripts/run_tests.py unit
```
## Coverage Reports

Generate coverage reports:

```bash
python scripts/run_tests.py coverage
```

View the HTML coverage report:

```bash
open htmlcov/index.html
```
## Performance Testing

For performance testing:

- Use `pytest-benchmark` for micro-benchmarks (see the sketch below)
- Test with realistic data volumes
- Monitor database query performance
- Test concurrent user scenarios
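As a minimal `pytest-benchmark` example (the function under test is a stand-in):

```python
def build_search_index(items):
    """Stand-in for a function worth micro-benchmarking."""
    return {item: len(item) for item in items}


def test_build_search_index_speed(benchmark):
    # pytest-benchmark injects the `benchmark` fixture; it calls the
    # target repeatedly and reports timing statistics.
    items = [f"image-{i}" for i in range(1_000)]
    result = benchmark(build_search_index, items)
    assert len(result) == 1_000
```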