# SEREACT Testing Guide

This document provides comprehensive information about testing the SEREACT API, including the different test types, setup instructions, and best practices.

## Test Types

SEREACT uses a multi-layered testing approach to ensure reliability and maintainability:

### 1. Unit Tests

- **Purpose**: Test individual components in isolation
- **Speed**: Fast (< 1 second per test)
- **Dependencies**: Use mocks and stubs
- **Coverage**: Functions, classes, and modules
- **Location**: `tests/` (excluding `tests/integration/`)
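The mock-based pattern looks roughly like this. The snippet is a generic sketch: the service under test is stood in by a local function rather than a real SEREACT module, and the repository is a `MagicMock` in place of the Firestore layer.

```python
from unittest.mock import MagicMock

def test_service_calls_repository_once():
    """Unit-test pattern: replace the Firestore-backed layer with a mock."""
    fake_repository = MagicMock()
    fake_repository.create.return_value = {"id": "team-123", "name": "Test Team"}

    # Stand-in for a real service function imported from src/ in the actual suite.
    def create_team(repository, data):
        return repository.create(data)

    result = create_team(fake_repository, {"name": "Test Team"})

    fake_repository.create.assert_called_once_with({"name": "Test Team"})
    assert result["id"] == "team-123"
```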
### 2. Integration Tests

- **Purpose**: Test interactions with real external services
- **Speed**: Moderate (1-10 seconds per test)
- **Dependencies**: Real Firestore database
- **Coverage**: Database operations, service integrations
- **Location**: `tests/integration/`

### 3. End-to-End (E2E) Tests

- **Purpose**: Test complete user workflows
- **Speed**: Moderate to slow (5-30 seconds per test)
- **Dependencies**: Full application stack (mocked or real)
- **Coverage**: Complete API workflows
- **Location**: `tests/test_e2e.py`
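Integration and E2E tests are selected through the pytest markers used in the commands under Running Tests below. As a quick illustration of how a test might be tagged (the first name is hypothetical; the second appears in the suite, though its exact placement there may differ):

```python
import pytest

# Unit tests carry no marker and are selected with: -m "not integration and not e2e"

@pytest.mark.integration
def test_firestore_roundtrip():
    """Hypothetical example: talks to a real Firestore database."""
    ...

@pytest.mark.e2e
def test_complete_team_workflow():
    """Exercises a complete API workflow end to end."""
    ...
```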
## Test Structure

```
tests/
├── conftest.py                     # Global test configuration
├── test_e2e.py                     # End-to-end workflow tests
├── api/                            # API endpoint tests
│   ├── conftest.py                 # API-specific fixtures
│   ├── test_auth.py                # Authentication tests
│   ├── test_teams.py               # Team management tests
│   ├── test_users.py               # User management tests
│   ├── test_images.py              # Image management tests
│   └── test_search.py              # Search functionality tests
├── auth/                           # Authentication module tests
├── db/                             # Database layer tests
├── integration/                    # Integration tests
│   ├── __init__.py
│   └── test_firestore_integration.py
├── models/                         # Data model tests
└── services/                       # Business logic tests
```
## Running Tests

### Prerequisites

1. **Virtual Environment**: Ensure you're in the project's virtual environment:

   ```bash
   # Windows (Git Bash)
   source venv/Scripts/activate

   # Linux/macOS
   source venv/bin/activate
   ```

2. **Dependencies**: Install test dependencies:

   ```bash
   pip install -r requirements.txt
   ```
### Quick Start
|
|
|
|
Use the test runner script for convenient test execution:
|
|
|
|
```bash
|
|
# Run unit tests only (fast, recommended for development)
|
|
python scripts/run_tests.py unit
|
|
|
|
# Run end-to-end tests with mocked services
|
|
python scripts/run_tests.py e2e
|
|
|
|
# Run integration tests (requires real database)
|
|
python scripts/run_tests.py integration
|
|
|
|
# Run all tests
|
|
python scripts/run_tests.py all
|
|
|
|
# Run tests with coverage report
|
|
python scripts/run_tests.py coverage
|
|
```
|
|
|
|
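The runner script is not reproduced here; conceptually it just maps each mode to one of the pytest invocations listed in the next section. A minimal sketch of that idea follows (the real `scripts/run_tests.py` may do more, such as environment checks):

```python
#!/usr/bin/env python
"""Minimal sketch of a test-runner script; the real scripts/run_tests.py may differ."""
import subprocess
import sys

COMMANDS = {
    "unit": ["pytest", "-m", "not integration and not e2e", "-v"],
    "e2e": ["pytest", "-m", "e2e", "-v"],
    "integration": ["pytest", "-m", "integration", "-v"],
    "all": ["pytest", "-v"],
    "coverage": ["pytest", "--cov=src", "--cov-report=html", "--cov-report=term"],
}

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "unit"
    if mode not in COMMANDS:
        sys.exit(f"unknown mode: {mode!r}; expected one of {', '.join(COMMANDS)}")
    sys.exit(subprocess.call(COMMANDS[mode]))
```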
### Direct pytest Commands

For more control, use pytest directly:

```bash
# Unit tests only
pytest -m "not integration and not e2e" -v

# End-to-end tests
pytest -m e2e -v

# Integration tests
FIRESTORE_INTEGRATION_TEST=1 pytest -m integration -v

# Specific test file
pytest tests/test_e2e.py -v

# Specific test function
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v

# Run with coverage
pytest --cov=src --cov-report=html --cov-report=term
```
## End-to-End Test Coverage

The E2E tests cover the following complete workflows:

### 1. Team Management Workflow

- Create a new team
- Retrieve team details
- Update team information
- List all teams
- Verify team isolation

### 2. User Management Workflow

- Create admin and regular users
- Assign users to teams
- Update user roles and permissions
- List team members
- Verify user access controls

### 3. API Key Authentication Workflow

- Generate API keys for users
- Authenticate requests using API keys
- Test protected endpoints
- Manage API key lifecycle (create, use, deactivate)
- Verify authentication failures

### 4. Image Upload and Management Workflow

- Upload images with metadata
- Retrieve image details
- Update image metadata and tags
- List team images
- Download images
- Verify file handling

### 5. Search Workflow

- Text-based search by description
- Tag-based filtering
- Combined search queries
- Search result pagination
- Verify search accuracy

### 6. Multi-Team Isolation

- Create multiple teams
- Upload images to different teams
- Verify cross-team access restrictions (sketched in the example below)
- Test search result isolation
- Ensure data privacy

### 7. Error Handling

- Invalid data validation
- Authentication failures
- Resource not found scenarios
- File upload errors
- Proper error responses
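As a condensed illustration of workflow 6, a cross-team access check might look like the following. The fixture names, header name, and endpoint path are assumptions for the sketch, not the suite's actual code:

```python
import pytest

@pytest.mark.e2e
def test_cross_team_image_access_is_denied(client, team_a_api_key, team_b_image_id):
    """A user from team A must not be able to read an image owned by team B."""
    response = client.get(
        f"/api/v1/images/{team_b_image_id}",      # endpoint path is illustrative
        headers={"X-API-Key": team_a_api_key},    # auth header name is an assumption
    )
    assert response.status_code in (403, 404)
```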
## Integration Test Setup

Integration tests require real external services. Follow these steps:

### 1. Firestore Setup

1. **Create a test database**:
   - Use a separate Firestore database for testing
   - The database name should end with `-test` (e.g., `sereact-test`)

2. **Set environment variables**:

   ```bash
   export FIRESTORE_INTEGRATION_TEST=1
   export FIRESTORE_PROJECT_ID=your-test-project
   export FIRESTORE_DATABASE_NAME=sereact-test
   export FIRESTORE_CREDENTIALS_FILE=path/to/test-credentials.json
   ```

3. **Run integration tests**:

   ```bash
   python scripts/run_tests.py integration
   ```
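One straightforward way to gate these tests on `FIRESTORE_INTEGRATION_TEST` is an autouse fixture in `tests/integration/conftest.py`. This is a sketch of the pattern, not necessarily how the suite implements it:

```python
# tests/integration/conftest.py (illustrative sketch)
import os

import pytest

@pytest.fixture(autouse=True)
def _require_firestore_flag():
    """Skip every test in this package unless the opt-in flag is exported."""
    if os.environ.get("FIRESTORE_INTEGRATION_TEST") != "1":
        pytest.skip("set FIRESTORE_INTEGRATION_TEST=1 to run Firestore integration tests")
```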
### 2. Full E2E Integration Setup

For testing with real cloud services:

1. **Set up all services**:
   - Google Cloud Storage bucket
   - Firestore database
   - Cloud Vision API
   - Pinecone vector database

2. **Configure environment**:

   ```bash
   export E2E_INTEGRATION_TEST=1
   export GCS_BUCKET_NAME=your-test-bucket
   export VECTOR_DB_API_KEY=your-pinecone-key
   # ... other service credentials
   ```

3. **Run E2E integration tests**:

   ```bash
   python scripts/run_tests.py e2e --with-integration
   ```
## Test Data Management

### Fixtures and Test Data

- **Shared fixtures**: Defined in `tests/conftest.py`
- **API fixtures**: Defined in `tests/api/conftest.py`
- **Sample images**: Generated programmatically using PIL (see the sketch below)
- **Test data**: Isolated per test function
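A minimal sketch of a PIL-based image fixture, assuming the tests only need valid image bytes (the fixture name and image size in the real conftest may differ):

```python
import io

import pytest
from PIL import Image

@pytest.fixture
def sample_image_bytes():
    """Generate a small in-memory PNG so tests never depend on files on disk."""
    image = Image.new("RGB", (64, 64), color=(255, 0, 0))
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return buffer.getvalue()
```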
### Cleanup Strategy

- **Unit tests**: Automatic cleanup through mocking
- **Integration tests**: Manual cleanup in test teardown
- **E2E tests**: Resource tracking and cleanup utilities (illustrated below)
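For the E2E resource tracking, one common shape is a fixture that collects created IDs and deletes them on teardown. This is a hedged sketch; the delete endpoint, header name, and fixture names are assumptions:

```python
import pytest

@pytest.fixture
def created_image_ids(client, admin_api_key):
    """Collect image IDs created during a test and clean them up afterwards."""
    ids = []
    yield ids  # the test appends the IDs of any images it creates
    for image_id in ids:
        # Best-effort cleanup; endpoint path and header name are illustrative.
        client.delete(f"/api/v1/images/{image_id}", headers={"X-API-Key": admin_api_key})
```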
## Best Practices

### Writing Tests

1. **Test naming**: Use descriptive names that explain the scenario

   ```python
   def test_user_cannot_access_other_team_images(self):
       ...
   ```

2. **Test structure**: Follow the Arrange-Act-Assert pattern

   ```python
   def test_create_team(self):
       # Arrange
       team_data = {"name": "Test Team"}

       # Act
       response = client.post("/api/v1/teams", json=team_data)

       # Assert
       assert response.status_code == 201
       assert response.json()["name"] == "Test Team"
   ```

3. **Test isolation**: Each test should be independent
4. **Mock external services**: Use mocks for unit tests
5. **Use fixtures**: Leverage pytest fixtures for common setup
### Running Tests in Development

1. **Fast feedback loop**: Run unit tests frequently

   ```bash
   pytest -m "not integration and not e2e" --tb=short
   ```

2. **Pre-commit testing**: Run E2E tests before committing

   ```bash
   python scripts/run_tests.py e2e
   ```

3. **Coverage monitoring**: Check test coverage regularly

   ```bash
   python scripts/run_tests.py coverage
   ```
### CI/CD Integration

For continuous integration, use different test strategies:

```yaml
# Example GitHub Actions workflow steps
- name: Run unit tests
  run: python scripts/run_tests.py unit

- name: Run E2E tests
  run: python scripts/run_tests.py e2e

- name: Run integration tests (if credentials available)
  if: env.FIRESTORE_INTEGRATION_TEST == '1'
  run: python scripts/run_tests.py integration
```
## Troubleshooting

### Common Issues

1. **Import errors**: Ensure you're in the virtual environment
2. **Database connection**: Check Firestore credentials for integration tests
3. **Slow tests**: Use unit tests for development, integration tests for CI
4. **Test isolation**: Clear test data between runs
### Debug Mode

Run tests with additional debugging:

```bash
# Verbose output with full tracebacks
pytest -v --tb=long

# Stop on first failure
pytest -x

# Run a specific test with output capturing disabled
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v -s
```
### Performance Monitoring

Monitor test performance:

```bash
# Show the ten slowest tests
pytest --durations=10

# Profile test execution (requires the pytest-profiling plugin)
pytest --profile
```
## Test Metrics

Track these metrics to ensure test quality:

- **Coverage**: Aim for >80% code coverage
- **Speed**: Unit tests <1s, E2E tests <30s
- **Reliability**: Tests should pass consistently
- **Maintainability**: Tests should be easy to update
## Contributing

When adding new features:

1. **Write tests first**: Use a TDD approach
2. **Cover all scenarios**: Happy path, edge cases, error conditions
3. **Update documentation**: Keep this guide current
4. **Run the full test suite**: Ensure no regressions

For more information about the SEREACT API architecture and features, see the main [README.md](../README.md).
## Running E2E Tests

### With a Fresh Database

If you have a fresh database, the E2E tests will automatically run the bootstrap process:

```bash
pytest tests/test_e2e.py -v -m e2e
```

### With an Existing Database

If your database already has teams and users (bootstrap completed), you need to provide an API key:

1. **Get an existing API key** from your application or create one via the API
2. **Set the environment variable**:

   ```bash
   export E2E_TEST_API_KEY="your-api-key-here"
   ```

3. **Run the tests**:

   ```bash
   pytest tests/test_e2e.py -v -m e2e
   ```
### Example with API Key

```bash
# Set your API key
export E2E_TEST_API_KEY="sk_test_1234567890abcdef"

# Run E2E tests
python scripts/run_tests.py e2e
```
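Internally, the choice between the bootstrap flow and a pre-provisioned key might be resolved by a session-scoped fixture along these lines. `bootstrap_api_key` here is a hypothetical stand-in for the bootstrap step, not the suite's real fixture:

```python
import os

import pytest

@pytest.fixture(scope="session")
def e2e_api_key(bootstrap_api_key):
    """Prefer E2E_TEST_API_KEY; otherwise fall back to the key created by bootstrap."""
    # bootstrap_api_key is a hypothetical fixture standing in for the bootstrap flow.
    return os.environ.get("E2E_TEST_API_KEY") or bootstrap_api_key
```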
## Test Features

### Idempotent Tests

The E2E tests are designed to be idempotent: they can be run multiple times against the same database without conflicts:

- **Unique identifiers**: Each test run uses unique suffixes for all created data
- **Graceful handling**: Tests handle existing data gracefully
- **Cleanup**: Tests create isolated data that doesn't interfere with existing data

### Test Data Isolation

- Each test run creates unique teams, users, and images
- Tests use UUID-based suffixes to avoid naming conflicts (see the sketch below)
- Search tests use unique tags so they find only test-created data
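The UUID-based suffixes look roughly like this (the exact helper used by the suite may differ):

```python
import uuid

def unique_name(prefix: str) -> str:
    """Append a short random suffix so repeated runs never collide on names."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# Each run creates its own resources, e.g. "e2e-team-3f9c2a1b".
team_name = unique_name("e2e-team")
```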
## Test Configuration

### Environment Variables

- `E2E_TEST_API_KEY`: API key for E2E tests against an existing database
- `E2E_INTEGRATION_TEST`: Set to `1` to enable integration tests
- `TEST_DATABASE_URL`: Override the database used for testing (optional)

### Pytest Configuration

The `pytest.ini` file contains:

- Test markers for categorizing tests
- Async test configuration
- Warning filters
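The async configuration is what allows coroutine tests to run. For illustration only, assuming pytest-asyncio (or an equivalent plugin) and an `async_client` fixture, neither of which is confirmed by this guide:

```python
import pytest

@pytest.mark.asyncio
async def test_health_endpoint_returns_ok(async_client):
    """Illustrative async test; the fixture and endpoint path are assumptions."""
    response = await async_client.get("/api/v1/health")
    assert response.status_code == 200
```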
## Best Practices

### Writing Tests

1. **Use descriptive names**: Test names should clearly describe what they test
2. **Test one thing**: Each test should focus on a single workflow or feature
3. **Use fixtures**: Leverage pytest fixtures for common setup
4. **Handle errors**: Test both success and error scenarios
5. **Clean up**: Ensure tests don't leave behind test data (when possible)

### Running Tests

1. **Run frequently**: Run unit tests during development
2. **CI/CD integration**: Ensure all tests pass before deployment
3. **Test environments**: Use separate databases for testing
4. **Monitor performance**: Track test execution time
## Troubleshooting

### Common Issues

#### "Bootstrap already completed"

- **Cause**: The database already has teams/users
- **Solution**: Set the `E2E_TEST_API_KEY` environment variable

#### "No existing API key found"

- **Cause**: No valid API key was provided for an existing database
- **Solution**: Create an API key via the API or the bootstrap endpoint

#### "Failed to create test team"

- **Cause**: Insufficient permissions or API key issues
- **Solution**: Ensure the API key belongs to an admin user

#### Import errors

- **Cause**: Python path or dependency issues
- **Solution**: Ensure the virtual environment is activated and dependencies are installed

### Getting Help

1. Check the test output for specific error messages
2. Verify that environment variables are set correctly
3. Ensure the API server is running (for integration tests)
4. Check database connectivity
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Run unit tests
        run: python scripts/run_tests.py unit
      - name: Run integration tests
        run: python scripts/run_tests.py integration
        env:
          E2E_INTEGRATION_TEST: 1
```
### Docker Testing

```dockerfile
# Test stage
FROM python:3.10-slim AS test
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python scripts/run_tests.py unit
```
## Coverage Reports

Generate coverage reports:

```bash
python scripts/run_tests.py coverage
```

View the HTML coverage report:

```bash
# macOS; use xdg-open on Linux or start on Windows
open htmlcov/index.html
```
## Performance Testing

For performance testing:

1. Use `pytest-benchmark` for micro-benchmarks (example below)
2. Test with realistic data volumes
3. Monitor database query performance
4. Test concurrent user scenarios
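A minimal `pytest-benchmark` example; the function being timed is a placeholder for whatever hot path you want to measure (e.g. a search call):

```python
def build_large_payload():
    """Placeholder workload; in practice, benchmark a real hot path."""
    return [{"index": i, "tag": f"tag-{i}"} for i in range(10_000)]

def test_build_large_payload_speed(benchmark):
    # The `benchmark` fixture is provided by the pytest-benchmark plugin.
    result = benchmark(build_large_payload)
    assert len(result) == 10_000
```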