# SEREACT Testing Guide

This document provides comprehensive information about testing the SEREACT API, including unit tests, integration tests, and end-to-end tests.

## Test Types

SEREACT includes several types of tests to ensure code quality and functionality:

### 1. Unit Tests (`unit`)

- **Purpose**: Test individual components in isolation using mocks
- **Speed**: Fast (< 1 second per test)
- **Dependencies**: None (uses mocks)
- **Location**: `tests/` (excluding `test_e2e.py`)

### 2. Integration Tests (`integration`)

- **Purpose**: Test component interactions with real services
- **Speed**: Medium (1-5 seconds per test)
- **Dependencies**: Real database connections
- **Location**: `tests/integration/`

### 3. End-to-End Tests (`e2e`)

- **Purpose**: Test complete user workflows from API to database
- **Speed**: Medium to slow (2-10 seconds per test)
- **Dependencies**: **Self-contained with artificial test data**
- **Location**: `tests/test_e2e.py`

### 4. Real Database Tests (`realdb`)

- **Purpose**: Test performance and scalability with a real database
- **Speed**: Slow (5-30 seconds per test)
- **Dependencies**: Real database with artificial test data
- **Location**: `tests/test_e2e.py` (marked with `@pytest.mark.realdb`)

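The `unit`, `integration`, `e2e`, and `realdb` markers used throughout this guide must be registered with pytest, or test runs will emit unknown-mark warnings. If the project does not already register them, a minimal `pytest.ini` would look like the sketch below (the repository may register markers in `pyproject.toml` or `setup.cfg` instead):

```ini
[pytest]
markers =
    unit: fast, isolated tests using mocks
    integration: tests against real services
    e2e: self-contained end-to-end workflow tests
    realdb: performance tests against a real database
```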
## Running Tests

### Quick Start

```bash
# Run all tests (recommended for development)
python scripts/run_tests.py all

# Run only unit tests (fastest)
python scripts/run_tests.py unit

# Run E2E tests (completely self-contained)
python scripts/run_tests.py e2e

# Run with coverage report
python scripts/run_tests.py coverage
```

### Using pytest directly

```bash
# Run all tests
pytest

# Run specific test types
pytest -m unit           # Unit tests only
pytest -m integration    # Integration tests only
pytest -m e2e            # End-to-end tests only
pytest -m realdb         # Real database tests only

# Run specific test files
pytest tests/test_e2e.py # All E2E tests
pytest tests/api/        # All API tests

# Run specific test methods
pytest tests/test_e2e.py::TestE2EWorkflows::test_bootstrap_and_basic_workflow
```

### Test Combinations

```bash
# Run unit and integration tests (skip E2E)
pytest -m "not e2e and not realdb"

# Run all tests except real database tests
pytest -m "not realdb"

# Run only E2E tests that don't require a real database
pytest -m "e2e and not realdb"
```

## End-to-End Test Setup

**The E2E tests are completely self-contained!** They automatically:

1. **Create artificial test data** at the start of each test class
2. **Run all tests** against this isolated test environment
3. **Clean up all test data** at the end

### No Setup Required!

```bash
# Just run the tests - no environment variables or API keys needed!
python scripts/run_tests.py e2e

# Or with pytest directly
pytest -m e2e
```

### Test Environment Creation

Each test class automatically creates its own isolated environment:

- **Unique team** with timestamp-based naming to avoid conflicts
- **Admin user** with unique email addresses
- **API keys** for authentication
- **Test images** uploaded during tests
- **Additional users/teams** as needed for specific tests

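A minimal sketch of how such an environment can be assembled, assuming nothing beyond what is listed above (the helper name `make_test_environment` and the exact field layout are illustrative, not the project's actual fixture code):

```python
import time
import uuid


def make_test_environment() -> dict:
    """Build a uniquely named test environment description.

    Combines a random suffix with a timestamp so that parallel or
    repeated runs never collide on team or user names.
    """
    unique_suffix = str(uuid.uuid4())[:8]
    team_name = f"E2E Test Team {unique_suffix}_{int(time.time())}"
    admin_email = f"e2e-admin-{unique_suffix}@example.com"
    return {
        "team_name": team_name,
        "admin_email": admin_email,
        # Every resource created during the tests is recorded here so
        # the teardown phase knows exactly what to delete.
        "created_resources": {
            "teams": [],
            "users": [],
            "api_keys": [],
            "images": [],
        },
    }
```

In the real fixture, the team, admin user, and API key would then be created through the API and their ids appended to `created_resources` immediately, so cleanup can find them even if a later setup step fails.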
### Automatic Cleanup

At the end of each test class, all created resources are automatically deleted:

- All uploaded images are removed
- All created users are deleted
- All created teams are removed
- All API keys are revoked

### Advanced Test Modes

#### Integration Tests with Real Services

For testing with real Google Cloud services:

```bash
# Enable integration tests
export E2E_INTEGRATION_TEST=1

# Run integration tests
pytest -m integration
```

#### Real Database Performance Tests

For testing with real database connections and larger datasets:

```bash
# Enable real database tests
export E2E_REALDB_TEST=1

# Run real database tests
pytest -m realdb
```

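One common way such environment gating is implemented is a `skipif` marker keyed on the opt-in variable. The sketch below is an assumption about how the gate might be wired, not the project's actual mechanism:

```python
import os

import pytest

# Skip unless the opt-in variable is set to "1" (an assumed convention;
# adjust to match how the project actually reads the flag).
requires_realdb = pytest.mark.skipif(
    os.environ.get("E2E_REALDB_TEST") != "1",
    reason="set E2E_REALDB_TEST=1 to enable real database tests",
)


@requires_realdb
def test_bulk_query_performance():
    ...
```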
## E2E Test Coverage

The E2E tests cover the following workflows with artificial test data:

### Core Functionality

- ✅ **Bootstrap Setup**: Automatic creation of isolated test environment
- ✅ **Authentication**: API key validation and verification
- ✅ **Team Management**: Create, read, update, delete teams
- ✅ **User Management**: Create, read, update, delete users
- ✅ **API Key Management**: Create, list, revoke API keys

### Image Operations

- ✅ **Image Upload**: File upload with metadata
- ✅ **Image Retrieval**: Get image details and download
- ✅ **Image Updates**: Modify descriptions and tags
- ✅ **Image Listing**: Paginated image lists with filters

### Advanced Search Functionality

- ✅ **Text Search**: Search by description content
- ✅ **Tag Search**: Filter by tags
- ✅ **Advanced Search**: Combined filters and thresholds
- ✅ **Similarity Search**: Find similar images using embeddings
- ✅ **Search Performance**: Response time validation

### Security and Isolation

- ✅ **User Roles**: Admin vs regular user permissions
- ✅ **Multi-team Isolation**: Data privacy between teams
- ✅ **Access Control**: Unauthorized access prevention
- ✅ **Error Handling**: Graceful error responses

### Performance and Scalability

- ✅ **Bulk Operations**: Multiple image uploads
- ✅ **Concurrent Access**: Simultaneous user operations
- ✅ **Database Performance**: Query response times
- ✅ **Data Consistency**: Transaction integrity

## Test Data Management

### Unique Identifiers

All E2E tests use unique suffixes to avoid conflicts:

```python
import time
import uuid

unique_suffix = str(uuid.uuid4())[:8]
team_name = f"E2E Test Team {unique_suffix}_{int(time.time())}"
```

### Isolation Strategy

Tests are completely isolated:

- Each test class creates its own environment
- Uses timestamp-based unique identifiers
- No dependency on existing database state
- Can run in parallel without conflicts

### Automatic Resource Tracking

The test environment tracks all created resources:

```python
env["created_resources"] = {
    "teams": [team_id],
    "users": [admin_user_id],
    "api_keys": [api_key_id],
    "images": [],
}
```

### Cleanup Strategy

Comprehensive cleanup at test completion:

- Images deleted first (to avoid orphaned files)
- Additional users deleted (preserving admin for team deletion)
- Additional teams deleted
- Main team deleted last (cascades to remaining resources)

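The ordering above can be sketched as a teardown helper. Here `delete_resource` is a hypothetical stand-in for the project's real deletion calls, and the environment layout follows the `created_resources` structure tracked during setup:

```python
def cleanup_environment(env: dict, delete_resource) -> None:
    """Delete tracked resources in dependency-safe order.

    `delete_resource(kind, resource_id)` is expected to remove one
    resource; the ordering avoids orphaned files and keeps the admin
    user alive until the team cascade runs.
    """
    created = env["created_resources"]
    # 1. Images first, so no files are orphaned by a later cascade.
    for image_id in created["images"]:
        delete_resource("image", image_id)
    # 2. Additional users next; the admin is preserved so it can
    #    still authorize the team deletion.
    admin_id = env.get("admin_user_id")
    for user_id in created["users"]:
        if user_id != admin_id:
            delete_resource("user", user_id)
    # 3. Teams last; deleting a team cascades to its remaining
    #    resources (admin user, API keys).
    for team_id in created["teams"]:
        delete_resource("team", team_id)
```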
## Environment Variables

### No Variables Required for Basic E2E Tests!

The standard E2E tests now run without any environment variables.

### Optional for Enhanced Testing

```bash
# Enable integration tests with real services
E2E_INTEGRATION_TEST=1

# Enable real database performance tests
E2E_REALDB_TEST=1

# Custom test database (if different from main)
TEST_FIRESTORE_PROJECT_ID="your-test-project"
TEST_GCS_BUCKET_NAME="your-test-bucket"
```

## Continuous Integration

### GitHub Actions Example

```yaml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"  # quoted: bare 3.10 parses as the float 3.1
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run unit tests
        run: python scripts/run_tests.py unit
      - name: Run E2E tests (self-contained)
        run: python scripts/run_tests.py e2e
        # No environment variables needed!
```

## Troubleshooting

### Common Issues

#### "Cannot create isolated test environment" Error

```bash
# This is rare but can happen if the database has conflicting constraints
# Solution: check the database state or use a clean test database
```

#### Tests Skipped Due to Missing Environment Variables

```bash
# Only affects integration and realdb tests
echo $E2E_INTEGRATION_TEST  # Should be "1" for integration tests
echo $E2E_REALDB_TEST       # Should be "1" for real database tests
```

#### Slow Test Performance

```bash
# Run only fast tests
pytest -m "not realdb and not integration"

# Run tests in parallel (requires pytest-xdist)
pip install pytest-xdist
pytest -n auto
```

### Debug Mode

```bash
# Run with verbose output
pytest -v -s tests/test_e2e.py

# Run a single test with full output
pytest -v -s tests/test_e2e.py::TestE2EWorkflows::test_bootstrap_and_basic_workflow
```

## Best Practices

### Writing New Tests

1. **Use the `test_environment` fixture** for automatic setup/cleanup
2. **Track created resources** in `env["created_resources"]`
3. **Use unique identifiers** for all test data
4. **Test both success and failure** scenarios
5. **Use appropriate markers** (`@pytest.mark.e2e`, etc.)

### Test Organization

1. **Group related tests** in classes with shared fixtures
2. **Use descriptive test names** that explain the scenario
3. **Keep tests independent** - no shared state between methods
4. **Use class-scoped fixtures** for expensive setup
5. **Document test purpose** in docstrings

### Performance Considerations

1. **Use class-scoped fixtures** to share expensive setup
2. **Minimize database operations** in individual tests
3. **Clean up test data** automatically
4. **Run expensive tests** only when necessary
5. **Use artificial data** instead of real external dependencies

## Test Metrics

### Coverage Goals

- **Unit Tests**: > 90% code coverage
- **Integration Tests**: > 80% API endpoint coverage
- **E2E Tests**: > 95% user workflow coverage

### Performance Targets

- **Unit Tests**: < 1 second per test
- **Integration Tests**: < 5 seconds per test
- **E2E Tests**: < 10 seconds per test
- **Real DB Tests**: < 30 seconds per test

### Quality Metrics

- **Test Reliability**: > 99% pass rate
- **Test Maintainability**: Clear, readable test code
- **Test Coverage**: All critical paths tested
- **Test Documentation**: All test purposes documented
- **Test Isolation**: No dependencies between tests