commit 59fcf08696 (parent 1af3f4549c)
Commit message: cp

README.md (36 lines changed)
@@ -40,7 +40,9 @@ sereact/
 │   ├── api/            # API tests
 │   ├── auth/           # Authentication tests
 │   ├── models/         # Model tests
-│   └── services/       # Service tests
+│   ├── services/       # Service tests
+│   ├── integration/    # Integration tests
+│   └── test_e2e.py     # End-to-end workflow tests
 ├── main.py             # Application entry point
 ├── requirements.txt    # Python dependencies
 └── README.md           # This file

@@ -181,6 +183,38 @@ Refer to the Swagger UI documentation at `/docs` for detailed endpoint information
 pytest
 ```
 
+### End-to-End Testing
+
+SEREACT includes comprehensive end-to-end tests that cover complete user workflows:
+
+```bash
+# Run all E2E tests with mocked services (recommended for development)
+python scripts/run_tests.py e2e
+
+# Run unit tests only (fast)
+python scripts/run_tests.py unit
+
+# Run integration tests (requires real database)
+python scripts/run_tests.py integration
+
+# Run all tests
+python scripts/run_tests.py all
+
+# Run with coverage report
+python scripts/run_tests.py coverage
+```
+
+The E2E tests cover:
+- **Team Management**: Create, update, and manage teams
+- **User Management**: User creation, roles, and permissions
+- **API Authentication**: API key generation and validation
+- **Image Workflows**: Upload, metadata management, and downloads
+- **Search Functionality**: Text and tag-based search
+- **Multi-team Isolation**: Ensuring data privacy between teams
+- **Error Handling**: Validation and error response testing
+
+For detailed testing information, see [docs/TESTING.md](docs/TESTING.md).
+
 ### Creating a New API Version
 
 1. Create a new package under `src/api/` (e.g., `v2`)

docs/TESTING.md (new file, 501 lines)

# SEREACT Testing Guide

This document provides comprehensive information about testing the SEREACT API, including the different test types, setup instructions, and best practices.

## Test Types

SEREACT uses a multi-layered testing approach to ensure reliability and maintainability:

### 1. Unit Tests
- **Purpose**: Test individual components in isolation
- **Speed**: Fast (< 1 second per test)
- **Dependencies**: Use mocks and stubs (see the sketch below)
- **Coverage**: Functions, classes, and modules
- **Location**: `tests/` (excluding `tests/integration/`)
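
A minimal sketch of a unit test at this layer, using `unittest.mock` so no real services are touched. The `get_image_description` helper is a stand-in for real service code, not part of the project:

```python
from unittest.mock import AsyncMock

import pytest


# Stand-in for a real service function; the project's equivalents live
# under src/services/ and take a repository dependency the same way.
async def get_image_description(repo, image_id: str) -> str:
    image = await repo.get_by_id(image_id)
    return image["description"] if image else "not found"


@pytest.mark.asyncio
async def test_get_image_description_handles_missing_image():
    # Arrange: stub the repository boundary instead of touching Firestore
    repo = AsyncMock()
    repo.get_by_id.return_value = None

    # Act
    result = await get_image_description(repo, "missing-id")

    # Assert
    assert result == "not found"
    repo.get_by_id.assert_awaited_once_with("missing-id")
```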

### 2. Integration Tests
- **Purpose**: Test interactions with real external services
- **Speed**: Moderate (1-10 seconds per test)
- **Dependencies**: Real Firestore database (see the sketch below)
- **Coverage**: Database operations, service integrations
- **Location**: `tests/integration/`
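
A sketch of what an integration test here can look like; it is gated by the `integration` marker and environment flag, and cleans up after itself. The exact repository calls are assumptions based on the repository API used elsewhere in this commit:

```python
import pytest

# Declared in pytest.ini; conftest.py skips these tests automatically
# unless FIRESTORE_INTEGRATION_TEST=1 is set in the environment.
pytestmark = pytest.mark.integration


@pytest.mark.asyncio
async def test_team_roundtrip_against_real_firestore():
    # Imported lazily so unit-test runs never need Firestore credentials
    from src.db.repositories.team_repository import team_repository
    from src.models.team import TeamModel

    created = await team_repository.create(
        TeamModel(name="it-team", description="integration check")
    )
    try:
        fetched = await team_repository.get_by_id(created.id)
        assert fetched is not None
        assert fetched.name == "it-team"
    finally:
        # Manual cleanup: integration tests must remove what they create
        await team_repository.delete(created.id)
```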

### 3. End-to-End (E2E) Tests
- **Purpose**: Test complete user workflows
- **Speed**: Moderate to slow (5-30 seconds per test)
- **Dependencies**: Full application stack (mocked or real)
- **Coverage**: Complete API workflows
- **Location**: `tests/test_e2e.py`

## Test Structure

```
tests/
├── conftest.py                      # Global test configuration
├── test_e2e.py                      # End-to-end workflow tests
├── api/                             # API endpoint tests
│   ├── conftest.py                  # API-specific fixtures
│   ├── test_auth.py                 # Authentication tests
│   ├── test_teams.py                # Team management tests
│   ├── test_users.py                # User management tests
│   ├── test_images.py               # Image management tests
│   └── test_search.py               # Search functionality tests
├── auth/                            # Authentication module tests
├── db/                              # Database layer tests
├── integration/                     # Integration tests
│   ├── __init__.py
│   └── test_firestore_integration.py
├── models/                          # Data model tests
└── services/                        # Business logic tests
```

## Running Tests

### Prerequisites

1. **Virtual Environment**: Ensure you're in the project's virtual environment:
   ```bash
   # Windows (Git Bash)
   source venv/Scripts/activate

   # Linux/macOS
   source venv/bin/activate
   ```

2. **Dependencies**: Install test dependencies:
   ```bash
   pip install -r requirements.txt
   ```

### Quick Start

Use the test runner script for convenient test execution:

```bash
# Run unit tests only (fast, recommended for development)
python scripts/run_tests.py unit

# Run end-to-end tests with mocked services
python scripts/run_tests.py e2e

# Run integration tests (requires real database)
python scripts/run_tests.py integration

# Run all tests
python scripts/run_tests.py all

# Run tests with coverage report
python scripts/run_tests.py coverage
```

### Direct pytest Commands

For more control, use pytest directly:

```bash
# Unit tests only
pytest -m "not integration and not e2e" -v

# End-to-end tests
pytest -m e2e -v

# Integration tests
FIRESTORE_INTEGRATION_TEST=1 pytest -m integration -v

# Specific test file
pytest tests/test_e2e.py -v

# Specific test function
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v

# Run with coverage
pytest --cov=src --cov-report=html --cov-report=term
```

## End-to-End Test Coverage

The E2E tests cover the following complete workflows:

### 1. Team Management Workflow
- Create a new team
- Retrieve team details
- Update team information
- List all teams
- Verify team isolation

### 2. User Management Workflow
- Create admin and regular users
- Assign users to teams
- Update user roles and permissions
- List team members
- Verify user access controls

### 3. API Key Authentication Workflow
- Generate API keys for users
- Authenticate requests using API keys (see the sketch below)
- Test protected endpoints
- Manage API key lifecycle (create, use, deactivate)
- Verify authentication failures
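
In practice an authenticated request just carries the key in the `X-API-Key` header; this short sketch calls the same `/api/v1/auth/verify` endpoint the E2E suite uses:

```python
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def is_key_valid(api_key: str) -> bool:
    """Return True when the API accepts the key on a protected endpoint."""
    response = client.get("/api/v1/auth/verify", headers={"X-API-Key": api_key})
    return response.status_code == 200
```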

### 4. Image Upload and Management Workflow
- Upload images with metadata
- Retrieve image details
- Update image metadata and tags
- List team images
- Download images
- Verify file handling

### 5. Search Workflow
- Text-based search by description
- Tag-based filtering
- Combined search queries
- Search result pagination
- Verify search accuracy

### 6. Multi-Team Isolation
- Create multiple teams
- Upload images to different teams
- Verify cross-team access restrictions
- Test search result isolation
- Ensure data privacy

### 7. Error Handling
- Invalid data validation
- Authentication failures
- Resource not found scenarios
- File upload errors
- Proper error responses

## Integration Test Setup

Integration tests require real external services. Follow these steps:

### 1. Firestore Setup

1. **Create a test database**:
   - Use a separate Firestore database for testing
   - The database name should end with `-test` (e.g., `sereact-test`)

2. **Set environment variables**:
   ```bash
   export FIRESTORE_INTEGRATION_TEST=1
   export FIRESTORE_PROJECT_ID=your-test-project
   export FIRESTORE_DATABASE_NAME=sereact-test
   export FIRESTORE_CREDENTIALS_FILE=path/to/test-credentials.json
   ```

3. **Run integration tests**:
   ```bash
   python scripts/run_tests.py integration
   ```

### 2. Full E2E Integration Setup

For testing with real cloud services:

1. **Set up all services**:
   - Google Cloud Storage bucket
   - Firestore database
   - Cloud Vision API
   - Pinecone vector database

2. **Configure environment**:
   ```bash
   export E2E_INTEGRATION_TEST=1
   export GCS_BUCKET_NAME=your-test-bucket
   export VECTOR_DB_API_KEY=your-pinecone-key
   # ... other service credentials
   ```

3. **Run E2E integration tests**:
   ```bash
   python scripts/run_tests.py e2e --with-integration
   ```

## Test Data Management

### Fixtures and Test Data

- **Shared fixtures**: Defined in `tests/conftest.py`
- **API fixtures**: Defined in `tests/api/conftest.py`
- **Sample images**: Generated programmatically using PIL (shown below)
- **Test data**: Isolated per test function
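
For example, the in-memory sample-image fixture from `tests/conftest.py`:

```python
import io

import pytest
from PIL import Image as PILImage


@pytest.fixture(scope="function")
def sample_image() -> io.BytesIO:
    """Create a sample image for testing uploads"""
    img = PILImage.new('RGB', (100, 100), color='red')
    img_bytes = io.BytesIO()
    img.save(img_bytes, format='JPEG')
    img_bytes.seek(0)  # rewind so the upload reads from the start
    return img_bytes
```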

### Cleanup Strategy

- **Unit tests**: Automatic cleanup through mocking
- **Integration tests**: Manual cleanup in test teardown
- **E2E tests**: Resource tracking and cleanup utilities (see the fixture sketch below)
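
The yield-fixture pattern from `tests/conftest.py` is the concrete mechanism: everything before `yield` is setup, everything after it runs as teardown:

```python
import os
import tempfile

import pytest
from PIL import Image as PILImage


@pytest.fixture(scope="function")
def temp_image_file():
    """Create a temporary image file and remove it after the test"""
    img = PILImage.new('RGB', (200, 200), color='blue')
    with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as tmp_file:
        img.save(tmp_file, format='JPEG')
        tmp_file_path = tmp_file.name

    yield tmp_file_path  # the test body runs here

    # Teardown: best-effort removal of the temporary file
    try:
        os.unlink(tmp_file_path)
    except OSError:
        pass
```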

## Best Practices

### Writing Tests

1. **Test naming**: Use descriptive names that explain the scenario
   ```python
   def test_user_cannot_access_other_team_images(self):
   ```

2. **Test structure**: Follow the Arrange-Act-Assert pattern
   ```python
   def test_create_team(self):
       # Arrange
       team_data = {"name": "Test Team"}

       # Act
       response = client.post("/api/v1/teams", json=team_data)

       # Assert
       assert response.status_code == 201
       assert response.json()["name"] == "Test Team"
   ```

3. **Test isolation**: Each test should be independent
4. **Mock external services**: Use mocks for unit tests
5. **Use fixtures**: Leverage pytest fixtures for common setup

### Running Tests in Development

1. **Fast feedback loop**: Run unit tests frequently
   ```bash
   pytest -m "not integration and not e2e" --tb=short
   ```

2. **Pre-commit testing**: Run E2E tests before committing
   ```bash
   python scripts/run_tests.py e2e
   ```

3. **Coverage monitoring**: Check test coverage regularly
   ```bash
   python scripts/run_tests.py coverage
   ```

### CI/CD Integration

For continuous integration, use different test strategies:

```yaml
# Example GitHub Actions workflow
- name: Run unit tests
  run: python scripts/run_tests.py unit

- name: Run E2E tests
  run: python scripts/run_tests.py e2e

- name: Run integration tests (if credentials available)
  run: python scripts/run_tests.py integration
  if: env.FIRESTORE_INTEGRATION_TEST == '1'
```

## Troubleshooting

### Common Issues

1. **Import errors**: Ensure you're in the virtual environment
2. **Database connection**: Check Firestore credentials for integration tests
3. **Slow tests**: Use unit tests for development, integration tests for CI
4. **Test isolation**: Clear test data between runs

### Debug Mode

Run tests with additional debugging:

```bash
# Verbose output with full tracebacks
pytest -v --tb=long

# Stop on first failure
pytest -x

# Run a specific test with debugging output
pytest tests/test_e2e.py::TestE2EWorkflows::test_complete_team_workflow -v -s
```

### Performance Monitoring

Monitor test performance:

```bash
# Show the slowest tests
pytest --durations=10

# Profile test execution (requires the pytest-profiling plugin)
pytest --profile
```

## Test Metrics

Track these metrics to ensure test quality:

- **Coverage**: Aim for >80% code coverage
- **Speed**: Unit tests <1s, E2E tests <30s
- **Reliability**: Tests should pass consistently
- **Maintainability**: Tests should be easy to update

## Contributing

When adding new features:

1. **Write tests first**: Use a TDD approach
2. **Cover all scenarios**: Happy path, edge cases, error conditions
3. **Update documentation**: Keep this guide current
4. **Run the full test suite**: Ensure no regressions

For more information about the SEREACT API architecture and features, see the main [README.md](../README.md).

## Running E2E Tests

### With Fresh Database
If you have a fresh database, the E2E tests will automatically run the bootstrap process:

```bash
pytest tests/test_e2e.py -v -m e2e
```

### With Existing Database
If your database already has teams and users (bootstrap completed), you need to provide an API key:

1. **Get an existing API key** from your application, or create one via the API
2. **Set the environment variable**:
   ```bash
   export E2E_TEST_API_KEY="your-api-key-here"
   ```
3. **Run the tests**:
   ```bash
   pytest tests/test_e2e.py -v -m e2e
   ```

### Example with API Key
```bash
# Set your API key
export E2E_TEST_API_KEY="sk_test_1234567890abcdef"

# Run E2E tests
python scripts/run_tests.py e2e
```

## Test Features

### Idempotent Tests
The E2E tests are designed to be idempotent: they can be run multiple times against the same database without conflicts.

- **Unique identifiers**: Each test run uses unique suffixes for all created data (see the fixture below)
- **Graceful handling**: Tests handle existing data gracefully
- **Cleanup**: Tests create isolated data that doesn't interfere with existing data
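
The mechanism is a short per-run UUID suffix; this is the fixture `tests/test_e2e.py` uses:

```python
import uuid

import pytest


@pytest.fixture(scope="function")
def unique_suffix():
    """Generate a unique suffix for test data to avoid conflicts"""
    return str(uuid.uuid4())[:8]

# Example use: team names become f"E2E Test Team {unique_suffix}", so reruns
# against the same database never collide with earlier test data.
```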

### Test Data Isolation
- Each test run creates unique teams, users, and images
- Tests use UUID-based suffixes to avoid naming conflicts
- Search tests use unique tags to find only test-created data

## Test Configuration

### Environment Variables
- `E2E_TEST_API_KEY`: API key for E2E tests against an existing database
- `E2E_INTEGRATION_TEST`: Set to `1` to enable integration tests
- `TEST_DATABASE_URL`: Override the database used for testing (optional)

### Pytest Configuration
The `pytest.ini` file contains:
- Test markers for categorizing tests (shown below)
- Async test configuration
- Warning filters
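
The marker declarations from the project's `pytest.ini`:

```ini
markers =
    asyncio: marks tests as async (deselect with '-m "not asyncio"')
    integration: marks tests as integration tests requiring real database (deselect with '-m "not integration"')
    e2e: marks tests as end-to-end tests covering complete workflows (deselect with '-m "not e2e"')
    unit: marks tests as unit tests using mocks (default)
```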

## Best Practices

### Writing Tests
1. **Use descriptive names**: Test names should clearly describe what they test
2. **Test one thing**: Each test should focus on a single workflow or feature
3. **Use fixtures**: Leverage pytest fixtures for common setup
4. **Handle errors**: Test both success and error scenarios
5. **Clean up**: Ensure tests don't leave behind test data (when possible)

### Running Tests
1. **Run frequently**: Run unit tests during development
2. **CI/CD integration**: Ensure all tests pass before deployment
3. **Test environments**: Use separate databases for testing
4. **Monitor performance**: Track test execution time

## Troubleshooting

### Common Issues

#### "Bootstrap already completed"
- **Cause**: The database already has teams/users
- **Solution**: Set the `E2E_TEST_API_KEY` environment variable

#### "No existing API key found"
- **Cause**: No valid API key was provided for an existing database
- **Solution**: Create an API key via the API or the bootstrap endpoint

#### "Failed to create test team"
- **Cause**: Insufficient permissions or API key issues
- **Solution**: Ensure the API key belongs to an admin user

#### Import errors
- **Cause**: Python path or dependency issues
- **Solution**: Ensure the virtual environment is activated and dependencies are installed

### Getting Help
1. Check the test output for specific error messages
2. Verify environment variables are set correctly
3. Ensure the API server is running (for integration tests)
4. Check database connectivity

## CI/CD Integration

### GitHub Actions Example
```yaml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"  # quoted so YAML doesn't parse 3.10 as 3.1
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Run unit tests
        run: python scripts/run_tests.py unit
      - name: Run integration tests
        run: python scripts/run_tests.py integration
        env:
          E2E_INTEGRATION_TEST: 1
```

### Docker Testing
```dockerfile
# Test stage
FROM python:3.10-slim AS test
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python scripts/run_tests.py unit
```

## Coverage Reports

Generate coverage reports:

```bash
python scripts/run_tests.py coverage
```

View the HTML coverage report:

```bash
open htmlcov/index.html
```

## Performance Testing

For performance testing:
1. Use `pytest-benchmark` for micro-benchmarks (see the sketch below)
2. Test with realistic data volumes
3. Monitor database query performance
4. Test concurrent user scenarios
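
A minimal micro-benchmark sketch, assuming the `pytest-benchmark` plugin is installed; benchmarking `hash_api_key` is just an illustrative choice:

```python
# Requires: pip install pytest-benchmark
from src.auth.security import hash_api_key  # real helper used by the auth layer


def test_hash_api_key_benchmark(benchmark):
    # pytest-benchmark's `benchmark` fixture runs the callable repeatedly
    # and reports min/max/mean timings in the test summary.
    result = benchmark(hash_api_key, "sk_test_1234567890abcdef")
    assert result  # hashing must always produce a value
```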

pytest.ini
@@ -13,9 +13,11 @@ addopts =
 markers =
     asyncio: marks tests as async (deselect with '-m "not asyncio"')
     integration: marks tests as integration tests requiring real database (deselect with '-m "not integration"')
+    e2e: marks tests as end-to-end tests covering complete workflows (deselect with '-m "not e2e"')
     unit: marks tests as unit tests using mocks (default)

-# Integration test configuration
+# Test configuration
+# To run unit tests only: pytest -m "not integration and not e2e"
 # To run integration tests: pytest -m integration
-# To run only unit tests: pytest -m "not integration"
+# To run e2e tests: pytest -m e2e
 # To run all tests: pytest

scripts/get_test_api_key.py (new file, 94 lines)

#!/usr/bin/env python3
"""
Helper script to get an existing API key for testing purposes.

This script connects to the database and retrieves an active API key
that can be used for E2E testing.

Usage:
    python scripts/get_test_api_key.py
"""

import asyncio
import sys
import os

# Add the project root to the path so `src.*` imports resolve
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))

from src.db.repositories.api_key_repository import api_key_repository
from src.db.repositories.user_repository import user_repository
from src.db.repositories.team_repository import team_repository


async def get_test_api_key():
    """Get an existing API key for testing"""
    try:
        # Get all API keys
        api_keys = await api_key_repository.get_all()

        if not api_keys:
            print("❌ No API keys found in the database")
            return None

        # Find an active API key
        active_keys = [key for key in api_keys if key.is_active]

        if not active_keys:
            print("❌ No active API keys found in the database")
            return None

        # Get the first active key
        test_key = active_keys[0]

        # Get user and team info
        user = await user_repository.get_by_id(test_key.user_id)
        team = await team_repository.get_by_id(test_key.team_id)

        print("✅ Found test API key:")
        print(f"   Key ID: {test_key.id}")
        print(f"   Key Name: {test_key.name}")
        print(f"   User: {user.name} ({user.email})" if user else "   User: Not found")
        print(f"   Team: {team.name}" if team else "   Team: Not found")
        print(f"   Created: {test_key.created_at}")
        print(f"   Is Admin: {user.is_admin}" if user else "   Is Admin: Unknown")

        # Note: we can't return the actual key value since it's hashed
        print("\n⚠️  Note: The actual API key value is hashed in the database.")
        print("   You'll need to use an API key you have access to for testing.")

        return {
            "key_id": str(test_key.id),
            "key_name": test_key.name,
            "user_id": str(test_key.user_id),
            "team_id": str(test_key.team_id),
            "user_name": user.name if user else None,
            "user_email": user.email if user else None,
            "team_name": team.name if team else None,
            "is_admin": user.is_admin if user else None
        }

    except Exception as e:
        print(f"❌ Error getting API key: {e}")
        return None


async def main():
    """Main function"""
    print("🔍 Looking for existing API keys in the database...")

    result = await get_test_api_key()

    if result:
        print("\n💡 To run E2E tests, you can:")
        print("   1. Use an API key you have access to")
        print("   2. Create a new API key using the bootstrap endpoint (if not already done)")
        print("   3. Set the API key in the test environment")
    else:
        print("\n💡 To run E2E tests, you may need to:")
        print("   1. Run the bootstrap endpoint to create initial data")
        print("   2. Create API keys manually")


if __name__ == "__main__":
    asyncio.run(main())

scripts/run_tests.py (new file, 206 lines)

#!/usr/bin/env python3
"""
Test runner script for the SEREACT API.

This script provides a convenient way to run different types of tests
with proper environment setup and reporting.

Usage:
    python scripts/run_tests.py [test_type]

Test types:
    unit        - Run unit tests only
    integration - Run integration tests only
    e2e         - Run end-to-end tests only
    all         - Run all tests
    coverage    - Run tests with a coverage report
"""

import sys
import os
import subprocess
import argparse
from pathlib import Path

# Add the project root to the Python path
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))


def check_environment():
    """Check if the test environment is properly set up"""
    print("🔍 Checking test environment...")

    # Check if the required packages are available
    try:
        import pytest
        import fastapi
        import pydantic
        print("✅ Required test packages are available")
    except ImportError as e:
        print(f"❌ Missing required package: {e}")
        return False

    # Check if the main application can be imported
    original_cwd = os.getcwd()  # captured before chdir so cleanup is always safe
    try:
        # Change to the project root directory for the import
        os.chdir(project_root)
        import main
        os.chdir(original_cwd)
        print("✅ Main application can be imported")
    except ImportError as e:
        print(f"❌ Cannot import main application: {e}")
        os.chdir(original_cwd)
        return False

    # Check if the test files exist
    test_files = [
        "tests/test_e2e.py",
        "tests/conftest.py",
        "pytest.ini"
    ]

    for test_file in test_files:
        if not (project_root / test_file).exists():
            print(f"❌ Missing test file: {test_file}")
            return False

    print("✅ Test environment is ready")
    return True


def run_command(cmd, description):
    """Run a command and handle the output"""
    print(f"\n🚀 {description}")
    print(f"Command: {' '.join(cmd)}")
    print("-" * 50)

    # Change to the project root directory
    original_cwd = os.getcwd()
    os.chdir(project_root)

    try:
        result = subprocess.run(cmd, capture_output=False, text=True)
        os.chdir(original_cwd)
        return result.returncode == 0
    except Exception as e:
        print(f"❌ Error running command: {e}")
        os.chdir(original_cwd)
        return False


def run_unit_tests():
    """Run unit tests"""
    cmd = [
        "python", "-m", "pytest",
        "tests/",
        "-v",
        "--tb=short",
        "-x",  # Stop on first failure
        "--ignore=tests/test_e2e.py",
        "--ignore=tests/integration/"
    ]
    return run_command(cmd, "Running unit tests")


def run_integration_tests():
    """Run integration tests"""
    cmd = [
        "python", "-m", "pytest",
        "tests/integration/",
        "-v",
        "--tb=short",
        "-m", "integration"
    ]
    return run_command(cmd, "Running integration tests")


def run_e2e_tests():
    """Run end-to-end tests"""
    cmd = [
        "python", "-m", "pytest",
        "tests/test_e2e.py",
        "-v",
        "--tb=short",
        "-m", "e2e"
    ]
    return run_command(cmd, "Running end-to-end tests")


def run_all_tests():
    """Run all tests"""
    cmd = [
        "python", "-m", "pytest",
        "tests/",
        "-v",
        "--tb=short"
    ]
    return run_command(cmd, "Running all tests")


def run_coverage_tests():
    """Run tests with a coverage report"""
    # Install coverage if not available
    try:
        import coverage
    except ImportError:
        print("📦 Installing coverage package...")
        subprocess.run([sys.executable, "-m", "pip", "install", "coverage", "pytest-cov"])

    cmd = [
        "python", "-m", "pytest",
        "tests/",
        "--cov=src",
        "--cov-report=html",
        "--cov-report=term-missing",
        "--cov-fail-under=80"
    ]
    return run_command(cmd, "Running tests with coverage")


def main():
    """Main function"""
    parser = argparse.ArgumentParser(description="Run SEREACT API tests")
    parser.add_argument(
        "test_type",
        choices=["unit", "integration", "e2e", "all", "coverage"],
        help="Type of tests to run"
    )
    parser.add_argument(
        "--skip-env-check",
        action="store_true",
        help="Skip the environment check"
    )

    args = parser.parse_args()

    print("🧪 SEREACT API Test Runner")
    print("=" * 50)

    # Check the environment unless skipped
    if not args.skip_env_check:
        if not check_environment():
            print("\n❌ Environment check failed")
            print("💡 Make sure you're in the project root and the virtual environment is activated")
            sys.exit(1)

    # Run the specified tests
    success = False

    if args.test_type == "unit":
        success = run_unit_tests()
    elif args.test_type == "integration":
        success = run_integration_tests()
    elif args.test_type == "e2e":
        success = run_e2e_tests()
    elif args.test_type == "all":
        success = run_all_tests()
    elif args.test_type == "coverage":
        success = run_coverage_tests()

    # Print results
    print("\n" + "=" * 50)
    if success:
        print("✅ Tests completed successfully!")
        if args.test_type == "e2e":
            print("\n💡 If E2E tests were skipped, set the E2E_TEST_API_KEY environment variable")
            print("   See docs/TESTING.md for more information")
    else:
        print("❌ Tests failed!")
        sys.exit(1)


if __name__ == "__main__":
    main()

@@ -8,8 +8,12 @@ from src.db.repositories.api_key_repository import api_key_repository
 from src.db.repositories.user_repository import user_repository
 from src.db.repositories.team_repository import team_repository
 from src.schemas.api_key import ApiKeyCreate, ApiKeyResponse, ApiKeyWithValueResponse, ApiKeyListResponse
+from src.schemas.team import TeamCreate
+from src.schemas.user import UserCreate
 from src.auth.security import generate_api_key, verify_api_key, calculate_expiry_date, is_expired, hash_api_key
 from src.models.api_key import ApiKeyModel
+from src.models.team import TeamModel
+from src.models.user import UserModel
 from src.utils.logging import log_request

 logger = logging.getLogger(__name__)

@@ -27,7 +31,7 @@ async def get_current_user(x_api_key: Optional[str] = Header(None)):
     hashed_key = hash_api_key(x_api_key)

     # Get the key from the database
-    api_key = await api_key_repository.get_by_hash(hashed_key)
+    api_key = await api_key_repository.get_by_key_hash(hashed_key)
     if not api_key:
         raise HTTPException(status_code=401, detail="Invalid API key")

@@ -53,6 +57,100 @@ async def get_current_user(x_api_key: Optional[str] = Header(None)):

     return user

+
+@router.post("/bootstrap", response_model=ApiKeyWithValueResponse, status_code=201)
+async def bootstrap_initial_setup(
+    team_name: str,
+    admin_email: str,
+    admin_name: str,
+    api_key_name: str = "Initial API Key",
+    request: Request = None
+):
+    """
+    Bootstrap the initial setup by creating a team, admin user, and API key.
+
+    This endpoint does NOT require authentication and should only be used for initial setup.
+    For security, it should be disabled in production after the initial setup.
+    """
+    # Check if any teams already exist (prevents multiple bootstrap calls)
+    existing_teams = await team_repository.get_all()
+    if existing_teams:
+        raise HTTPException(
+            status_code=400,
+            detail="Bootstrap already completed. Teams already exist in the system."
+        )
+
+    # Check if a user with this email already exists
+    existing_user = await user_repository.get_by_email(admin_email)
+    if existing_user:
+        raise HTTPException(status_code=400, detail="User with this email already exists")
+
+    try:
+        # 1. Create the team
+        team = TeamModel(
+            name=team_name,
+            description="Initial team created during bootstrap"
+        )
+        created_team = await team_repository.create(team)
+
+        # 2. Create the admin user
+        user = UserModel(
+            name=admin_name,
+            email=admin_email,
+            team_id=created_team.id,
+            is_admin=True,
+            is_active=True
+        )
+        created_user = await user_repository.create(user)
+
+        # 3. Generate the API key
+        raw_key, hashed_key = generate_api_key(str(created_team.id), str(created_user.id))
+        expiry_date = calculate_expiry_date()
+
+        # 4. Create the API key in the database
+        api_key = ApiKeyModel(
+            key_hash=hashed_key,
+            user_id=created_user.id,
+            team_id=created_team.id,
+            name=api_key_name,
+            description="Initial API key created during bootstrap",
+            expiry_date=expiry_date,
+            is_active=True
+        )
+        created_key = await api_key_repository.create(api_key)
+
+        logger.info(f"Bootstrap completed: Team '{team_name}', Admin '{admin_email}', API key created")
+
+        # Return the API key response
+        response = ApiKeyWithValueResponse(
+            id=str(created_key.id),
+            key=raw_key,
+            name=created_key.name,
+            description=created_key.description,
+            team_id=str(created_key.team_id),
+            user_id=str(created_key.user_id),
+            created_at=created_key.created_at,
+            expiry_date=created_key.expiry_date,
+            last_used=created_key.last_used,
+            is_active=created_key.is_active
+        )
+
+        return response
+
+    except Exception as e:
+        logger.error(f"Bootstrap failed: {e}")
+        # Clean up any partially created resources
+        try:
+            if 'created_key' in locals():
+                await api_key_repository.delete(created_key.id)
+            if 'created_user' in locals():
+                await user_repository.delete(created_user.id)
+            if 'created_team' in locals():
+                await team_repository.delete(created_team.id)
+        except Exception:
+            pass  # Best-effort cleanup
+
+        raise HTTPException(status_code=500, detail=f"Bootstrap failed: {str(e)}")
+
+
 @router.post("/api-keys", response_model=ApiKeyWithValueResponse, status_code=201)
 async def create_api_key(key_data: ApiKeyCreate, request: Request, current_user = Depends(get_current_user)):
     """

src/db/repositories/api_key_repository.py (new file, 11 lines)

"""
API Key repository singleton instance.

This module provides a singleton instance of the API key repository
that can be imported and used throughout the application.
"""

from src.db.repositories.firestore_api_key_repository import FirestoreApiKeyRepository

# Create the singleton instance
api_key_repository = FirestoreApiKeyRepository()

src/db/repositories/firestore_api_key_repository.py
@@ -1,4 +1,6 @@
 import logging
+from datetime import datetime
+from bson import ObjectId
 from src.db.repositories.firestore_repository import FirestoreRepository
 from src.models.api_key import ApiKeyModel

@@ -51,5 +53,53 @@ class FirestoreApiKeyRepository(FirestoreRepository[ApiKeyModel]):
             logger.error(f"Error getting API keys by user ID: {e}")
             raise

+    async def get_by_user(self, user_id: ObjectId) -> list[ApiKeyModel]:
+        """
+        Get API keys by user (alias for get_by_user_id with ObjectId)
+
+        Args:
+            user_id: User ID as ObjectId
+
+        Returns:
+            List of API keys
+        """
+        return await self.get_by_user_id(str(user_id))
+
+    async def update_last_used(self, api_key_id: ObjectId) -> bool:
+        """
+        Update the last used timestamp for an API key
+
+        Args:
+            api_key_id: API key ID
+
+        Returns:
+            True if updated successfully
+        """
+        try:
+            update_data = {"last_used": datetime.utcnow()}
+            result = await self.update(str(api_key_id), update_data)
+            return result is not None
+        except Exception as e:
+            logger.error(f"Error updating last used for API key: {e}")
+            raise
+
+    async def deactivate(self, api_key_id: ObjectId) -> bool:
+        """
+        Deactivate an API key
+
+        Args:
+            api_key_id: API key ID
+
+        Returns:
+            True if deactivated successfully
+        """
+        try:
+            update_data = {"is_active": False}
+            result = await self.update(str(api_key_id), update_data)
+            return result is not None
+        except Exception as e:
+            logger.error(f"Error deactivating API key: {e}")
+            raise
+
 # Create a singleton repository
 firestore_api_key_repository = FirestoreApiKeyRepository()

src/db/repositories/image_repository.py (new file, 11 lines)

"""
Image repository singleton instance.

This module provides a singleton instance of the image repository
that can be imported and used throughout the application.
"""

from src.db.repositories.firestore_image_repository import FirestoreImageRepository

# Create the singleton instance
image_repository = FirestoreImageRepository()

src/db/repositories/team_repository.py (new file, 11 lines)

"""
Team repository singleton instance.

This module provides a singleton instance of the team repository
that can be imported and used throughout the application.
"""

from src.db.repositories.firestore_team_repository import FirestoreTeamRepository

# Create the singleton instance
team_repository = FirestoreTeamRepository()

src/db/repositories/user_repository.py (new file, 11 lines)

"""
User repository singleton instance.

This module provides a singleton instance of the user repository
that can be imported and used throughout the application.
"""

from src.db.repositories.firestore_user_repository import FirestoreUserRepository

# Create the singleton instance
user_repository = FirestoreUserRepository()

tests/conftest.py (new file, 195 lines)

"""
|
||||||
|
Global test configuration and fixtures for SEREACT tests.
|
||||||
|
|
||||||
|
This file provides shared fixtures and configuration for:
|
||||||
|
- Unit tests (with mocked dependencies)
|
||||||
|
- Integration tests (with real database connections)
|
||||||
|
- End-to-end tests (with complete workflow testing)
|
||||||
|
"""
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
import asyncio
|
||||||
|
import os
|
||||||
|
import tempfile
|
||||||
|
import io
|
||||||
|
from typing import Generator, Dict, Any
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from PIL import Image as PILImage
|
||||||
|
|
||||||
|
# Import the main app
|
||||||
|
from main import app
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def event_loop():
|
||||||
|
"""Create an event loop for the test session"""
|
||||||
|
loop = asyncio.new_event_loop()
|
||||||
|
yield loop
|
||||||
|
loop.close()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def test_app():
|
||||||
|
"""Create the FastAPI test application"""
|
||||||
|
return app
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def client(test_app) -> Generator[TestClient, None, None]:
|
||||||
|
"""Create a test client for the FastAPI app"""
|
||||||
|
with TestClient(test_app) as test_client:
|
||||||
|
yield test_client
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def sample_image() -> io.BytesIO:
|
||||||
|
"""Create a sample image for testing uploads"""
|
||||||
|
img = PILImage.new('RGB', (100, 100), color='red')
|
||||||
|
img_bytes = io.BytesIO()
|
||||||
|
img.save(img_bytes, format='JPEG')
|
||||||
|
img_bytes.seek(0)
|
||||||
|
return img_bytes
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def sample_images() -> Dict[str, io.BytesIO]:
|
||||||
|
"""Create multiple sample images for testing"""
|
||||||
|
images = {}
|
||||||
|
colors = ['red', 'green', 'blue', 'yellow']
|
||||||
|
|
||||||
|
for i, color in enumerate(colors):
|
||||||
|
img = PILImage.new('RGB', (100, 100), color=color)
|
||||||
|
img_bytes = io.BytesIO()
|
||||||
|
img.save(img_bytes, format='JPEG')
|
||||||
|
img_bytes.seek(0)
|
||||||
|
images[f'{color}_image_{i}.jpg'] = img_bytes
|
||||||
|
|
||||||
|
return images
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def temp_image_file() -> Generator[str, None, None]:
|
||||||
|
"""Create a temporary image file for testing"""
|
||||||
|
img = PILImage.new('RGB', (200, 200), color='blue')
|
||||||
|
|
||||||
|
with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as tmp_file:
|
||||||
|
img.save(tmp_file, format='JPEG')
|
||||||
|
tmp_file_path = tmp_file.name
|
||||||
|
|
||||||
|
yield tmp_file_path
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
try:
|
||||||
|
os.unlink(tmp_file_path)
|
||||||
|
except OSError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def test_team_data() -> Dict[str, Any]:
|
||||||
|
"""Provide test team data"""
|
||||||
|
return {
|
||||||
|
"name": "Test Team",
|
||||||
|
"description": "A team created for testing purposes"
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def test_user_data() -> Dict[str, Any]:
|
||||||
|
"""Provide test user data"""
|
||||||
|
return {
|
||||||
|
"email": "test@example.com",
|
||||||
|
"name": "Test User",
|
||||||
|
"role": "admin"
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def test_api_key_data() -> Dict[str, Any]:
|
||||||
|
"""Provide test API key data"""
|
||||||
|
return {
|
||||||
|
"name": "Test API Key",
|
||||||
|
"permissions": ["read", "write"]
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def test_image_data() -> Dict[str, Any]:
|
||||||
|
"""Provide test image metadata"""
|
||||||
|
return {
|
||||||
|
"description": "Test image for automated testing",
|
||||||
|
"tags": "test,automation,sample"
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
# Environment-specific fixtures
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def integration_test_enabled() -> bool:
|
||||||
|
"""Check if integration tests are enabled"""
|
||||||
|
return bool(os.getenv("FIRESTORE_INTEGRATION_TEST"))
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.fixture(scope="session")
|
||||||
|
def e2e_integration_test_enabled() -> bool:
|
||||||
|
"""Check if E2E integration tests are enabled"""
|
||||||
|
return bool(os.getenv("E2E_INTEGRATION_TEST"))
|
||||||
|
|
||||||
|
|
||||||
|
# Test data cleanup utilities
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def cleanup_tracker():
|
||||||
|
"""Track resources created during tests for cleanup"""
|
||||||
|
resources = {
|
||||||
|
"teams": [],
|
||||||
|
"users": [],
|
||||||
|
"api_keys": [],
|
||||||
|
"images": []
|
||||||
|
}
|
||||||
|
|
||||||
|
yield resources
|
||||||
|
|
||||||
|
# Cleanup logic would go here if needed
|
||||||
|
# For now, we rely on test isolation through mocking
|
||||||
|
|
||||||
|
|
||||||
|
# Configuration for different test types
|
||||||
|
def pytest_configure(config):
|
||||||
|
"""Configure pytest with custom markers and settings"""
|
||||||
|
config.addinivalue_line(
|
||||||
|
"markers", "unit: mark test as unit test (uses mocks)"
|
||||||
|
)
|
||||||
|
config.addinivalue_line(
|
||||||
|
"markers", "integration: mark test as integration test (requires real database)"
|
||||||
|
)
|
||||||
|
config.addinivalue_line(
|
||||||
|
"markers", "e2e: mark test as end-to-end test (complete workflows)"
|
||||||
|
)
|
||||||
|
config.addinivalue_line(
|
||||||
|
"markers", "slow: mark test as slow running"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def pytest_collection_modifyitems(config, items):
|
||||||
|
"""Modify test collection to add markers based on test location"""
|
||||||
|
for item in items:
|
||||||
|
# Add markers based on test file location
|
||||||
|
if "integration" in str(item.fspath):
|
||||||
|
item.add_marker(pytest.mark.integration)
|
||||||
|
elif "e2e" in str(item.fspath) or item.name.startswith("test_e2e"):
|
||||||
|
item.add_marker(pytest.mark.e2e)
|
||||||
|
else:
|
||||||
|
item.add_marker(pytest.mark.unit)
|
||||||
|
|
||||||
|
|
||||||
|
# Skip conditions for different test types
|
||||||
|
def pytest_runtest_setup(item):
|
||||||
|
"""Setup conditions for running different types of tests"""
|
||||||
|
# Skip integration tests if not enabled
|
||||||
|
if item.get_closest_marker("integration"):
|
||||||
|
if not os.getenv("FIRESTORE_INTEGRATION_TEST"):
|
||||||
|
pytest.skip("Integration tests disabled. Set FIRESTORE_INTEGRATION_TEST=1 to enable")
|
||||||
|
|
||||||
|
# Skip E2E integration tests if not enabled
|
||||||
|
if item.get_closest_marker("e2e") and "integration" in item.keywords:
|
||||||
|
if not os.getenv("E2E_INTEGRATION_TEST"):
|
||||||
|
pytest.skip("E2E integration tests disabled. Set E2E_INTEGRATION_TEST=1 to enable")
|
||||||

tests/test_e2e.py (new file, 469 lines)

"""
|
||||||
|
End-to-End Tests for SEREACT API
|
||||||
|
|
||||||
|
These tests cover the complete user workflows described in the README:
|
||||||
|
1. Bootstrap initial setup (team, admin user, API key) - or use existing setup
|
||||||
|
2. Team creation and management
|
||||||
|
3. User management within teams
|
||||||
|
4. API key authentication
|
||||||
|
5. Image upload and storage
|
||||||
|
6. Image search and retrieval
|
||||||
|
7. Multi-team isolation
|
||||||
|
|
||||||
|
These tests are idempotent and can be run multiple times against the same database.
|
||||||
|
|
||||||
|
Run with: pytest tests/test_e2e.py -v
|
||||||
|
For integration tests: pytest tests/test_e2e.py -v -m integration
|
||||||
|
"""
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
import asyncio
|
||||||
|
import os
|
||||||
|
import io
|
||||||
|
import uuid
|
||||||
|
from typing import Dict, Any, List
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
from PIL import Image as PILImage
|
||||||
|
import tempfile
|
||||||
|
|
||||||
|
from main import app
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.e2e
|
||||||
|
class TestE2EWorkflows:
|
||||||
|
"""End-to-end tests covering complete user workflows"""
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def client(self):
|
||||||
|
"""Create test client for the FastAPI app"""
|
||||||
|
return TestClient(app)
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def sample_image_file(self):
|
||||||
|
"""Create a sample image file for testing uploads"""
|
||||||
|
# Create a simple test image
|
||||||
|
img = PILImage.new('RGB', (100, 100), color='red')
|
||||||
|
img_bytes = io.BytesIO()
|
||||||
|
img.save(img_bytes, format='JPEG')
|
||||||
|
img_bytes.seek(0)
|
||||||
|
return img_bytes
|
||||||
|
|
||||||
|
@pytest.fixture(scope="function")
|
||||||
|
def unique_suffix(self):
|
||||||
|
"""Generate a unique suffix for test data to avoid conflicts"""
|
||||||
|
return str(uuid.uuid4())[:8]
|
||||||
|
|
||||||
|
def test_bootstrap_or_existing_setup_workflow(self, client: TestClient, sample_image_file, unique_suffix):
|
||||||
|
"""Test the complete workflow - either bootstrap new setup or use existing one"""
|
||||||
|
|
||||||
|
# 1. Try bootstrap first, but handle gracefully if already done
|
||||||
|
bootstrap_data = {
|
||||||
|
"team_name": f"E2E Test Team {unique_suffix}",
|
||||||
|
"admin_email": f"admin-{unique_suffix}@e2etest.com",
|
||||||
|
"admin_name": f"E2E Admin User {unique_suffix}",
|
||||||
|
"api_key_name": f"E2E Test API Key {unique_suffix}"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = client.post("/api/v1/auth/bootstrap", params=bootstrap_data)
|
||||||
|
|
||||||
|
if response.status_code == 400:
|
||||||
|
# Bootstrap already completed, try to use existing setup
|
||||||
|
print("Bootstrap already completed, trying to use existing setup...")
|
||||||
|
|
||||||
|
# Check if user provided an API key via environment variable
|
||||||
|
test_api_key = os.getenv("E2E_TEST_API_KEY")
|
||||||
|
|
||||||
|
if test_api_key:
|
||||||
|
print(f"Using API key from environment variable")
|
||||||
|
headers = {"X-API-Key": test_api_key}
|
||||||
|
|
||||||
|
# Verify the API key works
|
||||||
|
response = client.get("/api/v1/auth/verify", headers=headers)
|
||||||
|
if response.status_code != 200:
|
||||||
|
pytest.skip(f"Provided API key is invalid: {response.status_code}")
|
||||||
|
|
||||||
|
auth_info = response.json()
|
||||||
|
|
||||||
|
# Create a new team for our test
|
||||||
|
team_data = {
|
||||||
|
"name": f"E2E Test Team {unique_suffix}",
|
||||||
|
"description": f"E2E test team created at {unique_suffix}"
|
||||||
|
}
|
||||||
|
|
||||||
|
response = client.post("/api/v1/teams", json=team_data, headers=headers)
|
||||||
|
if response.status_code != 201:
|
||||||
|
pytest.skip(f"Failed to create test team: {response.status_code}")
|
||||||
|
|
||||||
|
team = response.json()
|
||||||
|
team_id = team["id"]
|
||||||
|
|
||||||
|
# Create a test user for this team
|
||||||
|
user_data = {
|
||||||
|
"email": f"testuser-{unique_suffix}@e2etest.com",
|
||||||
|
"name": f"E2E Test User {unique_suffix}",
|
||||||
|
"is_admin": True,
|
||||||
|
"team_id": team_id
|
||||||
|
}
|
||||||
|
|
||||||
|
response = client.post("/api/v1/users", json=user_data, headers=headers)
|
||||||
|
if response.status_code != 201:
|
||||||
|
pytest.skip(f"Failed to create test user: {response.status_code}")
|
||||||
|
|
||||||
|
user = response.json()
|
||||||
|
admin_user_id = user["id"]
|
||||||
|
api_key = test_api_key
|
||||||
|
|
||||||
|
print(f"✅ Using existing setup with new team: {team_id}, user: {admin_user_id}")
|
||||||
|
|
||||||
|
else:
|
||||||
|
# No API key provided, skip the test
|
||||||
|
pytest.skip(
|
||||||
|
"Bootstrap already completed and no API key provided. "
|
||||||
|
"Set E2E_TEST_API_KEY environment variable with a valid API key to run this test."
|
||||||
|
)
|
||||||
|
|
||||||
|
else:
|
||||||
|
# Bootstrap succeeded
|
||||||
|
assert response.status_code == 201
|
||||||
|
bootstrap_result = response.json()
|
||||||
|
assert "key" in bootstrap_result
|
||||||
|
|
||||||
|
api_key = bootstrap_result["key"]
|
||||||
|
team_id = bootstrap_result["team_id"]
|
||||||
|
admin_user_id = bootstrap_result["user_id"]
|
||||||
|
headers = {"X-API-Key": api_key}
|
||||||
|
|
||||||
|
print(f"✅ Bootstrap successful - Team: {team_id}, User: {admin_user_id}")
|
||||||
|
|
||||||
|
headers = {"X-API-Key": api_key}
        # 2. Verify authentication works
        response = client.get("/api/v1/auth/verify", headers=headers)
        assert response.status_code == 200
        auth_info = response.json()
        print("✅ Authentication verified")

        # 3. Test team management
        # Get the team (either created or from bootstrap)
        response = client.get(f"/api/v1/teams/{team_id}", headers=headers)
        assert response.status_code == 200
        team = response.json()
        print("✅ Team retrieval successful")

        # Update team with unique description
        update_data = {"description": f"Updated during E2E testing {unique_suffix}"}
        response = client.put(f"/api/v1/teams/{team_id}", json=update_data, headers=headers)
        assert response.status_code == 200
        updated_team = response.json()
        assert f"Updated during E2E testing {unique_suffix}" in updated_team["description"]
        print("✅ Team update successful")

        # List teams
        response = client.get("/api/v1/teams", headers=headers)
        assert response.status_code == 200
        teams = response.json()
        assert len(teams) >= 1
        assert any(t["id"] == team_id for t in teams)
        print("✅ Team listing successful")

        # 4. Test user management
        # Create a regular user with unique email
        user_data = {
            "email": f"user-{unique_suffix}@e2etest.com",
            "name": f"E2E Regular User {unique_suffix}",
            "is_admin": False,
            "team_id": team_id
        }

        response = client.post("/api/v1/users", json=user_data, headers=headers)
        assert response.status_code == 201
        regular_user = response.json()
        assert regular_user["email"] == f"user-{unique_suffix}@e2etest.com"
        assert regular_user["is_admin"] is False
        regular_user_id = regular_user["id"]
        print("✅ User creation successful")

        # Get user details
        response = client.get(f"/api/v1/users/{regular_user_id}", headers=headers)
        assert response.status_code == 200
        retrieved_user = response.json()
        assert retrieved_user["email"] == f"user-{unique_suffix}@e2etest.com"
        print("✅ User retrieval successful")

        # List users
        response = client.get("/api/v1/users", headers=headers)
        assert response.status_code == 200
        users = response.json()
        assert len(users) >= 1
        user_emails = [u["email"] for u in users]
        assert f"user-{unique_suffix}@e2etest.com" in user_emails
        print("✅ User listing successful")
        # 5. Test API key management
        # Create additional API key with unique name
        api_key_data = {
            "name": f"Additional Test Key {unique_suffix}",
            "description": f"Extra key for testing {unique_suffix}"
        }

        response = client.post("/api/v1/auth/api-keys", json=api_key_data, headers=headers)
        assert response.status_code == 201
        new_api_key = response.json()
        assert new_api_key["name"] == f"Additional Test Key {unique_suffix}"
        new_key_value = new_api_key["key"]
        new_key_id = new_api_key["id"]
        print("✅ Additional API key creation successful")

        # Test the new API key works
        new_headers = {"X-API-Key": new_key_value}
        response = client.get("/api/v1/auth/verify", headers=new_headers)
        assert response.status_code == 200
        print("✅ New API key authentication successful")

        # List API keys
        response = client.get("/api/v1/auth/api-keys", headers=headers)
        assert response.status_code == 200
        api_keys = response.json()
        assert len(api_keys) >= 1
        print("✅ API key listing successful")

        # Revoke the additional API key
        response = client.delete(f"/api/v1/auth/api-keys/{new_key_id}", headers=headers)
        assert response.status_code == 204
        print("✅ API key revocation successful")

        # Verify revoked key doesn't work
        response = client.get("/api/v1/auth/verify", headers=new_headers)
        assert response.status_code == 401
        print("✅ Revoked API key properly rejected")
        # 6. Test image upload and management
        sample_image_file.seek(0)
        files = {"file": (f"test_image_{unique_suffix}.jpg", sample_image_file, "image/jpeg")}
        data = {
            "description": f"E2E test image {unique_suffix}",
            "tags": f"test,e2e,sample,{unique_suffix}"
        }
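        # Note: the upload endpoint takes tags as a comma-separated form field,
        # while the JSON metadata update later in this test sends them as a list.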
        response = client.post("/api/v1/images", files=files, data=data, headers=headers)
        assert response.status_code == 201
        image = response.json()
        assert image["filename"] == f"test_image_{unique_suffix}.jpg"
        assert image["description"] == f"E2E test image {unique_suffix}"
        assert "test" in image["tags"]
        assert unique_suffix in image["tags"]
        image_id = image["id"]
        print("✅ Image upload successful")

        # Get image details
        response = client.get(f"/api/v1/images/{image_id}", headers=headers)
        assert response.status_code == 200
        retrieved_image = response.json()
        assert retrieved_image["filename"] == f"test_image_{unique_suffix}.jpg"
        print("✅ Image retrieval successful")

        # Update image metadata
        update_data = {
            "description": f"Updated E2E test image {unique_suffix}",
            "tags": ["test", "e2e", "updated", unique_suffix]
        }
        response = client.put(f"/api/v1/images/{image_id}", json=update_data, headers=headers)
        assert response.status_code == 200
        updated_image = response.json()
        assert updated_image["description"] == f"Updated E2E test image {unique_suffix}"
        assert "updated" in updated_image["tags"]
        print("✅ Image metadata update successful")

        # List images
        response = client.get("/api/v1/images", headers=headers)
        assert response.status_code == 200
        images = response.json()
        assert len(images) >= 1
        # Check if our image is in the list
        our_images = [img for img in images if img["id"] == image_id]
        assert len(our_images) == 1
        print("✅ Image listing successful")

        # Download image
        response = client.get(f"/api/v1/images/{image_id}/download", headers=headers)
        assert response.status_code == 200
        assert response.headers["content-type"] == "image/jpeg"
        print("✅ Image download successful")
        # 7. Test search functionality
        # Upload multiple images for search testing
        test_images = [
            {"filename": f"cat_{unique_suffix}.jpg", "description": f"A cute cat {unique_suffix}", "tags": f"animal,pet,cat,{unique_suffix}"},
            {"filename": f"dog_{unique_suffix}.jpg", "description": f"A friendly dog {unique_suffix}", "tags": f"animal,pet,dog,{unique_suffix}"},
            {"filename": f"landscape_{unique_suffix}.jpg", "description": f"Beautiful landscape {unique_suffix}", "tags": f"nature,landscape,outdoor,{unique_suffix}"}
        ]

        uploaded_image_ids = []
        for img_data in test_images:
            sample_image_file.seek(0)
            files = {"file": (img_data["filename"], sample_image_file, "image/jpeg")}
            data = {
                "description": img_data["description"],
                "tags": img_data["tags"]
            }

            response = client.post("/api/v1/images", files=files, data=data, headers=headers)
            assert response.status_code == 201
            uploaded_image_ids.append(response.json()["id"])
        print("✅ Multiple image uploads successful")

        # Text search with the unique suffix to find our images
        response = client.get(f"/api/v1/search?query={unique_suffix}", headers=headers)
        assert response.status_code == 200
        search_results = response.json()
        assert len(search_results) >= 1
        # Verify our images are in the results
        result_descriptions = [result["description"] for result in search_results]
        assert any(unique_suffix in desc for desc in result_descriptions)
        print("✅ Text search successful")

        # Tag-based search with our unique tag
        response = client.get(f"/api/v1/search?tags={unique_suffix}", headers=headers)
        assert response.status_code == 200
        search_results = response.json()
        assert len(search_results) >= 4  # The first image plus the three uploaded above
        print("✅ Tag-based search successful")

        # Combined search
        response = client.get(f"/api/v1/search?query=cat&tags={unique_suffix}", headers=headers)
        assert response.status_code == 200
        search_results = response.json()
        assert len(search_results) >= 1
        print("✅ Combined search successful")

        print("🎉 Complete E2E workflow test passed!")

        return {
            "team_id": team_id,
            "admin_user_id": admin_user_id,
            "regular_user_id": regular_user_id,
            "api_key": api_key,
            "image_ids": [image_id] + uploaded_image_ids,
            "unique_suffix": unique_suffix
        }
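        # NOTE: pytest ignores test return values (recent versions emit a
        # warning for non-None returns); the dict above is informational only.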

    def test_error_handling(self, client: TestClient, unique_suffix):
        """Test error handling scenarios"""

        # Test bootstrap with duplicate data (should fail gracefully)
        bootstrap_data = {
            "team_name": f"Another Team {unique_suffix}",
            "admin_email": f"another-{unique_suffix}@admin.com",
            "admin_name": f"Another Admin {unique_suffix}",
            "api_key_name": f"Another API Key {unique_suffix}"
        }

        response = client.post("/api/v1/auth/bootstrap", params=bootstrap_data)
        if response.status_code == 400:
            assert "Bootstrap already completed" in response.json()["detail"]
            print("✅ Bootstrap protection working")
        else:
            # If bootstrap succeeded, that's also fine for a fresh database
            print("✅ Bootstrap succeeded (fresh database)")

        # Test invalid API key
        invalid_headers = {"X-API-Key": "invalid-key"}
        response = client.get("/api/v1/auth/verify", headers=invalid_headers)
        assert response.status_code == 401
        print("✅ Invalid API key properly rejected")

        # Test missing API key
        response = client.get("/api/v1/teams")
        assert response.status_code == 401
        print("✅ Missing API key properly rejected")

        # Test file upload errors
        response = client.post("/api/v1/images")
        assert response.status_code == 401  # No API key
        print("✅ Unauthorized image upload properly rejected")

        print("🎉 Error handling test passed!")


@pytest.mark.integration
@pytest.mark.e2e
class TestE2EIntegrationWorkflows:
    """End-to-end integration tests that require real services"""

    @pytest.fixture(scope="class")
    def client(self):
        """Create test client for integration testing"""
        if not os.getenv("E2E_INTEGRATION_TEST"):
            pytest.skip("E2E integration tests disabled. Set E2E_INTEGRATION_TEST=1 to enable")

        return TestClient(app)
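    # Example invocation (sketch, assuming the markers are registered in the
    # project's pytest config):
    #
    #   E2E_INTEGRATION_TEST=1 pytest tests/test_e2e.py -m integration -v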
    def test_real_image_processing_workflow(self, client: TestClient):
        """Test the complete image processing workflow with real services"""
        # This test would require:
        # - Real Google Cloud Storage
        # - Real Firestore database
        # - Real Cloud Vision API
        # - Real Pinecone vector database

        # For integration tests, we would need to clear the database first
        # or use a separate test database

        # Bootstrap setup
        bootstrap_data = {
            "team_name": "Real Processing Team",
            "admin_email": "real@processing.com",
            "admin_name": "Real User",
            "api_key_name": "Real API Key"
        }

        response = client.post("/api/v1/auth/bootstrap", params=bootstrap_data)
        if response.status_code == 400:
            # Bootstrap already done, skip this test
            pytest.skip("Bootstrap already completed in real database")

        assert response.status_code == 201
        api_key = response.json()["key"]
        headers = {"X-API-Key": api_key}

        # Upload a real image (the file handle must stay open for the upload)
        with open("images/sample_image.jpg", "rb") as f:  # Assuming sample image exists
            files = {"file": ("real_image.jpg", f, "image/jpeg")}
            data = {"description": "Real image for processing", "tags": "real,processing,test"}

            response = client.post("/api/v1/images", files=files, data=data, headers=headers)

        assert response.status_code == 201
        image = response.json()
        image_id = image["id"]

        # Wait for processing to complete (in a real scenario, this would be async)
        import time
        time.sleep(5)  # Wait for Cloud Function to process
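        # A fixed sleep is fragile for an async pipeline; a polling loop is
        # more robust (sketch, relying on the "status" field asserted below):
        #
        #   deadline = time.time() + 30
        #   while time.time() < deadline:
        #       resp = client.get(f"/api/v1/images/{image_id}", headers=headers)
        #       if resp.status_code == 200 and resp.json().get("status") == "ready":
        #           break
        #       time.sleep(1)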
        # Check if embeddings were generated
        response = client.get(f"/api/v1/images/{image_id}", headers=headers)
        assert response.status_code == 200
        processed_image = response.json()
        assert processed_image["status"] == "ready"
        assert "embedding_id" in processed_image

        # Test semantic search
        response = client.get("/api/v1/search/semantic?query=similar image", headers=headers)
        assert response.status_code == 200
        search_results = response.json()
        assert len(search_results) >= 1


# Utility functions for E2E tests
def create_test_image(width: int = 100, height: int = 100, color: str = 'red') -> io.BytesIO:
    """Create a test image for upload testing"""
    img = PILImage.new('RGB', (width, height), color=color)
    img_bytes = io.BytesIO()
    img.save(img_bytes, format='JPEG')
    img_bytes.seek(0)
    return img_bytes
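# Hypothetical fixture built on this helper (a sketch; the sample_image_file
# fixture used by the tests above is assumed to be defined elsewhere in this file):
#
#   @pytest.fixture
#   def sample_image_file():
#       return create_test_image(width=64, height=64, color="blue")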


if __name__ == "__main__":
    # Run E2E tests
    pytest.main([__file__, "-v", "-m", "e2e"])