# Forgejo Actions for Pricing Tests

This directory contains Forgejo Actions (Gitea Actions) workflows that automatically run pricing tests in the CI/CD pipeline. These workflows ensure that pricing calculations remain accurate and that changes to pricing logic don't introduce regressions.

## Workflow Files

### 1. `ci.yml` - Main CI/CD Pipeline

**Triggers**: Push to `main`/`develop`, Pull Requests

**Purpose**: Complete CI/CD pipeline including testing, building, and deployment

**Jobs**:

- **test**: Runs all Django tests including pricing tests
- **lint**: Code quality checks with ruff
- **security**: Security scanning with safety and bandit
- **build**: Docker image building (only on `main`/`develop`)
- **deploy**: Production deployment (only on `main`)

**Key Features**:

- Uses PostgreSQL service for realistic testing
- Runs pricing tests in separate groups for better visibility
- Includes Django system checks
- Only builds/deploys if tests pass
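
As a rough illustration, a test job that uses a PostgreSQL service might be structured like the sketch below. This is not the literal content of `ci.yml`; the runner label, image tag, credentials, and test module paths are assumptions.

```yaml
jobs:
  test:
    runs-on: docker              # runner label depends on your Forgejo runner setup
    services:
      postgres:
        image: postgres:16       # placeholder image tag
        env:
          POSTGRES_DB: hub
          POSTGRES_USER: hub
          POSTGRES_PASSWORD: hub # placeholder credentials
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Django system checks
        run: python manage.py check
      - name: Run pricing tests
        # the service hostname/port depends on whether the job runs inside a container
        run: python manage.py test hub.services.tests
        env:
          DATABASE_URL: postgres://hub:hub@postgres:5432/hub
```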

### 2. `pricing-tests.yml` - Dedicated Pricing Tests

**Triggers**: Changes to pricing-related files

**Purpose**: Comprehensive testing of pricing models and calculations

**Path Triggers**:

- `hub/services/models/pricing.py`
- `hub/services/tests/test_pricing*.py`
- `hub/services/forms.py`
- `hub/services/views/**`
- `hub/services/templates/**`
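
Path-based triggers of this kind live in the workflow's `on` block; a minimal sketch (the actual filter list in the workflow may differ):

```yaml
on:
  push:
    paths:
      - "hub/services/models/pricing.py"
      - "hub/services/tests/test_pricing*.py"
      - "hub/services/forms.py"
      - "hub/services/views/**"
      - "hub/services/templates/**"
  pull_request:
    paths:
      - "hub/services/models/pricing.py"
      - "hub/services/tests/test_pricing*.py"
```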

**Jobs**:

- **pricing-tests**: Matrix testing across Python and Django versions
- **pricing-documentation**: Documentation and coverage checks

**Key Features**:

- Matrix testing: Python 3.12/3.13 × Django 5.0/5.1
- Test coverage reporting
- Performance testing with large datasets
- Pricing validation with sample scenarios
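
The Python × Django combinations are expressed with a `strategy.matrix` block, roughly like this sketch (test labels and install commands are assumptions, not the literal workflow):

```yaml
jobs:
  pricing-tests:
    runs-on: docker
    strategy:
      fail-fast: false           # run every combination even if one fails
      matrix:
        python-version: ["3.12", "3.13"]
        django-version: ["5.0", "5.1"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install the matrix Django version
        run: pip install "Django~=${{ matrix.django-version }}.0"
      - name: Run pricing tests
        run: python manage.py test hub.services.tests --pattern="test_pricing*.py"
```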

### 3. `pr-pricing-validation.yml` - Pull Request Validation

**Triggers**: Pull requests affecting pricing code

**Purpose**: Validate pricing changes in PRs before merge

**Jobs**:

- **pricing-validation**: Comprehensive validation of pricing changes

**Key Features**:

- Migration detection for pricing model changes
- Coverage tracking with minimum threshold (85%)
- Critical method change detection
- Backward compatibility checking
- Test addition validation
- PR summary generation
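
Enforcing the 85% threshold typically relies on coverage.py's `--fail-under` flag; a hedged sketch of such a step (the source path and test labels are assumptions):

```yaml
steps:
  - name: Run pricing tests with coverage
    run: |
      coverage run --source=hub/services manage.py test hub.services.tests
      coverage report --fail-under=85   # fails the step if coverage drops below 85%
      coverage xml                      # machine-readable report for later upload
```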

### 4. `scheduled-pricing-tests.yml` - Scheduled Testing

**Triggers**: Daily at 6 AM UTC, manual dispatch

**Purpose**: Regular validation to catch time-based or dependency issues

**Jobs**:

- **scheduled-pricing-tests**: Matrix testing on different databases
- **notify-on-failure**: Automatic issue creation on failure

**Key Features**:

- SQLite and PostgreSQL database testing
- Stress testing with concurrent calculations
- Data integrity checks
- Daily pricing system reports
- Automatic issue creation on failures
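
The daily run and the manual trigger correspond to `schedule` and `workflow_dispatch` events; a sketch of how that trigger block could look (the input name is hypothetical, though the "pricing-only" scope is mentioned in the usage examples below):

```yaml
on:
  schedule:
    - cron: "0 6 * * *"        # daily at 06:00 UTC
  workflow_dispatch:
    inputs:
      test-scope:              # hypothetical input name
        description: "Which tests to run"
        type: choice
        options:
          - full
          - pricing-only
        default: full
```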

## Environment Variables

The workflows use the following secrets and environment variables:

### Required Secrets

```yaml
REGISTRY_USERNAME  # Container registry username
REGISTRY_PASSWORD  # Container registry password
OPENSHIFT_SERVER   # OpenShift server URL
OPENSHIFT_TOKEN    # OpenShift authentication token
```

### Environment Variables

```yaml
REGISTRY                # Container registry URL
NAMESPACE               # Kubernetes namespace
DATABASE_URL            # Database connection string
DJANGO_SETTINGS_MODULE  # Django settings module
```
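
Secrets are referenced in workflow steps through the `secrets` context, while environment variables are usually set at the workflow or job level; for example (the registry URL and settings module shown here are placeholder values):

```yaml
env:
  REGISTRY: registry.example.com        # placeholder value
  DJANGO_SETTINGS_MODULE: hub.settings  # placeholder value

jobs:
  build:
    runs-on: docker
    steps:
      - name: Log in to the container registry
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login "$REGISTRY" -u "${{ secrets.REGISTRY_USERNAME }}" --password-stdin
```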

## Workflow Triggers

### Automatic Triggers

- **Push to main/develop**: Full CI/CD pipeline
- **Pull Requests**: Pricing validation and full testing
- **File Changes**: Pricing-specific tests when pricing files change
- **Schedule**: Daily pricing validation at 6 AM UTC

### Manual Triggers

- **Workflow Dispatch**: Manual execution with options
- **Re-run**: Any workflow can be manually re-run from the Actions UI

## Test Coverage

The workflows ensure comprehensive testing of:

### Core Functionality

- ✅ Pricing model CRUD operations
- ✅ Progressive discount calculations
- ✅ Final price calculations with addons
- ✅ Multi-currency support
- ✅ Service level pricing

### Edge Cases

- ✅ Zero and negative values
- ✅ Very large calculations
- ✅ Missing data handling
- ✅ Decimal precision issues
- ✅ Database constraints

### Integration Scenarios

- ✅ Complete service setups
- ✅ Real-world pricing scenarios
- ✅ External price comparisons
- ✅ Cross-model relationships

### Performance Testing

- ✅ Large dataset calculations
- ✅ Concurrent price calculations
- ✅ Stress testing with complex discount models
- ✅ Performance regression detection

## Monitoring and Alerts

### Test Failures

- Failed tests are clearly reported in the workflow logs
- PR validation includes detailed summaries
- Scheduled tests create issues in the repository on failure

### Coverage Tracking

- Test coverage reports are generated and uploaded
- Minimum coverage threshold enforced (85%)
- Coverage trends tracked over time
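
Uploading the generated report as a workflow artifact is usually a single step, assuming the runner supports the `actions/upload-artifact` action (artifact name and paths are placeholders):

```yaml
      - name: Upload coverage report
        uses: actions/upload-artifact@v4
        with:
          name: pricing-coverage-report
          path: |
            coverage.xml
            htmlcov/
```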

### Performance Monitoring

- Performance tests ensure calculations complete within time limits
- Stress tests validate concurrent processing
- Large dataset handling verified

## Usage Examples

### Running Specific Test Categories

```bash
# Trigger pricing-specific tests
git push origin feature/pricing-changes

# Manual workflow dispatch with specific scope:
# use the Forgejo Actions UI to run scheduled-pricing-tests.yml with the "pricing-only" scope
```

### Viewing Results

- Check the Actions tab in your repository
- Download coverage reports from workflow artifacts
- Review PR summaries for detailed analysis

### Debugging Failures

1. Check workflow logs for detailed error messages
2. Download test artifacts for coverage reports
3. Review database-specific failures in matrix results
4. Use manual workflow dispatch to re-run with different parameters

## Best Practices

### For Developers

1. **Run Tests Locally**: Use `./run_pricing_tests.sh` before pushing
2. **Add Tests**: Include tests for new pricing features
3. **Check Coverage**: Ensure new code has adequate test coverage
4. **Performance**: Consider the performance impact of pricing changes

### For Maintainers

1. **Monitor Scheduled Tests**: Review daily test results
2. **Update Dependencies**: Keep test dependencies current
3. **Adjust Thresholds**: Update coverage and performance thresholds as needed
4. **Review Failures**: Investigate and resolve test failures promptly

## Troubleshooting

### Common Issues

**Database Connection Failures**

- Check PostgreSQL service configuration
- Verify the `DATABASE_URL` environment variable
- Ensure the database is ready before tests start
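
A common cause of "database not ready" failures is a missing health check on the PostgreSQL service; a sketch of a service definition with health-check options (image tag and password are placeholders):

```yaml
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres   # placeholder
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
```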

**Test Timeouts**

- Increase timeout values for complex calculations
- Check for infinite loops in discount calculations
- Verify performance test thresholds

**Coverage Failures**

- Add tests for uncovered code paths
- Adjust the coverage threshold if appropriate
- Check for missing test imports

**Matrix Test Failures**

- Verify compatibility across Python/Django versions
- Check for version-specific issues
- Update test configurations as needed

## Maintenance

### Regular Updates

- Update action versions (e.g., `actions/checkout@v4`)
- Update Python versions in matrix testing
- Update Django versions for compatibility testing
- Review and update test thresholds

### Monitoring

- Check scheduled test results daily
- Review coverage trends monthly
- Update documentation quarterly
- Archive old test artifacts annually

## Integration with Existing CI/CD

These Forgejo Actions complement the existing GitLab CI configuration in `.gitlab-ci.yml`. Key differences:

### GitLab CI (Existing)

- Docker image building and deployment
- Production-focused pipeline
- Simple build-test-deploy flow

### Forgejo Actions (New)

- Comprehensive testing with multiple scenarios
- Detailed pricing validation
- Matrix testing across versions
- Automated issue creation
- Coverage tracking and reporting

Both systems can coexist, with Forgejo Actions providing detailed testing and GitLab CI handling deployment.