redo the action for running tests

This commit is contained in:
Tobias Brunner 2025-06-20 14:47:12 +02:00
parent 78f52ea7f4
commit 5888e281ea
6 changed files with 22 additions and 1648 deletions

@@ -1,244 +0,0 @@
# Forgejo Actions for Pricing Tests
This directory contains Forgejo Actions (Gitea Actions) workflows that automatically run pricing tests in the CI/CD pipeline. These workflows ensure that pricing calculations remain accurate and that changes to pricing logic don't introduce regressions.
## Workflow Files
### 1. `ci.yml` - Main CI/CD Pipeline
**Triggers**: Push to `main`/`develop`, Pull Requests
**Purpose**: Complete CI/CD pipeline including testing, building, and deployment
**Jobs**:
- **test**: Runs all Django tests including pricing tests
- **lint**: Code quality checks with ruff
- **security**: Security scanning with safety and bandit
- **build**: Docker image building (only on main/develop)
- **deploy**: Production deployment (only on main)
**Key Features**:
- Uses PostgreSQL service for realistic testing
- Runs pricing tests in separate groups for better visibility
- Includes Django system checks
- Only builds/deploys if tests pass
### 2. `pricing-tests.yml` - Dedicated Pricing Tests
**Triggers**: Changes to pricing-related files
**Purpose**: Comprehensive testing of pricing models and calculations
**Path Triggers**:
- `hub/services/models/pricing.py`
- `hub/services/tests/test_pricing*.py`
- `hub/services/forms.py`
- `hub/services/views/**`
- `hub/services/templates/**`
**Jobs**:
- **pricing-tests**: Matrix testing across Python and Django versions
- **pricing-documentation**: Documentation and coverage checks
**Key Features**:
- Matrix testing: Python 3.12/3.13 × Django 5.0/5.1
- Test coverage reporting
- Performance testing with large datasets
- Pricing validation with sample scenarios
### 3. `pr-pricing-validation.yml` - Pull Request Validation
**Triggers**: Pull requests affecting pricing code
**Purpose**: Validate pricing changes in PRs before merge
**Jobs**:
- **pricing-validation**: Comprehensive validation of pricing changes
**Key Features**:
- Migration detection for pricing model changes
- Coverage tracking with minimum threshold (85%)
- Critical method change detection
- Backward compatibility checking
- Test addition validation
- PR summary generation
### 4. `scheduled-pricing-tests.yml` - Scheduled Testing
**Triggers**: Daily at 6 AM UTC, manual dispatch
**Purpose**: Regular validation to catch time-based or dependency issues
**Jobs**:
- **scheduled-pricing-tests**: Matrix testing on different databases
- **notify-on-failure**: Automatic issue creation on failure
**Key Features**:
- SQLite and PostgreSQL database testing
- Stress testing with concurrent calculations
- Data integrity checks
- Daily pricing system reports
- Automatic issue creation on failures
## Environment Variables
The workflows use the following environment variables:
### Required Secrets
```yaml
REGISTRY_USERNAME # Container registry username
REGISTRY_PASSWORD # Container registry password
OPENSHIFT_SERVER # OpenShift server URL
OPENSHIFT_TOKEN # OpenShift authentication token
```
### Environment Variables
```yaml
REGISTRY # Container registry URL
NAMESPACE # Kubernetes namespace
DATABASE_URL # Database connection string
DJANGO_SETTINGS_MODULE # Django settings module
```
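How these variables are consumed is decided in `hub/settings.py`, which is not part of this directory. As a minimal sketch only, assuming the settings read `DATABASE_URL` through the third-party `dj-database-url` package (an assumption, not something this repository confirms), the mapping could look like this:

```python
# Hypothetical settings fragment, not the actual hub/settings.py.
# Assumes the dj-database-url package; falls back to local SQLite when DATABASE_URL is unset.
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(default="sqlite:///db.sqlite3"),
}
```

The workflows only need to export the variable; any equivalent parsing of the URL behaves the same way in CI.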
## Workflow Triggers
### Automatic Triggers
- **Push to main/develop**: Full CI/CD pipeline
- **Pull Requests**: Pricing validation and full testing
- **File Changes**: Pricing-specific tests when pricing files change
- **Schedule**: Daily pricing validation at 6 AM UTC
### Manual Triggers
- **Workflow Dispatch**: Manual execution with options
- **Re-run**: Any workflow can be manually re-run from the Actions UI
## Test Coverage
The workflows ensure comprehensive testing of:
### Core Functionality
- ✅ Pricing model CRUD operations
- ✅ Progressive discount calculations (see the sketch below)
- ✅ Final price calculations with addons
- ✅ Multi-currency support
- ✅ Service level pricing
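The progressive discount behaviour listed above is implemented by `VSHNAppCatPrice.calculate_final_price`. The snippet below is an illustrative stand-in rather than the production code: it reproduces the tiering scheme of the sample scenario in `pricing-tests.yml` (base fee 50.00 CHF, unit rate 5.00 CHF, 0% discount up to 10 units, 10% above), which that workflow expects to yield 75.00 CHF for 5 units and 122.50 CHF for 15 units.

```python
from decimal import Decimal

def tiered_total(units, base_fee, unit_rate, tiers):
    """Illustrative progressive pricing: each tier's units receive that tier's discount."""
    total = base_fee
    for min_units, max_units, discount_percent in tiers:
        upper = units if max_units is None else min(units, max_units)
        units_in_tier = max(0, upper - min_units)
        total += units_in_tier * unit_rate * (1 - discount_percent / Decimal(100))
    return total

# Tiers mirror the CI sample scenario: (min_units, max_units, discount_percent).
tiers = [(0, 10, Decimal("0")), (10, None, Decimal("10"))]
assert tiered_total(5, Decimal("50.00"), Decimal("5.00"), tiers) == Decimal("75.00")
assert tiered_total(15, Decimal("50.00"), Decimal("5.00"), tiers) == Decimal("122.50")
```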
### Edge Cases
- ✅ Zero and negative values
- ✅ Very large calculations
- ✅ Missing data handling
- ✅ Decimal precision issues
- ✅ Database constraints
### Integration Scenarios
- ✅ Complete service setups
- ✅ Real-world pricing scenarios
- ✅ External price comparisons
- ✅ Cross-model relationships
### Performance Testing
- ✅ Large dataset calculations
- ✅ Concurrent price calculations
- ✅ Stress testing with complex discount models
- ✅ Performance regression detection
## Monitoring and Alerts
### Test Failures
- Failed tests are clearly reported in the workflow logs
- PR validation includes detailed summaries
- Scheduled tests automatically open an issue in the repository on failure
### Coverage Tracking
- Test coverage reports are generated and uploaded
- Minimum coverage threshold enforced (85%)
- Coverage trends tracked over time
### Performance Monitoring
- Performance tests ensure calculations complete within time limits
- Stress tests validate concurrent processing
- Large dataset handling verified
## Usage Examples
### Running Specific Test Categories
```bash
# Trigger pricing-specific tests
git push origin feature/pricing-changes
# Manual workflow dispatch with specific scope
# Use the Actions UI (workflow dispatch) to run scheduled-pricing-tests.yml with the "pricing-only" scope
```
### Viewing Results
- Check the Actions tab in your repository
- Download coverage reports from workflow artifacts
- Review PR summaries for detailed analysis
### Debugging Failures
1. Check workflow logs for detailed error messages
2. Download test artifacts for coverage reports
3. Review database-specific failures in matrix results
4. Use manual workflow dispatch to re-run with different parameters
## Best Practices
### For Developers
1. **Run Tests Locally**: Use `./run_pricing_tests.sh` before pushing
2. **Add Tests**: Include tests for new pricing features
3. **Check Coverage**: Ensure new code has adequate test coverage
4. **Performance**: Consider performance impact of pricing changes
### For Maintainers
1. **Monitor Scheduled Tests**: Review daily test results
2. **Update Dependencies**: Keep test dependencies current
3. **Adjust Thresholds**: Update coverage and performance thresholds as needed
4. **Review Failures**: Investigate and resolve test failures promptly
## Troubleshooting
### Common Issues
**Database Connection Failures**
- Check PostgreSQL service configuration
- Verify DATABASE_URL environment variable
- Ensure database is ready before tests start
**Test Timeouts**
- Increase timeout values for complex calculations
- Check for infinite loops in discount calculations
- Verify performance test thresholds
**Coverage Failures**
- Add tests for uncovered code paths
- Adjust coverage threshold if appropriate
- Check for missing test imports
**Matrix Test Failures**
- Verify compatibility across Python/Django versions
- Check for version-specific issues
- Update test configurations as needed
## Maintenance
### Regular Updates
- Update action versions (e.g., `actions/checkout@v4`)
- Update Python versions in matrix testing
- Update Django versions for compatibility testing
- Review and update test thresholds
### Monitoring
- Check scheduled test results daily
- Review coverage trends monthly
- Update documentation quarterly
- Archive old test artifacts annually
## Integration with Existing CI/CD
These Forgejo Actions complement the existing GitLab CI configuration in `.gitlab-ci.yml`. Key differences:
### GitLab CI (Existing)
- Docker image building and deployment
- Production-focused pipeline
- Simple build-test-deploy flow
### Forgejo Actions (New)
- Comprehensive testing with multiple scenarios
- Detailed pricing validation
- Matrix testing across versions
- Automated issue creation
- Coverage tracking and reporting
Both systems can coexist, with Forgejo Actions providing detailed testing and GitLab CI handling deployment.

`ci.yml`

@@ -1,250 +0,0 @@
name: Test and Build
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
env:
REGISTRY: registry.vshn.net
NAMESPACE: vshn-servalafe-prod
jobs:
# Test job - runs Django tests including pricing tests
test:
name: Run Django Tests
runs-on: ubuntu-latest
services:
# Use PostgreSQL service for more realistic testing
postgres:
image: postgres:15
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: servala_test
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.13"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run pricing model tests
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/servala_test
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running pricing model tests"
uv run --extra dev manage.py test hub.services.tests.test_pricing --verbosity=2
echo "::endgroup::"
- name: Run pricing edge case tests
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/servala_test
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running pricing edge case tests"
uv run --extra dev manage.py test hub.services.tests.test_pricing_edge_cases --verbosity=2
echo "::endgroup::"
- name: Run pricing integration tests
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/servala_test
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running pricing integration tests"
uv run --extra dev manage.py test hub.services.tests.test_pricing_integration --verbosity=2
echo "::endgroup::"
- name: Run all Django tests
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/servala_test
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running all Django tests"
uv run --extra dev manage.py test --verbosity=2
echo "::endgroup::"
- name: Run Django system checks
env:
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/servala_test
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running Django system checks"
uv run --extra dev manage.py check --verbosity=2
echo "::endgroup::"
# Lint job - code quality checks
lint:
name: Code Quality Checks
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.13"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run ruff linting
run: |
echo "::group::Running ruff linting"
uv run ruff check . --output-format=github || true
echo "::endgroup::"
- name: Run ruff formatting check
run: |
echo "::group::Checking code formatting"
uv run ruff format --check . || true
echo "::endgroup::"
# Security checks
security:
name: Security Checks
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.13"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run safety check for known vulnerabilities
run: |
echo "::group::Running safety check"
uv run safety check || true
echo "::endgroup::"
- name: Run bandit security linter
run: |
echo "::group::Running bandit security scan"
uv run bandit -r hub/ -f json -o bandit-report.json || true
if [ -f bandit-report.json ]; then
echo "Bandit security scan results:"
cat bandit-report.json
fi
echo "::endgroup::"
# Build job - only runs if tests pass
build:
name: Build Docker Image
runs-on: ubuntu-latest
needs: [test, lint, security]
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/servala
tags: |
type=ref,event=branch
type=ref,event=pr
type=sha,prefix={{branch}}-
type=raw,value=latest,enable={{is_default_branch}}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Deploy job - only runs on main branch after successful build
deploy:
name: Deploy to Production
runs-on: ubuntu-latest
needs: [build]
if: github.ref == 'refs/heads/main'
environment:
name: production
url: https://servala.com/
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Deploy to OpenShift
env:
OPENSHIFT_SERVER: ${{ secrets.OPENSHIFT_SERVER }}
OPENSHIFT_TOKEN: ${{ secrets.OPENSHIFT_TOKEN }}
run: |
# Install OpenShift CLI
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz
sudo mv oc /usr/local/bin/
# Login to OpenShift
oc login --token=$OPENSHIFT_TOKEN --server=$OPENSHIFT_SERVER
# Apply deployment configuration
oc -n ${{ env.NAMESPACE }} apply --overwrite -f deployment/
# Restart deployment to pick up new image
oc -n ${{ env.NAMESPACE }} rollout restart deployment/servala
# Wait for deployment to complete
oc -n ${{ env.NAMESPACE }} rollout status deployment/servala --timeout=300s

`pr-pricing-validation.yml`

@@ -1,296 +0,0 @@
name: PR Pricing Validation
on:
pull_request:
types: [opened, synchronize, reopened]
paths:
- "hub/services/models/pricing.py"
- "hub/services/tests/test_pricing*.py"
- "hub/services/views/**"
- "hub/services/forms.py"
- "hub/services/admin/**"
jobs:
pricing-validation:
name: Validate Pricing Changes
runs-on: ubuntu-latest
steps:
- name: Checkout PR branch
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.13"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Check for pricing model migrations
run: |
echo "::group::Checking for required database migrations"
# Check if pricing models were changed
if git diff --name-only origin/main...HEAD | grep -q "hub/services/models/pricing.py"; then
echo "📝 Pricing models were modified, checking for migrations..."
# Check if there are new migration files
if git diff --name-only origin/main...HEAD | grep -q "hub/services/migrations/"; then
echo "✅ Found migration files in the PR"
git diff --name-only origin/main...HEAD | grep "hub/services/migrations/" | head -5
else
echo "⚠️ Pricing models were changed but no migrations found"
echo "Please run: uv run --extra dev manage.py makemigrations"
echo "This will be treated as a warning, not a failure"
fi
else
echo " No pricing model changes detected"
fi
echo "::endgroup::"
- name: Run pricing tests with coverage
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running pricing tests with coverage tracking"
# Run tests with coverage
uv run coverage run --source='hub/services/models/pricing,hub/services/views' \
manage.py test \
hub.services.tests.test_pricing \
hub.services.tests.test_pricing_edge_cases \
hub.services.tests.test_pricing_integration \
--verbosity=2
# Generate coverage report
uv run coverage report --show-missing --fail-under=85
# Generate HTML coverage report
uv run coverage html
echo "::endgroup::"
- name: Upload coverage report
uses: actions/upload-artifact@v4
with:
name: pr-pricing-coverage
path: htmlcov/
retention-days: 7
- name: Detect pricing calculation changes
run: |
echo "::group::Analyzing pricing calculation changes"
# Check if critical pricing methods were modified
CRITICAL_METHODS=(
"calculate_discount"
"calculate_final_price"
"get_price"
"get_unit_rate"
"get_base_fee"
)
echo "🔍 Checking for changes to critical pricing methods..."
changed_methods=()
for method in "${CRITICAL_METHODS[@]}"; do
if git diff origin/main...HEAD -- hub/services/models/pricing.py | grep -q "def $method"; then
changed_methods+=("$method")
echo "⚠️ Critical method '$method' was modified"
fi
done
if [ ${#changed_methods[@]} -gt 0 ]; then
echo ""
echo "🚨 CRITICAL PRICING METHODS CHANGED:"
printf ' - %s\n' "${changed_methods[@]}"
echo ""
echo "📋 Extra validation required:"
echo " 1. All pricing tests must pass"
echo " 2. Manual testing of price calculations recommended"
echo " 3. Consider adding regression tests for specific scenarios"
echo ""
echo "This will not fail the build but requires careful review."
else
echo "✅ No critical pricing methods were modified"
fi
echo "::endgroup::"
- name: Validate test additions
run: |
echo "::group::Validating test additions for pricing changes"
# Check if new pricing features have corresponding tests
python3 << 'EOF'
import subprocess
import re
def get_git_diff():
result = subprocess.run(
['git', 'diff', 'origin/main...HEAD', '--', 'hub/services/models/pricing.py'],
capture_output=True, text=True
)
return result.stdout
def get_test_diff():
result = subprocess.run(
['git', 'diff', 'origin/main...HEAD', '--', 'hub/services/tests/test_pricing*.py'],
capture_output=True, text=True
)
return result.stdout
pricing_diff = get_git_diff()
test_diff = get_test_diff()
# Look for new methods in pricing models
new_methods = re.findall(r'^\+\s*def\s+(\w+)', pricing_diff, re.MULTILINE)
new_classes = re.findall(r'^\+class\s+(\w+)', pricing_diff, re.MULTILINE)
# Look for new test methods
new_test_methods = re.findall(r'^\+\s*def\s+(test_\w+)', test_diff, re.MULTILINE)
print("📊 Analysis of pricing changes:")
if new_classes:
print(f" New classes: {', '.join(new_classes)}")
if new_methods:
print(f" New methods: {', '.join(new_methods)}")
if new_test_methods:
print(f" New test methods: {', '.join(new_test_methods)}")
if (new_classes or new_methods) and not new_test_methods:
print("⚠️ New pricing functionality detected but no new tests found")
print(" Consider adding tests for new features")
elif new_test_methods:
print("✅ New tests found alongside pricing changes")
else:
print(" No new pricing functionality detected")
EOF
echo "::endgroup::"
- name: Run backward compatibility check
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Checking backward compatibility of pricing changes"
# Create a simple backward compatibility test
cat << 'EOF' > check_compatibility.py
import os
import django
from decimal import Decimal
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from hub.services.models.base import Currency, Term
from hub.services.models.providers import CloudProvider
from hub.services.models.services import Service
from hub.services.models.pricing import VSHNAppCatPrice, VSHNAppCatBaseFee, VSHNAppCatUnitRate
print("🔄 Testing backward compatibility of pricing API...")
try:
# Test basic model creation (should work with existing API)
provider = CloudProvider.objects.create(
name="BC Test", slug="bc-test", description="Test", website="https://test.com"
)
service = Service.objects.create(
name="BC Service", slug="bc-service", description="Test", features="Test"
)
price_config = VSHNAppCatPrice.objects.create(
service=service,
variable_unit=VSHNAppCatPrice.VariableUnit.RAM,
term=Term.MTH
)
VSHNAppCatBaseFee.objects.create(
vshn_appcat_price_config=price_config,
currency=Currency.CHF,
amount=Decimal('50.00')
)
VSHNAppCatUnitRate.objects.create(
vshn_appcat_price_config=price_config,
currency=Currency.CHF,
service_level=VSHNAppCatPrice.ServiceLevel.GUARANTEED,
amount=Decimal('5.0000')
)
# Test basic price calculation
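    # With the data above and no discount model attached, the total should be roughly
    # 50.00 base + 4 * 5.00 = 70.00 CHF; the check below only asserts that a
    # 'total_price' key is returned, not the exact amount.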
result = price_config.calculate_final_price(Currency.CHF, 'GA', 4)
if result and 'total_price' in result:
print(f"✅ Basic price calculation works: {result['total_price']} CHF")
else:
print("❌ Price calculation API may have changed")
exit(1)
# Test price retrieval methods
base_fee = price_config.get_base_fee(Currency.CHF)
unit_rate = price_config.get_unit_rate(Currency.CHF, 'GA')
if base_fee and unit_rate:
print("✅ Price retrieval methods work correctly")
else:
print("❌ Price retrieval API may have changed")
exit(1)
print("🎉 Backward compatibility check passed!")
except Exception as e:
print(f"❌ Backward compatibility issue detected: {e}")
exit(1)
EOF
uv run python check_compatibility.py
echo "::endgroup::"
- name: Generate pricing test summary
if: always()
run: |
echo "::group::Pricing Test Summary"
echo "## 🧮 Pricing Test Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Count test files and methods
total_test_files=$(find hub/services/tests -name "test_pricing*.py" | wc -l)
total_test_methods=$(grep -r "def test_" hub/services/tests/test_pricing*.py | wc -l)
echo "- **Test Files**: $total_test_files pricing-specific test files" >> $GITHUB_STEP_SUMMARY
echo "- **Test Methods**: $total_test_methods individual test methods" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Check if any pricing files were changed
if git diff --name-only origin/main...HEAD | grep -q "pricing"; then
echo "### 📝 Pricing-Related Changes Detected" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "The following pricing-related files were modified:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
git diff --name-only origin/main...HEAD | grep "pricing" | sed 's/^/- /' >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "✅ All pricing tests have been executed to validate these changes." >> $GITHUB_STEP_SUMMARY
else
echo "### No Pricing Changes" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "No pricing-related files were modified in this PR." >> $GITHUB_STEP_SUMMARY
fi
echo "" >> $GITHUB_STEP_SUMMARY
echo "---" >> $GITHUB_STEP_SUMMARY
echo "*Pricing validation completed at $(date)*" >> $GITHUB_STEP_SUMMARY
echo "::endgroup::"

`pricing-tests.yml`

@@ -1,366 +0,0 @@
name: Pricing Tests
on:
push:
paths:
- "hub/services/models/pricing.py"
- "hub/services/tests/test_pricing*.py"
- "hub/services/forms.py"
- "hub/services/views/**"
- "hub/services/templates/**"
pull_request:
paths:
- "hub/services/models/pricing.py"
- "hub/services/tests/test_pricing*.py"
- "hub/services/forms.py"
- "hub/services/views/**"
- "hub/services/templates/**"
jobs:
pricing-tests:
name: Pricing Model Tests
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.12", "3.13"]
django-version: ["5.0", "5.1"]
fail-fast: false
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Set up test database
run: |
echo "Using SQLite for pricing tests"
export DJANGO_SETTINGS_MODULE=hub.settings
- name: Run pricing model structure tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing pricing model structure and basic functionality"
uv run --extra dev manage.py test hub.services.tests.test_pricing.ComputePlanTestCase --verbosity=2
uv run --extra dev manage.py test hub.services.tests.test_pricing.StoragePlanTestCase --verbosity=2
echo "::endgroup::"
- name: Run discount calculation tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing progressive discount calculations"
uv run --extra dev manage.py test hub.services.tests.test_pricing.ProgressiveDiscountModelTestCase --verbosity=2
echo "::endgroup::"
- name: Run AppCat pricing tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing AppCat service pricing and addons"
uv run --extra dev manage.py test hub.services.tests.test_pricing.VSHNAppCatPriceTestCase --verbosity=2
uv run --extra dev manage.py test hub.services.tests.test_pricing.VSHNAppCatAddonTestCase --verbosity=2
echo "::endgroup::"
- name: Run pricing edge case tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing pricing edge cases and error conditions"
uv run --extra dev manage.py test hub.services.tests.test_pricing_edge_cases --verbosity=2
echo "::endgroup::"
- name: Run pricing integration tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing pricing integration scenarios"
uv run --extra dev manage.py test hub.services.tests.test_pricing_integration --verbosity=2
echo "::endgroup::"
- name: Generate pricing test coverage report
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Generating test coverage report for pricing models"
uv run coverage run --source='hub/services/models/pricing' manage.py test hub.services.tests.test_pricing hub.services.tests.test_pricing_edge_cases hub.services.tests.test_pricing_integration
uv run coverage report --show-missing
uv run coverage html
echo "::endgroup::"
- name: Upload coverage reports
uses: actions/upload-artifact@v4
if: always()
with:
name: pricing-coverage-${{ matrix.python-version }}-django${{ matrix.django-version }}
path: htmlcov/
retention-days: 7
- name: Validate pricing calculations with sample data
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Validating pricing calculations with sample scenarios"
cat << 'EOF' > validate_pricing.py
import os
import django
from decimal import Decimal
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from hub.services.models.base import Currency, Term
from hub.services.models.providers import CloudProvider
from hub.services.models.services import Service
from hub.services.models.pricing import (
VSHNAppCatPrice, VSHNAppCatBaseFee, VSHNAppCatUnitRate,
ProgressiveDiscountModel, DiscountTier
)
print("🧪 Creating test pricing scenario...")
# Create test data
provider = CloudProvider.objects.create(
name="Test Provider", slug="test", description="Test", website="https://test.com"
)
service = Service.objects.create(
name="Test Service", slug="test", description="Test", features="Test"
)
# Create discount model
discount = ProgressiveDiscountModel.objects.create(name="Test", active=True)
DiscountTier.objects.create(
discount_model=discount, min_units=0, max_units=10, discount_percent=Decimal('0')
)
DiscountTier.objects.create(
discount_model=discount, min_units=10, max_units=None, discount_percent=Decimal('10')
)
# Create pricing
price_config = VSHNAppCatPrice.objects.create(
service=service, variable_unit='RAM', term='MTH', discount_model=discount
)
VSHNAppCatBaseFee.objects.create(
vshn_appcat_price_config=price_config, currency='CHF', amount=Decimal('50.00')
)
VSHNAppCatUnitRate.objects.create(
vshn_appcat_price_config=price_config, currency='CHF',
service_level='GA', amount=Decimal('5.0000')
)
# Test calculations
result_small = price_config.calculate_final_price('CHF', 'GA', 5)
result_large = price_config.calculate_final_price('CHF', 'GA', 15)
print(f"✅ Small config (5 units): {result_small['total_price']} CHF")
print(f"✅ Large config (15 units): {result_large['total_price']} CHF")
# Validate expected results
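# Expected totals under the tiers above (0-10 units: 0%, 10+ units: 10% discount):
#    5 units: 50.00 base + 5 * 5.00                    =  75.00 CHF
#   15 units: 50.00 base + 10 * 5.00 + 5 * 5.00 * 0.90 = 122.50 CHF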
assert result_small['total_price'] == Decimal('75.00'), f"Expected 75.00, got {result_small['total_price']}"
assert result_large['total_price'] == Decimal('122.50'), f"Expected 122.50, got {result_large['total_price']}"
print("🎉 All pricing validations passed!")
EOF
uv run python validate_pricing.py
echo "::endgroup::"
- name: Performance test for large calculations
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Testing pricing performance with large datasets"
cat << 'EOF' > performance_test.py
import os
import django
import time
from decimal import Decimal
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from hub.services.models.base import Currency, Term
from hub.services.models.providers import CloudProvider
from hub.services.models.services import Service
from hub.services.models.pricing import (
VSHNAppCatPrice, VSHNAppCatBaseFee, VSHNAppCatUnitRate,
ProgressiveDiscountModel, DiscountTier
)
print("⚡ Testing pricing calculation performance...")
# Create test data
provider = CloudProvider.objects.create(
name="Perf Test", slug="perf", description="Test", website="https://test.com"
)
service = Service.objects.create(
name="Perf Service", slug="perf", description="Test", features="Test"
)
# Create complex discount model
discount = ProgressiveDiscountModel.objects.create(name="Complex", active=True)
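# Ten tiers (0-100, 100-200, ..., 900 and up) with discounts 0%, 5%, ..., 45% (i/20, capped at 50%).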
for i in range(0, 1000, 100):
DiscountTier.objects.create(
discount_model=discount,
min_units=i,
max_units=i+100 if i < 900 else None,
discount_percent=Decimal(str(min(50, i/20)))
)
price_config = VSHNAppCatPrice.objects.create(
service=service, variable_unit='RAM', term='MTH', discount_model=discount
)
VSHNAppCatBaseFee.objects.create(
vshn_appcat_price_config=price_config, currency='CHF', amount=Decimal('100.00')
)
VSHNAppCatUnitRate.objects.create(
vshn_appcat_price_config=price_config, currency='CHF',
service_level='GA', amount=Decimal('1.0000')
)
# Performance test
start_time = time.time()
result = price_config.calculate_final_price('CHF', 'GA', 5000) # Large calculation
end_time = time.time()
duration = end_time - start_time
print(f"✅ Large calculation (5000 units) completed in {duration:.3f} seconds")
print(f"✅ Result: {result['total_price']} CHF")
# The calculation should complete well within the 5-second ceiling asserted below
assert duration < 5.0, f"Calculation took too long: {duration} seconds"
print("🚀 Performance test passed!")
EOF
uv run python performance_test.py
echo "::endgroup::"
pricing-documentation:
name: Pricing Documentation Check
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Check pricing test documentation
run: |
echo "::group::Verifying pricing test documentation"
# Check if README exists and is up to date
if [ ! -f "hub/services/tests/README.md" ]; then
echo "❌ Missing hub/services/tests/README.md"
exit 1
fi
# Check if test files have proper docstrings
python3 << 'EOF'
import ast
import sys
def check_docstrings(filename):
with open(filename, 'r') as f:
tree = ast.parse(f.read())
classes_without_docs = []
methods_without_docs = []
for node in ast.walk(tree):
if isinstance(node, ast.ClassDef):
if not ast.get_docstring(node):
classes_without_docs.append(node.name)
elif isinstance(node, ast.FunctionDef) and node.name.startswith('test_'):
if not ast.get_docstring(node):
methods_without_docs.append(node.name)
return classes_without_docs, methods_without_docs
test_files = [
'hub/services/tests/test_pricing.py',
'hub/services/tests/test_pricing_edge_cases.py',
'hub/services/tests/test_pricing_integration.py'
]
all_good = True
for filename in test_files:
try:
classes, methods = check_docstrings(filename)
if classes or methods:
print(f"⚠️ {filename} has missing docstrings:")
for cls in classes:
print(f" - Class: {cls}")
for method in methods:
print(f" - Method: {method}")
all_good = False
else:
print(f"✅ {filename} - All classes and methods documented")
except FileNotFoundError:
print(f"❌ {filename} not found")
all_good = False
if not all_good:
print("\n📝 Please add docstrings to undocumented classes and test methods")
sys.exit(1)
else:
print("\n🎉 All pricing test files are properly documented!")
EOF
echo "::endgroup::"
- name: Check test coverage completeness
run: |
echo "::group::Checking test coverage completeness"
python3 << 'EOF'
import ast
import sys
# Read the pricing models file
with open('hub/services/models/pricing.py', 'r') as f:
tree = ast.parse(f.read())
# Extract all model classes and their methods
model_classes = []
for node in ast.walk(tree):
if isinstance(node, ast.ClassDef):
methods = []
for item in node.body:
if isinstance(item, ast.FunctionDef) and not item.name.startswith('_'):
methods.append(item.name)
if methods:
model_classes.append((node.name, methods))
print("📊 Pricing model classes and public methods:")
for class_name, methods in model_classes:
print(f" {class_name}: {', '.join(methods)}")
# Check if all important methods have corresponding tests
important_methods = ['get_price', 'calculate_discount', 'calculate_final_price']
missing_tests = []
# This is a simplified check - in practice you'd want more sophisticated analysis
for class_name, methods in model_classes:
for method in methods:
if method in important_methods:
print(f"✅ Found important method: {class_name}.{method}")
print("\n📈 Test coverage check completed")
EOF
echo "::endgroup::"

`scheduled-pricing-tests.yml`

@@ -1,492 +0,0 @@
name: Scheduled Pricing Tests
on:
schedule:
# Run daily at 6 AM UTC
- cron: "0 6 * * *"
workflow_dispatch:
inputs:
test_scope:
description: "Test scope"
required: true
default: "all"
type: choice
options:
- all
- pricing-only
- integration-only
jobs:
scheduled-pricing-tests:
name: Scheduled Pricing Validation
runs-on: ubuntu-latest
strategy:
matrix:
database: ["sqlite", "postgresql"]
fail-fast: false
services:
postgres:
image: postgres:15
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: servala_test
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.13"
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
cache-dependency-glob: "uv.lock"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Set database configuration
run: |
if [ "${{ matrix.database }}" == "postgresql" ]; then
echo "DATABASE_URL=postgresql://postgres:postgres@localhost:5432/servala_test" >> $GITHUB_ENV
else
echo "DATABASE_URL=sqlite:///tmp/test.db" >> $GITHUB_ENV
fi
- name: Run comprehensive pricing tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running comprehensive pricing test suite on ${{ matrix.database }}"
# Set test scope based on input or default to all
TEST_SCOPE="${{ github.event.inputs.test_scope || 'all' }}"
case $TEST_SCOPE in
"pricing-only")
echo "🎯 Running pricing-specific tests only"
uv run --extra dev manage.py test \
hub.services.tests.test_pricing \
--verbosity=2 \
--keepdb
;;
"integration-only")
echo "🔗 Running integration tests only"
uv run --extra dev manage.py test \
hub.services.tests.test_pricing_integration \
--verbosity=2 \
--keepdb
;;
*)
echo "🧪 Running all pricing tests"
uv run --extra dev manage.py test \
hub.services.tests.test_pricing \
hub.services.tests.test_pricing_edge_cases \
hub.services.tests.test_pricing_integration \
--verbosity=2 \
--keepdb
;;
esac
echo "::endgroup::"
- name: Run pricing stress tests
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Running pricing stress tests"
cat << 'EOF' > stress_test_pricing.py
import os
import django
import time
import concurrent.futures
from decimal import Decimal
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from hub.services.models.base import Currency, Term
from hub.services.models.providers import CloudProvider
from hub.services.models.services import Service
from hub.services.models.pricing import (
VSHNAppCatPrice, VSHNAppCatBaseFee, VSHNAppCatUnitRate,
ProgressiveDiscountModel, DiscountTier
)
def setup_test_data():
"""Set up test data for stress testing"""
provider = CloudProvider.objects.create(
name="Stress Test Provider",
slug="stress-test",
description="Test",
website="https://test.com"
)
service = Service.objects.create(
name="Stress Test Service",
slug="stress-test",
description="Test",
features="Test"
)
# Create complex discount model
discount = ProgressiveDiscountModel.objects.create(
name="Stress Test Discount",
active=True
)
# Create multiple discount tiers
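# Ten tiers (0-100, 100-200, ..., 900 and up) with discounts 0%, 2.5%, ..., 22.5% (i/40, capped at 25%).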
for i in range(0, 1000, 100):
DiscountTier.objects.create(
discount_model=discount,
min_units=i,
max_units=i+100 if i < 900 else None,
discount_percent=Decimal(str(min(25, i/40)))
)
price_config = VSHNAppCatPrice.objects.create(
service=service,
variable_unit='RAM',
term='MTH',
discount_model=discount
)
VSHNAppCatBaseFee.objects.create(
vshn_appcat_price_config=price_config,
currency='CHF',
amount=Decimal('100.00')
)
VSHNAppCatUnitRate.objects.create(
vshn_appcat_price_config=price_config,
currency='CHF',
service_level='GA',
amount=Decimal('2.0000')
)
return price_config
def calculate_price_concurrent(price_config, units):
"""Calculate price in a concurrent context"""
try:
result = price_config.calculate_final_price('CHF', 'GA', units)
return result['total_price'] if result else None
except Exception as e:
return f"Error: {e}"
def main():
print("🚀 Starting pricing stress test...")
# Setup
price_config = setup_test_data()
# Test scenarios with increasing complexity
test_scenarios = [100, 500, 1000, 2000, 5000]
print("\n📊 Sequential performance test:")
for units in test_scenarios:
start_time = time.time()
result = price_config.calculate_final_price('CHF', 'GA', units)
end_time = time.time()
duration = end_time - start_time
print(f" {units:4d} units: {duration:.3f}s -> {result['total_price']} CHF")
if duration > 2.0:
print(f"⚠️ Performance warning: {units} units took {duration:.3f}s")
print("\n🔄 Concurrent performance test:")
start_time = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = []
for _ in range(50): # 50 concurrent calculations
future = executor.submit(calculate_price_concurrent, price_config, 1000)
futures.append(future)
results = []
for future in concurrent.futures.as_completed(futures):
result = future.result()
results.append(result)
end_time = time.time()
duration = end_time - start_time
successful_results = [r for r in results if isinstance(r, Decimal)]
failed_results = [r for r in results if not isinstance(r, Decimal)]
print(f" 50 concurrent calculations: {duration:.3f}s")
print(f" Successful: {len(successful_results)}")
print(f" Failed: {len(failed_results)}")
if failed_results:
print(f" Failures: {failed_results[:3]}...") # Show first 3 failures
# Validate results
if len(successful_results) < 45: # Allow up to 10% failures
raise Exception(f"Too many concurrent calculation failures: {len(failed_results)}")
if duration > 10.0: # Should complete within 10 seconds
raise Exception(f"Concurrent calculations too slow: {duration}s")
print("\n✅ Stress test completed successfully!")
if __name__ == "__main__":
main()
EOF
uv run python stress_test_pricing.py
echo "::endgroup::"
- name: Validate pricing data integrity
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Validating pricing data integrity"
cat << 'EOF' > integrity_check.py
import os
import django
from decimal import Decimal, InvalidOperation
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from django.db import connection, models
from hub.services.models.pricing import *
from hub.services.models.services import Service
def check_pricing_constraints():
"""Check database constraints and data integrity"""
issues = []
print("🔍 Checking pricing data integrity...")
# Check for negative prices
negative_compute_prices = ComputePlanPrice.objects.filter(amount__lt=0)
if negative_compute_prices.exists():
issues.append(f"Found {negative_compute_prices.count()} negative compute plan prices")
negative_storage_prices = StoragePlanPrice.objects.filter(amount__lt=0)
if negative_storage_prices.exists():
issues.append(f"Found {negative_storage_prices.count()} negative storage prices")
# Check for invalid discount percentages
invalid_discounts = DiscountTier.objects.filter(
models.Q(discount_percent__lt=0) | models.Q(discount_percent__gt=100)
)
if invalid_discounts.exists():
issues.append(f"Found {invalid_discounts.count()} invalid discount percentages")
# Check for overlapping discount tiers (potential logic issues)
discount_models = ProgressiveDiscountModel.objects.filter(active=True)
for model in discount_models:
tiers = model.tiers.all().order_by('min_units')
for i in range(len(tiers) - 1):
current = tiers[i]
next_tier = tiers[i + 1]
if current.max_units and current.max_units > next_tier.min_units:
issues.append(f"Overlapping tiers in {model.name}: {current.min_units}-{current.max_units} overlaps with {next_tier.min_units}")
# Check for services without pricing
services_without_pricing = Service.objects.filter(vshn_appcat_price__isnull=True)
if services_without_pricing.exists():
print(f" Found {services_without_pricing.count()} services without AppCat pricing (this may be normal)")
# Check for price configurations without rates
price_configs_without_base_fee = VSHNAppCatPrice.objects.filter(base_fees__isnull=True)
if price_configs_without_base_fee.exists():
issues.append(f"Found {price_configs_without_base_fee.count()} price configs without base fees")
return issues
def main():
issues = check_pricing_constraints()
if issues:
print("\n❌ Data integrity issues found:")
for issue in issues:
print(f" - {issue}")
print(f"\nTotal issues: {len(issues)}")
# Don't fail the build for minor issues, but warn
if len(issues) > 5:
print("⚠️ Many integrity issues found - consider investigating")
exit(1)
else:
print("\n✅ All pricing data integrity checks passed!")
if __name__ == "__main__":
main()
EOF
uv run python integrity_check.py
echo "::endgroup::"
- name: Generate daily pricing report
env:
DJANGO_SETTINGS_MODULE: hub.settings
run: |
echo "::group::Generating daily pricing report"
cat << 'EOF' > daily_report.py
import os
import django
from decimal import Decimal
from datetime import datetime
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hub.settings')
django.setup()
from hub.services.models.pricing import *
from hub.services.models.services import Service
from hub.services.models.providers import CloudProvider
def generate_report():
print("📊 Daily Pricing System Report")
print("=" * 50)
print(f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}")
print(f"Database: ${{ matrix.database }}")
print()
# Count models
print("📈 Model Counts:")
print(f" Cloud Providers: {CloudProvider.objects.count()}")
print(f" Services: {Service.objects.count()}")
print(f" Compute Plans: {ComputePlan.objects.count()}")
print(f" Storage Plans: {StoragePlan.objects.count()}")
print(f" AppCat Price Configs: {VSHNAppCatPrice.objects.count()}")
print(f" Discount Models: {ProgressiveDiscountModel.objects.count()}")
print(f" Active Discount Models: {ProgressiveDiscountModel.objects.filter(active=True).count()}")
print()
# Price ranges
print("💰 Price Ranges:")
compute_prices = ComputePlanPrice.objects.all()
if compute_prices.exists():
min_compute = compute_prices.order_by('amount').first().amount
max_compute = compute_prices.order_by('-amount').first().amount
print(f" Compute Plans: {min_compute} - {max_compute} CHF")
base_fees = VSHNAppCatBaseFee.objects.all()
if base_fees.exists():
min_base = base_fees.order_by('amount').first().amount
max_base = base_fees.order_by('-amount').first().amount
print(f" AppCat Base Fees: {min_base} - {max_base} CHF")
unit_rates = VSHNAppCatUnitRate.objects.all()
if unit_rates.exists():
min_unit = unit_rates.order_by('amount').first().amount
max_unit = unit_rates.order_by('-amount').first().amount
print(f" AppCat Unit Rates: {min_unit} - {max_unit} CHF")
print()
# Currency distribution
print("💱 Currency Distribution:")
currencies = ['CHF', 'EUR', 'USD']
for currency in currencies:
compute_count = ComputePlanPrice.objects.filter(currency=currency).count()
appcat_count = VSHNAppCatBaseFee.objects.filter(currency=currency).count()
print(f" {currency}: {compute_count} compute prices, {appcat_count} AppCat base fees")
print()
# Discount model analysis
print("🎯 Discount Model Analysis:")
active_discounts = ProgressiveDiscountModel.objects.filter(active=True)
for discount in active_discounts[:5]: # Show first 5
tier_count = discount.tiers.count()
max_discount = discount.tiers.order_by('-discount_percent').first()
max_percent = max_discount.discount_percent if max_discount else 0
print(f" {discount.name}: {tier_count} tiers, max {max_percent}% discount")
if active_discounts.count() > 5:
print(f" ... and {active_discounts.count() - 5} more")
print()
print("✅ Report generation completed")
if __name__ == "__main__":
generate_report()
EOF
uv run python daily_report.py
echo "::endgroup::"
- name: Save test results
if: always()
uses: actions/upload-artifact@v4
with:
name: scheduled-test-results-${{ matrix.database }}
path: |
htmlcov/
test-results.xml
retention-days: 30
notify-on-failure:
name: Notify on Test Failure
runs-on: ubuntu-latest
needs: [scheduled-pricing-tests]
if: failure()
steps:
- name: Create failure issue
uses: actions/github-script@v7
with:
script: |
const title = `🚨 Scheduled Pricing Tests Failed - ${new Date().toISOString().split('T')[0]}`;
const body = `
## Scheduled Pricing Test Failure
The scheduled pricing tests failed on ${new Date().toISOString()}.
**Run Details:**
- **Workflow**: ${context.workflow}
- **Run ID**: ${context.runId}
- **Commit**: ${context.sha}
**Next Steps:**
1. Check the workflow logs for detailed error information
2. Verify if this is a transient issue by re-running the workflow
3. If the issue persists, investigate potential regressions
**Links:**
- [Failed Workflow Run](https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId})
/cc @tobru
`;
// Check if similar issue already exists
const existingIssues = await github.rest.issues.listForRepo({
owner: context.repo.owner,
repo: context.repo.repo,
labels: 'pricing-tests,automated',
state: 'open'
});
if (existingIssues.data.length === 0) {
await github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: title,
body: body,
labels: ['bug', 'pricing-tests', 'automated', 'priority-high']
});
} else {
console.log('Similar issue already exists, skipping creation');
}

@@ -0,0 +1,22 @@
name: Django Tests
on:
push:
branches: ["*"]
pull_request:
jobs:
test:
runs-on: ubuntu-latest
container:
image: ${{ vars.CONTAINER_REGISTRY }}/${{ vars.CONTAINER_IMAGE_NAME }}:latest
options: --entrypoint ""
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Django tests
run: |
python -m hub migrate --noinput
python -m hub test --verbosity=2