AgMission Test Runner Guide
Overview
The AgMission test suite consists of standalone integration test scripts organized into feature-based categories. These are not traditional unit tests: they are full integration tests that connect to real services (MongoDB, Stripe, RabbitMQ, partner APIs).
Test Organization
Tests are organized in feature-based directories:
tests/
├── dlq/ - Dead Letter Queue tests (3 files)
├── integration/ - Cross-feature integration (2 files)
├── job/ - Job processing tests (9 files)
├── parsing/ - Log parsing tests (7 files)
├── payment/ - Payment & billing (4 files)
├── promo/ - Promotion & coupon (13 files)
├── satloc/ - Partner integration (13 files)
├── utils/ - Utility tests (9 files)
└── run_all_tests.js - Test runner script
Running Tests
Run All Tests
npm run test:all
Run Tests by Category
npm run test:promo # Promotion/coupon tests
npm run test:satloc # SatLoc partner tests
npm run test:job # Job processing tests
npm run test:payment # Payment & billing tests
npm run test:dlq # DLQ management tests
npm run test:parsing # Log parsing tests
npm run test:integration # Integration tests
npm run test:utils # Utility tests
Run Single Test File
npm run test:single tests/promo/test_promo_details.js
Or directly with node:
node tests/promo/test_promo_details.js
Run Tests with Options
# Verbose output (show all test output)
npm run test:verbose
# Stop on first failure
npm run test:bail
# Run specific pattern
npm run test:file 'promo/test_promo_*.js'
# Run with custom pattern
node tests/run_all_tests.js --pattern 'job/test_*.js'
# Run single category with verbose output
node tests/run_all_tests.js --pattern 'dlq/test_*.js' --verbose
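The --pattern option selects test files with a glob-style pattern. As a rough illustration of how such matching can work (the actual runner may use a glob library; this minimal version is a sketch, not the runner's real code):

```javascript
// Sketch: convert a glob-style pattern like 'dlq/test_*.js' into a RegExp.
// '*' matches within a single path segment; regex metacharacters are escaped.
function globToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '[^/]*');             // '*' => anything except '/'
  return new RegExp('^' + escaped + '$');
}

const re = globToRegExp('dlq/test_*.js');
console.log(re.test('dlq/test_dlq_routes.js')); // true
console.log(re.test('promo/test_promo.js'));    // false
```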
Test Runner Features
The custom test runner (tests/run_all_tests.js) provides:
✅ Separate Process Execution
- Each test runs in its own Node.js process
- Handles process.exit() calls gracefully
- Isolated test environments prevent interference
📊 Pass/Fail Reporting
- Reports ✅ PASSED or ❌ FAILED for each test
- Exit code 0 = test passed
- Exit code non-zero = test failed
- Summary shows passed/failed counts
⏱️ Duration Tracking
- Individual test duration in milliseconds
- Total test suite duration in seconds
🛑 Stop on Failure
- Use the --bail flag to stop after first failure
- Useful for quick failure detection
📢 Verbose Mode
- Use the --verbose flag to see all test output
- Default: only shows output for failed tests
Test Output Format
═══════════════════════════════════════════════════════
🧪 AgMission Test Runner
═══════════════════════════════════════════════════════
📁 Environment: ./environment.env
🔍 Pattern: dlq/test_*.js
📢 Verbose: false
🛑 Stop on failure: false
═══════════════════════════════════════════════════════
📋 Found 3 test files:
1. tests/dlq/test_dlq_messages_direct.js
2. tests/dlq/test_dlq_mgmt_api.js
3. tests/dlq/test_dlq_routes.js
────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_messages_direct.js
────────────────────────────────────────────────────────────
✅ PASSED: test_dlq_messages_direct.js (912ms)
────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_mgmt_api.js
────────────────────────────────────────────────────────────
✅ PASSED: test_dlq_mgmt_api.js (530ms)
────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_routes.js
────────────────────────────────────────────────────────────
❌ FAILED: test_dlq_routes.js (624ms)
Exit code: 1
Output preview:
<last 5 lines of error output>
═══════════════════════════════════════════════════════
📊 TEST SUMMARY
═══════════════════════════════════════════════════════
✅ Passed: 2/3
❌ Failed: 1/3
⏱️ Total Duration: 2.07s
❌ FAILED TESTS:
1. test_dlq_routes.js - Exit code: 1
═══════════════════════════════════════════════════════
Understanding Test Results
Exit Codes
- 0: Test completed successfully (PASSED)
- 1: Test failed or encountered errors (FAILED)
- Other: Process crashed or was terminated
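These three outcomes can be expressed as a small classification helper. This is a sketch, not the runner's actual code; note that in Node a child terminated by a signal reports a null exit code plus a signal name:

```javascript
// Sketch: map a child process's exit status to a test result label.
function classifyExit(status, signal) {
  if (signal) return `CRASHED (${signal})`;     // terminated by a signal
  if (status === 0) return 'PASSED';
  return `FAILED (exit code ${status})`;
}

console.log(classifyExit(0, null));          // PASSED
console.log(classifyExit(1, null));          // FAILED (exit code 1)
console.log(classifyExit(null, 'SIGKILL'));  // CRASHED (SIGKILL)
```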
Test Failures
When a test fails:
- Check the exit code (usually 1 for logical failures)
- Review the output preview (last 5 lines shown by default)
- Re-run with --verbose to see full output
- Check the test file directly for assertions and logic
Common Failure Reasons
- ❌ Service connectivity (MongoDB, Redis, RabbitMQ not running)
- ❌ API authentication (Stripe key invalid, partner credentials wrong)
- ❌ Environment variables missing or incorrect
- ❌ Test data conflicts (duplicate records, outdated IDs)
- ❌ Network timeouts or rate limits
Environment Configuration
Tests use environment.env by default:
# Use custom environment file
node tests/run_all_tests.js --env ./environment_prod.env
# Test runner auto-loads environment variables
Important: Tests are integration tests that connect to real services. Ensure:
- MongoDB is running and accessible
- Redis is running (if tests use caching)
- RabbitMQ is running (for queue tests)
- Stripe API keys are valid (for payment tests)
- Partner API credentials are configured (for partner tests)
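Because missing configuration is a common failure cause, a preflight check that fails fast with a clear message can save a slow, confusing test run. A minimal sketch (the variable names below are examples, not this project's actual keys):

```javascript
// Sketch: fail fast when required environment variables are missing,
// before any test touches a real service. Names here are hypothetical.
const required = ['MONGO_URL', 'STRIPE_SECRET_KEY', 'RABBITMQ_URL'];

function missingEnvVars(names, env = process.env) {
  return names.filter((name) => !env[name] || env[name].trim() === '');
}

// Demo against a stand-in env object with only MONGO_URL set.
const missing = missingEnvVars(required, { MONGO_URL: 'mongodb://localhost:27017' });
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
  // A real preflight would process.exit(1) here.
}
```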
Test Types
Integration Tests
Files in tests/ are integration tests that:
- Connect to real databases (MongoDB)
- Call external APIs (Stripe, partner APIs)
- Use message queues (RabbitMQ)
- Test end-to-end workflows
Note: These are NOT mocked unit tests. They require services to be running.
Mocha Tests (Optional)
For traditional Mocha/Chai unit tests, create files with .spec.js extension:
tests/promo/promo_validation.spec.js
Run Mocha tests:
npm test # Run all *.spec.js files
npm run test:mocha # Same as above
Best Practices
Writing New Tests
1. Use unique identifiers: Add timestamps to avoid conflicts
   const testId = Date.now();
   const promoName = `TestPromo_${testId}`;
2. Track created resources: Clean up only what you create
   const createdIds = [];
   // ... create resources, track IDs
   // Cleanup at end
   for (const id of createdIds) {
     await deleteResource(id);
   }
3. Handle rate limits: Add delays between API calls
   await sleep(100); // 100ms delay between calls
4. Return proper exit codes:
   if (allTestsPassed) {
     process.exit(0);
   } else {
     console.error('Tests failed');
     process.exit(1);
   }
5. Load environment properly (at top of file):
   const path = require('path');
   const envPath = path.resolve(process.cwd(), './environment.env');
   require('dotenv').config({ path: envPath });
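Putting the practices above together, here is a minimal skeleton for a standalone test file. The resource helpers are hypothetical placeholders; a real test would create records in MongoDB, call Stripe, and so on. This sketch sets process.exitCode rather than calling process.exit mid-flight, so pending cleanup can finish before the process ends with the right code:

```javascript
// Sketch: skeleton of a standalone test_*.js file combining the practices
// above. All resource operations are hypothetical placeholders.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function main() {
  const testId = Date.now();   // unique suffix to avoid data conflicts
  const createdIds = [];       // track what this run creates
  let allTestsPassed = true;

  try {
    const promoName = `TestPromo_${testId}`;
    // ... create resources, push their IDs onto createdIds, assert results ...
    await sleep(100);          // throttle between external API calls
    console.log(`All checks passed for ${promoName}`);
  } catch (err) {
    console.error('Test error:', err);
    allTestsPassed = false;
  } finally {
    // Clean up only what this run created:
    // for (const id of createdIds) await deleteResource(id);
  }
  return allTestsPassed;
}

main().then((passed) => { process.exitCode = passed ? 0 : 1; });
```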
Naming Conventions
- Test files: test_*.js (e.g., test_promo_details.js)
- Manual scripts: manual_*.js (excluded from test runs)
- Mocha tests: *.spec.js (run separately via npm test)
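The naming conventions above translate into a simple filter when collecting files for a run. A sketch (the real runner's selection logic may differ):

```javascript
// Sketch: pick up only test_*.js files; skip manual_*.js and *.spec.js.
function isRunnerTestFile(name) {
  return name.startsWith('test_') && name.endsWith('.js') && !name.endsWith('.spec.js');
}

const files = ['test_promo_details.js', 'manual_reset_db.js', 'promo_validation.spec.js'];
console.log(files.filter(isRunnerTestFile)); // [ 'test_promo_details.js' ]
```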
Debugging Failing Tests
1. Run with verbose output:
   node tests/run_all_tests.js --pattern 'dlq/test_*.js' --verbose
2. Run a single test directly:
   node tests/dlq/test_dlq_routes.js
3. Check that services are running:
   # MongoDB
   systemctl status mongod
   # RabbitMQ
   systemctl status rabbitmq-server
   # Redis
   systemctl status redis
4. Verify environment variables:
   grep STRIPE environment.env
   grep MONGO environment.env
Migration Notes
Tests were migrated from flat directory structure to feature-based organization:
- See TESTS_ORGANIZED.md for migration details
- All relative imports were updated automatically
- Tests maintain original functionality
Summary
Key Points:
- ✅ Tests organized by feature in subdirectories
- ✅ Custom test runner executes and reports results
- ✅ Each test runs in isolated process
- ✅ Pass/fail tracking with duration metrics
- ✅ Integration tests require real services
- ✅ Use npm scripts for organized test execution
- ✅ Support for verbose output and stop-on-failure
Quick Start:
# Run all tests
npm run test:all
# Run category
npm run test:promo
# Run single test
npm run test:single tests/promo/test_promo_details.js
# Verbose output
npm run test:verbose
# Stop on first failure
npm run test:bail