# AgMission Test Runner Guide

## Overview

The AgMission test suite consists of **standalone integration test scripts** organized into feature-based categories. These are **not** traditional unit tests; they are full integration tests that connect to real services (MongoDB, Stripe, RabbitMQ, partner APIs).

## Test Organization

Tests are organized in feature-based directories:

```
tests/
├── dlq/               - Dead Letter Queue tests (3 files)
├── integration/       - Cross-feature integration (2 files)
├── job/               - Job processing tests (9 files)
├── parsing/           - Log parsing tests (7 files)
├── payment/           - Payment & billing (4 files)
├── promo/             - Promotion & coupon (13 files)
├── satloc/            - Partner integration (13 files)
├── utils/             - Utility tests (9 files)
└── run_all_tests.js   - Test runner script
```

## Running Tests

### Run All Tests
```bash
npm run test:all
```

### Run Tests by Category
```bash
npm run test:promo        # Promotion/coupon tests
npm run test:satloc       # SatLoc partner tests
npm run test:job          # Job processing tests
npm run test:payment      # Payment & billing tests
npm run test:dlq          # DLQ management tests
npm run test:parsing      # Log parsing tests
npm run test:integration  # Integration tests
npm run test:utils        # Utility tests
```

### Run Single Test File
```bash
npm run test:single tests/promo/test_promo_details.js
```

Or directly with node:
```bash
node tests/promo/test_promo_details.js
```

### Run Tests with Options
```bash
# Verbose output (show all test output)
npm run test:verbose

# Stop on first failure
npm run test:bail

# Run specific pattern
npm run test:file 'promo/test_promo_*.js'

# Run with custom pattern
node tests/run_all_tests.js --pattern 'job/test_*.js'

# Run single category with verbose output
node tests/run_all_tests.js --pattern 'dlq/test_*.js' --verbose
```

## Test Runner Features

The custom test runner (`tests/run_all_tests.js`) provides:

### ✅ Separate Process Execution
- Each test runs in its own Node.js process
- Handles `process.exit()` calls gracefully
- Isolated test environments prevent interference

### 📊 Pass/Fail Reporting
- Reports ✅ PASSED or ❌ FAILED for each test
- Exit code 0 = test passed
- Non-zero exit code = test failed
- Summary shows passed/failed counts

### ⏱️ Duration Tracking
- Individual test duration in milliseconds
- Total test suite duration in seconds

### 🛑 Stop on Failure
- Use the `--bail` flag to stop after the first failure
- Useful for quick failure detection

### 📢 Verbose Mode
- Use the `--verbose` flag to see all test output
- Default: only shows output for failed tests

## Test Output Format

```
═══════════════════════════════════════════════════════
🧪 AgMission Test Runner
═══════════════════════════════════════════════════════
📁 Environment: ./environment.env
🔍 Pattern: dlq/test_*.js
📢 Verbose: false
🛑 Stop on failure: false
═══════════════════════════════════════════════════════

📋 Found 3 test files:
   1. tests/dlq/test_dlq_messages_direct.js
   2. tests/dlq/test_dlq_mgmt_api.js
   3. tests/dlq/test_dlq_routes.js

────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_messages_direct.js
────────────────────────────────────────────────────────────
✅ PASSED: test_dlq_messages_direct.js (912ms)

────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_mgmt_api.js
────────────────────────────────────────────────────────────
✅ PASSED: test_dlq_mgmt_api.js (530ms)

────────────────────────────────────────────────────────────
🧪 Running: tests/dlq/test_dlq_routes.js
────────────────────────────────────────────────────────────
❌ FAILED: test_dlq_routes.js (624ms)
   Exit code: 1
   Output preview:
   <last 5 lines of error output>

═══════════════════════════════════════════════════════
📊 TEST SUMMARY
═══════════════════════════════════════════════════════
✅ Passed: 2/3
❌ Failed: 1/3
⏱️ Total Duration: 2.07s

❌ FAILED TESTS:
   1. test_dlq_routes.js - Exit code: 1
═══════════════════════════════════════════════════════
```

## Understanding Test Results

### Exit Codes
- **0**: Test completed successfully (PASSED)
- **1**: Test failed or encountered errors (FAILED)
- **Other**: Process crashed or was terminated

### Test Failures
When a test fails:
1. Check the exit code (usually 1 for logical failures)
2. Review the output preview (last 5 lines shown by default)
3. Re-run with `--verbose` to see the full output
4. Check the test file directly for assertions and logic

### Common Failure Reasons
- ❌ Service connectivity (MongoDB, Redis, RabbitMQ not running)
- ❌ API authentication (Stripe key invalid, partner credentials wrong)
- ❌ Environment variables missing or incorrect
- ❌ Test data conflicts (duplicate records, outdated IDs)
- ❌ Network timeouts or rate limits

## Environment Configuration

Tests use `environment.env` by default:

```bash
# Use a custom environment file
node tests/run_all_tests.js --env ./environment_prod.env

# The test runner auto-loads environment variables
```

**Important**: Tests are **integration tests** that connect to real services. Ensure:
- MongoDB is running and accessible
- Redis is running (if tests use caching)
- RabbitMQ is running (for queue tests)
- Stripe API keys are valid (for payment tests)
- Partner API credentials are configured (for partner tests)

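Because missing configuration is a common failure reason, it can pay to fail fast before any service calls. A minimal sketch of such a preflight check; the variable names here are examples, not the project's actual configuration keys:

```javascript
// Verify that every required environment variable is set, and report
// all missing names at once rather than failing one at a time.
function checkRequiredEnv(required) {
  const missing = required.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    console.error(`Missing environment variables: ${missing.join(', ')}`);
    return false;
  }
  return true;
}

// Example (hypothetical keys): call at the top of a test file and
// exit early when configuration is incomplete.
// if (!checkRequiredEnv(['MONGO_URI', 'STRIPE_KEY'])) process.exit(1);
```
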
## Test Types

### Integration Tests
Files in `tests/` are **integration tests** that:
- Connect to real databases (MongoDB)
- Call external APIs (Stripe, partner APIs)
- Use message queues (RabbitMQ)
- Test end-to-end workflows

**Note**: These are NOT mocked unit tests. They require services to be running.

### Mocha Tests (Optional)
For traditional Mocha/Chai unit tests, create files with the `.spec.js` extension:
```bash
tests/promo/promo_validation.spec.js
```

Run Mocha tests:
```bash
npm test             # Run all *.spec.js files
npm run test:mocha   # Same as above
```

## Best Practices

### Writing New Tests
1. **Use unique identifiers**: Add timestamps to avoid conflicts
   ```javascript
   const testId = Date.now();
   const promoName = `TestPromo_${testId}`;
   ```

2. **Track created resources**: Clean up only what you create
   ```javascript
   const createdIds = [];
   // ... create resources, track IDs

   // Cleanup at end
   for (const id of createdIds) {
     await deleteResource(id);
   }
   ```

3. **Handle rate limits**: Add delays between API calls
   ```javascript
   const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
   await sleep(100); // 100ms delay between calls
   ```

4. **Return proper exit codes**:
   ```javascript
   if (allTestsPassed) {
     process.exit(0);
   } else {
     console.error('Tests failed');
     process.exit(1);
   }
   ```

5. **Load environment properly** (at the top of the file):
   ```javascript
   const path = require('path');
   const envPath = path.resolve(process.cwd(), './environment.env');
   require('dotenv').config({ path: envPath });
   ```

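Putting these practices together, the shape of a new test file might look like the sketch below. It is illustrative only: `createResource` and `deleteResource` are hypothetical placeholders for real API calls, and a real test file would end by calling `process.exit(0)` or `process.exit(1)` based on the result:

```javascript
// Helper for rate-limit delays (practice 3)
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Minimal suite skeleton combining the practices above.
async function runSuite({ createResource, deleteResource }) {
  const testId = Date.now(); // unique identifier per run (practice 1)
  const createdIds = [];     // track what we create (practice 2)
  let allTestsPassed = true;
  try {
    const id = await createResource(`TestPromo_${testId}`);
    createdIds.push(id);
    await sleep(100);        // be gentle with rate limits (practice 3)
  } catch (err) {
    console.error('Test failed:', err.message);
    allTestsPassed = false;
  } finally {
    // Clean up only the resources this run created (practice 2)
    for (const id of createdIds) {
      await deleteResource(id);
    }
  }
  return allTestsPassed;   // caller maps this to exit code (practice 4)
}
```
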
### Naming Conventions
- Test files: `test_*.js` (e.g., `test_promo_details.js`)
- Manual scripts: `manual_*.js` (excluded from test runs)
- Mocha tests: `*.spec.js` (run separately via `npm test`)

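These naming conventions make file selection mechanical. A sketch of how a filename could be classified (illustrative only; the actual runner's matching logic may differ, and `classifyTestFile` is a hypothetical helper):

```javascript
// Map a filename to the tool that should pick it up, based on the
// naming conventions above.
function classifyTestFile(name) {
  if (/^test_.*\.js$/.test(name)) return 'runner';   // run_all_tests.js
  if (/^manual_.*\.js$/.test(name)) return 'manual'; // excluded from runs
  if (/\.spec\.js$/.test(name)) return 'mocha';      // npm test
  return 'other';                                    // helpers, fixtures, etc.
}
```
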
### Debugging Failing Tests
1. Run with verbose output:
   ```bash
   node tests/run_all_tests.js --pattern 'dlq/test_*.js' --verbose
   ```

2. Run a single test directly:
   ```bash
   node tests/dlq/test_dlq_routes.js
   ```

3. Check that services are running:
   ```bash
   # MongoDB
   systemctl status mongod

   # RabbitMQ
   systemctl status rabbitmq-server

   # Redis
   systemctl status redis
   ```

4. Verify environment variables:
   ```bash
   grep STRIPE environment.env
   grep MONGO environment.env
   ```

## Migration Notes

Tests were migrated from a flat directory structure to feature-based organization:
- See `TESTS_ORGANIZED.md` for migration details
- All relative imports were updated automatically
- Tests maintain their original functionality

## Summary

**Key Points**:
- ✅ Tests organized by feature in subdirectories
- ✅ Custom test runner executes tests and reports results
- ✅ Each test runs in an isolated process
- ✅ Pass/fail tracking with duration metrics
- ✅ Integration tests require real services
- ✅ Use npm scripts for organized test execution
- ✅ Support for verbose output and stop-on-failure

**Quick Start**:
```bash
# Run all tests
npm run test:all

# Run a category
npm run test:promo

# Run a single test
npm run test:single tests/promo/test_promo_details.js

# Verbose output
npm run test:verbose

# Stop on first failure
npm run test:bail
```