# Test Framework Documentation

## Quick Start

```bash
# Run all tests (files ending in .spec.js)
npm test

# Run all test files (including test_*.js)
npm run test:all

# Run with watch mode (re-runs on file changes)
npm run test:watch

# Run a single test file
npm run test:single tests/sample.spec.js

# Run with coverage report
npm run test:coverage

# Coverage for all tests
npm run test:coverage-all
```

## How Mocha Handles Test Failures

### ✅ Key Behaviors:

1. **Runs ALL tests** - Mocha does NOT stop at the first failure
   - All tests are executed regardless of failures
   - Summary shows total passing/failing at the end

2. **Clear failure identification**
   - Each failure is numbered: 1), 2), 3), and so on
   - Shows the full test path: `Suite > Subsuite > Test Name`
   - Displays the exact location (line:column): `tests/sample.spec.js:25:25`

3. **Detailed error messages**
   - Shows expected vs actual values
   - Color-coded diff (red for actual, green for expected)
   - Full stack trace for debugging

4. **Exit code indicates failures**
   - Exit code 0 = all tests passed
   - Exit code > 0 = number of failing tests (capped at 255)
   - CI/CD systems can detect failures automatically (see the sketch below)
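
A minimal Node sketch of how that exit code can be read programmatically, e.g. in a custom CI step. The `npx mocha` invocation and spec glob are assumptions about this project's layout; normally you would just run the npm scripts above.

```javascript
// Sketch: run Mocha in a child process and inspect its exit status.
// Assumes mocha is installed locally; the glob below is an assumption.
const { spawnSync } = require('child_process');

const run = spawnSync('npx', ['mocha', 'tests/**/*.spec.js', '--exit'], {
  stdio: 'inherit', // stream the test output straight to this console
  shell: process.platform === 'win32', // npx resolves via a shell on Windows
});

// status is 0 when everything passed, otherwise the failure count (capped at 255)
console.log(`mocha exited with code ${run.status}`);
process.exit(run.status ?? 1);
```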

### Example Output:

```
10 passing (106ms)
4 failing

1) Sample Test Suite - Basic Math
     Addition
       should handle zero (INTENTIONAL FAIL):

   AssertionError: expected +0 to equal 1
    at Context.<anonymous> (tests/sample.spec.js:25:25)
```

### How to Locate Failed Tests:

1. **Line numbers**: Click the link `tests/sample.spec.js:25:25` in the VS Code terminal
2. **Test hierarchy**: Follow the nested structure (Suite > Subsuite > Test)
3. **Search**: Copy the test name and use Ctrl+F in your test file
4. **Summary**: Scroll up to the counts printed above the failure details: "10 passing, 4 failing"

## Test File Naming

- **`*.spec.js`** - Unit/integration tests (run with `npm test`)
- **`test_*.js`** - Manual test scripts (run with `npm run test:all` or individually)
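
If you need to see or change which files each script picks up, the globs can live in a `.mocharc.js` at the project root. A minimal sketch assuming the layout above; the exact patterns and options are assumptions, so match them to the real npm scripts:

```javascript
// .mocharc.js — minimal sketch; spec globs and options are assumptions.
module.exports = {
  spec: ['tests/**/*.spec.js'], // what `npm test` would pick up
  require: ['tests/setup.js'],  // load environment variables first
  exit: true,                   // force the process to exit after the run
};
```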

## Writing Tests

### Basic Structure:

```javascript
const { expect } = require('chai');

describe('Feature Name', () => {
  describe('Sub-feature', () => {
    it('should do something specific', () => {
      const result = myFunction();
      expect(result).to.equal(expectedValue);
    });

    it('should handle edge cases', async () => {
      const result = await asyncFunction();
      expect(result).to.be.an('object');
      expect(result.status).to.equal('success');
    });
  });
});
```

### Common Assertions (Chai):

```javascript
// Equality
expect(value).to.equal(42);
expect(obj).to.deep.equal({ a: 1, b: 2 });

// Types
expect(value).to.be.a('string');
expect(arr).to.be.an('array');

// Arrays
expect(arr).to.have.lengthOf(3);
expect(arr).to.include(item);
expect(arr).to.deep.equal([1, 2, 3]);

// Objects
expect(obj).to.have.property('name');
expect(obj.name).to.equal('test');

// Numbers
expect(num).to.be.above(10);
expect(num).to.be.at.least(5);
expect(num).to.be.below(100);

// Existence
expect(value).to.exist;
expect(value).to.be.null;
expect(value).to.be.undefined;

// Async (requires the chai-as-promised plugin; see setup below)
await expect(promise).to.be.fulfilled;
await expect(promise).to.be.rejected;
```
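
Note that the `fulfilled`/`rejected` (and `eventually`) assertions come from the chai-as-promised plugin rather than core Chai. A minimal registration, assuming the package is installed:

```javascript
// Register chai-as-promised once (e.g. in tests/setup.js) so promise
// assertions such as .fulfilled, .rejected and .eventually are available.
const chai = require('chai');
chai.use(require('chai-as-promised'));
const { expect } = chai;

// Example: assert on the resolved value of a promise
// await expect(Promise.resolve('ok')).to.eventually.equal('ok');
```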

### Test Lifecycle Hooks:

```javascript
describe('Feature', () => {
  before(() => {
    // Runs once before all tests in this suite
  });

  after(() => {
    // Runs once after all tests in this suite
  });

  beforeEach(() => {
    // Runs before each test
  });

  afterEach(() => {
    // Runs after each test
  });

  it('test 1', () => { /* ... */ });
  it('test 2', () => { /* ... */ });
});
```

### Async Tests:

```javascript
// Using async/await (preferred)
it('should fetch data', async () => {
  const data = await fetchData();
  expect(data).to.exist;
});

// Using done callback
it('should call callback', (done) => {
  asyncFunction((err, result) => {
    expect(err).to.be.null;
    expect(result).to.equal('success');
    done();
  });
});
```

### Skipping Tests:

```javascript
// Skip a single test
it.skip('should be skipped', () => { /* ... */ });

// Skip entire suite
describe.skip('Skipped Suite', () => { /* ... */ });

// Run only specific tests (useful for debugging)
it.only('should run only this test', () => { /* ... */ });
describe.only('Only Suite', () => { /* ... */ });
```

## Coverage Reports

After running `npm run test:coverage`:

- Open `coverage/index.html` in a browser for the detailed coverage report
- Shows line, branch, function, and statement coverage
- Highlights uncovered lines in red
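
If the coverage scripts are backed by nyc (Istanbul), a common Mocha pairing but an assumption here, the reporters can be pinned in an `nyc.config.js`:

```javascript
// nyc.config.js — hedged sketch; assumes the coverage scripts use nyc.
// The include glob is a placeholder for wherever the source actually lives.
module.exports = {
  reporter: ['text', 'html', 'lcov'], // console summary, coverage/index.html, lcov.info
  include: ['src/**/*.js'],
};
```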

## Environment Variables

Tests automatically load variables from `environment.env` via `tests/setup.js`.

To use a different env file:

```bash
npm run test:single -- tests/my_test.spec.js --env ./environment_prod.env
```
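
For reference, a minimal sketch of what `tests/setup.js` might look like if it uses dotenv; the actual file (and how it honors the `--env` flag) may differ:

```javascript
// tests/setup.js — sketch only; assumes dotenv, which may not match the repo.
const path = require('path');

// TEST_ENV_FILE is a hypothetical override; environment.env is the default.
const envFile = process.env.TEST_ENV_FILE || 'environment.env';
require('dotenv').config({ path: path.resolve(__dirname, '..', envFile) });
```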

## CI/CD Integration

Example GitHub Actions workflow:

```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
      - run: npm ci
      - run: npm test
      - run: npm run test:coverage
      - uses: codecov/codecov-action@v3
        with:
          files: ./coverage/lcov.info
```

## Best Practices

1. **Test naming**: Use descriptive names that explain what's being tested
   - ✅ `should return 404 when user not found`
   - ❌ `test user function`

2. **One assertion per test**: Focus each test on a single behavior
   - Makes failures easier to diagnose
   - Tests are more maintainable

3. **Use beforeEach/afterEach**: Keep tests independent
   - Create fresh test data for each test
   - Clean up after tests complete

4. **Mock external services**: Don't hit real APIs in unit tests (see the sketch after this list)
   - Use `sinon` for mocking
   - Faster tests, no rate limits

5. **Test data isolation**: Use unique identifiers (e.g. timestamps)
   - Prevents test conflicts
   - Avoids cleanup issues

6. **Rate limiting**: Add delays between API calls (see STRIPE_RATE_LIMITING in copilot-instructions)
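
A minimal sinon sketch for practice 4; the `api` module and its `fetchUser` method are hypothetical stand-ins for whatever actually hits the network:

```javascript
const { expect } = require('chai');
const sinon = require('sinon');
const api = require('../src/api'); // hypothetical module that calls a real service

describe('user lookup', () => {
  afterEach(() => sinon.restore()); // undo all stubs so tests stay independent

  it('should return the stubbed user without a network call', async () => {
    // Replace the network call with a canned response
    sinon.stub(api, 'fetchUser').resolves({ id: 1, name: 'test' });

    const user = await api.fetchUser(1);
    expect(user).to.deep.equal({ id: 1, name: 'test' });
  });
});
```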

## Troubleshooting

### Tests hang and don't exit:
- Ensure async operations complete
- Close database/queue connections in `after()` hooks (see the sketch below)
- Use the `--exit` flag (already in the npm scripts)
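
For example, a suite that holds a connection open can release it in an `after()` hook; `db` and its `close()` method are placeholders for whatever resource your tests hold:

```javascript
const db = require('../src/db'); // hypothetical client that keeps a socket open

describe('queries', () => {
  after(async () => {
    // Release the handle so the Node process can exit cleanly
    await db.close();
  });

  it('runs a query', async () => { /* ... */ });
});
```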

### Environment variables not loaded:
- Check that `tests/setup.js` is being loaded
- Verify `environment.env` exists and has correct values

### Can't find modules:
- Run `npm install` to ensure all dependencies are installed
- Check that file paths are relative to the project root

## Converting Existing Tests

To convert an existing `test_*.js` file to Mocha:

1. Wrap test logic in `describe` and `it` blocks
2. Replace console assertions with `expect()` assertions
3. Remove manual environment loading (handled by `setup.js`)
4. Rename to `*.spec.js`, or keep as `test_*.js` and run with `npm run test:all`

Example:

```javascript
// Before (manual script)
console.log('Testing addition...');
const result = 2 + 2;
if (result !== 4) {
  console.error('FAILED: Expected 4, got', result);
  process.exit(1);
}
console.log('✅ PASSED');

// After (Mocha)
const { expect } = require('chai');

describe('Math Operations', () => {
  it('should add numbers correctly', () => {
    const result = 2 + 2;
    expect(result).to.equal(4);
  });
});
```