Add custom nodes, Civitai loras (LFS), and vast.ai setup script

Includes 30 custom nodes committed directly, 7 Civitai-exclusive
loras stored via Git LFS, and a setup script that installs all
dependencies and downloads HuggingFace-hosted models on vast.ai.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 00:55:26 +00:00
parent 2b70ab9ad0
commit f09734b0ee
2274 changed files with 748556 additions and 3 deletions


@@ -0,0 +1,961 @@
# Wildcard System - Complete Test Suite
Comprehensive testing guide for the ComfyUI Impact Pack wildcard system.
---
## 📋 Quick Links
- **[Quick Start](#quick-start)** - Run tests in 5 minutes
- **[Test Categories](#test-categories)** - All test types
- **[Test Execution](#test-execution)** - How to run each test
- **[Troubleshooting](#troubleshooting)** - Common issues
---
## Overview
### Test Suite Structure
```
tests/
├── wildcards/ # Wildcard system tests
│ ├── Unit Tests (Python)
│ │ ├── test_wildcard_lazy_loading.py # LazyWildcardLoader class
│ │ ├── test_progressive_loading.py # Progressive loading
│ │ ├── test_wildcard_final.py # Final validation
│ │ └── test_lazy_load_verification.py # Lazy load verification
│ │
│ ├── Integration Tests (Shell + API)
│ │ ├── test_progressive_ondemand.sh # ⭐ Progressive loading (NEW)
│ │ ├── test_lazy_load_api.sh # Lazy loading consistency
│ │ ├── test_sequential_loading.sh # Transitive wildcards
│ │ ├── test_versatile_prompts.sh # Feature tests
│ │ ├── test_wildcard_consistency.sh # Consistency validation
│ │ └── test_wildcard_features.sh # Core features
│ │
│ ├── Utility Scripts
│ │ ├── find_transitive_wildcards.sh # Find transitive chains
│ │ ├── find_deep_transitive.py # Deep transitive analysis
│ │ ├── verify_ondemand_mode.sh # Verify on-demand activation
│ │ └── run_quick_test.sh # Quick validation
│ │
│ └── README.md (this file)
└── workflows/ # Workflow test files
├── advanced-sampler.json
├── detailer-pipe-test.json
└── ...
```
### Test Coverage
- **11 test files** (4 Python, 7 Shell)
- **100+ test scenarios**
- **~95% feature coverage**
- **~15 minutes** total execution time
---
## Quick Start
### Run All Tests
```bash
cd /path/to/ComfyUI/custom_nodes/comfyui-impact-pack/tests/wildcards
# Run all shell tests
for test in test_*.sh; do
echo "Running: $test"
bash "$test"
done
```
### Run Specific Test
```bash
cd /path/to/ComfyUI/custom_nodes/comfyui-impact-pack/tests/wildcards
# Progressive loading (NEW)
bash test_progressive_ondemand.sh
# Lazy loading
bash test_lazy_load_api.sh
# Sequential/transitive
bash test_sequential_loading.sh
# Versatile prompts
bash test_versatile_prompts.sh
```
---
## Test Categories
### 1. Progressive On-Demand Loading Tests ⭐ NEW
**Purpose**: Verify wildcards are loaded progressively as accessed.
**Test Files**:
- `test_progressive_ondemand.sh` (Shell, ~2 min)
- `test_progressive_loading.py` (Python unit test)
#### What's Tested
**Early Termination Size Calculation**:
```python
# Problem: 10GB scan takes 10-30 minutes
# Solution: Stop at cache limit
calculate_directory_size(path, limit=50MB) # < 1 second
```
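The early-termination idea can be sketched in a few lines; this is an illustrative implementation, not the Impact Pack's actual code, and `calc_dir_size` is a hypothetical name:

```python
import os

def calc_dir_size(path, limit_bytes):
    """Sum file sizes under `path`, stopping as soon as the running total
    exceeds `limit_bytes`. Returns (bytes_seen, over_limit)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                continue  # file vanished or unreadable; skip it
            if total > limit_bytes:
                return total, True  # over the limit: switch to on-demand mode
    return total, False
```

Because the walk stops at the first file that pushes the total over the limit, a 10GB tree is abandoned almost immediately when the limit is 50MB.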
**YAML Pre-loading + TXT On-Demand**:
```python
# Phase 1 (Startup): Pre-load ALL YAML files
# Reason: Keys are inside file content, not file path
load_yaml_files_only() # colors.yaml → colors, colors/warm, colors/cold
# Phase 2 (Runtime): Load TXT files on-demand
# File path = key (e.g., "flower.txt" → "__flower__")
# No metadata scan for TXT files
```
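The path-as-key convention for TXT files can be illustrated with a small helper (hypothetical name; it assumes keys are derived from the path relative to the wildcard root):

```python
from pathlib import Path

def txt_key(base_dir, file_path):
    """Map a TXT file path to its wildcard key, e.g.
    wildcards/samples/flower.txt -> __samples/flower__."""
    rel = Path(file_path).relative_to(base_dir).with_suffix("")
    return f"__{rel.as_posix()}__"
```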
**Progressive Loading**:
```
Initial: /list/loaded → YAML keys only (e.g., colors, colors/warm, colors/cold)
After __flower__: /list/loaded → +1 TXT wildcard
After __dragon__: /list/loaded → +2-3 (TXT transitive)
```
**⚠️ YAML Limitation**:
YAML wildcards are excluded from on-demand mode because wildcard keys exist
inside the file content. To discover `__colors/warm__`, we must parse `colors.yaml`.
Solution: Convert large YAML collections to TXT file structure for true on-demand.
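One way to perform that conversion, assuming the YAML has already been parsed into a nested dict (e.g. with PyYAML), is to flatten it and write one TXT file per key. This is an illustrative sketch; `flatten_yaml_tree` and `write_txt_structure` are hypothetical names, and leaves are assumed to be lists of strings:

```python
from pathlib import Path

def flatten_yaml_tree(tree, prefix=""):
    """Flatten a parsed YAML wildcard tree into {key: [entries]}."""
    flat = {}
    for key, value in tree.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_yaml_tree(value, path))
        else:
            flat[path] = list(value)
    return flat

def write_txt_structure(flat, out_dir):
    """Write each flattened key as its own TXT file, one entry per line."""
    for key, entries in flat.items():
        target = Path(out_dir) / f"{key}.txt"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text("\n".join(entries) + "\n", encoding="utf-8")
```

After conversion, `colors/warm.txt` is discoverable from its path alone, so it qualifies for on-demand loading.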
#### New API Endpoint
**`GET /impact/wildcards/list/loaded`**:
```json
{
"data": ["__colors__", "__colors/warm__", "__colors/cold__", "__samples/flower__"],
"on_demand_mode": true,
"total_available": 0
}
```
Note: `total_available` is 0 in on-demand mode, since TXT files are not pre-scanned.
**Progressive Example**:
```bash
# Initial state (YAML pre-loaded)
curl /impact/wildcards/list/loaded
{"data": ["__colors__", "__colors/warm__", "__colors/cold__"], "total_available": 0}
# Access first wildcard
curl -X POST /impact/wildcards -d '{"text": "__flower__", "seed": 42}'
# Check again (TXT wildcard added)
curl /impact/wildcards/list/loaded
{"data": ["__colors__", "__colors/warm__", "__colors/cold__", "__samples/flower__"], "total_available": 0}
```
#### Performance Improvements
**Large Dataset (10GB, 100K files)**:
| Metric | Before | After |
|--------|--------|-------|
| **Startup** | 20-60 min | **< 1 min** |
| **Memory** | 5-10 GB | **< 100MB** |
| **Size calc** | 10-30 min | **< 1 sec** |
#### Run Test
```bash
bash test_progressive_ondemand.sh
```
**Expected Output**:
```
Step 1: Initial state
Loaded wildcards: 0
Step 2: Access __samples/flower__
Loaded wildcards: 1
✓ PASS: Wildcard count increased
Step 3: Access __dragon__
Loaded wildcards: 3
✓ PASS: Wildcard count increased progressively
🎉 ALL TESTS PASSED
```
---
### 2. Lazy Loading Tests
**Purpose**: Verify on-demand loading produces identical results to full cache mode.
**Test Files**:
- `test_lazy_load_api.sh` (Shell, ~3 min)
- `test_wildcard_lazy_loading.py` (Python unit test)
- `test_lazy_load_verification.py` (Python verification)
#### What's Tested
**LazyWildcardLoader Class**:
- Loads data only on first access
- Acts as list-like proxy
- Thread-safe with locking
**Mode Detection**:
- Automatic based on total size vs cache limit
- Full cache: < 50MB (default)
- On-demand: ≥ 50MB
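The mode-detection rule reduces to a one-line comparison; a minimal sketch with a hypothetical function name:

```python
def select_mode(total_size_mb, cache_limit_mb=50):
    """Mode detection as described above: full cache below the limit,
    on-demand at or above it."""
    return "full_cache" if total_size_mb < cache_limit_mb else "on_demand"
```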
**Consistency**:
- Full cache results == On-demand results
- Same seeds produce same outputs
- All wildcard features work identically
#### Test Scenarios
**test_lazy_load_api.sh** runs both modes and compares:
1. **Wildcard list** (before access)
2. **Simple wildcard**: `__samples/flower__`
3. **Depth 3 transitive**: `__adnd__ creature`
4. **YAML wildcard**: `__colors__`
5. **Wildcard list** (after access)
**All results must match exactly**.
#### Run Test
```bash
bash test_lazy_load_api.sh
```
**Expected Output**:
```
Testing: full_cache (limit: 100MB, port: 8190)
✓ Server started
Test 1: Get wildcard list
Total wildcards: 1000
Testing: on_demand (limit: 1MB, port: 8191)
✓ Server started
Test 1: Get wildcard list
Total wildcards: 1000
COMPARISON RESULTS
Test: Simple Wildcard
✓ Results MATCH
🎉 ALL TESTS PASSED
On-demand loading produces IDENTICAL results!
```
---
### 3. Sequential/Transitive Loading Tests
**Purpose**: Verify transitive wildcards expand correctly across multiple stages.
**Test Files**:
- `test_sequential_loading.sh` (Shell, ~5 min)
- `find_transitive_wildcards.sh` (Utility)
#### What's Tested
**Transitive Expansion**:
```
Depth 1: __samples/flower__ → rose
Depth 2: __dragon__ → __dragon/warrior__ → content
Depth 3: __adnd__ → __dragon__ → __dragon_spirit__ → content
```
**Maximum Depth**: 3 levels verified (system supports up to 100)
#### Test Categories
**17 tests across 5 categories**:
1. **Depth Verification** (4 tests)
- Depth 1: Direct wildcard
- Depth 2: One level transitive
- Depth 3: Two levels + suffix
- Depth 3: Maximum chain
2. **Mixed Transitive** (3 tests)
- Dynamic selection of transitive
- Multiple transitive in one prompt
- Nested transitive in dynamic
3. **Complex Scenarios** (3 tests)
- Weighted selection with transitive
- Multi-select with transitive
- Quantified transitive
4. **Edge Cases** (4 tests)
- Compound grammar
- Multiple wildcards, different depths
- YAML wildcards (no transitive)
- Transitive + YAML combination
5. **On-Demand Mode** (3 tests)
- Depth 3 in on-demand
- Complex scenario in on-demand
- Multiple transitive in on-demand
#### Example: Depth 3 Chain
**Files**:
```
adnd.txt:
__dragon__
dragon.txt:
__dragon_spirit__
dragon_spirit.txt:
Shrewd Hatchling
Ancient Dragon
```
**Usage**:
```
__adnd__ creature
→ __dragon__ creature
→ __dragon_spirit__ creature
→ "Shrewd Hatchling creature"
```
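The expansion above can be sketched as an iterative search-and-replace with a hard iteration cap, mirroring the system's infinite-loop guard. This is a simplified illustration (the `expand` helper is hypothetical), not the actual resolver:

```python
import random
import re

def expand(text, wildcards, seed, max_iterations=100):
    """Repeatedly replace the first __name__ token until none remain or
    the iteration cap is hit (guards against circular references).
    `wildcards` maps names to lists of options."""
    rng = random.Random(seed)
    # Lazy quantifier so names containing single underscores still match.
    pattern = re.compile(r"__([\w/\-]+?)__")
    for _ in range(max_iterations):
        match = pattern.search(text)
        if match is None:
            return text
        # Unknown wildcards are left as-is (replaced with themselves).
        options = wildcards.get(match.group(1), [match.group(0)])
        text = text[:match.start()] + rng.choice(options) + text[match.end():]
    return text
```

With the three files above, `expand("__adnd__ creature", ...)` walks the chain in three iterations; a circular pair like A→B→A simply burns through the cap and returns whatever token it last held.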
#### Run Test
```bash
bash test_sequential_loading.sh
```
**Expected Output**:
```
=== Test 01: Depth 1 - Direct wildcard ===
Raw prompt: __samples/flower__
✓ All wildcards fully expanded
Final Output: rose
Status: ✅ SUCCESS
=== Test 04: Depth 3 - Maximum transitive chain ===
Raw prompt: __adnd__ creature
✓ All wildcards fully expanded
Final Output: Shrewd Hatchling creature
Status: ✅ SUCCESS
```
---
### 4. Versatile Prompts Tests
**Purpose**: Test all wildcard features and syntax variations.
**Test Files**:
- `test_versatile_prompts.sh` (Shell, ~2 min)
- `test_wildcard_features.sh` (Shell)
- `test_wildcard_consistency.sh` (Shell)
#### What's Tested
**30 prompts across 10 categories**:
1. **Simple Wildcards** (3 tests)
- Basic substitution
- Case insensitive (uppercase)
- Case insensitive (mixed)
2. **Dynamic Prompts** (3 tests)
- Simple: `{red|green|blue} apple`
- Nested: `{a|{d|e|f}|c}`
- Complex nested: `{blue apple|red {cherry|berry}}`
3. **Selection Weights** (2 tests)
- Weighted: `{5::red|4::green|7::blue} car`
- Multiple weighted: `{10::beautiful|5::stunning} {3::sunset|2::sunrise}`
4. **Compound Grammar** (3 tests)
- Wildcard + dynamic: `{pencil|apple|__flower__}`
- Complex compound: `1{girl|boy} {sitting|standing} with {__object__|item}`
- Nested compound: `{big|small} {red {apple|cherry}|blue __flower__}`
5. **Multi-Select** (4 tests)
- Fixed count: `{2$$, $$opt1|opt2|opt3|opt4}`
- Range: `{2-4$$, $$opt1|opt2|opt3|opt4|opt5}`
- With separator: `{3$$; $$a|b|c|d|e}`
- Short form: `{-3$$, $$opt1|opt2|opt3|opt4}`
6. **Quantifiers** (2 tests)
- Basic: `3#__wildcard__`
- With multi-select: `{2$$, $$5#__colors__}`
7. **Wildcard Fallback** (2 tests)
- Auto-expand: `__flower__` → `__*/flower__`
- Wildcard patterns: `__samples/*__`
8. **YAML Wildcards** (3 tests)
- Simple YAML: `__colors__`
- Nested YAML: `__colors/warm__`
- Multiple YAML: `__colors__ and __animals__`
9. **Transitive Wildcards** (4 tests)
- Depth 2: `__dragon__`
- Depth 3: `__adnd__`
- Mixed depth: `__flower__ and __dragon__`
- Dynamic transitive: `{__dragon__|__adnd__}`
10. **Real-World Scenarios** (4 tests)
- Portrait prompt
- Landscape prompt
- Fantasy prompt
- Abstract art prompt
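Several of the categories above rely on deterministic seeded selection. A minimal sketch of resolving flat `{a|b|c}` groups (illustrative only; the real syntax adds weights, multi-select, quantifiers, and nesting):

```python
import random
import re

def expand_dynamic(text, seed):
    """Resolve non-nested {a|b|c} groups deterministically from the seed."""
    rng = random.Random(seed)
    def pick(match):
        return rng.choice(match.group(1).split("|"))
    return re.sub(r"\{([^{}]+)\}", pick, text)
```

Seeding a fresh `random.Random` per call is what makes the same seed reproduce the same prompt.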
#### Example Tests
**Test 04: Simple Dynamic Prompt**:
```
Raw: {red|green|blue} apple
Seed: 100
Result: "red apple" (deterministic)
```
**Test 09: Wildcard + Dynamic**:
```
Raw: 1girl holding {blue pencil|red apple|colorful __samples/flower__}
Seed: 100
Result: "1girl holding colorful chrysanthemum"
```
**Test 18: Multi-Select Range**:
```
Raw: {2-4$$, $$happy|sad|angry|excited|calm}
Seed: 100
Result: "happy, sad, angry" (2-4 emotions selected)
```
#### Run Test
```bash
bash test_versatile_prompts.sh
```
**Expected Output**:
```
========================================
Test 01: Basic Wildcard
========================================
Raw: __samples/flower__
Result: chrysanthemum
Status: ✅ PASS
========================================
Test 04: Simple Dynamic Prompt
========================================
Raw: {red|green|blue} apple
Result: red apple
Status: ✅ PASS
Total: 30 tests
Passed: 30
Failed: 0
```
---
## Test Execution
### Prerequisites
**Required**:
- ComfyUI installed
- Impact Pack installed
- Python 3.8+
- Bash shell
- curl (for API tests)
**Optional**:
- jq (for JSON parsing)
- git (for version control)
### Environment Setup
**1. Configure Impact Pack**:
```bash
cd /path/to/ComfyUI/custom_nodes/comfyui-impact-pack
# Create or edit config
cat > impact-pack.ini << EOF
[default]
dependency_version = 24
wildcard_cache_limit_mb = 50
custom_wildcards = $(pwd)/custom_wildcards
disable_gpu_opencv = True
EOF
```
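If a test or script needs the configured limit back out, it can be read with `configparser`; a small sketch assuming the section and option names shown above (`load_cache_limit` is a hypothetical helper):

```python
import configparser

def load_cache_limit(path, default_mb=50.0):
    """Read wildcard_cache_limit_mb from impact-pack.ini, falling back
    to the default when the file, section, or option is missing."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.getfloat("default", "wildcard_cache_limit_mb", fallback=default_mb)
```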
**2. Prepare Wildcards**:
```bash
# Check wildcard files exist
ls wildcards/*.txt wildcards/*.yaml
ls custom_wildcards/*.txt
```
### Running Tests
#### Unit Tests (Python)
**Standalone** (no server required):
```bash
python3 test_wildcard_lazy_loading.py
python3 test_progressive_loading.py
```
**Note**: These require the ComfyUI environment; without it they fail with import errors.
#### Integration Tests (Shell)
**Manual Server Start**:
```bash
# Terminal 1: Start server
cd /path/to/ComfyUI
bash run.sh --listen 127.0.0.1 --port 8188
# Terminal 2: Run tests
cd custom_nodes/comfyui-impact-pack/tests
bash test_versatile_prompts.sh
```
**Automated** (tests start/stop server):
```bash
# Each test manages its own server
bash test_progressive_ondemand.sh # Port 8195
bash test_lazy_load_api.sh # Ports 8190-8191
bash test_sequential_loading.sh # Port 8193
```
### Test Timing
| Test | Duration | Server | Ports |
|------|----------|--------|-------|
| `test_progressive_ondemand.sh` | ~2 min | Auto | 8195 |
| `test_lazy_load_api.sh` | ~3 min | Auto | 8190-8191 |
| `test_sequential_loading.sh` | ~5 min | Auto | 8193 |
| `test_versatile_prompts.sh` | ~2 min | Manual | 8188 |
| `test_wildcard_consistency.sh` | ~1 min | Manual | 8188 |
| Python unit tests | < 5 sec | No | N/A |
### Logs
**Server Logs**:
```bash
/tmp/progressive_test.log
/tmp/comfyui_full_cache.log
/tmp/comfyui_on_demand.log
/tmp/sequential_test.log
```
**Check Logs**:
```bash
# View recent wildcard logs
tail -50 /tmp/progressive_test.log | grep -i wildcard
# Find errors
grep -i "error\|fail" /tmp/*.log
# Check mode activation
grep -i "mode" /tmp/progressive_test.log
```
---
## Expected Results
### Success Criteria
#### Progressive Loading
- ✅ `/list/loaded` starts at 0 (or a low count)
- ✅ `/list/loaded` increases after each unique wildcard
- ✅ `/list/loaded` unchanged on cache hits
- ✅ Transitive wildcards load multiple entries
- ✅ Final results identical to full cache mode
#### Lazy Loading
- ✅ Full cache results == On-demand results (all tests)
- ✅ Mode detection correct (based on size vs limit)
- ✅ LazyWildcardLoader loads only on access
- ✅ All API endpoints return consistent data
#### Sequential Loading
- ✅ Depth 1-3 expand correctly
- ✅ Complex scenarios work (weighted, multi-select, etc.)
- ✅ On-demand mode matches full cache
- ✅ No infinite loops (max 100 iterations)
#### Versatile Prompts
- ✅ All 30 test prompts process successfully
- ✅ Deterministic (same seed → same result)
- ✅ No syntax errors
- ✅ Proper probability distribution
### Sample Output
**Progressive Loading Success**:
```
========================================
Progressive Loading Verification
========================================
Step 1: Initial state
On-demand mode: True
Total available: 1000
Loaded wildcards: 0
Step 2: Access __samples/flower__
Result: rose
Loaded wildcards: 1
✓ PASS
Step 3: Access __dragon__
Result: ancient dragon
Loaded wildcards: 3
✓ PASS
🎉 ALL TESTS PASSED
Progressive on-demand loading verified!
```
**Lazy Loading Success**:
```
========================================
COMPARISON RESULTS
========================================
Test: Wildcard List (before)
✓ Results MATCH
Test: Simple Wildcard
✓ Results MATCH
Test: Depth 3 Transitive
✓ Results MATCH
🎉 ALL TESTS PASSED
On-demand produces IDENTICAL results!
```
---
## Troubleshooting
### Common Issues
#### 1. Server Fails to Start
**Symptoms**:
```
✗ Server failed to start
curl: (7) Failed to connect
```
**Solutions**:
```bash
# Check if port in use
lsof -i :8188
netstat -tlnp | grep 8188
# Kill existing processes
pkill -f "python.*main.py"
# Increase startup wait time
# In test script: sleep 15 → sleep 30
```
#### 2. Module Not Found (Python)
**Symptoms**:
```
ModuleNotFoundError: No module named 'modules'
```
**Solutions**:
```bash
# Option 1: Run from ComfyUI directory
cd /path/to/ComfyUI
python3 custom_nodes/comfyui-impact-pack/tests/test_progressive_loading.py
# Option 2: Add to PYTHONPATH
export PYTHONPATH=/path/to/ComfyUI/custom_nodes/comfyui-impact-pack:$PYTHONPATH
python3 test_progressive_loading.py
```
#### 3. On-Demand Mode Not Activating
**Symptoms**:
```
Using full cache mode.
```
**Check**:
```bash
# View total size
grep "Wildcard total size" /tmp/progressive_test.log
# Check cache limit
grep "cache_limit_mb" impact-pack.ini
```
**Solutions**:
```bash
# Force on-demand mode
cat > impact-pack.ini << EOF
[default]
wildcard_cache_limit_mb = 0.5
EOF
```
#### 4. Tests Timeout
**Symptoms**:
```
Waiting for server startup...
✗ Server failed to start
```
**Solutions**:
```bash
# Check system resources
free -h
df -h
# View server logs
tail -100 /tmp/progressive_test.log
# Manually test server
cd /path/to/ComfyUI
bash run.sh --port 8195
# Increase timeout in test
# sleep 15 → sleep 60
```
#### 5. Results Don't Match
**Symptoms**:
```
✗ Results DIFFER
```
**Debug**:
```bash
# Compare results
diff /tmp/result_full_cache_simple.json /tmp/result_on_demand_simple.json
# Check seeds are same
grep "seed" /tmp/result_*.json
# Verify same wildcard files used
ls -la wildcards/samples/flower.txt
```
**File Bug Report**:
- Wildcard text
- Seed value
- Full cache result
- On-demand result
- Server logs
#### 6. Slow Performance
**Symptoms**:
- Tests take much longer than expected
- Server startup > 2 minutes
**Check**:
```bash
# Wildcard size
du -sh wildcards/
# Disk I/O
iostat -x 1 5
# System resources
top
```
**Solutions**:
- Use SSD (not HDD)
- Reduce wildcard size
- Increase cache limit (use full cache mode)
- Close other applications
---
## Performance Benchmarks
### Expected Performance
**Small Dataset (< 50MB)**:
```
Mode: Full cache
Startup: < 10 seconds
Memory: ~50MB
First access: Instant
```
**Medium Dataset (50MB - 1GB)**:
```
Mode: On-demand
Startup: < 30 seconds
Memory: < 200MB initial
First access: 10-50ms per wildcard
```
**Large Dataset (10GB+)**:
```
Mode: On-demand
Startup: < 1 minute
Memory: < 100MB initial
First access: 10-50ms per wildcard
Memory growth: Progressive
```
### Optimization Tips
**For Faster Tests**:
1. Use smaller wildcard dataset
2. Run specific tests (not all)
3. Use manual server (keep running)
4. Skip sleep times (if server already running)
**For Large Datasets**:
1. Verify on-demand mode activates
2. Monitor `/list/loaded` to track memory
3. Use SSD for file storage
4. Organize wildcards into subdirectories
---
## Contributing
### Adding New Tests
**1. Create Test File**:
```bash
touch tests/test_new_feature.sh
chmod +x tests/test_new_feature.sh
```
**2. Test Template**:
```bash
#!/bin/bash
# Test: New Feature
# Purpose: Verify new feature works correctly
set -e
PORT=8XXX
IMPACT_DIR="/path/to/comfyui-impact-pack"
# Setup config
cat > impact-pack.ini << EOF
[default]
wildcard_cache_limit_mb = 50
EOF
# Start server
cd /path/to/ComfyUI
bash run.sh --port $PORT > /tmp/test_new.log 2>&1 &
sleep 15
# Test
RESULT=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list)
# Validate
if [ "$RESULT" = "expected" ]; then
echo "✅ PASS"
exit 0
else
echo "❌ FAIL"
exit 1
fi
```
**3. Update Documentation**:
- Add test description to this README
- Update test count
- Add to appropriate category
### Testing Guidelines
**Test Structure**:
1. Clear purpose statement
2. Setup (config, wildcards)
3. Execution (API calls, processing)
4. Validation (assertions, comparisons)
5. Cleanup (kill servers, restore config)
**Good Practices**:
- Use unique port numbers
- Clean up background processes
- Provide clear success/failure messages
- Log to `/tmp/` for debugging
- Use deterministic seeds
- Test both modes (full cache + on-demand)
---
## Reference
### Test Files Quick Reference
```bash
# Progressive loading
test_progressive_ondemand.sh # Integration test
test_progressive_loading.py # Unit test
# Lazy loading
test_lazy_load_api.sh # Integration test
test_wildcard_lazy_loading.py # Unit test
# Sequential/transitive
test_sequential_loading.sh # Integration test
find_transitive_wildcards.sh # Utility
# Features
test_versatile_prompts.sh # Comprehensive features
test_wildcard_features.sh # Core features
test_wildcard_consistency.sh # Consistency
# Validation
test_wildcard_final.py # Final validation
test_lazy_load_verification.py # Lazy load verification
```
### Documentation
- **System Overview**: `../docs/WILDCARD_SYSTEM_OVERVIEW.md`
- **Testing Guide**: `../docs/WILDCARD_TESTING_GUIDE.md`
### API Endpoints
```
GET /impact/wildcards/list # All available wildcards
GET /impact/wildcards/list/loaded # Actually loaded (progressive)
POST /impact/wildcards # Process wildcard text
GET /impact/wildcards/refresh # Reload all wildcards
```
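A minimal Python client for the processing endpoint, sketched under the assumption that the request and response fields match the curl examples above (`text` and `seed` in the body, expanded `text` in the reply):

```python
import json
from urllib import request

def build_wildcard_request(base_url, text, seed):
    """Assemble the POST request for /impact/wildcards."""
    body = json.dumps({"text": text, "seed": seed}).encode("utf-8")
    return request.Request(
        f"{base_url}/impact/wildcards",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def process_wildcard(base_url, text, seed):
    """Send the request and return the expanded prompt.
    Assumes a running server as in the manual-start instructions."""
    with request.urlopen(build_wildcard_request(base_url, text, seed)) as resp:
        return json.load(resp).get("text")
```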
---
**Last Updated**: 2024-11-17
**Total Tests**: 11 files, 100+ scenarios
**Coverage**: ~95% of wildcard features


@@ -0,0 +1,178 @@
#!/usr/bin/env python3
"""Find deep transitive wildcard references (5+ levels)"""
import re
from pathlib import Path
from collections import defaultdict

# Auto-detect paths
SCRIPT_DIR = Path(__file__).parent
IMPACT_PACK_DIR = SCRIPT_DIR.parent
WILDCARDS_DIR = IMPACT_PACK_DIR / "wildcards"
CUSTOM_WILDCARDS_DIR = IMPACT_PACK_DIR / "custom_wildcards"

# Build wildcard reference graph
wildcard_refs = defaultdict(set)  # wildcard -> set of wildcards it references
wildcard_files = {}  # wildcard_name -> file_path


def normalize_name(name):
    """Normalize wildcard name"""
    return name.lower().replace('/', '_').replace('\\', '_')


def find_wildcard_file(name):
    """Find wildcard file by name"""
    # Try different variations
    variations = [
        name,
        name.replace('/', '_'),
        name.replace('\\', '_'),
    ]
    for var in variations:
        # Check in wildcards/
        for ext in ['.txt', '.yaml', '.yml']:
            path = WILDCARDS_DIR / f"{var}{ext}"
            if path.exists():
                return str(path)
        # Check in custom_wildcards/
        for ext in ['.txt', '.yaml', '.yml']:
            path = CUSTOM_WILDCARDS_DIR / f"{var}{ext}"
            if path.exists():
                return str(path)
    return None


def scan_wildcards():
    """Scan all wildcard files and build reference graph"""
    print("Scanning wildcard files...")
    # Find all wildcard files
    for base_dir in [WILDCARDS_DIR, CUSTOM_WILDCARDS_DIR]:
        for ext in ['*.txt', '*.yaml', '*.yml']:
            for file_path in base_dir.rglob(ext):
                # Get wildcard name from file path
                rel_path = file_path.relative_to(base_dir)
                name = str(rel_path.with_suffix('')).replace('/', '_').replace('\\', '_')
                wildcard_files[normalize_name(name)] = str(file_path)
                # Find references in file (lazy quantifier so names with
                # single underscores, e.g. __dragon_spirit__, still match)
                try:
                    content = file_path.read_text(encoding='utf-8', errors='ignore')
                    refs = re.findall(r'__([\w/\-]+?)__', content)
                    for ref in refs:
                        ref_normalized = normalize_name(ref)
                        if ref_normalized:
                            wildcard_refs[normalize_name(name)].add(ref_normalized)
                except Exception as e:
                    print(f"Error reading {file_path}: {e}")
    print(f"Found {len(wildcard_files)} wildcard files")
    print(f"Found {sum(len(refs) for refs in wildcard_refs.values())} references")
    print()


def find_max_depth(start_wildcard, visited=None, path=None):
    """Find maximum depth of transitive references starting from a wildcard"""
    if visited is None:
        visited = set()
    if path is None:
        path = []
    if start_wildcard in visited:
        return 0, path  # Cycle detected
    visited.add(start_wildcard)
    path.append(start_wildcard)
    refs = wildcard_refs.get(start_wildcard, set())
    if not refs:
        return 1, path  # Leaf node
    max_depth = 0
    max_path = path.copy()
    for ref in refs:
        if ref in wildcard_files:  # Only follow if target exists
            depth, sub_path = find_max_depth(ref, visited.copy(), path.copy())
            if depth > max_depth:
                max_depth = depth
                max_path = sub_path
    return max_depth + 1, max_path


def main():
    scan_wildcards()
    # Find wildcards with references
    wildcards_with_refs = [(name, refs) for name, refs in wildcard_refs.items() if refs]
    print(f"Analyzing {len(wildcards_with_refs)} wildcards with references...")
    print()
    # Calculate depth for each wildcard
    depths = []
    for name, refs in wildcards_with_refs:
        depth, path = find_max_depth(name)
        if depth >= 2:  # At least one level of transitive reference
            depths.append((depth, name, path))
    # Sort by depth (deepest first)
    depths.sort(reverse=True)
    print("=" * 80)
    print("WILDCARD REFERENCE DEPTH ANALYSIS")
    print("=" * 80)
    print()
    # Show top 20 deepest
    print("Top 20 Deepest Transitive References:")
    print()
    for i, (depth, name, path) in enumerate(depths[:20], 1):
        print(f"{i}. Depth {depth}: __{name}__")
        print(f"   Path: {' → '.join(f'__{p}__' for p in path)}")
        if name in wildcard_files:
            print(f"   File: {wildcard_files[name]}")
        print()
    # Find 5+ depth wildcards
    deep_wildcards = [(depth, name, path) for depth, name, path in depths if depth >= 5]
    print()
    print("=" * 80)
    print(f"WILDCARDS WITH 5+ DEPTH ({len(deep_wildcards)} found)")
    print("=" * 80)
    print()
    if deep_wildcards:
        for depth, name, path in deep_wildcards:
            print(f"🎯 __{name}__ (Depth: {depth})")
            print(f"   Chain: {' → '.join(f'__{p}__' for p in path)}")
            if name in wildcard_files:
                print(f"   File: {wildcard_files[name]}")
            print()
        print()
        print("=" * 80)
        print("RECOMMENDED TEST CASE")
        print("=" * 80)
        print()
        depth, name, path = deep_wildcards[0]
        print(f"Use __{name}__ for testing deep transitive loading")
        print(f"Depth: {depth} levels")
        print(f"Chain: {' → '.join(f'__{p}__' for p in path)}")
        print()
        print(f'Test input: "__{name}__"')
        print(f"Expected: Will resolve through {depth} levels to actual content")
    else:
        print("No wildcards with 5+ depth found.")
        print()
        if depths:
            depth, name, path = depths[0]
            print(f"Maximum depth found: {depth}")
            print(f"Wildcard: __{name}__")
            print(f"Chain: {' → '.join(f'__{p}__' for p in path)}")


if __name__ == "__main__":
    main()


@@ -0,0 +1,113 @@
#!/bin/bash
# Find transitive wildcard references in the wildcard directories

# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_PACK_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
WILDCARDS_DIR="$IMPACT_PACK_DIR/wildcards"
CUSTOM_WILDCARDS_DIR="$IMPACT_PACK_DIR/custom_wildcards"

echo "=========================================="
echo "Transitive Wildcard Reference Scanner"
echo "=========================================="
echo ""
echo "Scanning for wildcard references (pattern: __*__)..."
echo ""

# Function to find references in a file
find_references() {
    local file=$1
    local relative_path=${file#"$IMPACT_PACK_DIR"/}
    # Find all __wildcard__ patterns in the file
    local refs
    refs=$(grep -o '__[^_]*__' "$file" 2>/dev/null | sort -u)
    if [ -n "$refs" ]; then
        echo "📄 $relative_path"
        echo "   References:"
        echo "$refs" | while read -r ref; do
            # Remove __ from both ends
            local clean_ref=${ref#__}
            clean_ref=${clean_ref%__}
            # Check if the referenced file exists
            local found=false
            # Check in wildcards/
            if [ -f "$WILDCARDS_DIR/$clean_ref.txt" ]; then
                echo "     $ref (wildcards/$clean_ref.txt) ✓"
                found=true
            elif [ -f "$WILDCARDS_DIR/$clean_ref.yaml" ]; then
                echo "     $ref (wildcards/$clean_ref.yaml) ✓"
                found=true
            elif [ -f "$WILDCARDS_DIR/$clean_ref.yml" ]; then
                echo "     $ref (wildcards/$clean_ref.yml) ✓"
                found=true
            fi
            # Check in custom_wildcards/
            if [ -f "$CUSTOM_WILDCARDS_DIR/$clean_ref.txt" ]; then
                echo "     $ref (custom_wildcards/$clean_ref.txt) ✓"
                found=true
            elif [ -f "$CUSTOM_WILDCARDS_DIR/$clean_ref.yaml" ]; then
                echo "     $ref (custom_wildcards/$clean_ref.yaml) ✓"
                found=true
            elif [ -f "$CUSTOM_WILDCARDS_DIR/$clean_ref.yml" ]; then
                echo "     $ref (custom_wildcards/$clean_ref.yml) ✓"
                found=true
            fi
            if [ "$found" = false ]; then
                echo "     $ref ❌ (not found)"
            fi
        done
        echo ""
    fi
}

# Scan TXT files
echo "=== TXT Files with References ==="
echo ""
find "$WILDCARDS_DIR" "$CUSTOM_WILDCARDS_DIR" -name "*.txt" 2>/dev/null | while read -r file; do
    find_references "$file"
done

# Scan YAML files
echo ""
echo "=== YAML Files with References ==="
echo ""
find "$WILDCARDS_DIR" "$CUSTOM_WILDCARDS_DIR" \( -name "*.yaml" -o -name "*.yml" \) 2>/dev/null | while read -r file; do
    find_references "$file"
done

echo ""
echo "=========================================="
echo "Recommended Test Cases"
echo "=========================================="
echo ""
echo "1. Simple TXT wildcard:"
echo "   Input: __samples/flower__"
echo "   Type: Direct reference (no transitive)"
echo ""

# Find a good transitive TXT example
# (note: no `local` here -- that is only valid inside a function)
echo "2. TXT → TXT transitive:"
find "$CUSTOM_WILDCARDS_DIR" -name "*.txt" -exec grep -l "__.*__" {} \; 2>/dev/null | head -1 | while read -r file; do
    base_name=$(basename "$file" .txt)
    first_ref=$(grep -o '__[^_]*__' "$file" 2>/dev/null | head -1)
    echo "   Input: __${base_name}__"
    echo "   Resolves to: $first_ref (and others)"
    echo "   File: ${file#"$IMPACT_PACK_DIR"/}"
done
echo ""
echo "3. YAML transitive:"
echo "   Input: __colors__"
echo "   Resolves to: __cold__ or __warm__ → blue|red|orange|yellow"
echo "   File: custom_wildcards/test.yaml"
echo ""
echo "=========================================="
echo "Scan Complete"
echo "=========================================="


@@ -0,0 +1,74 @@
#!/bin/bash
# Quick test for wildcard lazy loading
echo "=========================================="
echo "Wildcard Lazy Load Quick Test"
echo "=========================================="
echo ""
# Test 1: Get wildcard list (before accessing any wildcards)
echo "=== Test 1: Wildcard List (BEFORE access) ==="
curl -s http://127.0.0.1:8188/impact/wildcards/list > /tmp/wc_list_before.json
COUNT_BEFORE=$(cat /tmp/wc_list_before.json | python3 -c "import sys, json; print(len(json.load(sys.stdin).get('data', [])))")
echo "Total wildcards: $COUNT_BEFORE"
echo ""
# Test 2: Simple wildcard
echo "=== Test 2: Simple Wildcard ==="
curl -s -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__samples/flower__", "seed": 42}' > /tmp/wc_simple.json
RESULT2=$(cat /tmp/wc_simple.json | python3 -c "import sys, json; print(json.load(sys.stdin).get('text', 'ERROR'))")
echo "Input: __samples/flower__"
echo "Output: $RESULT2"
echo ""
# Test 3: Depth 3 transitive
echo "=== Test 3: Depth 3 Transitive (TXT→TXT→TXT) ==="
curl -s -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__adnd__ creature", "seed": 222}' > /tmp/wc_depth3.json
RESULT3=$(cat /tmp/wc_depth3.json | python3 -c "import sys, json; print(json.load(sys.stdin).get('text', 'ERROR'))")
echo "Input: __adnd__ creature"
echo "Output: $RESULT3"
echo "Chain: adnd → (dragon/beast/...) → (dragon_spirit/...)"
echo ""
# Test 4: YAML transitive
echo "=== Test 4: YAML Transitive ==="
curl -s -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__colors__", "seed": 333}' > /tmp/wc_yaml.json
RESULT4=$(cat /tmp/wc_yaml.json | python3 -c "import sys, json; print(json.load(sys.stdin).get('text', 'ERROR'))")
echo "Input: __colors__"
echo "Output: $RESULT4"
echo "Chain: colors → (cold|warm) → (blue|red|orange|yellow)"
echo ""
# Test 5: Get wildcard list (AFTER accessing wildcards)
echo "=== Test 5: Wildcard List (AFTER access) ==="
curl -s http://127.0.0.1:8188/impact/wildcards/list > /tmp/wc_list_after.json
COUNT_AFTER=$(cat /tmp/wc_list_after.json | python3 -c "import sys, json; print(len(json.load(sys.stdin).get('data', [])))")
echo "Total wildcards: $COUNT_AFTER"
echo ""
# Compare
echo "=========================================="
echo "Results"
echo "=========================================="
echo ""
if [ "$COUNT_BEFORE" -eq "$COUNT_AFTER" ]; then
    echo "✅ Wildcard list unchanged: $COUNT_BEFORE = $COUNT_AFTER"
else
    echo "❌ Wildcard list changed: $COUNT_BEFORE != $COUNT_AFTER"
fi
if [ "$RESULT2" != "ERROR" ] && [ "$RESULT3" != "ERROR" ] && [ "$RESULT4" != "ERROR" ]; then
    echo "✅ All wildcards resolved successfully"
else
    echo "❌ Some wildcards failed"
fi
echo ""
echo "Check /tmp/comfyui_ondemand.log for loading mode"
grep -i "wildcard.*mode" /tmp/comfyui_ondemand.log | tail -1

---
# Test Wildcard Files Documentation
This directory contains test wildcard files created to validate various features and edge cases of the wildcard system.
## Test Categories
### 1. Error Handling Tests
**test_error_cases.txt**
- Purpose: Test handling of non-existent wildcard references
- Contains: References to `__nonexistent_wildcard__` that should be handled gracefully
- Expected: System should not crash, provide meaningful error or leave unexpanded
**test_circular_a.txt + test_circular_b.txt**
- Purpose: Test circular reference detection (A→B→A)
- Contains: Mutual references between two wildcards
- Expected: System should detect cycle and prevent infinite loop (max 100 iterations)
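The 100-iteration cap can be illustrated with a minimal expansion loop (a sketch only; `WILDCARDS` and `expand` are hypothetical names, not the Impact Pack API):

```python
import re

# Hypothetical wildcard table containing a deliberate A -> B -> A cycle.
WILDCARDS = {
    "test_circular_a": "__test_circular_b__",
    "test_circular_b": "__test_circular_a__",
}

def expand(text, max_iterations=100):
    """Repeatedly substitute __name__ references, bailing out after a cap."""
    pattern = re.compile(r"__([\w/]+)__")
    for _ in range(max_iterations):
        replaced = pattern.sub(lambda m: WILDCARDS.get(m.group(1), m.group(0)), text)
        if replaced == text:  # fixed point reached: nothing left to expand
            return replaced
        text = replaced
    return text  # cap hit: a cycle never reaches a fixed point

print(expand("__test_circular_a__"))  # the cycle is cut off instead of looping forever
```

A non-cyclic reference resolves normally on the first pass; only the cycle exhausts the cap.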
### 2. Encoding Tests
**test_encoding_utf8.txt**
- Purpose: Test UTF-8 multi-language support
- Contains:
  - Emoji: 🌸🌺🌼🌻🌷
  - Japanese: さくら, はな, 美しい花, 桜の木
  - Chinese: 花, 玫瑰, 莲花, 牡丹
  - Korean: 꽃, 장미, 벚꽃
  - Arabic (RTL): زهرة, وردة
  - Mixed: `🌸 beautiful 美しい flower زهرة 꽃`
- Expected: All characters render correctly, no encoding errors
**test_encoding_emoji.txt**
- Purpose: Test emoji handling across categories
- Contains: Nature, animals, food, hearts, and mixed emoji with text
- Expected: Emojis render correctly in results
**test_encoding_special.txt**
- Purpose: Test special Unicode characters
- Contains:
  - Mathematical symbols: ∀∂∃∅∆∇∈∉
  - Greek letters: α β γ δ ε ζ
  - Currency: $ € £ ¥ ₹ ₽ ₩
  - Box drawing: ┌─┬─┐
  - Diacritics: Café résumé naïve Zürich
  - Special punctuation: … — • · °
- Expected: All symbols preserved correctly
### 3. Edge Case Tests
**test_edge_empty_lines.txt**
- Purpose: Test handling of empty lines and whitespace-only lines
- Contains: Options separated by variable empty lines
- Expected: Empty lines ignored, only non-empty options selected
**test_edge_whitespace.txt**
- Purpose: Test leading/trailing whitespace handling
- Contains: Options with tabs, spaces, mixed whitespace
- Expected: Whitespace handling according to parser rules
**test_edge_long_lines.txt**
- Purpose: Test very long line handling
- Contains:
  - Short lines
  - Medium lines (~100 chars)
  - Very long lines with spaces (>200 chars)
  - Ultra-long lines without spaces (continuous text)
- Expected: No truncation or memory issues, proper handling
**test_edge_special_chars.txt**
- Purpose: Test special characters that might cause parsing issues
- Contains:
  - Embedded wildcard syntax: `__wildcard__` as literal text
  - Dynamic prompt syntax: `{option|option}` as literal text
  - Regex special chars: `.`, `*`, `+`, `?`, `|`, `\`, `$`, `^`
  - Quote characters: `"`, `'`, `` ` ``
  - HTML special chars: `&`, `<`, `>`, `=`
- Expected: Special chars treated as literal text in final output
**test_edge_case_insensitive.txt**
- Purpose: Validate case-insensitive wildcard matching
- Contains: Options in various case patterns
- Expected: `__test_edge_case_insensitive__` and `__TEST_EDGE_CASE_INSENSITIVE__` return same results
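One common way to get this behaviour is to normalise keys at lookup time (a sketch under the assumption that keys are stored lowercase; `lookup` is not the real API):

```python
# Hypothetical store: keys are kept lowercase at registration time.
WILDCARDS = {"test_edge_case_insensitive": ["lowercase option", "UPPERCASE OPTION"]}

def lookup(name):
    """Case-insensitive lookup by lowering the requested name."""
    return WILDCARDS.get(name.lower())

# Both spellings resolve to the same option list.
print(lookup("TEST_EDGE_CASE_INSENSITIVE") == lookup("test_edge_case_insensitive"))
```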
**test_comments.txt**
- Purpose: Test comment handling with `#` prefix
- Contains: Lines starting with `#` mixed with valid options
- Expected: Comment lines ignored, only non-comment lines selected
### 4. Deep Nesting Tests (7 levels)
**test_nesting_level1.txt → test_nesting_level7.txt**
- Purpose: Test transitive wildcard expansion up to 7 levels
- Structure:
  - Level 1 → references Level 2
  - Level 2 → references Level 3
  - ...
  - Level 7 → final options (no further references)
- Usage: Access `__test_nesting_level1__` to trigger 7-level expansion
- Expected: All levels expand correctly, result from level 7 appears
### 5. Syntax Feature Tests
**test_quantifier.txt**
- Purpose: Test quantifier syntax `N#__wildcard__`
- Contains: List of color options
- Usage: `3#__test_quantifier__` should expand to 3 repeated wildcards
- Expected: Correct repetition and expansion
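The quantifier rewrite can be sketched in a few lines (an illustration only; the regex and the comma separator are assumptions, not the Impact Pack implementation):

```python
import re

def expand_quantifiers(text):
    """Rewrite N#__name__ into N copies of __name__ (comma-joined here
    purely for illustration)."""
    return re.sub(
        r"(\d+)#(__[\w/]+__)",
        lambda m: ", ".join([m.group(2)] * int(m.group(1))),
        text,
    )

print(expand_quantifiers("3#__test_quantifier__"))
# → __test_quantifier__, __test_quantifier__, __test_quantifier__
```

Each emitted copy would then be resolved independently by the normal wildcard pass.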
**test_pattern_match.txt**
- Purpose: Test pattern matching `__*/name__`
- Contains: Options with identifiable pattern
- Usage: `__*/test_pattern_match__` should match this file
- Expected: Depth-agnostic matching works correctly
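Depth-agnostic matching can be approximated with `fnmatch`, where `*` also crosses `/` (a sketch; `KEYS` is a made-up key set, and the real resolver may additionally match a top-level name that has no `/` prefix):

```python
from fnmatch import fnmatch

# Hypothetical set of discovered wildcard keys (path-style names).
KEYS = [
    "samples/flower",
    "nested/deep/test_pattern_match",
    "test_pattern_match",
]

def match_pattern(pattern):
    """Return every known key that matches a glob-style pattern."""
    return [k for k in KEYS if fnmatch(k, pattern)]

print(match_pattern("*/test_pattern_match"))  # → ['nested/deep/test_pattern_match']
```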
## Test Usage Examples
### Basic Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__test_encoding_emoji__", "seed": 42}'
```
### Nesting Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__test_nesting_level1__", "seed": 42}'
```
### Error Handling Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__test_error_cases__", "seed": 42}'
```
### Circular Reference Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__test_circular_a__", "seed": 42}'
```
### Quantifier Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "3#__test_quantifier__", "seed": 42}'
```
### Pattern Matching Test
```bash
curl -X POST http://127.0.0.1:8188/impact/wildcards \
-H "Content-Type: application/json" \
-d '{"text": "__*/test_pattern_match__", "seed": 42}'
```
## Test Coverage
These test files address the following critical gaps identified in the test coverage analysis:
1. **Error Handling** - Missing wildcard files, circular references
2. **UTF-8 Encoding** - Multi-language support (emoji, CJK, RTL)
3. **Edge Cases** - Empty lines, whitespace, long lines, special chars
4. **Deep Nesting** - 7-level transitive expansion
5. **Comment Handling** - Lines starting with `#`
6. **Case Insensitivity** - Case-insensitive wildcard matching
7. **Pattern Matching** - `__*/name__` syntax
8. **Quantifiers** - `N#__wildcard__` syntax
## Expected Test Results
All tests should:
- Not crash the system
- Return valid results or graceful error messages
- Preserve character encoding correctly
- Handle edge cases without data corruption
- Respect the 100-iteration limit for circular references
- Demonstrate deterministic behavior with same seed
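Seed determinism is what makes these tests repeatable; the usual pattern is a per-call seeded RNG rather than the shared global one (a sketch, not the Impact Pack code):

```python
import random

OPTIONS = ["rose", "tulip", "lily", "daisy"]

def pick(options, seed):
    """A fresh random.Random(seed) makes the choice a pure function of the seed."""
    return random.Random(seed).choice(options)

# Same seed, same result, across calls and across processes.
print(pick(OPTIONS, 42), pick(OPTIONS, 42))
```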
---
**Created**: 2025-11-18
**Purpose**: Test coverage validation for wildcard system
**Total Files**: 21 test wildcard files

---
#!/bin/bash
# Verify wildcard lazy loading through ComfyUI API
set -e
# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_PACK_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
COMFYUI_DIR="$(cd "$IMPACT_PACK_DIR/../.." && pwd)"
CONFIG_FILE="$IMPACT_PACK_DIR/impact-pack.ini"
BACKUP_CONFIG="$IMPACT_PACK_DIR/impact-pack.ini.backup"
# ANSI colors ($'...' quoting so plain echo prints real escape sequences)
GREEN=$'\033[0;32m'
RED=$'\033[0;31m'
BLUE=$'\033[0;34m'
YELLOW=$'\033[1;33m'
NC=$'\033[0m'
echo "=========================================="
echo "Wildcard Lazy Load Verification Test"
echo "=========================================="
echo ""
echo "This test verifies that on-demand loading produces"
echo "identical results to full cache mode."
echo ""
# Backup original config
if [ -f "$CONFIG_FILE" ]; then
cp "$CONFIG_FILE" "$BACKUP_CONFIG"
echo "✓ Backed up original config"
fi
# Cleanup function
cleanup() {
echo ""
echo "Cleaning up..."
pkill -f "python.*main.py" 2>/dev/null || true
sleep 2
}
# Test with specific configuration
test_mode() {
local MODE=$1
local CACHE_LIMIT=$2
local PORT=$3
echo ""
echo "${BLUE}=========================================${NC}"
echo "${BLUE}Testing: $MODE (limit: ${CACHE_LIMIT}MB, port: $PORT)${NC}"
echo "${BLUE}=========================================${NC}"
# Update config
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_PACK_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = $CACHE_LIMIT
EOF
# Start server
cleanup
cd "$COMFYUI_DIR"
bash run.sh --listen 127.0.0.1 --port $PORT > /tmp/comfyui_${MODE}.log 2>&1 &
COMFYUI_PID=$!
echo "Waiting for server startup..."
sleep 15
# Check server
if ! curl -s http://127.0.0.1:$PORT/ > /dev/null; then
echo "${RED}✗ Server failed to start${NC}"
cat /tmp/comfyui_${MODE}.log | grep -i "wildcard\|error" | tail -20
return 1
fi
# Get loading mode from log
MODE_LOG=$(grep -i "wildcard.*mode" /tmp/comfyui_${MODE}.log | tail -1)
echo "${YELLOW}$MODE_LOG${NC}"
echo ""
# Test 1: Get wildcard list (BEFORE any access in on-demand mode)
echo "📋 Test 1: Get wildcard list"
LIST_RESULT=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list)
LIST_COUNT=$(echo "$LIST_RESULT" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Total wildcards: $LIST_COUNT"
echo " Sample: $(echo "$LIST_RESULT" | python3 -c "import sys, json; print(', '.join(json.load(sys.stdin)['data'][:10]))")"
echo "$LIST_RESULT" > /tmp/result_${MODE}_list.json
echo ""
# Test 2: Simple wildcard
echo "📋 Test 2: Simple wildcard"
RESULT1=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__samples/flower__", "seed": 42}')
TEXT1=$(echo "$RESULT1" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: __samples/flower__"
echo " Output: $TEXT1"
echo "$RESULT1" > /tmp/result_${MODE}_simple.json
echo ""
# Test 3: Depth 3 transitive (adnd → dragon → dragon_spirit)
echo "📋 Test 3: Depth 3 transitive (TXT → TXT → TXT)"
RESULT2=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__adnd__ creature", "seed": 222}')
TEXT2=$(echo "$RESULT2" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: __adnd__ creature (depth 3: adnd → dragon → dragon_spirit)"
echo " Output: $TEXT2"
echo "$RESULT2" > /tmp/result_${MODE}_depth3.json
echo ""
# Test 4: YAML transitive (colors → cold/warm → blue/red/orange/yellow)
echo "📋 Test 4: YAML transitive"
RESULT3=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__colors__", "seed": 333}')
TEXT3=$(echo "$RESULT3" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: __colors__ (YAML: colors → cold|warm → blue|red|orange|yellow)"
echo " Output: $TEXT3"
echo "$RESULT3" > /tmp/result_${MODE}_yaml.json
echo ""
# Test 5: Get wildcard list AGAIN (AFTER access in on-demand mode)
echo "📋 Test 5: Get wildcard list (after access)"
LIST_RESULT2=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list)
LIST_COUNT2=$(echo "$LIST_RESULT2" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Total wildcards: $LIST_COUNT2"
echo "$LIST_RESULT2" > /tmp/result_${MODE}_list_after.json
echo ""
# Compare before/after list
if [ "$MODE" = "on_demand" ]; then
if [ "$LIST_COUNT" -eq "$LIST_COUNT2" ]; then
echo "${GREEN}✓ Wildcard list unchanged after access (${LIST_COUNT} = ${LIST_COUNT2})${NC}"
else
echo "${RED}✗ Wildcard list changed after access (${LIST_COUNT} != ${LIST_COUNT2})${NC}"
fi
echo ""
fi
cleanup
echo "${GREEN}$MODE tests completed${NC}"
echo ""
}
# Run tests
test_mode "full_cache" 100 8190
test_mode "on_demand" 1 8191
# Compare results
echo ""
echo "=========================================="
echo "COMPARISON RESULTS"
echo "=========================================="
echo ""
compare_test() {
local TEST_NAME=$1
local FILE_SUFFIX=$2
echo "Test: $TEST_NAME"
DIFF=$(diff /tmp/result_full_cache_${FILE_SUFFIX}.json /tmp/result_on_demand_${FILE_SUFFIX}.json || true)
if [ -z "$DIFF" ]; then
echo "${GREEN}✓ Results MATCH${NC}"
else
echo "${RED}✗ Results DIFFER${NC}"
echo "Difference:"
echo "$DIFF" | head -10
fi
echo ""
}
compare_test "Wildcard List (before access)" "list"
compare_test "Simple Wildcard" "simple"
compare_test "Depth 3 Transitive" "depth3"
compare_test "YAML Transitive" "yaml"
compare_test "Wildcard List (after access)" "list_after"
# Summary
echo "=========================================="
echo "SUMMARY"
echo "=========================================="
echo ""
ALL_MATCH=true
for suffix in list simple depth3 yaml list_after; do
if ! diff /tmp/result_full_cache_${suffix}.json /tmp/result_on_demand_${suffix}.json > /dev/null 2>&1; then
ALL_MATCH=false
break
fi
done
if [ "$ALL_MATCH" = true ]; then
echo "${GREEN}🎉 ALL TESTS PASSED${NC}"
echo "${GREEN}On-demand loading produces IDENTICAL results to full cache mode!${NC}"
EXIT_CODE=0
else
echo "${RED}❌ TESTS FAILED${NC}"
echo "${RED}On-demand loading has consistency issues!${NC}"
EXIT_CODE=1
fi
echo ""
# Restore config
if [ -f "$BACKUP_CONFIG" ]; then
mv "$BACKUP_CONFIG" "$CONFIG_FILE"
echo "✓ Restored original config"
fi
cleanup
echo ""
echo "=========================================="
echo "Test Complete"
echo "=========================================="
exit $EXIT_CODE

---
#!/usr/bin/env python3
"""
Verify that wildcard lists are identical before and after on-demand loading.
This test ensures that LazyWildcardLoader maintains consistency:
1. Full cache mode: all data loaded immediately
2. On-demand mode (before access): LazyWildcardLoader proxies
3. On-demand mode (after access): data loaded on demand
All three scenarios should produce identical wildcard lists and values.
"""
import sys
import os
# Add parent directory to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from modules.impact import config
from modules.impact.wildcards import wildcard_load, wildcard_dict, is_on_demand_mode, process
def get_wildcard_list():
"""Get list of all wildcard keys"""
return sorted(list(wildcard_dict.keys()))
def get_wildcard_sample_values(wildcards_to_test=None):
"""Get sample values from specific wildcards"""
if wildcards_to_test is None:
wildcards_to_test = [
'samples/flower',
'samples/jewel',
'adnd', # Depth 3 transitive
'all', # Depth 3 transitive
'colors', # YAML transitive
]
values = {}
for key in wildcards_to_test:
if key in wildcard_dict:
data = wildcard_dict[key]
# Convert to list if it's a LazyWildcardLoader
if hasattr(data, 'get_data'):
data = data.get_data()
values[key] = list(data) if data else []
else:
values[key] = None
return values
def test_full_cache_mode():
"""Test with full cache mode (limit = 100 MB)"""
print("=" * 80)
print("TEST 1: Full Cache Mode")
print("=" * 80)
print()
# Set high cache limit to force full cache mode
config.get_config()['wildcard_cache_limit_mb'] = 100
# Reload wildcards
wildcard_load()
# Check mode
mode = is_on_demand_mode()
print(f"Mode: {'On-Demand' if mode else 'Full Cache'}")
assert not mode, "Should be in Full Cache mode"
# Get wildcard list
wc_list = get_wildcard_list()
print(f"Total wildcards: {len(wc_list)}")
print(f"Sample wildcards: {wc_list[:10]}")
print()
# Get sample values
values = get_wildcard_sample_values()
print("Sample values:")
for key, val in values.items():
if val is not None:
print(f" {key}: {len(val)} items - {val[:3] if len(val) >= 3 else val}")
else:
print(f" {key}: NOT FOUND")
print()
return {
'mode': 'full_cache',
'wildcard_list': wc_list,
'values': values,
}
def test_on_demand_mode_before_access():
"""Test with on-demand mode before accessing data"""
print("=" * 80)
print("TEST 2: On-Demand Mode (Before Access)")
print("=" * 80)
print()
# Set low cache limit to force on-demand mode
config.get_config()['wildcard_cache_limit_mb'] = 1
# Reload wildcards
wildcard_load()
# Check mode
mode = is_on_demand_mode()
print(f"Mode: {'On-Demand' if mode else 'Full Cache'}")
assert mode, "Should be in On-Demand mode"
# Get wildcard list (should work even without loading data)
wc_list = get_wildcard_list()
print(f"Total wildcards: {len(wc_list)}")
print(f"Sample wildcards: {wc_list[:10]}")
print()
# Check that wildcards are LazyWildcardLoader instances
lazy_count = sum(1 for k in wc_list if hasattr(wildcard_dict[k], 'get_data'))
print(f"LazyWildcardLoader instances: {lazy_count}/{len(wc_list)}")
print()
return {
'mode': 'on_demand_before',
'wildcard_list': wc_list,
'lazy_count': lazy_count,
}
def test_on_demand_mode_after_access():
"""Test with on-demand mode after accessing data"""
print("=" * 80)
print("TEST 3: On-Demand Mode (After Access)")
print("=" * 80)
print()
# Mode should still be on-demand from previous test
mode = is_on_demand_mode()
print(f"Mode: {'On-Demand' if mode else 'Full Cache'}")
assert mode, "Should still be in On-Demand mode"
# Get sample values (this will trigger lazy loading)
values = get_wildcard_sample_values()
print("Sample values (after access):")
for key, val in values.items():
if val is not None:
print(f" {key}: {len(val)} items - {val[:3] if len(val) >= 3 else val}")
else:
print(f" {key}: NOT FOUND")
print()
# Test deep transitive wildcards
print("Testing deep transitive wildcards:")
test_cases = [
("__adnd__", 42), # Depth 3: adnd → dragon → dragon_spirit
("__all__", 123), # Depth 3: all → giant → giant_soldier
]
for wildcard_text, seed in test_cases:
result = process(wildcard_text, seed)
print(f" {wildcard_text} (seed={seed}): {result}")
print()
return {
'mode': 'on_demand_after',
'wildcard_list': get_wildcard_list(),
'values': values,
}
def compare_results(result1, result2, result3):
"""Compare results from all three tests"""
print("=" * 80)
print("COMPARISON RESULTS")
print("=" * 80)
print()
# Compare wildcard lists
list1 = result1['wildcard_list']
list2 = result2['wildcard_list']
list3 = result3['wildcard_list']
print("1. Wildcard List Comparison")
print(f" Full Cache: {len(list1)} wildcards")
print(f" On-Demand (before): {len(list2)} wildcards")
print(f" On-Demand (after): {len(list3)} wildcards")
if list1 == list2 == list3:
print(" ✅ All lists are IDENTICAL")
else:
print(" ❌ Lists DIFFER")
if list1 != list2:
print(f" Full Cache vs On-Demand (before): {len(set(list1) - set(list2))} differences")
if list1 != list3:
print(f" Full Cache vs On-Demand (after): {len(set(list1) - set(list3))} differences")
if list2 != list3:
print(f" On-Demand (before) vs On-Demand (after): {len(set(list2) - set(list3))} differences")
print()
# Compare sample values
values1 = result1['values']
values3 = result3['values']
print("2. Sample Values Comparison")
all_match = True
for key in values1.keys():
v1 = values1[key]
v3 = values3[key]
if v1 == v3:
status = "✅ MATCH"
else:
status = "❌ DIFFER"
all_match = False
print(f" {key}: {status}")
if v1 != v3:
print(f" Full Cache: {len(v1) if v1 else 0} items")
print(f" On-Demand: {len(v3) if v3 else 0} items")
print()
if all_match:
print("✅ ALL VALUES MATCH - On-demand loading is CONSISTENT")
else:
print("❌ VALUES DIFFER - On-demand loading has ISSUES")
print()
return list1 == list2 == list3 and all_match
def main():
print()
print("=" * 80)
print("WILDCARD LAZY LOAD VERIFICATION TEST")
print("=" * 80)
print()
print("This test verifies that on-demand loading produces identical results")
print("to full cache mode.")
print()
# Run tests
result1 = test_full_cache_mode()
result2 = test_on_demand_mode_before_access()
result3 = test_on_demand_mode_after_access()
# Compare results
success = compare_results(result1, result2, result3)
# Final result
print("=" * 80)
if success:
print("🎉 TEST PASSED - Lazy loading is working correctly!")
else:
print("❌ TEST FAILED - Lazy loading has consistency issues!")
print("=" * 80)
print()
return 0 if success else 1
if __name__ == '__main__':
sys.exit(main())

---
#!/usr/bin/env python3
"""
Progressive On-Demand Wildcard Loading Unit Tests
Tests that wildcard loading happens progressively as wildcards are accessed.
"""
import sys
import os
import tempfile
# Add parent directory to path
test_dir = os.path.dirname(os.path.abspath(__file__))
impact_pack_dir = os.path.dirname(test_dir)
sys.path.insert(0, impact_pack_dir)
from modules.impact import wildcards
def test_early_termination():
"""Test that calculate_directory_size stops early when limit exceeded"""
print("=" * 60)
print("TEST 1: Early Termination Size Calculation")
print("=" * 60)
# Create temporary directory with test files
with tempfile.TemporaryDirectory() as tmpdir:
# Create files totaling 100 bytes
for i in range(10):
with open(os.path.join(tmpdir, f"test{i}.txt"), 'w') as f:
f.write("x" * 10) # 10 bytes each
# Test without limit (should scan all)
total_size = wildcards.calculate_directory_size(tmpdir)
print(f"✓ Total size without limit: {total_size} bytes")
assert total_size == 100, f"Expected 100 bytes, got {total_size}"
# Test with limit (should stop early)
limited_size = wildcards.calculate_directory_size(tmpdir, limit=50)
print(f"✓ Size with 50 byte limit: {limited_size} bytes")
assert limited_size >= 50, f"Expected >= 50 bytes, got {limited_size}"
assert limited_size <= total_size, "Limited should not exceed total"
print(f"✓ Early termination working (stopped at {limited_size} bytes)")
print("\n✅ Early termination test PASSED\n")
def test_metadata_scan():
"""Test that scan_wildcard_metadata only scans file paths, not data"""
print("=" * 60)
print("TEST 2: Metadata-Only Scan")
print("=" * 60)
# Create temporary wildcard directory
with tempfile.TemporaryDirectory() as tmpdir:
# Create test files
test_file1 = os.path.join(tmpdir, "test1.txt")
test_file2 = os.path.join(tmpdir, "test2.txt")
test_yaml = os.path.join(tmpdir, "test3.yaml")
with open(test_file1, 'w') as f:
f.write("option1a\noption1b\noption1c\n")
with open(test_file2, 'w') as f:
f.write("option2a\noption2b\n")
with open(test_yaml, 'w') as f:
f.write("key1:\n - value1\n - value2\n")
# Clear globals
wildcards.available_wildcards = {}
wildcards.loaded_wildcards = {}
# Scan metadata only
print(f"✓ Scanning directory: {tmpdir}")
discovered = wildcards.scan_wildcard_metadata(tmpdir)
print(f"✓ Discovered {discovered} wildcards")
assert discovered == 3, f"Expected 3 wildcards, got {discovered}"
print(f"✓ Available wildcards: {list(wildcards.available_wildcards.keys())}")
assert len(wildcards.available_wildcards) == 3
# Verify that data is NOT loaded
assert len(wildcards.loaded_wildcards) == 0, "Data should not be loaded yet"
print("✓ No data loaded (metadata only)")
# Verify file paths are stored
for key in wildcards.available_wildcards.keys():
file_path = wildcards.available_wildcards[key]
assert os.path.exists(file_path), f"File path should exist: {file_path}"
print(f" - {key} -> {file_path}")
print("\n✅ Metadata scan test PASSED\n")
def test_progressive_loading():
"""Test that wildcards are loaded progressively on access"""
print("=" * 60)
print("TEST 3: Progressive On-Demand Loading")
print("=" * 60)
# Create temporary wildcard directory
with tempfile.TemporaryDirectory() as tmpdir:
# Create test files
test_file1 = os.path.join(tmpdir, "wildcard1.txt")
test_file2 = os.path.join(tmpdir, "wildcard2.txt")
test_file3 = os.path.join(tmpdir, "wildcard3.txt")
with open(test_file1, 'w') as f:
f.write("option1a\noption1b\n")
with open(test_file2, 'w') as f:
f.write("option2a\noption2b\n")
with open(test_file3, 'w') as f:
f.write("option3a\noption3b\n")
# Clear globals
wildcards.available_wildcards = {}
wildcards.loaded_wildcards = {}
wildcards._on_demand_mode = True
# Scan metadata
discovered = wildcards.scan_wildcard_metadata(tmpdir)
print(f"✓ Discovered {discovered} wildcards")
print(f"✓ Available: {len(wildcards.available_wildcards)}")
print(f"✓ Loaded: {len(wildcards.loaded_wildcards)}")
# Initial state: 3 available, 0 loaded
assert len(wildcards.available_wildcards) == 3
assert len(wildcards.loaded_wildcards) == 0
# Access first wildcard
print("\nAccessing wildcard1...")
data1 = wildcards.get_wildcard_value("wildcard1")
assert data1 is not None, "Should load wildcard1"
assert len(data1) == 2, f"Expected 2 options, got {len(data1)}"
print(f"✓ Loaded wildcard1: {data1}")
print(f"✓ Loaded count: {len(wildcards.loaded_wildcards)}")
assert len(wildcards.loaded_wildcards) == 1, "Should have 1 loaded wildcard"
# Access second wildcard
print("\nAccessing wildcard2...")
data2 = wildcards.get_wildcard_value("wildcard2")
assert data2 is not None, "Should load wildcard2"
print(f"✓ Loaded wildcard2: {data2}")
print(f"✓ Loaded count: {len(wildcards.loaded_wildcards)}")
assert len(wildcards.loaded_wildcards) == 2, "Should have 2 loaded wildcards"
# Re-access first wildcard (should use cache)
print("\nRe-accessing wildcard1 (cached)...")
data1_again = wildcards.get_wildcard_value("wildcard1")
assert data1_again == data1, "Cached data should match"
print("✓ Cache hit, data matches")
print(f"✓ Loaded count: {len(wildcards.loaded_wildcards)}")
assert len(wildcards.loaded_wildcards) == 2, "Count should not increase on cache hit"
# Access third wildcard
print("\nAccessing wildcard3...")
data3 = wildcards.get_wildcard_value("wildcard3")
assert data3 is not None, "Should load wildcard3"
print(f"✓ Loaded wildcard3: {data3}")
print(f"✓ Loaded count: {len(wildcards.loaded_wildcards)}")
assert len(wildcards.loaded_wildcards) == 3, "Should have 3 loaded wildcards"
# Verify all loaded
assert set(wildcards.loaded_wildcards.keys()) == {"wildcard1", "wildcard2", "wildcard3"}
print("✓ All wildcards loaded progressively")
print("\n✅ Progressive loading test PASSED\n")
def test_wildcard_list_functions():
"""Test get_wildcard_list() and get_loaded_wildcard_list()"""
print("=" * 60)
print("TEST 4: Wildcard List Functions")
print("=" * 60)
# Create temporary wildcard directory
with tempfile.TemporaryDirectory() as tmpdir:
# Create test files
for i in range(5):
with open(os.path.join(tmpdir, f"test{i}.txt"), 'w') as f:
f.write(f"option{i}a\noption{i}b\n")
# Clear globals
wildcards.available_wildcards = {}
wildcards.loaded_wildcards = {}
wildcards._on_demand_mode = True
# Scan metadata
wildcards.scan_wildcard_metadata(tmpdir)
# Test get_wildcard_list (should return all available)
all_wildcards = wildcards.get_wildcard_list()
print(f"✓ get_wildcard_list(): {len(all_wildcards)} wildcards")
assert len(all_wildcards) == 5, "Should return all available wildcards"
# Test get_loaded_wildcard_list (should return 0 initially)
loaded_wildcards_list = wildcards.get_loaded_wildcard_list()
print(f"✓ get_loaded_wildcard_list(): {len(loaded_wildcards_list)} wildcards (initial)")
assert len(loaded_wildcards_list) == 0, "Should return no loaded wildcards initially"
# Load some wildcards
wildcards.get_wildcard_value("test0")
wildcards.get_wildcard_value("test1")
# Test get_loaded_wildcard_list (should return 2 now)
loaded_wildcards_list = wildcards.get_loaded_wildcard_list()
print(f"✓ get_loaded_wildcard_list(): {len(loaded_wildcards_list)} wildcards (after loading 2)")
assert len(loaded_wildcards_list) == 2, "Should return 2 loaded wildcards"
# Verify loaded list is subset of available list
assert set(loaded_wildcards_list).issubset(set(all_wildcards)), "Loaded should be subset of available"
print("✓ Loaded list is subset of available list")
print("\n✅ Wildcard list functions test PASSED\n")
def main():
"""Run all tests"""
print("\n" + "=" * 60)
print("PROGRESSIVE ON-DEMAND LOADING TEST SUITE")
print("=" * 60 + "\n")
try:
test_early_termination()
test_metadata_scan()
test_progressive_loading()
test_wildcard_list_functions()
print("=" * 60)
print("✅ ALL TESTS PASSED")
print("=" * 60)
return 0
except Exception as e:
print("\n" + "=" * 60)
print(f"❌ TEST FAILED: {e}")
print("=" * 60)
import traceback
traceback.print_exc()
return 1
if __name__ == "__main__":
sys.exit(main())

---
#!/bin/bash
# Progressive On-Demand Wildcard Loading Test
# Verifies that wildcards are loaded progressively as they are accessed
set -e
# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_PACK_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
COMFYUI_DIR="$(cd "$IMPACT_PACK_DIR/../.." && pwd)"
CONFIG_FILE="$IMPACT_PACK_DIR/impact-pack.ini"
BACKUP_CONFIG="$IMPACT_PACK_DIR/impact-pack.ini.backup"
PORT=8195
# ANSI colors ($'...' quoting so plain echo prints real escape sequences)
GREEN=$'\033[0;32m'
RED=$'\033[0;31m'
BLUE=$'\033[0;34m'
YELLOW=$'\033[1;33m'
CYAN=$'\033[0;36m'
NC=$'\033[0m'
echo "=========================================="
echo "Progressive On-Demand Loading Test"
echo "=========================================="
echo ""
echo "This test verifies that /wildcards/list/loaded"
echo "increases progressively as wildcards are accessed."
echo ""
# Backup original config
if [ -f "$CONFIG_FILE" ]; then
cp "$CONFIG_FILE" "$BACKUP_CONFIG"
echo "✓ Backed up original config"
fi
# Cleanup function
cleanup() {
echo ""
echo "Cleaning up..."
pkill -f "python.*main.py.*$PORT" 2>/dev/null || true
sleep 2
}
# Setup on-demand mode (low cache limit)
echo "${BLUE}Setting up on-demand mode configuration${NC}"
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_PACK_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = 0.5
EOF
echo "✓ Configuration: on-demand mode (0.5MB limit)"
echo ""
# Start server
cleanup
cd "$COMFYUI_DIR"
echo "Starting ComfyUI server on port $PORT..."
bash run.sh --listen 127.0.0.1 --port $PORT > /tmp/progressive_test.log 2>&1 &
COMFYUI_PID=$!
echo "Waiting for server startup..."
sleep 15
# Check server
if ! curl -s http://127.0.0.1:$PORT/ > /dev/null; then
echo "${RED}✗ Server failed to start${NC}"
cat /tmp/progressive_test.log | grep -i "wildcard\|error" | tail -20
exit 1
fi
echo "${GREEN}✓ Server started${NC}"
echo ""
# Check loading mode from log
MODE_LOG=$(grep -i "wildcard.*mode" /tmp/progressive_test.log | tail -1)
echo "${YELLOW}$MODE_LOG${NC}"
echo ""
# Test Progressive Loading
echo "=========================================="
echo "Progressive Loading Verification"
echo "=========================================="
echo ""
# Step 1: Initial state (no wildcards accessed)
echo "${CYAN}Step 1: Initial state (before any wildcard access)${NC}"
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))" 2>/dev/null || echo "0")
ON_DEMAND=$(echo "$RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin).get('on_demand_mode', False))" 2>/dev/null || echo "false")
TOTAL_AVAILABLE=$(echo "$RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin).get('total_available', 0))" 2>/dev/null || echo "0")
echo " On-demand mode: $ON_DEMAND"
echo " Total available wildcards: $TOTAL_AVAILABLE"
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT${NC}"
if [ "$ON_DEMAND" != "True" ]; then
echo "${RED}✗ FAIL: On-demand mode not active!${NC}"
exit 1
fi
if [ "$LOADED_COUNT" -ne 0 ]; then
echo "${YELLOW}⚠ WARNING: Expected 0 loaded, got $LOADED_COUNT${NC}"
fi
echo ""
# Step 2: Access first wildcard
echo "${CYAN}Step 2: Access first wildcard (__samples/flower__)${NC}"
RESULT1=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__samples/flower__", "seed": 42}')
TEXT1=$(echo "$RESULT1" | python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))")
echo " Result: $TEXT1"
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT_1=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT_1${NC}"
if [ "$LOADED_COUNT_1" -lt 1 ]; then
echo "${RED}✗ FAIL: Expected at least 1 loaded wildcard${NC}"
exit 1
fi
echo "${GREEN}✓ PASS: Wildcard count increased${NC}"
echo ""
# Step 3: Access second wildcard (different from first)
echo "${CYAN}Step 3: Access second wildcard (__dragon__)${NC}"
RESULT2=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__dragon__", "seed": 200}')
TEXT2=$(echo "$RESULT2" | python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))")
echo " Result: $TEXT2"
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT_2=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT_2${NC}"
if [ "$LOADED_COUNT_2" -le "$LOADED_COUNT_1" ]; then
echo "${RED}✗ FAIL: Expected loaded count to increase (was $LOADED_COUNT_1, now $LOADED_COUNT_2)${NC}"
exit 1
fi
echo "${GREEN}✓ PASS: Wildcard count increased progressively${NC}"
echo ""
# Step 4: Access third wildcard (YAML)
echo "${CYAN}Step 4: Access third wildcard (__colors__)${NC}"
RESULT3=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__colors__", "seed": 333}')
TEXT3=$(echo "$RESULT3" | python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))")
echo " Result: $TEXT3"
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT_3=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
LOADED_LIST=$(echo "$RESPONSE" | python3 -c "import sys, json; print(', '.join(json.load(sys.stdin)['data'][:10]))")
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT_3${NC}"
echo " Sample loaded: $LOADED_LIST"
if [ "$LOADED_COUNT_3" -le "$LOADED_COUNT_2" ]; then
echo "${RED}✗ FAIL: Expected loaded count to increase (was $LOADED_COUNT_2, now $LOADED_COUNT_3)${NC}"
exit 1
fi
echo "${GREEN}✓ PASS: Wildcard count increased progressively${NC}"
echo ""
# Step 5: Re-access first wildcard (should not increase count)
echo "${CYAN}Step 5: Re-access first wildcard (cached)${NC}"
RESULT4=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__samples/flower__", "seed": 42}')
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT_4=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT_4${NC}"
if [ "$LOADED_COUNT_4" -ne "$LOADED_COUNT_3" ]; then
echo "${YELLOW}⚠ WARNING: Count changed on cache access (was $LOADED_COUNT_3, now $LOADED_COUNT_4)${NC}"
else
echo "${GREEN}✓ PASS: Cached access did not change count${NC}"
fi
echo ""
# Step 6: Deep transitive wildcard (should load multiple wildcards)
echo "${CYAN}Step 6: Deep transitive wildcard (__adnd__)${NC}"
RESULT5=$(curl -s http://127.0.0.1:$PORT/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__adnd__ creature", "seed": 222}')
TEXT5=$(echo "$RESULT5" | python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))")
echo " Result: $TEXT5"
RESPONSE=$(curl -s http://127.0.0.1:$PORT/impact/wildcards/list/loaded)
LOADED_COUNT_5=$(echo "$RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Loaded wildcards: ${YELLOW}$LOADED_COUNT_5${NC}"
if [ "$LOADED_COUNT_5" -le "$LOADED_COUNT_4" ]; then
echo "${YELLOW}⚠ Transitive wildcards may already be loaded${NC}"
else
echo "${GREEN}✓ PASS: Transitive wildcards loaded progressively${NC}"
fi
echo ""
# Summary
echo "=========================================="
echo "Progressive Loading Summary"
echo "=========================================="
echo ""
echo "Total available wildcards: $TOTAL_AVAILABLE"
echo "Loading progression:"
echo " Initial: $LOADED_COUNT"
echo " After step 2: $LOADED_COUNT_1 (+$(($LOADED_COUNT_1 - $LOADED_COUNT)))"
echo " After step 3: $LOADED_COUNT_2 (+$(($LOADED_COUNT_2 - $LOADED_COUNT_1)))"
echo " After step 4: $LOADED_COUNT_3 (+$(($LOADED_COUNT_3 - $LOADED_COUNT_2)))"
echo " After step 5: $LOADED_COUNT_4 (cache, no change)"
echo " After step 6: $LOADED_COUNT_5 (+$(($LOADED_COUNT_5 - $LOADED_COUNT_4)))"
echo ""
# Validation
ALL_PASSED=true
if [ "$LOADED_COUNT_1" -le "$LOADED_COUNT" ]; then
echo "${RED}✗ FAIL: Step 2 did not increase count${NC}"
ALL_PASSED=false
fi
if [ "$LOADED_COUNT_2" -le "$LOADED_COUNT_1" ]; then
echo "${RED}✗ FAIL: Step 3 did not increase count${NC}"
ALL_PASSED=false
fi
if [ "$LOADED_COUNT_3" -le "$LOADED_COUNT_2" ]; then
echo "${RED}✗ FAIL: Step 4 did not increase count${NC}"
ALL_PASSED=false
fi
if [ "$ALL_PASSED" = true ]; then
echo "${GREEN}🎉 ALL TESTS PASSED${NC}"
echo "${GREEN}Progressive on-demand loading verified successfully!${NC}"
EXIT_CODE=0
else
echo "${RED}❌ TESTS FAILED${NC}"
echo "${RED}Progressive loading did not work as expected!${NC}"
EXIT_CODE=1
fi
echo ""
# Restore config
cleanup
if [ -f "$BACKUP_CONFIG" ]; then
mv "$BACKUP_CONFIG" "$CONFIG_FILE"
echo "✓ Restored original config"
fi
echo ""
echo "=========================================="
echo "Test Complete"
echo "=========================================="
echo "Log saved to: /tmp/progressive_test.log"
echo ""
exit $EXIT_CODE
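The five loaded-count checks above boil down to one invariant: the count must grow on every fresh wildcard access and stay flat on a cache hit. A small Python sketch of that invariant (illustrative only; the sample counts are made up, not captured from a run):

```python
# Validate a progression of loaded-wildcard counts: fresh accesses must
# strictly increase the count, cache hits must leave it unchanged.
def check_progression(counts, cached_steps=()):
    """counts: loaded-wildcard counts in step order.
    cached_steps: indices where a cache hit means no growth is expected."""
    for i in range(1, len(counts)):
        if i in cached_steps:
            if counts[i] != counts[i - 1]:
                return False  # cached access must not change the count
        elif counts[i] <= counts[i - 1]:
            return False  # every fresh access must load something new
    return True

# Example progression: 0 -> 1 -> 3 -> 5 -> 5 (cache hit) -> 8
print(check_progression([0, 1, 3, 5, 5, 8], cached_steps={4}))  # True
```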


@@ -0,0 +1,327 @@
#!/bin/bash
# Sequential Multi-Stage Wildcard Loading Test
# Tests transitive wildcards that load in multiple sequential stages
# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
PORT=8193
CONFIG_FILE="$IMPACT_DIR/impact-pack.ini"
# $'…' quoting so plain echo renders the ANSI escapes
GREEN=$'\033[0;32m'
RED=$'\033[0;31m'
BLUE=$'\033[0;34m'
YELLOW=$'\033[1;33m'
CYAN=$'\033[0;36m'
NC=$'\033[0m'
echo "=========================================="
echo "Sequential Multi-Stage Wildcard Loading Test"
echo "=========================================="
echo ""
# Setup config for full cache mode
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = 50
EOF
echo "Mode: Full cache mode (50MB limit)"
echo ""
# Kill existing servers
pkill -9 -f "python.*main.py" 2>/dev/null || true
sleep 3
# Start server
COMFYUI_DIR="$(cd "$IMPACT_DIR/../.." && pwd)"
cd "$COMFYUI_DIR"
echo "Starting ComfyUI server on port $PORT..."
bash run.sh --listen 127.0.0.1 --port $PORT > /tmp/sequential_test.log 2>&1 &
SERVER_PID=$!
# Wait for server
echo "Waiting 70 seconds for server startup..."
for i in {1..70}; do
sleep 1
if [ $((i % 10)) -eq 0 ]; then
echo " ... $i seconds"
fi
done
# Check server
if ! curl -s http://127.0.0.1:$PORT/ > /dev/null; then
echo "${RED}✗ Server failed to start${NC}"
exit 1
fi
echo "${GREEN}✓ Server started${NC}"
echo ""
# Test function with stage visualization
test_sequential() {
local TEST_NUM=$1
local RAW_PROMPT=$2
local SEED=$3
local DESCRIPTION=$4
local EXPECTED_STAGES=$5 # Number of expected expansion stages
echo "${BLUE}=== Test $TEST_NUM: $DESCRIPTION ===${NC}"
echo "Raw prompt: ${YELLOW}$RAW_PROMPT${NC}"
echo "Seed: $SEED"
echo "Expected stages: $EXPECTED_STAGES"
echo ""
# Test the prompt
RESULT=$(curl -s -X POST http://127.0.0.1:$PORT/impact/wildcards \
-H "Content-Type: application/json" \
-d "{\"text\": \"$RAW_PROMPT\", \"seed\": $SEED}" | \
python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))" 2>/dev/null || echo "ERROR")
echo "${CYAN}Stage Analysis:${NC}"
echo " Stage 0 (Input): $RAW_PROMPT"
# Check if result contains any wildcards (incomplete expansion)
if echo "$RESULT" | grep -q "__.*__"; then
echo " ${YELLOW}⚠ Result still contains wildcards (incomplete expansion)${NC}"
echo " Final Result: $RESULT"
else
echo " ${GREEN}✓ All wildcards fully expanded${NC}"
fi
echo " Final Output: ${GREEN}$RESULT${NC}"
echo ""
# Validate result
if [ "$RESULT" != "ERROR" ] && [ "$RESULT" != "" ]; then
# Check if result still has wildcards (shouldn't have)
if echo "$RESULT" | grep -q "__.*__"; then
echo "Status: ${YELLOW}⚠ PARTIAL - Wildcards remain${NC}"
else
echo "Status: ${GREEN}✅ SUCCESS - Complete expansion${NC}"
fi
else
echo "Status: ${RED}❌ FAILED - Error or empty result${NC}"
fi
echo ""
}
echo "=========================================="
echo "Sequential Loading Test Suite"
echo "=========================================="
echo ""
echo "${CYAN}Test Category 1: Depth Verification${NC}"
echo "Testing different transitive depths with stage tracking"
echo ""
# Test 1: Depth 1 (Direct wildcard)
test_sequential "01" \
"__samples/flower__" \
42 \
"Depth 1 - Direct wildcard (no transitive)" \
1
# Test 2: Depth 2 (One level transitive)
test_sequential "02" \
"__dragon__" \
200 \
"Depth 2 - One level transitive" \
2
# Test 3: Depth 3 (Two levels transitive)
test_sequential "03" \
"__dragon__ warrior" \
200 \
"Depth 3 - Two levels with suffix" \
3
# Test 4: Depth 3 (Maximum verified depth)
test_sequential "04" \
"__adnd__ creature" \
222 \
"Depth 3 - Maximum transitive chain" \
3
echo ""
echo "${CYAN}Test Category 2: Mixed Transitive Scenarios${NC}"
echo "Testing wildcards mixed with dynamic prompts"
echo ""
# Test 5: Transitive with dynamic prompt
test_sequential "05" \
"{__dragon__|__adnd__} in battle" \
100 \
"Dynamic selection of transitive wildcards" \
3
# Test 6: Multiple transitive wildcards
test_sequential "06" \
"__dragon__ fights __adnd__" \
150 \
"Multiple transitive wildcards in one prompt" \
3
# Test 7: Nested transitive in dynamic
test_sequential "07" \
"powerful {__dragon__|__adnd__|simple warrior}" \
200 \
"Transitive wildcards nested in dynamic prompts" \
3
echo ""
echo "${CYAN}Test Category 3: Complex Sequential Scenarios${NC}"
echo "Testing complex multi-stage expansions"
echo ""
# Test 8: Transitive with weights
test_sequential "08" \
"{5::__dragon__|3::__adnd__|regular warrior}" \
250 \
"Weighted selection with transitive wildcards" \
3
# Test 9: Multi-select with transitive
test_sequential "09" \
"{2\$\$, \$\$__dragon__|__adnd__|warrior|mage}" \
300 \
"Multi-select including transitive wildcards" \
3
# Test 10: Quantified transitive
test_sequential "10" \
"{2\$\$, \$\$3#__dragon__}" \
350 \
"Quantified wildcard with transitive expansion" \
3
echo ""
echo "${CYAN}Test Category 4: Edge Cases${NC}"
echo "Testing boundary conditions and special cases"
echo ""
# Test 11: Transitive in compound grammar
test_sequential "11" \
"1{girl holding __samples/flower__|boy riding __dragon__}" \
400 \
"Compound grammar with mixed transitive depths" \
3
# Test 12: Multiple wildcards, different depths
test_sequential "12" \
"__samples/flower__ and __dragon__ with __colors__" \
450 \
"Multiple wildcards with varying depths" \
3
# Test 13: YAML wildcard (no transitive)
test_sequential "13" \
"__colors__" \
333 \
"YAML wildcard (depth 1, no transitive)" \
1
# Test 14: Transitive + YAML combination
test_sequential "14" \
"__dragon__ with __colors__ armor" \
500 \
"Combination of transitive and YAML wildcards" \
3
echo ""
echo "${CYAN}Test Category 5: On-Demand Mode Verification${NC}"
echo "Testing sequential loading in on-demand mode"
echo ""
# Switch to on-demand mode
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = 0.5
EOF
# Restart server
kill $SERVER_PID 2>/dev/null
pkill -9 -f "python.*main.py.*$PORT" 2>/dev/null
sleep 3
echo "Restarting server in on-demand mode (0.5MB limit)..."
cd "$COMFYUI_DIR"
bash run.sh --listen 127.0.0.1 --port $PORT > /tmp/sequential_ondemand.log 2>&1 &
SERVER_PID=$!
echo "Waiting 70 seconds for server restart..."
for i in {1..70}; do
sleep 1
if [ $((i % 10)) -eq 0 ]; then
echo " ... $i seconds"
fi
done
if ! curl -s http://127.0.0.1:$PORT/ > /dev/null; then
echo "${RED}✗ Server failed to restart${NC}"
exit 1
fi
echo "${GREEN}✓ Server restarted in on-demand mode${NC}"
echo ""
# Test 15: Same transitive in on-demand mode
test_sequential "15" \
"__adnd__ creature" \
222 \
"Depth 3 transitive in on-demand mode (should match full cache)" \
3
# Test 16: Complex scenario in on-demand
test_sequential "16" \
"{__dragon__|__adnd__} {warrior|mage}" \
100 \
"Complex transitive with dynamic in on-demand mode" \
3
# Test 17: Multiple transitive in on-demand
test_sequential "17" \
"__dragon__ and __adnd__ together" \
150 \
"Multiple transitive wildcards in on-demand mode" \
3
# Stop server
kill $SERVER_PID 2>/dev/null
pkill -9 -f "python.*main.py.*$PORT" 2>/dev/null
echo "=========================================="
echo "Test Summary"
echo "=========================================="
echo ""
echo "Total tests: 17"
echo "Categories:"
echo " - Depth Verification (4 tests)"
echo " - Mixed Transitive Scenarios (3 tests)"
echo " - Complex Sequential Scenarios (3 tests)"
echo " - Edge Cases (4 tests)"
echo " - On-Demand Mode Verification (3 tests)"
echo ""
echo "Test Focus:"
echo " ✓ Multi-stage transitive wildcard expansion"
echo " ✓ Sequential loading across different depths"
echo " ✓ Transitive wildcards in dynamic prompts"
echo " ✓ Transitive wildcards with weights and multi-select"
echo " ✓ On-demand mode sequential loading verification"
echo ""
echo "Log saved to:"
echo " - Full cache mode: /tmp/sequential_test.log"
echo " - On-demand mode: /tmp/sequential_ondemand.log"
echo ""
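Each stage analysis above greps the populated text for a leftover `__.*__` token. The same completeness check in Python (the name character class is an assumption; the script's grep pattern accepts anything between double underscores):

```python
import re

# A prompt counts as fully expanded once no __name__ token survives.
WILDCARD_RE = re.compile(r"__[\w/.-]+__")

def fully_expanded(prompt: str) -> bool:
    return WILDCARD_RE.search(prompt) is None

print(fully_expanded("red dragon in battle"))  # True
print(fully_expanded("__dragon__ in battle"))  # False
```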


@@ -0,0 +1,281 @@
#!/bin/bash
# Comprehensive wildcard prompt test suite
# Tests all features from ImpactWildcard tutorial
# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
PORT=8192
CONFIG_FILE="$IMPACT_DIR/impact-pack.ini"
# $'…' quoting so plain echo renders the ANSI escapes
GREEN=$'\033[0;32m'
RED=$'\033[0;31m'
BLUE=$'\033[0;34m'
YELLOW=$'\033[1;33m'
NC=$'\033[0m'
echo "=========================================="
echo "Versatile Wildcard Prompt Test Suite"
echo "=========================================="
echo ""
# Setup config
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = 50
EOF
echo "Mode: Full cache mode (50MB limit)"
echo ""
# Kill existing servers
pkill -9 -f "python.*main.py" 2>/dev/null || true
sleep 3
# Start server
COMFYUI_DIR="$(cd "$IMPACT_DIR/../.." && pwd)"
cd "$COMFYUI_DIR"
echo "Starting ComfyUI server on port $PORT..."
bash run.sh --listen 127.0.0.1 --port $PORT > /tmp/versatile_test.log 2>&1 &
SERVER_PID=$!
# Wait for server
echo "Waiting 70 seconds for server startup..."
for i in {1..70}; do
sleep 1
if [ $((i % 10)) -eq 0 ]; then
echo " ... $i seconds"
fi
done
# Check server
if ! curl -s http://127.0.0.1:$PORT/ > /dev/null; then
echo "${RED}✗ Server failed to start${NC}"
exit 1
fi
echo "${GREEN}✓ Server started${NC}"
echo ""
# Test function
test_prompt() {
local TEST_NUM=$1
local CATEGORY=$2
local PROMPT=$3
local SEED=$4
local DESCRIPTION=$5
echo "${BLUE}=== Test $TEST_NUM: $CATEGORY ===${NC}"
echo "Description: $DESCRIPTION"
echo "Raw prompt: ${YELLOW}$PROMPT${NC}"
echo "Seed: $SEED"
RESULT=$(curl -s -X POST http://127.0.0.1:$PORT/impact/wildcards \
-H "Content-Type: application/json" \
-d "{\"text\": \"$PROMPT\", \"seed\": $SEED}" | \
python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))" 2>/dev/null || echo "ERROR")
echo "Populated: ${GREEN}$RESULT${NC}"
if [ "$RESULT" != "ERROR" ] && [ "$RESULT" != "" ]; then
echo "Status: ${GREEN}✅ SUCCESS${NC}"
else
echo "Status: ${RED}❌ FAILED${NC}"
fi
echo ""
}
echo "=========================================="
echo "Test Suite Execution"
echo "=========================================="
echo ""
# Category 1: Simple Wildcards
test_prompt "01" "Simple Wildcard" \
"__samples/flower__" \
42 \
"Basic wildcard substitution"
test_prompt "02" "Case Insensitive" \
"__SAMPLES/FLOWER__" \
42 \
"Wildcard names are case insensitive"
test_prompt "03" "Mixed Case" \
"__SaMpLeS/FlOwEr__" \
42 \
"Mixed case should work identically"
# Category 2: Dynamic Prompts
test_prompt "04" "Dynamic Prompt (Simple)" \
"{red|green|blue} apple" \
100 \
"Random selection from pipe-separated options"
test_prompt "05" "Dynamic Prompt (Nested)" \
"{a|{d|e|f}|c}" \
100 \
"Nested dynamic prompts with inner choices"
test_prompt "06" "Dynamic Prompt (Complex)" \
"{blue apple|red {cherry|berry}|green melon}" \
100 \
"Nested options with multiple levels"
# Category 3: Selection Weights
test_prompt "07" "Weighted Selection" \
"{5::red|4::green|7::blue|black} car" \
100 \
"Weighted random selection (5:4:7:1 ratio)"
test_prompt "08" "Weighted Complex" \
"A {10::beautiful|5::stunning|amazing} {3::sunset|2::sunrise|dawn}" \
100 \
"Multiple weighted selections in one prompt"
# Category 4: Compound Grammar
test_prompt "09" "Wildcard + Dynamic" \
"1girl holding {blue pencil|red apple|colorful __samples/flower__}" \
100 \
"Mixing wildcard with dynamic prompt"
test_prompt "10" "Multiple Wildcards" \
"__samples/flower__ and __colors__" \
100 \
"Multiple wildcards in single prompt"
test_prompt "11" "Complex Compound" \
"{1girl holding|1boy riding} {blue|red|__colors__} {pencil|__samples/flower__}" \
100 \
"Complex nesting with wildcards and dynamics"
# Category 5: Transitive Wildcards
test_prompt "12" "Transitive Depth 1" \
"__dragon__" \
200 \
"First level transitive wildcard"
test_prompt "13" "Transitive Depth 2" \
"__dragon__ warrior" \
200 \
"Second level transitive with suffix"
test_prompt "14" "Transitive Depth 3" \
"__adnd__ creature" \
222 \
"Third level transitive (adnd→dragon→dragon_spirit)"
# Category 6: Multi-Select
test_prompt "15" "Multi-Select (Fixed)" \
"{2\$\$, \$\$red|green|blue|yellow|purple}" \
100 \
"Select exactly 2 items with comma separator"
test_prompt "16" "Multi-Select (Range)" \
"{1-3\$\$, \$\$apple|banana|orange|grape|mango}" \
100 \
"Select 1-3 items randomly"
test_prompt "17" "Multi-Select (Custom Sep)" \
"{2\$\$ and \$\$cat|dog|bird|fish}" \
100 \
"Custom separator: 'and' instead of comma"
test_prompt "18" "Multi-Select (Or Sep)" \
"{2-3\$\$ or \$\$happy|sad|excited|calm}" \
100 \
"Range with 'or' separator"
# Category 7: Quantifying Wildcard
test_prompt "19" "Quantified Wildcard" \
"{2\$\$, \$\$3#__samples/flower__}" \
100 \
"Repeat wildcard 3 times, select 2"
test_prompt "20" "Quantified Complex" \
"Garden with {3\$\$, \$\$5#__samples/flower__}" \
100 \
"Select 3 from 5 repeated wildcards"
# Category 8: YAML Wildcards
test_prompt "21" "YAML Simple" \
"__colors__" \
333 \
"YAML wildcard file"
test_prompt "22" "YAML in Dynamic" \
"{solid|{metallic|pastel} __colors__}" \
100 \
"YAML wildcard nested in dynamic prompt"
# Category 9: Complex Real-World Scenarios
test_prompt "23" "Realistic Prompt 1" \
"1girl, {5::beautiful|3::stunning|gorgeous} __samples/flower__ in hair, {blue|red|__colors__} dress" \
100 \
"Realistic character description"
test_prompt "24" "Realistic Prompt 2" \
"{detailed|highly detailed} {portrait|illustration} of {1girl|1boy} with {2\$\$, \$\$__samples/flower__|__samples/jewel__|elegant accessories}" \
100 \
"Complex art prompt with multi-select"
test_prompt "25" "Realistic Prompt 3" \
"__adnd__ {warrior|mage|rogue}, {10::epic|5::legendary|mythical} {armor|robes}, wielding {ancient|magical} weapon" \
100 \
"Fantasy character with transitive wildcard"
# Category 10: Edge Cases
test_prompt "26" "Empty Dynamic" \
"{|something|nothing}" \
100 \
"Dynamic with empty option"
test_prompt "27" "Single Option" \
"{only_one}" \
100 \
"Dynamic with single option (no choice)"
test_prompt "28" "Deeply Nested" \
"{a|{b|{c|{d|e}}}}" \
100 \
"Very deep nesting"
test_prompt "29" "Multiple Weights" \
"{100::common|10::uncommon|1::rare|super_rare}" \
100 \
"Extreme weight differences"
test_prompt "30" "Wildcard Only" \
"__samples/flower__" \
999 \
"Different seed on same wildcard"
# Stop server
kill $SERVER_PID 2>/dev/null
pkill -9 -f "python.*main.py.*$PORT" 2>/dev/null
echo "=========================================="
echo "Test Summary"
echo "=========================================="
echo ""
echo "Total tests: 30"
echo "Categories tested:"
echo " - Simple Wildcards (3 tests)"
echo " - Dynamic Prompts (3 tests)"
echo " - Selection Weights (2 tests)"
echo " - Compound Grammar (3 tests)"
echo " - Transitive Wildcards (3 tests)"
echo " - Multi-Select (4 tests)"
echo " - Quantifying Wildcard (2 tests)"
echo " - YAML Wildcards (2 tests)"
echo " - Real-World Scenarios (3 tests)"
echo " - Edge Cases (5 tests)"
echo ""
echo "Log saved to: /tmp/versatile_test.log"
echo ""
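Tests 07, 08 and 29 exercise the `N::option` selection-weight syntax. A minimal sketch of how such a body could be resolved (illustrative only, not the Impact Pack parser, which also handles nesting and wildcards inside the options):

```python
import random

# Resolve one {a|b|c}-style body with optional N:: weight prefixes.
# Options without a prefix default to weight 1, matching test 07's
# "5:4:7:1 ratio" description.
def weighted_pick(body: str, seed: int) -> str:
    options, weights = [], []
    for part in body.split("|"):
        weight, sep, text = part.partition("::")
        if sep and weight.isdigit():
            options.append(text)
            weights.append(int(weight))
        else:
            options.append(part)
            weights.append(1)
    return random.Random(seed).choices(options, weights=weights)[0]

# Same seed -> same choice, mirroring the seed parameter in the API calls
print(weighted_pick("5::red|4::green|7::blue|black", seed=100))
```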


@@ -0,0 +1,226 @@
#!/bin/bash
# Test wildcard consistency between full cache and on-demand modes
set -e
# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_PACK_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
COMFYUI_DIR="$(cd "$IMPACT_PACK_DIR/../.." && pwd)"
CONFIG_FILE="$IMPACT_PACK_DIR/impact-pack.ini"
BACKUP_CONFIG="$IMPACT_PACK_DIR/impact-pack.ini.backup"
# Colors
# $'…' quoting so plain echo renders the ANSI escapes
GREEN=$'\033[0;32m'
RED=$'\033[0;31m'
BLUE=$'\033[0;34m'
NC=$'\033[0m' # No Color
echo "=========================================="
echo "Wildcard Consistency Test"
echo "=========================================="
echo ""
# Backup original config
if [ -f "$CONFIG_FILE" ]; then
cp "$CONFIG_FILE" "$BACKUP_CONFIG"
echo "✓ Backed up original config"
fi
# Function to kill ComfyUI
cleanup() {
pkill -f "python.*main.py" 2>/dev/null || true
sleep 2
}
# Function to test wildcard with specific config
test_with_config() {
local MODE=$1
local CACHE_LIMIT=$2
echo ""
echo "${BLUE}Testing $MODE mode (cache limit: ${CACHE_LIMIT}MB)${NC}"
echo "----------------------------------------"
# Update config
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_PACK_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = $CACHE_LIMIT
EOF
# Start ComfyUI
cleanup
cd "$COMFYUI_DIR"
bash run.sh --listen 127.0.0.1 --port 8190 > /tmp/comfyui_${MODE}.log 2>&1 &
COMFYUI_PID=$!
echo " Waiting for server startup..."
sleep 15
# Check if server is running
if ! curl -s http://127.0.0.1:8190/ > /dev/null; then
echo "${RED}✗ Server failed to start${NC}"
grep -i "wildcard\|error" /tmp/comfyui_${MODE}.log | tail -20
cleanup
return 1
fi
# Check log for mode
MODE_LOG=$(grep -i "wildcard.*mode" /tmp/comfyui_${MODE}.log | tail -1)
echo " $MODE_LOG"
# Test 1: Simple wildcard
echo ""
echo " Test 1: Simple wildcard substitution"
RESULT1=$(curl -s http://127.0.0.1:8190/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__samples/flower__", "seed": 42}')
TEXT1=$(echo "$RESULT1" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: __samples/flower__"
echo " Output: $TEXT1"
echo "$RESULT1" > /tmp/result_${MODE}_test1.json  # save raw JSON for the cross-mode diff
# Test 2: Dynamic prompt
echo ""
echo " Test 2: Dynamic prompt"
RESULT2=$(curl -s http://127.0.0.1:8190/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "{red|blue|green} flower", "seed": 123}')
TEXT2=$(echo "$RESULT2" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: {red|blue|green} flower"
echo " Output: $TEXT2"
echo "$RESULT2" > /tmp/result_${MODE}_test2.json  # save raw JSON for the cross-mode diff
# Test 3: Combined wildcard and dynamic prompt
echo ""
echo " Test 3: Combined wildcard + dynamic prompt"
RESULT3=$(curl -s http://127.0.0.1:8190/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "beautiful {red|blue} __samples/flower__ with __samples/jewel__", "seed": 456}')
TEXT3=$(echo "$RESULT3" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: beautiful {red|blue} __samples/flower__ with __samples/jewel__"
echo " Output: $TEXT3"
echo "$RESULT3" > /tmp/result_${MODE}_test3.json  # save raw JSON for the cross-mode diff
# Test 4: Transitive YAML wildcard
echo ""
echo " Test 4: Transitive YAML wildcard (test.yaml)"
RESULT4=$(curl -s http://127.0.0.1:8190/impact/wildcards \
-X POST \
-H "Content-Type: application/json" \
-d '{"text": "__colors__", "seed": 222}')
TEXT4=$(echo "$RESULT4" | python3 -c "import sys, json; print(json.load(sys.stdin)['text'])")
echo " Input: __colors__ (transitive: __cold__|__warm__ -> blue|red|orange|yellow)"
echo " Output: $TEXT4"
echo " Expected: blue|red|orange|yellow"
echo "$RESULT4" > /tmp/result_${MODE}_test4.json  # save raw JSON for the cross-mode diff
# Test 5: Wildcard list
echo ""
echo " Test 5: Wildcard list API"
LIST_RESULT=$(curl -s http://127.0.0.1:8190/impact/wildcards/list)
LIST_COUNT=$(echo "$LIST_RESULT" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))")
echo " Wildcards found: $LIST_COUNT"
echo " Sample: $(echo "$LIST_RESULT" | python3 -c "import sys, json; print(', '.join(json.load(sys.stdin)['data'][:5]))")"
echo "$LIST_RESULT" > /tmp/result_${MODE}_list.json  # save raw JSON for the cross-mode diff
# Stop server
cleanup
echo ""
echo "${GREEN}$MODE mode tests completed${NC}"
}
# Run tests
echo ""
echo "Starting consistency tests..."
# Test full cache mode
test_with_config "full_cache" 50
# Test on-demand mode
test_with_config "on_demand" 1
# Compare results
echo ""
echo "=========================================="
echo "Comparing Results"
echo "=========================================="
echo ""
echo "Test 1: Simple wildcard"
DIFF1=$(diff /tmp/result_full_cache_test1.json /tmp/result_on_demand_test1.json || true)
if [ -z "$DIFF1" ]; then
echo "${GREEN}✓ Results match${NC}"
else
echo "${RED}✗ Results differ${NC}"
echo "$DIFF1"
fi
echo ""
echo "Test 2: Dynamic prompt"
DIFF2=$(diff /tmp/result_full_cache_test2.json /tmp/result_on_demand_test2.json || true)
if [ -z "$DIFF2" ]; then
echo "${GREEN}✓ Results match${NC}"
else
echo "${RED}✗ Results differ${NC}"
echo "$DIFF2"
fi
echo ""
echo "Test 3: Combined wildcard + dynamic prompt"
DIFF3=$(diff /tmp/result_full_cache_test3.json /tmp/result_on_demand_test3.json || true)
if [ -z "$DIFF3" ]; then
echo "${GREEN}✓ Results match${NC}"
else
echo "${RED}✗ Results differ${NC}"
echo "$DIFF3"
fi
echo ""
echo "Test 4: Transitive YAML wildcard"
DIFF4=$(diff /tmp/result_full_cache_test4.json /tmp/result_on_demand_test4.json || true)
if [ -z "$DIFF4" ]; then
echo "${GREEN}✓ Results match${NC}"
else
echo "${RED}✗ Results differ${NC}"
echo "$DIFF4"
fi
echo ""
echo "Test 5: Wildcard list"
DIFF_LIST=$(diff /tmp/result_full_cache_list.json /tmp/result_on_demand_list.json || true)
if [ -z "$DIFF_LIST" ]; then
echo "${GREEN}✓ Wildcard lists match${NC}"
else
echo "${RED}✗ Wildcard lists differ${NC}"
echo "$DIFF_LIST"
fi
# Restore original config
if [ -f "$BACKUP_CONFIG" ]; then
mv "$BACKUP_CONFIG" "$CONFIG_FILE"
echo ""
echo "✓ Restored original config"
fi
# Final cleanup
cleanup
echo ""
echo "=========================================="
echo "Consistency Test Complete"
echo "=========================================="
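The file-level `diff` comparison above can also be expressed directly over captured outputs; a small helper illustrating the consistency criterion (names are illustrative):

```python
# Two cache modes are consistent when every test key yields identical
# populated text; return the keys that differ (empty list = consistent).
def differing_keys(full_cache: dict, on_demand: dict) -> list:
    keys = set(full_cache) | set(on_demand)
    return sorted(k for k in keys if full_cache.get(k) != on_demand.get(k))

print(differing_keys({"test1": "red rose"}, {"test1": "red rose"}))   # []
print(differing_keys({"test1": "red rose"}, {"test1": "blue rose"}))  # ['test1']
```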


@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""
Final comprehensive wildcard test - validates consistency between full cache and on-demand modes
Tests include:
1. Simple wildcard substitution
2. Nested wildcards (transitive loading)
3. Multiple wildcards in single prompt
4. Dynamic prompts combined with wildcards
5. YAML-based wildcards
"""
import subprocess
import time
import sys
from pathlib import Path
# Auto-detect paths
SCRIPT_DIR = Path(__file__).parent
IMPACT_PACK_DIR = SCRIPT_DIR.parent
COMFYUI_DIR = IMPACT_PACK_DIR.parent.parent
CONFIG_FILE = IMPACT_PACK_DIR / "impact-pack.ini"
def run_test(test_name, cache_limit, test_cases):
    """Run tests with a specific cache limit and return {test_id: output}."""
    print(f"\n{'='*60}")
    print(f"Testing: {test_name}")
    print(f"Cache Limit: {cache_limit} MB")
    print(f"{'='*60}\n")

    # Update config
    config_content = f"""[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = {IMPACT_PACK_DIR}/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = {cache_limit}
"""
    with open(CONFIG_FILE, 'w') as f:
        f.write(config_content)

    # Start ComfyUI
    print("Starting ComfyUI...")
    proc = subprocess.Popen(
        ['bash', 'run.sh', '--listen', '127.0.0.1', '--port', '8191'],
        cwd=str(COMFYUI_DIR),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True
    )

    # Wait for server to start
    time.sleep(20)

    # Check that the server is reachable
    import requests
    try:
        requests.get('http://127.0.0.1:8191/', timeout=5)
        print("✓ Server started successfully\n")
    except Exception:
        print("✗ Server failed to start")
        proc.terminate()
        return {}

    # Run test cases
    results = {}
    for i, (description, text, seed) in enumerate(test_cases, 1):
        print(f"Test {i}: {description}")
        print(f"  Input: {text}")
        try:
            response = requests.post(
                'http://127.0.0.1:8191/impact/wildcards',
                json={'text': text, 'seed': seed},
                timeout=5
            )
            result = response.json()
            output = result.get('text', '')
            print(f"  Output: {output}")
            results[f"test{i}"] = output
        except Exception as e:
            print(f"  Error: {e}")
            results[f"test{i}"] = f"ERROR: {e}"
        print()

    # Stop server
    proc.terminate()
    time.sleep(2)
    return results


def main():
    print("\n" + "="*60)
    print("WILDCARD COMPREHENSIVE CONSISTENCY TEST")
    print("="*60)

    # Test cases: (description, wildcard text, seed)
    test_cases = [
        # Test 1: Simple wildcard
        ("Simple wildcard", "__samples/flower__", 42),
        # Test 2: Multiple wildcards
        ("Multiple wildcards", "a __samples/flower__ and a __samples/jewel__", 123),
        # Test 3: Dynamic prompt
        ("Dynamic prompt", "{red|blue|green} flower", 456),
        # Test 4: Combined wildcard + dynamic
        ("Combined", "{beautiful|elegant} __samples/flower__ with {gold|silver} __samples/jewel__", 789),
        # Test 5: Nested selection (multi-select)
        ("Multi-select", "{2$$, $$__samples/flower__|rose|tulip|daisy}", 111),
        # Test 6: Transitive YAML wildcard (custom_wildcards/test.yaml)
        # __colors__ → __cold__|__warm__ → blue|red|orange|yellow
        ("Transitive YAML wildcard", "__colors__", 222),
        # Test 7: Transitive with text
        ("Transitive with context", "a {beautiful|vibrant} __colors__ flower", 333),
    ]

    # Test with full cache mode
    results_full = run_test("Full Cache Mode", 50, test_cases)
    time.sleep(5)

    # Test with on-demand mode
    results_on_demand = run_test("On-Demand Mode", 1, test_cases)

    # Compare results
    print("\n" + "="*60)
    print("RESULTS COMPARISON")
    print("="*60 + "\n")

    all_match = True
    for key in results_full.keys():
        full_result = results_full.get(key, "MISSING")
        on_demand_result = results_on_demand.get(key, "MISSING")
        match = full_result == on_demand_result
        all_match = all_match and match
        status = "✓ MATCH" if match else "✗ DIFFER"
        print(f"{key}: {status}")
        if not match:
            print(f"  Full cache: {full_result}")
            print(f"  On-demand:  {on_demand_result}")
    print()

    # Final verdict
    print("="*60)
    if all_match:
        print("✅ ALL TESTS PASSED - Results are identical")
        print("="*60)
        return 0
    else:
        print("❌ TESTS FAILED - Results differ between modes")
        print("="*60)
        return 1


if __name__ == "__main__":
    sys.exit(main())
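The next file tests `LazyWildcardLoader`. A minimal sketch of the lazy-loading pattern those tests verify — load on first access, cache thereafter, expose list-like operations (the internals here are assumptions, not the Impact Pack implementation):

```python
# Illustrative lazy loader: reads its file only on first access, skips
# blank and comment lines, then behaves like a read-only list.
class LazyLoader:
    def __init__(self, path):
        self.path = path
        self._loaded = False
        self._data = []

    def get_data(self):
        if not self._loaded:
            with open(self.path, encoding="utf-8") as f:
                self._data = [line.strip() for line in f
                              if line.strip() and not line.startswith("#")]
            self._loaded = True
        return self._data

    def __len__(self):
        return len(self.get_data())

    def __getitem__(self, index):
        return self.get_data()[index]

    def __contains__(self, item):
        return item in self.get_data()
```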


@@ -0,0 +1,200 @@
```python
#!/usr/bin/env python3
"""
Test script for wildcard lazy loading functionality
"""
import sys
import os
import tempfile

# Add parent directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))

from modules.impact import wildcards


def test_lazy_loader():
    """Test LazyWildcardLoader class"""
    print("=" * 60)
    print("TEST 1: LazyWildcardLoader functionality")
    print("=" * 60)

    # Create a temporary test file
    with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
        f.write("option1\n")
        f.write("option2\n")
        f.write("# comment line\n")
        f.write("option3\n")
        temp_file = f.name

    try:
        # Test lazy loading
        loader = wildcards.LazyWildcardLoader(temp_file, 'txt')
        print(f"✓ Created LazyWildcardLoader: {loader}")

        # Check that data is not loaded yet
        assert not loader._loaded, "Data should not be loaded initially"
        print("✓ Data not loaded initially (lazy)")

        # Access data
        data = loader.get_data()
        print(f"✓ Loaded data: {data}")
        assert len(data) == 3, f"Expected 3 items, got {len(data)}"
        assert 'option1' in data, "option1 should be in data"

        # Check that data is now loaded
        assert loader._loaded, "Data should be loaded after access"
        print("✓ Data loaded after first access")

        # Test list-like operations
        print(f"✓ len(loader) = {len(loader)}")
        assert len(loader) == 3
        print(f"✓ loader[0] = {loader[0]}")
        assert loader[0] == 'option1'
        print(f"✓ 'option2' in loader = {'option2' in loader}")
        assert 'option2' in loader
        print(f"✓ list(loader) = {list(loader)}")

        print("\n✅ LazyWildcardLoader tests PASSED\n")
    finally:
        os.unlink(temp_file)


def test_cache_limit_detection():
    """Test automatic cache mode detection"""
    print("=" * 60)
    print("TEST 2: Cache limit detection")
    print("=" * 60)

    # Get current cache limit
    limit = wildcards.get_cache_limit()
    print(f"✓ Cache limit: {limit / (1024*1024):.2f} MB")

    # Calculate wildcard directory size
    wildcards_dir = wildcards.wildcards_path
    total_size = wildcards.calculate_directory_size(wildcards_dir)
    print(f"✓ Wildcards directory size: {total_size / (1024*1024):.2f} MB")
    print(f"✓ Wildcards path: {wildcards_dir}")

    # Determine expected mode
    if total_size >= limit:
        expected_mode = "on-demand"
    else:
        expected_mode = "full cache"
    print(f"✓ Expected mode: {expected_mode}")

    print("\n✅ Cache detection tests PASSED\n")


def test_wildcard_loading():
    """Test actual wildcard loading"""
    print("=" * 60)
    print("TEST 3: Wildcard loading with current mode")
    print("=" * 60)

    # Clear existing wildcards
    wildcards.wildcard_dict = {}
    wildcards._on_demand_mode = False

    # Load wildcards
    print("Loading wildcards...")
    wildcards.wildcard_load()

    # Check mode
    is_on_demand = wildcards.is_on_demand_mode()
    print(f"✓ On-demand mode active: {is_on_demand}")

    # Check loaded wildcards
    wc_list = wildcards.get_wildcard_list()
    print(f"✓ Loaded {len(wc_list)} wildcards")
    if len(wc_list) > 0:
        print(f"✓ Sample wildcards: {wc_list[:5]}")

    # Test accessing a wildcard
    if len(wildcards.wildcard_dict) > 0:
        key = list(wildcards.wildcard_dict.keys())[0]
        value = wildcards.wildcard_dict[key]
        print(f"✓ Sample wildcard '{key}' type: {type(value).__name__}")

        if isinstance(value, wildcards.LazyWildcardLoader):
            print(f"  - LazyWildcardLoader: {value}")
            print(f"  - Loaded: {value._loaded}")
            # Access the data
            data = value.get_data()
            print(f"  - Data loaded, items: {len(data)}")
        else:
            print(f"  - Direct list, items: {len(value)}")

    print("\n✅ Wildcard loading tests PASSED\n")


def test_on_demand_simulation():
    """Simulate on-demand mode with temporary wildcards"""
    print("=" * 60)
    print("TEST 4: On-demand mode simulation")
    print("=" * 60)

    # Create temporary wildcard directory
    with tempfile.TemporaryDirectory() as tmpdir:
        # Create test files
        test_file1 = os.path.join(tmpdir, "test1.txt")
        test_file2 = os.path.join(tmpdir, "test2.txt")

        with open(test_file1, 'w') as f:
            f.write("option1a\noption1b\noption1c\n")
        with open(test_file2, 'w') as f:
            f.write("option2a\noption2b\n")

        # Clear and load with on-demand mode
        wildcards.wildcard_dict = {}
        wildcards._on_demand_mode = False

        print(f"✓ Loading from temp directory: {tmpdir}")
        wildcards.read_wildcard_dict(tmpdir, on_demand=True)
        print(f"✓ Loaded {len(wildcards.wildcard_dict)} wildcards")

        for key, value in wildcards.wildcard_dict.items():
            print(f"✓ Wildcard '{key}':")
            print(f"  - Type: {type(value).__name__}")
            if isinstance(value, wildcards.LazyWildcardLoader):
                print(f"  - Initially loaded: {value._loaded}")
                data = value.get_data()
                print(f"  - After access: loaded={value._loaded}, items={len(data)}")
                print(f"  - Sample data: {data[:2]}")

    print("\n✅ On-demand simulation tests PASSED\n")


def main():
    """Run all tests"""
    print("\n" + "=" * 60)
    print("WILDCARD LAZY LOADING TEST SUITE")
    print("=" * 60 + "\n")

    try:
        test_lazy_loader()
        test_cache_limit_detection()
        test_wildcard_loading()
        test_on_demand_simulation()

        print("=" * 60)
        print("✅ ALL TESTS PASSED")
        print("=" * 60)
        return 0
    except Exception as e:
        print("\n" + "=" * 60)
        print(f"❌ TEST FAILED: {e}")
        print("=" * 60)
        import traceback
        traceback.print_exc()
        return 1


if __name__ == "__main__":
    sys.exit(main())
```


@@ -0,0 +1,97 @@
```bash
#!/bin/bash
# Verify that on-demand mode is actually triggered with 0.5MB limit

# Auto-detect paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IMPACT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
CONFIG_FILE="$IMPACT_DIR/impact-pack.ini"

echo "=========================================="
echo "Verify On-Demand Mode Activation"
echo "=========================================="
echo ""

# Set config to 0.5MB limit
cat > "$CONFIG_FILE" << EOF
[default]
dependency_version = 24
mmdet_skip = True
sam_editor_cpu = False
sam_editor_model = sam_vit_h_4b8939.pth
custom_wildcards = $IMPACT_DIR/custom_wildcards
disable_gpu_opencv = True
wildcard_cache_limit_mb = 0.5
EOF

echo "Config set to 0.5MB cache limit"
echo ""

# Kill any existing servers
pkill -9 -f "python.*main.py" 2>/dev/null || true
sleep 3

# Start server
COMFYUI_DIR="$(cd "$IMPACT_DIR/../.." && pwd)"
cd "$COMFYUI_DIR"
echo "Starting ComfyUI server on port 8190..."
bash run.sh --listen 127.0.0.1 --port 8190 > /tmp/verify_ondemand.log 2>&1 &
SERVER_PID=$!

# Wait for server
echo "Waiting 70 seconds for server startup..."
for i in {1..70}; do
    sleep 1
    if [ $((i % 10)) -eq 0 ]; then
        echo "  ... $i seconds"
    fi
done

# Check server
if ! curl -s http://127.0.0.1:8190/ > /dev/null; then
    echo "✗ Server failed to start"
    cat /tmp/verify_ondemand.log
    exit 1
fi
echo "✓ Server started"
echo ""

# Check loading mode
echo "Loading mode detected:"
grep -i "wildcard.*mode\|wildcard.*size.*cache" /tmp/verify_ondemand.log | grep -v "Maximum depth"
echo ""

# Verify mode
if grep -q "Using on-demand loading mode" /tmp/verify_ondemand.log; then
    echo "✅ SUCCESS: On-demand mode activated with 0.5MB limit!"
elif grep -q "Using full cache mode" /tmp/verify_ondemand.log; then
    echo "❌ FAIL: Full cache mode used (should be on-demand)"
    echo ""
    echo "Cache limit in log:"
    grep "cache limit" /tmp/verify_ondemand.log
else
    echo "⚠️ WARNING: Could not determine mode"
fi

# Test wildcard functionality
echo ""
echo "Testing wildcard functionality in on-demand mode..."
curl -s -X POST http://127.0.0.1:8190/impact/wildcards \
    -H "Content-Type: application/json" \
    -d '{"text": "__adnd__ creature", "seed": 222}' > /tmp/verify_result.json

RESULT=$(python3 -c "import sys, json; print(json.load(sys.stdin).get('text','ERROR'))" < /tmp/verify_result.json 2>/dev/null || echo "ERROR")
echo "  Depth 3 transitive (seed=222): $RESULT"
if [ "$RESULT" = "Shrewd Hatchling creature" ]; then
    echo "  ✅ Transitive wildcard works correctly"
else
    echo "  ❌ Unexpected result: $RESULT"
fi

# Stop server
kill $SERVER_PID 2>/dev/null
pkill -9 -f "python.*main.py.*8190" 2>/dev/null

echo ""
echo "Full log saved to: /tmp/verify_ondemand.log"
```