# Wizard Commands Reference
The AITraining wizard supports various commands to help you navigate, search, and configure your training job.

## Navigation Commands
These commands work at any prompt:

| Command | Shortcut | Description |
|---|---|---|
| `:back` | | Go back to the previous step |
| `:help` | `?`, `:h` | Show detailed help for the current prompt |
| `:exit` | `:quit` | Cancel the wizard and exit |
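
As a quick illustration, a navigation exchange might look like the transcript below. The prompt wording is invented for illustration; only the commands themselves come from the table above.

```text
? Select a dataset: :back        # return to the previous step (model selection)
? Select a model: :help          # print contextual help for this prompt
```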
### Using `:back`
You can go back at any point to change previous answers.

### Using `:help`
Every prompt has contextual help.

## Catalog Commands
These commands work when browsing models or datasets:

| Command | Description |
|---|---|
| `/search <query>` | Search for models/datasets by name |
| `/sort` | Change sorting (trending, downloads, likes, recent) |
| `/filter` | Filter models by size (models only) |
| `/refresh` | Clear cache and reload the list |
### `/search`
Find specific models or datasets:

- `/search gemma` - Find Gemma models
- `/search code` - Find code-focused models
- `/search alpaca` - Find Alpaca-style datasets
- `/search conversation` - Find conversation datasets
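
A search interaction might look like the sketch below. The result listing is invented for illustration, not actual wizard output:

```text
/search gemma

  1. google/gemma-2-2b
  2. google/gemma-2-9b
  3. google/gemma-2-27b
```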
### `/sort`
Change how results are ordered:

| Sort Option | Key | Description |
|---|---|---|
| Trending | T | What’s popular right now |
| Downloads | D | Most downloaded all-time |
| Likes | L | Most liked by the community |
| Recent | R | Newest additions |
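
For example, typing `/sort` and then pressing `D` would reorder the list by all-time downloads. The keys come from the table above; the prompt text here is hypothetical:

```text
/sort
Sort by: [T]rending  [D]ownloads  [L]ikes  [R]ecent
> D
```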
### `/filter`
Filter models by parameter count (only works for models, not datasets):

| Filter | Key | Size Range | Typical Hardware |
|---|---|---|---|
| All | A | No filter | Any |
| Small | S | < 3B parameters | MacBook, consumer GPU |
| Medium | M | 3B - 10B parameters | Gaming GPU, workstation |
| Large | L | > 10B parameters | Cloud GPU, multi-GPU |
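
A filter interaction might look like this (the prompt text is hypothetical; the size buckets match the table above):

```text
/filter
Filter by size: [A]ll  [S]mall (<3B)  [M]edium (3B-10B)  [L]arge (>10B)
> S
```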
### `/refresh`
Clear the cache and fetch fresh data.

## Selection Methods
When choosing a model or dataset, you have several options:

### By Number
Select from the displayed list.

### By HuggingFace ID
Type the full model/dataset ID.

### By Local Path
Point to a local directory.

## Input Conventions
### Defaults
Values in `[brackets]` are defaults; press Enter to accept them.
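
For instance (prompt names and default values here are hypothetical, for illustration only):

```text
Epochs [3]:                  # pressing Enter accepts the default, 3
Learning rate [2e-4]: 1e-4   # typing a value overrides the default
```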
### Required Fields
Fields marked `[REQUIRED]` must be filled in.
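
A required field rejects blank input. In this hypothetical exchange, the prompt name and value are invented for illustration:

```text
Model name [REQUIRED]:        # blank input is not accepted
Model name [REQUIRED]: google/gemma-2-2b
```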
### Yes/No Questions
Answer with `y`/`yes` or `n`/`no`:

- `[Y/n]` - Default is Yes
- `[y/N]` - Default is No
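
For example (the question wording is hypothetical, loosely based on the `peft` and `push_to_hub` parameters listed under Advanced Parameters):

```text
Use PEFT/LoRA? [Y/n]:      # Enter accepts the default: Yes
Push to hub? [y/N]: y      # explicit "y" overrides the No default
```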
## Keyboard Shortcuts
| Key | Action |
|---|---|
| Enter | Accept default or confirm input |
| Ctrl+C | Cancel wizard (same as :exit) |
| Arrow Up/Down | Scroll through numbered options (if supported) |
## Advanced Parameters
When configuring advanced parameters, the wizard groups them:

| Group | Contains |
|---|---|
| Training Hyperparameters | epochs, batch_size, lr, warmup_ratio |
| PEFT/LoRA | peft, lora_r, lora_alpha, quantization |
| DPO/ORPO | dpo_beta, max_prompt_length |
| Hub Integration | push_to_hub, username, token |
| Knowledge Distillation | teacher_model, distill_temperature |
| Hyperparameter Sweep | use_sweep, sweep_n_trials |
| Enhanced Evaluation | use_enhanced_eval, eval_metrics |
| Reinforcement Learning | rl_reward_model_path (PPO only) |
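
A grouped configuration pass might look like the sketch below. The parameter names come from the table above, but the prompt layout and default values are invented for illustration:

```text
-- Training Hyperparameters --
epochs [3]:
batch_size [8]:
lr [2e-4]:
warmup_ratio [0.1]:

-- PEFT/LoRA --
peft [Y/n]:
lora_r [16]:
lora_alpha [32]:
```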
## Tips
### Use `:help` liberally

Every single prompt has detailed help. If you're unsure what something means, type `:help`.
### Go back to fix mistakes

Made a wrong choice? Use `:back` to return to previous steps. Your other answers are preserved.
### Search before scrolling

Instead of scrolling through hundreds of models, use `/search llama` or `/search 7b` to narrow down.
### Filter by your hardware

Not sure which models will work? Use `/filter` → `S` (small) to see only models that fit consumer hardware.
### Accept defaults for first run

On your first training, accept most defaults. Get something working, then customize.