Documentation Index
Fetch the complete documentation index at: https://docs.monostate.ai/llms.txt
Use this file to discover all available pages before exploring further.
When to Use the Chat Interface
The Chat interface lets you test and interact with your trained models in a browser.
What It Does
The Chat interface (aitraining chat) provides:
- Interactive conversation with your trained models
- Real-time response generation
- Conversation history
- Model parameter adjustment (temperature, max tokens, etc.)
Best For
- Testing trained models - Verify your fine-tuned model works as expected
- Quick experiments - Try different prompts and parameters
- Demos - Show stakeholders what your model can do
- Debugging - Identify issues in model responses
What It Looks Like
Open your browser to the chat interface:
- Type messages in a chat box
- See model responses in real-time
- Adjust generation parameters
- View conversation history
Starting the Chat Interface
Run aitraining chat, then open http://localhost:7860 in your browser.
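The launch step above can be sketched in Python. The `aitraining chat` command and the default URL http://localhost:7860 come from this page; the helper name and the PATH check are illustrative, not part of the tool.

```python
import shutil
import subprocess

def launch_chat_ui():
    """Start the chat interface if the CLI is installed.

    Illustrative sketch: 'aitraining chat' and the default URL
    http://localhost:7860 are taken from this page. Returns the
    process handle, or None when the CLI is not on PATH.
    """
    if shutil.which("aitraining") is None:
        print("aitraining not found on PATH; install it first")
        return None
    proc = subprocess.Popen(["aitraining", "chat"])
    print("Chat UI starting; open http://localhost:7860 in your browser")
    return proc
```

Checking the PATH first gives a clear error message instead of a raw `FileNotFoundError` when the CLI is missing.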
Workflow Example
- Train your model with the CLI: aitraining llm --train ...
- Start the chat interface: aitraining chat
- Open your browser to localhost:7860
- Select your trained model
- Start chatting to test responses
- Adjust temperature/parameters as needed
- Iterate on training if needed
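The "open your browser" step can race the server's startup. A small helper that polls the port avoids a confusing connection-refused page; 7860 is the default port shown on this page, while the function name is an illustration.

```python
import socket
import time

def wait_for_chat_ui(host="localhost", port=7860, timeout=30.0):
    """Poll until the chat UI's port accepts TCP connections.

    Sketch only: 7860 is the default port shown on this page; start
    'aitraining chat' separately before (or while) calling this.
    Returns True once the port answers, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

Call `wait_for_chat_ui()` right after launching the CLI, and only open the browser once it returns True.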
Advantages
- Immediate feedback - See responses instantly
- No coding required - Just type and chat
- Visual interface - Easy to use
- Parameter tuning - Adjust generation settings in real-time
Limitations
- Not for training - Use CLI or API for training
- Local only - Must access the machine running it
- Single model - Test one model at a time
When to Use Something Else
Use CLI when you:
- Need to train models
- Want to automate workflows
- Need batch processing
- Want reproducible experiments
Use the API when you:
- Build applications
- Need programmatic control
- Integrate with other systems
- Deploy to production
Common Use Cases
Post-Training Verification
“Did my fine-tuning work?”
- Load trained model
- Test with sample prompts
- Verify response quality
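The checklist above can be turned into a small smoke test. `generate` below is a hypothetical stand-in for however you call your trained model (CLI, API, or library); only the pattern of prompting and checking is the point.

```python
def generate(prompt):
    """Stand-in for a call to your trained model (hypothetical).

    Replace the body with a real call to your fine-tuned model.
    """
    return f"echo: {prompt}"

sample_prompts = [
    "Summarize in one sentence: the cat sat on the mat.",
    "What is 2 + 2?",
]

def smoke_test(prompts):
    # Verify every sample prompt gets a non-empty response.
    results = {}
    for prompt in prompts:
        response = generate(prompt)
        assert response.strip(), f"empty response for {prompt!r}"
        results[prompt] = response
    return results

results = smoke_test(sample_prompts)
```

Keeping the sample prompts in a list makes it easy to rerun the same checks after each training iteration and compare responses.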
Parameter Exploration
“What temperature works best?”
- Try different generation settings
- See effects immediately
- Find optimal parameters
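The temperature slider in the chat UI controls the standard temperature-scaled softmax used when sampling the next token. A self-contained sketch shows why low temperature gives more consistent responses; the logit values are made up for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Standard temperature-scaled softmax over raw logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # made-up next-token scores

cold = softmax_with_temperature(logits, 0.2)   # low temp: peaked distribution
hot = softmax_with_temperature(logits, 2.0)    # high temp: flatter distribution
```

At temperature 0.2 nearly all probability mass lands on the top token, so outputs are consistent and good for testing; at 2.0 the distribution flattens and sampling becomes more varied.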
Demo Preparation
“Show the team what we built”
- Visual, easy to understand
- Interactive demonstration
- No technical setup needed
Tips
- Start with low temperature - More consistent responses for testing
- Save good prompts - Document what works
- Compare models - Test before/after fine-tuning
- Check edge cases - Try unusual inputs
Next Steps
Launch Chat
Get started with the chat interface
CLI Training
Train models with the command line