Authentication
Configure authentication for the Hugging Face Hub and Weights & Biases (W&B).
Hugging Face Token
Environment variable
export HF_TOKEN="hf_xxxxxxxxxxxxxxxxxxxxx"
In Python
from autotrain.trainers.clm.params import LLMTrainingParams

params = LLMTrainingParams(
    model="google/gemma-3-270m",
    data_path="./data.jsonl",
    project_name="my-model",
    token="hf_xxxxxxxxxxxxxxxxxxxxx",  # HF token
)
Getting a token
- Go to huggingface.co/settings/tokens
- Click "New token"
- Select "Write" access to be able to push to the Hub
- Copy the token
W&B Token
Environment variable
export WANDB_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
In Python
params = LLMTrainingParams(
    model="google/gemma-3-270m",
    data_path="./data.jsonl",
    project_name="my-model",
    log="wandb",
    wandb_token="xxxxxxxxxxxxxxxxxxxxxxxx",
)
Getting a token
- Go to wandb.ai/authorize
- Copy your API key
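If W&B logging is optional in your setup, you can fall back to local logging when no key is configured. A minimal sketch, assuming WANDB_API_KEY is the variable your environment uses and that "tensorboard" is an accepted value for `log`:

```python
import os


def wandb_key_available() -> bool:
    """True if a non-empty WANDB_API_KEY is present in the environment."""
    return bool(os.getenv("WANDB_API_KEY"))


# Fall back to local tensorboard logging when no W&B key is configured
log_backend = "wandb" if wandb_key_available() else "tensorboard"
```

Pass `log=log_backend` to LLMTrainingParams so a missing key degrades gracefully instead of failing at login time.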
Pushing to the Hub
Push your model to the Hugging Face Hub:
params = LLMTrainingParams(
    model="google/gemma-3-270m",
    data_path="./data.jsonl",
    project_name="my-model",
    push_to_hub=True,
    username="your-hf-username",
    token="hf_xxxxxxxxxxxxxxxxxxxxx",
)
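With push_to_hub=True, the trained model ends up in a repo under your account. The repo id below assumes the common `<username>/<project_name>` naming convention; verify the actual repo name on your Hub profile after the run:

```python
username = "your-hf-username"
project_name = "my-model"

# Assumed naming convention for the pushed repo: <username>/<project_name>
repo_id = f"{username}/{project_name}"
print(repo_id)  # your-hf-username/my-model
```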
Private Models
Use your token to access private models:
# Set an environment variable
import os
os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxxxxxxx"

# Or pass the token directly
params = LLMTrainingParams(
    model="your-org/private-model",
    data_path="./data.jsonl",
    project_name="my-model",
    token="hf_xxxxxxxxxxxxxxxxxxxxx",
)
Private Datasets
Access a private dataset:
params = LLMTrainingParams(
    model="google/gemma-3-270m",
    data_path="your-org/private-dataset",  # HF dataset ID
    project_name="my-model",
    token="hf_xxxxxxxxxxxxxxxxxxxxx",
)
Secure Token Handling
Use a .env file
# .env
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxx
WANDB_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxx

from dotenv import load_dotenv
import os

load_dotenv()

params = LLMTrainingParams(
    model="google/gemma-3-270m",
    data_path="./data.jsonl",
    project_name="my-model",
    token=os.getenv("HF_TOKEN"),
)
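Note that load_dotenv() silently does nothing when the .env file is missing, so token=os.getenv("HF_TOKEN") can quietly resolve to None. A small helper (hypothetical, not part of AutoTrain or python-dotenv) makes that failure explicit:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail loudly."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```

Use `token=require_env("HF_TOKEN")` so a missing variable raises immediately with a clear message.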
Never commit tokens
Add to .gitignore:
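The entries below assume the standard layout used above, where secrets live only in a .env file at the repo root:

```gitignore
# .gitignore
.env
```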
Next Steps