Have a conversation with an open-source language model via the Hugging Face Inference Providers API.
Usage
hf_chat(
  message,
  system = NULL,
  model = "HuggingFaceTB/SmolLM3-3B",
  max_tokens = 500,
  temperature = 0.7,
  token = NULL,
  ...
)

Arguments
- message
Character string. The user message to send to the model.
- system
Character string or NULL. Optional system prompt to set behavior.
- model
Character string. Model ID from Hugging Face Hub. Default: "HuggingFaceTB/SmolLM3-3B". Use `:provider` suffix to select a specific provider (e.g., "meta-llama/Llama-3-8B-Instruct:together").
- max_tokens
Integer. Maximum tokens to generate. Default: 500.
- temperature
Numeric. Sampling temperature (0-2). Default: 0.7.
- token
Character string or NULL. Hugging Face API token used for authentication.
- ...
Additional parameters passed to the model.
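Details

Each argument above maps onto one field of a single chat-completions request. The sketch below illustrates that mapping, assuming the function targets the Hugging Face router's OpenAI-compatible endpoint; the endpoint URL, the httr2 calls, the HF_TOKEN fallback, and the helper name hf_chat_sketch are assumptions about a plausible implementation, not the package's documented internals.

library(httr2)

# Minimal sketch: build an OpenAI-style messages array and POST it.
# Endpoint URL, token fallback, and helper name are assumptions.
hf_chat_sketch <- function(message,
                           system = NULL,
                           model = "HuggingFaceTB/SmolLM3-3B",
                           max_tokens = 500,
                           temperature = 0.7,
                           token = Sys.getenv("HF_TOKEN")) {
  # Optional system prompt first, then the user turn
  messages <- list()
  if (!is.null(system)) {
    messages <- append(messages, list(list(role = "system", content = system)))
  }
  messages <- append(messages, list(list(role = "user", content = message)))

  resp <- request("https://router.huggingface.co/v1/chat/completions") |>
    req_headers(Authorization = paste("Bearer", token)) |>
    req_body_json(list(
      model = model,
      messages = messages,
      max_tokens = max_tokens,
      temperature = temperature
    )) |>
    req_perform() |>
    resp_body_json()

  # Extract the assistant's reply text
  resp$choices[[1]]$message$content
}

Under this reading, a model ID carrying a :provider suffix (as in the Examples below) would simply be passed through in the model field for the router to dispatch.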
Examples
# Simple question
hf_chat("What is the capital of France?")
# With system prompt
hf_chat(
  "Explain gradient descent",
  system = "You are a statistics professor. Use simple analogies."
)
# Use a specific provider
hf_chat("Hello!", model = "meta-llama/Llama-3-8B-Instruct:together")
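# Adjust generation controls per call; these values are illustrative,
# not recommended defaults
hf_chat(
  "Summarize the central limit theorem in two sentences",
  max_tokens = 120,
  temperature = 0.2
)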