Generate text from a prompt using a language model via the Inference Providers API.
Usage
hf_generate(
  prompt,
  model = "HuggingFaceTB/SmolLM3-3B",
  max_new_tokens = 50,
  temperature = 1,
  top_p = NULL,
  token = NULL,
  ...
)
Arguments
- prompt
Character vector of text prompt(s) to generate from.
- model
Character string. Model ID from Hugging Face Hub. Default: "HuggingFaceTB/SmolLM3-3B".
- max_new_tokens
Integer. Maximum number of tokens to generate. Default: 50.
- temperature
Numeric. Sampling temperature (0-2). Default: 1.0.
- top_p
Numeric. Nucleus sampling parameter. Default: NULL.
- token
Character string or NULL. Hugging Face API token used for authentication. Default: NULL.
- ...
Additional parameters passed to the model.
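A minimal usage sketch. This assumes a valid Hugging Face API token is available (e.g. via the token argument or however the package resolves credentials when token is NULL); the prompt text and sampling settings shown are illustrative, not defaults.

```r
# Hypothetical example call; requires network access and a valid
# Hugging Face API token for the Inference Providers API.
out <- hf_generate(
  prompt = "The capital of France is",
  model = "HuggingFaceTB/SmolLM3-3B",
  max_new_tokens = 20,
  temperature = 0.7,   # lower temperature for less random output
  top_p = 0.9          # nucleus sampling: keep top 90% probability mass
)
out
```

Passing a character vector of several prompts generates a completion for each prompt in turn.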