Generate text from a prompt using a language model via the Inference Providers API.
Usage
hf_generate(
prompt,
model = "meta-llama/Llama-3.1-8B-Instruct",
max_new_tokens = 50,
temperature = 1,
top_p = NULL,
token = NULL,
endpoint_url = NULL,
...
)
Arguments
- prompt
Character vector of text prompt(s) to generate from.
- model
Character string. Model ID from the Hugging Face Hub. Default: "meta-llama/Llama-3.1-8B-Instruct".
- max_new_tokens
Integer. Maximum number of tokens to generate. Default: 50.
- temperature
Numeric. Sampling temperature (0-2). Default: 1.0.
- top_p
Numeric or NULL. Nucleus sampling parameter (0-1); tokens are sampled from the smallest set whose cumulative probability exceeds this value. Default: NULL (no nucleus filtering).
- token
Character string or NULL. Hugging Face API token for authentication.
- endpoint_url
Character string or NULL. A custom Inference Endpoint URL. The endpoint must support the chat completions format.
- ...
Additional parameters passed to the model.
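Examples

A minimal usage sketch, assuming network access and a valid Hugging Face API token; reading the token from an `HF_TOKEN` environment variable is an illustrative convention here, not necessarily the package's built-in token-discovery behavior:

```r
## Not run:
# Hypothetical example: requires a valid API token and network access.
out <- hf_generate(
  prompt = "Write a haiku about the sea.",
  model = "meta-llama/Llama-3.1-8B-Instruct",
  max_new_tokens = 50,
  temperature = 0.7,
  token = Sys.getenv("HF_TOKEN")
)
print(out)

# Vectorized prompts: prompt accepts a character vector,
# generating one completion per element.
outs <- hf_generate(
  prompt = c("Define entropy.", "Define enthalpy."),
  max_new_tokens = 30,
  token = Sys.getenv("HF_TOKEN")
)
## End(Not run)
```

Lowering `temperature` toward 0 makes output more deterministic; raising it toward 2 increases randomness.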