
Generate text from a prompt using a language model via the Inference Providers API.

Usage

hf_generate(
  prompt,
  model = "HuggingFaceTB/SmolLM3-3B",
  max_new_tokens = 50,
  temperature = 1,
  top_p = NULL,
  token = NULL,
  ...
)

Arguments

prompt

Character vector of text prompt(s) to generate from.

model

Character string. Model ID from Hugging Face Hub. Default: "HuggingFaceTB/SmolLM3-3B".

max_new_tokens

Integer. Maximum number of tokens to generate. Default: 50.

temperature

Numeric. Sampling temperature (0-2). Default: 1.0.

top_p

Numeric. Nucleus sampling parameter. Default: NULL.

token

Character string or NULL. Hugging Face API token used to authenticate requests.

...

Additional parameters passed to the model.

Value

A tibble with one row per input prompt and two columns: prompt and generated_text.

Examples

if (FALSE) { # \dontrun{
# Simple text generation
hf_generate("Once upon a time in a land far away,")

# With different model
hf_generate("The future of AI is", model = "meta-llama/Llama-3-8B-Instruct:together")
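
# Multiple prompts with explicit sampling parameters (a sketch using the
# arguments documented above; requires a valid API token and network access)
hf_generate(
  c("R is a language for", "The tidyverse is"),
  max_new_tokens = 30,
  temperature = 0.7,
  top_p = 0.9
)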
} # }