huggingfaceR 2.0.0

Breaking changes

  • The package no longer requires Python or reticulate for core functionality. All inference is handled through the Hugging Face Inference API via httr2. Legacy functions that depend on Python/reticulate remain available but are not required for new workflows.

  • Default chat and generation model changed from HuggingFaceTB/SmolLM3-3B to meta-llama/Llama-3.1-8B-Instruct, which has broader provider support.

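To give a sense of what the httr2-based path looks like, here is a minimal sketch of a chat request against the Hugging Face Inference API using the new default model. This illustrates the public API the package builds on, not huggingfaceR's internal implementation; the endpoint shown is the Hub's OpenAI-compatible chat-completions route, and `HF_TOKEN` is assumed to hold a valid access token.

```r
library(httr2)

# Hedged sketch: a direct chat-completions call to the Hugging Face
# Inference API, the same service huggingfaceR 2.0.0 talks to via httr2.
resp <- request("https://router.huggingface.co/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("HF_TOKEN")) |>
  req_body_json(list(
    model = "meta-llama/Llama-3.1-8B-Instruct",  # the new 2.0.0 default
    messages = list(list(role = "user", content = "Hello!"))
  )) |>
  req_perform() |>
  resp_body_json()

# The assistant's reply text sits in the usual OpenAI-style location:
resp$choices[[1]]$message$content
```

No Python or reticulate is involved at any point: httr2 builds and performs the HTTP request natively in R.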
Improvements

  • All functions return tibbles and accept character vectors, enabling natural composition with dplyr, tidyr, and the rest of the tidyverse.

  • Improved error messages for 404 responses explain that the model may exist on the Hub but not be available for serverless inference, and suggest using hf_check_inference().

  • Documentation updated to clarify that the Inference API serves a curated subset of the Hub’s 500,000+ models, not all of them.
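The tibble-in, tibble-out design described above composes naturally with dplyr pipelines. The sketch below uses `hf_classify()` as a hypothetical stand-in for any huggingfaceR function that takes a character vector and returns one tibble row per input; the actual function names and output columns in your workflow may differ, and `hf_check_inference()` (mentioned above) is the place to turn if a model 404s.

```r
library(dplyr)

reviews <- tibble(
  id   = 1:3,
  text = c("Great product!", "Terrible experience.", "It's fine, I guess.")
)

# `hf_classify()` is a hypothetical example of the vectorized style:
# a character vector in, a tibble with one row per input out.
results <- hf_classify(reviews$text)

# Because inputs and outputs are row-aligned, composition is plain dplyr:
reviews |>
  bind_cols(results) |>
  filter(score > 0.9)
```

If a call fails with a 404, the model may exist on the Hub without being served by the Inference API; checking it first with `hf_check_inference()` avoids chasing the wrong error.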