Check whether a model supports the Hugging Face Serverless Inference API. Not all models on the Hub are served by the Inference API. This function queries model metadata to determine availability before you make inference calls.
Value
A list (invisibly if quiet = TRUE) with components:
- model_id
The model ID queried.
- available
Logical. TRUE if the model is available on the Inference API.
- pipeline_tag
The model's task type (e.g., "feature-extraction").
- inference_provider
The inference provider, if available.
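The components above make the function easy to use in bulk. A minimal sketch, assuming the `hf_check_inference()` signature shown in the Examples (the candidate model IDs are illustrative, and a network connection is required):

```r
# Filter a vector of candidate models down to those the
# Inference API currently serves, using the `available`
# component documented above.
candidates <- c("BAAI/bge-small-en-v1.5", "intfloat/e5-small-v2")

serves <- vapply(
  candidates,
  function(id) isTRUE(hf_check_inference(id, quiet = TRUE)$available),
  logical(1)
)

# Keep only the models that can be called for inference
candidates[serves]
```

Wrapping the `$available` component in `isTRUE()` guards against an `NA` or missing value if a metadata lookup fails for one model.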
Examples
if (FALSE) { # \dontrun{
# Check if a model supports serverless inference
hf_check_inference("BAAI/bge-small-en-v1.5")
# Use programmatically
result <- hf_check_inference("some-org/some-model", quiet = TRUE)
if (result$available) {
  embeddings <- hf_embed("hello", model = "some-org/some-model")
}
} # }