Table Question Answering API Inference
Source: R/ez.R
hf_ez_table_question_answering_api_inference.Rd
Usage
hf_ez_table_question_answering_api_inference(
query,
table,
tidy = TRUE,
use_gpu = FALSE,
use_cache = FALSE,
wait_for_model = FALSE,
use_auth_token = NULL,
stop_on_error = FALSE,
...
)

Arguments
- query
The query, in plain text, that you want to ask of the table.
- table
A data frame in which every column is of type character.
- tidy
Whether to tidy the results into a tibble. Default: TRUE (tidy the results)
- use_gpu
Whether to use GPU for inference.
- use_cache
Whether to use cached inference results for previously seen inputs.
- wait_for_model
Whether to wait for the model to be ready instead of receiving a 503 error after a certain amount of time.
- use_auth_token
The token to use as HTTP bearer authorization for the Inference API. Defaults to the value of the HUGGING_FACE_HUB_TOKEN environment variable.
- stop_on_error
Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw error).
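Examples

A minimal usage sketch. The table contents, column names, and query below are illustrative, and the call requires network access plus a Hugging Face token in the HUGGING_FACE_HUB_TOKEN environment variable; it is not run here.

```r
# Build a small table; all columns must be character, per the `table`
# argument's requirement.
repos <- data.frame(
  repository = c("transformers", "datasets", "tokenizers"),
  stars      = c("36542", "4512", "3934"),
  stringsAsFactors = FALSE
)

if (FALSE) {
  # wait_for_model = TRUE avoids a 503 while the hosted model loads;
  # tidy = TRUE returns the answer as a tibble.
  result <- hf_ez_table_question_answering_api_inference(
    query = "How many stars does the transformers repository have?",
    table = repos,
    tidy = TRUE,
    wait_for_model = TRUE
  )
}
```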