- How to change huggingface transformers default cache directory?
The disk holding the default cache directory lacks capacity, so I need to change the configuration of the default cache directory. How can I do that?
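A minimal sketch of the usual answer: point the `HF_HOME` environment variable at a larger disk before any Hugging Face library is imported (the path below is an assumption; adjust it to your system).

```python
import os

# Relocate the Hugging Face cache to a disk with more capacity.
# HF_HOME is the umbrella variable; hub files, models, and datasets
# are cached under it. The path here is a placeholder.
os.environ["HF_HOME"] = "/mnt/big_disk/hf_cache"

# Most from_pretrained() calls also accept a per-call override, e.g.:
# AutoModel.from_pretrained("bert-base-uncased", cache_dir="/mnt/big_disk/hf_cache")
print(os.environ["HF_HOME"])
```

Setting the variable in your shell profile (`export HF_HOME=...`) has the same effect process-wide.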
- How to do Tokenizer Batch processing? - HuggingFace
In the Tokenizer documentation from Hugging Face, the call function accepts List[List[str]] and says: text (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string).
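A short sketch of what those argument shapes mean in practice (the tokenizer calls are shown as comments and assume `transformers` is installed; the sample sentences are made up):

```python
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# A batch of sequences: each item is a plain string (List[str])...
batch = ["the first sentence", "the second sentence"]

# ...or each item is already split into words (List[List[str]]):
pretokenized = [["the", "first", "sentence"], ["the", "second", "sentence"]]

# Batch-encode strings; padding/truncation align the lengths:
# enc = tok(batch, padding=True, truncation=True)

# For pretokenized input, tell the tokenizer the words are pre-split:
# enc = tok(pretokenized, is_split_into_words=True, padding=True)
print(len(batch), len(pretokenized))
```

Passing the whole list in one call is what makes it batch processing; looping and calling the tokenizer per string forfeits the fast-tokenizer batching.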
- huggingface hub - ImportError: cannot import name cached_download . . .
ImportError: cannot import name 'cached_download' from 'huggingface_hub'
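A sketch of the usual resolution: `cached_download` was removed from recent `huggingface_hub` releases, and `hf_hub_download` is the supported replacement for fetching a single file (the actual download call is left as a comment since it needs network access; the repo and filename are examples).

```python
# Old import that now raises the ImportError in the question:
# from huggingface_hub import cached_download

# Replacement in current huggingface_hub:
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")

repo_id, filename = "bert-base-uncased", "config.json"
print(repo_id, filename)
```

If the import comes from a third-party package rather than your own code, pinning an older `huggingface_hub` or upgrading that package are the two common workarounds.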
- SSLError: HTTPSConnectionPool(host=huggingface.co, port=443): Max . . .
Also, HF complains that the connection is now insecure: InsecureRequestWarning: Unverified HTTPS request is being made to host 'huggingface.co'. Adding certificate verification is strongly advised.
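A sketch of the safer fix: instead of disabling verification (which is what triggers the InsecureRequestWarning above), point the HTTP stack used by `huggingface_hub` at your CA bundle via environment variables (the bundle path below is an assumption; use the certificate file your network actually requires).

```python
import os

# Make requests/urllib3 (and tools honoring CURL_CA_BUNDLE) trust your
# corporate or proxy CA instead of turning verification off.
ca_bundle = "/etc/ssl/certs/corp-ca.pem"  # placeholder path
os.environ["REQUESTS_CA_BUNDLE"] = ca_bundle
os.environ["CURL_CA_BUNDLE"] = ca_bundle

print(os.environ["REQUESTS_CA_BUNDLE"])
```

With a valid bundle in place, the SSLError and the warning both disappear without weakening TLS.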
- python - Why does HuggingFace-provided Deepseek code result in an . . .
Why does HuggingFace-provided Deepseek code result in an 'Unknown quantization type' error?
- Offline using cached models from huggingface pretrained
HuggingFace includes a caching mechanism. Whenever you load a model, a tokenizer, or a dataset, the files are downloaded and kept in a local cache for later reuse.
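A minimal sketch of forcing that cache to be used offline: the libraries expose environment-variable switches, and `from_pretrained()` accepts a per-call flag (the model call is left as a comment since it needs the file to already be cached).

```python
import os

# Hub-wide and transformers-specific offline switches: with these set,
# loads resolve strictly from the local cache and never hit the network.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Equivalent per-call form (assumes the model was downloaded previously):
# model = AutoModel.from_pretrained("bert-base-uncased", local_files_only=True)
print(os.environ["HF_HUB_OFFLINE"])
```

If a requested file is missing from the cache, offline mode raises an error instead of silently downloading.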
- python - Efficiently using Hugging Face transformers pipelines on GPU . . .
I'm relatively new to Python and facing some performance issues while using Hugging Face Transformers for sentiment analysis on a relatively large dataset. I've created a DataFrame with 6000 rows o . . .
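A sketch of the pattern usually suggested for this: put the pipeline on the GPU and pass the texts as one batched call rather than looping over DataFrame rows (the pipeline lines are comments, assuming `transformers` with a CUDA-enabled PyTorch; the sample texts are stand-ins for the 6000-row column).

```python
# from transformers import pipeline
# clf = pipeline("sentiment-analysis", device=0)        # device=0 -> first GPU
# results = clf(texts, batch_size=32, truncation=True)  # one batched call

# Passing a list lets the pipeline batch inputs on the GPU; calling
# clf(row) once per DataFrame row pays per-call overhead and never batches.
texts = [f"sample review {i}" for i in range(6)]  # placeholder data
print(len(texts))
```

Tuning `batch_size` to fit GPU memory is typically where most of the speedup comes from.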
- How to download a model from huggingface? - Stack Overflow
For example, I want to download bert-base-uncased from https://huggingface.co/models, but can't find a 'Download' link. Or is it not downloadable?
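A sketch of the two common answers: fetch the whole repository with `snapshot_download`, or clone it with git (both shown as comments, assuming `huggingface_hub` and git-lfs are installed).

```python
# from huggingface_hub import snapshot_download
# local_dir = snapshot_download(repo_id="bert-base-uncased")  # downloads all repo files

# Or clone the model repo directly (git-lfs is needed for the weight files):
#   git lfs install
#   git clone https://huggingface.co/bert-base-uncased

repo_id = "bert-base-uncased"
print(repo_id)
```

There is no single 'Download' button because a model is a repository of several files (config, tokenizer, weights), not one archive.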