Thanks for providing this very nice example.
I have tried to reproduce it both in Colab and locally without success; I get errors when calling the Inference API. I have an HF access token and have also been granted access to the Meta-Llama 3.1 model on HF.
Part of the error output on Colab:
```
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)
Rate limit reached. Please log in or use a HF access token

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 711, in direct_run
    step_logs = self.step()
  File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 883, in step
    raise AgentGenerationError(f"Error in generating llm output: {e}.")
transformers.agents.agents.AgentGenerationError: Error in generating llm output: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)
Rate limit reached. Please log in or use a HF access token.
Reached max iterations.
NoneType: None
```
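
For reference, here is a minimal sketch of the direct check I would run outside the agent to narrow down whether the 429 comes from the token not being picked up or from something in the agent example itself (the `HF_TOKEN` environment variable name is my assumption; adjust to wherever you store the token):

```python
# Direct Inference API check, bypassing the transformers agent.
# Assumption: the access token is stored in the HF_TOKEN environment variable.
import os

from huggingface_hub import InferenceClient, login

login(token=os.environ["HF_TOKEN"])  # register the token for huggingface_hub calls

client = InferenceClient(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    token=os.environ["HF_TOKEN"],
)

# If this simple call also returns a 429/403, the problem is the token or
# gated-model access rather than the agent code.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=16,
)
print(response.choices[0].message.content)
```

If a direct call like this succeeds, then the issue would seem to be in how the agent picks up the token rather than with the token or model access themselves.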