Michael Booth

mjboothaus
AI & ML interests: None yet

Organizations

SIL Global - AI, DataBooth

mjboothaus's activity

New activity in mjboothaus/titanic-databooth about 1 month ago

Upload 4 files
#2 opened about 1 month ago by mjboothaus
Upload titanic_sample10.csv
#1 opened about 1 month ago by mjboothaus
replied to m-ric's post 10 months ago

Thanks for providing this very nice example.

I have tried to reproduce the example both in Colab and locally, without success: I get errors when calling the APIs. I have an HF token and have also been granted permission to use the Meta-Llama 3.1 model on HF.

Part of the error output on Colab:
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)

Rate limit reached. Please log in or use a HF access token

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 711, in direct_run
step_logs = self.step()
File "/usr/local/lib/python3.10/dist-packages/transformers/agents/agents.py", line 883, in step
raise AgentGenerationError(f"Error in generating llm output: {e}.")
transformers.agents.agents.AgentGenerationError: Error in generating llm output: 429 Client Error: Too Many Requests for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct (Request ID: eVZLb-h9UvHfNxNBGeMUw)

Rate limit reached. Please log in or use a HF access token.
Reached max iterations.
NoneType: None
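For anyone hitting the same 429: a minimal sketch of how I'd expect an authenticated request to the Inference API to look, assuming the standard Bearer-token scheme; `HF_TOKEN` here is a hypothetical environment variable holding your access token, and the placeholder default is not a real token.

```python
import os
import urllib.request

# Endpoint from the error message above.
API_URL = "https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct"

# Hypothetical: read the token from an environment variable.
token = os.environ.get("HF_TOKEN", "hf_placeholder")

# Attaching the token as a Bearer Authorization header is what the
# "Rate limit reached. Please log in or use a HF access token" message
# asks for; authenticated requests get a higher rate limit.
req = urllib.request.Request(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    method="POST",
)

# The request object now carries the credentials (not yet sent).
print(req.get_header("Authorization"))
```

In the agents example itself, the equivalent fix would presumably be making sure the token is actually picked up (e.g. via `huggingface_hub.login()`), since the 429 suggests the requests were going out unauthenticated.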

New activity in meta-llama/Llama-3.1-8B-Instruct 10 months ago