you act like you think you've met me before though
Firstname Lastname
takeraparterer
AI & ML interests
None yet
Recent Activity
replied to
their
post
about 21 hours ago
Check this out: I trained an AI on Hugging Face posts! All of these are AI-generated:
----------
Hello!
I'm excited to share that my colleague @felipeebert and I have released the largest Spanish LLM benchmark to date.
We've developed the Spanish LLM Evaluation Benchmark (SLAB), a set of benchmarks designed to evaluate the ability of language models to understand, generate and translate in Spanish.
SLAB includes five different benchmarks:
- Sentiment Analysis: evaluate models' ability to detect and describe sentiment in natural language
- Fact Checking: evaluate models' ability to detect and refute factual errors in text
- Question Answering: evaluate models' ability to answer questions in Spanish
- Open-ended Questions: evaluate models' ability to generate coherent responses in Spanish
- Translation: evaluate models' ability to translate in Spanish
SLAB is aligned with the latest Spanish LLM industry developments and includes the most recent models available on the market. We aim to keep our benchmarks up-to-date and relevant to the Spanish language ecosystem.
SLAB is available at: https://huggingface.co/datasets/argilla/SLAB.
If you would like to collaborate on building additional Spanish LLM benchmarks, let's discuss in the comments.
SLAB Blog Post: https://argilla.com/blog/slab
----------
Hello everyone,
I'm thrilled to announce the release of
https://huggingface.co/01-AI/01AI-GPT-4o -
A new family of models that brings the power of transformer AI to the masses.
This model is designed to be accessible and easy to use, while still offering high-quality results.
Key features:
- Small model size: only 23M parameters
- Supports text generation, image generation, and text-to-image tasks
- Data-efficient training with a lightweight tokenizer
- Optimized for efficient on-device usage
- Uses the powerful transformer architecture to deliver high-quality results
Excited to see what you all think!
https://huggingface.co/01-AI/01AI-GPT-4o
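For readers curious what "trained an AI on huggingface posts" typically involves, here is a minimal sketch of fine-tuning a small causal language model on a plain-text dump of posts. The base model (gpt2), the file name hf_posts.txt, and all hyperparameters are illustrative assumptions; the original post does not describe the author's actual setup.

```python
# Minimal sketch: fine-tune a small causal LM on a text file of posts.
# Assumptions (not from the original post): posts are collected one per line
# in hf_posts.txt, and gpt2 is used as the base model.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One post per line in a plain-text file (hypothetical path).
raw = load_dataset("text", data_files={"train": "hf_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="posts-lm",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized["train"],
    # Causal LM objective: labels are the inputs, no masked-LM masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample a new "post" from the fine-tuned model.
inputs = tokenizer("Hello everyone,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```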
Organizations
None yet
takeraparterer's activity
replied to
their
post
about 21 hours ago
replied to
their
post
about 21 hours ago
Just for clarification, who do you think I am?
replied to
their
post
about 21 hours ago
Why did you make this metaphorical? I was asking about the LLM.
replied to
their
post
about 22 hours ago
Just following up on something earlier: did you ever get around to decreasing the model size and adding dropout?
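For reference, here is a minimal sketch of what "decreasing the model size and adding dropout" could look like with a GPT-2-style config from transformers. The specific layer counts, hidden size, and dropout probabilities are illustrative assumptions, not settings from this thread.

```python
# Illustrative only: a smaller GPT-2-style config with dropout enabled.
# Sizes and dropout rates are assumptions, not values from the discussion.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    n_layer=4,        # fewer transformer blocks than the 12 in gpt2-small
    n_head=4,
    n_embd=256,       # smaller hidden size
    n_positions=512,
    embd_pdrop=0.1,   # dropout on embeddings
    attn_pdrop=0.1,   # dropout inside attention
    resid_pdrop=0.1,  # dropout on residual connections
)
model = GPT2LMHeadModel(config)
print(f"parameters: {model.num_parameters():,}")
```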
replied to
their
post
1 day ago
ok
replied to
their
post
1 day ago
replied to
KnutJaegersberg's
post
3 days ago
meaning making is always work!
what?
reacted to
KnutJaegersberg's
post
4 days ago
Post
1514
DeepSeek R1 on how to build conscious AGI
https://huggingface.co/blog/KnutJaegersberg/deepseek-r1-on-conscious-agi
replied to
KnutJaegersberg's
post
4 days ago
I love meaningless AI slop
Report
#1 opened 12 days ago
by
takeraparterer
Failed to run the model with 4 nodes of 8 4090
17
#25 opened 26 days ago
by
aisensiy
reacted to
christopher's
post
about 2 months ago
Post
1633
The folks at Foursquare released a dataset of 104.5 million places of interest (foursquare/fsq-os-places), and here's all of them on a plot
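As a rough illustration of how such a plot can be produced, the sketch below streams a sample of the dataset and scatter-plots the coordinates. The column names latitude/longitude and the sample size are assumptions; this is not the original plotting code.

```python
# Rough sketch: scatter-plot a sample of foursquare/fsq-os-places.
# Column names ("latitude"/"longitude") and the 1M-row sample are assumptions.
from datasets import load_dataset
import matplotlib.pyplot as plt

ds = load_dataset("foursquare/fsq-os-places", split="train", streaming=True)

lats, lons = [], []
for i, row in enumerate(ds):
    if i >= 1_000_000:  # sample instead of all 104.5M rows
        break
    if row["latitude"] is not None and row["longitude"] is not None:
        lats.append(row["latitude"])
        lons.append(row["longitude"])

plt.figure(figsize=(12, 6))
plt.scatter(lons, lats, s=0.01, alpha=0.3)
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Sample of Foursquare open places")
plt.savefig("places.png", dpi=200)
```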
replied to
christopher's
post
about 2 months ago
replied to
TuringsSolutions's
post
about 2 months ago
This comment has been hidden
Which one is more risky, type 1 or type 2?
2
#33 opened about 2 months ago
by
Hanialy
needs work
#5 opened 2 months ago
by
takeraparterer
replied to
TuringsSolutions's
post
2 months ago
This comment has been hidden
replied to
TuringsSolutions's
post
2 months ago
This comment has been hidden
replied to
TuringsSolutions's
post
2 months ago
This comment has been hidden