Richard A Aragon

TuringsSolutions

AI & ML interests

None yet

Recent Activity

replied to takeraparterer's post about 4 hours ago
Check this out: I trained an AI on huggingface posts! all of these are AI generated:

----------

Hello! I'm excited to share that my colleague @felipeebert and I have released the largest Spanish LLM benchmark to date. We've developed the Spanish LLM Evaluation Benchmark (SLAB), a set of benchmarks designed to evaluate the ability of language models to understand, generate and translate in Spanish.

SLAB includes five different benchmarks:
- Sentiment Analysis: evaluate models' ability to detect and describe sentiment in natural language
- Fact Checking: evaluate models' ability to detect and refute factual errors in text
- Question Answering: evaluate models' ability to answer questions in Spanish
- Open-ended Questions: evaluate models' ability to generate coherent responses in Spanish
- Translation: evaluate models' ability to translate in Spanish

SLAB is aligned with the latest Spanish LLM industry developments and includes the most recent models available on the market. We aim to keep our benchmarks up-to-date and relevant to the Spanish language ecosystem. SLAB is available at: https://huggingface.co/datasets/argilla/SLAB. If you would like to collaborate on building additional Spanish LLM benchmarks, let's discuss in the comments.

🔗 SLAB Blog Post: https://argilla.com/blog/slab

----------

Hello everyone, I'm thrilled to announce the release of https://huggingface.co/01-AI/01AI-GPT-4o - A new family of models that brings the power of transformer AI to the masses. This model is designed to be accessible and easy to use, while still offering high-quality results.

Key features:
- Small model size: only 23M parameters
- Supports text generation, image generation, and text-to-image tasks
- Data-efficient training with a lightweight tokenizer
- Optimized for efficient on-device usage
- Uses the powerful transformer architecture to deliver high-quality results

Excited to see what you all think! https://huggingface.co/01-AI/01AI-GPT-4o

Organizations

Turing's Solutions Ā· Cloud Mentor Ā· ZeroGPU Explorers Ā· Turings Solutions Ā· Hugging Face for Legal Ā· Data Is Better Together Contributor

Posts 38

Post Ā· 379
Maybe that post I showed the other day with my Hyperbolic Embeddings getting to perfect loss with RAdam was a one-time fluke, bad test dataset, etc.? Anotha' one! I gave it a test set a PhD student would struggle with. This model is a bit more souped up. Major callouts of the model: High Dimensional Encoding (HDC), Hyperbolic Embeddings, Entropix. Link to the Colab Notebook: https://colab.research.google.com/drive/1mS-uxhufx-h7eZXL0ZwPMAAXHqSeGZxX?usp=sharing
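The post above credits RAdam (Riemannian Adam) for the result. The actual optimizer lives in libraries such as geoopt; purely to illustrate the idea — Adam's moment estimates computed on gradients that have first been rescaled by the PoincarĆ© ball's metric, with the iterate projected back inside the ball — here is a toy NumPy sketch. `SimpleRiemannianAdam`, `conformal_rescale`, and the quadratic toy objective are invented for this illustration and are not the notebook's code.

```python
import numpy as np

def conformal_rescale(x, egrad):
    # Poincare-ball inverse metric: Riemannian grad = egrad * ((1 - ||x||^2)^2) / 4
    return egrad * ((1 - np.dot(x, x)) ** 2) / 4

class SimpleRiemannianAdam:
    """Adam whose update direction is the Riemannian (metric-rescaled) gradient,
    with the new point projected back strictly inside the ball (a crude retraction)."""
    def __init__(self, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = 0.0, 0.0, 0

    def step(self, x, egrad):
        g = conformal_rescale(x, egrad)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * g          # first moment
        self.v = self.b2 * self.v + (1 - self.b2) * g * g      # second moment
        mh = self.m / (1 - self.b1 ** self.t)                  # bias correction
        vh = self.v / (1 - self.b2 ** self.t)
        x = x - self.lr * mh / (np.sqrt(vh) + self.eps)
        n = np.linalg.norm(x)
        return x * ((1 - 1e-5) / n) if n >= 1 - 1e-5 else x

# toy objective: pull a point on the ball toward a fixed target
opt = SimpleRiemannianAdam()
x = np.array([0.6, -0.2])
target = np.array([-0.1, 0.3])
for _ in range(200):
    x = opt.step(x, 2 * (x - target))  # Euclidean grad of ||x - target||^2
```

The only change versus plain Adam is the `conformal_rescale` call and the projection step; everything else is the standard bias-corrected moment update.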
Post Ā· 900
I created something called 'Hyperbolic Embeddings'. I literally just embed the tokens into Hyperbolic Space instead of Euclidean space. At first, this did not get me the gains I was expecting. I was a sad panda. Then I thought about it, a Hyperbolic Embedding needs a Hyperbolic Optimizer. So, instead of Adam, I used Riemannian Adam (RAdam). "Ladies and Gentlemen, We Got 'Em!"
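The core claim here — that points embedded in hyperbolic space need a hyperbolic optimizer — comes down to the PoincarĆ© ball's metric being conformal: a Euclidean gradient must be rescaled by (1 āˆ’ ||x||²)²/4 before it is a valid descent direction. As a rough NumPy sketch (not the post's actual code; the distance formula is the standard PoincarĆ©-ball geodesic distance, and the numerical gradient and toy points are made up for illustration):

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points strictly inside the unit (Poincare) ball."""
    diff = np.dot(u - v, u - v)
    denom = (1 - np.dot(u, u)) * (1 - np.dot(v, v)) + eps
    return np.arccosh(1 + 2 * diff / denom)

def riemannian_grad(x, egrad):
    """Conformal metric: Riemannian grad = Euclidean grad * (1 - ||x||^2)^2 / 4."""
    return egrad * ((1 - np.dot(x, x)) ** 2) / 4

def project(x, max_norm=1 - 1e-5):
    """Keep the iterate strictly inside the ball (crude retraction)."""
    n = np.linalg.norm(x)
    return x * (max_norm / n) if n >= max_norm else x

# toy: pull u toward v by Riemannian gradient descent on d(u, v)^2
u, v = np.array([0.5, 0.0]), np.array([0.0, 0.5])
d_init = poincare_dist(u, v)
for _ in range(100):
    h, g = 1e-6, np.zeros_like(u)
    for i in range(len(u)):  # numerical Euclidean gradient of d(u, v)^2 w.r.t. u
        e = np.zeros_like(u); e[i] = h
        g[i] = (poincare_dist(u + e, v) ** 2 - poincare_dist(u - e, v) ** 2) / (2 * h)
    u = project(u - 0.05 * riemannian_grad(u, g))
d_final = poincare_dist(u, v)
```

Near the boundary of the ball (||x|| → 1) the rescaling factor shrinks toward zero, which is exactly why a plain Euclidean optimizer like vanilla Adam takes badly scaled steps on hyperbolic embeddings.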