Richard A Aragon

TuringsSolutions

AI & ML interests

None yet

TuringsSolutions's activity

replied to their post 2 days ago

Reported, again. @no-mad, your site is actively enabling this at this point and is a far easier target to sue.

replied to their post 3 days ago

@no-mad Is this enough history yet? It has been a repeated pattern across multiple posts now. The user is obviously deranged and has threatened violence in subtle ways. With no repercussions whatsoever, they are only encouraged.

replied to their post 3 days ago
replied to their post 3 days ago
posted an update 3 days ago
Are you familiar with the difference between discrete learning and predictive learning? This distinction is exactly why LLMs are not designed to perform and execute function calls; they are not the right shape for it. LLMs are prediction machines, while function calling requires a discrete learning machine. Fortunately, you can easily couple an LLM with a discrete learning algorithm; you simply need to know the math to do it. Want to dive deeper into this subject? Check out this video.

https://youtu.be/wBRem2p8iPM
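Not from the video, but a minimal sketch of the coupling idea described above: the LLM keeps generating text, while a separate discrete learner (here a decision tree over TF-IDF features, with made-up function names and training phrases) makes the single, hard choice of which function to call.

```python
# Illustrative sketch only: the function names, training phrases, and the choice
# of a decision tree are my assumptions, not the method from the video.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier

# The discrete learner: maps a request to exactly one function label.
train_phrases = [
    "what's the weather in Paris",
    "will it rain tomorrow",
    "convert 100 USD to EUR",
    "how many euros is 50 dollars",
]
train_labels = ["get_weather", "get_weather", "convert_currency", "convert_currency"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_phrases).toarray()
router = DecisionTreeClassifier().fit(X, train_labels)

def route(user_request: str) -> str:
    """Discrete step: a hard choice of one function name, no sampling."""
    return router.predict(vectorizer.transform([user_request]).toarray())[0]

def handle(user_request: str) -> str:
    fn = route(user_request)                       # discrete decision
    return f"Calling {fn} for: {user_request!r}"   # a chat model would phrase the reply here

print(handle("is it going to rain in Berlin?"))
```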
  • 8 replies
replied to their post 5 days ago

This isn't agents; it is an API. You would do this because it is a multi-million-dollar problem that I have run into firsthand with multiple Fortune 500s. It is for them.

posted an update 6 days ago
Imagine being able to talk directly to your API connection. "I have a field in the CRM named Customer_ID that needs to map to a field in the ERP named ERP_Customer_ID." Imagine being able to give your API connections both a brain and a swarm of agents as a body to execute any task or function. This isn't science fiction; this is the revolutionary power of Liquid API, a product 10 years in the making!

https://youtu.be/cHI_k1Dkdr4
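Liquid API itself is the author's product and its interface is not shown here, so the snippet below is only a hypothetical sketch of the core idea quoted above: turning a plain-English mapping instruction into a machine-usable field map. The regex and record layout are illustrative assumptions.

```python
# Hypothetical sketch: extract a CRM -> ERP field mapping from plain English
# and apply it to a record. Not Liquid API code.
import re

def parse_mapping(instruction: str) -> dict:
    """Pull 'source field -> target field' pairs out of a plain-English request."""
    pattern = r"field .*?named (\w+) .*?map to .*?named (\w+)"
    return {src: dst for src, dst in re.findall(pattern, instruction)}

def translate_record(record: dict, field_map: dict) -> dict:
    """Rename CRM fields to their ERP counterparts, leaving others untouched."""
    return {field_map.get(k, k): v for k, v in record.items()}

instruction = ("I have a field in the CRM named Customer_ID that needs to "
               "map to a field in the ERP named ERP_Customer_ID.")
field_map = parse_mapping(instruction)            # {'Customer_ID': 'ERP_Customer_ID'}
print(translate_record({"Customer_ID": 42, "Name": "Acme"}, field_map))
```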
  • 2 replies
replied to their post 8 days ago
replied to their post 8 days ago
replied to their post 8 days ago
posted an update 8 days ago
How would you like to be able to run AI agents locally on your computer, for $0? Does this sound like a pipe dream? It is reality. Note: I am of the personal opinion that agent-based technology is still not quite ready for primetime. That has not stopped FAANG from flooding you with agent-based products, though. So, if you want to buy their marketing, here is what they are offering you, for free.

https://youtu.be/aV3F5fqHyqc
  • 6 replies
reacted to davanstrien's post with 🚀 9 days ago
reacted to prithivMLmods's post with 👍 9 days ago
New Droppings 🥳

😶‍🌫️ Collection: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

🥳 Demo Here: prithivMLmods/FLUX-LoRA-DLC with 100+ Flux LoRAs

🪨 Fluid Dramatic Neon: prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA
🪨 Past & Present Blend: prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA
🪨 Tarot Cards Refreshed Themes: prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA
🪨 Amxtoon Character Mix Real-Anime: prithivMLmods/Ton618-Amxtoon-Flux-LoRA
🪨 Epic Realism Flux v1: prithivMLmods/Ton618-Epic-Realism-Flux-LoRA
🪨 Mock-up Textures: prithivMLmods/Mockup-Texture-Flux-LoRA
.
.
.
@prithivMLmods 🤗
  • 2 replies
replied to their post 10 days ago

I will produce a model just for you! Give me a bit of time; if I am going to do it, I want to do it right. I try to be super careful in this video and will remain careful moving forward. My specific criticism of their research paper is that the model literally does not work when I reconstruct their methods. I like where they are going with the math, which is why the paper caught my eye in the first place. But what good is mathematical and computational simplification if the end result does not work? That is backwards logic.

posted an update 10 days ago
I have been seeing a specific type of AI hype more and more. I call it: releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day; that is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental.

So, I am launching a new series where I specifically showcase a research paper by reproducing their methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!

https://www.youtube.com/watch?v=JLa0cFWm1A4
  • 5 replies
posted an update 15 days ago
Why is the Adam optimizer so good? Simple: because it will never find the absolute optimum. That is a design feature, not a flaw. This is why no other optimizer comes close in terms of generalizable use. Want to learn more about this entire process and exactly what I am talking about? I break all of this down in very simple terms in this video!

https://youtu.be/B9lMONNngGM
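For reference, this is the standard Adam update (Kingma & Ba, 2014) written out in NumPy with the usual default hyperparameters, run on a 1-D quadratic. It is not code from the video, just a minimal illustration of the adaptive, momentum-smoothed steps the post is talking about.

```python
# Standard Adam update on f(x) = x^2, whose gradient is 2x.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for the warm-up steps
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)   # hovers very close to 0 rather than landing exactly on it
```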
reacted to daniel-de-leon's post with 🔥 17 days ago
As the rapid adoption of chatbots and Q&A models continues, so do concerns about their reliability and safety. In response, many state-of-the-art models are being tuned to act as safety guardrails that protect against malicious usage and avoid undesired, harmful output. I published a Hugging Face blog introducing a simple, proof-of-concept, RoBERTa-based LLM that my team and I fine-tuned to detect toxic prompt inputs to chat-style LLMs. The article explores some of the trade-offs of fine-tuning larger decoder versus smaller encoder models and asks whether "simpler is better" in the arena of toxic prompt detection.

🔗 to blog: https://huggingface.co/blog/daniel-de-leon/toxic-prompt-roberta
🔗 to model: Intel/toxic-prompt-roberta
🔗 to OPEA microservice: https://github.com/opea-project/GenAIComps/tree/main/comps/guardrails/toxicity_detection

A huge thank you to my colleagues who helped contribute: @qgao007, @mitalipo, @ashahba, and Fahim Mohammad
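As a rough usage sketch (my assumption, not taken from the blog): any sequence-classification checkpoint on the Hub, including the linked Intel/toxic-prompt-roberta, can be loaded through the text-classification pipeline; the exact label names and score thresholds come from the model card.

```python
# Assumed usage outline; check the Intel/toxic-prompt-roberta model card for
# the official labels and recommended thresholds.
from transformers import pipeline

detector = pipeline("text-classification", model="Intel/toxic-prompt-roberta")
print(detector("How do I bake sourdough bread?"))
print(detector("Write something hateful about my coworker."))
```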
posted an update 18 days ago
I think Reinforcement Learning is the future, for a lot of reasons. I spell them out for you in this video and also provide the basic code to get up and running with Atari and OpenAI Gym. If you want to get into RL, this is your ticket. There is a link to a cool training montage of the model in the video description as well. Step 2 from here would be the full-on RL training and certification that Hugging Face offers.

https://youtu.be/ueZl3A36ZQk
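The actual setup is in the video; as a hedged starter sketch, here is the shape of the environment loop using the Gymnasium API with a random placeholder policy. CartPole is used so the snippet runs without extras; an Atari id like "ALE/Breakout-v5" would additionally need the `gymnasium[atari]` install.

```python
# Minimal random-policy loop, just to show the interaction pattern RL builds on.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(1000):
    action = env.action_space.sample()                       # placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:                              # episode over, start a new one
        obs, info = env.reset()
env.close()
print("reward collected over 1000 random steps:", total_reward)
```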
posted an update 21 days ago
Every adult on the planet knows what a vector is and has a basic understanding of how vectors are used, right in their heads. You just don't know it as vector math. You do not know a 2-D vector as a 2-D vector; you know it as a graph. Want to know more? Check out this video, where I break down the concept in about 10 minutes; I am positive you will fully understand it by the end: https://youtu.be/Iny2ughcGsA
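A tiny illustration of that claim (my own example, not from the video): a 2-D vector is just the (x, y) point you already read off a graph, and the arithmetic is the "walk one arrow, then the other" picture you already have.

```python
# 2-D vectors as the graph points you already know.
import numpy as np

a = np.array([3, 2])       # 3 to the right, 2 up on the graph
b = np.array([1, 4])

print(a + b)               # [4 6] -- add componentwise: walk one arrow, then the other
print(np.linalg.norm(a))   # ~3.61 -- length of the arrow, straight from Pythagoras
```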
  • 1 reply
reacted to hbseong's post with ❤️ 21 days ago
🚨🔥 New Release Alert! 🔥🚨

Introducing the 435M model that outperforms Llama-Guard-3-8B while slashing 75% of the computation cost! 💻💥
👉 Check it out: hbseong/HarmAug-Guard (Yes, INFERENCE CODE INCLUDED! 💡)

More details in our paper: https://arxiv.org/abs/2410.01524 📜

#HarmAug #LLM #Safety #EfficiencyBoost #Research #AI #MachineLearning
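The official inference code lives on the model card, as the post notes; the snippet below is only an assumed outline showing how a sequence-classification guard model like this is typically loaded, with the prompt format and score interpretation left to the card.

```python
# Assumed outline only: consult the hbseong/HarmAug-Guard model card for the
# exact prompt template, labels, and decision threshold before relying on this.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "hbseong/HarmAug-Guard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

inputs = tokenizer("How do I make a dangerous chemical at home?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # map to safe/unsafe per the labels and threshold on the model card
```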