Intel

company
Verified

AI & ML interests

None defined yet.

Recent Activity

bconsolvo  updated a Space 1 day ago
Intel/vacaigent
PhillipHoward  updated a dataset 2 days ago
Intel/NeuroComparatives
bconsolvo  published a Space 8 days ago
Intel/vacaigent
View all activity

Articles

Intel's activity

bconsolvo 
updated a Space about 1 month ago
ashahba 
updated a Space about 2 months ago
ek-id 
in Intel/polite-guard 2 months ago

Adding ONNX file of this model

#4 opened 2 months ago by
ek-id
daniel-de-leon 
posted an update 6 months ago
view post
Post
2417
As the rapid adoption of chat bots and QandA models continues, so do the concerns for their reliability and safety. In response to this, many state-of-the-art models are being tuned to act as Safety Guardrails to protect against malicious usage and avoid undesired, harmful output. I published a Hugging Face blog introducing a simple, proof-of-concept, RoBERTa-based LLM that my team and I finetuned to detect toxic prompt inputs into chat-style LLMs. The article explores some of the tradeoffs of fine-tuning larger decoder vs. smaller encoder models and asks the question if "simpler is better" in the arena of toxic prompt detection.

🔗 to blog: https://huggingface.co/blog/daniel-de-leon/toxic-prompt-roberta
🔗 to model: Intel/toxic-prompt-roberta
🔗 to OPEA microservice: https://github.com/opea-project/GenAIComps/tree/main/comps/guardrails/toxicity_detection

A huge thank you to my colleagues that helped contribute: @qgao007 , @mitalipo , @ashahba and Fahim Mohammad