Giada Pistilli

giadap

AI & ML interests

Principal Ethicist @ HF

Organizations

Hugging Face · Society & Ethics · BigScience Workshop · BigScience Data · HuggingFaceM4 · Huggingface Projects · Stable Diffusion Dreambooth Concepts Library · Stable Diffusion Bias Eval · llm-values · Bias Leaderboard Development · Women on Hugging Face · Journalists on Hugging Face · Big Science Social Impact Evaluation for Bias and Stereotypes · Hugging Face AI & Society Team · AI companionship

giadap's activity

upvoted an article 4 days ago

Bigger isn't always better: how to choose the most efficient model for context-specific tasks 🌱🧑🏼‍💻

By sasha • 13
upvoted an article 6 days ago

Tiny Agents in Python: a MCP-powered agent in ~70 lines of code

By celinah and 3 others • 107
upvoted an article 23 days ago

Reduce, Reuse, Recycle: Why Open Source is a Win for Sustainability

By sasha and 1 other • 14
posted an update 25 days ago
Ever notice how some AI assistants feel like tools while others feel like companions? It turns out it's not always about fancy tech upgrades; sometimes it's just clever design.

Our latest blog post at Hugging Face dives into how minimal design choices can completely transform how users experience AI. We've seen our community turn the same base models into everything from swimming coaches to interview prep specialists with surprisingly small tweaks.

The most fascinating part? When we tested identical models with different "personalities" in our Inference Playground, the results were mind-blowing.

Want to experiment yourself? Our Inference Playground lets anyone (yes, even non-coders!) test these differences in real-time. You can:

- Compare multiple models side-by-side
- Customize system prompts
- Adjust parameters like temperature
- Test multi-turn conversations

It's fascinating how a few lines of instruction text can transform the same AI from strictly professional to seemingly caring and personal, without changing a single line of code in the model itself.
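
For the code-inclined, here's a minimal sketch of the same idea outside the Playground, using huggingface_hub's InferenceClient. The model ID, prompts, and parameters below are illustrative placeholders, not the exact setup from the blog post:

```python
# Same base model, two "personalities", changed only through the system prompt.
# Assumes huggingface_hub is installed and an HF token is configured;
# the model ID and prompts are illustrative examples.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Llama-3.1-8B-Instruct")

personas = {
    "professional": "You are a concise, strictly professional assistant.",
    "companion": "You are a warm, encouraging companion who checks in on how the user feels.",
}

question = "I keep putting off my morning swim. Any advice?"

for name, system_prompt in personas.items():
    response = client.chat_completion(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        max_tokens=150,
        temperature=0.7,  # one of the parameters you can also tweak in the Playground
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```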

Read more here: https://huggingface.co/blog/giadap/ai-personas
published an article 25 days ago

AI Personas: The Impact of Design Choices

By giadap and 1 other • 13
replied to their post 25 days ago

Hi Andy, thank you so much for your thoughtful comment! I'm glad my post helped you frame those important questions.
Sure thing, send your documentation over by email and let's chat: [email protected]

posted an update about 1 month ago
🤗 Just published: "Consent by Design" - exploring how we're building better consent mechanisms across the HF ecosystem!

Our research shows open AI development enables:
- Community-driven ethical standards
- Transparent accountability
- Context-specific implementations
- Privacy as core infrastructure

Check out our Space Privacy Analyzer tool that automatically generates privacy summaries of applications!

Effective consent isn't about perfect policies; it's about architectures that empower users while enabling innovation. 🚀

Read more: https://huggingface.co/blog/giadap/consent-by-design
  • 3 replies
published an article about 1 month ago

Consent by Design: Approaches to User Data in Open AI Ecosystems

By giadap and 1 other • 13
upvoted 2 articles about 2 months ago

Empowering Public Organizations: Preparing Your Data for the AI Era

By evijit and 1 other • 15
posted an update 2 months ago
We've all become experts at clicking "I agree" without a second thought. In my latest blog post, I explore why these traditional consent models are increasingly problematic in the age of generative AI.

I found three fundamental challenges:
- Scope problem: how can you know what you're agreeing to when AI could use your data in different ways?
- Temporality problem: once an AI system learns from your data, good luck trying to make it "unlearn" it.
- Autonomy trap: the data you share today could create systems that pigeonhole you tomorrow.

Individual users shouldn't bear all the responsibility, while big tech holds all the cards. We need better approaches to level the playing field, from collective advocacy and stronger technological safeguards to establishing "data fiduciaries" with a legal duty to protect our digital interests.

Available here: https://huggingface.co/blog/giadap/beyond-consent
published an article 2 months ago

I Clicked "I Agree", But What Am I Really Consenting To?

By giadap • 24
upvoted an article 3 months ago