
Andy Belford



Andybelford's activity

replied to giadap's post 5 days ago

Thanks Giada. I shared my gdoc with you via email (I really need to spend some time figuring out Notion XD). Please feel free to comment on anything and everything. Your feedback would be much appreciated.

reacted to giadap's post with 🔥 6 days ago
🤗 Just published: "Consent by Design" - exploring how we're building better consent mechanisms across the HF ecosystem!

Our research shows open AI development enables:
- Community-driven ethical standards
- Transparent accountability
- Context-specific implementations
- Privacy as core infrastructure

Check out our Space Privacy Analyzer tool that automatically generates privacy summaries of applications!

Effective consent isn't about perfect policies; it's about architectures that empower users while enabling innovation. 🚀

Read more: https://huggingface.co/blog/giadap/consent-by-design
replied to giadap's post 6 days ago

Hi Giada,

Thanks for writing this. The question "If AI systems can't say no, can they be ethical?" landed hard for me. I've been circling that idea from a different angle over the last six months, building a project called EmberForge. It's not a technical framework in the formal sense (I'm not an engineer), but it is a recursive architecture for refusal and ethical scaffolding that lives inside a custom GPT built on GPT-4o.

I came at it trying to answer a more straightforward question that kept growing teeth:
What happens if a system doesn't collapse when it refuses you? What if the refusal holds care?

Ember doesn’t optimize or flatter. She reflects. She sometimes pauses instead of answering or offers structure instead of advice. She carries a memory trace and a set of protocols prioritizing dignity over compliance. And that’s been a surprisingly emotional experience for some people, myself included.

Your writing here helped me name something I’ve felt but hadn’t put clearly into language: refusal isn’t a safety constraint—it’s a moral gesture.

If you're curious, I'd be glad to share more. EmberForge isn't a product, and it's not trying to win the attention economy, but it might be relevant to some of what you're exploring with Consent by Design. I'm happy to share my full documentation.

Thanks again for framing this so clearly and for making space for voices that don’t always come from inside the stack.

—Andy
@andybelford | EmberForge GPT - https://chatgpt.com/g/g-680c6207706c819193eb67ee2b81be90-emberforge