Writer
Enterprise company · Verified

Activity Feed

AI & ML interests

AGI, LLMs, Knowledge Graph, Palmyra, Domain Specific LLM

Recent Activity

samjulien posted an update 20 days ago
🔥 RAG in just a few lines of code?!

Try out our Hacker News Listener with new built-in RAG capabilities and Palmyra X 004 from the team at Writer!

This Writer Framework app:

- Scrapes up to 500 HN stories and comments
- Uploads them to a Knowledge Graph
- Enables interactive chat with the content using graph-based RAG
- Provides source attribution with every response

The best part? Setting up RAG is now incredibly simple - just a few lines of code to connect your Knowledge Graph as a tool with Palmyra X 004.
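That connection can be sketched as follows. This is a minimal sketch, not the app's actual code: it assumes the `writerai` Python SDK, and the graph ID, tool description, and exact tool-spec shape are placeholders to check against Writer's docs.

```python
# Sketch: attaching a Knowledge Graph as a tool for Palmyra X 004.
# Assumes the `writerai` Python SDK; GRAPH_ID is a placeholder.
import os

GRAPH_ID = "your-knowledge-graph-id"  # placeholder

def build_graph_tool(graph_id: str) -> dict:
    """Build the graph tool spec attached to a chat request."""
    return {
        "type": "graph",
        "function": {
            "description": "Hacker News stories and comments",
            "graph_ids": [graph_id],
            "subqueries": False,
        },
    }

def ask(question: str):
    # Imported lazily so the payload helper above stays dependency-free.
    from writerai import Writer

    client = Writer(api_key=os.environ["WRITER_API_KEY"])
    return client.chat.chat(
        model="palmyra-x-004",
        messages=[{"role": "user", "content": question}],
        tools=[build_graph_tool(GRAPH_ID)],
        tool_choice="auto",
    )
```

With the graph attached as a tool, the model retrieves from the Knowledge Graph itself, which is what makes source attribution per response possible.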

🤗 Space: samjulien/hacker-news-listener
💻 Code: https://github.com/writer/framework-tutorials/tree/main/hacker-news-social-listener

melisa posted an update 4 months ago
🔥 Introducing "Writing in the Margins (WiM)" - better inference pattern for long context LLMs that solves the Lost-in-the-Middle problem 🔥

Paper page: Writing in the Margins: Better Inference Pattern for Long Context Retrieval (2408.14906)

TL;DR
Make your model write "margin notes" as you chunk-prefill the KV cache. Then ask it to reread all the notes before it speaks up.
Works with humans, works with AI 🤖

WiM leverages chunked prefill of the key-value cache: at each step of the prefill, the model concurrently generates a query-based extractive summary, and these are reintegrated at the end of the computation. We term these intermediate outputs "margins", drawing inspiration from the practice of making margin notes for improved comprehension of long contexts in human reading. We show that this technique, which adds only minimal additional computation, significantly improves LLMs' long-context reasoning capabilities.

Think: every chunk gets a chance to be attended to, i.e., to sit at the end of the context, at least once. 🎉
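The control flow is easy to see with stub functions in place of the model. This is a toy sketch of the pattern only: real WiM prompts an LLM during chunked prefill, while the helpers below are plain string functions invented for illustration.

```python
# Toy sketch of the Writing-in-the-Margins inference pattern.
# Plain functions stand in for the LLM so the control flow is clear.

def make_chunks(context: str, size: int) -> list[str]:
    """Split the long context into fixed-size chunks (character-based here)."""
    return [context[i:i + size] for i in range(0, len(context), size)]

def extract_margin(query: str, chunk: str) -> str:
    """Stand-in for the model writing a margin note for one chunk.

    Real WiM prompts the model to extract query-relevant facts from the
    chunk while its KV cache is being prefilled; here we just keep chunks
    that mention a query word.
    """
    return chunk if any(w in chunk.lower() for w in query.lower().split()) else ""

def answer_with_margins(query: str, context: str, chunk_size: int = 32) -> str:
    margins = []
    for chunk in make_chunks(context, chunk_size):
        note = extract_margin(query, chunk)
        if note:  # keep only relevant margin notes
            margins.append(note)
        # An interactive app could emit a progress update here.
    # Final step: the model "rereads" all margins before answering.
    return " | ".join(margins)
```

Each chunk gets its own margin pass while it is freshest in the cache, which is the sense in which every chunk is "at the end of the context at least once".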

📊 Results:
- An average accuracy boost of 7.5% in multi-hop reasoning tasks like HotpotQA and MultiHop-RAG.
- Even a 30% increase in F1-score for summarisation-like tasks (CWE).

Plus, WiM fits seamlessly into interactive applications (think: progress bar!). It can provide real-time progress updates during data retrieval and integration, making it user-friendly and transparent, a stark contrast to feeding 1M tokens to an LLM and waiting 6 minutes for the first token. 🤯

👩‍💻🧑‍💻 Check it out and contribute to our open-source project here: https://github.com/writer/writing-in-the-margins

🧠 More about chunked prefill: https://docs.vllm.ai/en/latest/models/performance.html#chunked-prefill

samjulien posted an update 5 months ago
🔥 Today, Writer dropped Palmyra-Med-70b and Palmyra-Fin-70b, two new domain-specific models that are setting a new standard for medical and financial model performance.

TL;DR
Palmyra-Med-70b
🔢 8k and 32k versions available
🚀 MMLU performance of ~86%, outperforming other top models
👨‍⚕️ Great for diagnosing, planning treatments, medical research, insurance coding and billing
📃 Open-model license for non-commercial use cases
🤗 Available on Hugging Face: Writer/Palmyra-Med-70B
💾 Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-med-70b

Palmyra-Fin-70b
🚀 Passed the CFA Level III exam with a 73% score — the first model to do so
💸 Skilled at complex tasks like investment research, financial analysis, and sentiment analysis
📈 Outperformed other top models on a long-fin-eval test of real-world use cases
📃 Open-model license for non-commercial use cases
🤗 Available on Hugging Face: Writer/Palmyra-Fin-70B-32K
💾 Live on NVIDIA NIM: https://build.nvidia.com/writer/palmyra-fin-70b-32k

Try them out and let us know what you think!

wassemgtk posted an update 9 months ago
The Writer team had the opportunity to run an eval of Mixtral-8x22B, and the results were interesting.

| Benchmark | Score |
| --- | --- |
| MMLU | 77.26 |
| HellaSwag | 88.81 |
| TruthfulQA | 52.05 |
| ARC Challenge | 70.31 |
| WinoGrande | 84.93 |
| GSM8K | 76.65 |
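The post doesn't say which harness or settings were used; EleutherAI's lm-evaluation-harness covers all six benchmarks, so a hypothetical reproduction might look like this (task names and flags are from that harness, not from Writer's run):

```shell
# Hypothetical reproduction with EleutherAI's lm-evaluation-harness;
# Writer's actual harness, few-shot settings, and task variants are unknown.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=mistralai/Mixtral-8x22B-v0.1 \
  --tasks mmlu,hellaswag,truthfulqa_mc2,arc_challenge,winogrande,gsm8k \
  --batch_size auto
```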

wassemgtk posted an update 10 months ago
We are thrilled to announce the release of the OmniACT dataset! This dataset and benchmark pushes the limits of how virtual agents can automate computer tasks. Imagine less clicking and typing, and more observing as your computer organizes schedules or books travel on its own.

Check it out ➡️ Writer/omniact

For a deep dive, here’s the paper: [OmniACT Paper](https://arxiv.org/abs/2402.17553)