Nice Article! Does Atla-1-mini or its eval framework natively support function calling?
atayloraerospace
Taylor658
AI & ML interests
Multimodal Gen AI | Agentic AI | Computer Vision | AI in Healthcare | AI in Aerospace
Recent Activity
new activity · 7 days ago · Taylor658/Electrohydrodynamics: Update README.md
updated a model · 7 days ago · Taylor658/Electrohydrodynamics
new activity · 7 days ago · Taylor658/Titan-Hohmann: Update README.md
Taylor658's activity
reacted to merve's post · 23 days ago
Small but mighty!
You can fine-tune SmolVLM on an L4 with a batch size of 4 and it will only take 16.4 GB of VRAM; with gradient accumulation, the simulated batch size is 16.
I made a notebook that includes all the goodies (QLoRA, gradient accumulation, gradient checkpointing) with explanations of how they work: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
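For readers who want to see what that memory-saving recipe looks like in code, here is a minimal sketch combining QLoRA (4-bit base weights plus LoRA adapters), gradient checkpointing, and gradient accumulation. The checkpoint name, model class, and hyperparameters are assumptions for illustration; the linked notebook has the exact, tested setup.

```python
# Hedged sketch of the recipe described above: QLoRA + gradient checkpointing
# + gradient accumulation. Checkpoint name, model class, and hyperparameters
# are assumptions; see the linked Smol_VLM_FT.ipynb notebook for the real setup.
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint name

bnb = BitsAndBytesConfig(                    # QLoRA: quantize the frozen base model to 4-bit
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForVision2Seq.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, target_modules="all-linear"))

args = TrainingArguments(
    output_dir="smolvlm-ft",
    per_device_train_batch_size=4,   # real batch size of 4 on the L4
    gradient_accumulation_steps=4,   # 4 x 4 = simulated batch size of 16
    gradient_checkpointing=True,     # recompute activations to save VRAM
    bf16=True,
    num_train_epochs=1,
)
# args would then be passed to a Trainer together with the image-text dataset.
```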
I think your predictions for 2025 are spot on, @clem, especially as they relate to who will be leading Open Source AI and research from a global standpoint. (I especially see it in Computer Vision, Multimodal, and NLP.)
posted an update · 23 days ago
The Stanford Institute for Human-Centered AI has released its 2024 Global AI Vibrancy Tool (https://aiindex.stanford.edu/vibrancy/), a way to explore and compare AI progress across 36 countries.
It measures progress across eight broad pillars: R&D, Responsible AI, Economy, Education, Diversity, Policy and Governance, Public Opinion, and Infrastructure. (Each pillar has a number of sub-indices.)
As a whole, it is not surprising that the USA was at the top in overall score as of 2023 (AI investment activity, for example, is a large part of the Economy pillar and of the overall USA ranking), but drilling into more strategic macro pillars like Education, Infrastructure, or R&D reveals interesting growth patterns in Asia (particularly China) and Western Europe that I suspect the 2024 metrics will bear out.
Hopefully the 2024 Global Vibrancy ranking will break out AI and ML verticals like Computer Vision, NLP, or the AI Agent space, as that may also give an indication, at the global macro level, of what is to come for AI in 2025.
reacted to clem's post · 24 days ago
Six predictions for AI in 2025 (and a review of how my 2024 predictions turned out):
- There will be the first major public protest related to AI.
- A big company will see its market cap divided by two or more because of AI.
- At least 100,000 personal AI robots will be pre-ordered.
- China will start to lead the AI race (as a consequence of leading the open-source AI race).
- There will be big breakthroughs in AI for biology and chemistry.
- We will begin to see the economic and employment growth potential of AI, with 15M AI builders on Hugging Face.
How my predictions for 2024 turned out:
- A hyped AI company will go bankrupt or get acquired for a ridiculously low price: ✅ (Inflection, Adept AI, ...)
- Open-source LLMs will reach the level of the best closed-source LLMs: ✅ with QwQ and dozens of others
- Big breakthroughs in AI for video, time-series, biology and chemistry: ✅ for video, 🔴 for time-series, biology and chemistry
- We will talk much more about the cost (monetary and environmental) of AI: ✅ monetary, 🔴 environmental
- A popular media will be mostly AI-generated: ✅ with NotebookLM by Google
- 10 million AI builders on Hugging Face leading to no increase in unemployment: currently 7M AI builders on Hugging Face
reacted to clem's post · about 1 month ago
I've been in Brazil for 10 days now.
I've been surprised by the gap between the massive number of people interested in AI (ChatGPT adoption is crazy here) and the relatively low number of real AI builders, i.e. people and companies building their own AI models, datasets, and apps.
Lots of effort is needed across the world for everyone to participate in, control, and benefit from this foundational technology, starting with open-source and multilingual AI, more access to GPUs, and AI builder training for all!
posted an update · about 1 month ago
Function calling is a key component of agent workflows. To call functions, an LLM needs a way to interact with other systems and run code. This usually means connecting it to a runtime environment that can handle function calls, data, and security.
Per the Berkeley Function-Calling Leaderboard, only 2 of the top 20 models with function calling built in are fully open source as of 17 Nov 2024 (the other 2 models in the top 20 that are not closed source carry cc-by-nc-4.0 licenses).
https://gorilla.cs.berkeley.edu/leaderboard.html
The 2 open-source models in the top 20 that currently support function calling are:
meetkai/functionary-medium-v3.1
Team-ACE/ToolACE-8B
This is both a huge disadvantage and an opportunity for the open-source community as enterprises, small businesses, government agencies, etc. quickly adopt agents and agent workflows over the next few months. Open source will have a lot of catching up to do, since enterprises that initially build their agent workflows on closed-source models will be hesitant to switch to an open-source alternative later.
Hopefully more open-source models will support function calling in the near future.
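To make concrete what "function calling built in" buys you, here is a minimal, model-agnostic sketch of the loop an agent runtime performs around such a model: expose a tool, parse the structured call the model emits, execute it, and hand the result back. The JSON call format and the get_weather tool are hypothetical examples; each model family (functionary, ToolACE, the closed models) defines its own call syntax and special tokens.

```python
# Minimal, model-agnostic sketch of a function-calling dispatch loop.
# The JSON call format and the get_weather tool are hypothetical examples;
# real models each define their own tool-call syntax and special tokens.
import json

def get_weather(city: str) -> str:
    """Toy tool the model is allowed to call."""
    return f"22 C and sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the LLM and execute it."""
    call = json.loads(model_output)  # e.g. {"name": "get_weather", "arguments": {"city": "Paris"}}
    result = TOOLS[call["name"]](**call["arguments"])
    # In a full agent loop, the result is appended to the chat as a "tool"
    # message and sent back to the model so it can produce the final answer.
    return result

print(dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```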
posted an update · 2 months ago
The Mystery Bot saga I posted about earlier this week has been solved.
Cohere for AI has just announced its open-source Aya Expanse multilingual model. The initial release supports 23 languages, with more on the way soon.
You can also try Aya Expanse on your mobile phone by messaging the global WhatsApp number or one of the initial set of country-specific numbers listed below.
WhatsApp - +14313028498
Germany - (+49) 1771786365
USA - +18332746219
United Kingdom - (+44) 7418373332
Canada - (+1) 2044107115
Netherlands - (+31) 97006520757
Brazil - (+55) 11950110169
Portugal - (+351) 923249773
Italy - (+39) 3399950813
Poland - (+48) 459050281
posted an update · 2 months ago
Spent the weekend testing out some prompts with Mystery Bot on my mobile... exciting things are coming soon for the following languages:
Arabic, Chinese, Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese!
reacted to fdaudens's post · 3 months ago
Your AI toolkit just got a major upgrade! I updated the Journalists on Hugging Face community's collection with tools for investigative work, content creation, and data analysis.
Sharing these new additions with links in case they're helpful:
- @wendys-llc 's excellent 6-part video series on AI for investigative journalism https://www.youtube.com/playlist?list=PLewNEVDy7gq1_GPUaL0OQ31QsiHP5ncAQ
- @jeremycaplan 's curated AI Spaces on HF https://wondertools.substack.com/p/huggingface
- @Xenova 's Whisper Timestamped (with diarization!) for private, on-device transcription Xenova/whisper-speaker-diarization & Xenova/whisper-word-level-timestamps
- Flux models for image gen & LoRAs autotrain-projects/train-flux-lora-ease
- FineGrain's object cutter finegrain/finegrain-object-cutter and object eraser (this one's cool) finegrain/finegrain-object-eraser
- FineVideo: massive open-source annotated dataset + explorer HuggingFaceFV/FineVideo-Explorer
- Qwen2 chat demos, including 2.5 & multimodal versions (crushing it on handwriting recognition) Qwen/Qwen2.5 & Qwen/Qwen2-VL
- GOT-OCR integration stepfun-ai/GOT_official_online_demo
- HTML to Markdown converter maxiw/HTML-to-Markdown
- Text-to-SQL query tool by @davidberenstein1957 for HF datasets davidberenstein1957/text-to-sql-hub-datasets
There's a lot of potential here for journalism and beyond. Give these a try and let me know what you build!
You can also add your favorite ones if you're part of the community!
Check it out: https://huggingface.co/JournalistsonHF
#AIforJournalism #HuggingFace #OpenSourceAI
reacted to Wauplin's post · 3 months ago
Exciting news!
We've just released huggingface_hub v0.25.0 and it's packed with powerful new features and improvements!
Top highlights:
• Upload large folders with ease using huggingface-cli upload-large-folder. Designed for your massive models and datasets; much recommended if you struggle to upload your Llama 70B fine-tuned model.
• Search API: new search filters (gated status, inference status) and fetch trending score.
• InferenceClient: major improvements simplifying chat completions and handling async tasks better.
We've also introduced tons of bug fixes and quality-of-life improvements, thanks to the awesome contributions from our community!
Check out the release notes: Wauplin/huggingface_hub#8
Want to try it out? Install the release with:
pip install huggingface_hub==0.25.0
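As a quick, hedged illustration of the two headline features, here is what the large-folder upload and the chat-completion client look like from Python. The repo id, local path, model name, and prompt are placeholders, and the calls assume you are already logged in with a Hugging Face token.

```python
# Hedged sketch of the v0.25.0 highlights; repo ids, paths, and the model name
# are placeholders, and a configured HF token is assumed.
from huggingface_hub import HfApi, InferenceClient

# 1) Resumable upload of a huge local folder (the Python counterpart of the
#    `huggingface-cli upload-large-folder` command mentioned above).
api = HfApi()
api.upload_large_folder(
    repo_id="your-username/your-70b-finetune",  # placeholder repo
    repo_type="model",
    folder_path="./checkpoints",
)

# 2) Simplified chat completions through InferenceClient.
client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")  # any hosted chat model
resp = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize the v0.25.0 release in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```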
posted an update · 3 months ago
2024 CVPR videos are now available!
CVPR conference keynotes, panels, posters, workshops, and other content are now available:
https://cvpr.thecvf.com/Conferences/2024/Videos
reacted to aaditya's post · 4 months ago
Last Week in Medical AI: Top Research Papers/Models
(August 25 - August 31, 2024)
- MultiMed: Multimodal Medical Benchmark
- A Foundation model for generating chest X-ray images
- MEDSAGE: Medical Dialogue Summarization
- Knowledge Graphs for Radiology Report Generation
- Exploring Multi-modal LLMs for Chest X-ray
- Improving Clinical Note Generation
...
Check the full thread: https://x.com/OpenlifesciAI/status/1829984701324448051
reacted to vilarin's post · 4 months ago
Amazing day. AWPortrait-FL is finally here!
AWPortrait-FL is fine-tuned on FLUX.1-dev using the training set of AWPortrait-XL and nearly 2,000 fashion photographs with extremely high aesthetic quality.
Model: Shakker-Labs/AWPortrait-FL
Demo: vilarin/flux-labs
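If you want to try the checkpoint locally rather than in the demo Space, something like the following should work with diffusers, assuming the repo ships diffusers-format FLUX weights (check the model card); the prompt and sampler settings are purely illustrative.

```python
# Hedged sketch: loading the FLUX.1-dev finetune with diffusers' FluxPipeline.
# Assumes the repo is in diffusers format; prompt and settings are illustrative.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("Shakker-Labs/AWPortrait-FL", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    "close-up fashion portrait, soft studio light, film grain",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("awportrait_sample.png")
```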
posted an update · 4 months ago
Andrew Ng recently gave a strong defense of open-source AI models, and of the need to slow down legislative efforts in the US and the EU to restrict open-source AI innovation, in a talk at Stanford GSB.
See the video below:
https://youtu.be/yzUdmwlh1sQ?si=bZc690p8iubolXm_
reacted to mmhamdy's post · 5 months ago
Introducing The Open Language Models List
This is a work-in-progress list of open language models with permissive licenses such as MIT, Apache 2.0, or other similar licenses.
The list is not limited to autoregressive models, or even to transformer models; it includes many SSMs and SSM-Transformer hybrids.
Contributions, corrections, and feedback are very welcome!
The Open Language Models List: https://github.com/mmhamdy/open-language-models
reacted to not-lain's post · 5 months ago
A new state-of-the-art model for background removal is out!
You can try the model at ZhengPeng7/BiRefNet
The model shows impressive results, outperforming briaai/RMBG-1.4
You can try out the model in: ZhengPeng7/BiRefNet_demo
Paper: Bilateral Reference for High-Resolution Dichotomous Image Segmentation (2401.03407)
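For a sense of how the model is used for background removal, here is a hedged sketch that mirrors the remote-code usage shown on the model card; the 1024x1024 input size, normalization values, and output indexing are taken on trust from that card, so double-check them there.

```python
# Hedged sketch of BiRefNet background removal via transformers' remote-code path.
# Input size, normalization, and output indexing follow the model card; verify there.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
).eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    preds = model(preprocess(image).unsqueeze(0))[-1].sigmoid()  # foreground probability map

mask = transforms.ToPILImage()(preds[0].squeeze()).resize(image.size)
image.putalpha(mask)   # use the predicted mask as the alpha channel
image.save("cutout.png")
```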
reacted to m-ric's post · 5 months ago
Agentic data analysis: drop your data file, let the LLM do the analysis
Need to do some quick exploratory data analysis? Get help from an agent.
I was impressed by Llama-3.1's capacity to derive insights from data. Given a CSV file, it makes quick work of exploratory data analysis and can derive interesting insights.
On the data from the Kaggle Titanic challenge, which records which passengers survived the Titanic wreck, it was able on its own to derive interesting trends like "passengers that paid higher fares were more likely to survive" or "the survival rate was much higher for women than men".
The cookbook even lets the agent build its own submission to the challenge, and it ranks under 3,000 out of 17,000 submissions: not bad at all!
Try it for yourself in this Space demo: m-ric/agent-data-analyst
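For context, the kinds of trends the agent surfaces boil down to a couple of lines of pandas; here is an illustrative sketch using the Kaggle Titanic column names (the file path is a placeholder).

```python
# Illustrative pandas sketch of the insights the agent derives from the Titanic CSV.
# Column names follow the Kaggle dataset; the file path is a placeholder.
import pandas as pd

df = pd.read_csv("titanic/train.csv")

# Survival rate by sex: "much higher for women than men".
print(df.groupby("Sex")["Survived"].mean())

# Fare vs. survival: "passengers that paid higher fares were more likely to survive".
print(df.groupby("Survived")["Fare"].mean())
```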
reacted to lhoestq's post · 5 months ago
Easy synthetic dataset file generation using LLM DataGen! Link: https://huggingface.co/spaces/lhoestq/LLM_DataGen
Features + how it works:
- Generate the dataset content you want just by entering a file name
- Optionally specify the column names you need
- The dataset is streamed and generated on the fly in JSON Lines format
- Generation is constrained to always output valid JSON
How does this work?
1/ Enter a file name
2/ The model generates column names for such a file. Using structured generation, it can generate 2 to 5 column names using lowercase characters and underscores. I use a prompt that asks for column names for a realistic dataset, with a low temperature.
3/ The columns are used to update the finite state machine for the dataset-content structured generation, so that it generates JSON objects using those columns.
4/ The model generates JSON objects using structured generation again, with the updated finite state machine. I use a prompt that asks for realistic data and a temperature of 1.
> Why update a finite state machine instead of re-creating one?
Creating one can take up to 30 seconds, while updating one takes 0.1 seconds (though it requires manipulating a graph, which is not easy to implement).
> Batched generation is faster, why not use it?
Generating in batches is faster but tends to produce duplicates for this demo.
Further work could be to provide different prompts (one per sequence in the batch) to end up with a different distribution of sequences in each batch, or to implement a custom sampler that would forbid generating the same data in sequences of the same batch.
> How does structured generation work?
I used the outlines library with transformers to define a JSON schema that the generation has to follow. It uses a finite state machine with token_id as transitions.
Let me know what you think! And feel free to duplicate/modify it to try other models/prompts or sampling methods :)
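To make the structured-generation step more concrete, here is a hedged sketch using the outlines library with a fixed Pydantic schema standing in for the dynamically generated column names. The model id, column names, and prompt are placeholders; the Space updates its finite state machine on the fly rather than compiling one from a static schema, and the outlines API shown is the one from around the time of this post.

```python
# Hedged sketch of constrained JSON generation with outlines + transformers.
# Model id, schema fields, and prompt are placeholders; the real Space updates
# its finite state machine dynamically instead of compiling a static schema.
from pydantic import BaseModel
import outlines

class Row(BaseModel):
    # Pretend these column names came out of step 2/ above.
    product_name: str
    price_usd: float
    in_stock: bool

model = outlines.models.transformers("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder small model
generator = outlines.generate.json(model, Row)  # builds a finite state machine over token ids

row = generator("Generate one realistic row for a file named products.jsonl")
print(row)  # a Row instance; the output is guaranteed to be schema-valid JSON
```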
reacted to sequelbox's post · 5 months ago
JUST RELEASED: Fireplace 2 for Llama 3.1 8b Instruct!
Fireplace 2 is an 'expansion pack' of structured outputs you can request during your chat, using special request tokens to let Llama know you're looking for specific types of responses:
- Inline function calls
- SQL queries
- JSON objects
- Data visualization with matplotlib
ValiantLabs/Llama3.1-8B-Fireplace2
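A hedged sketch of loading the model for ordinary chat with transformers is below; Fireplace 2's special request tokens are documented on the model card and are not reproduced here, so this only shows the plain chat path with a placeholder prompt.

```python
# Hedged sketch: plain chat with the release via transformers. The Fireplace 2
# request tokens live on the model card and are intentionally not guessed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-Fireplace2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a SQL query listing the top 5 customers by total spend."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```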