Falcon-Edge-1B-Instruct converted to run on the Apple Neural Engine (ANE). Model conversion code is available here.

To run, clone this repo with its files and run this command:

python falcon_edge_generate.py --embeddings ./falcon_edge_1b_embeddings.npy --lm_head ./falcon_edge_1b_lmhead.mlmodelc --model ./falcon_edge_1b.mlmodelc --cache_length 1024 --temp 0.7 --min_p 0.1
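The `--temp` and `--min_p` flags control how the next token is sampled. A minimal NumPy sketch of temperature scaling followed by min-p filtering, assuming the standard min-p definition (keep only tokens whose probability is at least `min_p` times the top token's probability); this is an illustration, not the repo's actual sampling code:

```python
import numpy as np

def sample_min_p(logits, temp=0.7, min_p=0.1, rng=None):
    # Illustrative sampler, not taken from falcon_edge_generate.py.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temp
    # Softmax with the usual max-subtraction for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Min-p filter: drop tokens far less likely than the best one.
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Lower `--temp` or higher `--min_p` makes generation more deterministic; `--min_p 0.1` with `--temp 0.7` is a common middle-of-the-road setting.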

Supported cache lengths are 512, 1024, 2048, 3072, 4096, 6144 and 8192.
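Since only these fixed cache lengths are supported, you need to pick the smallest one that fits your prompt plus the tokens you plan to generate. A small hypothetical helper (the function name and logic are illustrative, not part of this repo):

```python
# Supported cache lengths, from the list above.
SUPPORTED_CACHE_LENGTHS = [512, 1024, 2048, 3072, 4096, 6144, 8192]

def pick_cache_length(prompt_tokens: int, max_new_tokens: int) -> int:
    # Choose the smallest supported cache length that holds the full context.
    needed = prompt_tokens + max_new_tokens
    for length in SUPPORTED_CACHE_LENGTHS:
        if length >= needed:
            return length
    raise ValueError(f"context of {needed} tokens exceeds the 8192 maximum")
```

Smaller cache lengths keep memory use and per-token latency down, so there is no benefit to over-provisioning.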

The first run with a new cache length takes longer to initialize, because the model must be compiled for that length.

