Falcon-Edge-1B-Instruct converted to run on the Apple Neural Engine (ANE). Model conversion code is available here.
To run, clone this repo with the files and run:

```
python falcon_edge_generate.py --embeddings ./falcon_edge_1b_embeddings.npy --lm_head falcon_edge_1b_lmhead.mlmodelc --model ./falcon_edge_1b.mlmodelc --cache_length 1024 --temp 0.7 --min_p 0.1
```
Supported cache lengths are 512, 1024, 2048, 3072, 4096, 6144, and 8192.
The first run with a new cache length takes longer to initialize, because the model must be compiled for that length.
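The `--temp` and `--min_p` flags control sampling: logits are scaled by the temperature, and min-p sampling then discards any token whose probability falls below `min_p` times the top token's probability. The generation script's actual implementation is not shown here; this is a minimal sketch of that sampling scheme, with the function name and use of NumPy being assumptions:

```python
import numpy as np

def sample_min_p(logits, temp=0.7, min_p=0.1, rng=None):
    """Hypothetical sketch of temperature + min-p sampling.

    Scales logits by temperature, converts to probabilities, drops
    tokens whose probability is below min_p * (top token probability),
    renormalizes, and samples a token id from what remains.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temp
    # Numerically stable softmax.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Min-p filter: keep tokens at least min_p times as likely as the best one.
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Compared with top-k or top-p, min-p adapts the cutoff to the model's confidence: when one token dominates, nearly everything else is filtered out; when the distribution is flat, more candidates survive.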
Base model: tiiuae/Falcon-E-1B-Instruct