Model Card for bling-cerebras-1.3b-0.1

BLING-cerebras-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, with instruct training on top of the cerebras/Cerebras-GPT-1.3B base.

BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks, with the objective of providing high-quality Instruct models that are 'inference-ready' on a CPU laptop, even without any advanced quantization optimizations.

Model Description

  • Developed by: llmware
  • Model type: Instruct-trained GPT decoder
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: cerebras/Cerebras-GPT-1.3B

Uses

The intended use of BLING models is two-fold:

  1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found it extremely useful when building a proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.

  2. Push the state of the art for smaller instruction-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks, through targeted fine-tuning datasets and focused "instruction" tasks.

Direct Use

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services and legal and regulatory industries with complex information sources. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions more suitable to a ~1B parameter GPT model.

BLING is ideal for rapid prototyping and testing, with the ability to perform an end-to-end workflow locally on a laptop without having to send sensitive information over an Internet-based API.

The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.
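
For illustration, the three core instruction types can be as simple as the following (hypothetical examples; each question or instruction would follow a text passage, per the prompt-packaging guidance below):

# hypothetical instructions for the three core RAG task types
"What was the total revenue in 2022?"             # question-answering
"What is the effective date of the agreement?"    # key-value extraction
"Summarize the key points of the passage."        # basic summarization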

Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM  
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-cerebras-1.3b-0.1")  
model = AutoModelForCausalLM.from_pretrained("llmware/bling-cerebras-1.3b-0.1")  

Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The generation_test_llmware_script.py includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.

The BLING model was fine-tuned with a simple "<human>:" and "<bot>:" wrapper, so to get the best results, wrap inference entries as:

full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

  1. Text Passage Context, and
  2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
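
For example, a fully packaged prompt might look like this (a minimal sketch, with a hypothetical passage and question):

text_passage = "The lease term begins on January 1, 2023 and runs for 36 months."
question = "What is the length of the lease term?"

my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"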

If you are using a HuggingFace generation script:

import torch

# entries is a dict with "context" and "query" keys - e.g., one of the test set samples
# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

# run on GPU if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

#   temperature: set at 0.3 for consistency of output
#   max_new_tokens: set at 100 - may prematurely stop a few of the summaries

outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

# decode only the newly generated tokens, skipping the prompt
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
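
Putting the pieces together, the steps above can be wrapped into a small helper (a minimal sketch, assuming the tokenizer, model, and device defined in the snippets above; the function name bling_inference is illustrative, not part of any library):

def bling_inference(context, query, max_new_tokens=100):
    # package the closed-context prompt in the fine-tuning format
    prompt = "<human>: " + context + "\n" + query + "\n" + "<bot>:"
    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=max_new_tokens,
    )
    # return only the newly generated tokens
    return tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)

answer = bling_inference(
    "The lease term begins on January 1, 2023 and runs for 36 months.",  # hypothetical passage
    "What is the length of the lease term?",
)
print(answer)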

Citation

This BLING model is built on top of a Cerebras-GPT base model. For more information about the Cerebras GPT models, please see the following paper:

Dey, Nolan; Gosal, Gurpreet; Chen, Zhiming (Charles); Khachane, Hemant; Marshall, William; Pathria, Ribhu; Tom, Marvin; Hestness, Joe. "Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster." April 6, 2023.

Model Card Contact

Darren Oberst & llmware team
