HPC-Coder-v2

The HPC-Coder-v2-16b model is an HPC code LLM fine-tuned on an instruction dataset tailored to common HPC topics such as parallelism, optimization, and accelerator porting. This version is a fine-tuning of the DeepSeek Coder V2 Lite base model. It is fine-tuned on the hpc-instruct, oss-instruct, and evol-instruct datasets. We used the distributed training library AxoNN to fine-tune the model in parallel across many GPUs.

HPC-Coder-v2-1.3b, HPC-Coder-v2-6.7b, and HPC-Coder-v2-16b are the most capable open-source LLMs for parallel and HPC code generation. HPC-Coder-v2-16b is currently the best-performing open-source LLM on the ParEval parallel code generation benchmark in terms of both correctness and performance. It scores similarly to larger 34B models such as Phind-V2 and commercial models such as GPT-4 on parallel code generation. HPC-Coder-v2-6.7b is not far behind the 16b in terms of performance.

Using HPC-Coder-v2

The model is provided as a standard Hugging Face model with safetensors weights. It can be used with transformers pipelines, vLLM, or any other standard model inference framework. HPC-Coder-v2 is an instruct model, and prompts need to be formatted as instructions for best results. It was trained with the following instruct template; a usage sketch follows the template.

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
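
The snippet below is a minimal sketch of prompting the model through the transformers text-generation pipeline with the instruct template above. The sampling settings, the trust_remote_code flag, and the example instruction are illustrative assumptions, not values documented in this model card.

```python
# Minimal sketch (not from the model card): generate a response from
# hpc-coder-v2-16b using the instruct template it was trained with.
from transformers import pipeline

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

generator = pipeline(
    "text-generation",
    model="hpcgroup/hpc-coder-v2-16b",
    torch_dtype="bfloat16",     # the released safetensors weights are BF16
    device_map="auto",          # spread the model across available GPUs
    trust_remote_code=True,     # assumption: may be needed for the DeepSeek V2 architecture
)

# Hypothetical example instruction for illustration.
instruction = (
    "Parallelize this loop with OpenMP: "
    "for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];"
)
prompt = PROMPT_TEMPLATE.format(instruction=instruction)

output = generator(
    prompt,
    max_new_tokens=256,
    do_sample=False,
    return_full_text=False,
)
print(output[0]["generated_text"])
```

The same formatted prompt can be passed to vLLM or any other inference framework; the important part is keeping the instruction/response template intact.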