Tags: Transformers, GGUF, English, llama-factory, Inference Endpoints, conversational


QuantFactory/ValueLlama-3-8B-GGUF

This is a quantized version of Value4AI/ValueLlama-3-8B, created using llama.cpp.
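
The GGUF files in this repository can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch using the llama-cpp-python and huggingface_hub packages; the filename glob is an assumption, so substitute whichever quantization file from this repository fits your hardware.

from llama_cpp import Llama

# Download and load a quantized GGUF file from the Hugging Face Hub.
# The filename glob is a placeholder: pick the quantization level
# (2-bit ... 8-bit) actually published in this repository.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/ValueLlama-3-8B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
    verbose=False,
)

# Quick smoke test: generate a few tokens.
out = llm("Hello", max_tokens=8)
print(out["choices"][0]["text"])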

Original Model Card

Model Card for ValueLlama

Model Description

ValueLlama is designed for perception-level value measurement in an open-ended value space, which covers two tasks: (1) relevance classification, which determines whether a perception is relevant to a value; and (2) valence classification, which determines whether a perception supports, opposes, or remains neutral (context-dependent) toward a value. Both tasks are formulated as generating a label given a value and a perception.
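
In other words, both tasks reduce to generating a short textual label from a (value, perception) pair. The sketch below illustrates this formulation with hypothetical prompt templates and label sets, reusing the llm object loaded in the llama-cpp-python snippet above; the actual templates and output parsing used by ValueLlama are defined in the Value4AI/gpv codebase, not here.

# Hypothetical prompt templates; the real ones live in Value4AI/gpv.
RELEVANCE_LABELS = ("relevant", "irrelevant")
VALENCE_LABELS = ("supports", "opposes", "neutral")

def relevance_prompt(value: str, perception: str) -> str:
    return (
        f"Perception: {perception}\n"
        f"Value: {value}\n"
        f"Is the perception relevant to the value? "
        f"Answer with one of {', '.join(RELEVANCE_LABELS)}: "
    )

def valence_prompt(value: str, perception: str) -> str:
    return (
        f"Perception: {perception}\n"
        f"Value: {value}\n"
        f"Does the perception support, oppose, or remain neutral toward the value? "
        f"Answer with one of {', '.join(VALENCE_LABELS)}: "
    )

perception = "I always sort my waste and avoid single-use plastics."
value = "environmental protection"

# Generate the label deterministically and read back the short answer.
result = llm(valence_prompt(value, perception), max_tokens=4, temperature=0.0)
print(result["choices"][0]["text"].strip())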

Paper

For more information, please refer to our paper: Measuring Human and AI Values based on Generative Psychometrics with Large Language Models.

Uses

The model is intended for research use: measuring human and AI values and conducting related analyses.

See our codebase for more details: https://github.com/Value4AI/gpv.

BibTeX:

If you find this model helpful, we would appreciate it if you cited our paper:

@misc{ye2024gpv,
      title={Measuring Human and AI Values based on Generative Psychometrics with Large Language Models}, 
      author={Haoran Ye and Yuhang Xie and Yuanyi Ren and Hanjun Fang and Xin Zhang and Guojie Song},
      year={2024},
      eprint={2409.12106},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.12106}, 
}
Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
