---
license: apache-2.0
language:
- en
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA-7B model on the [MALLS](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.
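
As an illustrative example of the task (not necessarily a pair drawn from MALLS), the model maps an NL statement to a FOL rule like so:

```
NL:  All dogs are mammals.
FOL: ∀x (Dog(x) → Mammal(x))
```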

**Model type:**
This repo contains the LoRA delta weights for direct-translation LogicLLaMA, which translates an NL statement into a FOL rule in a single pass.
We also provide the delta weights for other modes:
- [naive correction LogicLLaMA](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
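
For a quick start, the snippet below is a minimal sketch of applying these LoRA delta weights on top of a base LLaMA-7B model with `transformers` and `peft`. The base-model path, the repo id, and the prompt wording are illustrative assumptions, not the authors' official script; follow the project page for the exact prompt template and setup.

```python
# Minimal sketch (assumptions noted inline): load base LLaMA-7B and
# apply this repo's LoRA delta weights with PEFT.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL_PATH = "path/to/llama-7b-hf"  # hypothetical path to LLaMA-7B in HF format

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL_PATH)
base_model = LlamaForCausalLM.from_pretrained(BASE_MODEL_PATH)

# Apply the direct-translation LoRA weights; use this repo's id on the Hub.
model = PeftModel.from_pretrained(
    base_model, "yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0"
)

# Illustrative prompt; the actual template used in training is on the project page.
prompt = "Translate the following statement into a first-order logic rule: All dogs are mammals."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```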

**Primary intended uses:**
LogicLLaMA is intended for research use.

## Citation

```
@article{yang2023harnessing,
    title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
    author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
    journal={arXiv preprint arXiv:2305.15541},
    year={2023}
}
```