---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction  ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---

# CodeTrans model for code documentation generation (Java)

Pretrained model for the Java programming language, using the T5-small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Java code functions and works best with tokenized Java functions.

## Model description

This CodeTrans model is based on the `t5-small` model and has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and then fine-tuned on the code documentation generation task for Java functions/methods.

## Intended uses & limitations

The model can be used to generate descriptions for Java functions, or it can be fine-tuned on other Java code tasks. It works on unparsed and untokenized Java code, but performance is better when the input is tokenized.
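
"Tokenized" here refers to the input format shown in the widget example above: identifiers and punctuation separated by single spaces. A rough approximation of that preprocessing (a sketch only; `rough_java_tokenize` is a hypothetical helper, and the original CodeTrans pipeline may tokenize differently):

```python
import re

def rough_java_tokenize(code: str) -> str:
    """Split identifiers/numbers and punctuation apart, then rejoin with
    single spaces -- a rough approximation of the tokenized input format."""
    tokens = re.findall(r"\w+|[^\w\s]", code)
    return " ".join(tokens)

print(rough_java_tokenize(
    "public static <T, U> Function<T, U> castFunction(Class<U> target) "
    "{ return new CastToClass<T, U>(target); }"
))
# public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }
```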

### How to use

Here is how to use this model to generate Java function documentation with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True),
    device=0  # first GPU; use device=-1 to run on CPU
)

tokenized_code = "public static < T , U > Function < T , U > castFunction  ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [this Colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/small_model.ipynb).
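
Note that `AutoModelWithLMHead` is deprecated in recent versions of `transformers`. If the snippet above fails on your version, a roughly equivalent sketch (not from the original card) loads the model with `T5ForConditionalGeneration` and calls `generate` directly:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

tokenized_code = "public static < T , U > Function < T , U > castFunction  ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
input_ids = tokenizer(tokenized_code, return_tensors="pt").input_ids

# Generate the documentation string and decode it back to text.
output_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```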

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using a sequence length of 512 (batch size 4096). It has approximately 220M parameters in total and uses an encoder-decoder architecture. Pretraining used the AdaFactor optimizer with an inverse-square-root learning rate schedule.
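For reference, a minimal sketch of that inverse-square-root schedule as commonly used for T5-style pretraining (the `warmup_steps` value here is an assumption, not stated on this card):

```python
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    """Inverse-square-root schedule: the rate is capped during warmup,
    then decays as 1/sqrt(step)."""
    return max(step, warmup_steps) ** -0.5  # warmup_steps is an assumed value
```
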
### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using a sequence length of 512 (batch size 256) and only the dataset containing Java code.

## Evaluation results

For the code documentation task, the different models achieve the following results on the different programming languages (BLEU scores):

Test results:

|   Language / Model      |     Python     |      Java      |       Go       |      PHP       |      Ruby      |   JavaScript   |
| ----------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
|   CodeTrans-ST-Small    |      17.31     |     16.65      |     16.89      |     23.05      |      9.19      |      13.70     |
|   CodeTrans-ST-Base     |      16.86     |     17.17      |     17.16      |     22.98      |      8.23      |      13.17     |
|   CodeTrans-TF-Small    |      19.93     |     19.48      |     18.88      |     25.35      |     13.15      |      17.23     |
|   CodeTrans-TF-Base     |      20.26     |     20.19      |     19.50      |     25.84      |     14.07      |      18.25     |
|   CodeTrans-TF-Large    |      20.35     |     20.06      |   **19.54**    |     26.18      |     14.94      |    **18.98**   |
|   CodeTrans-MT-Small    |      19.64     |     19.00      |     19.15      |     24.68      |     14.91      |      15.26     |
|   CodeTrans-MT-Base     |    **20.39**   |     21.22      |     19.43      |   **26.23**    |   **15.26**    |      16.11     |
|   CodeTrans-MT-Large    |      20.18     |   **21.87**    |     19.38      |     26.08      |     15.00      |      16.23     |
|   CodeTrans-MT-TF-Small |      19.77     |     20.04      |     19.36      |     25.55      |     13.70      |      17.24     |
|   CodeTrans-MT-TF-Base  |      19.77     |     21.12      |     18.86      |     25.79      |     14.24      |      18.62     |
|   CodeTrans-MT-TF-Large |      18.94     |     21.42      |     18.77      |     26.20      |     14.19      |      18.83     |
|   State of the art      |      19.06     |     17.65      |     18.07      |     25.16      |     12.16      |      14.90     |
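
A BLEU score like the ones above can be computed with, for example, the `sacrebleu` package (a sketch with made-up strings; the exact BLEU configuration used in the CodeTrans evaluation is not specified on this card):

```python
import sacrebleu

# Hypothetical model outputs and reference docstrings, for illustration only.
predictions = ["casts the given target class to a function ."]
references = [["casts the target class to a function ."]]  # one reference stream

score = sacrebleu.corpus_bleu(predictions, references)
print(score.score)  # corpus-level BLEU
```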

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)