nashrah18 committed
Commit aab698a · verified · 1 Parent(s): 917a1a8

Update README.md


Specifically designed for female tourists in India.

Files changed (1): README.md (+13 −9)
README.md CHANGED
@@ -7,6 +7,11 @@ tags:
 model-index:
 - name: indian_translatorv1
   results: []
+datasets:
+- nashrah18/indiantranslator
+language:
+- en
+- hi
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
@@ -14,24 +19,23 @@ probably proofread and complete it, then remove this comment. -->
 
 # indian_translatorv1
 
-This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
+This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on the nashrah18/indiantranslator dataset.
 It achieves the following results on the evaluation set:
 - Train Loss: 0.1119
 - Epoch: 14
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
+Solo female tourists in India often face communication barriers due to formal or outdated translations,
+which can lead to misunderstandings, frustration, and even safety concerns.
+This model is designed for them: it translates English text into colloquial Hindi.
 
 ## Training and evaluation data
 
-More information needed
-
-## Training procedure
+batch_size = 16
+learning_rate = 5e-5
+weight_decay = 0.01
+num_train_epochs = 15
 
 ### Training hyperparameters
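The card itself never shows how to run the checkpoint. Below is a minimal inference sketch using the `transformers` MarianMT classes; the repo id `nashrah18/indian_translatorv1` is an assumption inferred from the model name and is not confirmed by the diff, and since the card was generated by Keras, the TensorFlow classes (`TFMarianMTModel`) may apply instead of the PyTorch ones shown here.

```python
def translate(text: str, repo_id: str = "nashrah18/indian_translatorv1") -> str:
    """Translate English text to colloquial Hindi with the fine-tuned MarianMT model.

    `repo_id` is a hypothetical Hub id inferred from the model name in the card.
    """
    # Import inside the function so the sketch reads without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(repo_id)
    model = MarianMTModel.from_pretrained(repo_id)  # Keras-trained card: TFMarianMTModel may be needed
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Requires network access to download the checkpoint from the Hub.
    print(translate("Where is the nearest pharmacy?"))
```

A pipeline("translation", model=repo_id) call would work equally well; the explicit tokenizer/model form is shown so batching and decoding options stay visible.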