# CSQA GPT2-Large Context-Aware Model

This is a GPT2-large-based model fine-tuned for the CommonsenseQA (CSQA) task with context-aware capabilities.

## Model Architecture

This is a multi-component model that includes:
- **Encoder Model**: GPT2-large-based encoder with adapter layers
- **Latent Model**: GPT2-large-based latent representation model with adapter layers
- **Decoder Model**: GPT2-large-based decoder with adapter layers
- **Projection Layers**: Linear projections between the encoder-latent and latent-decoder components (a sketch of the wiring follows this list)
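
A minimal sketch of how these components could be wired together at inference time. It assumes each component is a standard Hugging Face GPT-2 module and that the projections are plain linear layers; the class name `ContextAwareCSQAModel` and all attribute names are illustrative, not taken from this repository:

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Model

class ContextAwareCSQAModel(nn.Module):
    """Illustrative encoder -> latent -> decoder wiring (names are assumptions)."""

    def __init__(self, hidden_size: int = 1280):  # 1280 is GPT2-large's hidden size
        super().__init__()
        self.encoder = GPT2Model.from_pretrained("gpt2-large")
        self.latent_model = GPT2Model.from_pretrained("gpt2-large")
        self.decoder = GPT2LMHeadModel.from_pretrained("gpt2-large")
        # Linear projections between encoder-latent and latent-decoder, matching
        # encoder_to_latent_model_proj.pt / latent_model_to_decoder_proj.pt below.
        self.encoder_to_latent_proj = nn.Linear(hidden_size, hidden_size)
        self.latent_to_decoder_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor = None):
        # 1. Encode the input question/context.
        enc_out = self.encoder(input_ids, attention_mask=attention_mask)
        # 2. Project into the latent model and refine the representation.
        latent_in = self.encoder_to_latent_proj(enc_out.last_hidden_state)
        lat_out = self.latent_model(inputs_embeds=latent_in, attention_mask=attention_mask)
        # 3. Project into the decoder and produce vocabulary logits.
        dec_in = self.latent_to_decoder_proj(lat_out.last_hidden_state)
        return self.decoder(inputs_embeds=dec_in, attention_mask=attention_mask).logits
```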

## Files Structure

- `encoder.pt` / `encoder_model/`: Encoder component weights and configuration
- `latent_model.pt` / `latent_model/`: Latent model component weights and configuration  
- `decoder.pt` / `decoder_model/`: Decoder component weights and configuration
- `encoder_to_latent_model_proj.pt`: Projection layer from encoder to latent model
- `latent_model_to_decoder_proj.pt`: Projection layer from latent model to decoder
- `tokenizer/`: GPT2 tokenizer files
- `config.json`: Model configuration
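
The component weights can presumably be loaded with `torch.load` using the file names above; whether each `.pt` file holds a state dict or a pickled module is not documented here, so the snippet below assumes state dicts:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# map_location keeps GPU-trained checkpoints loadable on CPU-only machines.
encoder_state = torch.load("encoder.pt", map_location=device)
latent_state = torch.load("latent_model.pt", map_location=device)
decoder_state = torch.load("decoder.pt", map_location=device)
enc_proj_state = torch.load("encoder_to_latent_model_proj.pt", map_location=device)
dec_proj_state = torch.load("latent_model_to_decoder_proj.pt", map_location=device)

# If these are state dicts, they can be loaded into the sketch above, e.g.:
# model.encoder.load_state_dict(encoder_state)
```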

## Usage

This model was trained for the CommonsenseQA multiple-choice task and includes specialized components for context-aware reasoning. To run it, load the components listed under Files Structure and connect them through the two projection layers as described under Model Architecture.
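
A hedged end-to-end example that scores each answer choice by its token log-likelihood, a common setup for CSQA. It reuses the illustrative `ContextAwareCSQAModel` from above; the prompt template is an assumption, since the exact format used in training is not documented here:

```python
import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("tokenizer/")  # tokenizer files from this repo

model = ContextAwareCSQAModel()  # illustrative class from the sketch above
model.eval()

question = "Where would you put a plate after washing it?"
choices = ["cupboard", "dishwasher", "floor", "oven", "table"]

scores = []
with torch.no_grad():
    for choice in choices:
        # Prompt template is an assumption, not the documented training format.
        enc = tokenizer(f"Question: {question} Answer: {choice}", return_tensors="pt")
        logits = model(enc.input_ids, attention_mask=enc.attention_mask)
        # Sum the log-probability of each token given its prefix.
        log_probs = logits[:, :-1].log_softmax(dim=-1)
        targets = enc.input_ids[:, 1:]
        scores.append(log_probs.gather(-1, targets.unsqueeze(-1)).sum().item())

print("Predicted answer:", choices[scores.index(max(scores))])
```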

## Training

The model was trained in multiple stages on the CommonsenseQA dataset, incorporating context-aware mechanisms to improve reasoning capabilities.