
NOTE: This "diff model" cannot be used directly.
Users must apply it on top of the original LLaMA weights to obtain the actual Spock weights.
Please find the instructions here: https://github.com/luffycodes/Tutorbot-Spock-Bio.
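
A minimal sketch of what applying the diff looks like, assuming the common "delta weights" convention in which the merged weights are the element-wise sum of the base and diff tensors. The paths below are placeholders, and the repository's own script should be treated as the authoritative procedure:

```python
# Sketch only: assumes merged = base + diff with matching tensor shapes.
# Follow https://github.com/luffycodes/Tutorbot-Spock-Bio for the official steps.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "path/to/original-llama"                    # original LLaMA weights (placeholder)
diff_path = "luffycodes/tutorbot-spock-bio-llama-diff"  # this repository
out_path = "./spock-bio"                                # where to write merged weights (placeholder)

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
diff = AutoModelForCausalLM.from_pretrained(diff_path, torch_dtype=torch.float16)

# Add the diff tensors onto the base tensors, parameter by parameter.
merged = base.state_dict()
for name, delta in diff.state_dict().items():
    merged[name] = merged[name] + delta

base.load_state_dict(merged)
base.save_pretrained(out_path)
# The tokenizer is assumed to ship with the diff repository.
AutoTokenizer.from_pretrained(diff_path).save_pretrained(out_path)
```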

Spock Model Card

Github details

Please check out the repo: https://github.com/luffycodes/Tutorbot-Spock-Bio.

Model details

Model type: Spock is an open-source educational tutoring chatbot trained by fine-tuning the LLaMA and Vicuna models on synthetic student-tutorbot conversations generated using a specialized prompt.

Model date: Spock was trained between April 2023 and May 2023.

Organizations developing the model: The Spock team with members from Rice University and OpenStax.
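
Once the diff has been applied, the merged checkpoint can be loaded like any other LLaMA-family model with the transformers library. A brief usage sketch, where the local path and the prompt wording are illustrative assumptions rather than the repository's prescribed format:

```python
# Inference sketch; the local path and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./spock-bio"  # merged weights produced by applying the diff
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Student: Can you help me understand how photosynthesis works?\nTutorbot:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```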

Training dataset

700 conversations generated with GPT-4 using a specialized prompt. Dataset link: https://huggingface.co/datasets/luffycodes/Tutorbot-Spock-Bio-Dataset
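
The conversations can be inspected directly from the Hugging Face Hub with the datasets library; only the dataset ID is taken from this card, while the split and column names are whatever the dataset itself defines:

```python
# Load the synthetic student-tutorbot conversations and peek at their structure.
from datasets import load_dataset

ds = load_dataset("luffycodes/Tutorbot-Spock-Bio-Dataset")
print(ds)                       # available splits and row counts
split = next(iter(ds.values()))
print(split.features)           # column names and types
print(split[0])                 # one example conversation
```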

Paper or resources for more information: https://arxiv.org/abs/2305.13272

Code or resources for more information: https://github.com/luffycodes/Tutorbot-Spock-Bio

License: Apache License 2.0

Where to send questions or comments about the model: Shashank Sonkar ([email protected])

If you use this work, please cite: CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles https://arxiv.org/abs/2305.13272

@misc{sonkar2023class,
      title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles}, 
      author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
      year={2023},
      eprint={2305.13272},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
