Gemma fine-tuned to speak like a pirate. This is a Gemma model uploaded using the KerasNLP library; it can be run with the JAX, TensorFlow, and PyTorch backends. The model is intended for the CausalLM (causal language modeling) task.
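
A minimal sketch of loading and prompting the model with KerasNLP; the preset handle `hf://user/gemma-pirate` below is a placeholder, not this upload's actual Hub path:

```python
import os

# Select the Keras backend before importing Keras/KerasNLP: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp

# Placeholder preset handle; replace with the Hub path of this upload.
causal_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://user/gemma-pirate")

# Generate a completion with the pirate-flavored fine-tune.
print(causal_lm.generate("What's the best way to brew coffee?", max_length=64))
```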

Model config:

  • name: gemma_backbone
  • trainable: True
  • vocabulary_size: 256000
  • num_layers: 28
  • num_query_heads: 16
  • num_key_value_heads: 16
  • hidden_dim: 3072
  • intermediate_dim: 49152
  • head_dim: 256
  • layer_norm_epsilon: 1e-06
  • dropout: 0
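
The fields above map to the constructor arguments of keras_nlp.models.GemmaBackbone. As a sketch, the same architecture can be instantiated directly (note this builds randomly initialized weights, not this fine-tune's checkpoint):

```python
import keras_nlp

# Build a Gemma backbone with the configuration listed above.
backbone = keras_nlp.models.GemmaBackbone(
    vocabulary_size=256000,
    num_layers=28,
    num_query_heads=16,
    num_key_value_heads=16,
    hidden_dim=3072,
    intermediate_dim=49152,
    head_dim=256,
    layer_norm_epsilon=1e-6,
    dropout=0,
)
backbone.summary()
```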

This model card has been generated automatically and should be completed by the model author. See Model Cards documentation for more information.
