Finetuned Full HTR Model (Qwen-based)
This is a Qwen Vision2Seq model fine-tuned for Handwritten Text Recognition (HTR). It takes an image of handwritten text and generates clean, editable plain text using a transformer-based image-to-text architecture.
Model Summary
- Model Architecture: Qwen-Vision2Seq (Image encoder + Language decoder)
- Framework: PyTorch (via Hugging Face Transformers)
- Input: Handwritten text image
- Output: Recognized plain text
How to Use (with Hugging Face Transformers)
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image
import torch

# Load the processor and model (trust_remote_code is required for the Qwen-based architecture)
processor = AutoProcessor.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained("Emeritus-21/Finetuned-full-HTR-model", trust_remote_code=True)

# Move the model to GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Load and preprocess the input image
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(device)

# Generate the transcription (adjust max_new_tokens to the expected text length)
generated_ids = model.generate(**inputs, max_new_tokens=128)
recognized_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("Recognized Text:", recognized_text)
```