subashmourougayane's picture
Update README.md
f0bd346 verified
metadata
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
  - mistral
  - hr
  - human-resources
  - recruitment
  - intent-detection
  - entity-extraction
  - merged
license: apache-2.0

Recruitment Intent & Entity Extraction (Merged Model)

This is the merged version of the recruitment intent and entity extraction model. No adapter loading required!

Model Description

Fine-tuned Mistral-7B model for extracting intents and entities from recruitment and job posting requests. This model can understand hiring needs and extract structured information from natural language queries.

Capabilities

  • Intent Detection: Identifies recruitment-related intents
  • Entity Extraction: Extracts key information like:
    • Job titles and roles
    • Required skills and technologies
    • Experience levels
    • Location preferences
    • Number of positions
    • Employment type (full-time, contract, etc.)
    • Application deadlines

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")
tokenizer = AutoTokenizer.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")

messages = [
    {"role": "system", "content": "Extract intent and entities from HR requests. Return JSON format."},
    {"role": "user", "content": "Looking for 3 Python developers with 5+ years experience in Bangalore"}
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.3)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)

Performance

  • Success Rate: 99.9% on comprehensive test suite
  • Response Time: ~4.3 seconds average
  • GPU Memory: ~4.4GB