philschmid and jeffboudier committed on
Commit
5d1afa6
1 Parent(s): d4347a5

Replace pipeline.py with handler.py (#2)


- Replace pipeline.py with handler.py (8df0307e21977ec061c6918a044445784ac7b4ba)


Co-authored-by: Jeff Boudier <[email protected]>

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -6,14 +6,14 @@ tags:
 library_name: generic
 ---
 
-# Optimized and Quantized DistilBERT with a custom pipeline.py
+# Optimized and Quantized DistilBERT with a custom pipeline with handler.py
 
 > NOTE: Blog post coming soon
 
 This is a template repository for Text Classification using Optimum and onnxruntime to support generic inference with Hugging Face Hub generic Inference API. There are two required steps:
 
 1. Specify the requirements by defining a `requirements.txt` file.
-2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference. This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
+2. Implement the `handler.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference. This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
 
 add
 ```
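The `__init__`/`__call__` contract that the README describes can be sketched as below. This is an illustrative stub only: the real template loads an ONNX model via Optimum and wraps it in a `text-classification` pipeline, whereas here a trivial keyword rule stands in for the model so the example is self-contained. The class name `EndpointHandler` and the `{"inputs": ...}` payload shape are assumptions, not taken from this repository.

```python
# Hypothetical handler.py sketch. Assumed names: EndpointHandler,
# the {"inputs": ...} request payload, and the [{"label", "score"}]
# response shape. A stub classifier replaces the real Optimum
# ONNX pipeline so the contract is runnable without extra deps.

class EndpointHandler:
    def __init__(self, path: str = ""):
        # Called once at startup: this is where the template would load
        # the optimized/quantized model, the tokenizer, and build the
        # text-classification pipeline. Stub rule used instead.
        self.pipeline = lambda text: [
            {
                "label": "POSITIVE" if "good" in text.lower() else "NEGATIVE",
                "score": 0.99,
            }
        ]

    def __call__(self, data: dict) -> list:
        # Called per request: run inference on the "inputs" payload and
        # return a list of {"label", "score"} dicts, matching the
        # input/output specification the template mandates.
        inputs = data["inputs"]
        return self.pipeline(inputs)
```

A request would then look like `EndpointHandler()({"inputs": "This is a good movie"})`, returning a single-element list of label/score dicts.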