Update README.md
Nora takes a language instruction and a camera image of a robot workspace as input.
To execute on an actual robot platform, actions need to be un-normalized using statistics computed on a per-robot, per-dataset basis.
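
Nora's normalization statistics are not spelled out in this README; as a minimal illustrative sketch, assuming actions are min-max normalized to [-1, 1] against per-dataset bounds (the bounds below are made-up placeholder values, not Nora's real statistics), un-normalization amounts to:

```python
import numpy as np

# Hypothetical per-dataset action bounds (e.g., low/high percentiles of the training data).
# Placeholder values for illustration only.
action_low = np.array([-0.05, -0.05, -0.05, -0.3, -0.3, -0.3, 0.0])
action_high = np.array([0.05, 0.05, 0.05, 0.3, 0.3, 0.3, 1.0])

def unnormalize(action: np.ndarray) -> np.ndarray:
    """Map a 7-DoF action from [-1, 1] back to the robot's native action range."""
    return 0.5 * (action + 1.0) * (action_high - action_low) + action_low
```

In the inference example below, the `unnorm_key` argument selects the dataset whose statistics are applied.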
## Getting Started For Inference
To get started with loading and running Nora for inference, we provide a lightweight interface with minimal dependencies.
```bash
git clone https://github.com/declare-lab/nora
cd nora/inference
pip install -r requirements.txt
```

For example, to load Nora for zero-shot instruction following in the BridgeData V2 environments with a WidowX robot:
```python
from PIL import Image

# Load VLA
from inference.nora import Nora
nora = Nora(device='cuda')

# Get inputs: a workspace image and a natural-language instruction
image: Image.Image = camera(...)
instruction: str = <INSTRUCTION>

# Predict action (7-DoF; un-normalized for BridgeData V2)
actions = nora.inference(
    image=image,
    instruction=instruction,
    unnorm_key='bridge_orig'  # optional: dataset key used to un-normalize actions
)

# Execute...
robot.act(actions, ...)
```
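
The snippet above predicts an action from a single observation. On a real platform Nora would typically be queried in a closed loop; the sketch below is purely illustrative, with `camera(...)` and `robot.act(...)` standing in for platform-specific interfaces as in the example above:

```python
# Illustrative closed-loop rollout; camera(...) and robot.act(...) are placeholders
# for your own camera and controller interfaces.
for step in range(100):                  # fixed-horizon episode
    image = camera(...)                  # grab the current workspace image (PIL.Image)
    actions = nora.inference(
        image=image,
        instruction=instruction,
        unnorm_key='bridge_orig',        # un-normalize for the target robot/dataset
    )
    robot.act(actions, ...)              # execute the predicted 7-DoF action
```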