Upload 165-Allow Motion.mp4
This revised diagram outlines a three-phase training system for our myoelectric prosthetic hand, refining the process from data collection to real-time EMG-only control.
Phase 1: Data Collection
Goal: Build a dataset pairing hand gestures with EMG signals.
Inputs:
Hand Image (Webcam): Captures visual gestures.
EMG Signal (Sensors): Records muscle activity.
Processing:
Image Processing:
Tracks hand region & detects finger positions.
Recognizes meaningful gestures (e.g., fist, pinch).
EMG Processing:
Filters and analyzes muscle signals.
Pairs EMG patterns with the corresponding camera-detected gestures (see the sketch after this phase).
Output:
A training dataset linking EMG signals to gestures.
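As a rough illustration of this phase, the sketch below pairs each processed EMG window with the gesture label reported by the camera pipeline and appends it to a CSV dataset. The helper names (`read_emg_window`, `detect_gesture_from_camera`), the feature choice (mean absolute value and RMS), the window size, and the file name are assumptions for illustration, not details of our actual implementation.

```python
import csv
import time

import numpy as np


def read_emg_window(n_samples=200, n_channels=2):
    """Placeholder for the EMG sensors: returns one window of raw samples."""
    return np.random.randn(n_samples, n_channels)  # stand-in data


def detect_gesture_from_camera():
    """Placeholder for the webcam pipeline: returns a label such as 'fist' or 'pinch'."""
    return "fist"  # stand-in label


def emg_features(window):
    """Basic time-domain EMG features: mean absolute value and RMS per channel."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])


# Phase 1: pair each processed EMG window with the camera-detected gesture label.
with open("emg_gesture_dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for _ in range(500):                       # number of labelled windows to collect
        label = detect_gesture_from_camera()   # visual ground truth for this moment
        feats = emg_features(read_emg_window())
        writer.writerow(list(feats) + [label])
        time.sleep(0.1)                        # pacing between windows
```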
Phase 2: Matching EMG + Hand Gestures (Learning Phase)
Goal: Train AI to associate EMG signals with gestures.
Inputs: Same as Phase 1 (camera + EMG).
Processing:
EMG Signal Matching:
Compares live EMG data to the dataset.
Predicts intended gestures.
Verification:
Cross-checks predicted gestures against camera input or user input for accuracy (see the training-and-verification sketch after this phase).
Action Execution:
Prosthetic performs the recognized gesture.
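A minimal training-and-verification sketch for this phase, assuming the CSV layout from the previous sketch (feature columns followed by a gesture label). The classifier choice (a random forest via scikit-learn), the split ratio, and the model file name are illustrative assumptions; the held-out accuracy check stands in for the camera/user verification step.

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the Phase 1 dataset: feature columns followed by a gesture label column.
data = pd.read_csv("emg_gesture_dataset.csv", header=None)
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# Hold out part of the data so predictions can be verified against known labels.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Verification: compare predicted gestures with the camera-derived labels.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Save the trained model for the EMG-only phase.
joblib.dump(clf, "emg_gesture_model.joblib")
```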
Phase 3: EMG-Only Control (Use Phase)
Goal: Operate the prosthetic without camera input (EMG-driven).
Input: EMG signals only (no visual tracking).
Processing:
Signal Processing:
Analyzes muscle activity in real time.
Matches EMG patterns to pre-trained gestures (see the control-loop sketch after this phase).
Action Execution:
Prosthetic performs the predicted gesture (e.g., grip, release).
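A sketch of the EMG-only control loop, assuming the model saved in the previous sketch and the `read_emg_window`/`emg_features` helpers from the Phase 1 sketch (imported here from a hypothetical `emg_collection` module). `send_to_prosthetic` is a hypothetical stand-in for the hand's actual command interface, and the loop rate is an assumption to be tuned for the hardware.

```python
import time

import joblib

# Helpers from the Phase 1 sketch (hypothetical module name).
from emg_collection import emg_features, read_emg_window

clf = joblib.load("emg_gesture_model.joblib")  # classifier trained in Phase 2


def send_to_prosthetic(gesture):
    """Placeholder for the hand controller: forwards a command such as 'grip' or 'release'."""
    print("executing:", gesture)


while True:
    window = read_emg_window()                   # same EMG windowing as in Phase 1
    feats = emg_features(window).reshape(1, -1)  # single-sample feature vector
    gesture = clf.predict(feats)[0]              # map the live EMG pattern to a trained gesture
    send_to_prosthetic(gesture)
    time.sleep(0.1)                              # loop rate; tune for the hardware
```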
Key Improvements Over the Previous Version
Structured Progression:
Data Collection → AI Training → Real-World Use (EMG-only).
Reduced Dependency on Camera:
The final phase relies solely on EMG, making it more practical for daily use.
Verification Step:
Ensures AI predictions match actual gestures before full deployment.

165-Allow Motion.mp4 +3 -0
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a570b015c92d74327b691d3db86e8858d0f52da2cd0e3549d8b32a75517ae9d4
+size 118982307