💡 Idea
MotionCoder is an AI-powered framework that translates gestures and signs in real time into precise CAD/DCC commands: faster 3D workflows, fewer clicks, and better ergonomics.
Today's 3D workflows (CAD/DCC) are click-heavy, time-consuming, and mode-bound: constantly switching modes, opening dialogs, typing parameters. That costs time, focus, and patience.
MotionCoder recognizes iconic gestures (e.g., line, circle, scissors, handwheel) and turns them into parameterized commands in real time.
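To make the translation step concrete, here is a minimal C++ sketch of gesture-to-command dispatch. The `GestureEvent` type, the gesture labels, and the emitted command strings are all hypothetical placeholders for illustration, not MotionCoder's actual API:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Hypothetical gesture event: a label from the recognizer plus a
// geometric parameter extracted from the hand trajectory.
struct GestureEvent {
    std::string label;  // e.g. "circle", "line", "scissors"
    double param;       // e.g. radius or length, in mm
};

int main() {
    // Map each gesture label to a small command emitter. The command
    // strings stand in for calls into a CAD/DCC application.
    std::map<std::string, std::function<void(const GestureEvent&)>> commands{
        {"circle", [](const GestureEvent& g) {
            std::cout << "CIRCLE r=" << g.param << "\n"; }},
        {"line", [](const GestureEvent& g) {
            std::cout << "LINE len=" << g.param << "\n"; }},
        {"scissors", [](const GestureEvent&) {
            std::cout << "TRIM\n"; }},
    };

    GestureEvent g{"circle", 25.0};  // pretend the recognizer emitted this
    if (auto it = commands.find(g.label); it != commands.end())
        it->second(g);               // translate gesture -> command
}
```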
Designed primarily for the desktop with a large monitor, which is more comfortable and efficient than VR. VR is optional.
Why now? Modern GPUs with AI acceleration enable low-latency on-prem pipelines with strong price-performance, often cheaper than time-of-flight (ToF) camera setups.
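As an illustration of what "low-latency" means here, the sketch below times a capture → infer → emit loop in C++. `capture_frame` and `infer_gesture` are hypothetical stubs (the 5 ms sleep fakes GPU inference), not MotionCoder functions:

```cpp
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Stubbed pipeline stages: placeholders for a camera grab and an
// on-prem GPU inference call. Both names are hypothetical.
struct Frame {};
Frame capture_frame() { return Frame{}; }
std::string infer_gesture(const Frame&) {
    std::this_thread::sleep_for(std::chrono::milliseconds(5));  // fake GPU work
    return "circle";
}

int main() {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < 3; ++i) {
        auto t0 = clock::now();
        Frame f = capture_frame();                // camera input
        std::string gesture = infer_gesture(f);  // on-prem inference
        auto ms = std::chrono::duration<double, std::milli>(clock::now() - t0).count();
        // A budget of ~33 ms per frame (30 FPS) is a common real-time target.
        std::cout << gesture << " latency=" << ms << " ms\n";
    }
}
```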
Similar ideas for gesture control exist, but they usually lacked end-to-end real-time capability and the integration of sign-language nuances (handshape, fine-tuning, cognitive reference, spatial grammar, coarticulation, timing/rhythm, prosody, etc.). As a result, they saw little adoption or disappeared. My entrepreneurial experience in sign language, programming, and mechanical engineering closes this gap.
Own CNC workshop; experience in mechanical engineering, drive technology, CAD, kinematics, and C/C++, as well as in sign-language communication. The idea for MotionCoder grew out of my CNC software development, plus hands-on vision/camera projects for clients.
From practice I know when MotionCoder is faster and more ergonomic than the mouse in everyday work. Gestures are more natural and quicker in many scenarios; that shop-floor know-how is embedded in xtan.ai.
Gestures in → result out: fast ⚡️, precise 🎯, performant 💪.
More info: xtan.ai · GitHub: xtanai/overview