Everything starts from high‑precision motion capture you can take anywhere.

At Model Health, we believe that cutting-edge biomechanics has the power to transform physical health and boost human performance, but it needs to be accessible to every movement professional, not just research labs.

We spent years engineering the most portable lab-grade motion capture technology. We are now striving to make it actionable for every practice and setting.

Portable setup
  • No specialized hardware, software, or expertise required
  • Just two iOS devices needed
  • Works anywhere — indoors or outdoors
  • Fully automated cloud processing
Accurate
  • Scientifically validated
  • Based on a constrained biomechanical model
  • Validated against marker-based motion capture
Fast and easy to use
  • One-time setup in under 5 minutes
  • Kinematic results delivered in minutes
  • Automated analyses and progress tracking in a few clicks
Secure
  • Compliant with high-risk data cloud computing standards
  • All data encrypted in transit and at rest
  • Runs on AWS infrastructure with HIPAA-compliant services

Step 1

Record videos through our app

The app walks you through three steps: pairing the recording devices, calibrating them, and recording activities.

Step 2

Visualize, analyze and export 3D movement data

Once the activity is recorded, the videos are uploaded to our cloud, where computer vision, AI, and biomechanics algorithms generate an accurate 3D biomechanical model.

The 3D model yields rich time-series data, including joint angles and angular velocities, global displacement, and center-of-mass trajectories.
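
Once exported, this time-series data is straightforward to work with in standard tools. The sketch below is purely illustrative: it assumes a CSV export with a time column plus one column per degree of freedom (the column names are hypothetical, not Model Health's actual export schema).

```python
import io

import numpy as np
import pandas as pd

# Hypothetical exported kinematics: a time column plus one column per
# degree of freedom, in degrees. Column names are illustrative only.
csv_text = """time,knee_flexion_r,hip_flexion_r
0.00,5.2,10.1
0.01,12.8,11.4
0.02,35.6,13.0
0.03,61.3,15.2
"""

df = pd.read_csv(io.StringIO(csv_text))

# Peak knee flexion angle over the trial (degrees).
peak_knee = df["knee_flexion_r"].max()

# Angular velocity via finite differences (degrees per second).
knee_velocity = np.gradient(df["knee_flexion_r"].to_numpy(),
                            df["time"].to_numpy())

print(peak_knee)  # 61.3
```

In a real workflow the same pattern scales to any exported signal: load the trial, slice the columns of interest, and derive velocities or ranges of motion with a few lines of NumPy.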

Step 3

Trigger an automated analysis tailored to the most common physical tests

We bridge the gap to real-world applications by extracting, from the time-series data, interpretable metrics and actionable insights specific to each movement.

Drawing from the scientific literature, we highlight key metrics that are linked to both performance optimization and injury risk, empowering practitioners to make informed, evidence-based decisions and track patient or athlete progression.
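
To make "interpretable metric" concrete, here is a minimal sketch of one such derivation: estimating jump height from a vertical center-of-mass trajectory. The trace and the metric definition are illustrative assumptions, not Model Health's actual analysis pipeline.

```python
import numpy as np

# Synthetic vertical center-of-mass trace for a countermovement jump:
# a crouch dip, then a ballistic flight arc peaking 0.30 m above standing.
t = np.linspace(0.0, 1.0, 101)  # seconds
standing_com = 1.0              # meters, assumed standing CoM height

com_z = standing_com + np.where(
    t < 0.4,
    -0.2 * np.sin(np.pi * t / 0.4),          # countermovement dip
    0.30 * np.sin(np.pi * (t - 0.4) / 0.6),  # flight arc
)

# One interpretable metric: jump height as peak CoM rise above standing.
jump_height = com_z.max() - standing_com
print(round(jump_height, 2))  # 0.3
```

The same recipe (trajectory in, single evidence-linked number out) underlies most test metrics, whether that number is jump height, squat depth, or peak angular velocity.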

Learn more about our Automated Analysis series

Track every movement metric you need, all in one take

Whatever your focus, we've got you covered

  • Shoulder plane position
  • Joint angles
  • Contact information
  • Center-of-mass trajectory
  • Energy and momentum
  • Joint moments

Scientifically validated

Our technology brings together years of research developed at Stanford University by our co-founders and is already used by sports professionals globally, alongside more than 7,000 researchers.
With Model Health, you benefit from the latest research, with extended features and continuous support for operational and commercial use.
OpenCap: Human movement dynamics from smartphone videos
Uhlrich S & Falisse A, et al. (2023)
PLOS Computational Biology

OpenCap combines advances in computer vision, machine learning, and musculoskeletal simulation to make movement analysis widely available without specialized hardware, software, or expertise.

We validated OpenCap against laboratory-based measurements in a cohort of 10 individuals for a set of tasks including gait, squat, sit-to-stand, and drop vertical jump:

  • 4.5° error in joint angles
  • 6.2% bodyweight error in ground reaction forces
  • 1.2% bodyweight*height error in joint moments
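
Accuracy figures like those above are typically computed by comparing the video-based time series against the marker-based reference, summarized with an error statistic such as the root-mean-square error. A minimal, purely illustrative sketch with made-up numbers:

```python
import numpy as np

# Hypothetical comparison of one joint-angle trace (degrees):
# marker-based laboratory reference vs. video-based estimate.
reference = np.array([10.0, 20.0, 30.0, 40.0])   # marker-based
markerless = np.array([12.0, 19.0, 33.0, 38.0])  # video-based

# Root-mean-square error across the trace.
rmse = np.sqrt(np.mean((markerless - reference) ** 2))
print(round(rmse, 2))  # 2.12
```

In practice such statistics are aggregated over many trials, joints, and subjects to produce cohort-level figures like the degree and bodyweight-normalized errors reported above.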

We also demonstrated OpenCap's usefulness for applications including screening for disease risk, evaluating intervention efficacy, and informing rehabilitation decisions.

Marker data enhancement for markerless motion capture
Falisse A, et al. (2025)
IEEE Transactions on Biomedical Engineering

We developed a more accurate and generalizable model, named marker enhancer, to predict the position of 43 anatomical markers from 20 keypoints identified from video. We trained this model on a large database of 1,433 hours of data from 1,176 subjects.

We showed that our model improves kinematic accuracy (4.1° error) compared to OpenCap's original model (5.3° error) on a benchmark dataset. We also showed that it generalizes better to unseen, diverse movements (4.1° error) than OpenCap’s original model (40.4° error).
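
The input/output shapes involved are easy to picture as a learned mapping from 20 video keypoints (3D each) to 43 anatomical markers (3D each). The tiny randomly initialized two-layer network below is only an illustrative stand-in for that mapping, not the published marker-enhancer architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the paper: 20 video keypoints in, 43 anatomical markers out,
# each a 3D position. The network itself is an untrained toy illustration.
n_in, n_hidden, n_out = 20 * 3, 128, 43 * 3

W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def enhance(keypoints):
    """Map a (20, 3) keypoint array to a (43, 3) marker array."""
    x = keypoints.reshape(-1)
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return (h @ W2 + b2).reshape(43, 3)

markers = enhance(rng.normal(size=(20, 3)))
print(markers.shape)  # (43, 3)
```

The published model is trained on the 1,433-hour database described above; the point here is only the dimensionality of the keypoint-to-marker mapping.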

We developed a computationally efficient optimal control framework to predict human gaits based on optimization of a performance criterion without relying on experimental data.

The ability to predict the mechanics and energetics of a broad range of gaits with complex 3D musculoskeletal models allows testing novel hypotheses about gait control and hastens the development of optimal treatments for neuro-musculoskeletal disorders.