
Understanding Motion Analytics, Where It Is and Where It's Going

In the News

TechTarget – by George Lawton


By Larry Solomon, Chief People Officer, EPAM

Machine learning is helping make motion analysis more usable for the average enterprise, creating new use cases and applications that can drive value.

It used to take a room full of equipment or special sensors to capture high-quality movement data, which powers motion analytics like that used to animate movie characters or study professional athletes. But this is starting to change. Information that once had to be captured with complex technical tools such as lidar and special sensors can now be mined from ordinary video by sophisticated machine learning algorithms.

Developers are finding ways to accurately capture things like yoga alignment using the camera built into modern smartphones rather than requiring a special sensor vest or a specialized room.

"AI pose detection using a regular camera really reduces the barrier of entry and cuts pricing for the final products," said Peter Ma, co-founder and CEO of MixPose, a streaming platform for yoga instructors. Although high-end motion tracking hardware will always have its uses, he believes software-based motion tracking applications show a lot of promise since they are more adaptable.

There are a variety of techniques for analyzing human movement with AI. Whether they are referred to as motion analytics, movement intelligence or intelligent video analytics, they're all aimed at understanding how people move and interact with the world.

Emerging use cases for motion analytics include improving all aspects of sports and fitness, elder care monitoring, enforcing social distancing guidelines, retail analytics and smart energy management.

"Human pose estimation algorithms are developing rapidly, improving in all areas of 3D accuracy, computational requirements, ability to interpret crowds of people or players and size of the target," said David Rose, futurist at EPAM Continuum, a digital transformation consultancy that builds custom motion tracking apps.

Growing field of motion analytics apps

EPAM Continuum has been working with Catapult Sports to weave AI analytics into an app that helps coaches capture and organize libraries of player movement and team dynamics. They created a cloud-based service that uses Python and PyTorch for the back-end system and the Unity 3D game engine for the front-end interface. Rose said computer vision technology has great potential to help automate manual processes such as reducing the time required to review footage from hours to minutes.

AI can also be used to improve form analysis to help optimize an athlete's performance. Masaki Nakada, co-founder of Presence Fit, an online coaching platform, said the best athletes have refined the self-awareness required to mimic experts.

"With AI, this kind of feedback no longer needs to come from your own self-awareness," he said. The AI can help identify what someone needs to do differently to perform a movement more efficiently. Similar techniques can also be applied to analyze a student's form, pace and reps, and to build personalized programs optimized for each student.
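Presence Fit's platform is proprietary, but the kind of rep-and-pace analysis Nakada describes can be illustrated with a minimal sketch. Assuming a pose estimator already supplies a joint-angle time series (here, a hypothetical knee angle in degrees), reps can be counted with simple threshold hysteresis:

```python
def count_reps(angles, low=90.0, high=160.0):
    """Count exercise reps from a joint-angle time series (degrees).

    A rep is one full cycle: the angle drops below `low` (e.g. the
    bottom of a squat) and then rises back above `high` (standing).
    The gap between the two thresholds filters out frame-to-frame jitter.
    """
    reps = 0
    at_bottom = False
    for angle in angles:
        if not at_bottom and angle < low:
            at_bottom = True   # descended into the rep
        elif at_bottom and angle > high:
            at_bottom = False  # rose back up: one full rep completed
            reps += 1
    return reps

# Two simulated squats: roughly 170° -> 80° -> 170°, twice over
series = [170, 140, 100, 80, 85, 120, 165, 170, 150, 95, 82, 130, 168]
print(count_reps(series))  # 2
```

A production system would compute the angles from estimated joint keypoints and tune the thresholds per exercise, but the counting logic reduces to this kind of state machine.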

Motion analytics use cases

AI judges could also help to reduce bias and improve accuracy in sports. AI video analytics can remove the emotional aspect of judging a game and ensure the same scoring system is applied to every player, helping ensure fairness, Nakada said.

In a large online yoga class, a teacher may have trouble monitoring every student in the tiny video windows. MixPose is using motion intelligence to highlight when a student's pose deviates from the rest of the class.

"Using pose detection, we can detect who is not doing the pose everybody else is doing and prioritize those so that the instructor can pay more attention to them," Ma said.

He and his team built the applications by combining the PoseNet library for pose estimation, some custom algorithms and Google's ML Kit for the mobile app.
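MixPose's implementation isn't public, but the prioritization Ma describes can be sketched in simplified form. Assuming each student's keypoints have already been extracted by a model such as PoseNet, one hypothetical approach is to surface the student whose pose lies farthest from the class average:

```python
import math

def pose_distance(a, b):
    """Mean Euclidean distance between two poses, each a list of
    (x, y) keypoints in normalized image coordinates."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def most_deviant_student(poses):
    """Return the index of the student whose pose differs most from
    the class-average pose, so the instructor can check them first."""
    n, n_kp = len(poses), len(poses[0])
    mean_pose = [
        (sum(p[k][0] for p in poses) / n, sum(p[k][1] for p in poses) / n)
        for k in range(n_kp)
    ]
    dists = [pose_distance(p, mean_pose) for p in poses]
    return max(range(n), key=dists.__getitem__)

# Three students, two keypoints each; student 2 is out of alignment
students = [
    [(0.5, 0.5), (0.5, 0.8)],
    [(0.52, 0.5), (0.5, 0.79)],
    [(0.2, 0.3), (0.9, 0.9)],
]
print(most_deviant_student(students))  # 2
```

A real class feed would involve many more keypoints and some smoothing over time, but ranking students by distance from the group captures the "who needs attention" idea.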

Enterprises are also turning to AI motion and space tracking from other domains to ensure COVID-19 safety measures. For example, Pro-Vigil, a crime deterrence app, recently pivoted to measuring compliance with COVID-19 prevention guidelines around social distancing, occupancy limits and face-mask usage. Not only can this AI suite monitor when individuals violate the six-foot rule, but it uses AI to create a daily scorecard showing users exactly how their organization is doing in following social distancing and other safety guidelines, said Satish Raj, CTO at Pro-Vigil.
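Pro-Vigil's suite is proprietary, but the core six-foot check reduces to a pairwise distance test. A minimal sketch, assuming people have already been detected and projected onto a calibrated ground plane measured in feet:

```python
import itertools
import math

def distancing_violations(positions, min_distance=6.0):
    """Flag pairs of people standing closer than `min_distance` feet.

    `positions` maps a person id to (x, y) floor coordinates in feet,
    e.g. obtained by projecting detected bounding boxes onto a
    calibrated ground plane.
    """
    return [
        (a, b)
        for (a, pa), (b, pb) in itertools.combinations(positions.items(), 2)
        if math.dist(pa, pb) < min_distance
    ]

people = {"p1": (0.0, 0.0), "p2": (4.0, 3.0), "p3": (20.0, 0.0)}
print(distancing_violations(people))  # [('p1', 'p2')] — they are 5 ft apart
```

Aggregating the violation count per day over the total number of observed pairs would yield the kind of compliance scorecard Raj describes.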

Down the road, new AI techniques could start to track behavior using other types of sensors, said James Kobielus, principal analyst at Franconia Research. For example, MIT researchers working on the RF-Diary project built algorithms to observe people through walls using reflected radio waves. They trained AI algorithms to interpret the radio reflections, combined with camera data, to recognize 30 different actions, including sleeping, reading, cooking and watching TV, with 90% accuracy.

Challenges to motion analytics

Developers face a range of challenges in building robust motion intelligence applications. One of the biggest is accuracy.

"Even the top companies or research labs in the world still cannot achieve good enough accuracy for most applications," Nakada said. This difficulty comes from the variability of real-world conditions, including lighting and occlusions from surrounding objects. Developers also struggle with manually labeling motion data, which can take a long time even for short videos.

Another challenge is psychological. "We have learned that end users of such analytical systems are often resistant to change," Rose said, referring to players interacting with Catapult Sports' coaching app.

Latency is an additional factor, as player and play tagging must be done as soon as possible to provide users with insights.

Cost is another consideration. Developers are looking for ways to use the sensors built into smartphones, such as cameras and microphones, rather than requiring specialized devices.

"Clearly, the cost and complexity go up with specialized devices, and the addressable market for the devices and any add-on third-party apps and cloud services is correspondingly less," Kobielus said.

A final issue is running afoul of privacy regulations in the various jurisdictions where these motion analytics apps are used. Clearly, these approaches are well-suited to surveillance purposes, and the surreptitious nature of many types of motion sensing technology will raise hackles among potentially targeted populations, Kobielus said.

Libraries and tools

There are quite a few libraries to help developers craft motion intelligence applications. Nakada said OpenPose is one of the most popular and easy to use to get started. He also recommended the Apple Vision framework, which enables mobile developers to add computer vision and pose estimation to iOS apps. These technologies make it easy for developers to quickly prototype apps.
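Whichever library supplies the keypoints, the raw output is typically in pixel coordinates, so a common first step before comparing poses across frames or cameras is to normalize away camera position and distance. A generic sketch of that preprocessing step (not any particular library's API):

```python
def normalize_pose(keypoints):
    """Translate a pose so its centroid sits at the origin and scale it
    to unit size, making poses comparable regardless of where the
    subject stands or how far away the camera is.

    `keypoints` is a list of (x, y) pixel coordinates, e.g. the joint
    positions returned by a pose estimator such as OpenPose.
    """
    n = len(keypoints)
    cx = sum(x for x, _ in keypoints) / n
    cy = sum(y for _, y in keypoints) / n
    centered = [(x - cx, y - cy) for x, y in keypoints]
    scale = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

# The same pose captured close up and far away normalizes identically.
near = [(100, 100), (200, 100), (150, 280)]
far = [(10, 10), (20, 10), (15, 28)]
assert normalize_pose(near) == normalize_pose(far)
```

Production systems typically go further, normalizing by a body-relative length such as torso height and accounting for rotation, but centering and scaling is the usual starting point.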

But there are challenges in improving the accuracy for most real-world use cases. "Open-source solutions are a good starting point, but building a world-class, production-ready system requires carefully curated datasets and custom models," Rose said.

Better fusion required

The future of motion analytics will require teams to find ways to fuse data from across different kinds of sensors to improve accuracy and reduce cost.

"The development and widespread adoption of devices, such as lidar and stereo cameras, will allow us to use them in conjunction with sophisticated algorithms and [to] enrich the data available to machine learning algorithms," Rose said.

But Nakada believes AI researchers may have to abandon mainstream approaches, such as deep learning, that require large amounts of data to train models.

"Humans do not analyze motions by sampling lots of data. Instead, we can recognize any walking as walking even when there are differences between different people, even with occlusions," Nakada said. This is because humans understand the scene semantically rather than just matching input and output data blindly, which is what current AI does.

In the meantime, he recommended developers focus on applications that don't require perfect accuracy.

"There are lots of places where we can apply such an AI without extremely high accuracy, and those are not well-executed yet," Nakada said. "It is important for us to be aware of the limitations and still think out of the box to find the best use cases, which I believe is the best way to roll out AI motion analytics soon."

The original article can be found here.