Computers can now complete tasks that wouldn’t have been possible without machine learning: things like identifying faces or analyzing medical tests. But when pitted against tasks involving video of real-world events, machine learning models tend to fall short in efficiency or accuracy, and they are usually too large to run on mobile devices. A team at the MIT-IBM Watson AI Lab thinks a new method can make video recognition on mobile a reality.
According to a recent Engadget report, the Lab’s new method reduces the typically large size of video-recognition models, speeds up training, and could improve performance on mobile devices.
How does it work?
According to Engadget, the secret lies in how these video recognition models handle time.
“Current models encode the passage of time in a sequence of images, which creates bigger, computationally-intensive models. The MIT-IBM researchers designed a temporal shift module, which gives the model a sense of time passing without explicitly representing it,” — Engadget
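The published temporal shift technique works by sliding a fraction of each frame’s feature channels backward and forward along the time axis, so neighboring frames exchange information without any explicit temporal layers. The sketch below is a minimal, hypothetical NumPy illustration of that idea (the function name, `fold_div` parameter, and tensor layout are assumptions for this example, not the researchers’ actual code):

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels along the time axis.

    x: array of shape (T, C, H, W) -- T frames, C channels per frame.
    1/fold_div of the channels shift backward in time, another
    1/fold_div shift forward; the rest stay in place.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]               # frame t borrows from frame t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # frame t borrows from frame t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]          # remaining channels unchanged
    return out

# Toy input: 4 frames, 8 channels, 2x2 spatial resolution
x = np.arange(4 * 8 * 2 * 2, dtype=np.float32).reshape(4, 8, 2, 2)
y = temporal_shift(x)
```

Because the shift is just a memory copy, it adds essentially no computation, which is why a model built this way can stay small enough for mobile hardware.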
In testing, the researchers’ deep learning model trained about three times faster than comparable video-recognition models. That efficiency could make it practical to run video-recognition machine learning on mobile devices, the report says.
Another benefit of the findings: the efficiency gains could reduce AI’s overall carbon footprint. The method could also help social media platforms detect violent footage or terrorist uploads, let medical organizations run AI apps locally, and more, says Engadget.
The research will be presented in a paper at the International Conference on Computer Vision (ICCV).