The limitations of building your own model

While ML is becoming increasingly popular, it is not yet feasible to train ML models on mobile platforms at a scale that reaches the masses. When you build your own model for a mobile app, there are some limitations. While it is possible to make predictions on a local device without a cloud service, it is not yet practical to build an evolving model that learns from your current actions and accumulates training data on the device itself. Because of the constraints on the memory and processing power of mobile devices, right now we can only run pre-built models and get inferences out of them. Once mobile devices have more capable processors, we will be able to train and improve models on the device itself.

There are many use cases that fit this pattern. Apple's Face ID is one such example: it runs a model on the local device, using the CPU or GPU for its computations. As device capabilities increase in the future, it will become possible to build a completely new model on the device itself.

Accuracy is another reason why people refrain from developing models on their mobile devices. Since we are currently unable to run heavy computations on a mobile device, accuracy suffers compared to a cloud-based service; the reason for this is, again, the limitations on both memory and computational capability. Instead, you can run the pre-trained models that are available for mobile devices through the TensorFlow Lite and Core ML libraries.

The TensorFlow Lite models can be found at https://www.tensorflow.org/lite/models, and a curated list of Core ML models can be found at https://github.com/likedan/Awesome-CoreML-Models.
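
To make this concrete, here is a minimal sketch of running inference with one of these pre-trained TensorFlow Lite models, using the TensorFlow Lite interpreter from Python (the same .tflite file can be bundled into an Android or iOS app). It assumes you have downloaded a quantized MobileNet image classifier from the TensorFlow Lite models page and saved it as mobilenet_v1_1.0_224_quant.tflite; random data stands in for a real preprocessed image:

import numpy as np
import tensorflow as tf

# Load the pre-built model. The interpreter only runs inference;
# there is no on-device training step.
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Random uint8 values stand in for a real preprocessed 224x224 image.
input_shape = input_details[0]['shape']
input_data = np.random.randint(0, 256, size=input_shape, dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()  # a single forward pass

output_data = interpreter.get_tensor(output_details[0]['index'])
print("Predicted class index:", output_data.argmax())

Note that the interpreter exposes no API for updating the model's weights, which is exactly the inference-only constraint described earlier in this section.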