Traditional rule-based AI systems depend on explicitly defined rules, so the business value they deliver is limited by how well those rules are defined. AI systems that use machine learning work entirely differently: once an ML model is trained, it can continue to learn from new data inputs with minimal to no human intervention.
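The contrast can be sketched in a few lines of Python. The fraud-detection scenario, thresholds, and data below are hypothetical, chosen only to show the difference between a hand-written rule and a parameter learned from data:

```python
# Illustrative sketch: a hard-coded rule vs. a threshold "learned" from data.
# The fraud-detection scenario and the numbers are hypothetical.

def rule_based_flag(amount):
    """Rule-based system: the threshold is fixed by a human expert."""
    return amount > 1000  # explicit, hand-written rule

def learn_threshold(labeled_amounts):
    """Tiny 'learner': pick the threshold that best separates the labels.

    labeled_amounts is a list of (amount, is_fraud) pairs.
    """
    candidates = sorted(a for a, _ in labeled_amounts)
    best_t, best_correct = 0, -1
    for t in candidates:
        correct = sum((a > t) == is_fraud for a, is_fraud in labeled_amounts)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# As new labeled data arrives, the learned threshold adapts automatically,
# while the rule-based threshold stays at 1000 until someone edits the code.
data = [(200, False), (800, False), (1200, False), (2500, True), (3000, True)]
threshold = learn_threshold(data)
```

The point is not the toy algorithm but the workflow: the rule-based version improves only when a human rewrites it, while the learned version improves whenever it is retrained on fresh data.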
IDC forecasts that spending on AI and ML will grow from $12 billion in 2017 to $57.6 billion by 2021. In 2018 alone, $9.3 billion in VC funding flowed into AI-related ventures, according to a report from PwC and CB Insights. Unicorns such as PayTm, Swiggy, and Oyo have also been actively investing in AI, and each has acquired at least one AI company.
Many Indian startups have been actively working with ML. For example, Strand Life Sciences, a Bangalore-based startup, uses machine learning to deliver precision medicine, which plays a vital role in cancer treatment. Another startup, Ntalents.ai, uses machine learning to simplify candidate screening, disrupting traditional practices in talent acquisition. Liv.ai, an AI startup recently acquired by Flipkart, has built an AI system that converts speech to text. Multiple AI startups are also applying machine learning in fashion retail, automating image recognition and image regeneration to improve the consumer experience.
Data is at the heart of machine learning systems. As more data and inputs become available, ML systems become more reliable and dependable. The trained models continuously learn and self-improve for better decision making. For example, if an ML model is deployed on an ecommerce platform, the model tries to uncover behavioural patterns as users browse, then suggests products that are of interest to particular users, improving the shopping experience.
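A minimal sketch of the behaviour-based suggestion idea is item co-occurrence: products that are often browsed in the same session are suggested together. The session data and product names below are hypothetical, and production recommenders use far richer models (collaborative filtering, embeddings), but the shape of the idea is the same:

```python
from collections import Counter

def build_cooccurrence(sessions):
    """Count how often each pair of products appears in the same session."""
    co = Counter()
    for session in sessions:
        items = set(session)
        for a in items:
            for b in items:
                if a != b:
                    co[(a, b)] += 1
    return co

def suggest(co, product, k=2):
    """Suggest the k products most often browsed alongside `product`."""
    scored = [(other, n) for (p, other), n in co.items() if p == product]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:k]]

# Hypothetical browsing sessions; more sessions mean better suggestions,
# which is exactly the "more data, more reliable" property described above.
sessions = [
    ["phone", "case", "charger"],
    ["phone", "case"],
    ["laptop", "mouse"],
    ["phone", "charger"],
]
co = build_cooccurrence(sessions)
```

As new sessions are appended and the counts are rebuilt, the suggestions shift automatically with user behaviour, with no rules to rewrite.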
Working with ML is easy with open-source frameworks such as Auto-Keras, which specifically helps with automated ML and deep learning. Startups can start with already available datasets and open-source ML models and algorithms. In the process, they build the machine learning expertise needed for production-grade machine learning systems.
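As a hedged illustration of this workflow, the sketch below uses scikit-learn (another open-source library, not the Auto-Keras framework named above) with its bundled digits dataset and an off-the-shelf classifier; Auto-Keras automates the model-selection step that is done by hand here:

```python
# Starting out with an open dataset and an off-the-shelf open-source model.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Bundled, freely available dataset: 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A standard open-source classifier; no custom modelling work required.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Swapping in a different dataset or model is a one-line change, which is what makes this a low-cost way for a startup to build ML experience before investing in bespoke systems.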
To train ML models, massively parallel processing is a necessity. Even generating decent machine learning models on traditional CPU cores in general-purpose servers can take months. GPU-based deployments, however, can speed up machine learning workloads by an order of magnitude: hours and days instead of weeks and months. Recent innovations have produced GPUs with hundreds to thousands of cores, each capable of handling simple logic operations rapidly through massively parallel processing, reducing both training time and total cost of ownership.
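The kind of parallelism GPUs exploit is data parallelism: the same simple operation applied independently to every element of a large array, so thousands of cores can each work on a slice at once. As a CPU-side analogy (an illustration, not a GPU benchmark), compare a one-element-at-a-time loop with numpy's vectorized form of the same computation:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_000)

# Scalar version: one element at a time, the way a single core would work.
loop_result = np.array([3.0 * v * v + 2.0 * v + 1.0 for v in x])

# Vectorized version: one expression over the whole array at once.
# GPUs take this "same operation over many data points" style further,
# spreading the elements across thousands of cores in parallel.
vector_result = 3.0 * x**2 + 2.0 * x + 1.0
```

Both forms compute identical results; the difference is purely in how the work is scheduled, which is why ML workloads, dominated by exactly these element-wise and matrix operations, map so well onto GPUs.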
Cloud-based GPUs are a great option for running machine learning workloads, and their pay-as-you-go pricing is very attractive.
In Conclusion
The speed necessary for machine learning systems can be achieved with modern GPUs purpose-built for AI/ML workloads, which offer a compelling alternative to traditional general-purpose processors.