Sun Tzu is one of the most influential figures in Eastern philosophy, and his book ‘The Art of War’ remains a phenomenon in military strategy. Although it appeared almost 2,500 years ago and was written for very different wartime situations, its lessons still map readily onto real-life scenarios, competition, and much more. Many top strategists, businesses, and corporations weave its principles into their planning, execution, and implementation to stay ahead of the competition.
The world has since advanced to include many new technologies, with machine learning (ML) and artificial intelligence (AI) leading the pack, and AI-ML models have become the standard for systems worldwide that must analyze large amounts of data. Selecting the right CPU or GPU for inference is a crucial decision that defines a model’s ability to deliver results, and professionals and IT experts often argue over the effective use of the CPU (central processing unit) versus the GPU (graphics processing unit) for inference.
Here, Sun Tzu’s timeless advice can act as a guiding light for choosing the right path: CPU or GPU for inference.
Preparation is the key to victory, and adaptability is the first step toward it.
CPUs have long been the fundamental building block of computer technology. But the growing need for faster computation and large-scale data learning has opened the way for GPUs as the optimized solution for the future.
CPUs are ideal for serial, or sequential, processing. Being fast, versatile, and interactive, they can run through a series of tasks such as fetching data from the hard drive in response to a user’s keystrokes. A CPU has only a few cores backed by a large cache, which suits these limited, sequential workloads.
GPUs, by contrast, are built for parallel processing, offering a realistic path to parallel computing that can transform a simple personal computer into a supercomputer. They can handle thousands of operations simultaneously, matching the huge demands of AI inference workloads.
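The contrast can be sketched with a toy analogy in Python: a CPU-style loop handles items one after another, while a GPU-style approach maps the same work across many workers at once. The thread pool here is only a stand-in for the thousands of GPU cores, and `transform` is a made-up placeholder for per-element work:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # Stand-in for per-element work (e.g., scaling one pixel).
    return x * 2

data = list(range(8))

# Serial, CPU-style: one element at a time.
serial = [transform(x) for x in data]

# Parallel, GPU-style: all elements dispatched at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(transform, data))

assert serial == parallel == [0, 2, 4, 6, 8, 10, 12, 14]
```

Both paths compute the same answer; the difference is that the parallel version keeps many workers busy at once, which is exactly what inference workloads with millions of independent elements reward.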
The ability to defend and control must be the prime focus.
Not only can GPUs handle the large data volumes involved in inference, they can do so at high speed. In AI image recognition, for example, a system may need to run millions of threads at once, whether steering a self-driving car or delivering smooth gameplay. Modern systems must see, understand, visualize, and then predict, and for that level of inference throughput GPUs hold a clear advantage over CPUs.
Only the interconnection of knowledge can bring perfection.
It’s not about one or two powerful features; it’s all about delivering collectively. Several technologies today need to work together to produce more predictive, forward-looking results via inference, and these AI models need highly capable hardware to match. Here is a comparison, prompted by an inference experiment Intel published, between a two-socket Intel Xeon 9282, an NVIDIA V100, and an NVIDIA T4.
| Capability | Two-Socket Intel Xeon 9282 | NVIDIA V100 (Volta) | NVIDIA T4 (Turing) |
|---|---|---|---|
| ResNet-50 Inference (images/sec) | 7,878 | 7,844 | 4,944 |
| # of Processors | 2 | 1 | 1 |
| Total Processor TDP | 800 W | 350 W | 70 W |
| Energy Efficiency (using TDP) | 10 img/sec/W | 22 img/sec/W | 71 img/sec/W |
| Performance per Processor (images/sec) | 3,939 | 7,844 | 4,944 |
| GPU Performance Advantage | 1.0 (baseline) | 2.0x | 1.3x |
| GPU Energy-Efficiency Advantage | 1.0 (baseline) | 2.3x | 7.2x |
GPUs prove their worth over CPUs with higher per-processor throughput, up to roughly 7x better energy efficiency, and significant cost savings. (source)
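The efficiency figures above follow directly from the raw throughput and TDP numbers. A quick sanity check in Python, using only values taken from the table:

```python
# Throughput (images/sec) and total TDP (W) from the ResNet-50 table.
systems = {
    "Xeon 9282 (2 sockets)": (7878, 800),
    "NVIDIA V100":           (7844, 350),
    "NVIDIA T4":             (4944, 70),
}

baseline_ips, baseline_w = systems["Xeon 9282 (2 sockets)"]
baseline_eff = baseline_ips / baseline_w

for name, (ips, watts) in systems.items():
    efficiency = ips / watts          # images/sec per watt
    advantage = efficiency / baseline_eff
    print(f"{name}: {efficiency:.0f} img/sec/W, "
          f"{advantage:.1f}x vs CPU baseline")
```

Running this reproduces the table’s 10 / 22 / 71 img/sec/W efficiency column and the 2.3x and 7.2x energy-efficiency advantages.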
Self-awareness is the first step toward choosing the right future. If you have no opportunities, make them!
Today, AI models can recognize patterns and offer intelligent solutions in real-time applications such as predictive analysis, fraud detection, and natural language processing. Although GPUs have improved significantly in the last few years, CPUs remain the traditional answer for AI inference.
Here is a quick glance at a table that shows the superior capability of GPUs over CPUs.
| Capability | Dual Intel Xeon Gold 6240 | NVIDIA T4 (Turing) |
|---|---|---|
| BERT Inference, Question-Answering (sentences/sec) | 2 | 118 |
| Processor TDP | 300 W (150 W x 2) | 70 W |
| Energy Efficiency (using TDP) | 0.007 sentences/sec/W | 1.7 sentences/sec/W |
| GPU Performance Advantage | 1.0 (baseline) | 59x |
| GPU Energy-Efficiency Advantage | 1.0 (baseline) | 240x |
Table: Inference on BERT. Workload: fine-tuned inference on BERT-Large. (Source)
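The 59x performance advantage in the BERT table follows directly from the throughput figures, assuming roughly 2 sentences/sec on the dual Xeon and 118 on the T4 (the values consistent with the table’s ratios). A quick check:

```python
cpu_sps, cpu_w = 2, 300    # dual Xeon Gold 6240: sentences/sec, TDP in W
gpu_sps, gpu_w = 118, 70   # NVIDIA T4

perf_advantage = gpu_sps / cpu_sps                       # 59x
eff_advantage = (gpu_sps / gpu_w) / (cpu_sps / cpu_w)    # ~253x raw

print(perf_advantage)
# The source's ~240x energy figure comes from the rounded
# per-watt values (1.7 / 0.007), so the raw ratio differs slightly.
```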
Great victories don’t require meaningless battles. Divide and conquer!
Sun Tzu’s strategy of ‘divide and conquer’ has long been famous in military affairs, politics, and everyday life. It also applies to GPU computing, which divides a computing problem into small independent tasks and solves them in parallel to improve computing speed.
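A minimal sketch of that divide-and-conquer pattern in Python, where the chunk count and worker count are illustrative rather than tuned: split the problem, solve the independent pieces in parallel, then combine the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def split(seq, n_chunks):
    """Divide a sequence into roughly equal independent chunks."""
    size = max(1, len(seq) // n_chunks)
    return [seq[i:i + size] for i in range(0, len(seq), size)]

data = list(range(1_000))
chunks = split(data, 4)

# Conquer: solve each small task in parallel, then combine.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

total = sum(partial_sums)
assert total == sum(data)  # same answer as the serial computation
```

The same split/solve/combine shape underlies GPU kernels: each core gets an independent slice of the data, and the results are merged at the end.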
Understand your enemy to defeat them. If you can win by deception, then so be it!
Sun Tzu always emphasized knowing your strengths, weaknesses, threats, and opportunities before attacking the enemy, an idea that survives as a widespread management tool for understanding a team’s capabilities in a dynamic market. The book’s ‘nine variations’ advise flexibility as circumstances change. On a similar front, although GPUs hold a clear advantage over CPUs in inference speed, if you set raw speed aside, CPUs can still deliver strong results with improved productivity. (source)
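As a rough illustration of that trade-off, one could encode such a rule of thumb in code. The function name and thresholds below are hypothetical, offered only as a sketch and not as vendor guidance:

```python
def pick_inference_device(batch_size: int, latency_critical: bool) -> str:
    """Hypothetical heuristic: GPUs amortize their cost on large
    parallel batches; CPUs often win on small, latency-sensitive
    single requests. Thresholds are illustrative only."""
    if batch_size >= 32:
        return "gpu"   # throughput-bound: massive parallelism pays off
    if latency_critical and batch_size <= 4:
        return "cpu"   # avoid host-to-device transfer overhead
    return "cpu"       # default to the simpler, cheaper option

print(pick_inference_device(64, latency_critical=False))  # gpu
print(pick_inference_device(1, latency_critical=True))    # cpu
```

In practice the decision also depends on model size, memory bandwidth, and budget, so any real deployment would benchmark both devices rather than rely on fixed cutoffs.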
Conclusion
Sun Tzu’s tips have long inspired strategic planning and implementation, and leaders, speakers, and strategists still use them to motivate the people around them. In this modern age of technology, those same tips can pave the way to the ideal choice between a CPU and a GPU for inference. With continuous research and development, they can help us build a better tomorrow.
Reference:
https://blogs.nvidia.com/blog/2019/05/21/intel-inference-nvidia-gpus/
https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/
https://medium.com/@dhartidhami/gpu-vs-cpu-for-ml-model-inference-438e43d53654
https://www.diva-portal.org/smash/get/diva2:1354858/FULLTEXT01.pdf