AI, big data, finance, image analysis, simulation: compute-intensive applications are migrating from traditional CPUs to high-speed, massively parallel processors, better known as GPUs. Cloud-based virtual machines with tens of thousands of GPU cores are available from Amazon and Microsoft for modest hourly rates, and even mid-level laptops now ship with hundreds of GPU cores.
For the right problem, and with the right programming, a modern GPU can deliver solutions 10x to 100x faster than traditional multi-core CPUs. Put another way, you can tackle problems that are orders of magnitude larger than was previously feasible. GPUs also consume roughly one-tenth the power for an equivalent amount of computing, and power is a significant cost driver in large data centers.
GPU programming is very different from what you are used to. Designing algorithms that run efficiently on single-instruction, multiple-data (SIMD) streaming processors, the heart of modern GPUs, requires a whole different mindset. This session will help you look at problems from the viewpoint of a GPU. You will learn what GPUs are (very) good at, and what they are (very) bad at. You will also learn how to seamlessly integrate GPUs as co-processors within .NET and C++ programs.
The good news is that the software tools have gotten much better. It is now possible to write GPU code in C# or F#, in addition to the traditional C++ and C. Examples in both C# and C++ will be covered.