Microsoft, GPU Computing and C++ AMP
C++ AMP will give C++ programmers a way, built into the language, to accelerate their programs on the GPU alongside the CPU. Back in Spring 2007, CUDA C was the only language that supported NVIDIA GPUs; today, programmers have a much wider selection of languages and APIs for GPU computing – CUDA C, CUDA C++, CUDA Fortran, OpenCL, DirectCompute and, in the future, Microsoft C++ AMP – along with Java and Python wrappers and .NET integration that sit on top of CUDA C or CUDA C++.
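To give a flavor of what this means in practice, here is a minimal sketch of a vector addition using the C++ AMP design Microsoft has described (concurrency::array_view, parallel_for_each and the restrict(amp) lambda); the exact API may still change before release, and the function name here is just a placeholder:

#include <amp.h>
using namespace concurrency;

// Adds two float arrays on an accelerator (GPU, or a CPU fallback) with C++ AMP.
void add_arrays(int n, const float* a, const float* b, float* c)
{
    array_view<const float, 1> av(n, a);   // wrap existing host data
    array_view<const float, 1> bv(n, b);
    array_view<float, 1> cv(n, c);
    cv.discard_data();                     // no need to copy c to the device
    parallel_for_each(cv.extent, [=](index<1> i) restrict(amp)
    {
        cv[i] = av[i] + bv[i];             // one GPU thread per element
    });
    cv.synchronize();                      // copy the result back into c
}

The point of the design is that everything above is ordinary C++: no separate kernel language, no explicit device memory management, just a lambda marked restrict(amp).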
So, if you are new to GPU computing, you can apply your existing C++ knowledge in Microsoft Visual Studio, pick up CUDA C++, and start accelerating your applications on the GPU alongside the CPU. CUDA C++ comes with a rich ecosystem of profilers, debuggers, and libraries such as cuFFT, cuBLAS, LAPACK, cuSPARSE, and cuRAND. NVIDIA’s Parallel Nsight™ for Visual Studio 2010 gives these Windows developers a familiar development environment combined with excellent GPU profiling and debugging tools.
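For comparison, the same vector addition in CUDA C++ might look like the sketch below – a short __global__ kernel plus the usual cudaMalloc/cudaMemcpy housekeeping, built with nvcc (for example from a Parallel Nsight project in Visual Studio). The kernel and variable names are just placeholders for illustration:

// vecadd.cu
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // also waits for the kernel

    printf("c[0] = %f\n", hc[0]);  // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}

The explicit memory copies and launch configuration are where the library ecosystem helps: for common operations like FFTs or BLAS routines, cuFFT and cuBLAS let you skip writing kernels like this one entirely.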
With Microsoft taking this step, we can be confident that GPU computing has truly matured and will be adopted by many more programmers for their applications. Already, Chinese scientists are running the world’s fastest simulations on Tianhe-1A, and the Portland Group (PGI) has announced new CUDA compilers that target x86 CPUs.
PGI’s release of CUDA x86 compilers lets developers protect their investment in parallelizing their applications by running the same code on CPUs and GPUs, making NVIDIA CUDA GPUs the only platform that supports all of these GPU computing programming models, APIs, and languages.