Introduction to GPU Computing with MATLAB
Speed up your MATLAB® applications using NVIDIA® GPUs without needing any CUDA® programming experience.
Parallel Computing Toolbox™ provides GPU support for more than 700 functions. If you supply the inputs as GPU arrays, any GPU-enabled function runs automatically on the GPU, making it easy to convert your applications and evaluate their GPU computing performance.
In this video, watch a brief overview, including code examples and benchmarks. In addition, discover options for getting access to a GPU if you do not have one in your desktop computing environment. Also, learn about deploying GPU-enabled applications directly as CUDA code generated by GPU Coder™.
GPU computing is a widely adopted technology that uses the power of GPUs to accelerate computationally intensive workflows. Parallel Computing Toolbox has provided GPU computing support for MATLAB since 2010. Although GPUs were originally designed for graphics rendering, they are now commonly used to accelerate applications in fields such as scientific computing, engineering, artificial intelligence, and financial analysis.
Using Parallel Computing Toolbox, you can leverage NVIDIA GPUs to accelerate your application directly from MATLAB. MATLAB provides a direct interface for accelerating computationally intensive workflows on GPUs for over 500 functions. Using these supported functions, you can execute your code on a GPU without needing any GPU programming experience.
For computationally intensive problems, significant speedups are possible with only a few changes to your existing code. With the GPU support in Parallel Computing Toolbox, it is easy to determine whether a GPU can accelerate your application. If your code contains GPU-supported functions, converting the inputs to GPU arrays automatically executes those functions on the GPU.
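As a minimal sketch of this workflow, the example below converts an ordinary array to a GPU array, runs a GPU-enabled function on it, and gathers the result back to host memory (the matrix size here is arbitrary):

```matlab
% Minimal sketch: run a GPU-enabled function by converting the input to a gpuArray.
A = rand(4096);          % ordinary array in host (CPU) memory
G = gpuArray(A);         % copy the data to the GPU
F = fft(G);              % fft is GPU-enabled, so this call executes on the GPU
result = gather(F);      % bring the result back to host memory
```

Keeping intermediate results as GPU arrays (delaying `gather` until the end) avoids repeated host-device transfers.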
MATLAB automatically handles GPU resource allocation, so you can focus on your application without having to learn any low-level GPU computing tools. MATLAB takes advantage of the hundreds of specialized cores in a GPU to accelerate applications that can be heavily parallelized. You achieve the most effective results with a GPU when executing workflows that process sizable data and contain heavily vectorized operations.
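A simple way to see what MATLAB has allocated for you is to query the selected GPU device, for example:

```matlab
% Sketch: inspect the currently selected GPU from MATLAB.
d = gpuDevice;            % query (and select) the default GPU device
disp(d.Name)              % device name reported by the driver
disp(d.AvailableMemory)   % free device memory, in bytes
```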
You can use GPUBench, from MathWorks File Exchange, to compare the performance of supported GPUs using standard numerical benchmarks in MATLAB. Many MATLAB functions, such as the trainNetwork function, use any compatible GPU by default. To train your model on multiple GPUs, you can simply change a training option directly in MATLAB.
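The training-option change mentioned above is a one-line switch. As a hedged sketch (the network layers and training data here are placeholders):

```matlab
% Sketch: request multi-GPU training via a single training option.
% XTrain, YTrain, and layers are hypothetical placeholders for your data and network.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu');   % use all available local GPUs
net = trainNetwork(XTrain, YTrain, layers, options);
```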
If you don't have access to a GPU on your laptop or workstation, you can leverage a MATLAB reference architecture to use one or more GPUs within a MATLAB desktop in the cloud. You can also leverage the MATLAB Deep Learning Container from NVIDIA GPU Cloud, which supports NVIDIA DGX and other platforms that support Docker.
If you have many GPU applications to run, or need to scale beyond a single machine with GPUs, you can use MATLAB Parallel Server to extend your workflow to a cluster with GPUs. If you don't already have access to a GPU cluster, you can leverage MathWorks Cloud Center or the MATLAB Parallel Server Reference Architecture.
Parallel Computing Toolbox provides additional features for working directly with CUDA code. The mexcuda function compiles CUDA code into a MEX file that can be called directly in MATLAB as a function. Conversely, after writing your MATLAB code, you can generate and deploy ready-to-use CUDA code with GPU Coder.
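Calling mexcuda looks like an ordinary compilation step; the CUDA source file name below is hypothetical:

```matlab
% Sketch: compile a CUDA source file into a MEX function.
% myMexFunction.cu is a hypothetical CUDA source file containing a mexFunction entry point.
mexcuda('myMexFunction.cu');

% Once compiled, the MEX file is callable like any MATLAB function:
y = myMexFunction(x);
```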
The generated code is optimized to call standard CUDA libraries and can be integrated and deployed directly onto NVIDIA GPUs. To learn more about how to take full advantage of your GPU in MATLAB, explore the GPU computing solutions page. You can also explore the MathWorks documentation for a complete list of GPU-supported functions and more examples.
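As a hedged sketch of the GPU Coder workflow, code generation is driven by a configuration object; the function name and input size below are hypothetical placeholders:

```matlab
% Sketch: generate CUDA code from a MATLAB function with GPU Coder.
% myFunction is a hypothetical MATLAB function; the -args input size is illustrative.
cfg = coder.gpuConfig('lib');                       % generate a static library
codegen -config cfg myFunction -args {zeros(256, 256, 'single')}
```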
Related Products
Featured Product
Parallel Computing Toolbox