**- Undocumented Matlab - https://undocumentedmatlab.com -**

Convolution performance

Posted By Yair Altman On February 3, 2016

MathWorks’ latest MATLAB Digest ^{[1]} (January 2016) featured my book “Accelerating MATLAB Performance ^{[2]}“. I am deeply honored and appreciative of MathWorks for this.

I would like to dedicate today’s post to a not-well-known performance trick from my book that can significantly improve the speed of computing the convolution ^{[3]} of two data arrays. Matlab’s internal implementation of convolution (*conv*, *conv2*, *convn*) can be quite slow for large data arrays.

However, this can often be sped up significantly if we use the Convolution Theorem ^{[4]}: convolution in the time domain is equivalent to element-wise multiplication in the frequency domain, so the convolution can be computed with a pair of FFTs and one inverse FFT:

```
% Prepare the input vectors (1M elements each)
x = rand(1e6,1);
y = rand(1e6,1);
% Compute the convolution using the builtin conv()
tic, z1 = conv(x,y); toc
=> Elapsed time is 360.521187 seconds.
% Now compute the convolution using fft/ifft: 780x faster!
n = length(x) + length(y) - 1; % we need to zero-pad
tic, z2 = ifft(fft(x,n) .* fft(y,n)); toc
=> Elapsed time is 0.463169 seconds.
% Compare the relative accuracy (the results are nearly identical)
disp(max(abs(z1-z2)./abs(z1)))
=> 2.75200348450538e-10
```

This last result shows that the two computations are nearly identical, up to a tiny numerical difference that is certainly acceptable in most cases when considering the enormous performance speedup (780x in this specific case). Bruno Luong’s implementation (*convnfft* ^{[5]}) is made even more efficient by using MEX in-place data multiplications, power-of-2 FFTs, and use of GPU/Jacket where available.
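The power-of-2 trick that *convnfft* uses can be illustrated with a short sketch. The following is an illustrative Python/NumPy translation (my own, not code from the post or from *convnfft*): it first reproduces the fft/ifft approach from the Matlab snippet above, then pads the FFT length up to the next power of 2, where FFT libraries are typically fastest, and truncates the result back to the linear-convolution length.

```python
import numpy as np

def fft_conv(x, y):
    """Full linear convolution via the Convolution Theorem, as in the
    Matlab snippet above: zero-pad both inputs to len(x)+len(y)-1."""
    n = len(x) + len(y) - 1
    return np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

def fft_conv_pow2(x, y):
    """Same idea, but rounds the FFT size up to the next power of 2."""
    n = len(x) + len(y) - 1
    n2 = 1 << (n - 1).bit_length()        # next power of 2 >= n
    z = np.fft.ifft(np.fft.fft(x, n2) * np.fft.fft(y, n2))
    return z.real[:n]                     # drop the extra padded tail

rng = np.random.default_rng(0)
x, y = rng.random(1000), rng.random(800)
z_direct = np.convolve(x, y)              # direct O(n^2) reference
assert np.allclose(z_direct, fft_conv(x, y))
assert np.allclose(z_direct, fft_conv_pow2(x, y))
```

Because the inputs are zero-padded to at least len(x)+len(y)-1 samples, the circular convolution computed by the FFT coincides with the linear one, so truncating the power-of-2 result back to n samples is safe.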

It should be noted that the builtin Matlab functions can still be faster for relatively small data arrays, or if your machine has a large number of CPU cores and free memory that Matlab’s builtin multi-threaded implementation can use effectively. As always with performance, benchmark both variants on your own data before deciding.
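In code, that crossover can be handled with a simple size-based dispatch. This is an illustrative Python/NumPy sketch (not from the post), and the threshold value is invented — the real crossover point must be measured on your own machine:

```python
import numpy as np

def conv_auto(x, y, work_threshold=1 << 16):
    """Pick direct vs FFT-based convolution by estimated work.

    `work_threshold` is a made-up placeholder: the true crossover
    depends on CPU, memory and library, so benchmark before trusting it.
    """
    if len(x) * len(y) <= work_threshold:   # small problem: direct wins
        return np.convolve(x, y)
    n = len(x) + len(y) - 1                 # large problem: use the FFT
    return np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

small = conv_auto(np.ones(10), np.ones(10))        # takes the direct branch
large = conv_auto(np.ones(1000), np.ones(1000))    # takes the FFT branch
assert np.allclose(small, np.convolve(np.ones(10), np.ones(10)))
assert np.allclose(large, np.convolve(np.ones(1000), np.ones(1000)))
```

Both branches return the same full-length convolution, so callers need not care which path was taken.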

If you have read my book, please be kind enough to post your feedback about it on Amazon (link ^{[6]}).

Categories: Low risk of breaking in future versions, Stock Matlab function, Undocumented feature


Article printed from Undocumented Matlab: **https://undocumentedmatlab.com**

URL to article: **https://undocumentedmatlab.com/articles/convolution-performance**

URLs in this post:

[1] MATLAB Digest: **http://mathworks.com/company/digest/current/**

[2] Accelerating MATLAB Performance: **http://undocumentedmatlab.com/books/matlab-performance**

[3] convolution: **https://en.wikipedia.org/wiki/Convolution**

[4] Convolution Theorem: **https://en.wikipedia.org/wiki/Convolution_theorem**

[5] proposed by Bruno Luong: **http://www.mathworks.com/matlabcentral/fileexchange/24504-fft-based-convolution**

[6] link: **http://amazon.com/Accelerating-MATLAB-Performance-speed-programs/product-reviews/1482211297/ref=cm_cr_dp_see_all_summary**

[7] Performance: scatter vs. line : **https://undocumentedmatlab.com/articles/performance-scatter-vs-line**

[8] Performance: accessing handle properties : **https://undocumentedmatlab.com/articles/performance-accessing-handle-properties**

[9] rmfield performance : **https://undocumentedmatlab.com/articles/rmfield-performance**

[10] Zero-testing performance : **https://undocumentedmatlab.com/articles/zero-testing-performance**

[11] Matrix processing performance : **https://undocumentedmatlab.com/articles/matrix-processing-performance**

[12] Preallocation performance : **https://undocumentedmatlab.com/articles/preallocation-performance**


Copyright © Yair Altman - Undocumented Matlab. All rights reserved.

6 Comments To "Convolution performance"

#1 Comment By Alex On February 28, 2016 @ 19:22

Could you please provide code for a 2D version? In my case the linked MEX is not working any faster than standard convolution.

#2 Comment By Yair Altman On February 28, 2016 @ 21:00

@Alex – you can take a look at the m-code within Bruno’s `convnfft` utility for this. The speedup depends on several factors, including the size of the data, the Matlab release, and your available memory. So it is quite possible that on your specific system with your specific data you do not see a significant speedup, but in many cases Bruno’s `convnfft` does improve the processing speed.

#3 Comment By Alex On September 30, 2017 @ 16:50

Hello,

I am having a problem trying to do FFT-based convolution in 2D.

`convnfft` is definitely the fastest one, but only `imfilter` produces a valid result. For `convnfft` and `convn` the result is wrong, as can be seen in the minimal working example below:

Best regards,

Alex
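Alex’s question above concerns the 2D case. For reference, a “full” 2D convolution via the Convolution Theorem follows the same zero-padding pattern as the 1D snippet in the article. The following is an illustrative NumPy sketch (my own, not Bruno’s `convnfft` code, and note that it zero-pads the borders rather than replicating them):

```python
import numpy as np

def fft_conv2(a, k):
    """Full 2D linear convolution of image `a` with kernel `k` via fft2."""
    # Zero-pad each dimension to the full linear-convolution size.
    s0 = a.shape[0] + k.shape[0] - 1
    s1 = a.shape[1] + k.shape[1] - 1
    Z = np.fft.fft2(a, (s0, s1)) * np.fft.fft2(k, (s0, s1))
    return np.fft.ifft2(Z).real

def direct_conv2(a, k):
    """Slow reference: accumulate shifted copies of the kernel."""
    s0 = a.shape[0] + k.shape[0] - 1
    s1 = a.shape[1] + k.shape[1] - 1
    out = np.zeros((s0, s1))
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i:i + k.shape[0], j:j + k.shape[1]] += a[i, j] * k
    return out

a = np.random.default_rng(0).random((8, 7))
k = np.random.default_rng(1).random((3, 4))
assert np.allclose(fft_conv2(a, k), direct_conv2(a, k))
```

As Yair’s reply below explains, border handling (e.g. `imfilter`’s `'replicate'` option) is what makes the different routines’ outputs look different, especially when the kernel is as large as the image.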

#4 Comment By Yair Altman On October 1, 2017 @ 17:42

@Alex – this is due to your use of the optional `'replicate'` option in your call to `imfilter`. You are not doing the same with `conv2fft` or `convn`, which causes the results to look different. Border-pixels replication is especially important in cases such as yours, where the kernel size is the same size as the input image.

If you remove the `'replicate'` option in your call to `imfilter`, you will see that the results look the same (to the naked eye at least…).

If you want to use `conv2fft` or `convn` rather than the slow `imfilter`, and yet you still want to see a nice-looking image, then you should either reduce the kernel size, or enlarge the input image (so that the original image is at its center) and take care of the boundary pixels. You can either do it the same way as the `'replicate'` option, or in a different way. For example, here is a simple implementation that at least in my eyes gives superior results even compared to `imfilter`:

#5 Comment By Jackie Shan On September 6, 2016 @ 01:27

When looking at the CPU utilization, I noticed that the ND convolution function (`convn`) does not use multiple cores when operating on greater-than-2D arrays.

I was wondering if there’s any reason for this?

#6 Comment By Yair Altman On September 6, 2016 @ 10:12

@Jackie – I believe this is due to a sub-optimal implementation. MathWorks has limited engineering resources and probably decided that 2D convolution is much more common than 3D. I assume that MathWorks focused its engineers on improving the performance of the 2D case and then moved on to more pressing matters, instead of also solving the harder and less-used 3D case. In a world with limited resources this is certainly understandable.