- Undocumented Matlab - https://undocumentedmatlab.com -

Matrix processing performance

Posted By Yair Altman On July 13, 2011 | 14 Comments

A few days ago, fellow Matlab blogger Roy Fahn, well-respected in the Israeli Matlab community, posted an interesting article [1] on his MATLAB with Fun [2] blog (note the word-play). Since his article is in Hebrew, and the automated Google Translation [3] is somewhat lacking, I thought I would share Roy’s post here (with his permission, of course) for the part of the Matlab community that is not fluent in Hebrew…

Roy’s translated post: “Anyone who adds, detracts (from execution time)”

In the story of Eve and the serpent, the first woman told the serpent about the prohibition of eating from the Tree of Knowledge, adding to that prohibition a ban on touching the tree (something that God had not commanded). The snake used this inaccuracy in her words, showing her that one can touch the tree without fear, and therefore argued that the prohibition to eat its fruit is similarly not true. As a result, Eve was tempted to eat the fruit, and the rest is known. Jewish sages said of the imaginary prohibition that Eve had added that this is an example where “Anyone who adds, [in effect] detracts”.
Recently I [Roy] came across an interesting phenomenon: in MATLAB, adding elements to a vector on which an operation is performed does not degrade the execution time – quite the reverse. Operating on more vector elements can actually reduce execution time!
Here’s an example. Try to rank the following tic-toc segments from fastest to slowest:

x = rand(1000,1000);

% Segment #1
y = ones(1000,1000);
tic
for i = 1:100
    y = x .* y;
end
toc

% Segment #2
y = ones(1000,1000);
tic
for i = 1:100
    y(:,1:999) = x(:,1:999) .* y(:,1:999);
end
toc

% Segment #3
y = ones(1000,1000);
tic
for i = 1:100
    y(1:999,:) = x(1:999,:) .* y(1:999,:);
end
toc

The first loop multiplies all the elements of the x and y matrices, and should therefore run longer than the other loops, which multiply matrices that are one row or one column smaller. However, in practice, the first loop was the fastest – just 0.25 seconds on my computer, whereas the second ran for 1.75 seconds, and the third – 6.65 seconds [YMMV].
Why is the first loop the fastest?
The subscripted indexing operation performed in each of the latter two loops is wasteful. In such cases I would therefore suggest that you run your operation on the full matrix, and then discard the unnecessary row or column.
And why does the second loop run faster than the third?
This is related to the fact that MATLAB stores matrices in column-major order, and therefore prefers operations on columns rather than rows. In the second loop, all the elements except those in the last column are multiplied; in the third loop, all the elements except those in the last row are multiplied.
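Roy’s advice can be sketched as follows (an illustrative sketch – the variable names follow the example above, and it assumes the last column is not needed at all): run the full-matrix operation inside the loop, and discard the extra column just once afterwards, rather than paying for subscripted indexing on every iteration:

```matlab
x = rand(1000,1000);
y = ones(1000,1000);
tic
for i = 1:100
    y = x .* y;      % full-matrix operation - no subscripted indexing
end
y(:,end) = [];       % discard the unneeded last column once, outside the loop
toc
```

The loop does slightly more arithmetic (the last column is computed too), but that extra work is far cheaper than the repeated indexing and copying in segments #2 and #3.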
In your work with MATLAB, have you encountered similar phenomena that are initially counter-intuitive, such as the example described above? If so, please post a comment below [4], or directly on Roy’s blog [5].
Is all of this undocumented? I really don’t know. But it is certainly unexpected and interesting…

Categories: Low risk of breaking in future versions, Stock Matlab function, Undocumented feature



14 Comments To "Matrix processing performance"

#1 Comment By Malcolm Lidierth On July 13, 2011 @ 10:47

See also
[12]

#2 Comment By Jonathan Birge On July 13, 2011 @ 11:53

I think this is really more an indication of how poorly MATLAB is written than anything else. It seems that if you use subscripting on the right-hand side, instead of optimizing it out by simply truncating the iteration internally, MATLAB actually creates an entirely new copy. In fact, MATLAB is so poorly optimized that if you replace your first segment with

y = x(:,1:1000) .* y(:,1:1000);

you will find that it runs just as slowly as the second segment.

Given that the MathWorks essentially has a monopoly on scientific numerical analysis software, I’d say it’s not very surprising that the core language implementation is of such low quality. I will grant you this, though: the fact that MATLAB, despite being claimed to have an optimizing JIT compiler, does something so stupid as to make an entirely new copy of an array when you only want to read a subset of it is definitely something the MathWorks doesn’t want to document…

#3 Comment By Jonathan Birge On July 14, 2011 @ 07:40

One thing to add: I originally thought maybe the problem was that the original example used in-place assignment, since y was on both sides. This isn’t the issue, and the results are the same (including a huge slowdown for replacing x with the “equivalent” x(:,1:1000)) if you use x for both variables on the right side.

#4 Comment By celine On July 14, 2011 @ 07:58

On the same subject, the function squeeze can actually be slower than reshape. Example below:

%%% evaluate the timing of 'squeeze'
% create a 3D array
apple = 10*rand([100,300,400],'single');
maxLoop = 100000;

% pick one line and one column
tic
for i=1:maxLoop  %this will take 3s
    core=squeeze(apple(3,25,:));
end
toc

tic
for i=1:maxLoop     %this will take 0.3s
    core2=apple(3,25,:);
    core2=reshape(core2,[1,400]);
end
toc

tic
for i=1:maxLoop       %this will take 0.3s
    core3=apple(3,25,:);
    core3=reshape(core3,[400,1]);
end
toc

#5 Comment By rami On July 18, 2011 @ 00:01

The reason the second loop is faster than the third is that MATLAB uses column-major order:

a = [1,2,3;4,5,6]

is stored as 1,4,2,5,3,6.

(Ref : [13])
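A quick way to see this layout is linear indexing, which walks the array in its in-memory order:

```matlab
% Linear indexing a(:) returns the elements in storage order,
% revealing that columns are contiguous in memory (column-major).
a = [1,2,3; 4,5,6];
disp(a(:).')   % prints 1 4 2 5 3 6
```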

As for the first one being faster than the second, I don’t know.

Cheers

#6 Comment By Ajay Iyer On July 19, 2011 @ 09:46

Interestingly, here is a segment of code (#4) that is as fast as #1.

% Segment #4
y = ones(1000,1000);
tic
x = x(:);
y = y(:);
for i = 1:100
    y = x .* y;
end
y = reshape(y,1000,1000);
x = reshape(x,1000,1000);
toc

#7 Comment By Thomas M. On July 19, 2011 @ 10:22

A little off-topic, but I’ve been developing code using Simulink for xPC Target, and the bottleneck in my algorithm is the large matrix multiplication A = M*N, where both M and N are 250×250. I am sampling at 0.001 s and trying to find ways to optimize the speed. Is it possible to make this calculation faster? Also, why would this calculation take longer with xPC 5.0 compared with xPC 4.3 loaded on the same system?

#8 Comment By Yair Altman On July 19, 2011 @ 11:50

@Thomas – You can always use a GPU or MEX to speed up the calculations, but try to think whether you really need to have all this data. Just reducing the size to (100,100) will improve performance by 2 orders of magnitude, not to mention reducing memory problems, thrashing (memory swaps) etc.

#9 Comment By Thomas M. On July 25, 2011 @ 09:27

The matrix has to be that size. How can the GPU be leveraged on xPC Target? Also, what do you mean by using MEX to speed up calculations?

#10 Pingback By Waiting for asynchronous events | Undocumented Matlab On July 18, 2012 @ 11:51

[…] Performance tuning can indeed be counter-intuitive sometimes, until you learn the underlying reasons when it becomes clear (I’ve shown several examples of this in the past) […]

#11 Comment By Peter Gardman On April 4, 2013 @ 10:13

Dear Yair,
I’m wondering about the fastest way of creating a structure. In my ignorance, I used to think that A.B = x would internally call A = struct('B',x), so writing A = struct('B',x) directly in an m-file would be much faster. To my surprise, A.B = x is much faster:

>> tic, A.B = 3; toc
Elapsed time is 0.000003 seconds.

>> tic, M = struct('B',3); toc
Elapsed time is 0.000297 seconds.

Do you have any clue why this is? Or, if you have already talked about this in a post, could you please point me to where? Thank you very much

#12 Comment By Yair Altman On April 4, 2013 @ 16:44

@Peter – I suspect that this is due to the fact that A.B is directly interpreted by the Matlab interpreter, whereas struct(…) goes through a library routine (in libmx.dll on Windows).

Note: when you place the profiled code in an m-file, rather than testing in the non-optimized Command Prompt, the speedup is only ~2-3x, not 100x (not that a 2x speedup should be disparaged…).

In a related matter, I compare alternatives for preallocating arrays of structs [14].

#13 Comment By Peter Gardman On April 5, 2013 @ 10:48

Thank you very much Yair

#14 Pingback By Python:Performance of row vs column operations in NumPy – IT Sprite On November 5, 2015 @ 05:09

[…] are a few articles that show that MATLAB prefers column operations than row operations, and that depending on you lay […]


Article printed from Undocumented Matlab: https://undocumentedmatlab.com

URL to article: https://undocumentedmatlab.com/articles/matrix-processing-performance

URLs in this post:

[1] interesting article: http://matlabisrael.blogspot.com/2011/07/blog-post.html

[2] MATLAB with Fun: http://matlabisrael.blogspot.com/

[3] automated Google Translation: http://translate.google.com/translate?hl=en&sl=iw&tl=en&u=http%3A%2F%2Fmatlabisrael.blogspot.com%2F2011%2F07%2Fblog-post.html

[4] below: http://undocumentedmatlab.com/blog/array-processing-performance/#respond

[5] directly on Roy’s blog: http://matlabisrael.blogspot.com/2011/07/blog-post.html#comment-form

[6] Undocumented view transformation matrix : https://undocumentedmatlab.com/articles/undocumented-view-transformation-matrix

[7] Allocation performance take 2 : https://undocumentedmatlab.com/articles/allocation-performance-take-2

[8] Performance: scatter vs. line : https://undocumentedmatlab.com/articles/performance-scatter-vs-line

[9] Convolution performance : https://undocumentedmatlab.com/articles/convolution-performance

[10] Preallocation performance : https://undocumentedmatlab.com/articles/preallocation-performance

[11] Array resizing performance : https://undocumentedmatlab.com/articles/array-resizing-performance

[12]: http://www.mathworks.com/company/newsletters/news_notes/june07/patterns.html

[13]: http://en.wikipedia.org/wiki/Row-major_order#Column-major_order

[14]: https://undocumentedmatlab.com/blog/preallocation-performance/#structs

Copyright © Yair Altman - Undocumented Matlab. All rights reserved.