A few days ago, fellow Matlab blogger Roy Fahn, well-respected in the Israeli Matlab community, posted an interesting article on his MATLAB with Fun blog (note the word-play). Since his article is in Hebrew, and the automated Google Translation is somewhat lacking, I thought of sharing Roy’s post here (with his permission of course), for the minority of the Matlab community which is not fluent in Hebrew…
Roy’s translated post: “Anyone who adds, detracts (from execution time)”
In the story of Eve and the serpent, the first woman told the serpent about the prohibition of eating from the Tree of Knowledge, adding to it a ban on touching the tree (something that God had not commanded). The serpent used this inaccuracy in her words, showing her that one can touch the tree without fear, and arguing that the prohibition on eating its fruit was therefore similarly untrue. As a result, Eve was tempted to eat the fruit, and the rest is known. Of the imaginary prohibition that Eve had added, the Jewish sages said that this is an example where "Anyone who adds, [in effect] detracts".
Recently I [Roy] came across an interesting phenomenon in MATLAB: adding elements to a vector on which an operation is performed does not degrade the execution time, but rather the reverse. Adding vector elements actually reduces execution time!
Here’s an example. Try to rank the following tic-toc segments from fastest to slowest:
x = rand(1000,1000);

% Segment #1
y = ones(1000,1000);
tic
for i = 1:100
    y = x .* y;
end
toc

% Segment #2
y = ones(1000,1000);
tic
for i = 1:100
    y(:,1:999) = x(:,1:999) .* y(:,1:999);
end
toc

% Segment #3
y = ones(1000,1000);
tic
for i = 1:100
    y(1:999,:) = x(1:999,:) .* y(1:999,:);
end
toc
The first loop multiplies all the elements of the x and y matrices, and should therefore run longer than the other loops, which multiply matrices that are one row or one column smaller. In practice, however, the first loop was the fastest: just 0.25 seconds on my computer, whereas the second ran for 1.75 seconds and the third for 6.65 seconds [YMMV].
Why is the first loop the fastest?
The subscripting (indexing) operation performed in each of the latter two loops is wasteful. In such cases I would suggest that you run your operation on the full matrix, and then get rid of the unnecessary row or column.
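Roy's suggestion can be sketched as follows. The loop runs at the speed of segment #1, and the single trim at the end produces the same 999-column result that segment #2 computes (timing numbers will of course vary by machine):

```matlab
x = rand(1000,1000);
y = ones(1000,1000);
tic
for i = 1:100
    y = x .* y;      % operate on the full matrices - the fast path
end
y = y(:,1:999);      % discard the unneeded column once, at the end
toc
```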
And why does the second loop run faster than the third?
This is related to the fact that MATLAB prefers operations on columns rather than rows. MATLAB stores matrices in column-major order, so the elements multiplied in the second loop (all columns except the last) form one contiguous block of memory, whereas the elements multiplied in the third loop (all rows except the last) are scattered, skipping one element at the end of every column.
In your work with MATLAB, have you encountered similar phenomena that are initially counter-intuitive, such as the example described above? If so, please post a comment below, or directly on Roy’s blog.
Is all of this undocumented? I really don’t know. But it is certainly unexpected and interesting…
I think this is really more an indication of how poorly MATLAB is written than anything else. It seems that if you use subscripts on the right-hand side, instead of optimizing them away by simply truncating the iteration internally, MATLAB actually creates an entirely new copy of the indexed data. In fact, MATLAB is so poorly optimized that if you replace your first segment with
y = x(:,1:1000) .* y(:,1:1000);
you will find that it runs just as slowly as the second segment.
Given that the MathWorks essentially has a monopoly on scientific numerical analysis software, I’d say it’s not very surprising that the core language implementation is of such low quality. I will grant you this, though: the fact that MATLAB, despite being claimed to have an optimizing JIT compiler, does something so stupid as to make an entirely new copy of an array when you only want to read a subset of it is definitely something the MathWorks doesn’t want to document…
One thing to add: I originally thought maybe the problem was that the original example used in-place assignment, since y was on both sides. This isn’t the issue, and the results are the same (including a huge slowdown for replacing x with the “equivalent” x(:,1:1000)) if you use x for both variables on the right side.
On the same subject, the function squeeze can actually be slower than reshape. Example below:
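The commenter's original example was not preserved in this copy; a minimal sketch of one way to compare the two, assuming an array with a singleton middle dimension, might look like this:

```matlab
% Compare removing a singleton dimension with squeeze vs. reshape
A = rand(1000,1,1000);
tic
for i = 1:100
    B = squeeze(A);               % general-purpose: inspects all dimensions
end
toc
tic
for i = 1:100
    C = reshape(A, 1000, 1000);   % explicit target size, minimal overhead
end
toc
isequal(B, C)                     % true - both yield the same 1000x1000 matrix
```

Since reshape (and squeeze, which calls reshape internally) only changes the size metadata without copying data, any speed difference comes from the extra dimension bookkeeping.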
The reason the second loop is faster than the third loop is that MATLAB uses column-major order: the matrix [1 2 3; 4 5 6] is stored as 1,4,2,5,3,6.
(Ref: http://en.wikipedia.org/wiki/Row-major_order#Column-major_order)
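This storage order is easy to verify in MATLAB itself by linearizing a small matrix:

```matlab
A = [1 2 3; 4 5 6];   % a 2x3 matrix
A(:)'                 % ans = [1 4 2 5 3 6] - columns are contiguous in memory
```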
As for the first one being faster than the second, I don't know.
Interestingly, here is a code segment (#4) that is as fast as #1.
A little off-topic, but I've been developing code using Simulink for xPC Target, and the bottleneck in my algorithm is the large matrix multiplication A = M*N, where both M and N are 250×250. I am sampling at 0.001 s and trying to find ways to optimize the speed. Is it possible to make this calculation faster? Also, why would this calculation take longer with xPC 5.0 than with xPC 4.3 loaded on the same system?