<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: Matrix processing performance	</title>
	<atom:link href="https://undocumentedmatlab.com/articles/matrix-processing-performance/feed" rel="self" type="application/rss+xml" />
	<link>https://undocumentedmatlab.com/articles/matrix-processing-performance?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=matrix-processing-performance</link>
	<description>Professional Matlab consulting, development and training</description>
	<lastBuildDate>Thu, 05 Nov 2015 12:09:02 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>
	<item>
		<title>
		By: Python:Performance of row vs column operations in NumPy &#8211; IT Sprite		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-360691</link>

		<dc:creator><![CDATA[Python:Performance of row vs column operations in NumPy &#8211; IT Sprite]]></dc:creator>
		<pubDate>Thu, 05 Nov 2015 12:09:02 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-360691</guid>

					<description><![CDATA[[...] are a few articles that show that MATLAB prefers column operations over row operations, and that depending on how you lay [...]]]></description>
			<content:encoded><![CDATA[<p>[&#8230;] are a few articles that show that MATLAB prefers column operations over row operations, and that depending on how you lay [&#8230;]</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Peter Gardman		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184884</link>

		<dc:creator><![CDATA[Peter Gardman]]></dc:creator>
		<pubDate>Fri, 05 Apr 2013 17:48:29 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-184884</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184514&quot;&gt;Yair Altman&lt;/a&gt;.

Thank you very much, Yair]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184514">Yair Altman</a>.</p>
<p>Thank you very much, Yair</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Yair Altman		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184514</link>

		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Thu, 04 Apr 2013 23:44:37 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-184514</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184418&quot;&gt;Peter Gardman&lt;/a&gt;.

@Peter - I suspect that this is because A.B is interpreted directly by the Matlab interpreter, whereas &lt;i&gt;&lt;b&gt;struct&lt;/b&gt;(...)&lt;/i&gt; goes through a library routine (in &lt;i&gt;libmx.dll&lt;/i&gt; on Windows).

Note: when you place the profiled code in an m-file, rather than testing in the non-optimized Command Prompt, the speedup is only ~2-3x, not 100x (not that a 2x speedup should be disparaged...).

In a related matter, I compare alternatives for preallocating arrays of structs &lt;a href=&quot;http://undocumentedmatlab.com/blog/preallocation-performance/#structs&quot; target=&quot;_blank&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184418">Peter Gardman</a>.</p>
<p>@Peter &#8211; I suspect that this is because A.B is interpreted directly by the Matlab interpreter, whereas <i><b>struct</b>(&#8230;)</i> goes through a library routine (in <i>libmx.dll</i> on Windows).</p>
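<p>(A minimal sketch, not part of the original comment: looping each form amortizes the <i>tic/toc</i> measurement overhead, which otherwise dominates a single sub-microsecond assignment.)</p>
<pre lang="matlab">
% Compare direct field assignment vs. the struct() library call
tic; for k = 1:1e4, A.B = 3; end; toc
tic; for k = 1:1e4, M = struct('B',3); end; toc
</pre>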
<p>Note: when you place the profiled code in an m-file, rather than testing in the non-optimized Command Prompt, the speedup is only ~2-3x, not 100x (not that a 2x speedup should be disparaged&#8230;).</p>
<p>In a related matter, I compare alternatives for preallocating arrays of structs <a href="http://undocumentedmatlab.com/blog/preallocation-performance/#structs" target="_blank" rel="nofollow">here</a>.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Peter Gardman		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-184418</link>

		<dc:creator><![CDATA[Peter Gardman]]></dc:creator>
		<pubDate>Thu, 04 Apr 2013 17:13:48 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-184418</guid>

					<description><![CDATA[Dear Yair,
I&#039;m wondering about the fastest way of creating a structure. In my ignorance, I used to think that &#062;&#062;A.B = x would call A = struct(&#039;B&#039;,x), so writing A = struct(&#039;B&#039;,x) in an m-file would be much faster. To my surprise, A.B = x is much faster:

&#062;&#062; tic, A.B = 3; toc
Elapsed time is 0.000003 seconds.

&#062;&#062; tic, M = struct(&#039;B&#039;,3); toc
Elapsed time is 0.000297 seconds.

Do you have any clue why this is? Or, if you have already talked about this in a post, could you please point me to where? Thank you very much]]></description>
			<content:encoded><![CDATA[<p>Dear Yair,<br />
I&#8217;m wondering about the fastest way of creating a structure. In my ignorance, I used to think that &gt;&gt;A.B = x would call A = struct(&#8216;B&#8217;,x), so writing A = struct(&#8216;B&#8217;,x) in an m-file would be much faster. To my surprise, A.B = x is much faster:</p>
<p>&gt;&gt; tic, A.B = 3; toc<br />
Elapsed time is 0.000003 seconds.</p>
<p>&gt;&gt; tic, M = struct(&#8216;B&#8217;,3); toc<br />
Elapsed time is 0.000297 seconds.</p>
<p>Do you have any clue why this is? Or, if you have already talked about this in a post, could you please point me to where? Thank you very much</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Waiting for asynchronous events &#124; Undocumented Matlab		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-96981</link>

		<dc:creator><![CDATA[Waiting for asynchronous events &#124; Undocumented Matlab]]></dc:creator>
		<pubDate>Wed, 18 Jul 2012 18:51:22 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-96981</guid>

					<description><![CDATA[[...] Performance tuning can indeed be counter-intuitive sometimes, until you learn the underlying reasons, at which point it becomes clear (I&#039;ve shown several examples of this in the past) [...]]]></description>
			<content:encoded><![CDATA[<p>[&#8230;] Performance tuning can indeed be counter-intuitive sometimes, until you learn the underlying reasons, at which point it becomes clear (I&#8217;ve shown several examples of this in the past) [&#8230;]</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Thomas M.		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-52346</link>

		<dc:creator><![CDATA[Thomas M.]]></dc:creator>
		<pubDate>Mon, 25 Jul 2011 16:27:35 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-52346</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51711&quot;&gt;Thomas M.&lt;/a&gt;.

The matrix has to be that size.  How can the GPU be leveraged on xPC Target?  Also, what do you mean by using MEX to speed up calculations?]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51711">Thomas M.</a>.</p>
<p>The matrix has to be that size.  How can the GPU be leveraged on xPC Target?  Also, what do you mean by using MEX to speed up calculations?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Yair Altman		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51720</link>

		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Tue, 19 Jul 2011 18:50:55 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-51720</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51711&quot;&gt;Thomas M.&lt;/a&gt;.

@Thomas - You can always use a GPU or MEX to speed up the calculations, but try to think whether you really need to have all this data. Just reducing the size to (100,100) will improve performance by 2 orders of magnitude, not to mention reducing memory problems, thrashing (memory swaps) etc.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51711">Thomas M.</a>.</p>
<p>@Thomas &#8211; You can always use a GPU or MEX to speed up the calculations, but try to think whether you really need to have all this data. Just reducing the size to (100,100) will improve performance by 2 orders of magnitude, not to mention reducing memory problems, thrashing (memory swaps) etc.</p>
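<p>(A minimal sketch of the GPU route, assuming the Parallel Computing Toolbox is available; not part of the original comment.)</p>
<pre lang="matlab">
% Move the operands to the GPU, multiply there, then copy the result back
Mg = gpuArray(M);
Ng = gpuArray(N);
A  = gather(Mg * Ng);   % gather() copies the result back to host memory
</pre>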
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Thomas M.		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51711</link>

		<dc:creator><![CDATA[Thomas M.]]></dc:creator>
		<pubDate>Tue, 19 Jul 2011 17:22:34 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-51711</guid>

					<description><![CDATA[A little outside the subject, but I&#039;ve been developing code using Simulink for xPC Target, and the bottleneck in my algorithm is the large matrix calculation A = M*N, where both M and N are (250,250).  I am sampling at 0.001 s and I am trying to find ways to optimize the speed.  Is it possible to make this calculation faster?  Also, why would this calculation take longer with xPC 5.0 compared with xPC 4.3 loaded on the same system?]]></description>
			<content:encoded><![CDATA[<p>A little outside the subject, but I&#8217;ve been developing code using Simulink for xPC Target, and the bottleneck in my algorithm is the large matrix calculation A = M*N, where both M and N are (250,250).  I am sampling at 0.001 s and I am trying to find ways to optimize the speed.  Is it possible to make this calculation faster?  Also, why would this calculation take longer with xPC 5.0 compared with xPC 4.3 loaded on the same system?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Ajay Iyer		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51705</link>

		<dc:creator><![CDATA[Ajay Iyer]]></dc:creator>
		<pubDate>Tue, 19 Jul 2011 16:46:52 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-51705</guid>

					<description><![CDATA[Interestingly, here is a segment of code (#4) that is as fast as #1.

&lt;pre lang=&quot;matlab&quot;&gt;
% Segment #4 (assumes x is a 1000x1000 matrix, as in the post&#039;s earlier segments)
y = ones(1000,1000);
tic
x = x(:);
y = y(:);
for i=1:100
    y = x.*y ;
end
y = reshape(y,1000,1000);
x = reshape(x,1000,1000);
toc
&lt;/pre&gt;]]></description>
			<content:encoded><![CDATA[<p>Interestingly, here is a segment of code (#4) that is as fast as #1.</p>
<pre lang="matlab">
% Segment #4 (assumes x is a 1000x1000 matrix, as in the post's earlier segments)
y = ones(1000,1000);
tic
x = x(:);
y = y(:);
for i=1:100
    y = x.*y ;
end
y = reshape(y,1000,1000);
x = reshape(x,1000,1000);
toc
</pre>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: rami		</title>
		<link>https://undocumentedmatlab.com/articles/matrix-processing-performance#comment-51525</link>

		<dc:creator><![CDATA[rami]]></dc:creator>
		<pubDate>Mon, 18 Jul 2011 07:01:18 +0000</pubDate>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2380#comment-51525</guid>

					<description><![CDATA[The reason the second loop is faster than the third loop is that Matlab uses column-major order:

&lt;pre lang=&quot;matlab&quot;&gt;
a = [1,2,3;4,5,6]
&lt;/pre&gt;

is stored as 1,4,2,5,3,6.

(Ref : http://en.wikipedia.org/wiki/Row-major_order#Column-major_order)

As for the first one being faster than the second one, I don&#039;t know.

Cheers]]></description>
			<content:encoded><![CDATA[<p>The reason the second loop is faster than the third loop is that Matlab uses column-major order:</p>
<pre lang="matlab">
a = [1,2,3;4,5,6]
</pre>
<p>is stored as 1,4,2,5,3,6.</p>
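<p>(A minimal sketch, not part of the original comment: linear indexing walks memory in storage order, so it makes the layout visible.)</p>
<pre lang="matlab">
a = [1,2,3; 4,5,6];
a(1:6)   % returns 1 4 2 5 3 6, i.e. column-major order
</pre>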
<p>(Ref : <a href="http://en.wikipedia.org/wiki/Row-major_order#Column-major_order" rel="nofollow ugc">http://en.wikipedia.org/wiki/Row-major_order#Column-major_order</a>)</p>
<p>As for the first one being faster than the second one, I don&#8217;t know.</p>
<p>Cheers</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
