<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Memory &#8211; Undocumented Matlab</title>
	<atom:link href="https://undocumentedmatlab.com/articles/category/memory/feed" rel="self" type="application/rss+xml" />
	<link>https://undocumentedmatlab.com</link>
	<description>Professional Matlab consulting, development and training</description>
	<lastBuildDate>Sun, 13 May 2018 20:22:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>
	<item>
		<title>Blocked wait with timeout for asynchronous events</title>
		<link>https://undocumentedmatlab.com/articles/blocked-wait-with-timeout-for-asynchronous-events?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=blocked-wait-with-timeout-for-asynchronous-events</link>
					<comments>https://undocumentedmatlab.com/articles/blocked-wait-with-timeout-for-asynchronous-events#respond</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Sun, 13 May 2018 20:22:08 +0000</pubDate>
				<category><![CDATA[Java]]></category>
		<category><![CDATA[Listeners]]></category>
		<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Performance]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=7620</guid>

					<description><![CDATA[<p>It is easy to convert asynchronous (streaming) events into a blocked wait in Matlab. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/blocked-wait-with-timeout-for-asynchronous-events">Blocked wait with timeout for asynchronous events</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/waiting-for-asynchronous-events" rel="bookmark" title="Waiting for asynchronous events">Waiting for asynchronous events </a> <small>The Matlab waitfor function can be used to wait for asynchronous Java/ActiveX events, as well as with timeouts. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/udd-events-and-listeners" rel="bookmark" title="UDD Events and Listeners">UDD Events and Listeners </a> <small>UDD event listeners can be used to listen to property value changes and other important events of Matlab objects...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-callbacks-for-java-events" rel="bookmark" title="Matlab callbacks for Java events">Matlab callbacks for Java events </a> <small>Events raised in Java code can be caught and handled in Matlab callback functions - this article explains how...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/detecting-window-focus-events" rel="bookmark" title="Detecting window focus events">Detecting window focus events </a> <small>Matlab does not have any documented method to detect window focus events (gain/loss). This article describes an undocumented way to detect such events....</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>Readers of this website may have noticed that I have recently added an <a href="/iqml" rel="nofollow" target="_blank">IQML section</a> to the website&#8217;s top menu bar. IQML is a software connector that connects Matlab to DTN&#8217;s IQFeed, a financial data-feed of live and historic market data. IQFeed, like most other data-feed providers, sends its data in asynchronous messages, which need to be processed one at a time by the receiving client program (Matlab in this case). I wanted IQML to provide users with two complementary modes of operation:<br />
<span class="alignright"><a href="/iqml" rel="nofollow" target="_blank"><img fetchpriority="high" decoding="async" src="https://undocumentedmatlab.com/images/IQML.png" title="IQML's IQFeed-Matlab connectivity" alt="IQML's IQFeed-Matlab connectivity" width="426" height="213"/></a></span></p>
<ul>
<li><b>Streaming</b> (asynchronous, non-blocking) &#8211; incoming server data is processed by internal callback functions in the background, and is made available for the user to query at any later time.</li>
<li><b>Blocking</b> (synchronously waiting for data) &#8211; in this case, the main Matlab processing flow waits until the data arrives, or until the specified timeout period has passed &#8211; whichever comes first.</li>
</ul>
<p>Implementing streaming mode is relatively simple in general &#8211; all we need to do is ensure that the underlying connector object passes the incoming server messages to the relevant Matlab function for processing, and ensure that the user has some external way to access this processed data in Matlab memory (in practice making the connector object pass incoming data messages as Matlab callback events may be non-trivial, but that&#8217;s a separate matter &#8211; <a href="/articles/matlab-callbacks-for-java-events" target="_blank">read here</a> for details).<br />
In today&#8217;s article I&#8217;ll explain how we can implement a blocking mode in Matlab. It may sound difficult but it turns out to be relatively simple.<br />
I had several requirements/criteria for my blocked-wait implementation:</p>
<ol>
<li><b>Compatibility</b> &#8211; It had to work on all Matlab platforms, and all Matlab releases in the past decade (which rules out using Microsoft Dot-NET objects)</li>
<li><b>Ease-of-use</b> &#8211; It had to work out-of-the-box, with no additional installation/configuration (which ruled out using Perl/Python objects), and had to use a simple interface function</li>
<li><b>Timeout</b> &#8211; It had to implement a timed-wait, and had to be able to tell whether the program proceeded due to a timeout, or because the expected event has arrived</li>
<li><b>Performance</b> &#8211; It had to have minimal performance overhead</li>
</ol>
<p><span id="more-7620"></span></p>
<h3 id="basic">The basic idea</h3>
<p>The basic idea is to use Matlab&#8217;s builtin <i><b>waitfor</b></i>, as I explained in <a href="/articles/waiting-for-asynchronous-events" target="_blank">another post</a> back in 2012. If our underlying connector object has some settable boolean property (e.g., <code>Done</code>) then we can set this property in our event callback, and then <code>waitfor(object,'Done')</code>. The timeout mechanism is implemented using a dedicated timer object, as follows:</p>
<pre lang="matlab">
% Wait for data updates to complete (isDone = false if timeout, true if event has arrived)
function isDone = waitForDone(object, timeout)
    % Initialize: timeout flag = false
    object.setDone(false);
    % Create and start the separate timeout timer thread
    hTimer = timer('TimerFcn',@(h,e)object.setDone(true), 'StartDelay',timeout);
    start(hTimer);
    % Wait for the object property to change or for timeout, whichever comes first
    waitfor(object,'Done');
    % waitfor is over - either because of timeout or because the data changed
    % To determine which, check whether the timer callback was activated
    isDone = isvalid(hTimer) && hTimer.TasksExecuted == 0;
    % Delete the timer object
    try stop(hTimer);   catch, end
    try delete(hTimer); catch, end
    % Return the flag indicating whether or not timeout was reached
end  % waitForDone
</pre>
<p>where the event callback is responsible for invoking <code>object.setDone(true)</code> when the server data arrives. The usage would then be similar to this:</p>
<pre lang="matlab">
requestDataFromServer();
if isBlockingMode
   % Blocking mode - wait for data or timeout (whichever comes first)
   isDone = waitForDone(object, 10.0);  % wait up to 10 secs for data to arrive
   if ~isDone  % indicates a timeout
      fprintf(2, 'No server data has arrived within the specified timeout period!\n')
   end
else
   % Non-blocking (streaming) mode - continue with regular processing
end
</pre>
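<p>For completeness, here is a sketch of the event-callback side that releases the blocked wait. The <code>connector</code> object and its <code>DataReceivedCallback</code> property are hypothetical placeholders &#8211; adapt them to whatever callback hook your specific connector exposes:</p>
<pre lang="matlab">
% Sketch: wire the connector's data-arrival event to release the blocked wait
% (DataReceivedCallback is a made-up property name - adapt to your connector)
connector.DataReceivedCallback = @(h,e) onDataReceived(h, e, object);

function onDataReceived(~, eventData, object)
    storeData(eventData);    % user-specific processing of the incoming message
    object.setDone(true);    % releases waitfor(object,'Done') in waitForDone()
end
</pre>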
<h3 id="generic">Using a stand-alone generic signaling object</h3>
<p>But what can we do if we don&#8217;t have such a <code>Done</code> property in our underlying object, or if we do not have programmatic access to it?<br />
We could create a new non-visible figure and then <i><b>waitfor</b></i> one of its properties (e.g. <code>Resize</code>). The property would be initialized to <code>'off'</code>, and within both the event and timer callbacks we would set it to <code>'on'</code>, and then <code>waitfor(hFigure,'Resize','on')</code>. However, figures, even non-visible ones, are quite heavy objects in terms of memory, UI resources, and performance.<br />
It would be preferable to use a much lighter-weight object, as long as it abides by the other criteria above. Luckily, there are numerous such objects in Java, which has been bundled with every Matlab release since 2000, on every platform. As long as we choose a small Java class that has existed for a long time, we should be fine. For example, we could use a simple <code>javax.swing.JButton</code> and its boolean property <code>Enabled</code>:</p>
<pre lang="matlab">
hSemaphore = handle(javax.swing.JButton);  % waitfor() expects a handle() object, not a "naked" Java reference
% Wait for data updates to complete (isDone = false if timeout, true if event has arrived)
function isDone = waitForDone(hSemaphore, timeout)
    % Initialize: timeout flag = false
    hSemaphore.setEnabled(false);
    % Create and start the separate timeout timer thread
    hTimer = timer('TimerFcn',@(h,e)hSemaphore.setEnabled(true), 'StartDelay',timeout);
    start(hTimer);
    % Wait for the object property to change or for timeout, whichever comes first
    waitfor(hSemaphore,'Enabled');
    % waitfor is over - either because of timeout or because the data changed
    % To determine which, check whether the timer callback was activated
    isDone = isvalid(hTimer) && hTimer.TasksExecuted == 0;
    % Delete the timer object
    try stop(hTimer);   catch, end
    try delete(hTimer); catch, end
    % Return the flag indicating whether or not timeout was reached
end  % waitForDone
</pre>
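<p>As a quick sanity check, the <code>Enabled</code> pseudo-property of the JButton &#8220;semaphore&#8221; can be exercised directly in the Matlab console (the <code>'on'</code>/<code>'off'</code> values reflect how Matlab typically maps Java boolean bean properties):</p>
<pre lang="matlab">
% Quick sanity check of the JButton "semaphore" object
hSemaphore = handle(javax.swing.JButton);
hSemaphore.setEnabled(false);
get(hSemaphore, 'Enabled')   % expected: 'off'
hSemaphore.setEnabled(true);
get(hSemaphore, 'Enabled')   % expected: 'on'
</pre>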
<p>In this implementation, we would need to pass the <code>hSemaphore</code> object handle to the event callback, so that it would be able to invoke <code>hSemaphore.setEnabled(true)</code> when the server data has arrived.<br />
Under the hood, note that <code>Enabled</code> is not a true &#8220;property&#8221; of <code>javax.swing.JButton</code> &#8211; it is merely a pair of simple public getter/setter methods (<i>isEnabled()</i> and <i>setEnabled()</i>) that Matlab interprets as a &#8220;property&#8221;. For all intents and purposes, in our Matlab code we can treat <code>Enabled</code> as a property of <code>javax.swing.JButton</code>, including its use by Matlab&#8217;s builtin <i><b>waitfor</b></i> function.<br />
There is a slight overhead to this: on my laptop the <i><b>waitfor</b></i> function returns ~20 msecs after the invocation of <code>hSemaphore.setEnabled(true)</code> in the timer or event callback &#8211; in many cases this overhead is negligible compared to the networking latency of the blocked data request. When event reactivity is of utmost importance, users can always use streaming (non-blocking) mode, and process the incoming data events immediately in a callback.<br />
Of course, it would be best if MathWorks added a timeout option and return value to Matlab&#8217;s builtin <i><b>waitfor</b></i> function, similar to my <code>waitForDone</code> function &#8211; this would significantly simplify the code above. Until that happens, you can use <code>waitForDone</code> pretty much as-is. I have used similar combinations of blocking and streaming modes with multiple other connectors that I have implemented over the years (Interactive Brokers, CQG, Bloomberg and Reuters, for example), and the bottom line is that it works well in practice.<br />
<a href="/consulting" target="_blank">Let me know</a> if you&#8217;d like me to assist with your own Matlab project or data connector, either developing it from scratch or improving your existing code. I will be visiting Boston and New York in early June and would be happy to meet you in person to discuss your specific needs.</p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/blocked-wait-with-timeout-for-asynchronous-events">Blocked wait with timeout for asynchronous events</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/waiting-for-asynchronous-events" rel="bookmark" title="Waiting for asynchronous events">Waiting for asynchronous events </a> <small>The Matlab waitfor function can be used to wait for asynchronous Java/ActiveX events, as well as with timeouts. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/udd-events-and-listeners" rel="bookmark" title="UDD Events and Listeners">UDD Events and Listeners </a> <small>UDD event listeners can be used to listen to property value changes and other important events of Matlab objects...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-callbacks-for-java-events" rel="bookmark" title="Matlab callbacks for Java events">Matlab callbacks for Java events </a> <small>Events raised in Java code can be caught and handled in Matlab callback functions - this article explains how...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/detecting-window-focus-events" rel="bookmark" title="Detecting window focus events">Detecting window focus events </a> <small>Matlab does not have any documented method to detect window focus events (gain/loss). This article describes an undocumented way to detect such events....</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/blocked-wait-with-timeout-for-asynchronous-events/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Quirks with parfor vs. for</title>
		<link>https://undocumentedmatlab.com/articles/quirks-with-parfor-vs-for?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=quirks-with-parfor-vs-for</link>
					<comments>https://undocumentedmatlab.com/articles/quirks-with-parfor-vs-for#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Thu, 05 Jan 2017 17:15:48 +0000</pubDate>
				<category><![CDATA[Guest bloggers]]></category>
		<category><![CDATA[Medium risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[Undocumented feature]]></category>
		<category><![CDATA[Bug]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=6821</guid>

					<description><![CDATA[<p>Parallelizing loops with Matlab's parfor might generate unexpected results. Users beware! </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/quirks-with-parfor-vs-for">Quirks with parfor vs. for</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/a-few-parfor-tips" rel="bookmark" title="A few parfor tips">A few parfor tips </a> <small>The parfor (parallel for) loops can be made faster using a few simple tips. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-compilation-quirks-take-2" rel="bookmark" title="Matlab compilation quirks &#8211; take 2">Matlab compilation quirks &#8211; take 2 </a> <small>A few hard-to-trace quirks with Matlab compiler outputs are explained. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/quirks-with-compiled-matlab-dlls" rel="bookmark" title="Quirks with compiled Matlab DLLs">Quirks with compiled Matlab DLLs </a> <small>Several quirks with Matlab-compiled DLLs are discussed and workarounds suggested. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
</ol>
</div>
]]></description>
<content:encoded><![CDATA[<p>A few months ago, I discussed several <a href="/articles/a-few-parfor-tips" target="_blank">tips regarding Matlab&#8217;s <i><b>parfor</b></i></a> command, which is used by the Parallel Computing Toolbox (PCT) for parallelizing loops. Today I wish to extend that post with some unexplained oddities when using <i><b>parfor</b></i>, compared to a standard <i><b>for</b></i> loop.</p>
<h3 id="serialization">Data serialization quirks</h3>
<p><a href="http://www.mathworks.com/matlabcentral/profile/authors/870050-dimitri-shvorob" rel="nofollow" target="_blank">Dimitri Shvorob</a> may not appear at first glance to be a prolific contributor on Matlab Central, but from the little he has posted over the years I regard him to be a Matlab power-user. So when Dimitri reports something, I take it seriously. Such was the case several months ago, when he contacted me regarding very odd behavior that he saw in his code: the <i><b>for</b></i> loop worked well, but the <i><b>parfor</b></i> version returned different (incorrect) results. Eventually, Dimitri traced the problem to something <a href="http://fluffynukeit.com/tag/loadobj" rel="nofollow" target="_blank">originally reported</a> by Dan Austin on his <a href="http://fluffynukeit.com" rel="nofollow" target="_blank">Fluffy Nuke It blog</a>.<br />
The core issue is that if we have a class object that is used within a <i><b>for</b></i> loop, Matlab can access the object directly in memory. But with a <i><b>parfor</b></i> loop, the object needs to be serialized in order to be sent over to the parallel workers, and deserialized within each worker. If this serialization/deserialization process involves internal class methods, the workers might see a different version of the class object than the one seen in the serial <i><b>for</b></i> loop. This could happen, for example, if the serialization/deserialization method croaks on an error, or depends on some dynamic (or random) conditions to create data.<br />
In other words, when we use data objects in a <i><b>parfor</b></i> loop, the data object is not necessarily sent &#8220;as-is&#8221;: additional processing may be involved under the hood that modifies the data in a way that may be invisible to the user (or the loop code), resulting in different processing results of the parallel (<i><b>parfor</b></i>) vs. serial (<i><b>for</b></i>) loops.<br />
For additional aspects of Matlab serialization/deserialization, see <a href="/articles/serializing-deserializing-matlab-data" target="_blank">my article</a> from 2 years ago (and its interesting feedback comments).</p>
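<p>To illustrate the mechanism (this is a contrived sketch, not Dimitri&#8217;s actual code), consider a class whose <code>loadobj</code> method depends on runtime state. In a <i><b>parfor</b></i> loop the object is serialized to the workers and <code>loadobj</code> runs on each worker during deserialization, so the workers can end up seeing different data than the serial <i><b>for</b></i> loop:</p>
<pre lang="matlab">
% Contrived example: loadobj() runs on each parfor worker during
% deserialization, so any state-dependent logic makes the workers diverge
classdef FragileData
    properties
        Value = 0;
    end
    methods (Static)
        function obj = loadobj(obj)
            % A dynamic/random dependency here yields a different object
            % on every deserialization (i.e., on every worker):
            obj.Value = obj.Value + rand;
        end
    end
end
</pre>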
<h3 id="precision">Data precision quirks</h3>
<p><i>The following section was contributed by guest blogger Lior Perlmuter-Shoshany, head algorithmician at a private equity fund.</i><br />
In my work, I had to handle matrices on the order of 10<sup>9</sup> cells. To reduce the memory footprint (and hopefully also improve performance), I decided to work with data of type <code>single</code> instead of Matlab&#8217;s default <code>double</code>. Furthermore, in order to speed up the calculation I use <i><b>parfor</b></i> rather than <i><b>for</b></i> in the main calculation. At the end of the run I run a mini <i><b>for</b></i>-loop to collect the best results.<br />
What I discovered to my surprise is that the results from the <b><i>parfor</i></b> and <i><b>for</b></i> loop variants are not the same!<br />
<span id="more-6821"></span><br />
The following simplified code snippet illustrates the problem by calculating a simple standard deviation (<i><b>std</b></i>) over the same data, in both <code>single</code>- and <code>double</code>-precision. Note that the loops are run with only a single iteration, to illustrate the fact that the problem lies with the parallelization mechanism (probably the serialization/deserialization parts once again), not with the distribution of iterations among the workers.</p>
<pre lang="matlab">
clear
rng('shuffle','twister');
% Prepare the data in both double and single precision
arr_double = rand(1,100000000);
arr_single = single(arr_double);
% No loop - direct computation
std_single0 = std(arr_single);
std_double0 = std(arr_double);
% Loop #1 - serial for loop
std_single = 0;
std_double = 0;
for i=1
    std_single(i) = std(arr_single);
    std_double(i) = std(arr_double);
end
% Loop #2 - parallel parfor loop
par_std_single = 0;
par_std_double = 0;
parfor i=1
    par_std_single(i) = std(arr_single);
    par_std_double(i) = std(arr_double);
end
% Compare results of for loop vs. non-looped computation
isForSingleOk = isequal(std_single, std_single0)
isForDoubleOk = isequal(std_double, std_double0)
% Compare results of single-precision data (for vs. parfor)
isParforSingleOk = isequal(std_single, par_std_single)
parforSingleAccuracy = std_single / par_std_single
% Compare results of double-precision data (for vs. parfor)
isParforDoubleOk = isequal(std_double, par_std_double)
parforDoubleAccuracy = std_double / par_std_double
</pre>
<p>Example output:</p>
<pre lang="matlab">
isForSingleOk =
    1                   % <= true (of course!)
isForDoubleOk =
    1                   % <= true (of course!)
isParforSingleOk =
    0                   % <= false (odd!)
parforSingleAccuracy =
    0.73895227413361    % <= single-precision results are radically different in parfor vs. for
isParforDoubleOk =
    0                   % <= false (odd!)
parforDoubleAccuracy =
    1.00000000000021    % <= double-precision results are almost [but not exactly] the same in parfor vs. for
</pre>
<p>From my testing, the larger the data array, the bigger the difference is between the results of <code>single</code>-precision data when running in <i><b>for</b></i> vs. <i><b>parfor</b></i>.<br />
In other words, my experience has been that if you have a huge data matrix, it&#8217;s better to parallelize it in <code>double</code>-precision if you wish to get [nearly] accurate results. But even so, I find it deeply disconcerting that the results are not exactly identical (at least on R2015a-R2016b, on which I tested), even for the native <code>double</code>-precision data.<br />
Hmmm... bug?</p>
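<p>A likely contributor (rather than an outright bug) is that floating-point addition is not associative: parallel execution may evaluate the underlying sums in a different order, and reordering alone changes the result, much more noticeably in <code>single</code> precision. This illustrative snippet (not part of the original test) reproduces the effect without any parallelization:</p>
<pre lang="matlab">
% Floating-point addition is not associative, so summation order matters;
% the effect is far more pronounced in single precision than in double
a = single(rand(1,1e7));
s1 = sum(a);              % natural order
s2 = sum(a(end:-1:1));    % reversed order - mathematically identical
isequal(s1, s2)           % frequently returns false (0) for single data
</pre>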
<h3 id="travels">Upcoming travels - Zürich & Geneva</h3>
<p>I will shortly be traveling to clients in Zürich and Geneva, Switzerland. If you are in the area and wish to meet me to discuss how I could bring value to your work with some advanced Matlab consulting or training, then please email me (altmany at gmail):</p>
<ul>
<li><b>Zürich</b>: January 15-17</li>
<li><b>Geneva</b>: January 18-21</li>
</ul>
<p>Happy new year everybody!</p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/quirks-with-parfor-vs-for">Quirks with parfor vs. for</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/a-few-parfor-tips" rel="bookmark" title="A few parfor tips">A few parfor tips </a> <small>The parfor (parallel for) loops can be made faster using a few simple tips. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-compilation-quirks-take-2" rel="bookmark" title="Matlab compilation quirks &#8211; take 2">Matlab compilation quirks &#8211; take 2 </a> <small>A few hard-to-trace quirks with Matlab compiler outputs are explained. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/quirks-with-compiled-matlab-dlls" rel="bookmark" title="Quirks with compiled Matlab DLLs">Quirks with compiled Matlab DLLs </a> <small>Several quirks with Matlab-compiled DLLs are discussed and workarounds suggested. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/quirks-with-parfor-vs-for/feed</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>General-use object copy</title>
		<link>https://undocumentedmatlab.com/articles/general-use-object-copy?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=general-use-object-copy</link>
					<comments>https://undocumentedmatlab.com/articles/general-use-object-copy#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 06 May 2015 13:30:23 +0000</pubDate>
				<category><![CDATA[High risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Undocumented feature]]></category>
		<category><![CDATA[Undocumented function]]></category>
		<category><![CDATA[Internal component]]></category>
		<category><![CDATA[MCOS]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=5782</guid>

					<description><![CDATA[<p>Matlab's dual internal serialization/deserialization functions can be used to create duplicates of any object. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/general-use-object-copy">General-use object copy</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/handle-object-as-default-class-property-value" rel="bookmark" title="Handle object as default class property value">Handle object as default class property value </a> <small>MCOS property initialization has a documented but unexpected behavior that could cause many bugs in user code. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/undocumented-cursorbar-object" rel="bookmark" title="Undocumented cursorbar object">Undocumented cursorbar object </a> <small>Matlab's internal undocumented graphics.cursorbar object can be used to present dynamic data-tip cross-hairs...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/accessing-private-object-properties" rel="bookmark" title="Accessing private object properties">Accessing private object properties </a> <small>Private properties of Matlab class objects can be accessed (read and write) using some undocumented techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/class-object-tab-completion-and-improper-field-names" rel="bookmark" title="Class object tab completion &amp; improper field names">Class object tab completion &amp; improper field names </a> <small>Tab completions and property access can be customized for user-created Matlab classes. ...</small></li>
</ol>
</div>
]]></description>
<content:encoded><![CDATA[<p>When using Matlab objects, whether of a Matlab class (MCOS) or of any other kind (e.g., Java, COM, C# etc.), it is often useful to create a copy of the original object, complete with all internal property values. This enables modification of the new copy without affecting the original object. This is not important for MCOS value-class objects, since value objects use the (<i><a target="_blank" href="/articles/internal-matlab-memory-optimizations">Copy-on-Write</a>/Update</i>, a.k.a. <i>Lazy Copy</i>) mechanism, which the Matlab interpreter applies automatically when it detects that a change is made to the copied reference. However, it is very important for handle objects, where modifying any property of the copied object also modifies the original object.<br />
Most OOP languages include some sort of a <a target="_blank" rel="nofollow" href="http://en.wikipedia.org/wiki/Copy_constructor"><i>copy constructor</i></a>, which enables programmers to duplicate a handle/reference object, internal properties included, such that it becomes entirely separate from the original object. Unfortunately, Matlab did not include such a copy constructor until R2011a (<a target="_blank" rel="nofollow" href="http://www.mathworks.com/help/matlab/ref/matlab.mixin.copyable-class.html"><i><b>matlab.mixin.Copyable.copy()</b></i></a>).<br />
On Matlab R2010b and older &#8211; as well as on newer releases, for classes that do not inherit <i><b>matlab.mixin.Copyable</b></i> &#8211; we do not have a readily-available solution for handle-object copy. Until now, that is.<br />
<span id="more-5782"></span><br />
There are several ways by which we can create such a copy function. We might call the main constructor to create a default object and then override its properties by iterating over the original object&#8217;s properties. This might work in some cases, but not if there is no default constructor for the object, or if there are side-effects to object property modifications. If we wanted to implement a deep (rather than shallow) copy, we&#8217;d need to recursively iterate over all the properties of the internal objects as well.<br />
A simpler solution might be to save the object to a temporary file (<i><b>tempname</b></i>), then load from that file (which creates a copy), and finally delete the temp file. This is nice and clean, but the extra I/O could be relatively slow compared to in-memory processing.<br />
Which leads us to today&#8217;s chosen solution, where we use Matlab&#8217;s builtin functions <i><b>getByteStreamFromArray</b></i> and <i><b>getArrayFromByteStream</b></i>, which I discussed last year as a way to easily <a target="_blank" href="/articles/serializing-deserializing-matlab-data">serialize and deserialize Matlab data</a> of any type. Specifically, <i><b>getArrayFromByteStream</b></i> has the side-effect of creating a duplicate of the serialized data, which is perfect for our needs here (note that this pair of functions is only available on R2010b or newer; on R2010a or older we can still serialize via a temp file):</p>
<pre lang='matlab'>
% Copy function - replacement for matlab.mixin.Copyable.copy() to create object copies
function newObj = copy(obj)
    try
        % R2010b or newer - directly in memory (faster)
        objByteArray = getByteStreamFromArray(obj);
        newObj = getArrayFromByteStream(objByteArray);
    catch
        % R2010a or earlier - serialize via temp file (slower)
        fname = [tempname '.mat'];
        save(fname, 'obj');
        newObj = load(fname);
        newObj = newObj.obj;
        delete(fname);
    end
end
</pre>
<p>This function can be placed anywhere on the Matlab path and will work on all recent Matlab releases (including R2010b and older), any type of Matlab data (including value or handle objects, UDD objects, structs, arrays etc.), as well as external objects (Java, C#, COM). In short, it works on anything that can be assigned to a Matlab variable:</p>
<pre lang='matlab'>
obj1 = ... % anything really!
obj2 = obj1.copy();  % alternative #1
obj2 = copy(obj1);   % alternative #2
</pre>
<p>Alternative #1 may look &#8220;nicer&#8221; to a computer scientist, but alternative #2 is preferable because it also handles the case of non-object data (e.g., [] or &#8216;abc&#8217; or <i><b>magic(5)</b></i> or a struct or cell array), whereas alternative #1 would error in such cases.<br />
In any case, using either alternative, we no longer need to worry about inheriting our MCOS class from <i><b>matlab.mixin.Copyable</b></i>, or about backward compatibility with R2010b and older (I may possibly be bashed for this statement, but in my eyes future compatibility is less important than backward compatibility). This is not such a wild edge-case: I came across the idea for this post last week, while developing an MCOS project for a consulting client who uses both R2010a and R2012a, where the same code needed to run on both Matlab releases.<br />
Using the serialization functions also solves the case of creating copies for Java/C#/COM objects, which currently have no other solution, except if these objects happen to contain their own copy method.<br />
In summary, using Matlab&#8217;s undocumented builtin serialization functions enables easy implementation of a very efficient (in-memory) copy constructor, which is expected to work across all Matlab types and many releases, without requiring any changes to existing code &#8211; just placing the above <i><b>copy</b></i> function on the Matlab path. This is expected to continue working properly until Matlab decides to remove the serialization functions (which should hopefully never happen, as they are <i>so</i> useful).<br />
Sometimes, the best solutions lie not in sophisticated new features (e.g., <i><b>matlab.mixin.Copyable</b></i>), but in plain ol&#8217; existing building blocks. There&#8217;s a good lesson to be learned here, I think.<br />
p.s. &#8211; I do realize that <i><b>matlab.mixin.Copyable</b></i> provides the nice feature of enabling users to control the copy process, including implementing deep or shallow or selective copy. If that&#8217;s your need and you have R2011a or newer then good for you, go ahead and inherit Copyable. Today&#8217;s post was meant for the regular Joe who doesn&#8217;t need this fancy feature, but does need to support R2010b, and/or a simple way to clone Java/C#/COM objects.</p>
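<p>To illustrate the Java-object case mentioned above, here is a hedged usage sketch. It assumes the <i><b>copy</b></i> function above is on the Matlab path, and uses <code>java.awt.Point</code> simply because that class implements <code>java.io.Serializable</code> (a prerequisite for Java serialization to work):</p>
<pre lang='matlab'>
% Clone a Java object via the serialization-based copy() function above
pt1 = java.awt.Point(3, 4);
pt2 = copy(pt1);          % deep copy via getByteStreamFromArray/getArrayFromByteStream
pt2.setLocation(10, 20);  % modify only the copy
% pt1 still holds (3,4) - the two objects are fully independent
</pre>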
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/general-use-object-copy">General-use object copy</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/handle-object-as-default-class-property-value" rel="bookmark" title="Handle object as default class property value">Handle object as default class property value </a> <small>MCOS property initialization has a documented but unexpected behavior that could cause many bugs in user code. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/undocumented-cursorbar-object" rel="bookmark" title="Undocumented cursorbar object">Undocumented cursorbar object </a> <small>Matlab's internal undocumented graphics.cursorbar object can be used to present dynamic data-tip cross-hairs...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/accessing-private-object-properties" rel="bookmark" title="Accessing private object properties">Accessing private object properties </a> <small>Private properties of Matlab class objects can be accessed (read and write) using some undocumented techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/class-object-tab-completion-and-improper-field-names" rel="bookmark" title="Class object tab completion &amp; improper field names">Class object tab completion &amp; improper field names </a> <small>Tab completions and property access can be customized for user-created Matlab classes. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/general-use-object-copy/feed</wfw:commentRss>
			<slash:comments>15</slash:comments>
		
		
			</item>
		<item>
		<title>New book: Accelerating MATLAB Performance</title>
		<link>https://undocumentedmatlab.com/articles/new-book-accelerating-matlab-performance?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=new-book-accelerating-matlab-performance</link>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Tue, 16 Dec 2014 21:22:03 +0000</pubDate>
				<category><![CDATA[GUI]]></category>
		<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[Toolbox]]></category>
		<category><![CDATA[Book]]></category>
		<category><![CDATA[Performance]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=5391</guid>

					<description><![CDATA[<p>Accelerating MATLAB Performance (ISBN 9781482211290) is a book dedicated to improving Matlab performance (speed). </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/new-book-accelerating-matlab-performance">New book: Accelerating MATLAB Performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/tips-for-accelerating-matlab-performance" rel="bookmark" title="Tips for accelerating Matlab performance">Tips for accelerating Matlab performance </a> <small>My article on "Tips for Accelerating MATLAB Performance" was recently featured in the September 2017 Matlab newsletter digest. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-book" rel="bookmark" title="New book: Undocumented Secrets of MATLAB-Java Programming">New book: Undocumented Secrets of MATLAB-Java Programming </a> <small>Undocumented Secrets of Matlab-Java Programming (ISBN 9781439869031) is a book dedicated to the integration of Matlab and Java. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/some-performance-tuning-tips" rel="bookmark" title="Some Matlab performance-tuning tips">Some Matlab performance-tuning tips </a> <small>Matlab can be made to run much faster using some simple optimization techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>I am pleased to announce that after three years of research and hard work, following my first book on <a target="_blank" href="/books/matlab-java">Matlab-Java programming</a>, my new book &#8220;<b>Accelerating MATLAB Performance</b>&#8221; is finally published.<br />
<span class="alignright"><a target="_blank" rel="nofollow" href="http://www.crcpress.com/product/isbn/9781482211290#post-img"><img decoding="async" title="Accelerating MATLAB Performance book" src="https://undocumentedmatlab.com/images/K21680-cover-3D-320x454b.jpg" alt="Accelerating MATLAB Performance book" width="320" height="454"/></a><!-- http://www.crcpress.com/ecommerce_product/product_detail.jsf?isbn=9781482211290 --><br />
<a target="_blank" rel="nofollow" href="http://www.crcpress.com/product/isbn/9781482211290#post-banner"><img decoding="async" title="CRC discount promo code" alt="CRC discount promo code" src="https://undocumentedmatlab.com/images/CRChoriz260x69_MZK07_25perc.gif" class="aligncenter" style="margin: 10px 0 0 30px;" width="260" height="69"/></a><br />
</span> The Matlab programming environment is often perceived as a platform suitable for prototyping and modeling but not for “serious” applications. One of the main complaints is that Matlab is just too slow.<br />
<i><b>Accelerating MATLAB Performance</b></i> (<a target="_blank" rel="nofollow" href="http://www.crcpress.com/product/isbn/9781482211290#post-isbn">CRC Press, ISBN 9781482211290</a>, 785 pages) aims to correct this perception, by describing multiple ways to greatly improve Matlab program speed.<br />
The book:</p>
<ul>
<li>Demonstrates how to profile MATLAB code for performance and resource usage, enabling users to focus on the program’s actual hotspots</li>
<li>Considers tradeoffs in performance tuning, horizontal vs. vertical scalability, latency vs. throughput, and perceived vs. actual performance</li>
<li>Explains generic speedup techniques used throughout the software industry and their adaptation for Matlab, plus methods specific to Matlab</li>
<li>Analyzes the effects of various data types and processing functions</li>
<li>Covers vectorization, parallelization (implicit and explicit), distributed computing, optimization, memory management, chunking, and caching</li>
<li>Explains Matlab’s memory model and shows how to profile memory usage and optimize code to reduce memory allocations and data fetches</li>
<li>Describes the use of GPU, MEX, FPGA, and other forms of compiled code</li>
<li>Details acceleration techniques for GUI, graphics, I/O, Simulink, object-oriented Matlab, Matlab startup, and deployed applications</li>
<li>Discusses a wide variety of MathWorks and third-party functions, utilities, libraries, and toolboxes that can help to improve performance</li>
</ul>
<p>Ideal for novices and professionals alike, the book leaves no stone unturned. It covers all aspects of Matlab, taking a comprehensive approach to boosting Matlab performance. It is packed with thousands of helpful tips, code examples, and online references. Supported by this active website, the book will help readers rapidly attain significant reductions in development costs and program run times.<br />
Additional information about the book, including detailed Table-of-Contents, book structure, reviews, resources and errata list, can be found in a <a target="_blank" href="/books/matlab-performance">dedicated webpage</a> that I&#8217;ve prepared for this book and plan to maintain.<br />
<center><!-- span style="background-color:#E7E7E7;" --><b><a target="_blank" rel="nofollow" href="http://www.crcpress.com/product/isbn/9781482211290#post-cta">Click here to get your book copy now</a>!</b><br />
Use promo code  <span style="background-color: rgb(255, 255, 0);"><b>MZK07</b></span> for a 25% discount and free worldwide shipping on crcpress.com</center><br />
Instead of focusing on just a single performance aspect, I&#8217;ve attempted to cover all bases at least to some degree. The basic idea is that there are numerous different ways to speed up Matlab code: Some users might like vectorization, others may prefer parallelization, still others may choose caching, or smart algorithms, or better memory-management, or compiled C code, or improved I/O, or faster graphics. All of these alternatives are perfectly fine, and the book attempts to cover every major alternative. I hope that you will find some speedup techniques to your liking among the alternatives, and at least a few new insights that you can employ to improve your program&#8217;s speed.<br />
I am the first to admit that this book is far from perfect. There are several topics that I would have loved to explore in greater detail, and there are probably many speedup tips that I forgot to mention or have not yet discovered. Still, with over 700 pages of speedup tips, I thought this book might be useful enough as-is, flawed as it may be. After all, it will never be perfect, but I worked very hard to make it close enough, and I really hope that you&#8217;ll agree.<br />
If your work relies on Matlab code performance in any way, you might benefit by reading this book. If your organization has several people who might benefit, consider inviting me for <a target="_blank" href="/training">dedicated onsite training</a> on Matlab performance and other advanced Matlab topics.<br />
As always, your comments and feedback would be greatly welcome &#8211; please post them directly on <a href="/books/matlab-performance#respond">the book&#8217;s webpage</a>.<br />
Happy Holidays everybody!</p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/new-book-accelerating-matlab-performance">New book: Accelerating MATLAB Performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/tips-for-accelerating-matlab-performance" rel="bookmark" title="Tips for accelerating Matlab performance">Tips for accelerating Matlab performance </a> <small>My article on "Tips for Accelerating MATLAB Performance" was recently featured in the September 2017 Matlab newsletter digest. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-book" rel="bookmark" title="New book: Undocumented Secrets of MATLAB-Java Programming">New book: Undocumented Secrets of MATLAB-Java Programming </a> <small>Undocumented Secrets of Matlab-Java Programming (ISBN 9781439869031) is a book dedicated to the integration of Matlab and Java. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/some-performance-tuning-tips" rel="bookmark" title="Some Matlab performance-tuning tips">Some Matlab performance-tuning tips </a> <small>Matlab can be made to run much faster using some simple optimization techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Allocation performance take 2</title>
		<link>https://undocumentedmatlab.com/articles/allocation-performance-take-2?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=allocation-performance-take-2</link>
					<comments>https://undocumentedmatlab.com/articles/allocation-performance-take-2#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 14 Aug 2013 18:00:05 +0000</pubDate>
				<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Undocumented feature]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=4086</guid>

					<description><![CDATA[<p>The clear function has some non-trivial effects on Matlab performance. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/allocation-performance-take-2">Allocation performance take 2</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/performance-scatter-vs-line" rel="bookmark" title="Performance: scatter vs. line">Performance: scatter vs. line </a> <small>In many circumstances, the line function can generate visually-identical plots as the scatter function, much faster...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/array-resizing-performance" rel="bookmark" title="Array resizing performance">Array resizing performance </a> <small>Several alternatives are explored for dynamic array growth performance in Matlab loops. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>Last week, Mike Croucher posted a very interesting <a target="_blank" rel="nofollow" href="http://www.walkingrandomly.com/?p=5043">article</a> on the fact that <i><b>cumprod</b></i> can be used to generate a vector of powers much more quickly than the built-in <code>.^</code> operator. Trying to improve on Mike&#8217;s results, I used my finding that <code>zeros(n,m)+scalar</code> is often faster than <code>ones(n,m)*scalar</code> (see my article on <a target="_blank" href="/articles/preallocation-performance/#non-default">pre-allocation performance</a>). Applying this to Mike&#8217;s powers-vector example, <code>zeros(n,m)+scalar</code> only gave me a 25% performance boost (i.e., 1-1/1.25 or 20% faster), rather than the x5 speedup that I received in my original article.<br />
Naturally, the difference could be due to different conditions: platform, OS, Matlab release, and allocation size. But the difference felt intriguing enough to warrant a small investigation. I came up with some interesting new findings that I cannot fully explain:<br />
<figure style="width: 495px" class="wp-caption alignright"><img loading="lazy" decoding="async" alt="The performance of allocating zeros, ones" src="https://undocumentedmatlab.com/images/clear_performance.gif" title="The performance of allocating zeros, ones" width="495" height="392" /><figcaption class="wp-caption-text">The performance of allocating zeros, ones (x100)</figcaption></figure>            </p>
<pre lang='matlab'>
function t=perfTest
    % Run tests multiple time, for multiple allocation sizes
    n=100; tidx = 1; iters = 100;
    while n < 1e8
        t(tidx,1) = n;
        clear y; tic, for idx=1:iters, clear y; y=ones(n,1);  end, t(tidx,2)=toc;  % clear; ones()
        clear y; tic, for idx=1:iters, clear y; y=zeros(n,1); end, t(tidx,3)=toc;  % clear; zeros()
        clear y; tic, for idx=1:iters,          y=ones(n,1);  end, t(tidx,4)=toc;  % only ones()
        clear y; tic, for idx=1:iters,          y=zeros(n,1); end, t(tidx,5)=toc;  % only zeros()
        n = n * 2;
        tidx = tidx + 1;
    end
    % Normalize result on a per-element basis
    t2 = bsxfun(@rdivide, t(:,2:end), t(:,1));
    % Display the results
    h  = loglog(t(:,1), t(:,2:end));  % overall durations
    %h = loglog(t(:,1), t2);  % normalized durations
    set(h, 'LineSmoothing','on');  % see https://undocumentedmatlab.com/articles/plot-linesmoothing-property/
    set(h(2), 'LineStyle','--', 'Marker','+', 'MarkerSize',5, 'Color',[0,.5,0]);
    set(h(3), 'LineStyle',':',  'Marker','o', 'MarkerSize',5);
    set(h(4), 'LineStyle','-.', 'Marker','*', 'MarkerSize',5);
    legend(h, 'clear; ones', 'clear; zeros', 'ones', 'zeros', 'Location','NorthWest');
    xlabel('# allocated elements');
    ylabel('duration [secs]');
    box off
end
</pre>
<p><span id="more-4086"></span><br />
The full results were (R2013a, Win 7 64b, 8GB):<br />
<figure style="width: 495px" class="wp-caption alignright"><img loading="lazy" decoding="async" alt="The same data normalized per-element" src="https://undocumentedmatlab.com/images/clear_performance2.gif" title="The same data normalized per-element" width="495" height="392" /><figcaption class="wp-caption-text">The same data normalized per-element (x100)</figcaption></figure>               </p>
<pre lang='matlab'>
       n  clear,ones  clear,zeros    only ones   only zeros
========  ==========  ===========    =========   ==========
     100    0.000442     0.000384     0.000129     0.000124
     200    0.000390     0.000378     0.000150     0.000121
     400    0.000404     0.000424     0.000161     0.000151
     800    0.000422     0.000438     0.000165     0.000176
    1600    0.000583     0.000516     0.000211     0.000206
    3200    0.000656     0.000606     0.000325     0.000296
    6400    0.000863     0.000724     0.000587     0.000396
   12800    0.001289     0.000976     0.000975     0.000659
   25600    0.002184     0.001574     0.001874     0.001360
   51200    0.004189     0.002776     0.003649     0.002320
  102400    0.010900     0.005870     0.010778     0.005487
  204800    0.051658     0.000966     0.049570     0.000466
  409600    0.095736     0.000901     0.095183     0.000463
  819200    0.213949     0.000984     0.219887     0.000817
 1638400    0.421103     0.001023     0.429692     0.000610
 3276800    0.886328     0.000936     0.877006     0.000609
 6553600    1.749774     0.000972     1.740359     0.000526
13107200    3.499982     0.001108     3.550072     0.000649
26214400    7.094449     0.001144     7.006229     0.000712
52428800   14.039551     0.001853    14.396687     0.000822
</pre>
<p>(Note: all numbers should be divided by the number of loop iterations <code>iters=100</code>)<br />
As can be seen from the data and resulting plots (log-log scale), the more elements we allocate, the longer this takes. It is not surprising that in all cases the allocation duration is roughly linear, since when twice as many elements need to be allocated, this roughly takes twice as long. It is also not surprising to see that the allocation has some small overhead, which is apparent when allocating a small number of elements.<br />
A potentially surprising result, namely that allocating 200-400 elements is in some cases a bit faster than allocating only 100 elements, can actually be attributed to measurement inaccuracies and JIT warm-up time.<br />
Another potentially surprising result, that <i><b>zeros</b></i> is consistently faster than <i><b>ones</b></i>, can perhaps be explained by <i><b>zeros</b></i> being able to use more efficient low-level functions (<code>bzero</code>) for clearing memory than <i><b>ones</b></i>, which needs to <code>memset</code> a value.<br />
A somewhat more surprising effect is that of the <i><b>clear</b></i> command: As can be seen in the code, calling <i><b>clear</b></i> within the timed loops has no functional use, because in all cases the variable <code>y</code> is overwritten with new values. However, we clearly see that the overhead of calling <i><b>clear</b></i> is an extra 3&#181;s or so per call. Calling <i><b>clear</b></i> is important in cases where we deal with very large memory constructs: clearing them from memory enables additional memory allocations (of the same or different variables) without requiring virtual memory paging, which would be disastrous for performance. But if a very large loop calls <i><b>clear</b></i> numerous times without serving such a purpose, then it is better to remove the call: although the overhead is small, it accumulates and can become a significant factor in very large loops.<br />
Another aspect that is surprising is the fact that <i><b>zeros</b></i> (with or without <i><b>clear</b></i>) is <i>much</i> faster when allocating 200K+ elements, compared to 100K elements. This is indicative of an internal switch to a more optimized allocation algorithm, whose cost is apparently constant rather than linear in the allocation size. At the very same point, there is a corresponding performance <i>degradation</i> in the allocation of <i><b>ones</b></i>. I suspect that 100K is the point at which <a target="_blank" rel="nofollow" href="http://www.mathworks.com/support/solutions/en/data/1-4PG4AN/">Matlab&#8217;s internal parallelization</a> (multi-threading) kicks in. This occurs at varying points for different functions, but it is normally some multiple of 20K elements (20K, 40K, 100K or 200K &#8211; a detailed list was again <a target="_blank" rel="nofollow" href="http://www.walkingrandomly.com/?p=1894">posted</a> by Mike Croucher). Apparently, it kicks in at 100K for <i><b>zeros</b></i>, but for some reason not for <i><b>ones</b></i>.<br />
The performance degradation at 100K elements has been around in Matlab for ages &#8211; I see it as far back as R12 (Matlab 6.0), for both <i><b>zeros</b></i> and <i><b>ones</b></i>. The reason for it is unknown to me, if anyone could illuminate me, I&#8217;d be happy to hear. The new thing is the implementation of a faster internal mechanism (presumably multi-threading) in R2008b (Matlab 7.7) for <i><b>zeros</b></i>, at the very same point (100K elements), although for some unknown reason this was not done for <i><b>ones</b></i> as well.<br />
Another aspect that is strange here is that the speedup for <i><b>zeros</b></i> at 200K elements is ~12 &#8211; much higher than the expected optimal speedup of 4 on my quad-core system. The higher speedup may perhaps be explained by hyper-threading or <a target="_blank" rel="nofollow" href="http://en.wikipedia.org/wiki/SIMD">SIMD</a> at the CPU level.<br />
In any case, going back to the original reason I started this investigation, the reason for getting such wide disparity in speedups between using <i><b>zeros</b></i> and <i><b>ones</b></i> for 10K elements (as in Mike Croucher&#8217;s post), and for 3M elements (as in my pre-allocation performance article) now becomes clear: In the case of 10K elements, multi-threading is not active, and <i><b>zeros</b></i> is indeed only 20-30% faster than <i><b>ones</b></i>; In the case of 3M elements, the superior multi-threading of <i><b>zeros</b></i> over <i><b>ones</b></i> enables much larger speedups, increasing with allocated size.</p>
Some take-away lessons:</p>
<ul>
<li>Using <i><b>zeros</b></i> is always preferable to <i><b>ones</b></i>, especially for more than 100K elements on Matlab 7.7 (R2008b) and newer.</li>
<li><code>zeros(n,m)+scalar</code> is consistently faster than <code>ones(n,m)*scalar</code>, especially for more than 100K elements, and for the same reason.</li>
<li>In some cases, it may be worthwhile to call a built-in function with more elements than actually necessary, just to benefit from its internal multi-threading.</li>
<li>Never take performance assumptions for granted. Always test on your specific system using a representative data-set.</li>
</ul>
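<p>The first two take-aways can be checked with a quick timing sketch along these lines (a rough benchmark only &#8211; the absolute numbers and the crossover point will vary with platform and Matlab release):</p>
<pre lang='matlab'>
% Compare the two initialization idioms for an array well above the
% presumed 100K-element multi-threading threshold
n = 3000;  % 9M elements
tic, for k = 1:10, A = ones(n,n)*pi;  end; tOnes  = toc;
tic, for k = 1:10, B = zeros(n,n)+pi; end; tZeros = toc;
fprintf('ones*scalar: %.3f secs;  zeros+scalar: %.3f secs\n', tOnes, tZeros);
</pre>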
<p>p.s. &#8211; readers who are interested in the historical evolution of the <i><b>zeros</b></i> function are referred to <a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2013/08/09/zero-evolution/">Loren Shure&#8217;s latest post</a>, only a few days ago (a fortunate coincidence indeed). Unfortunately, Loren&#8217;s post does not illuminate the mysteries above.</p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/allocation-performance-take-2">Allocation performance take 2</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/performance-scatter-vs-line" rel="bookmark" title="Performance: scatter vs. line">Performance: scatter vs. line </a> <small>In many circumstances, the line function can generate visually-identical plots as the scatter function, much faster...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/array-resizing-performance" rel="bookmark" title="Array resizing performance">Array resizing performance </a> <small>Several alternatives are explored for dynamic array growth performance in Matlab loops. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/allocation-performance-take-2/feed</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>File deletion memory leaks, performance</title>
		<link>https://undocumentedmatlab.com/articles/file-deletion-memory-leaks-performance?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=file-deletion-memory-leaks-performance</link>
					<comments>https://undocumentedmatlab.com/articles/file-deletion-memory-leaks-performance#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 05 Sep 2012 18:00:32 +0000</pubDate>
				<category><![CDATA[Java]]></category>
		<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[Performance]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=3133</guid>

					<description><![CDATA[<p>Matlab's delete function leaks memory and is also slower than the equivalent Java function. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/file-deletion-memory-leaks-performance">File deletion memory leaks, performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/profiling-matlab-memory-usage" rel="bookmark" title="Profiling Matlab memory usage">Profiling Matlab memory usage </a> <small>mtic and mtoc were a couple of undocumented features that enabled users of past Matlab releases to easily profile memory usage. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations" rel="bookmark" title="Internal Matlab memory optimizations">Internal Matlab memory optimizations </a> <small>Copy-on-write and in-place data manipulations are very useful Matlab performance improvement techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/mlintfailurefiles" rel="bookmark" title="MLintFailureFiles or: Why can&#039;t I save my m-file?!">MLintFailureFiles or: Why can&#039;t I save my m-file?! </a> <small>Sometimes Matlab gets into a state where it cannot use a valid m-file. This article explains what can be done. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>Last week I wrote about Matlab&#8217;s built-in <i><b>pause</b></i> function, that not only leaks memory but also appears to be less accurate than the equivalent Java function. Today I write about a very similar case. Apparently, using Matlab&#8217;s <i><b>delete</b></i> function not only leaks memory but is also slower than the equivalent Java function.</p>
<h3 id="leak">Memory leak</h3>
<p>The memory leak in <i><b>delete</b></i> was (to the best of my knowledge) originally <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/305515#885698">reported</a> in the CSSM newsgroup and <a target="_blank" href="/articles/matlab-java-memory-leaks-performance/#comment-104833">on this blog</a> a few weeks ago. The reporter mentioned that after deleting 760K files using <i><b>delete</b></i>, he got a Java Heap Space out-of-memory error. The reported solution was to use the Java equivalent, <code>java.io.File(filename).delete()</code>, which does not leak anything.<br />
I was able to recreate the report on my WinXP R2012a system, and discovered what appears to be a memory leak of ~150 bytes per file. That may seem a very small number, but multiply it by 760K files (=111MB) and you can understand the problem. Of course, you can always increase the size of the Java heap used by Matlab (<a target="_blank" rel="nofollow" href="http://www.mathworks.co.uk/support/solutions/en/data/1-18I2C/">here&#8217;s how</a>), but this should only be used as a last resort, and certainly not when the solution is so simple.<br />
<span id="more-3133"></span><br />
For those interested, here&#8217;s the short test harness that I&#8217;ve used to test the memory leak:</p>
<pre lang='matlab'>
function perfTest()
    rt = java.lang.Runtime.getRuntime;
    rt.gc();
    java.lang.Thread.sleep(1000);  % wait 1 sec to give the GC time to finish
    orig = rt.freeMemory;  % in bytes
    testSize = 50000;
    for idx = 1 : testSize
        % Create a temp file
        tn = [tempname '.tmp'];
        fid = fopen(tn,'wt');
        fclose(fid);
        % Delete the temp file
        delete(tn);
        %java.io.File(tn).delete();
    end
    rt.gc();
    java.lang.Thread.sleep(1000);  % wait 1 sec to give the GC time to finish
    free = rt.freeMemory;
    totalLeak = orig - free;
    leakPerCall = totalLeak / testSize
end
</pre>
<p>I placed it in a function to remove command-prompt-generated fluctuations, but it must still be run several times to smooth the data. The main reason for the changes across runs is the fact that the Java heap is constantly <a target="_blank" rel="nofollow" href="http://www.javaperformancetuning.com/tools/gcviewer/index.shtml">growing and shrinking in a seesaw manner</a>, and explicitly calling the garbage collector as I have done does not guarantee that it actually gets performed immediately or fully. By running a large-enough loop, and rerunning the test several times, the results become consistent due to the <a target="_blank" rel="nofollow" href="http://en.wikipedia.org/wiki/Law_of_large_numbers">law of large numbers</a>.<br />
Running the test above with the <i><b>delete</b></i> line commented and the <code>java.io.File</code> line uncommented, shows no discernible memory leak.<br />
To monitor Matlab&#8217;s Java heap space size in runtime, see <a target="_blank" href="/articles/profiling-matlab-memory-usage/">my article</a> from several months ago, or use Elmar Tarajan&#8217;s <a target="_blank" rel="nofollow" href="http://www.mathworks.com/matlabcentral/fileexchange/8169-matlab-memory-monitor-v2-4">memory-monitor utility</a> from the File Exchange.<br />
Note: there are numerous online resources about Java&#8217;s garbage collector. Here&#8217;s one <a target="_blank" rel="nofollow" href="http://middlewaremagic.com/weblogic/?p=6388">interesting article</a> that I have recently come across.</p>
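<p>For completeness, here is a standalone Java-side sketch of the leak-free deletion pattern (plain JDK API only; the file names are hypothetical). Note one behavioral difference worth remembering: unlike Matlab's <i><b>delete</b></i>, which warns on failure, <code>java.io.File.delete()</code> silently returns a boolean, so the result should always be checked:</p>

```java
import java.io.File;
import java.io.IOException;

public class DeleteDemo {
    public static void main(String[] args) throws IOException {
        // Create a temp file, analogous to Matlab's tempname + fopen/fclose
        File f = File.createTempFile("perfTest", ".tmp");

        // java.io.File.delete() returns false on failure instead of warning,
        // so the result should always be checked explicitly
        boolean ok = f.delete();
        if (!ok) {
            System.err.println("could not delete " + f.getAbsolutePath());
        }
        System.out.println("deleted=" + ok + ", exists=" + f.exists());
    }
}
```

<p>The same check applies when calling it from Matlab: <code>ok = java.io.File(tn).delete();</code> returns a logical-convertible value that can be tested.</p>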
<h3 id="performance">Performance</h3>
<p>When running the test function using <code>java.io.File</code>, we notice a significant speedup compared to running using <i><b>delete</b></i>. The reason is that (at least on my system, YMMV) <i><b>delete</b></i> takes 1.5-2 milliseconds to run while <code>java.io.File</code> only takes 0.4-0.5 ms. Again, this doesn&#8217;t seem like much, but multiply by thousands of files and it starts to be appreciable. For our 50K test harness, the difference translates into ~50 seconds, or 40% of the overall time.<br />
Since we&#8217;re dealing with file I/O, it is important to run the testing multiple times and within a function (not the Matlab Command Prompt), to get rid of spurious measurement artifacts.<br />
Have you encountered any other Matlab function, where the equivalent in Java is better? If so, please <a href="/articles/file-deletion-memory-leaks-performance/#respond">add a comment</a> below.</p>
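<p>To time the Java side in isolation, a minimal <code>System.nanoTime</code> harness along these lines can be used (this is a sketch, with the loop count reduced from the article's 50K for a quick run; it times create+delete together, and absolute numbers will of course vary by system):</p>

```java
import java.io.File;
import java.io.IOException;

public class DeleteTiming {
    public static void main(String[] args) throws IOException {
        final int N = 1000;  // reduced from the article's 50K for a quick run
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) {
            // Create and immediately delete a temp file, as in the
            // Matlab test harness above
            File f = File.createTempFile("perfTest", ".tmp");
            f.delete();  // the leak-free JDK deletion discussed above
        }
        double avgMs = (System.nanoTime() - start) / 1e6 / N;
        System.out.println("average per create+delete: " + avgMs + " ms");
    }
}
```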
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/file-deletion-memory-leaks-performance">File deletion memory leaks, performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/profiling-matlab-memory-usage" rel="bookmark" title="Profiling Matlab memory usage">Profiling Matlab memory usage </a> <small>mtic and mtoc were a couple of undocumented features that enabled users of past Matlab releases to easily profile memory usage. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations" rel="bookmark" title="Internal Matlab memory optimizations">Internal Matlab memory optimizations </a> <small>Copy-on-write and in-place data manipulations are very useful Matlab performance improvement techniques. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/mlintfailurefiles" rel="bookmark" title="MLintFailureFiles or: Why can&#039;t I save my m-file?!">MLintFailureFiles or: Why can&#039;t I save my m-file?! </a> <small>Sometimes Matlab gets into a state where it cannot use a valid m-file. This article explains what can be done. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/file-deletion-memory-leaks-performance/feed</wfw:commentRss>
			<slash:comments>8</slash:comments>
		
		
			</item>
		<item>
		<title>The Java import directive</title>
		<link>https://undocumentedmatlab.com/articles/the-java-import-directive?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-java-import-directive</link>
					<comments>https://undocumentedmatlab.com/articles/the-java-import-directive#respond</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 13 Jun 2012 18:00:37 +0000</pubDate>
				<category><![CDATA[Java]]></category>
		<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2959</guid>

					<description><![CDATA[<p>The import function can be used to clarify Java code used in Matlab. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/the-java-import-directive">The Java import directive</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/converting-java-vectors-to-matlab-arrays" rel="bookmark" title="Converting Java vectors to Matlab arrays">Converting Java vectors to Matlab arrays </a> <small>Converting Java vectors to Matlab arrays is pretty simple - this article explains how....</small></li>
<li><a href="https://undocumentedmatlab.com/articles/udd-and-java" rel="bookmark" title="UDD and Java">UDD and Java </a> <small>UDD provides built-in convenience methods to facilitate the integration of Matlab UDD objects with Java code - this article explains how...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/using-pure-java-gui-in-deployed-matlab-apps" rel="bookmark" title="Using pure Java GUI in deployed Matlab apps">Using pure Java GUI in deployed Matlab apps </a> <small>Using pure-Java GUI in deployed Matlab apps requires a special yet simple adaptation. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>A recent <a target="_blank" rel="nofollow" href="https://thilinasameera.wordpress.com/2012/05/30/undocumented-matlab-java-codes-in-matlab-m-files/">blog post</a> on a site I came across showed me that some users who are using Java in Matlab take the unnecessary precaution of always using the fully-qualified class-name (FQCN) of the Java classes, and are not familiar with the <i><b>import</b></i> directive in Matlab. Today I&#8217;ll show how to use <i><b>import</b></i> to simplify Java usage in Matlab.<br />
Basically, the <i><b>import</b></i> function enables Matlab users to declare that a specific class name belongs to a particular Java namespace, without having to specifically state the full namespace in each use. In this regard, Matlab&#8217;s <i><b>import</b></i> closely mimics <a target="_blank" rel="nofollow" href="http://docs.oracle.com/javase/tutorial/java/package/usepkgs.html">Java&#8217;s <code>import</code></a>, and not surprisingly also has similar syntax:</p>
<pre lang='matlab'>
% Alternative 1 - using explicit namespaces
jFrame = javax.swing.JFrame;
jDim = java.awt.Dimension(50,120);
jPanel.add(jButton, java.awt.GridBagConstraints(0, 0, 1, 1, 1.0, 1.0, ...
                    java.awt.GridBagConstraints.NORTHWEST, ...
                    java.awt.GridBagConstraints.NONE, ...
                    java.awt.Insets(6, 12, 6, 6), 1, 1));
% Alternative 2 - using import
import javax.swing.*
import java.awt.*
jFrame = JFrame;
jDim = Dimension(50,120);
jPanel.add(jButton, GridBagConstraints(0, 0, 1, 1, 1.0, 1.0, ...
                    GridBagConstraints.NORTHWEST, ...
                    GridBagConstraints.NONE, ...
                    Insets(6, 12, 6, 6), 1, 1));
</pre>
<p>Note how much cleaner Alternative #2 looks compared to Alternative #1. However, as with Java&#8217;s <code>import</code>, there is a tradeoff here: by removing the namespaces from the code, it could become confusing as to which namespace a particular object belongs. For example, by specifying <code>java.awt.Insets</code>, we immediately know that it&#8217;s an AWT insets object, rather than, say, a book&#8217;s insets. There is no clear-cut answer to this dilemma, and in fact there are many Java developers who prefer one way or the other. As in Java, the choice is yours to make also in Matlab.<br />
Perhaps a good compromise, one which I often use, is to stay away from the <code>import something.*</code> format and directly specify the imported classes. In the example above, I would have written:</p>
<pre lang='matlab'>
% Alternative 3 - using explicit import
import javax.swing.JFrame;
import java.awt.Dimension;
import java.awt.GridBagConstraints;
import java.awt.Insets;
jFrame = JFrame;
jDim = Dimension(50,120);
jPanel.add(jButton, GridBagConstraints(0, 0, 1, 1, 1.0, 1.0, ...
                    GridBagConstraints.NORTHWEST, ...
                    GridBagConstraints.NONE, ...
                    Insets(6, 12, 6, 6), 1, 1));
</pre>
<p>This alternative has the benefit that it is immediately clear that Insets belongs to the AWT package, without having to explicitly use the <code>java.awt</code> prefix everywhere. Obviously, if the list of imported classes becomes too large, we could always revert to the <code>import java.awt.*</code> format.<br />
Interestingly, we can also use the functional form of <i><b>import</b></i>:</p>
<pre lang='matlab'>
% Alternative #1
import java.lang.String;
% Alternative #2
import('java.lang.String');
% Alternative #3
classname = 'java.lang.String';
import(classname);
</pre>
<p>The third alternative format, dynamic import, enables us to decide <b><u>at run-time</u></b>(!) whether to use a class C from package PA or PB. This is a cool feature, but it must be used with care, since it could lead to very difficult-to-diagnose errors &#8211; for example, if the code later invokes a method that exists only in PA.C but not in PB.C. The correct way to handle this would probably be to define a class hierarchy in which PA.C and PB.C both inherit from the same superclass. But in some cases this is simply not feasible (for example, when you have two JARs from different vendors that use the same class name), and dynamic importing can help.<br />
It is possible to specify multiple input parameters to <i><b>import</b></i> in the same directive. However, note that Matlab 7.5 R2007b and older releases crash (at least on WinXP) when one of the imported parameters is any MathWorks-derived (<code>com.mathworks...</code>) package/class. This bug was fixed in Matlab 7.6 R2008a, but to support earlier releases simply separate such imports into different lines:</p>
<pre lang='matlab'>
% This crashes Matlab 7.5 R2007b and earlier;  OK on Matlab 7.6 R2008a and later
import javax.swing.* com.mathworks.mwswing.*
% This is ok in all Matlab releases
import javax.swing.*
import com.mathworks.mwswing.*
</pre>
<p><i><b>import</b></i> does NOT load the Java class into memory &#8211; it just declares its namespace for the JVM. This mechanism is sometimes called <i>lazy loading</i> (compare to the <a target="_blank" href="/articles/internal-matlab-memory-optimizations/">lazy copying mechanism</a> that I described a couple of weeks ago). To force-load a class into memory, either use it directly (for example, by declaring an object of it, or by using one of its methods), or use a classloader to load it. The issue of JVM classloaders in Matlab is non-trivial (there are several non-identical alternatives), and will be covered in a future article.<br />
A few additional notes:</p>
<ul>
<li>Although not strictly mandatory, it is good practice to place all the <i><b>import</b></i> directives at the top of the function, for visibility and code maintainability reasons</li>
<li>There is no need to end the <i><b>import</b></i> declaration with a semicolon (;). It&#8217;s really a matter of style consistency. I usually omit it because I find that it is a bit intrusive when placed after a *</li>
<li><i><b>import</b></i> by itself, without any input arguments (class/package names) returns the current list of imported classes/packages</li>
<li>Imported classes and packages can be un-imported using the <i><b>clear import</b></i> directive from the Command Window</li>
<li>It has been <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/285933">reported</a> that in some cases using <i><b>import</b></i> in deployed (compiled) application fails &#8211; the solution is to use the FQCN in such cases</li>
</ul>
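<p>The lazy-loading point above can be demonstrated from the Java side as well: an <code>import</code> statement alone never loads a class, whereas <code>Class.forName</code> (a standard JDK call) force-loads a class by its fully-qualified name at run-time &#8211; it is also the closest Java analog of Matlab's dynamic <code>import(classname)</code> form. A minimal sketch:</p>

```java
public class LoadDemo {
    public static void main(String[] args) throws ClassNotFoundException {
        // An import statement alone never loads a class; instantiation,
        // a method call, or an explicit Class.forName does. Here the class
        // name is a run-time value, mirroring Matlab's import(classname)
        String className = "java.lang.StringBuilder";
        Class<?> cls = Class.forName(className);
        System.out.println(cls.getName());        // fully-qualified name
        System.out.println(cls.getSimpleName());  // short name, as after import
    }
}
```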
<p><i><b>Note: This topic is covered and extended in Chapter 1 of my <a target="_blank" href="/matlab-java-book/">Matlab-Java programming book</a></b></i></p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/the-java-import-directive">The Java import directive</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/converting-java-vectors-to-matlab-arrays" rel="bookmark" title="Converting Java vectors to Matlab arrays">Converting Java vectors to Matlab arrays </a> <small>Converting Java vectors to Matlab arrays is pretty simple - this article explains how....</small></li>
<li><a href="https://undocumentedmatlab.com/articles/udd-and-java" rel="bookmark" title="UDD and Java">UDD and Java </a> <small>UDD provides built-in convenience methods to facilitate the integration of Matlab UDD objects with Java code - this article explains how...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/using-pure-java-gui-in-deployed-matlab-apps" rel="bookmark" title="Using pure Java GUI in deployed Matlab apps">Using pure Java GUI in deployed Matlab apps </a> <small>Using pure-Java GUI in deployed Matlab apps requires a special yet simple adaptation. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/the-java-import-directive/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Internal Matlab memory optimizations</title>
		<link>https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=internal-matlab-memory-optimizations</link>
					<comments>https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 30 May 2012 12:09:16 +0000</pubDate>
				<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[JIT]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2952</guid>

					<description><![CDATA[<p>Copy-on-write and in-place data manipulations are very useful Matlab performance improvement techniques. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations">Internal Matlab memory optimizations</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/matlabs-internal-memory-representation" rel="bookmark" title="Matlab&#039;s internal memory representation">Matlab&#039;s internal memory representation </a> <small>Matlab's internal memory structure is explored and discussed. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/profiling-matlab-memory-usage" rel="bookmark" title="Profiling Matlab memory usage">Profiling Matlab memory usage </a> <small>mtic and mtoc were a couple of undocumented features that enabled users of past Matlab releases to easily profile memory usage. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/couple-of-bugs-and-workarounds" rel="bookmark" title="A couple of internal Matlab bugs and workarounds">A couple of internal Matlab bugs and workarounds </a> <small>A couple of undocumented Matlab bugs have simple workarounds. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>Yesterday I attended a seminar on developing trading strategies using Matlab. This is of interest to me because of my <a target="_blank" href="/ib-matlab/">IB-Matlab</a> product, and since many of my clients are traders in the financial sector. In the seminar, the issue of memory and performance naturally arose. It seemed to me that there was some confusion with regards to Matlab&#8217;s built-in memory optimizations. Since I discussed related topics in the past two weeks (<a target="_blank" href="/articles/preallocation-performance/">preallocation performance</a>, <a target="_blank" href="/articles/array-resizing-performance/">array resizing performance</a>), these internal optimizations seemed a natural topic for today&#8217;s article.<br />
The specific mechanisms I&#8217;ll describe today are <i><b>Copy on Write</b></i> (aka <i>COW</i> or <i>Lazy Copying</i>) and <i><b>in-place data manipulations</b></i>. Both mechanisms were already documented (for example, on <a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2006/05/10/memory-management-for-functions-and-variables/">Loren&#8217;s blog</a> or on <a target="_blank" href="/articles/matlab-mex-in-place-editing/#COW">this blog</a>), but apparently they are still not well known. Understanding them could help Matlab users modify their code to improve performance and reduce memory consumption. So although this article is not entirely &#8220;undocumented&#8221;, I&#8217;ll give myself some slack today.</p>
<h3 id="COW">Copy on Write (COW, Lazy Copy)</h3>
<p>Matlab implements an automatic <a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2006/05/10/memory-management-for-functions-and-variables/">copy-on-write</a> (sometimes called <i>copy-on-update</i> or <i>lazy copying</i>) mechanism, which transparently allocates a temporary copy of the data only when it sees that the input data is modified. This improves run-time performance by delaying actual memory block allocation until absolutely necessary. COW has two variants: during regular variable copy operations, and when passing data as input parameters into a function:</p>
<h4 id="COW-vars">1. Regular variable copies</h4>
<p>When a variable is copied, as long as the data is not modified, both variables actually use the same shared memory block. The data is only copied onto a newly-allocated memory block when one of the variables is modified. The modified variable is assigned the newly-allocated block of memory, which is initialized with the values in the shared memory block before being updated:</p>
<pre lang='matlab'>
data1 = magic(5000);  % 5Kx5K elements = 191 MB
data2 = data1;        % data1 & data2 share memory; no allocation done
data2(1,1) = 0;       % data2 allocated, copied and only then modified
</pre>
<p>If we profile our code using any of <a target="_blank" href="/articles/profiling-matlab-memory-usage/">Matlab&#8217;s memory-profiling options</a>, we will see that the copy operation <code>data2=data1</code> takes negligible time to run and allocates no memory. On the other hand, the simple update operation <code>data2(1,1)=0</code>, which we could otherwise have assumed to take minimal time and memory, actually takes a relatively long time and allocates 191 MB of memory.<br />
<center><figure style="width: 449px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" alt="Copy-on-write effect monitored using the Profiler's -memory option" src="https://undocumentedmatlab.com/images/Copy-on-Write1c.png" title="Copy-on-write effect monitored using the Profiler's -memory option" width="449" height="96"/><figcaption class="wp-caption-text">Copy-on-write effect monitored using the Profiler's -memory option</figcaption></figure><br />
<figure style="width: 439px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" alt="Copy-on-write effect monitored using Windows Process Explorer" src="https://undocumentedmatlab.com/images/Copy-on-Write1a.png" title="Copy-on-write effect monitored using Windows Process Explorer" width="439" height="515"/><figcaption class="wp-caption-text">Copy-on-write effect monitored using Windows Process Explorer</figcaption></figure></center><br />
We first see a memory spike (used during the computation of the magic square data), closely followed by a leveling off at 190.7MB above the baseline (this is due to allocation of data1). Copying <code>data2=data1</code> has no discernible effect on either CPU or memory. Only when we set <code>data2(1,1)=0</code> does CPU activity return, as the extra 190MB for data2 is allocated. When we exit the test function, data1 and data2 are both deallocated, returning the Matlab process memory to its baseline level.<br />
There are several lessons that we can draw from this simple example:<br />
Firstly, creating copies of data does not necessarily or immediately impact memory and performance. Rather, it is the update of these copies which may be problematic. If we can modify our code to use more read-only data and less updated data copies, then we would improve performance. The Profiler report will show us exactly where in our code we have memory and CPU hotspots – these are the places we should consider optimizing.<br />
Secondly, when we see such odd behavior in our Profiler reports (i.e., memory and/or CPU spikes that occur on seemingly innocent code lines), we should be aware of the copy-on-write mechanism, which could be the cause for the behavior.</p>
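<p>Matlab's COW mechanism is internal, but the technique itself is easy to sketch. The following Java class is purely illustrative (it is <i>not</i> how Matlab implements COW): it lets "copies" share one buffer and performs the real allocation only on the first write, which is exactly the behavior the Profiler screenshots above show on <code>data2(1,1)=0</code>:</p>

```java
import java.util.Arrays;

// Minimal copy-on-write wrapper: "copies" share the buffer until one writes
class CowArray {
    private double[] data;
    private boolean shared;  // true while another copy may see this buffer

    CowArray(double[] initial) { data = initial; shared = false; }

    // Cheap copy: share the underlying buffer, like data2 = data1 in Matlab
    CowArray copy() {
        shared = true;
        CowArray c = new CowArray(data);
        c.shared = true;
        return c;
    }

    double get(int i) { return data[i]; }

    // First write on a shared buffer triggers the actual allocation + copy
    void set(int i, double v) {
        if (shared) {
            data = Arrays.copyOf(data, data.length);  // the "real" copy
            shared = false;
        }
        data[i] = v;
    }

    double[] raw() { return data; }
}
```

<p>With this sketch, <code>copy()</code> is O(1), and the expensive allocation is deferred to the first <code>set()</code> &#8211; the analog of the 190MB spike appearing on the update line rather than the copy line.</p>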
<h4 id="COW-functions">2. Function input parameters</h4>
<p>The copy-on-write mechanism behaves similarly for input parameters in functions: whenever a function is invoked (called) with input data, the memory already allocated for this data continues to be used, up until the point where one of its copies is modified. At that point the copies diverge: a new memory block is allocated, populated with data from the shared memory block, and assigned to the modified variable. Only then is the update done on the new memory block.</p>
<pre lang='matlab'>
data1 = magic(5000);      % 5Kx5K elements = 191 MB
data2 = perfTest(data1);
function outData = perfTest(inData)
   outData = inData;   % inData & outData share memory; no allocation
   outData(1,1) = 0;   % outData allocated, copied and then modified
end
</pre>
<p><center><figure style="width: 534px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" alt="Copy-on-write effect monitored using the Profiler's -memory option" src="https://undocumentedmatlab.com/images/Copy-on-Write2a.png" title="Copy-on-write effect monitored using the Profiler's -memory option" width="534" height="82"/><figcaption class="wp-caption-text">Copy-on-write effect monitored using the Profiler's -memory option</figcaption></figure></center><br />
One lesson that can be drawn from this is that whenever possible we should attempt to use functions that do not modify their input data. This is particularly true if the modified input data is very large. Read-only functions will be faster than functions that do even the simplest of data updates.<br />
Another lesson is that, perhaps counter-intuitively, passing read-only data to functions as input parameters makes no difference from a performance standpoint. We might think that passing large data objects around as function parameters involves multiple memory allocations and deallocations of the data. In fact, it is only the data&#8217;s reference (or more precisely, its <a target="_blank" href="/articles/matlabs-internal-memory-representation/">mxArray structure</a>) which is passed around and placed on the function&#8217;s call stack. Since this reference/structure is quite small, there is no real performance penalty, and passing the data as a parameter benefits code clarity and maintainability.<br />
The only case where we may wish to use other means of passing data to functions is when a large data object needs to be updated. In such cases, the updated copy will be allocated to a new memory block with an associated performance cost.</p>
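<p>In Java, the fact that only a reference is passed (never the element data) is directly observable: the parameter inside the method is identical (<code>==</code>) to the caller's array, no matter how large the array is. A small sketch (the class and method names are hypothetical):</p>

```java
public class RefDemo {
    // Receives only a reference; no element data is copied on the call
    static boolean isSameBuffer(double[] param, double[] original) {
        return param == original;  // reference identity, not element equality
    }

    public static void main(String[] args) {
        double[] big = new double[5_000_000];  // ~38MB, never copied below
        System.out.println(isSameBuffer(big, big));  // prints "true"
    }
}
```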
<h3 id="inplace">In-place data manipulation</h3>
<p>Matlab&#8217;s interpreter, at least in recent releases, has a very sophisticated algorithm for using in-place data manipulation (<a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2007/03/22/in-place-operations-on-data">report</a>). Modifying data in-place means that the original data block is modified, rather than creating a new block with the modified data, thus saving any memory allocations and deallocations.<br />
For example, let us manipulate a simple 4Kx4K (122MB) numeric array:</p>
<pre lang='matlab'>
>> m = magic(4000);   % 4Kx4K = 122MB
>> memory
Maximum possible array:            1022 MB (1.072e+09 bytes)
Memory available for all arrays:   1218 MB (1.278e+09 bytes)
Memory used by MATLAB:              709 MB (7.434e+08 bytes)
Physical Memory (RAM):             3002 MB (3.148e+09 bytes)
% In-place array data manipulation: no memory allocated
>> m = m * 0.5;
>> memory
Maximum possible array:            1022 MB (1.072e+09 bytes)
Memory available for all arrays:   1214 MB (1.273e+09 bytes)
Memory used by MATLAB:              709 MB (7.434e+08 bytes)
Physical Memory (RAM):             3002 MB (3.148e+09 bytes)
% New variable allocated, taking an extra 122MB of memory
>> m2 = m * 0.5;
>> memory
Maximum possible array:            1022 MB (1.072e+09 bytes)
Memory available for all arrays:   1092 MB (1.145e+09 bytes)
Memory used by MATLAB:              831 MB (8.714e+08 bytes)
Physical Memory (RAM):             3002 MB (3.148e+09 bytes)
</pre>
<p>The extra memory allocation of the not-in-place manipulation naturally translates into a performance loss:</p>
<pre lang='matlab'>
% In-place data manipulation, no memory allocation
>> tic, m = m * 0.5; toc
Elapsed time is 0.056464 seconds.
% Regular data manipulation (122MB allocation) – 50% slower
>> clear m2; tic, m2 = m * 0.5; toc;
Elapsed time is 0.084770 seconds.
</pre>
<p>The difference may not seem large, but placed in a loop it could become significant indeed, and might be much more important if virtual memory swapping comes into play, or when Matlab&#8217;s memory space is exhausted (out-of-memory error).<br />
Similarly, when returning data from a function, we should try to update the original data variable whenever possible, <a target="_blank" rel="nofollow" href="http://www.mathworks.com/company/newsletters/news_notes/june07/patterns.html">avoiding the need for allocation</a> of a new variable:</p>
<pre lang='matlab'>
% In-place data manipulation, no memory allocation
>> d=0:1e-7:1; tic, d = sin(d); toc
Elapsed time is 0.083397 seconds.
% Regular data manipulation (76MB allocation) – 50% slower
>> clear d2, d=0:1e-7:1; tic, d2 = sin(d); toc
Elapsed time is 0.121415 seconds.
</pre>
<p>Within the function itself we should ensure that we return the modified input variable, and not assign the output to a new variable, so that in-place optimization can also be applied within the function. The in-place optimization mechanism is smart enough to override Matlab&#8217;s default copy-on-write mechanism, which automatically allocates a new copy of the data when it sees that the input data is modified:</p>
<pre lang='matlab'>
% Suggested practice: use in-place optimization within functions
function x = function1(x)
   x = someOperationOn(x);   % temporary variable x is NOT allocated
end
% Standard practice: prevents future use of in-place optimizations
function y = function2(x)
   y = someOperationOn(x);   % new temporary variable y is allocated
end
</pre>
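<p>The same contrast can be sketched in plain Java, where in-place versus fresh-allocation is explicit in the code rather than decided by an optimizer (the helper names below are hypothetical):</p>

```java
public class InPlaceDemo {
    // In-place: scales the caller's own buffer, no allocation at all
    static void halveInPlace(double[] x) {
        for (int i = 0; i < x.length; i++) x[i] *= 0.5;
    }

    // Not in-place: allocates and returns a fresh buffer of the same size
    static double[] halveCopy(double[] x) {
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) y[i] = x[i] * 0.5;
        return y;
    }

    public static void main(String[] args) {
        double[] m = {2, 4, 6};
        halveInPlace(m);             // m itself is now {1, 2, 3}
        double[] m2 = halveCopy(m);  // new array {0.5, 1, 1.5}; m unchanged
        System.out.println(m[0] + " " + m2[0]);
    }
}
```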
<p>In order to benefit from in-place optimizations of function results, we must both use the same variable in the caller workspace (x = function1(x)) and ensure that the called function is optimizable (e.g., function x = function1(x)) – if either of these two requirements is not met, in-place function-call optimization is not performed.<br />
Also, for the in-place optimization to be active, we need to call the in-place function from within another function, not from a script or the Matlab Command Window.<br />
A related performance trick is to use masks on the original data rather than temporary data copies. For example, suppose we wish to get the result of a function that acts on only a portion of some large data. If we create a temporary variable that holds the data subset and then process it, it would create an unnecessary copy of the original data:</p>
<pre lang='matlab'>
% Original data
data = 0 : 1e-7 : 1;     % 10^7 elements, 76MB allocated
% Unnecessary copy of data into data2 (extra ~69MB allocated)
data2 = data(data>0.1);  % ~9x10^6 elements, ~69MB allocated
results = sin(data2);    % another ~9x10^6 elements, ~69MB allocated
% Use of data masks obviates the need for temporary variable data2:
results = sin(data(data>0.1));  % no need for the data2 allocation
</pre>
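<p>The mask idea carries over to Java streams as well, where the <code>filter</code> stage plays the role of the logical mask and no intermediate array is materialized between the stages. A hedged sketch, using a deterministic integer grid rather than the floating-point range above:</p>

```java
import java.util.stream.IntStream;

public class MaskDemo {
    public static void main(String[] args) {
        // sin() of only the elements above the threshold; the filter acts
        // as the mask, so no intermediate "data2" array is materialized
        double[] results = IntStream.rangeClosed(0, 1000)
                .mapToDouble(i -> i / 1000.0)   // 0.000 .. 1.000
                .filter(d -> d > 0.1)           // the mask, applied lazily
                .map(Math::sin)
                .toArray();
        System.out.println(results.length);  // elements strictly above 0.1
    }
}
```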
<p>A note of caution: we should not invest undue effort in in-place data manipulation if the overall benefit would be negligible. It only really helps when there is a genuine memory limitation and the data matrix is very large.<br />
Matlab in-place optimization is a topic of continuous development. Code which is not in-place optimized today (for example, in-place manipulation on class object properties) may possibly be optimized in next year&#8217;s release. For this reason, it is important to write the code in a way that would facilitate the future optimization (for example, obj.x=2*obj.x rather than y=2*obj.x).<br />
Some in-place optimizations were added to the JIT Accelerator as early as Matlab 6.5 R13, but Matlab 7.3 R2006b saw a major boost. As Matlab&#8217;s JIT Accelerator improves from release to release, we should expect in-place data manipulations to be automatically applied in an increasingly larger number of code cases.<br />
In some older Matlab releases, and in some complex data manipulations where the JIT Accelerator cannot implement in-place processing, temporary storage is allocated and then assigned to the original variable when the computation is done. To implement in-place data manipulation in such cases we could develop an external function (e.g., <a target="_blank" href="/articles/matlab-mex-in-place-editing/">using Mex</a>) that works directly on the original data block. Note that the officially supported mex update method is to always create deep copies of the data using <i>mxDuplicateArray()</i> and then modify the new array rather than the original; modifying the original data directly is both <a target="_blank" rel="nofollow" href="http://stackoverflow.com/questions/1708433/matlab-avoiding-memory-allocation-in-mex">discouraged</a> and <a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2007/03/22/in-place-operations-on-data/#comment-16202">not officially supported</a>. Doing it incorrectly can easily crash Matlab. If you do directly overwrite the original input data, at least ensure that you <a target="_blank" rel="nofollow" href="http://www.mk.tu-berlin.de/Members/Benjamin/mex_sharedArrays">unshare any variables</a> that share the same data memory block, thus mimicking the copy-on-write mechanism.<br />
Using Matlab&#8217;s internal in-place data manipulation is very useful, especially since it is done automatically without need for any major code changes on our part. But sometimes we need certainty of actually processing the original data variable without having to guess or check whether the automated in-place mechanism will be activated or not. This can be achieved using several alternatives:</p>
<ul>
<li>Using global or persistent variable</li>
<li>Using a parent-scope variable within a nested function</li>
<li>Modifying a reference (handle class) object&#8217;s internal properties</li>
</ul>
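<p>For example, the nested-function alternative can be sketched as follows (the function and variable names are illustrative):</p>
<pre lang='matlab'>
function processInPlace()
    data = rand(1e7, 1);       % large array in the parent function's workspace
    scaleData(5);              % no data is passed in or returned
    % ... use the modified data here ...
    function scaleData(factor)
        data = data * factor;  % parent-scope variable is modified directly
    end
end
</pre>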
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations">Internal Matlab memory optimizations</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/matlabs-internal-memory-representation" rel="bookmark" title="Matlab&#039;s internal memory representation">Matlab&#039;s internal memory representation </a> <small>Matlab's internal memory structure is explored and discussed. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/profiling-matlab-memory-usage" rel="bookmark" title="Profiling Matlab memory usage">Profiling Matlab memory usage </a> <small>mtic and mtoc were a couple of undocumented features that enabled users of past Matlab releases to easily profile memory usage. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/matlab-java-memory-leaks-performance" rel="bookmark" title="Matlab-Java memory leaks, performance">Matlab-Java memory leaks, performance </a> <small>Internal fields of Java objects may leak memory - this article explains how to avoid this without sacrificing performance. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/couple-of-bugs-and-workarounds" rel="bookmark" title="A couple of internal Matlab bugs and workarounds">A couple of internal Matlab bugs and workarounds </a> <small>A couple of undocumented Matlab bugs have simple workarounds. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/internal-matlab-memory-optimizations/feed</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>Array resizing performance</title>
		<link>https://undocumentedmatlab.com/articles/array-resizing-performance?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=array-resizing-performance</link>
					<comments>https://undocumentedmatlab.com/articles/array-resizing-performance#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 23 May 2012 20:43:03 +0000</pubDate>
				<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[Undocumented feature]]></category>
		<category><![CDATA[JIT]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2949</guid>

					<description><![CDATA[<p>Several alternatives are explored for dynamic array growth performance in Matlab loops. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/array-resizing-performance">Array resizing performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/performance-accessing-handle-properties" rel="bookmark" title="Performance: accessing handle properties">Performance: accessing handle properties </a> <small>Handle object property access (get/set) performance can be significantly improved using dot-notation. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/convolution-performance" rel="bookmark" title="Convolution performance">Convolution performance </a> <small>Matlab's internal implementation of convolution can often be sped up significantly using the Convolution Theorem. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>As I <a target="_blank" href="/articles/preallocation-performance/">explained</a> last week, the best way to avoid the performance penalties associated with dynamic array resizing (typically, growth) in Matlab is to preallocate the array to its expected final size. I showed several alternatives for such preallocation, but in all cases the performance is much better than with naïve dynamic resizing.<br />
Unfortunately, such simple preallocation is not always possible. Fortunately, all is not lost: there are still a few things we can do to mitigate the performance pain. As with last week&#8217;s article, there is more here than meets the eye.<br />
The interesting <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/102704">newsgroup thread from 2005</a> about this issue that I mentioned last week contains two main solutions to this problem. The effects of these solutions are negligible for small data sizes and/or few loop iterations (i.e., few memory reallocations), but can be dramatic for large data arrays and/or many memory reallocations – potentially the difference between a usable and an unusable (&#8220;hung&#8221;) program:</p>
<h3 id="Chunks">Factor growth: dynamic allocation by chunks</h3>
<p>The idea is to dynamically grow the array by a certain percentage factor each time. When the array first needs to grow by a single element, we would in fact grow it by a larger chunk (say 40% of the current array size, for example by using the <i><b>repmat</b></i> function, or by concatenating a specified number of <i><b>zeros</b></i>, or by setting some way-forward index to 0), so that it would take the program some time before it needs to reallocate memory.<br />
This method has a theoretical cost of N&middot;log(N), which is nearly linear in N for most practical purposes. It is similar to preallocation in the sense that we are preparing a chunk of memory for future array use in advance. You might say that this is on-the-fly preallocation.</p>
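<p>Factor growth can be sketched as follows (the initial chunk size and the 40% growth factor are illustrative, and the loop body stands in for the real computation):</p>
<pre lang='matlab'>
data = zeros(1, 1000);   % initial chunk
numUsed = 0;
for idx = 1 : 1e6
    if numUsed == numel(data)
        data(ceil(1.4 * numel(data))) = 0;   % one reallocation grows capacity by ~40%
    end
    numUsed = numUsed + 1;
    data(numUsed) = idx;   % stand-in for the real computed value
end
data = data(1:numUsed);   % trim the unused headroom when done
</pre>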
<h3 id="Cells">Using cell arrays</h3>
<p>The idea here is to use cell arrays to store and grow the data, then use cell2mat to convert the resulting cell array to a regular numeric array. Cell elements are <a target="_blank" rel="nofollow" href="http://www.mathworks.com/help/techdoc/matlab_prog/brh72ex-25.html#brh72ex-38">implemented as references</a> to distinct memory blocks, so concatenating an object to a cell array merely concatenates its reference; when a cell array is reallocated, only its internal references (not the referenced data) are moved. Note that this relies on the internal implementation of cell arrays in Matlab, and may possibly change in some future release.<br />
Like factor growth, using cell arrays is faster than quadratic behavior (although <a target="_blank" rel="nofollow" href="http://abandonmatlab.wordpress.com/2009/07/28/no-lists/#comment-113">not quite as fast</a> as we would have liked, of course). Different situations may favor using either the cell arrays method or the factor growth mechanism.</p>
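<p>The cell-array approach can be sketched as follows (the chunk contents are illustrative):</p>
<pre lang='matlab'>
chunks = {};
for idx = 1 : 1000
    chunks{end+1} = rand(1, 100);   % appends a small reference, not a copy of the data
end
data = cell2mat(chunks);            % single conversion to a regular numeric array
</pre>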
<h3 id="Growdata">The <i><b>growdata</b></i> utility</h3>
<p>John D&#8217;Errico has posted a well-researched utility called <a target="_blank" rel="nofollow" href="http://www.mathworks.com/matlabcentral/fileexchange/8334-incremental-growth-of-an-array-revisited"><i><b>growdata</b></i></a> that optimizes dynamic array growth for maximal performance. It is based in part on ideas mentioned in the aforementioned 2005 newsgroup thread, where <i><b>growdata</b></i> is also discussed in detail.<br />
As an interesting side-note, John D&#8217;Errico also recently posted an extremely fast <a target="_blank" rel="nofollow" href="http://www.mathworks.com/matlabcentral/fileexchange/34766-the-fibonacci-sequence">implementation</a> of the Fibonacci function. The source code may seem complex, but the resulting performance gain is well worth the extra complexity. I believe that readers who will read this utility&#8217;s source code and understand its underlying logic will gain insight into several performance tricks that could be very useful in general.</p>
<h3 id="JIT">Effects of incremental JIT improvements</h3>
<p>The introduction of JIT Acceleration in Matlab 6.5 (R13) caused a dramatic boost in performance (there is an internal distinction between the Accelerator and JIT: JIT is <a target="_blank" rel="nofollow" href="http://www.mathworks.com/matlabcentral/fileexchange/18510-matlab-performance-measurement/content/Documents/MATLABperformance/configinfo.m">apparently</a> only part of the Matlab Accelerator, but this distinction appears to have no practical impact on the discussion here).<br />
Over the years, MathWorks has consistently improved the efficiency of its computational engine, and the JIT Accelerator in particular, giving a small improvement with each new Matlab release. In Matlab 7.11 (R2010b), the short Fibonacci snippet used in last week&#8217;s article executed about 30% faster than in Matlab 7.1 (R14 SP3). The behavior was still quadratic in nature, so in those releases using any of the above-mentioned solutions could prove very beneficial.<br />
In Matlab 7.12 (R2011a), a major improvement was made in the Matlab engine (JIT?). Execution run-times improved significantly and, in addition, became linear in nature. This means that multiplying the array size by N only degrades performance by N, not N<sup>2</sup> – an impressive achievement:</p>
<pre lang='matlab'>
% This is ran on Matlab 7.14 (R2012a):
clear f, tic, f=[0,1]; for idx=3:10000, f(idx)=f(idx-1)+f(idx-2); end, toc
   => Elapsed time is 0.004924 seconds.  % baseline loop size, & exec time
clear f, tic, f=[0,1]; for idx=3:20000, f(idx)=f(idx-1)+f(idx-2); end, toc
   => Elapsed time is 0.009971 seconds.  % x2 loop size, x2 execution time
clear f, tic, f=[0,1]; for idx=3:40000, f(idx)=f(idx-1)+f(idx-2); end, toc
   => Elapsed time is 0.019282 seconds.  % x4 loop size, x4 execution time
</pre>
<p>In fact, it turns out that using either the cell arrays method or the factor growth mechanism is much slower in R2011a than using naïve dynamic growth!<br />
This teaches us an important lesson: it is not wise to program against a specific implementation of the engine, at least not in the long run. While this may yield performance benefits on some Matlab releases, the situation may well be reversed on some future release. This might force us to retest, reprofile and potentially rewrite significant portions of code for each new release – obviously not a maintainable approach. In practice, most code that is written on an old Matlab release is likely to be carried over with minimal changes to newer releases. If this code contains release-specific tuning, we could be shooting ourselves in the foot in the long run.<br />
MathWorks strongly <a target="_blank" rel="nofollow" href="http://blogs.mathworks.com/loren/2008/06/25/speeding-up-matlab-applications/#comment-29607">advises</a> (and <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/284759#784131">again</a>, and once <a target="_blank" href="/articles/undocumented-profiler-options/#comment-64">again</a>), and I concur, to program in a natural manner, rather than in a way that is tailored to a particular Matlab release (unless of course we can be certain that we shall only be using that release and none other). This will improve development time, maintainability and in the long run also performance.<br />
<i>(and of course you could say that a corollary lesson is to hurry up and get the latest Matlab release&#8230;)</i></p>
<h3 id="Variants">Variants for array growth</h3>
<p>If we are left with using a naïve dynamic resize, there are several equivalent alternatives for doing this, having significantly different performances:</p>
<pre lang='matlab'>
% This is ran on Matlab 7.12 (R2011a):
% Variant #1: direct assignment into a specific out-of-bounds index
data=[]; tic, for idx=1:100000; data(idx)=1; end, toc
   => Elapsed time is 0.075440 seconds.
% Variant #2: direct assignment into an index just outside the bounds
data=[]; tic, for idx=1:100000; data(end+1)=1; end, toc
   => Elapsed time is 0.241466 seconds.    % 3 times slower
% Variant #3: concatenating a new value to the array
data=[]; tic, for idx=1:100000; data=[data,1]; end, toc
   => Elapsed time is 22.897688 seconds.   % 300 times slower!!!
</pre>
<p>As can be seen, it is much faster to directly index an out-of-bounds element as a means to force Matlab to enlarge a data array, rather than using the end+1 notation, which needs to recalculate the value of end each time.<br />
In any case, try to avoid using the concatenation variant, which is significantly slower than either of the other two alternatives (300 times slower in the above example!). In this respect, there is no discernible difference between using the [] operator or the <i><b>cat</b>()</i> function for the concatenation.<br />
Apparently, the JIT performance boost gained in Matlab R2011a does not work for concatenation. Future JIT improvements may possibly also improve the performance of concatenations, but in the meantime it is better to use direct indexing instead.<br />
The effect of the JIT performance boost is easily seen when we run the same variants on pre-R2011a Matlab releases. The corresponding values are 30.9, 34.8 and 34.3 seconds. Using direct indexing is still the fastest approach, but concatenation is now only 10% slower, not 300 times slower.<br />
When we need to append a non-scalar element (for example, a 2D matrix) to the end of an array, we might think that we have no choice but to use the slow concatenation method. This assumption is incorrect: we can still use the much faster direct-indexing method, as shown below (notice the non-linear growth in execution time for the concatenation variant):</p>
<pre lang='matlab'>
% This is ran on Matlab 7.12 (R2011a):
matrix = magic(3);
% Variant #1: direct assignment – fast and linear cost
data=[]; tic, for idx=1:10000; data(:,(idx*3-2):(idx*3))=matrix; end, toc
   => Elapsed time is 0.969262 seconds.
data=[]; tic, for idx=1:100000; data(:,(idx*3-2):(idx*3))=matrix; end, toc
   => Elapsed time is 9.558555 seconds.
% Variant #2: concatenation – much slower, quadratic cost
data=[]; tic, for idx=1:10000; data=[data,matrix]; end, toc
   => Elapsed time is 2.666223 seconds.
data=[]; tic, for idx=1:100000; data=[data,matrix]; end, toc
   => Elapsed time is 356.567582 seconds.
</pre>
<p>As the size of the array enlargement element (in this case, a 3&#215;3 matrix) increases, the computer needs to allocate more memory space more frequently, thereby increasing execution time and the importance of preallocation. Even if the system has an internal memory-management mechanism that enables it to expand into adjacent (contiguous) empty memory space, as the size of the enlargement grows the empty space will run out sooner and a new larger memory block will need to be allocated more frequently than in the case of small incremental enlargements of a single 8-byte double.</p>
<h3 id="Alternatives">Other alternatives</h3>
<p>If preallocation is not possible, JIT is not very helpful, vectorization is out of the question, and rewriting the problem so that it doesn&#8217;t need dynamic array growth is impossible &#8211; if all these are not an option, then consider using one of the following alternatives for array growth (read again the interesting <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/102704">newsgroup thread from 2005</a> about this issue):</p>
<ul>
<li>Dynamically grow the array by a certain percentage factor each time the array runs out of space (on-the-fly preallocation)</li>
<li>Use John D&#8217;Errico&#8217;s <i><b>growdata</b></i> utility</li>
<li>Use cell arrays to store and grow the data, then use cell2mat to convert the resulting cell array to a regular numeric array</li>
<li>Reuse an existing data array that has the necessary storage space</li>
<li>Wrap the data in a referential object (a class object that inherits from handle), then append the reference handle rather than the original data (<a target="_blank" rel="nofollow" href="http://stackoverflow.com/questions/276198/matlab-class-array">ref</a>). Note that if your class object does not inherit from handle, it is not a referential object but rather a value object, and as such it will be appended in its entirety to the array data, losing any performance benefits. Of course, it may not always be possible to wrap our class objects as a handle.<br />
References have a much smaller memory footprint than the objects that they reference. The objects themselves remain somewhere in memory and do not need to be moved whenever the data array is enlarged and reallocated – only the small-footprint reference is moved, which is much faster. This is also the reason that cell concatenation is faster than array concatenation for large objects.</li>
</ul>
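<p>The referential-object alternative can be sketched as follows (the class and property names are hypothetical):</p>
<pre lang='matlab'>
% In DataHolder.m:
classdef DataHolder < handle   % handle class => stored and passed by reference
    properties
        payload                % the large data block itself never moves
    end
end

% In the processing function:
holders = DataHolder.empty;
for idx = 1 : 1000
    h = DataHolder;
    h.payload = rand(1000);    % ~8MB of data; only its small handle enters the array
    holders(idx) = h;          % reallocating an array of handles is cheap
end
</pre>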
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/array-resizing-performance">Array resizing performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/preallocation-performance" rel="bookmark" title="Preallocation performance">Preallocation performance </a> <small>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/performance-accessing-handle-properties" rel="bookmark" title="Performance: accessing handle properties">Performance: accessing handle properties </a> <small>Handle object property access (get/set) performance can be significantly improved using dot-notation. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/convolution-performance" rel="bookmark" title="Convolution performance">Convolution performance </a> <small>Matlab's internal implementation of convolution can often be sped up significantly using the Convolution Theorem. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/array-resizing-performance/feed</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>Preallocation performance</title>
		<link>https://undocumentedmatlab.com/articles/preallocation-performance?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=preallocation-performance</link>
					<comments>https://undocumentedmatlab.com/articles/preallocation-performance#comments</comments>
		
		<dc:creator><![CDATA[Yair Altman]]></dc:creator>
		<pubDate>Wed, 16 May 2012 12:14:46 +0000</pubDate>
				<category><![CDATA[Low risk of breaking in future versions]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Stock Matlab function]]></category>
		<category><![CDATA[Undocumented feature]]></category>
		<category><![CDATA[JIT]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Pure Matlab]]></category>
		<guid isPermaLink="false">http://undocumentedmatlab.com/?p=2940</guid>

					<description><![CDATA[<p>Preallocation is a standard Matlab speedup technique. Still, it has several undocumented aspects. </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/preallocation-performance">Preallocation performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/performance-scatter-vs-line" rel="bookmark" title="Performance: scatter vs. line">Performance: scatter vs. line </a> <small>In many circumstances, the line function can generate visually-identical plots as the scatter function, much faster...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/array-resizing-performance" rel="bookmark" title="Array resizing performance">Array resizing performance </a> <small>Several alternatives are explored for dynamic array growth performance in Matlab loops. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/allocation-performance-take-2" rel="bookmark" title="Allocation performance take 2">Allocation performance take 2 </a> <small>The clear function has some non-trivial effects on Matlab performance. ...</small></li>
</ol>
</div>
]]></description>
										<content:encoded><![CDATA[<p>Array <a target="_blank" rel="nofollow" href="http://www.mathworks.com/help/techdoc/matlab_prog/f8-784135.html#f8-793781">preallocation</a> is a standard and quite well-known technique for improving Matlab loop run-time performance. Today&#8217;s article will show that there is more than meets the eye for even such a simple coding technique.<br />
A note of caution: in the examples that follow, don&#8217;t take any speedup as an expected actual value &#8211; the actual value may well be different on your system. Your mileage may vary. I only mean to display the relative differences between different alternatives.</p>
<h3 id="problem">The underlying problem</h3>
<p>Memory management has a direct influence on performance. I have already shown <a target="_blank" href="/articles/matlab-java-memory-leaks-performance/">some examples of this</a> in past articles here.<br />
Preallocation solves a basic problem in simple program loops, where an array is iteratively enlarged with new data (dynamic array growth). Unlike other programming languages (such as C, C++, C# or Java) that use static typing,  Matlab uses dynamic typing. This means that it is natural and easy to modify array size dynamically during program execution. For example:</p>
<pre lang='matlab'>
fibonacci = [0, 1];
for idx = 3 : 100
   fibonacci(idx) = fibonacci(idx-1) + fibonacci(idx-2);
end
</pre>
<p>While this may be simple to program, it is not wise with regards to performance. The reason is that whenever an array is resized (typically enlarged), Matlab allocates an entirely new contiguous block of memory for the array, copying the old values from the previous block to the new, then releasing the old block for potential reuse. This operation takes time to execute. In some cases, this reallocation might require accessing virtual memory and page swaps, which would take an even longer time to execute. If the operation is done in a loop, then performance could quickly drop off a cliff.<br />
The cost of such naïve array growth is theoretically quadratic. This means that multiplying the number of elements by N multiplies the execution time by about N<sup>2</sup>. The reason for this is that Matlab needs to reallocate N times more than before, and each time takes N times longer due to the larger allocation size (the average block size multiplies by N), and N times more data elements to copy from the old to the new memory blocks.<br />
A very interesting discussion of this phenomenon and various solutions can be found in a <a target="_blank" rel="nofollow" href="https://www.mathworks.com/matlabcentral/newsreader/view_thread/102704">newsgroup thread from 2005</a>. Three main solutions were presented: preallocation, selective dynamic growth (<i>allocating headroom</i>) and using cell arrays. The best solution among these in terms of ease of use and performance is preallocation.</p>
<h3 id="basics">The basics of pre-allocation</h3>
<p>The basic idea of preallocation is to create a data array in the final expected size before actually starting the processing loop. This saves any reallocations within the loop, since all the data array elements are already available and can be accessed. This solution is useful when the final size is known in advance, as the following snippet illustrates:</p>
<pre lang='matlab'>
% Regular dynamic array growth:
tic
fibonacci = [0,1];
for idx = 3 : 40000
   fibonacci(idx) = fibonacci(idx-1) + fibonacci(idx-2);
end
toc
   => Elapsed time is 0.019954 seconds.
% Now use preallocation – 5 times faster than dynamic array growth:
tic
fibonacci = zeros(40000,1);
fibonacci(1)=0; fibonacci(2)=1;
for idx = 3 : 40000,
   fibonacci(idx) = fibonacci(idx-1) + fibonacci(idx-2);
end
toc
   => Elapsed time is 0.004132 seconds.
</pre>
<p>On pre-R2011a releases the effect of preallocation is even more pronounced: I got a 35-times speedup on the same machine using Matlab 7.1 (R14 SP3). R2011a (Matlab 7.12) had a dramatic performance boost for such cases in the internal accelerator, so newer releases are much faster in dynamic allocations, but preallocation is still 5 times faster even on R2011a.</p>
<h3 id="nondeterministic">Non-deterministic pre-allocation</h3>
<p>Because the effect of preallocation is so dramatic on all Matlab releases, it makes sense to utilize it even in cases where the data array&#8217;s final size is not known in advance. We can do this by estimating an upper bound to the array&#8217;s size, preallocate this large size, and when we&#8217;re done remove any excess elements:</p>
<pre lang='matlab'>
% The final array size is unknown – assume 1Kx3K upper bound (~23MB)
data = zeros(1000,3000);  % estimated maximal size
numRows = 0;
numCols = 0;
while (someCondition)
   colIdx = someValue1;   numCols = max(numCols,colIdx);
   rowIdx = someValue2;   numRows = max(numRows,rowIdx);
   data(rowIdx,colIdx) = someOtherValue;
end
% Now remove any excess elements
data(:,numCols+1:end) = [];   % remove excess columns
data(numRows+1:end,:) = [];   % remove excess rows
</pre>
<h3 id="variants">Variants for pre-allocation</h3>
<p>It turns out that MathWorks&#8217; <a target="_blank" rel="nofollow" href="http://www.mathworks.com/help/techdoc/matlab_prog/f8-784135.html#f8-793795">official suggestion</a> for preallocation, namely using the <i><b>zeros</b></i> function, is not the most efficient:</p>
<pre lang='matlab'>
% MathWorks suggested variant
clear data1, tic, data1 = zeros(1000,3000); toc
   => Elapsed time is 0.016907 seconds.
% A much faster alternative - 500 times faster!
clear data1, tic, data1(1000,3000) = 0; toc
   => Elapsed time is 0.000034 seconds.
</pre>
<p>The reason the second variant is so much faster is that it only allocates the memory, without worrying about the internal values (they get a default of 0, <i>false</i> or '', in case you wondered). On the other hand, <i><b>zeros</b></i> has to place a value in each of the allocated locations, which takes precious time.<br />
In most cases the differences are immaterial since the preallocation code would only run once in the program, and an extra 17ms isn&#8217;t such a big deal. But in some cases we may have a need to periodically refresh our data, where the extra run-time could quickly accumulate.<br />
<b><u>Update (October 27, 2015)</u></b>: As Marshall <a href="/articles/preallocation-performance#comment-359956">notes below</a>, this behavior changed in R2015b when <a target="_blank" href="/articles/callback-functions-performance">the new LXE</a> (Matlab&#8217;s new execution engine) replaced the previous engine. In R2015b, the <i><b>zeros</b></i> function is faster than the alternative of just setting the last array element to 0. Similar changes may also have occurred to the following post content, so if you are using R2015b onward, be sure to test carefully on your specific system.</p>
<h3 id="non-default">Pre-allocating non-default values</h3>
<p>When we need to preallocate a specific value into every data array element, we cannot use Variant #2. The reason is that Variant #2 only sets the very last data element, and all other array elements get assigned the default value (0, ‘’ or false, depending on the array’s data type). In this case, we can use one of the following alternatives (with their associated timings for a 1000&#215;3000 data array):</p>
<pre lang='matlab'>
scalar = pi;  % for example...
data = scalar(ones(1000,3000));           % Variant A: 87.680 msecs
data(1:1000,1:3000) = scalar;             % Variant B: 28.646 msecs
data = repmat(scalar,1000,3000);          % Variant C: 17.250 msecs
data = scalar + zeros(1000,3000);         % Variant D: 17.168 msecs
data(1000,3000) = 0; data = data+scalar;  % Variant E: 16.334 msecs
</pre>
<p>As can be seen, Variants C-E are about twice as fast as Variant B, and 5 times faster than Variant A.</p>
<h3 id="non-double">Pre-allocating non-double data</h3>
<p>
When preallocating an array of a type that is not <i><b>double</b></i>, we should be careful to create it using the desired type, to prevent memory and/or performance inefficiencies. For example, if we need to process a large array of small integers (<i><b>int8</b></i>), it would be inefficient to preallocate an array of doubles and type-convert to/from int8 within every loop iteration. Similarly, it would be inefficient to preallocate the array as a double type and then convert it to int8. Instead, we should create the array as an int8 array in the first place:</p>
<pre lang='matlab'>
% Bad idea: allocates 8MB double array, then converts to 1MB int8 array
data = int8(zeros(1000,1000));   % 1M elements
   => Elapsed time is 0.008170 seconds.
% Better: directly allocate the array as a 1MB int8 array – x80 faster
data = zeros(1000,1000,'int8');
   => Elapsed time is 0.000095 seconds.
</pre>
<h3 id="cells">Pre-allocating cell arrays</h3>
<p>To preallocate a cell array we can use the <i><b>cell</b></i> function (explicit preallocation), or assign a value to the maximal cell index (implicit preallocation). The two variants are functionally equivalent, but explicit preallocation is faster (note that this is the opposite of the behavior seen below with struct arrays):</p>
<pre lang='matlab'>
% Variant #1: Explicit preallocation of a 1Kx3K cell array
data = cell(1000,3000);
   => Elapsed time is 0.004637 seconds.
% Variant #2: Implicit preallocation – x3 slower than explicit
clear('data'), data{1000,3000} = [];
   => Elapsed time is 0.012873 seconds.
</pre>
<h3 id="structs">Pre-allocating arrays of structs</h3>
<p>To preallocate an array of structs or class objects, we can use the <i><b>repmat</b></i> function to replicate copies of a single data element (explicit preallocation), or just use the maximal data index (implicit preallocation). In this case, unlike the case of cell arrays, implicit preallocation is much faster than explicit preallocation, since the single element does not actually need to be copied multiple times (<a target="_blank" rel="nofollow" href="http://www.mathworks.com/support/solutions/en/data/1-7S1YKO/">ref</a>):</p>
<pre lang='matlab'>
% Variant #1: Explicit preallocation of a 100x300 struct array
element = struct('field1',magic(2), 'field2',{[]});
data = repmat(element, 100, 300);
   => Elapsed time is 0.002804 seconds.
% Variant #2: Implicit preallocation – x7 faster than explicit
element = struct('field1',magic(2), 'field2',{[]});
clear('data'), data(100,300) = element;
   => Elapsed time is 0.000429 seconds.
</pre>
<p>When preallocating structs, we can also use a third variant that relies on a built-in feature of the <i><b>struct</b></i> function: when passed a cell array as a field value, <i><b>struct</b></i> replicates itself to the cell array's size. For example, <code>struct('field1',cell(100,1), 'field2',5)</code> creates 100 structs, each having an empty field <i>field1</i> and another field <i>field2</i> with value 5. Unfortunately, this variant is slower than both of the previous variants.</p>
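<p>For illustration, this cell-expansion variant looks as follows (the array size and field values are arbitrary, matching the struct examples above):</p>
<pre lang='matlab'>
% Variant #3: expansion via a cell-array input to struct()
% The cell array's size determines the struct array's size;
% each element gets field1 = [] (the empty cell contents) and field2 = 5
data = struct('field1',cell(100,300), 'field2',5);
size(data)   % 100x300 struct array
</pre>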
<h3 id="objects">Pre-allocating class objects</h3>
<p>When preallocating in general, ensure that you are using the maximal expected array size. There is no point in preallocating an empty array or an array having a smaller size than the expected maximum, since dynamic memory reallocation will automatically kick-in within the processing-loop. For this reason, <a target="_blank" rel="nofollow" href="http://stackoverflow.com/questions/2510427/how-to-preallocate-an-array-of-class-in-matlab">do not use</a> the <i>empty()</i> method of class objects to preallocate, but rather <i><b>repmat</b></i> as explained above.<br />
When using <i><b>repmat</b></i> to replicate class objects, always note whether you are replicating the object itself (this happens if your class does NOT derive from <i><b>handle</b></i>) or its reference handle (which happens if it does). If you are replicating objects, then you can safely edit any of their properties independently of each other; but if you replicate references, you are merely creating multiple copies of the same reference, so modifying referenced object #1 will automatically affect all the other array elements. This may or may not suit your particular program, so check carefully that the semantics match your needs. If you actually need independent object copies, you will <a target="_blank" rel="nofollow" href="http://stackoverflow.com/questions/591495/matlab-preallocate-a-non-numeric-vector#591788">need to call</a> the class constructor multiple times, once for each new independent object.</p>
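<p>The handle-aliasing pitfall can be demonstrated with a minimal sketch (the class name and property here are hypothetical, for illustration only):</p>
<pre lang='matlab'>
% Hypothetical handle class:
%   classdef MyHandleClass < handle
%       properties
%           value = 0;
%       end
%   end
data = repmat(MyHandleClass, 1, 100);  % 100 copies of the SAME handle
data(1).value = 5;
data(2).value   % also 5 - all elements reference one underlying object
 
% Independent objects require calling the constructor once per element:
for idx = 100 : -1 : 1        % looping backward implicitly preallocates
    data2(idx) = MyHandleClass;  % each element is a new, separate object
end
data2(1).value = 5;
data2(2).value   % still 0 - the objects are independent
</pre>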
<p />
Next week: what if we can&#8217;t avoid dynamic array resizing? &#8211; apparently, all is not lost. Stay tuned&#8230;<br />
<i><br />
Do you have any similar allocation-related tricks of your own? Or have you encountered unexpected differences such as the ones shown above? If so, then please do <a href="/articles/preallocation-performance/#respond">post a comment</a>.<br />
</i> </p>
<p>The post <a rel="nofollow" href="https://undocumentedmatlab.com/articles/preallocation-performance">Preallocation performance</a> appeared first on <a rel="nofollow" href="https://undocumentedmatlab.com">Undocumented Matlab</a>.</p>
<div class='yarpp-related-rss'>
<h3>Related posts:</h3><ol>
<li><a href="https://undocumentedmatlab.com/articles/zero-testing-performance" rel="bookmark" title="Zero-testing performance">Zero-testing performance </a> <small>Subtle changes in the way that we test for zero/non-zero entries in Matlab can have a significant performance impact. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/performance-scatter-vs-line" rel="bookmark" title="Performance: scatter vs. line">Performance: scatter vs. line </a> <small>In many circumstances, the line function can generate visually-identical plots as the scatter function, much faster...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/array-resizing-performance" rel="bookmark" title="Array resizing performance">Array resizing performance </a> <small>Several alternatives are explored for dynamic array growth performance in Matlab loops. ...</small></li>
<li><a href="https://undocumentedmatlab.com/articles/allocation-performance-take-2" rel="bookmark" title="Allocation performance take 2">Allocation performance take 2 </a> <small>The clear function has some non-trivial effects on Matlab performance. ...</small></li>
</ol>
</div>
]]></content:encoded>
					
					<wfw:commentRss>https://undocumentedmatlab.com/articles/preallocation-performance/feed</wfw:commentRss>
			<slash:comments>30</slash:comments>
		
		
			</item>
	</channel>
</rss>
