
Afterthoughts on implicit expansion

Posted By Yair Altman On November 30, 2016 | 12 Comments

Matlab release R2016b introduced implicit arithmetic expansion [1], a great and long-awaited natural extension of Matlab's arithmetic syntax (if you are still unaware of this feature or what it means, now would be a good time to read about it). It is a well-documented new feature. The reason for today's post is that it contains an undocumented aspect that should very well have been documented, and even highlighted.
The undocumented aspect I'm referring to is the fact that code which produced an error up to R2016a produces a valid result in R2016b:

% R2016a
>> [1:5] + [1:3]'
Error using  +
Matrix dimensions must agree.
 
% R2016b
>> [1:5] + [1:3]'
ans =
     2     3     4     5     6
     3     4     5     6     7
     4     5     6     7     8

This incompatibility is indeed documented, but not where it matters most (read on).
I first discovered this feature by chance, when trying to track down a very strange phenomenon in client code that produced different numeric results on R2015b and earlier than on the R2016a Pre-release. After some debugging, the problem was traced to a snippet in the client's code that looked something like this (simplified):

% Ensure compatible input data
try
    dataA + dataB;  % this will (?) error if dataA, dataB are incompatible
catch
    dataB = dataB';
end


The code snippet relied on the fact that combining incompatible data (row vs. column) would error, as it did up to R2015b. But in the R2016a Pre-release it simply returned a valid numeric matrix, which caused numerically incorrect results downstream in the code. The program never crashed, so everything appeared to be in order; it just gave different numeric results. I looked at the release notes, and none of the mentioned release incompatibilities appeared relevant. It took me quite some time, using side-by-side step-by-step debugging on two separate instances of Matlab (R2015b and the R2016a Pre-release), to trace the problem to this new feature.
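For reference, the same intent can be expressed explicitly, so that it behaves identically on every release. The following is only a minimal sketch (not the client's actual fix), assuming, as the comments below clarify, that the requirement was for both inputs to be vectors of the same length, in either orientation:

% Release-independent version of the intended check (a sketch)
if isvector(dataA) && isvector(dataB) && numel(dataA) == numel(dataB)
    if ~isequal(size(dataA), size(dataB))
        dataB = dataB.';  % same length, different orientation - transpose explicitly
    end
end
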
This implicit expansion feature was removed from the official R2016a release for performance reasons [2]; the performance issue was apparently resolved in time for R2016b's release.
I'm totally in favor of this great new feature, don't get me wrong. I've been an ardent user of bsxfun for many years and (unlike many) have even grown fond of it, but I still find the new feature better. I use it wherever there is no significant performance penalty, no need to support older Matlab releases, and no risk of incorrect results due to a dimensional mismatch.
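For example, the following two expressions produce the same matrix on R2016b, while bsxfun remains the only option on older releases (a small comparison sketch, not taken from any client code):

A = bsxfun(@plus, 1:5, (1:3)');  % explicit expansion - works on all releases
B = (1:5) + (1:3)';              % implicit expansion - R2016b and newer only
isequal(A, B)                    % true on R2016b and newer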

So what’s my point?

What concerns me is that I have not seen the new feature highlighted as a potential backward-compatibility issue in the documentation or the release notes [3]. Issues of far lesser importance are clearly marked as backward-incompatible in the release notes, but not this major change. Simply marking the new feature with the compatibility-warning icon and listing it in the "Functionality being removed or changed" section would have saved my client and me a lot of time and frustration.
MathWorks are definitely aware of the potential problems that the new feature might cause in rare use cases such as this. As Steve Eddins recently noted [4], there were plenty of internal discussions about this very thing. MathWorks were careful to ensure that the feature’s benefits far outweigh its risks (and I concur). But this also highlights the fact that MathWorks were fully aware that in some rare cases it might indeed break existing code. For those cases, I believe that they should have clearly marked the incompatibility implications in the release notes and elsewhere.
I have several clients who scour Matlab's release notes before each release, trying to determine the operational risk of a Matlab upgrade. A program that returns different results in R2016b than in R2016a, with no warning of this risk, is simply unacceptable to them, and it leaves users disinclined to upgrade Matlab, to MathWorks' detriment.
MathWorks in general take a very serious, methodical approach to compatibility issues and are clearly investing a lot of energy in this (a recent example [5]). It is a pity that this chain sometimes breaks; I think this can still be corrected in the online doc pages. If and when this is fixed, I'll be happy to post an addendum here.
In my humble opinion from the backbenches, increasing the transparency [6] on compatibility issues and open bugs will increase user confidence and result in greater adoption and upgrades of Matlab. Just my 2 cents…

Addendum December 27, 2016:

Today MathWorks added the following compatibility warning to the release notes [7] (R2016b, Mathematics section, first item) – thanks for listening MathWorks 🙂

[Screenshot: the new compatibility warning in the R2016b release notes]

Categories: Low risk of breaking in future versions, Stock Matlab function, Undocumented feature


12 Comments To "Afterthoughts on implicit expansion"

#1 Comment By Steve Eddins On November 30, 2016 @ 23:55

I am looking into the possibility of adding a “compatibility consideration” to the release note.

Implicit expansion was pulled from the final release of R2016a for performance reasons. There were no other factors involved in the decision.

#2 Comment By Yair Altman On November 30, 2016 @ 23:59

@Steve – thanks on both counts. I amended the text accordingly.

#3 Comment By TheBlackCat On December 1, 2016 @ 02:30

Your client is lucky the problem got caught. A lot of such issues will likely go unnoticed. For example, basically anything with a mathematical operation followed by the use of linear indexing (such as in a for loop) will seem to work fine but will give mathematically incorrect results. I know you ideally shouldn’t be doing this, and I assume MATLAB internal code doesn’t do it very much, but I see code like that all the time from people with less of a programming background, and sometimes your algorithm requires it.
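
For illustration, a hypothetical sketch of that failure mode (the variable names are made up):

a = 1:5;
b = (1:3)';         % accidentally a column instead of a row
c = a + b;          % pre-R2016b: error; R2016b: silently a 3x5 matrix
total = 0;
for k = 1:numel(c)  % linear indexing still "works", just over the wrong data
    total = total + c(k);
end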

I had always assumed the reason MATLAB hadn’t implemented this feature over the last 15 years or so was that it was too big of a backwards-compatibility break.

#4 Comment By David B On December 1, 2016 @ 22:20

If nothing else, I hope that the exercise of debugging this served as a rude wake-up call for the author of that original and terrible try/catch code. They should be ashamed.

#5 Comment By Yair Altman On December 1, 2016 @ 22:36

@David – while I fully agree with you that it's not good coding style/practice, I've seen much worse client code. Most Matlab users don't have a degree in computer science, and sadly enough even CS grads often exhibit deplorable coding. In fact, my personal experience has been that only a minority of Matlab users write high-quality code. Most Matlab users use Matlab as an engineering tool, not as an end in itself: as long as something works, they don't care whether it's nice-looking – they just move on to solving the next problem. In this sense, the snippet above is beautiful in its simplicity, and to hell with the CS purists…

#6 Comment By TheBlackCat On December 2, 2016 @ 00:28

What would your alternative be? The closest equivalent I can think of would be:

if numel(dataA)~=1 && numel(dataB)~=1 && any(size(dataA)~=size(dataB))
    dataB = dataB';
end

A more strict test would be:

if numel(dataA)~=1 && numel(dataB)~=1 && ndims(dataA)==2 && ndims(dataB)==2 && any(size(dataA)~=size(dataB)) && all(fliplr(size(dataA))==size(dataB))
    dataB = dataB';
end

So yes, from a code-correctness standpoint you are probably right. However, from a readability and maintainability standpoint their solution is pretty elegant. Will it be slower? Yes, but in many cases not enough to make a difference. Does it have corner cases that it doesn't handle? Yes, although those may not be relevant or may get caught later. But even without a comment I can tell in an instant what their code is doing, while it would take me some time to figure out what either of the two examples I posted does.

If you have another approach that is as simple and easy-to-read as the above case then of course I will retract that. But otherwise, the best algorithm from a CS standpoint isn’t necessarily the best approach once you have to start involving humans and want to be able to figure out what your code is doing 3 years down the road.

#7 Comment By David B On December 3, 2016 @ 12:33

@TheBlackCat From the limited information we have available we are assuming the data is a vector. If that is the case then I think something like this code snippet would work nicely and is perfectly human readable.

if isrow(a) && ~isrow(b) || iscolumn(a) && ~iscolumn(b)
    b = b';
end

#8 Comment By Marshall On December 12, 2016 @ 21:04

@DavidB if we know we are receiving vectors, and they may be oriented differently, the following would be more elegant than anything requiring logical branches:

dataA(:)+dataB(:);

#9 Comment By Guillaume On December 2, 2016 @ 18:05

Well, presumably, the code is operating on vectors, so the alternative could be

dataA(:) + dataB(:)

But really, the proper alternative would have been to find out why the data does not come with the expected shape, rather than take a gamble and flip it. I'm with David B: the code is a strong indication that something is very wrong in the algorithm somewhere, and that one day, given some particular input, it's going to break in even more unpleasant ways.

#10 Comment By TheBlackCat On December 2, 2016 @ 19:19

It is hard to say without seeing more code. It may very well be that the data can only come in a few formats, so transposing it is the correct thing to do.

#11 Comment By Yair Altman On December 3, 2016 @ 19:37

@DavidB + @Guillaume + @TheBlackCat – if I remember correctly, my client wanted the code to continue processing only when the two inputs were both vectors, although possibly of different orientations. So, [1,2,3] should be combinable with [3;4;5] but not with [3;4] (which would error out downstream). In such cases the try-catch block gives the expected results, but a(:)+b(:) would not have.

As I said in the post, the code snippet is just a simplified version and the actual coding details don’t really matter. The important thing in my opinion is that for this specific use-case, a functional change in Matlab R2016b caused fully-legitimate code to return different results, and since the functional change was not documented as having a compatibility aspect, this caused an operational problem that was unacceptable. In this respect, it really does not matter whether the client’s code was due to bad coding or to explicit design.

#12 Comment By Sue Ann Koay On August 4, 2017 @ 04:37

Having only recently installed Matlab 2017a, I almost immediately “discovered” this feature and am compelled to say that implicit expansion gives me the creeps. As an outside example: why do some languages like C++ impose type safety? It’s not because programmers can’t write their own type checking code. For a 10-line function I can easily write 50 lines of code to perform every possible check I can think of to ensure that the inputs are of the correct type and sizes and ranges and whatnot. Will this make me a better coder? Maybe. Will chances be high that I’ll do it for every function? If I existed solely for such a purpose, maybe. Will this help others read and modify my code down the line? Well…

As a person with a joint CS/science background, I'm certainly not advocating sloppy coding practices. I think implicit expansion (and yes, also the lack of type safety) actually encourages sloppy coding practices. Now, instead of a self-documenting piece of code that uses bsxfun(), if I read the line a .* b somewhere I have to wonder whether it is an outer product of vectors, an element-wise product, or something else entirely. The best-designed systems limit the number of mistakes that humans can make, and allowing the syntax itself to be sloppy doesn't seem very encouraging.
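
For example (a small sketch illustrating that ambiguity, not taken from any particular codebase):

a = (1:3)';        % 3x1 column
b = 1:3;           % 1x3 row
outer = a .* b;    % 3x3 outer product under implicit expansion (an error pre-R2016b)
elem  = a .* b';   % 3x1 element-wise product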



URL to article: https://undocumentedmatlab.com/articles/afterthoughts-on-implicit-expansion

URLs in this post:

[1] implicit arithmetic expansion: http://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b

[2] for performance reasons: http://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b/#comment-46516

[3] release notes: https://www.mathworks.com/help/matlab/release-notes.html

[4] recently noted: http://blogs.mathworks.com/loren/2016/11/10/more_thoughts_about_implicit_expansion

[5] a recent example: http://blogs.mathworks.com/loren/2016/11/07/notes-on-the-release-notes/#acaecb34-9d88-40cf-93fa-e657f1076b35

[6] increasing the transparency: http://undocumentedmatlab.com/blog/couple-of-matlab-bugs-and-workarounds

[7] release notes: https://www.mathworks.com/help/matlab/release-notes.html#bvn8rd5-1


Copyright © Yair Altman - Undocumented Matlab. All rights reserved.