The *hidden* Obamacare website code.

I just saw this video of Congressman Joe Barton questioning Cheryl Campbell of CGI about the Obamacare website’s *hidden* code. http://youtu.be/N54gSI4Aft0 It really doesn’t bother me that the congressman doesn’t understand that he’s looking at a ‘comment’, not ‘code’. And it doesn’t bother me that he doesn’t understand that it was probably commented out because of the dynamic, incremental nature of software development, with decisions being made and unmade continuously on the fly. And it doesn’t bother me that he doesn’t understand that this commented code doesn’t affect compliance, since it’s not functional, not displayed, and completely ignored by the computer.

I can almost guarantee how this played out: an overly defensive business analyst decided to add this in as a CYA tactic. The developer added it, then somebody realized it was not to spec and told the programmer to remove it. The programmer commented it out instead of removing it because they’ve seen flip-flopping on functionality like this in the past (possibly multiple times in this project). The comments were never removed (are they ever?), and it was promoted to production. Then somebody with enough knowledge to be dangerous, but who didn’t know what they were looking at, found it and pushed it through the ranks until it finally wound up being questioned in a Congressional Hearing.

What bothers me is that nobody was able to clear up this misunderstanding. There was nobody to step in and say, “Hey, this is a non-issue”. It’s a non-threat. I was a little disappointed that Cheryl Campbell didn’t chime in and say “Actually, Mr. Congressman, I understand your concern, but it’s not what it looks like”, and then explain what it was. Ok, she obviously doesn’t have the skills to understand or explain it, but you would think the solution architect would’ve ridden shotgun to clear up misunderstandings like this.

I’m also embarrassed that programmers are still commenting out dead code instead of removing it. It’s not 1990 anymore; the source control tools we have are awesome, so don’t be scared to delete anything which is no longer serving a purpose, it will live in your source repo history forever. Don’t be a hack.

Once it is understood this commented-out code is a non-functional historical artifact of an extremely dynamic process, it could also be asked “Why was this added in the first place?”. And that would be a valid question, to which I would answer, “It doesn’t matter since that functionality... read more

How to keep your Get & Post ViewModels DRY

I’ve been convinced that I should not be using the same view model class for both loading the view and the posted controller action parameter. The trouble is it allows data to be transferred back which may not be necessary, and it needlessly increases our attack surface. Don’t get me wrong, I’ve always had unit tests specifically monitoring that only necessary view model properties are being pushed back to the entity model, but still, separate view models will allow me to eliminate that risk completely and even remove those tests.

To be honest, the idea of having 2 view model classes that are nearly identical is not very appealing, especially given that if the structure deviates, the default model binder will no longer work with those properties which are out of sync with its partner view model. Don’t get me wrong, I’m very liberal when it comes to class quantities, but this was enough to make me uncomfortable. So I’m happy that I realized the post view model class can always be used as a base class for the get view model class. I mean, it has to be in order for the default model binder to work. Using inheritance for these classes will a) keep these classes in sync so the default model binder will always work, b) follow the DRY principle and c) keep my sanity.

In a nutshell, rather than having

    public class FooViewModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public FooStatusEnum Status { get; set; }
        public DateTime CreateDate { get; set; }
    }

we can instead have

    public class FooPostViewModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class FooGetViewModel : FooPostViewModel
    {
        public FooStatusEnum Status { get; set; }
        public DateTime CreateDate { get; set; }
    }

So FooGetViewModel is loaded and passed into the view.... read more
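To make the split concrete, here is a minimal sketch of how the two view models might be used in a pair of controller actions. The controller, entity, and repository names below are hypothetical illustrations, not from the original post; only FooGetViewModel, FooPostViewModel and FooStatusEnum come from the article.

    using System;
    using System.Web.Mvc;

    // Hypothetical entity and repository stubs, for illustration only.
    public class Foo
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public FooStatusEnum Status { get; set; }
        public DateTime CreateDate { get; set; }
    }

    public interface IFooRepository
    {
        Foo Get(int id);
        void Save(Foo entity);
    }

    public class FooController : Controller
    {
        private readonly IFooRepository _repository;

        public FooController(IFooRepository repository)
        {
            _repository = repository;
        }

        [HttpGet]
        public ActionResult Edit(int id)
        {
            var entity = _repository.Get(id);

            // The wider "get" view model carries read-only display data out to the view.
            var model = new FooGetViewModel
            {
                Id = entity.Id,
                Name = entity.Name,
                Status = entity.Status,
                CreateDate = entity.CreateDate
            };
            return View(model);
        }

        [HttpPost]
        public ActionResult Edit(FooPostViewModel model)
        {
            // Only the base-class properties (Id, Name) can be model-bound here,
            // so Status and CreateDate are never accepted back from the request.
            var entity = _repository.Get(model.Id);
            entity.Name = model.Name;
            _repository.Save(entity);
            return RedirectToAction("Index");
        }
    }

Because the default model binder only populates properties declared on the action parameter’s type, the post action’s attack surface is limited to the base class by construction.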

I’m a good dev

2 recent experiences

Earlier this year I had 2 experiences which made me pause & reflect:

The first was an outsourcing company’s framework from hell. The framework was made to make MVC work like webforms … I kid you not. I suspect they had a webforms framework and were trying to hold on to their previous productivity gains by jamming their square peg into the MVC paradigm’s round hole. The end result was an undocumented, inconsistent, buggy mess that only an experienced developer can figure out; ironically, it’s the kind of mess an experienced developer would know enough to stay away from. (Although I didn’t) Fine. Whatever. We all make mistakes, right? But I did find the video on their website talking about how “good high quality code” “pays dividends down the road when you’re trying to add features” and how it “enables you to have high performance applications” a little nauseating, especially while fixing errors around SQL Server’s 2,100-parameter limitation. Turns out they were querying the ids matching their filter, passing the results back to .NET, and then passing those ids back into a second query. There were a bunch of rookie mistakes like that which hardly enabled ‘high performance applications’.

The second was when a friend asked me to add OSCommerce to his brand new brochure website. No problem, but I was shocked that the guy who wrote the brochure website (this year … in 2013) did it in Classic ASP. .. wait. wuht? Yes, somebody wrote a brand new app in a technology which was obsoleted in 2001. Don’t get me wrong, if they did this in 2001, 2002, or even 2003 because they didn’t want to work in the ‘untested’ new .net technology, I would understand; I’m rather risk averse myself. … but it’s been 12 years. I think we’re ok. Since they were moving to OSCommerce, which is written in PHP, their ability to find hosting which has both PHP and Classic ASP might be limited, and since it would be so easy to port a Classic ASP brochure site to PHP, I just ported it. The original Classic ASP code was good code; it was clean, easy to read, and used appropriate best practices … but how good is your code when it’s obsolete 12 years before it’s written? Well, from what I hear, the original developer didn’t think my decision to move to PHP was a great idea and questioned my competence. I’m... read more
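As an aside on that 2,100-parameter limitation: the sketch below (entirely hypothetical table, column, and method names, not the vendor’s code) contrasts the two-round-trip approach described above with a single query that lets the database do the join.

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;

    public static class OrderQueries
    {
        // Anti-pattern: fetch the matching customer ids, then send every id back
        // as its own parameter. Past roughly 2,100 ids SQL Server rejects the
        // command (and the IN clause also breaks outright when the list is empty).
        public static List<int> GetOrderIdsTwoTrips(SqlConnection conn, string region)
        {
            var customerIds = new List<int>();
            using (var cmd = new SqlCommand(
                "SELECT Id FROM Customers WHERE Region = @region", conn))
            {
                cmd.Parameters.AddWithValue("@region", region);
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        customerIds.Add(reader.GetInt32(0));
            }

            var paramNames = customerIds.Select((id, i) => "@p" + i).ToList();
            var orderIds = new List<int>();
            using (var cmd = new SqlCommand(
                "SELECT Id FROM Orders WHERE CustomerId IN (" +
                string.Join(",", paramNames) + ")", conn))
            {
                for (int i = 0; i < customerIds.Count; i++)
                    cmd.Parameters.AddWithValue(paramNames[i], customerIds[i]);
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        orderIds.Add(reader.GetInt32(0));
            }
            return orderIds;
        }

        // One round trip, one parameter: keep the filter on the server and join.
        public static List<int> GetOrderIdsOneTrip(SqlConnection conn, string region)
        {
            var orderIds = new List<int>();
            using (var cmd = new SqlCommand(
                "SELECT o.Id FROM Orders o JOIN Customers c ON c.Id = o.CustomerId " +
                "WHERE c.Region = @region", conn))
            {
                cmd.Parameters.AddWithValue("@region", region);
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        orderIds.Add(reader.GetInt32(0));
            }
            return orderIds;
        }
    }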

The asp.net-mvc predicate

I have a theory that you can tell if a Microsoft web developer is good or not based on a single question: would you choose webforms or MVC on a new project? Many readers probably consider this obvious, perhaps even equivalent to the choice between punch cards vs keyboard/monitor. I suspect most people reading this consider asp.net-MVC the logical choice, and only an idiot would choose otherwise. However, I suspect most Microsoft web application developers would still choose webforms … scary as that is. I think that right now, MVC has really separated the good from the bad, at least as far as asp.net developers go. Developers on top of their game have at least overview experience with MVC, yet most weak or lazy developers haven’t been forced to make the leap yet; as a result, they either haven’t looked into it, or have been so confounded by the paradigm shift that they’ve passed it off as a trendy fad.

Please notice I said ‘choose on a new project’, not ‘currently working in’. I’m sure there are many developers who are working in webforms who would prefer to be working in MVC … I’m one of them. On the flip side, I recently met a developer working in an MVC app who wishes it was webforms.

There is a presumption in my statement, that MVC is inherently better than webforms. To most of us, this seems obvious, but there are many who disagree … and that’s fine. However, I’ve yet to find a valid argument for webforms over MVC. … don’t get me wrong, I’ve heard people tell me that webforms is better, but I’ve never heard a rational argument. Those who have tried have basically given me buzzwords like ‘event-driven’ but fail to contrast it against, or even recognize the cost of, that paradigm.

Take, for example, answers to this Stackoverflow question: When to favor webforms over MVC? I’m not going to argue about integrating MVC into an existing webforms app because I think maintaining technology consistency is important. But some of the other reasons include:

Leveraging existing training – True, but good developers usually keep their skills up to date, so is this really that big of a deal?

View/edit modes makes duplicate work – This is actually better than all edit pages, yes … even for enterprise apps. If you don’t believe me, just think about all the functionality you added to ensure users couldn’t ‘save’... read more

A simple proposal to rule out obvious software patents

I just read the EFF’s new website (https://defendinnovation.org/) about how to change the patent system to prevent the patent trolls from stifling innovation. I like some of the ideas, but question how realistic others are (i.e. #3? – C’mon; how many implementations in how many languages will you need to write to cover all your bases?). I realize this is an extremely naive statement to make, but I’m going to make it anyway: solving the ‘non-obvious’ aspect of a software patent is simple … and it doesn’t involve judges learning to code.

Here’s my proposal: software patent applications should include a unit test suite and a single solution implementation. The test suite would be made public immediately, without the implemented solution. The public would take a crack at making the test suite pass. If there are no successful passes, then it could be deemed a difficult problem with a non-obvious solution. If the patent office is bombarded with working solutions, then it unquestionably fails the ‘non-obvious’ test. It’s debatable what to do if a relative few brilliant developers solve it, while most fail. But this would definitely eliminate laughable patents like Amazon’s 1-Click, which would have bombarded the patent office in less than an... read more
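To picture what such a published test suite might look like, here is a tiny hypothetical example; the one-click-checkout scenario, the IOneClickCheckout interface, and the NUnit style are my own illustration, not part of the proposal’s text. The idea is that tests like these would be released as-is, the claimed implementation withheld, and the public invited to supply an implementation that makes them pass.

    using System;
    using NUnit.Framework;

    // Hypothetical interface the claimed invention is said to implement.
    public interface IOneClickCheckout
    {
        OrderResult PurchaseWithOneClick(string customerId, string productId);
    }

    public class OrderResult
    {
        public bool Succeeded { get; set; }
        public string ShippingAddress { get; set; }
        public string PaymentMethod { get; set; }
    }

    [TestFixture]
    public class OneClickCheckoutTests
    {
        // Anyone attempting the challenge plugs their implementation in here.
        private IOneClickCheckout CreateCheckout()
        {
            throw new NotImplementedException("Supply your own implementation");
        }

        [Test]
        public void Purchase_UsesStoredShippingAndPayment_WithoutFurtherInput()
        {
            IOneClickCheckout checkout = CreateCheckout();

            OrderResult result = checkout.PurchaseWithOneClick("customer-42", "product-7");

            Assert.IsTrue(result.Succeeded);
            Assert.IsNotNull(result.ShippingAddress);
            Assert.IsNotNull(result.PaymentMethod);
        }
    }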

My take on identifier semantics (Id vs No vs Code vs Key)

My simple conventions for these popular identifier names. I don’t believe they’re all the same; they should be used under different circumstances.

read more

Discovering Typemock

My frustration mocking static methods, the ridiculous hoops I was forced to jump through, and the clean implementation I was finally able to achieve with Typemock.

read more

Consultants are advisors, not decision makers.

Overview

I was having lunch with my friend & colleague last week and we had a disagreement about whose decision it is to make a change when you see something wrong in the client’s software.

Mechanic analogy

My colleague used an analogy about a mechanic; it was a good one, so I’ll use it here. Let’s say you bring your car in for a $30 oil change, and your mechanic notices a problem with the timing belt. My colleague suggests he doesn’t just ignore it; he tells you it’s an emergency and must be changed immediately (the emphasis is my friend’s). Well, I agree the mechanic shouldn’t just ignore it, and I was relieved my friend didn’t suggest the mechanic should simply change the timing belt, driving the bill from $30 to $930, but I’m not sure if telling the car owner they must change the timing belt ‘now’ is appropriate either. In my opinion the mechanic should tell the customer what he found, the risk in not fixing it, outline the options to fix it, and the cost & associated risk with each option[1]. Then make a recommendation. It’s the customer’s decision, not the mechanic’s, and even if the customer makes a foolish decision, it’s his decision, not the mechanic’s. Fortunately, all the mechanics I’ve dealt with seem to understand this.

As consultants we’re advisors, not the decision makers

In my opinion, our role as consultants / advisors includes the responsibility to inform the client of any problems or potential problems we’ve noticed, the risks in not fixing them, and options to resolve each problem, along with the costs and risks associated with each.

Unethical behaviour

To me it seems unethical to go rogue and just start making changes the client doesn’t know about and didn’t approve. It’s also unethical to purposely instil fear, uncertainty, and doubt when explaining the options to the client so they make the decision you want them to, regardless of your intentions.

The client owns your time

After all, the client does own the software, is responsible for its maintenance, and they do own your time to direct as necessary. Wait, what? Who owns your time? Yeah, when you’re consulting, your client (or employer) owns your time. … you sold it to them; remember? Every consulting and/or employment agreement is different, but if you’re charging by the hour, it probably says something like this: they bought your focused efforts to solve their problems for a... read more

« Previous Entries
