An Abstract Data Model
This is post 3 of a 7-part series entitled Technical Achievements in my Last Project.

Overview

Normally, when I build a new system, I design the new data model based on the requirements, and build my business objects and data access based primarily on that data model*. The remainder of the application is built on the components beneath it, so when you change something at the bottom, like the data model, changes ripple throughout the application. The data model serves as the foundation of my application.

Now, as far as this project goes, one of the important requirements was to deliver the new system incrementally, while leaving the old system running in parallel until it was completely replaced.

Parallel Data Models

This presented a bit of a dilemma for me, since the current database was … well … lacking, and I was planning to refactor it enough to make it a very unstable foundation for the old system. I wanted to refactor it for a number of reasons, including: missing primary keys, no foreign keys, no constraints, data fields which were required but not there, data fields which were there but not used, data fields containing 2 or more pieces of information, and tables which should have been multiple tables. Not to mention the desire to achieve a consistent naming convention without the insane column names using characters like ‘/’ and ‘?’ … seriously.

The parallel-systems requirement, however, is what made it a real dilemma. I mean, how do you manage parallel systems when one needs a stable foundation and the other is so temperamental that you don’t want to touch it? My options as I saw them were something like:

1. Scrap the data model refactoring. This really didn’t get much thought. Well, it did, but the thought was: is this the best route for the client? And if so, should I offer to help them find my replacement, or just leave? I definitely wasn’t up for replacing one unmaintainable piece of junk with another.

2. New data model and refactor the existing app. The existing application was a total nightmare built in classic Access spaghetti-code fashion. Just touching it looked like going down a rabbit hole of certain doom.

3. New application on the old data model and refactor the data model later. This would have caused a real disconnect between the data model and the application. I’m not sure if the data model and application ever...

read more

6 simple steps to a stress free database change deployment
… Deployments can be a real headache at the best of times, but especially when schema updates to a production database are involved. Don’t get me wrong, you usually have a backup to fall back on, but how long will that take to restore? … Really, you don’t want to resort to the restore, have the database offline for that long, or have your name associated with it. So gradually I evolved a process that has kept me sane and confident when deploying schema changes to production servers, even on large, sensitive, and active databases….
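The six steps themselves sit behind the link, so what follows is not the post’s process; it is only a hedged sketch of the general idea the excerpt hints at, assuming SQL Server syntax and a hypothetical dbo.Customer table: keep a quick on-server copy of the affected table and run the change inside an explicit transaction, so a mistake can be undone without resorting to a full restore.

    -- Hypothetical example, not the post's six steps: copy the affected table first,
    -- then run the schema change inside a transaction and verify it before committing.
    SELECT *
    INTO   dbo.Customer_PreDeploy
    FROM   dbo.Customer;

    BEGIN TRANSACTION;

    ALTER TABLE dbo.Customer
        ADD CreatedDate datetime NOT NULL
            CONSTRAINT DF_Customer_CreatedDate DEFAULT (GETDATE());

    -- Verify the change landed before making it permanent; otherwise back out.
    IF COL_LENGTH('dbo.Customer', 'CreatedDate') IS NOT NULL
        COMMIT TRANSACTION;
    ELSE
        ROLLBACK TRANSACTION;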
read more

How To Get The Most Frequently Used Column Values
Whenever I import external data, integrate with another database, or am new to a project, I need to get familiar with the database. The table schemas, relational integrity, and constraints are the first things I look at, and they take me a long way, but soon I need to know what the data looks like.
In an ideal world, relational integrity and database constraints would define and control this, and all I’d really need to do is look at those. But the reality is, in 15 years of working in this industry, most of the databases I’ve worked on that I didn’t design have barely used constraints, and some haven’t even used relational integrity fully!
The need to get a good feel for the data is even more pressing when working with dirty data, or when refactoring poorly written applications, to ensure the refactoring doesn’t introduce other issues. I will usually wind up writing the following query repeatedly:
….
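The post’s actual query is truncated above, so here is only a minimal sketch of the kind of query being described, assuming SQL Server syntax and a hypothetical dbo.Customer table with a City column: group by the column, count the rows, and sort the counts descending to see the most frequently used values.

    -- Hypothetical table and column names; shows the most common values in one column.
    SELECT TOP (20)
           City,
           COUNT(*) AS ValueCount
    FROM   dbo.Customer
    GROUP BY City
    ORDER BY ValueCount DESC;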
How To Write Dynamic SQL AND Prevent SQL Injection Attacks
One of my pet peeves is when general rules are taken as gospel and declared the only acceptable practice regardless of the circumstance. One of the big ones is Dynamic SQL. There’s a heck of a good reason for this, and it’s called an SQL Injection Attack; if you are not familiar with it, I would strongly urge you to leave this post right now and read up on it. Anyway, Dynamic SQL is not inherently evil; it’s the appending of user-entered text that is evil. Appending user-entered text is just lazy and can easily be avoided with parameterization. The trick is to create dynamic SQL …
read more
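To make the idea in the excerpt above concrete, here is a minimal sketch of parameterized dynamic SQL, assuming SQL Server’s sp_executesql; the table, columns, and optional-filter scenario are hypothetical, not the post’s own code. The SQL text is still built dynamically, but the user-supplied value is passed as a parameter rather than appended to the string, so it can never run as SQL.

    -- Hypothetical example: build the statement dynamically, keep user input out of the SQL text.
    DECLARE @sql        nvarchar(max) = N'SELECT CustomerId, Name FROM dbo.Customer WHERE 1 = 1';
    DECLARE @searchName nvarchar(100) = N'Smith';

    -- Add the optional filter to the SQL text, referencing a parameter, not the raw value.
    IF @searchName IS NOT NULL
        SET @sql += N' AND Name LIKE @name';

    -- The user-entered value travels as a parameter, never as part of the SQL string.
    EXEC sys.sp_executesql
         @sql,
         N'@name nvarchar(200)',
         @name = N'%' + @searchName + N'%';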