Changing code on a production server is evil: But what's the best way to handle it, if you do it?
Changing code on a production system to quick-fix a problem is seductive. Even if you know it's evil and bad and dangerous, the day comes when you ignore the facts and do it nevertheless.
For all of you who go to the dark side from time to time: how do you mitigate the drawbacks? Do you install an SVN (...) server to track changes on the prod machines? Install a job that compares file checksums and sends out "remember-you-changed-this" emails? Just put a note on the whiteboard? Sync changes back to a development server?
Added: I take it as a fact that this kind of bad practice happens. I am not interested in a perfect workflow to avoid it, or whether it happens more often in PHP, Java, or COBOL projects, in small vs. big projects, in newbie vs. veteran teams, or whether you get immediately punished by a cosmic entity if you do it. I am simply interested in creative, usable tips from people who know how to handle this kind of situation.
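One of the ideas floated in the question, the checksum-comparison job, is easy to sketch. A minimal version, assuming a cron-driven Python script with made-up paths and mail addresses, might look like this:

```python
import hashlib
import json
import os
import smtplib
from email.message import EmailMessage

WEB_ROOT = "/var/www/app"                        # hypothetical production code directory
MANIFEST = "/var/lib/prod-watch/checksums.json"  # hypothetical snapshot location

def current_checksums(root):
    """Walk the tree and return {relative_path: sha256} for every file."""
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                sums[os.path.relpath(path, root)] = hashlib.sha256(fh.read()).hexdigest()
    return sums

def main():
    new = current_checksums(WEB_ROOT)
    old = {}
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as fh:
            old = json.load(fh)

    changed = [path for path, digest in new.items() if old.get(path) != digest]
    if changed and old:  # only mail after the first (baseline) run
        msg = EmailMessage()
        msg["Subject"] = "remember-you-changed-this: %d file(s) differ" % len(changed)
        msg["From"] = "prod-watch@example.com"   # hypothetical addresses
        msg["To"] = "team@example.com"
        msg.set_content("\n".join(changed))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    os.makedirs(os.path.dirname(MANIFEST), exist_ok=True)
    with open(MANIFEST, "w") as fh:
        json.dump(new, fh)

if __name__ == "__main__":
    main()
```

The first run only records a baseline; every later run mails a list of files whose checksums differ from the previous snapshot and then updates it.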
Have a rollback plan in case the quick fix doesn't work.
For a website, it may be as simple as copying the whole thing to a backup folder.
Often, this also means having a database script that undoes the changes your database scripts made.
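As a rough sketch of that kind of safety net (the webroot, backup location, database name, and psql call are all assumptions for illustration), copying the site into a timestamped folder before touching anything could look like:

```python
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

WEB_ROOT = Path("/var/www/app")             # hypothetical production webroot
BACKUP_DIR = Path("/var/backups/hotfixes")  # hypothetical backup location

def backup_site():
    """Copy the whole webroot into a timestamped backup folder and return its path."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_DIR / f"app-{stamp}"
    shutil.copytree(WEB_ROOT, target)
    return target

def rollback(backup_path, undo_sql=None):
    """Restore the files from a backup; optionally run an undo script for DB changes."""
    shutil.rmtree(WEB_ROOT)
    shutil.copytree(backup_path, WEB_ROOT)
    if undo_sql:
        # hypothetical: feed the undo script to the database client
        subprocess.run(["psql", "appdb", "-f", str(undo_sql)], check=True)

if __name__ == "__main__":
    print("Backed up to", backup_site())
```

Keeping the backup path it prints (or logs) is the whole point: if the quick fix goes wrong, rollback() puts things back without anyone having to remember what was touched.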
Have a smoke test so you can tell immediately if you have broken the application.
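A smoke test can be as small as requesting a handful of key pages and checking the responses. A minimal sketch, with placeholder URLs:

```python
import sys
import urllib.request

# Hypothetical key pages that must still respond after a production change.
PAGES = [
    "https://example.com/",
    "https://example.com/login",
    "https://example.com/api/health",
]

def smoke_test():
    """Return a list of failure descriptions; empty list means the app looks alive."""
    failures = []
    for url in PAGES:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    failures.append(f"{url} -> HTTP {resp.status}")
        except Exception as exc:
            failures.append(f"{url} -> {exc}")
    return failures

if __name__ == "__main__":
    problems = smoke_test()
    for line in problems:
        print("FAIL:", line)
    sys.exit(1 if problems else 0)
```

The non-zero exit code makes it easy to run right after the change (or from the rollback script) and refuse to walk away while it is red.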
Don't do it... Make the change in source control, deploy it to your System Test/UAT environment, test the change, then deploy to Production.
Otherwise, how do you know your 'fix' worked?
Funny you should ask, since my boss asked me to make a quick change in production just last night.
Except it wasn't quite as quick and dirty as what you may be asking about. I made the change in our development environment first, then ran the job using a copy of production data and verified the results. On the administrative side, I created a bug ticket to document what I was doing.
And of course I made backup copies of the production code before I copied the change in from development.
No matter how quick and simple the fix and how sure you are that it's what you need to do, I hope you at least made backup copies.
Use a version control system and, in that, have a 'stable' branch (or use the trunk as the stable component). Require that everything in that branch (or the trunk) is suitable for immediate deployment at any given time, and add a warning that whoever breaks that branch / trunk shall die or suffer some other painful punishment.
Adding automated testing will also give you confidence in such things: if the tests pass, you can presume a deploy to the live server will be no problem. Of course, it takes quite a big investment of time (and, consequently, money) to set up and maintain such an environment, and you'd have to convince management.
It's also a big plus if you have a test environment that behaves exactly like the production environment: the same or comparable data, hardware, operating system, and versions of third-party software and runtimes. If your production environment is clustered (multiple webservers, for example), make sure your test environment is clustered too; virtual machines are one way to achieve this.
Cowboy Coding: http://www.bnj.com/cowboy-coding-pink-sombrero/
It looks funny, but it seems a noteworthy approach. This "visual feedback" not only lets everyone know that there is trouble on production, it also encourages everyone to improve the workflow so that "pink sombrero" time is minimized.