
Second Wave Petroleum Inc SCSZF



Comment by idleSpeculator on Aug 26, 2003 6:47am
181 Views
Post# 6352443

RE: Random acts

True story, from 10 days ago. I have just finished upgrading computer systems in a small bank. We have 1 live server, 1 spare onsite server/network, and an identical offsite network (less PCs). All servers have 2 tape drives (in case one fails). Drives are arranged in two arrays: a system array with the OS and a data array with the data. Once a month we put a copy of the system array in a large fireproof safe. Once a quarter a copy goes to the DR site 5 miles away. 2 full backups are created every night, one set stored onsite, the other offsite (the rotation is laid out in the first sketch below). The DR site can be up and running in 4 hours, which includes creating and restoring the data array with last night's backup and imaging all workstations; a large part of that 4 hours is in fact travel time. I felt confident we could survive anything with, worst case, no more than the loss of the current day's work.

10 days ago I had a problem with a user not being able to log on to the server properly. It looked like a directory problem (the directory is where users, groups, rights etc. are stored). At the end of the day I restored the directory, restarted the server, tried again, and found everyone now had login errors and application icons (also stored in the directory) missing from their desktops. We were not in a state to open for business the next day. My first line of recovery had apparently just failed.

If I can't recover from tape, maybe I can use OS tools to back up the directory onto disk from the spare drive in the safe and restore it to the live server? I inserted the safe drive (only one week old) into the spare server, mirrored it, and replaced it in the safe. I restarted the server and noticed it was no longer giving out IP addresses to workstations on the test network. I backed up the directory and restored it to the live server: no better. The 2nd line of DR had just failed, and there was no reason to think the 3rd line (the offsite network) would be any better; it had tested fine when we were last there, but so had the safe drive when we made it. Worse, although I could use the safe drive to rebuild the server from scratch, it had a problem with giving out IP addresses, and that problem had now appeared on the live server, presumably carried over by the restore, because the IP information is stored in the directory.

I worked frantically through the night and got the system working by manually assigning IP addresses to workstations, practical only because it is a small network. This also recovered the application icons on the desktops, so the bank could work the following morning.

The following evening I traced the problem down to Norton AV updates. The load sequence on the server was:
1. DHCPSRVR (responsible for allocating IP addresses)
2. other OS stuff
3. Norton AV
4. other OS stuff
5. workstation imaging (which hooks into DHCP)
Automatic updates to Norton had changed how long it took to load. As a result, imaging was completing its load before DHCPSRVR, preventing the latter from completing, because they both use the same TCP/IP port (the second sketch below shows that conflict). There never had been a restore problem; the safe drive had been fine until I stuck it in the spare server to mirror it and it downloaded the latest Norton AV signatures.

We were very, very lucky not to suffer serious downtime. Normally we restart servers in the morning after a drive swap; it was only because we had a support issue that I did it at the end of the day and uncovered the problem. Sorry, long story, just intended to highlight the kind of thing that can happen even when you have good DR. When you are looking at risks, this stuff is important, particularly when it takes your TT/SB clients offline and stops them doing business.
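If it helps to see that backup rotation laid out, here is a toy sketch in Python. The function name and the exact day-of-month/quarter triggers are my invention for illustration; only the destinations come from the scheme described above.

    from datetime import date

    def backup_destinations(day: date) -> list[str]:
        """Return where that day's backup copies go, per the scheme above."""
        destinations = ["onsite tape", "offsite tape"]   # two fulls, every night
        if day.day == 1:                                 # monthly: system array copy
            destinations.append("fireproof safe (system array)")
            if day.month in (1, 4, 7, 10):               # quarterly: copy to DR site
                destinations.append("DR site 5 miles away")
        return destinations

    print(backup_destinations(date(2003, 8, 26)))  # normal night: onsite + offsite
    print(backup_destinations(date(2003, 7, 1)))   # quarter start: safe + DR site too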
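And for anyone who hasn't hit the other failure mode: two services cannot both own the same port, so whichever finishes loading first wins and the other fails to start. A minimal Python sketch of that race (hypothetical, not the actual server modules involved; it uses a high UDP port since binding the real DHCP port 67 needs admin rights):

    import socket

    PORT = 6767  # stand-in for UDP port 67, which real DHCP servers use

    imaging = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    imaging.bind(("0.0.0.0", PORT))        # "imaging" loads first and wins the port
    print("imaging service bound to port", PORT)

    dhcpsrvr = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        dhcpsrvr.bind(("0.0.0.0", PORT))   # "DHCPSRVR" now cannot start
    except OSError as exc:
        print("DHCPSRVR failed to bind:", exc)
    finally:
        dhcpsrvr.close()
        imaging.close()

Run it and the second bind fails with "Address already in use": no IP addresses get handed out, which is exactly the symptom we saw after Norton's load time shifted.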
It's low probability, but potentially high risk to the business if it happens, we don't deal with it in a timely fashion, and we are sued. Promise not to say another word on the subject.

Idle