
Saturday, September 12, 2009

The Hidden Costs of Increasing Data Storage

Large-scale IT environments have the resources to manage all aspects of a network expansion, including the initial analysis, equipment installation and wiring, and proper user access management. In smaller environments, the planning may not go beyond the immediate reaction to users’ needs—that is, “we’re out of space!” While the size of the environment may determine how storage needs are addressed and managed, concerns such as proper equipment cooling, storage management software that allows for scalable growth (SRM), disaster recovery (including backup contingencies), and data recovery apply to IT environments of every size.
In one scenario, picture a small business with five desktop machines. Despite following careful data compression procedures and rigorous archiving of old files, their system is running out of space. They have a small file server sitting near the users’ desks. Can the business owner upgrade the file server with a bigger hard drive or should he add a separate rack of inexpensive drives? How much space will they need? Will a terabyte be enough? What if they need to upgrade in the future? How hard will it be? What other hidden costs are they going to run into?
In another scenario, a business that uses 30-40 desktop machines has a file server located in a separate room with adequate cooling, user access management, and a solid network infrastructure. But they too are running out of space. When they plan for an expansion, what hidden costs will they need to consider?
In addition to the equipment investment, there are many hidden costs to consider when determining storage needs and managing the resulting growth. The following are some of the hidden costs that come with storage.
How can you get the most out of existing storage space so that it does not fill up so quickly? And how do you prevent your storage space from running out before its full life expectancy is realized? This is where storage management software, such as SRM and ILM, enters the picture. Storage Resource Management (SRM) software gives storage administrators the right tools to manage space effectively. Information Lifecycle Management (ILM) software helps manage data throughout its lifecycle.
While a viable solution, SRM and ILM software may not cover all the needs of a business environment. SRM and ILM software are designed to manage files and storage effectively, and with a level of automation. Beyond this is where good old-fashioned space management is required. Remember the days when space was at a premium and there were all sorts of methods to make sure that inactive files were stored somewhere else—like on floppies? Remember when file compression utilities came out and we were squeezing every duplicate byte out of files? Those techniques are not outdated just because the cost per MB has dropped or tools exist to help us manage data storage. Prudent storage practices never go out of style.
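As a rough illustration of that kind of housekeeping, the sketch below walks a shared directory, finds files untouched for a year, and compresses them into an archive area. The paths and the age threshold are assumptions for the example, not a recommendation for any particular environment.

```python
import gzip
import os
import shutil
import time

DATA_DIR = "/srv/shared"       # hypothetical live storage area
ARCHIVE_DIR = "/srv/archive"   # hypothetical archive location
MAX_AGE_DAYS = 365             # illustrative inactivity threshold

cutoff = time.time() - MAX_AGE_DAYS * 86400

for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            # Mirror the directory layout under the archive area.
            rel = os.path.relpath(path, DATA_DIR)
            target = os.path.join(ARCHIVE_DIR, rel + ".gz")
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with open(path, "rb") as src, gzip.open(target, "wb") as dst:
                shutil.copyfileobj(src, dst)   # compress the inactive file
            os.remove(path)                    # reclaim space on primary storage
```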
Power consumption
Manufacturers are working hard to optimize the performance of their machines, yet server power consumption continues to increase. What will be the power requirement of your company’s new storage solution? Luiz André Barroso at Google reports that if performance per watt remains constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin.
Power consumption can be a hidden recurring cost that may not have been expected with the expansion of storage space. Especially when you consider the fluctuating cost of energy, unanticipated increases in power usage can be an expensive budget buster affecting the entire enterprise.
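To see how quickly electricity can rival the purchase price, here is a back-of-envelope calculation. Every number in it is an assumption chosen for illustration, not a vendor figure or Barroso’s data.

```python
# Illustrative only: compare the purchase price of a storage server with its
# electricity cost over a service life. All numbers are assumptions.
hardware_cost = 3000.0   # USD, assumed purchase price
draw_watts = 500.0       # assumed average draw, including cooling overhead
rate_per_kwh = 0.12      # assumed electricity rate in USD
years = 4                # assumed service life

kwh = draw_watts / 1000.0 * 24 * 365 * years
power_cost = kwh * rate_per_kwh
print(f"Hardware: ${hardware_cost:,.0f}  Power over {years} years: ${power_cost:,.0f}")
# With these assumptions power comes to roughly $2,100 -- the same order of
# magnitude as the hardware itself.
```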
Cooling requirements
Closely related to power consumption is the need to keep the more powerful processors found in the latest machines cool. Both the performance and the life expectancy of equipment are tied to component temperature. Ever since the Pentium II processor in 1997, proper heat dissipation using heat sinks and cooling fans has been standard for computer equipment. Today’s high-performance processors, main boards, video cards, and hard drives require reliable temperature management in order to work effectively and efficiently day in, day out.
If your or your client’s storage requirements grow, proper ambient server room temperatures are going to be required. Adding such a room or creating the necessary environment may add build-out costs, not to mention increase the power consumption and energy costs mentioned earlier.
Noise
With proper heat dissipation and cooling comes noise. All those extra fans and cooling compressors can generate a noticeable number of decibels. A large-scale IT environment has the luxury of keeping its noisy machines away from users. In a smaller business or home office, however, some have found the sound levels generated by their storage equipment intolerable or, at minimum, concentration-breaking. Such noise makes the surrounding area non-conducive to work and productivity, hindering employees’ ability to simply think. When increasing your data storage, make sure the resulting noise is tolerable. Be sure, too, that noise suppression efforts don’t interfere with or defeat heat dissipation and cooling solutions.
Administrative cost
The equipment investment for the expansion may be significant, but how does the increased storage affect administrative needs? Should management hire a network consultant to assess user needs and then install, set up, and test the new equipment? Or can the company’s in-house network administrator do the work? A small company runs a risk here: it may not be able to afford a professional assessment and installation, but with an inexpensive solution it may learn the old adage “you get what you pay for” the hard way.
A non-professional might misdiagnose storage usage needs, set up the equipment incorrectly, or buy equipment that isn’t a good fit for the environment. Such unintentional blunders are why certifications for network professionals exist. Storage management is not as simple as adding more space when needed; it is a complicated, multi-layered endeavor affecting every aspect and employee of a business.
Although using the skills of a professional greatly increases the chance of a successful storage expansion, it will raise the final cost. When weighing the monetary expense, businesses must also consider how much of the other ‘costs’ - overall risk, loss of data availability, system downtime if the implemented solution fails - they can afford.
Backup management
How does your business currently manage backup cycles and the corresponding storage needs? Do you store your backups on-site, or do you have a safe alternate location at which to store this precious data? Natural disasters such as fires and floods, and extreme disasters like Hurricane Katrina, are wake-up calls to many who are resistant to the idea of offsite data storage. Offsite data storage may be as simple as storing backup tapes off site or archiving data with a data farm for a monthly rental fee, or as complex as maintaining a mirrored site housing a direct copy of all your data (effective but costly).
Whatever backup management and storage process is used, the backups created should be tested, and the backup system should be verified against the expanded storage to make sure it is actually backing everything up. There is nothing worse than relying on a backup that doesn’t work, was improperly created, or doesn’t contain the vital data your business needs.
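One simple way to test a backup is to restore it somewhere else and compare checksums against the live data. The sketch below shows the idea; the paths are hypothetical.

```python
import hashlib
import os

def checksum(path, chunk=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_restore(source_dir, restore_dir):
    """Compare every file under source_dir with its restored counterpart."""
    problems = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(restore_dir, os.path.relpath(src, source_dir))
            if not os.path.exists(dst):
                problems.append(f"missing from restore: {dst}")
            elif checksum(src) != checksum(dst):
                problems.append(f"contents differ: {dst}")
    return problems

# Hypothetical paths: a live share and a test restore of last night's backup.
for issue in verify_restore("/srv/shared", "/mnt/restore_test"):
    print(issue)
```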
Database storage
The databases created by daily business activities can be staggering; one large retail corporation, for example, generates a billion rows of sales data daily, and all of that data must be stored somewhere. One way to optimize database performance is to separate the database files and store them in three separate locations: data files in one location, transaction files or logs in a second location, and backups in a completely different location. This not only makes data processing more efficient but also prevents an “all the eggs in one basket” scenario, which is beneficial when experiencing a process disruption such as an equipment failure.
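If you want a quick sanity check that data files, logs, and backups really do live on separate devices, a script can compare the device IDs of the three mount points. The paths below are placeholders.

```python
import os

# Hypothetical mount points for the three classes of database files.
locations = {
    "data files": "/mnt/db_data",
    "transaction logs": "/mnt/db_logs",
    "backups": "/mnt/db_backups",
}

devices = {}
for role, path in locations.items():
    devices[role] = os.stat(path).st_dev   # device ID of the filesystem holding the path

if len(set(devices.values())) < len(devices):
    print("WARNING: some locations share the same device -- eggs in one basket:")
    for role, dev in devices.items():
        print(f"  {role}: device {dev}")
else:
    print("Data, log, and backup locations are on separate devices.")
```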
Undertaking this type of database optimization involves the aforementioned planning and equipment costs. But keep in mind how database information reaches into all areas of the business - customer information, billing information, inventory management information - and how vital it is that this information be protected. Hidden costs associated with protecting database information can escalate quickly.
Installation and cabling
The old trend was a standalone unit in which the processor and storage were one system. Now the trend is to build a separate networked storage system that can be accessed by many users and servers. In general, there are two types of separate storage systems: the storage area network (SAN) and network attached storage (NAS).
A separate storage system offers a number of advantages, including easier expansion. The consideration, however, is that you will need the network infrastructure to support it. In other words, if your storage system is in a separate building, you will need faster network connectivity to avoid a “bottleneck” in communication between the server and the storage device.
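A quick calculation makes the bottleneck concern concrete. Ignoring protocol overhead, here is roughly how long it takes to move a terabyte over a few common link speeds; the numbers are illustrative only.

```python
# Back-of-envelope transfer times for a separate storage system, ignoring
# protocol overhead and contention. Link speeds and data size are assumptions.
data_tb = 1.0                             # assumed amount of data to move
links_gbps = {"100 Mb/s": 0.1, "1 Gb/s": 1.0, "10 Gb/s": 10.0}

data_bits = data_tb * 1e12 * 8            # terabytes to bits (decimal TB)
for name, gbps in links_gbps.items():
    seconds = data_bits / (gbps * 1e9)
    print(f"{name}: {seconds / 3600:.1f} hours to move {data_tb} TB")
```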
Disaster recovery
A disaster recovery plan encompasses everything that could happen if there is a system failure due to destruction, natural disaster, fire, theft, or equipment failure. Part of a good disaster recovery plan is a business continuation plan, that is, how to keep the business operating despite the disaster. When planning a data storage expansion, the disaster recovery plan should be reviewed to make sure the company’s data will be accessible in the event of a contingency, and it should be closely aligned with business continuity planning and efforts.
Data recovery
Data recovery can become a hidden cost if not planned for. Every business continuity plan and disaster plan should include professional data recovery services as part of their overall solution.
Ontrack has successfully recovered data for customers who have lost data due to failures encountered during storage space migration or expansion, mirroring failures, system shutdowns due to environmental abnormalities, natural disaster, backup inconsistencies, and software and database corruption.
As you can see, there is much more to scalable growth than just adding storage space. Even when prudent planning and every precaution have gone into implementing an effective storage management solution, failures and unforeseen circumstances can and do occur. Simply put, despite the best preparation, disasters do happen. Ontrack Data Recovery is your partner for success when you or your users experience data loss and is here to assist with the recovery and restoration of the original data.

Determining the Need to Recover Lost Data

To Recover or Not To Recover, That Is the Question
A data loss has occurred - now what? Determining whether to recover lost data can be a difficult decision. There are several things to take into consideration when determining if data recovery is required.
Backup, Backup, Backup
Everyone knows the importance of a good backup system, so your first step should be to determine if the data is actually backed up. Many times lost data is stored on a backup tape, a backup hard drive, the network, or in various other locations throughout an organization.
Unfortunately, locating and reloading the lost information can be time consuming and deplete resources. If a backup is located, it is important to check that the most recent copy of the data is available. Backups often run on a set schedule, and if modifications to the data were saved after the backup occurred, that information will not be accessible.
Re-Creation
Another important option to consider is if the data can or should be re-created. Two items to take into account when considering this option include the type of data lost and the amount lost:
Type of Data - Different data has different perceived value. Recovering a customer database is (probably) more important than recovering a file containing possible names for a pet goldfish. Is the missing data a high-volume transaction database, such as a banking record? If so, it would be nearly impossible to recreate the thousands of transactions that were happening in real time. Other types of data, such as digital photos, may be impossible to re-create at all. Understanding the type of data that was lost is imperative to determining your next steps.
Amount of Data - Understanding how much data was lost can help you understand how much time and resources would be required to re-create the data. The more data lost, the more time and resources required to re-create it – if re-creation is even possible.
An additional point to consider is that with strict regulatory and legal requirements, many companies need access to their lost data in order to comply with these requirements. Accessibility to data and the legal requirements surrounding that data are essential to understand when considering if data recovery is necessary or not.
Putting the Power Back In Your Hands
Ontrack® VeriFile™ data reports can help determine if a recovery is necessary. Part of our complete evaluation service, VeriFile puts the power of the recovery in your hands by showing you which files are recoverable and which are not -- allowing you to make an informed business decision on moving forward with the full data recovery.
Perspective
Data recovery costs can be difficult to plan for because they are unexpected. No one wants to lose data, just as no one wants their car to break down or to have to call a plumber for a broken pipe. However, to put it into perspective against other business-related costs: vending services and that morning cup of coffee can run between $500 and $1,000 every month for a small business office, while an average recovery fee for a typical desktop, Windows-based system is around $1,000. Comparing those figures, the true value of data recovery becomes clear.

No Summer Vacation for Data Loss

Tips to protect against data loss during severe weather, heat, electrical storms and major disasters
According to Ontrack Data Recovery, extreme summer weather and the hurricane season cause a significant increase in data loss incidents during the summer months, meaning computer users need to pay special attention to protecting their valuable data starting immediately. From intense heat to electrical storms to major disasters like hurricanes, there are a variety of potential problems that can lead to data disasters. The National Oceanic and Atmospheric Administration recently predicted a “very active” Atlantic hurricane season for 2006, with up to 10 hurricanes, of which four to six could become “major” hurricanes of Category 3 strength or higher. After witnessing the devastation caused by Hurricane Katrina and several other storms that affected the US last year, it is imperative that proactive steps be taken to ensure proper data protection.
“As Katrina proved last year, summer storms can cause major data loss problems – but people shouldn’t forget about other weather-related issues like overheating,” said Jim Reinert, senior director of Software and Services for Ontrack Data Recovery. “A few simple steps can help computer users prepare for the upcoming season and avoid the headaches caused by weather-related data loss.”
Ontrack offers these tips to help protect against damage from severe summer weather and lessen the chances of data loss if damage does occur:
Summer heat can be a significant problem as drive failures can result from overheating. Keep your computer in a cool, dry area to prevent overheating;
If you are dealing with large servers, make sure they have adequate air conditioning. Increases in computer processor speed have resulted in more power requirements, which in turn require better cooling – especially important during the summer months;
Electrical storms can be a major problem during summer. Make sure to install a surge protector between the power source and the computer’s power cable to handle any power spikes or surges.
Invest in some form of Uninterruptible Power Supply (UPS), which uses batteries to keep computers running during power outages. UPS systems also help manage an orderly shutdown of the computer – unexpected shutdowns from power surge problems can cause data loss;
Check protection devices regularly: At least once a year you should inspect your power protection devices to make sure that they are functioning properly. Most good ones will have a signaling light to tell you when they are protecting your equipment properly;
Do not shake, disassemble or attempt to clean any hard drive or server that has been damaged – improper handling can make recovery operations more difficult which can lead to valuable information being lost;
Never attempt to dry water-damaged media by opening it or exposing it to heat – such as that from a hairdryer;
Do not attempt to operate visibly damaged devices;
For mission critical situations, contact a data recovery professional before any attempts are made to reconfigure, reinstall or reformat.

Success with Remote Data Recovery™

Data loss can happen to anyone and usually without warning. Remote Data Recovery™ service uses patented technology and trained engineers to allow us to recover data right on your server, desktop or laptop through an Internet connection or a modem. Remote Data Recovery is an excellent option, especially for server recoveries, because there is no need to dismantle and ship your drive or hardware in for service – which can be very challenging for server recoveries due to security and shipping costs. It also eliminates shipping time, which is of the essence when a server is inaccessible.
This article will highlight specifics surrounding instances in which Remote Data Recovery (RDR®) saved the day.
Success Story #1
A scheduled data migration to new equipment presented a challenge to IT staff. After the equipment was successfully installed, a migration of user data was started. During the migration, a bug in the destination server's operating system caused the source and destination volumes to become corrupt, making the user data inaccessible. The failed migration happened over the weekend and users had to have their critical files accessible by Monday morning.
Remote Data Recovery was the only option due to time and shipping concerns. While this organization did have a backup, it was weeks old and had limited value. The client had to have the original data recovered.
Success Story #2
A mid-sized enterprise had a rapidly growing storage pool on a SAN system. The users' area was regularly increased on an as-needed basis. Using the SAN storage management software, additional storage was added to the user area. While the Logical Unit Number (LUN) was being expanded, the user volume became corrupt and inaccessible. The storage administrator could not determine what had caused the problem, so a full diagnosis of the hard drives inside the SAN was started. After two days of running the SAN through lengthy tests, it was concluded that the hardware was sound. The volume was still inaccessible, and the only options were to reformat the user volume and restore from backup, or to engage a professional data recovery company. This enterprise chose Ontrack Data Recovery because of the Remote Data Recovery service. Remote Data Recovery engineers discovered that the Logical Volume Management database had become corrupted and that some file system damage had been sustained. Engineers worked around the clock to piece the original volume back together and then copied the data out to another SAN volume.
While ideal for servers, Remote Data Recovery also works well with other media, ranging from flash drives to floppy drives to desktops and laptops. This next success story outlines how RDR was able to recover multiple laptop users’ data quickly.
Success Story #3
An untested login script was accidentally released to the user groups. This login script started a reinstall of the operating system from core install images. Users logged in Monday morning only to wait an unusually long time for the login process to complete. Users quickly discovered that all their data was missing.
This script affected 300 users. The IT department was overwhelmed with angry users. This enterprise engaged Ontrack Data Recovery immediately and chose the Remote Data Recovery™ service because of the logistical nightmare it would be to send all of the laptops in.
The recovery consisted of a three-phased effort for each user's laptop: 1) recover original data from the newly reinstalled operating system, 2) search the entire media for specific files that were not found in phase one, 3) search the entire drive for Microsoft Outlook Personal Store (PST) files.
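Phase three above amounts to signature searching: scanning the raw drive for the byte pattern that begins a PST file. The sketch below shows the general technique against a hypothetical disk image; it is a simplified illustration, not Ontrack's tooling.

```python
# Scan a raw disk image for the signature that begins an Outlook PST file
# (the four bytes "!BDN"). Simplified illustration against a hypothetical image.
SIGNATURE = b"!BDN"
SECTOR = 512   # carved files usually begin on a sector boundary

def find_pst_offsets(image_path, chunk_size=64 * 1024 * 1024):
    offsets = []
    with open(image_path, "rb") as img:
        pos = 0
        tail = b""
        while True:
            chunk = img.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            base = pos - len(tail)
            start = 0
            while (hit := data.find(SIGNATURE, start)) != -1:
                absolute = base + hit
                if absolute % SECTOR == 0:        # keep only sector-aligned hits
                    offsets.append(absolute)
                start = hit + 1
            tail = data[-(len(SIGNATURE) - 1):]   # carry overlap across chunk boundary
            pos += len(chunk)
    return offsets

print(find_pst_offsets("laptop.img"))   # hypothetical image name
```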
The IT department was able to get 30 laptops connected at one time and domestic and international engineers worked to complete the recovery within four days.
Remote Data Recovery can truly be an around-the-clock service. Ontrack Data Recovery can utilize its domestic and international locations to complete remote jobs quickly and efficiently.
We’ve outlined how RDR can help solve data loss situations, but how does Remote Data Recovery really work? Here is a behind the scenes look at how we utilize Remote Data Recovery to recover lost data.
Ontrack Remote Data Recovery consists of three main components:
1) Communications Client – The customer initiates a connection to an Ontrack Data Recovery RDR Server using the specially designed RDR QuickStart™ software. The software is available in a form native to your operating system and also in a self-booting diskette for situations in which the operating system is not bootable. After initiating the application the customer selects the mode of communication, which can include a direct modem or Internet connection.
2) RDR Server – Once the connection is established to the Ontrack Data Recovery Server it is distributed to the next available Remote Data Recovery engineer in any of our worldwide locations.
3) RDR Workstation – A specially designed application allows the RDR engineer to run advanced data recovery tools on the computer system that lost data. Before beginning the recovery process, the engineer enables proprietary technologies that track and back up all changes that will be made to the system. This process provides the engineer with the ability to complete the recovery “virtually” before any changes are made to the system. Any changes made can be reversed or modified in order to provide the most complete recovery possible.
With security identified as a major concern, Remote Data Recovery implements several secure elements to help keep your data safe. Remote Data Recovery uses port 80 TCP/UDP, and the client software initiates the connection, not Ontrack Data Recovery. Ontrack Data Recovery also uses a proprietary communication protocol whose packets are encrypted using your Internet Explorer encryption libraries.
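The important security property described here is that the connection is dialed out from the customer's machine, so no inbound firewall rule is needed. The following sketch shows that general client-initiated pattern using ordinary TLS; the host name and handshake are placeholders, and it is not Ontrack's proprietary protocol.

```python
# Generic sketch of the "client dials out" pattern: the machine that lost data
# opens the connection to the recovery server. Standard TLS, placeholder host.
import socket
import ssl

RDR_HOST = "rdr.example.com"   # hypothetical recovery server
RDR_PORT = 443                 # a commonly open outbound port

context = ssl.create_default_context()          # verifies the server certificate
with socket.create_connection((RDR_HOST, RDR_PORT), timeout=30) as raw:
    with context.wrap_socket(raw, server_hostname=RDR_HOST) as conn:
        conn.sendall(b"HELLO recovery-client\n")   # placeholder handshake message
        reply = conn.recv(4096)
        print("server said:", reply)
```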
So whether you have a single flash drive with family vacation photos or a server filled with thousands of users’ files (like the success story below), Remote Data Recovery can help you get access to your data again.
Success Story #4
During a scheduled firmware update of all servers, storage controllers, and backup tape library machines, a large enterprise's Microsoft® Exchange server became corrupted and inaccessible. This organization did have a regular backup schedule, yet the most recent backup was corrupted. This server was one of four servers that handled users' email. The Exchange Information Store contained 3,000 user mailboxes, including many belonging to the executive staff, and it was over a hundred gigabytes in size.
Onsite technicians started an Information Store repair utility, and it had been running for days with no end in sight. By the end of the third day, users were demanding their archived messages. This organization decided to engage Ontrack Data Recovery because they were running out of time.
Initially, Remote Data Recovery was used to analyze the Information Store. However, due to hardware issues, the IT staff requested an onsite engineer, and one of Ontrack Data Recovery's Remote Data Recovery engineers traveled to the site. During the travel time, the analysis provided by Remote Data Recovery continued, giving the onsite engineer a head start on the recovery efforts.
Within 15 hours of the engineer's arrival, user mailboxes were being copied out. Over the next two days of around-the-clock service, all of the mailboxes were delivered back to the users. These results show how important it is to engage professional data recovery services early in a disaster.

Virtualized Tape Library

Virtualization is becoming more and more topical in the computer trade magazines. In some articles, virtualization has been hailed as the next frontier of computing. What is computer virtualization and how can you or your clients benefit from it?
Virtualization is a method of running other software or hardware applications under a host system. The virtual system and the host system share the same hardware. Virtualization can allow multiple systems to share one physical computer. For example, an enterprise could invest in a computer system with high processing power and maximum memory, and then, using virtualization, an administrator could have three or four operating systems running on that equipment (depending on the processing power of the equipment and the operating system requirements). The hardware cost savings alone justify your or your client’s attention to this exciting technology.
Recently, Microsoft® and VMware® (companies specializing in software virtualization) announced that consumers could download their virtualization software at no charge. Microsoft and VMware publicly releasing their virtual host server software free of charge encourages more individuals to become familiar with virtualized operating systems. This familiarity, combined with cheap access to massive amounts of storage (with individual disk drives at 700GB and single 1TB drives just around the corner, multi-terabyte arrays are commonplace) and RAID technology becoming more widespread and thereby more accessible, is anticipated to produce a proliferation of virtualization across business types and sizes.
Virtualization doesn’t stop with operating systems; you can also have virtualized applications and SAN storage pools. In line with these resource virtualization concepts, presenting storage components like hard disk drives as tape hardware is known as a virtual tape library, or VTL. The topic of this month’s technical article, VTL technology boasts a high percentage of return on investment, offers ease of installation within an existing archival environment, and affords faster data restores. Additionally, VTL doesn’t mean the end of the investment that has been made into physical tape machines or libraries. The architecture of the backup system can still stream data to a physical tape for offsite storage.
In a nutshell, VTL uses hardware and software to redirect the backup data that would have been sent to the tape library to a large RAID array instead. The backup software is able to do this because the RAID array is presented to it as a tape drive. Traditional backup options, such as Full, Differential, Incremental, and Snapshot schemas, still function in the same way in a VTL. Essentially, the backup schema in place before the VTL implementation will still be available after migrating to a VTL setup.
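Conceptually, a virtual tape is just a sequential stream written to files on the disk array while the backup software believes it is talking to a tape drive. The toy class below models only that idea; a real VTL emulates tape hardware at the SCSI level, and the library path and barcode here are made up.

```python
import os

class VirtualTape:
    """Toy model of one virtual tape: a sequential file on a RAID-backed path."""

    def __init__(self, library_path, barcode):
        self.path = os.path.join(library_path, f"{barcode}.vtape")
        self._fh = None

    def load(self):
        # "Mount" the virtual tape by opening its backing file for appending.
        self._fh = open(self.path, "ab")

    def write_block(self, data: bytes):
        # Backup data is streamed sequentially, just as it would be to real tape.
        self._fh.write(len(data).to_bytes(4, "big"))  # simple block-length header
        self._fh.write(data)

    def unload(self):
        self._fh.close()
        self._fh = None

# Hypothetical usage: the "library" lives on a RAID array mounted at /raid/vtl.
library = "/raid/vtl"
os.makedirs(library, exist_ok=True)
tape = VirtualTape(library, "VT0001")
tape.load()
tape.write_block(b"backup stream chunk ...")
tape.unload()
```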
Storage Concepts of a Virtual Tape Library
The storage concepts of VTL revolve around streaming backup data to a RAID 0 or RAID 5 configuration. There are several advantages to streaming the data to a disk array first, the principal among them being speed. Benchmark tests have shown that the transfer throughput (from server to backup disk array) is noticeably increased, because the data transfer to magnetic tape media is eliminated. Retrieval of archived data is also much faster because there is no bottleneck from rewind and fast-forward operations, or from cataloging tape archives and sessions.
Storage for a VTL system can start at the half terabyte range and go into the hundreds of terabytes depending on your needs. Storage can be high performance Fibre Channel or iSCSI systems. Alternatively, SATA (Serial ATA) and PATA (Parallel ATA) systems are available and are usually lower in cost. All of these storage systems are a good choice for VTL implementations.
VTL software and hardware also support multiple virtual tape libraries. Historically, in environments built around a single physical tape machine, a great deal of data had to move through that one device. To address this, IT administrators added multiple tape machines, or large tape libraries employing many tape machines, to spread the workload and keep the data transfer balanced. VTL setups offer the same multiplicity of backups running at once, which means you can distribute the archiving process over a greater number of data areas. Despite the virtualization, however, the data will still be physically stored on the RAID storage array.
For IT environments that have specific policies regarding offsite storage of data, nearly all VTL systems now support a physical tape library that is connected to the VTL, allowing a consistent flow of archived data to be “re-archived” onto a physical tape—a backup of a backup. This helps to doubly ensure that user files are being protected. The secondary archive is set to a schedule where tapes can be stored or recycled.
Some organizations have built a VTL setup on a WAN scale. In theory, this enables an organization to host a remote disaster recovery site as little as 50 miles away. By utilizing point-in-time snapshots in conjunction with such a VTL setup, the time needed to restore data during an outage is reduced considerably.
A large number of tape backup applications already employ some sort of tape virtualization. If you have specific requirements in this regard, you should contact your software vendor. So how does the entire system work?
Operation Flow
Operationally, the environment does not change, and the scheduled backups still happen as they have already been set up. The hardware and software may require some installation depending on the equipment involved, with connectivity details (IP, SCSI, iSCSI, Fibre Channel) dependent on the topology of the network.
With more setup and configuration a more dynamic, fault-tolerant solution can be installed—all without the overhead, media cost, and tape recycling schedules.
What exactly does virtualization bring to this configuration? Virtualization has the potential to remove tape media from the topology completely. As mentioned previously, products are available that can create multiple virtual libraries or tape machines. The advantage is that multiple backups can run from different servers, all into one storage pool. This storage pool can then perform a less rigorous backup to tape, or to another VTL. The second-level VTL can be slower disk storage and function as an ongoing backup of the first-level backup. The easy availability of products for creating VTL environments, along with affordable technology, has made this dual backup process with different schedules possible.
Data Recovery of VTL storage
Today’s compliance and regulatory laws require organizations to ensure ‘data availability.’ You won’t get an understanding nod from an auditor by saying, “The server you wanted to look at has just failed.” What happens when there is a failure on the storage array that is hosting your first-level or second-level backup data?
All is not lost! A professional data recovery firm can rebuild and extract the data from storage arrays that are used in VTL systems, focusing on the data contained within the tape archive files post-extraction. Today’s complex archiving software will store the target files with a high compression ratio and internal cataloging method. Only a competent and experienced data recovery firm like Ontrack Data Recovery will be able to deliver the archived data in a timely fashion.
Regardless of the method or media used to store the data, Ontrack Data Recovery will be able to assist you and your organization should the worst happen. By partnering with Ontrack, you are adding a third level of recovery. With Ontrack Data Recovery as part of your or your client’s disaster recovery plan, you or your client can weather any data disaster.
Ontrack Data Recovery is the largest, most experienced and technologically advanced provider of data recovery products and services worldwide. Ontrack is able to recover lost or corrupted data from virtually all operating systems and types of storage devices through its do-it-yourself, remote and in-lab capabilities, using its hundreds of proprietary tools and techniques. Ontrack invests in technology and techniques to speed recovery times and enhance recovery capabilities.

Common Scenarios of Server Data Disasters

Ontrack Data Recovery has been the undisputed leader in the industry with the most technologically advanced data recovery solutions available. We have been serving customers globally for nearly 20 years with offices, clean rooms, engineers, and employees located around the world. During that time, we have seen many data loss situations ranging from commonplace to unique.
When a data loss occurs on something as valuable as a server, it is essential to the life of your business to get back up and running as soon as possible. Ontrack Data Recovery’s experience and technology for recovering data from systems ranging from legacy and post-mainframe storage devices to the latest high-end SANs help you do just that.
Here is a sampling of specific types of disasters accompanied with actual engineering notes from recent Remote Data Recovery™ jobs:
Causes of Partition/Volume/File System Corruption Disasters
Corrupted File System due to system crash
File system damaged by automatic volume repair utilities
File system corruption due to partition/volume resizing utilities
Corrupt volume management settings
Case Study
Severe damage to partition/volume information on a Windows 2000 workstation; customer had used 3rd party recovery software--didn't work; reinstalled OS but was looking for the 2nd partition/volume; found it, and it was a 100% recovery.
Evaluation Time: 46 minutes (Evaluation time represents the time it takes to evaluate the problem, make necessary file system changes to access data, and report on all of the directories and files that can be recovered)
Causes of Specific File Error Disasters
Corrupted business system database; file system is fine
Corrupted message database; file system is fine
Corrupted user files
Case Study
Windows 2000 server, volume repair tool damaged file system; target directories unavailable. Complete access to original files critical. Remote Data Recovery safely repaired volume; restored original data, 100% recovery.
Evaluation Time: 20 Minutes
Exchange 2000 server, severely corrupted Information Store; corruption cause unknown. Scanned Information Store file for valid user mailboxes; results took up to 48 hours due to the corruption. Backup was one month old/not valid for users.
Evaluation Time: 96 Hours (4 days)
Possible Causes of Hardware Related Disasters
Server hardware upgrades (Storage Controller Firmware, BIOS, RAID Firmware)
Expanding Storage Array capacity by adding larger drives to controller
Failed Array Controller
Failed drive on Storage Array
Multiple failed drives on Storage Array
Storage Array failure but drives are working
Failed boot drive
Migration to new Storage Array system
Case Study
Netware volume server, Traditional NWFS, failing hard drive made volume inaccessible; Netware would not mount volume. Errors on hard drive were not in the data area and drive was still functional. Copied all of the data to another volume; 100% recovery.
Evaluation Time: 1 hour
Causes of Software Related Disasters
Business System Software Upgrades (Service Packs, Patches to Business system)
Anti-virus software deleted or truncated a suspect file in error, and the data has been deleted, overwritten, or both.
Case Study
Partial drive copy overwrite using third party tools; overwrite started and then crashed 1% into the process; found a large portion of the original data. Rebuilt file system, provided reports on recoverable data; customer will require that we test some files to verify quality of recovery.
Evaluation Time: 1 hour
Causes of User Error Disasters
During a data loss disaster, restoring backup data to the exact location of the lost data, thereby overwriting it
Deleted files
Operating system overwritten by a reinstall of the OS or application software
Case Study
User's machine had the OS reinstalled – a Restore CD was used; user looking for an Outlook PST file. Searched the drive for PST data because the original file system was completely overwritten. Found three potential files that might contain the user's data; after using PST recovery tools we found one of those files to contain the user's email. There were some missing messages, but the majority of the messages/attachments came back.
Evaluation Time: 5 hours
Causes of Operating System Related Disasters
Server OS upgrades (Service Packs, Patches to OS)
Migration to different OS
Case Study
Netware traditional, 2TB volume, damage to file system when trying to expand size of volume; repaired on drive, volume mountable.
Evaluation Time: 4 hours

Server Recovery Tips

Data disasters will happen. Accepting that reality is the first step in preparing a comprehensive disaster plan. Time is always against an IT team when a disaster strikes, therefore the details of a disaster plan are critical for success.
Here are some suggestions from Ontrack Data Recovery engineers of what not to do when data disasters occur:
In a disaster recovery, never restore data to the server that has lost the data - always restore to a separate server or location.
In Microsoft Exchange or SQL failures, never try to repair the original Information Store or database files - work on a copy.
In a deleted data situation, turn off the machine immediately rather than shutting down Windows - powering off reduces the risk of the deleted data being overwritten.
Use a volume defragmenter regularly.
If a drive fails on RAID systems, never replace the failed drive with a drive that was part of a previous RAID system - always zero out the replacement drive before using.
If a drive is making unusual mechanical noises, turn it off immediately and get assistance.
Have a valid backup before making hardware or software changes.
Label the drives with their position in a RAID array.
Do not run volume repair utilities on suspected bad drives.
Do not run defragmenter utilities on suspected bad drives.
In a power loss situation with a RAID array, if the file system looks suspicious, is unmountable, or the data is inaccessible after power is restored, do not run volume repair utilities.
Ontrack Data Recovery should be part of your disaster planning and your key personnel should be aware of our recovery capabilities. During an outage, it is common to have multiple recovery efforts going on at the same time. This makes sense because the goal is to get the company back to its data. The key to success is to get Ontrack Data Recovery involved as soon as possible.

Large Scale Solutions in Storage Systems

Avoiding Storage System Failure - Reduce or Eliminate the Impact of Storage System Failures
Storage systems have become their own unique and complex computer field and can mean different things to different people. So what is the definition of these systems? Storage systems are the hardware that stores data.
For example, this may be a small business server supporting an office of ten users or less—the storage system would be the hard drives inside that server where user information is located. In large business environments, the storage system can be a large SAN cabinet full of hard drives, with the space sliced and diced in different ways to provide redundancy and performance.
The Ever-Changing Storage System Technology
Today’s storage technology encompasses all sorts of storage media. These could include WORM systems, tape library systems and virtual tape library systems. Over the past few years, SAN and NAS systems have provided excellent reliability. What is the difference between the two?
SAN (Storage Area Network) units can be massive cabinets—some with 240 hard drives in them! These large 50+ Terabyte storage systems are doing more than just powering up hundreds of drives. These systems are incredibly powerful data warehouses that have versatile software utilities behind them to manage multiple arrays, various storage architecture configurations, and provide constant system monitoring.
NAS (Network Attached Storage) units are self-contained units that have their own operating system, file system, and manage their attached hard drives. These units come in all sorts of different sizes to fit most needs and operate as file servers.
For some time, large-scale storage has been out of reach for the small business. Serial ATA (SATA) hard disk drive-based SAN systems are becoming a cost-effective way of providing large amounts of storage space. These array units are also becoming mainstream for virtual tape backup systems—literally RAID arrays that are presented as tape machines, thereby removing the tape media element completely.
Other storage technologies such as iSCSI, DAS (Direct Attached Storage), near-line storage (data kept on removable media), and CAS (Content Addressable Storage) are all methods for providing data availability. Storage architects know that just having a ‘backup’ is not enough. In today’s high-information environments, a normal nightly incremental or weekly full backup is obsolete within hours or even minutes of creation. In large data warehouse environments, backing up data that constantly changes is not even an option. The only method for those massive systems is to have storage system mirrors—literally identical servers with the exact same storage space.
How does one decide which system is best? Careful analysis of the operation environment is required. Most would say that having no failures at all is the best environment—that is true for users and administrators alike! The harsh truth is that data disasters happen every day despite the implementation of risk mitigation policies and plans.
When reviewing your own or your client’s storage needs, consider these questions:
What is the recovery turn-time? That is, what is the maximum time you or your client can allow before being back to the data? In other words, how long can you or your client survive without the data? This will help establish performance requirements for equipment.
Quality of data restored
Is original restored data required or will older, backed up data suffice? This relates to the backup scheme that is used. If the data on your or your client’s storage system changes rapidly, then the original data is what is most valuable.
How much data are you or your client archiving? Restoring large amounts of data takes time to move through a network. On DAS (Direct Attached Storage) configurations, restoration time will depend on the equipment and the I/O performance of the hardware.
Unique Data Protection Schemes
Storage System manufacturers are pursuing unique ways of processing large amounts of data while still being able to provide redundancy in case of disaster. Some large SAN units incorporate intricate device block-level organization, essentially creating a low-level file system from the RAID perspective. Other SAN units have an internal block-level transaction log in place so that the Control Processor of the SAN is tracking all of the block-level writes to the individual disks. Using this transaction log, the SAN unit can recover from unexpected power failures or shutdowns.
Some computer scientists specializing in the storage field are proposing adding more intelligence to the RAID array controller card so that it is ‘file system aware.’ This technology would provide more recoverability in case disaster strikes, the goal being that the storage array becomes more self-healing.
Another idea along these lines is a heterogeneous storage pool where multiple computers can access information without being dependent on a specific system’s file system. In organizations with multiple hardware and system platforms, a transparent file system would provide access to data regardless of which system wrote it.
Other computer scientists are approaching the redundancy of the storage array quite differently. The RAID concept is in use on a vast number of systems, yet computer scientists and engineers are looking for new ways to provide better data protection in case of failure. The goals that drive this type of RAID development are data protection and redundancy without sacrificing performance.
The University of California, Berkeley report on the amount of digital information produced in 2003 is staggering. Your or your client’s site may not hold terabytes or petabytes of information, yet during a data disaster, every file is critically important.
Avoiding Storage System Failures
There are many ways to reduce or eliminate the impact of storage system failures. You may not be able to prevent a disaster from happening, but you may be able to minimize the disruption of service to your clients.
There are many ways to add redundancy to primary storage systems. Some of the options can be quite costly, and only large business organizations can afford the investment. These options include duplicate storage systems or identical servers, known as ‘mirror sites’. Additionally, elaborate backup processes or file-system ‘snapshots’ that always have a checkpoint to restore to provide another level of data protection.
Experience has shown there are usually multiple or rolling failures that happen when an organization has a data disaster. Therefore, to rely on just one restoration protocol is shortsighted. A successful storage organization will have multiple layers of restoration pathways.
Ontrack Data Recovery has heard thousands of IT horror stories of initial storage failures turning into complete data calamities. In an effort to bring back a system, some choices can permanently corrupt the data. Here are several risk mitigation policies that storage administrators can adopt that will help minimize data loss when a disaster happens:
Offline storage system — Avoid forcing an array or drive back on-line. There is usually a valid reason for a controller card to disable a drive or array, and forcing an array back on-line may expose the volume to file system corruption.
Rebuilding a failed drive — When rebuilding a single failed drive, it is important to allow the controller card to finish the process. If a second drive fails or goes off-line during this process, stop and get professional data recovery services involved. During a rebuild, replacing a second failed drive will change the data on the other drives.
Storage system architecture — Plan the storage system’s configuration carefully. We have seen many cases with multiple configurations used on a single storage array. For example, three RAID 5 arrays (each holding six drives) are striped in a RAID 0 configuration and then spanned. Keep a simple storage configuration and document each aspect of it.
During an outage — If the problem escalates up to the OEM technical support, always ask “Is the data integrity at risk?” or, “Will this damage my data in any way?” If the technician says that there may be a risk to the data, stop and get professional data recovery services involved.
Ontrack Data Recovery - The Leader in Storage System Recoveries
Ontrack Data Recovery has been successfully recovering data from large storage systems for many years. Ontrack Data Recovery’s unique approach is what sets us apart from other data recovery companies.
A recovery of a data volume implementing a RAID configuration starts with a Senior Engineer evaluating each hard disk involved and analyzing the data structures to determine the proper recovery path. There is no standard configuration for these systems, and each OEM implements RAID configurations differently, making every job unique and challenging. The final step is verifying that the file system correctly points to the data, validating both the file system information and the data.
These types of recoveries are the pinnacle of engineering challenges. It is amazing to see one of these systems come together after hours of hard work – going from a data disaster to a complete and successful recovery. Often times, these recoveries result in the original files being recovered and archived without any hardware or software manipulation required on the part of the customer.
We applaud the storage industry for continuing to find better ways to preserve data and maintain business continuity. Some failures are beyond the soft recovery methods that the hardware can handle. This is where Ontrack Data Recovery fits into your or your client’s Data Availability plans. Ontrack Data Recovery has services available to accommodate your or your client’s time requirements for original data restoration.
Ontrack Data Recovery is the leader in storage system data recovery because of our experience, development resources, and engineering staff. Ontrack Data Recovery is the data recovery company of choice for users, partners, and IT professionals who have high requirements for data recovery.

Protect Your Data from Extreme Weather

Every summer, Ontrack Data Recovery engineers see the same pattern: a surge in data recovery service requests that coincides with the start of the severe storm season. Ontrack Data Recovery has over 20 years in the data recovery business, and for 20 years the summer months have always meant high demand for recovery services.
You can protect your data by following some simple precautions. With that said, even the most well-protected hard drives can crash, fail, quit, click, die… you get the picture. So we’ve also provided a few tips for how to respond when extreme weather does damage your computer equipment.
If you would like to read more about the latest data protection and recovery topics, subscribe to our free monthly e-newsletter: Data Recovery News.
Protecting Your Data from Severe Weather
1. Summer heat can be a significant problem, as overheating can lead to drive failures. Keep your computer in a cool, dry area to prevent overheating.
2. Make sure your servers have adequate air conditioning. Increases in computer processor speed have resulted in more power requirements, which in turn require better cooling - especially important during the summer months.
3. To prevent damage caused by lightning strikes, install a surge protector between the power source and the computer’s power cable to handle any power spikes or surges.
4. Invest in some form of Uninterruptible Power Supply (UPS), which uses batteries to keep computers running during power outages. UPS systems also help manage an orderly shutdown of the computer - unexpected shutdowns from power surge problems can cause data loss.
5. Check protection devices regularly: At least once a year you should inspect your power protection devices to make sure that they are functioning properly.
Responding to Data Loss Caused by Severe Weather
1. Do not attempt to operate visibly damaged computers or hard drives.
2. Do not shake, disassemble or attempt to clean any hard drive or server that has been damaged - improper handling can make recovery operations more difficult which can lead to valuable information being lost.
3. Never attempt to dry water-damaged media by opening it or exposing it to heat - such as that from a hairdryer. In fact, keeping a water-damaged drive damp can improve your chances for recovery.
4. Do not use data recovery software to attempt recovery on a physically damaged hard drive. Data recovery software is only designed for use on a drive that is fully functioning mechanically.
5. Contact Ontrack Data Recovery at 800 872 2599 for free data recovery consultation 24/7/365. Our experts will explain options and answer any questions you have about your damaged data storage devices.
Never assume that data is unrecoverable - no matter how extreme the damage. Ontrack Data Recovery engineers have retrieved data from devices damaged in Hurricane Katrina, the Columbia Space Shuttle Disaster, and numerous other disasters.

Beyond “just” Data Recovery: Tape Restoration Services

What happens when you need to access files from an old backup tape that is no longer compatible with your backup system, tape drive or backup software?
The rapidly changing world of IT means that new innovations are constantly replacing the latest technology. As backup regimes change, old tapes become obsolete even though requests to restore old files keep coming. Furthermore, data compliance regulations require businesses to retain data for many years, often longer than the availability of the technology used to store it.
In most cases, the data format from the "old" system is not compatible with the "new", making the transfer of data a major challenge. With 20 years' experience in data restoration, Kroll Ontrack has worked with practically every type of storage format, media and data loss scenario. This gives us unique insight and the ability to facilitate the conversion of data between various platforms, file formats, tape formats, etc.
Tape Conversion & Tape Migration
Kroll Ontrack's tape conversion service can transfer data files from an unrivalled array of backup formats, even those that have been obsolete for many years. Moreover, Kroll Ontrack’s data conversion utilities are used throughout the industry to reformat data from mainframe systems to allow for the importation into PC and UNIX database and spreadsheet applications. Kroll Ontrack has developed a wide range of tools and bespoke software to process data so that it can be transferred to newer systems in a quick and accurate manner.
Tape Duplication
Kroll Ontrack has developed software and systems that allow for the off-line duplication of your vital backups or for the creation of multiple copies of data for distribution. With most backup applications your only choice is to repeat the backup to create a second copy of your save sets. This can cause problems as system performance will be affected and often the extra copy has to be made during business hours when the most important files are in use. Kroll Ontrack can copy your tapes independently from your live systems and validate them with a restore process. We can also supply a system solution for your duplication requirement.
Tape Recovery
Kroll Ontrack, through its Ontrack Data Recovery services, can quickly and successfully recover lost data from tapes, no matter how extreme the cause. Trying tape recovery on your own or through an inexperienced provider may lead to further damage. Select a data recovery provider with resources, expertise and experience you can trust.
Causes of Tape Failure and Data Loss
Corruption – operational error, mishandling of the tape or accidental overwrites caused by inserting or partially formatting the wrong tape.
Physical damage – broken tapes, dirty drives, expired tapes and damage caused by fire, flood or other natural disaster
Software upgrades – inability for data on tape to be read by new application or servers
Tape Recovery Process
Tape recoveries are performed in dust-free cleanroom environments
Tapes and tape drives are carefully dismounted, examined and processed
Proprietary tools can “force” the drive to read around the bad area to recover your data successfully
Drives are imaged, and a copy of the disk is created and transferred to a new system
LTO Tapes
The LTO tape has emerged as one of the major players at the high data capacity and high performance end of the market. Kroll Ontrack has extensive experience with the recovery of data from LTO tapes. The majority of the problems that we see are the result of human error such as accidentally re-initialising a tape, or forgetting to enable the append option before starting a backup.
If you or your customer have a specific tape project in mind, contact one of our data recovery representatives for a free consultation.

Does Encryption Complicate Things?

Encryption continues to be the topic on every CIO and IT professional’s lips these days. No one wants to end up in the news as the next victim of a privacy breach or the next company that didn’t protect its customers’ information. If you conduct a news search using the words “personal data breach,” you’ll be alarmed at the number of instances in which personal information such as social security and credit-card numbers has been exposed to possible theft. In a recent breach, a state government site allowed access to hundreds of thousands of records, including names, addresses, social security numbers and documents with signatures.
Whether it’s government agencies, research facilities, banking institutions, credit card processing companies, hospitals, or your company’s computers, the risk of compromising private information is very high. At the recent “CEO-CIO Symposium,” speaker Erik Phelps from the law firm Michael Best & Friedrich described the relationship business has with technology. In his presentation, he stated that since “business relies so heavily on technology today, business risk becomes technology dependent.” Litigation has always been a risk of doing business, but because technology and today’s business are so intertwined, that risk now carries a higher threat level. This has prompted many to encrypt workstations and mobile computers in order to protect critical business data.
If you have rolled out encryption, how do you maintain your IT service quality when the hard disk drive fails? How do you plan and prepare for a data loss when the user’s computer is encrypted? These are all issues that should be considered when putting together a data disaster plan. In addition, data recovery, one of the more common missing elements of a disaster recovery plan, should also be factored in because it can serve as the “Hail Mary” attempt when all other options have been exhausted.
Data Recovery and Encryption
Business continuity and disaster planning are critical for businesses regardless of their size. Most archive and backup software has key features to restore user files, database stores and point-in-time snapshots of users’ files. Software is becoming more automated so users don’t have to back up their files manually. Some computer manufacturers have built-in backup systems that include dedicated hard disk drives for archive storage. Most external USB hard disk drives come with some sort of third-party software that provides data archiving during a trial period. Such solutions, while solving the data backup need, raise questions about how effective the systems are with respect to user data. What are your options when a user’s computer has a data disaster and the hard disk drive is fully encrypted?
Most IT security policies require a multi-pronged approach to data security. For example, when setting up a new computer for a user, the IT department will require a BIOS (Basic Input/Output System) password before the computer will start. BIOS password security varies in functionality. Some passwords are system specific, meaning that the computer will not start without the proper password. Other BIOS passwords are hard disk drive specific, meaning that the hard drive will not be accessible without the proper password. Some BIOS implementations employ one password for access control to both the system and the hard disk drive.

To add a second level of protection, newer IT security policies require full hard disk drive encryption. Most full hard disk encryption software operates as a memory-resident program: when the computer starts up, the encryption software is loaded before the operating system and prompts for a pass-phrase or password. After a successful login, the software decrypts hard disk drive sectors in memory as they are needed, and the process is reversed when writing to the hard disk drive. This leaves the hard disk drive in a constant state of encryption. The operating system and applications function normally, without having to be aware of any encryption software.
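As a rough illustration of that on-the-fly, sector-level model, the sketch below uses AES in CTR mode from the widely used Python cryptography package, deriving a nonce from the sector number so that any sector can be encrypted or decrypted on its own. This is a simplification for illustration only, not how any particular vendor's product works; real full disk encryption products typically use a tweakable mode such as XTS and derive the key from the user's pass-phrase.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # bytes per sector on a traditional hard disk drive

def _cipher(key, sector_number):
    # A per-sector nonce lets each sector be processed independently.
    nonce = sector_number.to_bytes(16, "big")
    return Cipher(algorithms.AES(key), modes.CTR(nonce))

def encrypt_sector(key, sector_number, plaintext):
    """Write path: data is encrypted before it reaches the platter."""
    enc = _cipher(key, sector_number).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(key, sector_number, ciphertext):
    """Read path: sectors are decrypted in memory only as they are needed."""
    dec = _cipher(key, sector_number).decryptor()
    return dec.update(ciphertext) + dec.finalize()

if __name__ == "__main__":
    key = bytes(32)  # placeholder; real products derive the key from the pass-phrase
    original = b"operating system data".ljust(SECTOR_SIZE, b"\x00")
    on_disk = encrypt_sector(key, 2048, original)
    assert decrypt_sector(key, 2048, on_disk) == original

The point of the sketch is simply that the operating system above this layer only ever sees plaintext, while the platters only ever hold ciphertext.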
The Recovery Process
Recovering from hard disk drives that are encrypted follows the same handling procedures as all other magnetic media. A strict process of handling and documentation starts right at the shipping door upon drive receipt and ends when the drive is shipped back to the customer. In most cases, when working with a top data recovery provider, all recovery processes are logged. This results in an audit trail of the recovery history and serves as verification that the recovery was conducted in a secure, compliant manner. Specifically, you want to ensure the process consists of the following high-level steps:
Triage drive; determine faults without opening drive
Clean room escalation for physical or electronic damage
Secure original media
Sector-by-sector copy of drive data
User Key used to decrypt data
Produce file listing of user file names
Repair file system
Prepare data for delivery
Encryption options for data delivery
After the first four stages listed above, the recovery engineer will begin to map all key file system structures that point to the user files. However, if the hard disk drive is encrypted, then the drive needs to be decrypted in order to proceed.
Decryption
When the drive is encrypted, a user key or decryption password is required. Fortunately, encryption software has come a long way over the years. Instead of using a single master password for decryption, most professional encryption software provides a technician-level pass-phrase that changes on a daily basis. This protects both the user’s password and the organization’s master password.
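The exact mechanism varies from product to product, and the following is only a hypothetical sketch of the idea: a daily technician pass-phrase could be derived from the organization's master secret and the current date with an HMAC, so the value handed to a recovery technician stops working the next day and never exposes the master secret itself.

import hmac
import hashlib
from datetime import date, timedelta

def technician_passphrase(master_secret, day):
    """Derive a one-day pass-phrase from a master secret and a calendar date.

    Hypothetical scheme for illustration only; real encryption products use
    their own (often challenge/response based) recovery mechanisms.
    """
    digest = hmac.new(master_secret, day.isoformat().encode(), hashlib.sha256).hexdigest()
    return "-".join(digest[i:i + 5] for i in range(0, 20, 5))

if __name__ == "__main__":
    secret = b"organization master secret"  # placeholder value
    today = date(2009, 9, 12)
    print(technician_passphrase(secret, today))                      # valid today
    print(technician_passphrase(secret, today + timedelta(days=1)))  # different tomorrow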
Many organizations are comfortable providing these one-time use pass-phrases so that the recovery work can continue. However, this is not always the case. For some organizations, providing this information to an outside vendor, such as a data recovery provider, is against their security policy. In these situations, a successful recovery is still possible. There are data recovery vendors that can perform recoveries while leaving the data in its encrypted form throughout the entire process. In this case, the data will be recovered and sent back to the client in its encrypted form; however, the specific results will be unknown until the files are opened by someone with access to the encryption key. Ultimately, this limits the ability for a data recovery provider to communicate the success of the recovery until the recovered data is delivered and opened, thereby placing some burden back on the customer.
As a result, it is clear that significant time and cost savings are associated with allowing your data recovery vendor to access your one-time use pass-phrase codes while attempting to recover your encrypted data. At the same time, it’s critical to ensure that your selected vendor also understands security protocols, is knowledgeable about encryption products and has privacy policies in place.
Resuming Recovery
Following the recovery, preparation for delivering the data begins. Since the original hard disk drive was encrypted, securing the recovered data is highly important. The recovered data is backed up to the user’s media of choice and re-encrypted. The new decryption key is communicated verbally to the user; email should not be used, as this could be a security risk. Some leading-edge data recovery companies are able to deliver recovered data back to the customer in an encrypted format on external USB/FireWire hard disk drives. From the start of the recovery to the final delivery, data should be secure throughout the entire process.
Data Recovery Vendor Considerations
When looking for a data recovery provider, it’s important to ensure that the one selected can handle not only the various types of media, but also understands the data security regulations of today’s organizations. For example, encrypted data requires special data handling processes, with work isolated between the clean room and the technically advanced recovery lab. This isolation ensures no one person has complete access to the media throughout the recovery process, providing security while maintaining recovery continuity and quality.
Additionally, it is important to note that some data recovery companies have been cleared for security projects and services for U.S. government agencies. As a result, these companies implement data privacy controls that are based on the U.S. government’s Electronic Defense Security Services requirements for civilian companies that are under contract for security clearance projects or services.
Unfortunately, most data loss victims only consider data recovery right after they have experienced a data loss and are scrambling for a solution. Emotions run high at this point. The fallout from a data disaster and corresponding data loss is sometimes crippling, with the IT staff working around the clock to get the computer systems back to normal. These distressed circumstances are not the time to think about what makes a good data recovery vendor. Incorporating this important decision into your business continuity planning is best done in advance. Some key questions to ask as part of this proactive exercise include:
Do you have a relationship with a preferred data recovery vendor?
What should you look for when reviewing data recovery companies?
Do you include data recovery in your disaster and business continuity planning?
Do you have a plan for how to handle data loss of encrypted data?
Do appropriate people have access to the encryption keys to speed up the recovery process?
Sometimes planning for these procedures can become involved and tedious, especially if you are planning for something you have never experienced. Do some investigating by calling data recovery service companies and presenting data loss situations such as email server recoveries, RAID storage recoveries, or physically damaged hard disk drives from mobile users. Ask about data protection and the policies in place to protect your company’s files.
Additionally, find out the techniques and recovery tools the providers use. Ask the companies how large their software development staff is. Inquire about how they handle custom development for unique data files. For example, will they be able to repair or rebuild your user’s unique files? Does the data recovery service company have any patents or special OEM certifications?
While these details may not seem important at first, they can be the decisive factors that determine whether your data recovery experience is a positive and successful endeavor.
Following is a checklist of factors to consider when searching for a data recovery vendor for encrypted data or ensuring your data recovery partner is able to comply with your data security policies:
Solid Reputation – Experienced data recovery company with a strong background.
Customer Service – Dedicated and knowledgeable staff.
Secure Protocols – Expert knowledge of encryption products with privacy protocols in place.
Technical Expertise – Capable of recovering from virtually all operating systems and types of storage devices.
Scalable Volume Operations – Equipped with full-service labs and personnel that can handle all size jobs on any media type.
Research & Development – Invested in technology for superior recoveries; not just purchasing solutions.
It is important to understand that data loss can occur at any time on any scale. It’s especially crucial to be prepared with a plan that adheres to your company’s security policy. The more prepared one is, the better the chance for a quick and successful recovery when a problem arises.
About the Author:
Sean Barry is the remote data recovery manager of North America at Kroll Ontrack, the largest, most experienced and technologically advanced provider of data recovery products and services worldwide. Kroll Ontrack is able to recover lost or corrupted data from virtually all operating systems and types of storage devices through its do-it-yourself, remote and in-lab capabilities, using its hundreds of proprietary tools and techniques.

Digital Photo Data Loss - Saving Memories in a Digital Era

With family get-togethers, holiday pageants and winter vacations, it’s definitely the season for taking pictures. Amateur photographers everywhere are grabbing their cameras to capture the perfect holiday memory – and now more than ever, they’re using digital cameras to do the job. According to Photo Marketing Association International (PMAI), 122 million digital cameras will be sold in 2008, and over half of all U.S. households will own a digital camera.
Many of these digital cameras no doubt ended up as holiday gifts – and along with them the digital media where the pictures are actually stored. Other digital toys like portable MP3 players and Personal Digital Assistants (PDAs) use digital media to store information, making memory cards, flash media and microdrives the kinds of products people will need to become accustomed to in the new year. However, like computers, digital media can suffer from corruption and make your information inaccessible. If you run into problems and think your precious holiday pictures are lost forever, don’t panic. Ontrack Data Recovery can help.
There are many types of digital storage media available today in various capacities, ranging from tiny memory cards that come bundled with cameras to high-capacity microdrives. Regardless of the format, people are trusting their pictures to a different medium than traditional film – and with that new medium come new problems. Instead of overexposure or a damaged roll, you have to deal with corrupted data and hardware failures. Most digital media is formatted with the FAT file system for data storage and organization. When this file system gets corrupted, the device that uses the memory card can’t find the data, so whatever information you have stored is “lost.” Even though it still exists on the memory card, the data is inaccessible.
What could cause the file system to become corrupted? Common situations include the device running low on power, or the card being removed while the device is still on; in either case the file system may no longer point to the data. When hardware failure occurs, the digital media is physically damaged and cannot connect with the device that reads the data. This typically happens due to accidental breakage or rough treatment.
In either case, it is important to remember that recovery is always a possibility. Although Ontrack Data Recovery typically deals with hard drives from individual users or huge servers from large companies, they also have the technology and expertise to handle all types of digital media. With their technique of finding critical data to rebuild the file system, Ontrack Data Recovery engineers use special tools in their data recovery labs to find lost data and repair hardware damage.
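One standard technique behind such recoveries is to scan the raw device for recognizable file signatures, because the picture data usually still sits in the card's flash memory even when nothing points to it any more. The sketch below is a much simplified, illustrative version of that idea; it is not Ontrack's actual tooling and it will not cope with fragmented files. It carves candidate JPEGs out of a raw card image by looking for the standard start-of-image and end-of-image markers:

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker (plus first segment byte)
EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(raw_image_path, output_prefix="recovered"):
    """Scan a raw card image for JPEG data and write each hit to its own file."""
    with open(raw_image_path, "rb") as f:
        data = f.read()

    count = 0
    pos = data.find(SOI)
    while pos != -1:
        end = data.find(EOI, pos)
        if end == -1:
            break
        with open(f"{output_prefix}_{count:04d}.jpg", "wb") as out:
            out.write(data[pos:end + len(EOI)])
        count += 1
        pos = data.find(SOI, end + len(EOI))
    return count

if __name__ == "__main__":
    # "card.img" is a hypothetical raw dump of the memory card.
    print(carve_jpegs("card.img"), "candidate photos carved")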
So if you open your presents to discover a new digital camera or other digital toy that uses digital media storage this holiday, embrace your new technology knowing that data recovery service is an option if any problems occur.

Computer Virus Information

Q: How can I protect myself from getting a virus?
In today's world having anti-virus software is not optional. A good anti-virus program will perform real-time and on-demand virus checks on your system, and warn you if it detects a virus. The program should also provide a way for you to update its virus definitions, or signatures, so that your virus protection will be current (new viruses are discovered all the time). It is important that you keep your virus definitions as current as possible.
Once you have purchased an anti-virus program, use it to scan new programs before you execute or install them, and new diskettes (even if you think they are blank) before you use them.
You can also take the following precautions to protect your computer from getting a virus:
Always be very careful about opening attachments you receive in an email -- particularly if the mail comes from someone you do not know. Avoid accepting programs (EXE or COM files) from USENET news group postings. Be careful about running programs that come from unfamiliar sources or have come to you unrequested. Be careful about using Microsoft Word or Excel files that originate from an unknown or insecure source.
Avoid booting off a diskette by never leaving a floppy disk in your system when you turn it off.
Write protect all your system and software diskettes when you obtain them. This will stop a computer virus from spreading to them if your system becomes infected.
Change your system's CMOS Setup configuration to prevent it from booting from the diskette drive. If you do this, a boot sector virus will be unable to infect your computer during an accidental or deliberate reboot while an infected floppy is in the drive. If you ever need to boot off your Rescue Disk, remember to change the CMOS back to allow booting from diskette!
Configure Microsoft Word and Excel to warn you whenever you open a document or spreadsheet that contains a macro (in Microsoft Word, check the appropriate box on the Tools > Options > General tab).
Write-protect your system's NORMAL.DOT file. By making this file read-only, you will hopefully notice if a macro virus attempts to write to it.
When you need to distribute a Microsoft Word file to someone, send an RTF (Rich Text Format) file instead. RTF files do not support macros, so you can be sure you won't inadvertently send an infected file.
Rename your C:\AUTOEXEC.BAT file to C:\AUTO.BAT. Then create a new C:\AUTOEXEC.BAT containing the single line auto. Because calling AUTO.BAT this way transfers control to it and never returns, any code a virus appends to the bottom of AUTOEXEC.BAT will never be executed, and with only one expected line in the file you can easily notice any viruses or trojans that try to add to, or replace, it.
Finally, always make regular backups of your computer files. That way, if your computer becomes infected, you can be confident of having a clean backup to help you recover from the attack.
Q: What types of files do you recommend that I scan and set for auto-protection?
Here's a list of file extensions that you should make sure your anti-virus software scans and autoprotects:
386, ADT, BIN, CBT, CLA, COM, CPL, CSC, DLL, DOC, DOT, DRV, EXE, HTM, HTT, JS, MDB, MSO, OV?, POT, PPT, RTF, SCR, SHS, SYS, VBS, XL?
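Real anti-virus engines match files against large, constantly updated signature databases. Purely as a toy illustration of what an on-demand scan limited to those extensions involves, the sketch below walks a directory tree, keeps only files whose extensions match the list (treating the ? wildcard as any single character) and compares each file's hash against a small, entirely hypothetical set of known-bad hashes:

import fnmatch
import hashlib
import os

# Extensions from the list above; "?" matches any single character (e.g. OVL, XLS).
EXTENSIONS = ["386", "ADT", "BIN", "CBT", "CLA", "COM", "CPL", "CSC", "DLL", "DOC",
              "DOT", "DRV", "EXE", "HTM", "HTT", "JS", "MDB", "MSO", "OV?", "POT",
              "PPT", "RTF", "SCR", "SHS", "SYS", "VBS", "XL?"]

# Placeholder signature database: SHA-256 hashes of known-infected files.
KNOWN_BAD_HASHES = {"0" * 64}

def should_scan(filename):
    ext = filename.rsplit(".", 1)[-1].upper() if "." in filename else ""
    return any(fnmatch.fnmatch(ext, pattern) for pattern in EXTENSIONS)

def on_demand_scan(root):
    """Return paths whose contents hash to a known-bad signature."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not should_scan(name):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                hits.append(path)
    return hits

if __name__ == "__main__":
    for infected in on_demand_scan("."):
        print("Possible infection:", infected)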
Q: What are some good indications that my computer has a virus?
A very good indicator is having anti-virus software tell you that it found several files on a disk infected with the same virus (sometimes if the software reports just one file is infected, or if the file is not a program file -- an EXE or COM file -- it is a false report).
Another good indicator is if the reported virus was found in an EXE or COM file or in a boot sector on the disk.
If Windows cannot start in 32-bit disk or file access mode, your computer may have a virus.
If several executable files (EXE and COM) on your system are suddenly and mysteriously larger than they were previously, you may have a virus.
If you get a warning that a Microsoft Word document or Excel spreadsheet contains a macro, but you know that it should not have a macro, your computer may have a virus (you must first have the auto-warn feature activated in Word/Excel).
Q: What are the most common ways to get a virus?
One of the most common ways to get a computer virus is by booting from an infected diskette. Another way is to receive an infected file (such as an EXE or COM file, or a Microsoft Word document or Excel spreadsheet) through file sharing, by downloading it off the Internet, or as an attachment in an email message.
Q: What should I do if I get a virus?
First, don't panic! Resist the urge to reformat or erase everything in sight. Write down everything you do in the order that you do it. This will help you to be thorough and not duplicate your efforts. Your main actions will be to contain the virus, so it does not spread elsewhere, and then to eradicate it.
If you work in a networked environment, where you share information and resources with others, do not be silent. If you have a system administrator, tell her what has happened. It is possible that the virus has infected more than one machine in your workgroup or organization. If you are on a local area network, remove yourself physically from it immediately.
Once you have contained the virus, you will need to disinfect your system, and then work carefully outwards to deal with any problems beyond your system itself (for example, you should meticulously and methodically look at your system backups, and any removable media that you use). If you are on a network, any networked computers and servers will also need to be checked.
Any good anti-virus software will help you to identify the virus and then remove it from your system. Viruses are designed to spread, so don't stop at the first one you find; continue looking until you are sure you've checked every possible source. It is entirely possible that you could find several hundred copies of the virus throughout your system and media!
To disinfect your system, shut down all applications and shut down your computer right away. Then, if you have Fix-It Utilities 99, boot off your System Rescue Disk and use the virus scanner on that disk to scan your system for viruses. Because the virus definitions on your Rescue Disk may be out of date and its scanner is not as comprehensive as the full Virus Scanner in Fix-It, once it has cleared your system of known viruses, boot into Windows and use the full Virus Scanner to run an "On Demand" scan set to scan all files. If you haven't run Easy Update recently to get the most current virus definition files, do so now.
If the virus scanner can remove the virus from an infected file, go ahead and clean the file. If the cleaning operation fails, or the virus software cannot remove it, either delete the file or isolate it. The best way to isolate such a file is to put it on a clearly marked floppy disk and then delete it from your system.
Once you have dealt with your system, you will need to look beyond it at things like floppy disks, backups and removable media. This way you can make sure that you won't accidentally re-infect your computer. Check all of the diskettes, zip disks, and CD-ROMs that may have been used on the system.
Finally, ask yourself who has used the computer in the last few weeks. If there are others, they may have inadvertently carried the infection to their computer, and be in need of help. Viruses can also infect other computers through files you may have shared with other people. Ask yourself if you have sent any files as email attachments, or copied any files from your machine to a server, web site or FTP site recently. If so, scan them to see if they are infected, and if they are, inform other people who may now have a copy of the infected file on their machine.
Disclaimer: These pages are not responsible for any damage that the information contained herein may cause to your system.

Top 5 Reasons for Digital Photo Disasters

Food, Family, Friends & Fun – No Room for Data Loss
With the holiday season quickly approaching, many of us are looking forward to spending time with our family and friends. These special times, which used to be captured on film, are now recorded digitally on a video recording device or a camera. Ontrack has revealed the top causes of memory card disasters from digital camera users desperate to recover their memories.
1. Reformatting
Users often forget that reformatting a memory card will remove all the files stored on it, including protected pictures and print orders. This data can only be retrieved by experts, so Ontrack advises users to think again before they reformat.
2. Overwriting
A common mistake is the accidental overwriting of images held on camera memory cards with new photos. It’s easily done. So check, check and check again that you’ve successfully transferred your images onto your PC, laptop, CD or DVD before taking new pictures.
3. Cracked and damaged media
Packing memory cards into overstuffed suitcases can result in them becoming bent or damaged on the journey home, making them unreadable. Wrapping cards in clothes and placing them in the middle of your case offers some degree of protection in transit and helps ensure the safety of your pictures during your return trip.
4. Burnt media
Leaving memory cards in an elevated temperature environment, close to a heat source such as a radiator or oven, will increase the chances of failure. Heat is unlikely to damage the digital photos on the memory card but may stop the card from being recognized in a card reader.
5. Holiday injuries
For those of you who opt for a tropical vacation instead of a snowy week with the in-laws, digital cameras often get dropped in the sand or splashed with water around the pool, damaging smart media to the extent that photos can’t be viewed. Only an expert can recover digital images from smart media damaged in this way, so users should be careful to keep digital cameras in padded and watertight cases to keep them safe.
“Recent research found that almost 90% of consumers now own a digital camera but around one-third don’t back up their photographs,” said Phil Bridge, business development manager at Ontrack Data Recovery. “The danger is that they could lose once-in-a-lifetime memories if anything happens to the memory cards that store their images.”
“Our list of memory card disasters highlights the need to protect smart media and alert digital camera users to the most common problems so that they can make sure it doesn’t happen to them,“ he added.

Mobile Device Data Loss

Data Loss – From PCs to Suit Pockets
Data is everywhere. No longer confined to desktop computers, data is always with us – at the gym in the form of an iPod®, in the car via your cell phone, and of course surrounding you at work – notebooks, desktops, servers, etc. With the increased portability of data comes the increased risk for data to be lost, misplaced, damaged or destroyed.
An Ontrack Data Recovery survey conducted among 400 professionals worldwide revealed that 65% of professionals own at least one USB stick. The compact size leads many people to store their sticks incorrectly – in suit pockets or in bags – leaving valuable confidential data at risk.
To help protect mobile devices from data loss, Ontrack Data Recovery has put together some simple preventative steps that will help create good habits for the use of USB sticks and hopefully prevent any data disasters.
Minimize misplacement – Try to prevent ‘wandering’ USB sticks. The device is easily lost when you don’t exactly know where it is kept. A dedicated USB spot prevents loss of data from a portable storage device.
Carry with care – Make sure your USB is stored safely when traveling to minimize the risk of losing data.
No backups, please – A USB stick is too vulnerable to store precious information. These sticks should therefore never be used as a backup device.
Put a lid on it – If the stick is not in use, make sure its connector is protected. Using the protective cap provided with any USB stick can help avert a possible data disaster.
Unplug before you leave – Before you embark on a journey that requires a laptop and a USB stick, make sure the devices are separated. This way, both the laptop and the USB stick will run less risk of damage.
With the use of mobile storage devices becoming more common, it is crucial that people understand there are options available if their data disappears. Ontrack Data Recovery is an expert at recovering data from storage devices of any size – from multi-terabyte servers to tiny xD flash media. If you have any questions or are in need of data recovery services, please contact us at 1-800-872-2599.

Hints to Help Students Protect Against Laptop Data Disasters

The Computer Ate My Homework
As most students prepare to head back to school, many will be packing laptop computers in addition to the usual school supplies. It's clear that laptop computers are quickly becoming a vital part of the scholastic experience; however, with more laptops in use comes a greater danger of data loss.
“Laptop computers are an excellent way for today's students to manage their workload, but protecting the data on those computers isn't as simple as securing a notebook in a locker,” said Jim Reinert, sr. director of Software and Services for Ontrack Data Recovery. “Students need to be careful with their laptops to avoid both physical damage and other problems that could affect the integrity of their data. If problems do occur, it's also important they know that data recovery is always an option.”
To help students protect against laptop data disasters, Ontrack Data Recovery offers several tips:
Laptops are not as rugged as many like to think. When laptops are being docked, moved or transported, the greatest of care should be taken to prevent unnecessary shock or impact. Set up your computer in a dry, cool, controlled environment that is clean and dust-free. Placing your computer in a low-traffic area will protect your system and storage media from harmful jarring or bumping.
Use a sturdy, well padded laptop bag - Using just a back-pack or brief-case may not provide the protection a laptop needs during transportation. Make sure your laptop has plenty of built-in padding for protection.
Back up your data regularly - Creating regular backups is one of the most effective ways to protect yourself from losing data. Back up data at least once a week on a reliable medium (CD, DVD, USB flash drives or Internet backup), always verifying that the correct data is backed up.
Run a virus scan and update it regularly - Computer viruses are one of the worst enemies to your computer. Good anti-virus software tests your system for sequences of code unique to each known computer virus and eliminates the infecting invader. Also beware of spyware, a common problem brought about by Web surfing and downloads that can cause complications with your computer’s efficiency. There are several programs available on the internet that can assist with the removal of most spyware programs.
Be aware of strange noises - If you hear a strange noise or grinding sound, turn off your computer immediately and call an expert. Further operation may damage your hard drive beyond repair.
Do not use file recovery software if you suspect an electrical or mechanical failure - Using file recovery software on a faulty hard drive may destroy what was otherwise recoverable data or worsen the physical failure.
Use Auto-Save features – Most software applications have Auto-Save features that will save the project or document you have open at preset intervals. For laptop users, a good interval is every 5 minutes.
Be battery-level aware – If you are going to be using the laptop for long hours, try to find an electrical outlet to plug into. Some laptops shut down quickly when a specific low battery level is reached, and important documents may be lost.
If you do experience a data loss, Ontrack Data Recovery can help - Even the best maintenance program cannot always prevent system crashes or data loss. Ontrack offers a wide array of data recovery solutions, ranging from In-lab and Remote services to cost-effective do-it-yourself EasyRecovery™ software. Contact Ontrack for immediate assistance at 1-800-872-2599.

Hard Drive History – 50 years in the making

Today marks the 50th anniversary of hard drive storage. When IBM delivered its first hard drive on September 13th, 1956, few could have imagined the impact it would have on our everyday lives. The RAMAC (also known as 'Random Access Method of Accounting and Control') was the size of two refrigerators and weighed a ton. It required a separate air compressor to protect the heads, had pizza-sized platters and was able to store a then whopping 5 megabytes of data. Now you can do all that with a mere pocket drive! What's more - the RAMAC was available to lease for $35,000 USD, the equivalent of $254,275 in today's dollars.
25 years later, the first hard drive for personal computers was introduced. Using the MFM encoding method, it offered 5MB of capacity and a 625 KBps data transfer rate. A later version of the ST506 interface switched to the RLL encoding method, allowing for increased storage capacity and processing speed.
IBM made technological history on August 12, 1981, with the launch of its first personal computer - the IBM 5150. At a cost of $1,565, the 5150 had just 16K of memory, barely enough for a handful of emails. It's difficult to conceive that as recently as the late 1980s, 100MB of hard disk space was considered ample. Today this would be totally insufficient, hardly enough to install the operating system, let alone a large application such as Microsoft Office.
When asked about the limitations of the early PC, Tom Standage, the Economist magazine's business editor says: "It's hard to imagine what people used to do with computers in those days because by modern standards they really couldn't do anything."
As a result of these major breakthroughs, the industry has grown from shipping several thousand disk drives per year in the 1950s to over 260 million drives per year in 2003. Over this period, the cost of magnetic disk storage has decreased from $2,057 per megabyte in the 1960s to $0.005 per megabyte today.
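Taking only the figures quoted above, a quick back-of-the-envelope calculation shows just how dramatic that decline is:

# Figures quoted above.
cost_per_mb_1960s = 2057.00   # dollars per megabyte in the 1960s
cost_per_mb_today = 0.005     # dollars per megabyte today, as quoted
ramac_capacity_mb = 5         # capacity of the original RAMAC

decline = cost_per_mb_1960s / cost_per_mb_today
print(f"Cost per megabyte has fallen by a factor of roughly {decline:,.0f}")                # ~411,400
print(f"The RAMAC's 5 MB at 1960s prices: ${cost_per_mb_1960s * ramac_capacity_mb:,.0f}")   # ~$10,285
print(f"The RAMAC's 5 MB at today's prices: ${cost_per_mb_today * ramac_capacity_mb:.3f}")  # ~$0.025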
The future is bright
At present, the standard 3.5 inch desktop drive can store up to 750 gigabytes (GB) of data. But disk drives are set to become even smaller, more powerful and less costly. According to Bill Healy, an executive at Hitachi, drives containing hundreds of gigabytes will be small enough to wear as jewelry. "You'll have with you every album and tune you've ever bought, every picture you've ever taken, every tax record."
Having five disk drives in your household is becoming increasingly commonplace: PCs, laptops, game systems, TiVo® video recorders, iPod® - just to mention a few. Experts believe that someday households will have up to 15 disk drives, some of which may appear in your TV set, cell phone or car.
In fact, the industry is expected to deliver as many drives in the next five years as it did in the last 50 years. Industry analysts such as Gartner, IDC and TrendFOCUS believe that the global hard drive market will continue to experience impressive unit and revenue growth.
Take the good with the bad
As new devices hit the market and the amount of stored data escalates, the potential for data loss is greater than ever. No matter how strict your backup policy or how heavily you invest in data protection, somewhere along the line data loss will occur. With nearly 20 years of experience, Ontrack Data Recovery™ has certainly seen its fair share of data disasters. From the dog that ate one man's memory stick, to the frustrated user so angry he shot his laptop with a gun, to the businesswoman who spilt coffee on her laptop, to the father who accidentally deleted his child's baby photos, Ontrack has truly seen it all.
Ontrack Data Recovery has the technical capability to recover data from any media, operating system or storage device - no matter how old or cumbersome! Preparation is the name of the game. By establishing a relationship with a reputable data recovery provider, you can reduce the stress surrounding data loss and relax with a ready-made action plan for data recovery and restoration.

Overview of Microsoft Personal Folder Information Store

Successful businesses today are the sum of integrated groups of people that work together to achieve the company’s goals. In today’s organizations, regardless of their size, communication between people and departments is a priority. Equally important is the method an organization uses to manage and distribute its own information. Reliance on electronic information has grown massively in the past 20 years and information management processes and solutions are vital for the success of the organization.
Electronic messaging has become an essential part of the corporate business environment; for some companies it is their primary means of communication. Electronic messaging is so important, in fact, that if a simultaneous telephone and email outage occurred, many companies would want their email service restored first.
A number of software vendors have developed messaging applications over the years. This month’s technical article focuses on one - Microsoft Outlook and the Personal Folder Information Store files it uses. We will look at how they work internally, common data loss scenarios, and why Ontrack Data Recovery is the recovery solution for data loss involving archived messages.
Microsoft Outlook is designed around the Messaging Application Programming Interface (MAPI) which provides the foundation for message data organization. This library is used for receiving raw message data.
There are four message storage providers within the Microsoft messaging environment.
From the Server Application
Public Information Store – stores public folders, contains information to be shared between users
Private Information Store – stores mailboxes for users, contains information to be secured from other users
From the Client Application
Personal Folder Information Store (PST). This is the file where the Microsoft Exchange server delivers messages.
Offline Personal Folder Information Store (OST). An information store that is used to store folder information that can be accessed offline.
MAPI, the communication protocol, is a library of instructions on how to process message data. MAPI handles the message data and then transfers the information to the message store provider. The message store engine puts the message data inside of the PST/OST file.
MAPI could be compared to a hotel’s concierge – just as the concierge receives new guests and manages existing hotel guests, so MAPI receives new messages and manages existing messages. A hotel concierge is also the contact for requests and information. Similarly, MAPI interacts with back-end message organization for requests such as sorting by date or sender. MAPI is also the central interface for all the information associated with individual message data.
Complexity Built-In for Storage Integrity
The PST/OST file format is very complex, and the file is designed around relational database concepts. The PST/OST file organizes message information into a hierarchical system, using folder groups and specific user folders to define each of the multiple levels that are possible within the file.
Interestingly, the way you see your message information is not how it is stored inside of the file. Outlook uses a standard form or interface to present the message data to you.
The MAPI library contains specific definitions for the database tables that store your messages. There are many, many tables within a PST/OST file. Here is a simplified example of the relationship between the displayed information and stored information:
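The original illustration is not reproduced here, so the following is a hypothetical stand-in that makes the same point. It assumes the four stored pieces of information are the MAPI sender, subject, delivery time and size properties; the tag names follow MAPI's PR_* conventions, while the message values are invented. Outlook shows you friendly column names, but inside the file each folder's table stores the values under property tags:

# Hypothetical folder table holding two messages, each with four MAPI-style properties.
inbox_table = [
    {
        "PR_SENDER_NAME": "Pat Smith",
        "PR_SUBJECT": "Quarterly report",
        "PR_MESSAGE_DELIVERY_TIME": "2009-09-12 09:12",
        "PR_MESSAGE_SIZE": 48230,
    },
    {
        "PR_SENDER_NAME": "IT Helpdesk",
        "PR_SUBJECT": "Password expiry notice",
        "PR_MESSAGE_DELIVERY_TIME": "2009-09-12 10:45",
        "PR_MESSAGE_SIZE": 6120,
    },
]

# What Outlook displays is a friendly view built on top of those property tags.
DISPLAY_COLUMNS = {
    "From": "PR_SENDER_NAME",
    "Subject": "PR_SUBJECT",
    "Received": "PR_MESSAGE_DELIVERY_TIME",
    "Size": "PR_MESSAGE_SIZE",
}

for row in inbox_table:
    print({label: row[tag] for label, tag in DISPLAY_COLUMNS.items()})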
In this particular example, there are only four pieces of information about each message and there are only two messages in the table. In a production Outlook PST file, far more information is stored for each message. Additionally, each table would correspond to a folder and there would be many messages in that table.
The list of MAPI tables is too long to list here; however, these internal storage mechanisms all work together in a seamless fashion to store your electronic messages.
This level of complexity is built in for data organization and speed. PST files are always growing due to the stream of messages being received. Appreciating the intricate nature of these files and protecting your PST/OST files is important to avoiding a data disaster. There are many types of data loss situations, and PST/OST files are just one of many file types that can be affected. What can you do to minimize data loss if a disaster happens to you or your client?
When Disasters Happen
Data disasters happen every day. What are the most common types of data loss situations?
Ontrack Data Recovery has been in the business of recovering data for nearly 20 years and we have seen all types of data loss. Categorically, data loss falls under one or more headings. Those are: physical device failure, logical file system errors, internal file data corruption, and human error.
Physical Failure
What steps should be taken if the data loss is a physical problem with the hard drive?
If you hear strange noises (such as grinding, clicking, or screeching noises), do not attempt to restart the machine. This can cause further damage. Call Ontrack Data Recovery for data recovery.
If the hard drive does not spin up do not attempt to force the drive to start, this may produce internal damage. Call Ontrack Data Recovery for data recovery.
If the hard drive has been in a flood or experienced water damage do not dry out or start up. Keep the drive moist and send it to Ontrack Data Recovery immediately.
If the hard drive has been in a fire, put the drive in a sealable plastic bag with a moist paper towel. Send it to Ontrack Data Recovery immediately.
File System Errors, File Corruption, and Human Error
What steps should be taken if the data loss is a logical file system problem or if files were deleted?
If the operating system starts up but you cannot find your data, turn off the computer without shutting down normally. This will avoid further data loss. Call Ontrack Data Recovery immediately.
If the computer starts but the operating system fails, call Ontrack Data Recovery. Do not reinstall or use a ‘Restore CD’ or ‘System Rescue’ CD supplied by the OEM; this will overwrite your data.
If files have been deleted, do not restore backup data to the machine that has lost the data. Call Ontrack Data Recovery for recovery.
If a user profile has been deleted, do not log into the domain/network or original user data will be overwritten. Call Ontrack Data Recovery for recovery.
If the problem is internal file corruption, do not attempt to repair the file without first backing up the data.
Data disasters can be as complex as hard drive failures and file system errors. Or data disasters can be straightforward such as critical files being deleted by mistake, a user’s profile being replaced, or the user’s operating system being reinstalled.
Ontrack Data Recovery: The Solution Provider
Ontrack Data Recovery is your premier data recovery company. This means that we find multiple ways to make your information available again. Outlook PST/OST recoveries are an excellent example of this. Even in situations where the drive has been reformatted, Ontrack Data Recovery can get back PST/OST data. There have been recent recoveries where the data loss was so severe that the original file system did not point to the user’s Outlook PST file; Ontrack Data Recovery’s team of recovery engineers was still able to find the important message data and make thousands of messages available to the client.
Data recovery is more than getting hard drives to operate, running generic recovery tools, and hoping for a good recovery. Ontrack Data Recovery ensures quality recoveries by thoroughly researching and developing data recovery tools for in-lab and remote service. In the case of PST/OST recoveries, Ontrack Data Recovery has invested in research and development to produce a suite of tools that will recover message data from these files.
Ontrack Data Recovery’s expertise and knowledge across all types of recoveries prove that Ontrack Data Recovery is the market leader in data recovery. If you find that you, your users, or your client’s users are having a data disaster, don’t panic: Ontrack Data Recovery is here to help.