Testing Corsair Force Series SSD Life
For Corsair, Inc.
October 11th, 2011
Introduction

A common question appearing on the forums these days is the lifespan of a Solid State Drive (SSD). Because SSDs are a very new technology compared to mechanical platter drives, the number of SSD failures is currently extremely low: there simply haven't been enough SSDs in the field for long enough to see a significant number fail. However, lifespan is a growing concern for many consumers looking to make the switch to SSD technology. SSDs are expected to have much shorter lifespans than HDDs because of the limitations of NAND flash memory itself, the technology used in most SSDs, including all available Corsair SSD products.
The currently accepted method of reporting lifespan estimates for hardware products is the mean time between failures (MTBF). Current Corsair products have MTBF figures between 1,000,000 and 2,000,000 hours. Here's a table compiling the latest Corsair drives and their MTBF figures on a more useful scale: years.
Obviously, these MTBF figures have not been experimentally validated. Note also that MTBF is predicted from power-on hours. If you simply plugged in an SSD but never actually transferred any data to it, it would most likely stay functional for over 100 years. A drive like the Force GT could possibly reach over 200 years of power-on time.
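As a sanity check on those figures, converting a quoted MTBF in hours into continuous power-on years is simple arithmetic; a minimal sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 power-on hours per year

def mtbf_years(mtbf_hours):
    """Convert a quoted MTBF in hours to continuous power-on years."""
    return mtbf_hours / HOURS_PER_YEAR

# The quoted range for current Corsair drives:
print(f"1,000,000 h = {mtbf_years(1_000_000):.0f} years")  # 114 years
print(f"2,000,000 h = {mtbf_years(2_000_000):.0f} years")  # 228 years
```

The 2,000,000-hour end of the range is where the "over 200 years" figure above comes from.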
We need a more accurate metric, one that reflects the lifespan of an SSD under regular use: writing and deleting files. Additionally, constant writing and deleting (program/erase, or P/E, cycles) is a concern for those who want to maintain their SSD's extreme performance. For this experiment, I used a utility to simulate real-world scenarios and test the robustness (performance, file integrity, and life) of an SSD.
Materials & Methods

Anvil's Storage Utilities 1.0.27 Beta5 (8/1/2011) was used to perform the endurance testing (writing and deleting of files). ATTO was used to benchmark sequential data rates. Average random write speed is reported by Anvil's application. SSDLife Pro was used to confirm the health of the drive at the end of the test. The test machine was an Asus P5W-DH Deluxe with an Intel QX6700 and ICH7R controller (SATA II support) running Windows 7 SP1 Enterprise 64-bit. TRIM is fully supported and enabled. Most importantly, a 40 GB Corsair Force drive was used. The drive was Rev. A, which uses 25nm NAND. The latest firmware at the time of writing, 2.2, was pre-installed on the drive.
The following figure shows SMART data from CrystalDiskInfo. Notice the health status is "Good" at 100%, and there are no lifetime writes or reads (F1 and F2 values, respectively) on this virgin drive.
ATTO benchmarking utility shows initial sequential data rates at around 265 MB/sec writes and 280 MB/sec reads:
Experiment

Three files were placed on the drive: a 35 KiB Excel file, a 49 KiB Word document, and a 2,181 KiB JPEG image. MD5 hashes were generated for the three files before testing commenced. Throughout the testing, the MD5 hashes of the files were repeatedly regenerated, and no change in any MD5 value ever occurred. File integrity remained at 100%.
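The integrity check itself is easy to reproduce; a minimal Python sketch (the file names below are throwaway stand-ins, not the actual test files):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Stream a file through MD5 in 1 MiB chunks, so size doesn't matter."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Throwaway stand-ins for the Excel, Word, and JPEG test files.
for name, payload in [("a.xlsx", b"sheet"), ("b.docx", b"doc"), ("c.jpg", b"img")]:
    with open(name, "wb") as f:
        f.write(payload)

baseline = {p: md5sum(p) for p in ("a.xlsx", "b.docx", "c.jpg")}
# After each write/delete cycle, re-hash and compare against the baseline:
changed = [p for p, h in baseline.items() if md5sum(p) != h]
assert not changed, f"integrity failure in {changed}"
```

Any bit rot in the static files would show up as a mismatched hash on the re-check.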
The 40 GB (true SI notation, 1000 MB = 1 GB) Force drive has approximately 37.2 GiB (1 gibibyte = 1024 MiB) of capacity. Note that from here on I will not be using SI gigabytes and terabytes, but rather gibibytes (GiB) and tebibytes (TiB), which is what Windows actually uses and what the common reader will be familiar with. The endurance test was set to fill 36 GiB of the drive at a time with random data, then delete it and repeat. The static data on the drive (the three test files) is believed to be rotated among the cells for even wear leveling.
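The 40 GB vs. 37.2 GiB discrepancy is purely the decimal/binary unit difference; a quick check:

```python
GB = 1000**3   # decimal gigabyte, as drive makers label capacity
GiB = 1024**3  # binary gibibyte, as Windows reports capacity

usable_gib = 40 * GB / GiB
print(f"{usable_gib:.2f} GiB")  # 37.25 GiB, matching the figure above
```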
Results

The health indicator (E7, the media wearout indicator, or MWI, referenced below) remained at 100% until just over 19 TiB of written data (F1), when it dropped to 97%. The average random write speed reported by Anvil's application was 151 MB/s.
After about three weeks of testing and 127 TiB written to and deleted from the drive, the average random write speed remained steady at 146 MB/s. The health status was down to 57%. There were still no retired blocks (05), but the wear range delta (B1) had started increasing and was at 49 at this point.
Surprisingly, SSDLife Pro still showed good health for the drive, and ATTO benchmark results after the 127 TiB of written and deleted data were only slightly changed from those of the virgin drive.
By 168 TiB written, the MWI had dropped to 25%. Speeds stayed the same, and there were still no retired blocks (05). The wear range delta continued rising slowly and was up to 83 at this point.
By 189 TiB written, the MWI was down to 10%. The average write speed was still the same, and there were no retired blocks. The MWI stayed at 10% even beyond 206 TiB, as shown by SSDLife Pro. Even at 228 TiB, the MWI was still 10%, speeds stayed consistent, and no blocks had ever been retired. The MD5 hashes were regenerated and still unchanged.
At 240.8 TiB, the drive suddenly disappeared from the controller. Finally, it had called it quits. The overall history of this drive is shown in the following plots. Due to the drive's compression, the true amount of data written to the NAND is actually less than the amount reported by Anvil's program. To illustrate any potential differences, I plotted the MWI health status (E7) against both E9 (true data written to the SSD's blocks) and EA/F1 (effective data written, "lifetime writes from host").
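From these totals one can back out a rough host-side endurance figure; a sketch (this ignores write amplification and the compression noted above, so it is a host-side estimate, not a measured NAND P/E count):

```python
CAPACITY_GIB = 37.2      # usable capacity from the Materials & Methods section
HOST_WRITES_TIB = 240.8  # lifetime host writes when the drive died

# How many times the drive's full capacity was written, end to end.
full_drive_fills = HOST_WRITES_TIB * 1024 / CAPACITY_GIB
print(round(full_drive_fills))  # ~6628 complete fills of the drive
```

At, say, 20 GiB of writes per day of typical desktop use, 240.8 TiB of host writes would take several decades to accumulate, which puts the three-week torture result in perspective.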
Original post: http://www.corsair.com/blog/force-series-ssd-life-testing/