Mailing List Archive

Re: Are those "green" drives any good?
napalm@squareownz.org wrote:
> On Wed, May 09, 2012 at 06:58:47PM -0500, Dale wrote:
>> Mark Knecht wrote:
>>> On Wed, May 9, 2012 at 3:24 PM, Dale <rdalek1967@gmail.com> wrote:
>>>> Alan McKinnon wrote:
>>> <SNIP>
>>>>> My thoughts these days are that nobody really makes a bad drive anymore.
>>>>> Like cars[1], they're all good and do what it says on the box. Same
>>>>> with bikes[2].
>>>>>
>>>>> A manufacturer may have some bad luck and a product range is less than
>>>>> perfect, but even that is quite rare and most stuff ups can be fixed
>>>>> with new firmware. So it's all good.
>>>>
>>>>
>>>> That's my thoughts too. It doesn't matter what brand you go with, they
>>>> all have some sort of failure at some point. They are not built to last
>>>> forever and there is always the random failure, even when a week old.
>>>> It's usually the loss of important data and not having a backup that
>>>> makes it sooooo bad. I'm not real picky on brand as long as it is a
>>>> company I have heard of.
>>>>
>>>
>>> One thing to keep in mind is statistics. For a single drive by itself
>>> it hardly matters anymore what you buy. You cannot predict the
>>> failure. However if you buy multiple identical drives at the same time
>>> then most likely you will either get all good drives or (possibly) a
>>> bunch of drives that suffer from similar defects and all start failing
>>> at the same point in their life cycle. For RAID arrays it's
>>> measurably best to buy drives that come from different manufacturing
lots, better yet from different factories, and maybe even from different
>>> companies. Then, if a drive fails, assuming the failure is really the
>>> fault of the drive and not some local issue like power sources or ESD
>>> events, etc., it's less likely other drives in the box will fail at
>>> the same time.
>>>
>>> Cheers,
>>> Mark
>>>
>>>
>>
>>
>>
>> You make a good point too. I had a headlight go out on my car once
>> long ago. Not thinking, I replaced them both since the new ones were
>> brighter. Guess what: when one of the bulbs blew out, the other was out
>> VERY soon after. Now I replace them, but NOT at the same time. Keep in
>> mind, just like a hard drive, when one headlight is on, so is the other
>> one. When we turn our computers on, all the drives spin up together so
>> they are basically all getting the same wear and tear effect.
>>
>> I don't use RAID, except to kill bugs, but that is good advice. People
>> who do use RAID would be wise to follow it.
>>
>> Dale
>>
>> :-) :-)
>>
>
> hum hum!
> I know that Windows does this by default (it annoys me so I disable it)
> but does Linux disable or stop spinning the disks if they're inactive?
> I'm assuming there's an option somewhere - maybe just `umount`!
>


The default is to keep them all running and to not spin them down. I
have never had a Linux OS spin down a drive unless I set it up to or
told it to. You can do this, though. The command and option is:

hdparm -S <timeout> /dev/sdX

X would be the drive letter. There is also the -s option, but it is not
recommended.

There are also the -y and -Y options. Before using ANY of these, read
the man page. Each one has its uses and you need to know for sure which
one does what you want.
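
For example, -S takes a timeout value, so a typical invocation (the
value here is just an illustration, not a recommendation) looks like:

hdparm -S 120 /dev/sdX

Values 1-240 are multiples of 5 seconds (so 120 = 10 minutes), 241-251
are units of 30 minutes, and 0 disables the spindown timer entirely;
again, check the man page for the exact encoding.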

Dale

:-) :-)

--
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!

Miss the compile output? Hint:
EMERGE_DEFAULT_OPTS="--quiet-build=n"
Re: Are those "green" drives any good?
* Dale <rdalek1967@gmail.com> [120509 19:54]:
[..]
> Way back in the stone age, there was a guy that released a curve for
> electronics life. The failure rate is high at the beginning, especially
> for the first few minutes, then falls to about nothing, then after
> several years it goes back up again. At the beginning of the curve, the
> thought was it could be a bad solder job, bad components or some other
> problem. At the other end was just when age kicked in. Sweet spot is
> in the middle.

C. Gordon Bell has that curve in his book "Computer Engineering."

Available online at:

http://research.microsoft.com/en-us/um/people/gbell/Computer_Engineering/index.html

for HTML and:

http://research.microsoft.com/en-us/um/people/gbell/CGB%20Files/Computer%20Engineering%207809%20c.pdf

for the PDF.

Todd
Re: Are those "green" drives any good?
On Thu, May 10, 2012 at 07:38:34AM -0500, Dale wrote:
>
> The default is to keep them all running and to not spin them down. I
> have never had a Linux OS spin down a drive unless I set it up to or
> told it to. You can do this, though. The command and option is:
>
> hdparm -S <timeout> /dev/sdX
>
> X would be the drive letter. There is also the -s option, but it is not
> recommended.
>
> There are also the -y and -Y options. Before using ANY of these, read
> the man page. Each one has its uses and you need to know for sure which
> one does what you want.
>
> Dale
>

Awesome, thanks very much. If I need to power down one of my drives I
shall use hdparm!

Does the kernel keep even unmounted drives spinning by default?

Thank you Dale!
Re: Are those "green" drives any good?
On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> Hi,
>
> As some know, I'm planning to buy me a LARGE hard drive to put all my
> videos on, eventually.  The prices are coming down now.  I keep seeing
> these "green" drives that are made by just about every company nowadays.
>  When comparing them to a non "green" drive, do they hold up as good?
> Are they as dependable as a plain drive?  I guess they are more
> efficient and I get that but do they break quicker, more often or no
> difference?
>
> I have noticed that they tend to spin slower and are cheaper.  That much
> I have figured out.  Other than that, I can't see any other difference.
>  Data speeds seem to be about the same.
>

They have an ugly tendency to nod off at 6 second intervals.
This runs up "193 Load_Cycle_Count" unacceptably: as many
as a few hundred thousand in a year & a million cycles is
getting close to the lifetime limit on most hard drives. I end
up running some iteration of
# hdparm -B 255 /dev/sda
every boot.
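
A quick way to check whether a drive is affected (a sketch; adjust the
device name to suit) is to read the raw counter twice, a few minutes
apart, while the disk is otherwise idle:

# smartctl -A /dev/sda | grep Load_Cycle_Count

If the raw value climbs between samples on an idle disk, the firmware
is parking the heads aggressively.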
Re: Are those "green" drives any good?
On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
<invasivenorman@gmail.com> wrote:
> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>> Hi,
>>
>> As some know, I'm planning to buy me a LARGE hard drive to put all my
>> videos on, eventually.  The prices are coming down now.  I keep seeing
>> these "green" drives that are made by just about every company nowadays.
>>  When comparing them to a non "green" drive, do they hold up as good?
>> Are they as dependable as a plain drive?  I guess they are more
>> efficient and I get that but do they break quicker, more often or no
>> difference?
>>
>> I have noticed that they tend to spin slower and are cheaper.  That much
>> I have figured out.  Other than that, I can't see any other difference.
>>  Data speeds seem to be about the same.
>>
>
> They have an ugly tendency to nod off at 6 second intervals.
> This runs up "193 Load_Cycle_Count" unacceptably: as many
> as a few hundred thousand in a year & a million cycles is
> getting close to the lifetime limit on most hard drives.  I end
> up running some iteration of
> # hdparm -B 255 /dev/sda
> every boot.
>

Very true about the 193 count. Here's a drive in a system that was
built in Jan., 2010 so it's a bit over 2 years old at this point. It's
on 24/7 and not rebooted except for more major updates, etc. My tests
say the drive spins down and starts back up every 2 minutes and has
been doing so for about 28 months. IIRC the 193 spec on this drive was
something like 300000 max with the drive currently clocking in at
700488. I don't see any evidence that it's going to fail but I am
trying to make sure it's backed up often. Being that it's gone >2x at
this point I will swap the drive out in the early summer no matter
what. This week I'll be visiting where the machine is so I'm going to
put a backup drive in the box to get ready.
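
A sketch of that kind of test, assuming smartmontools is installed and
the drive is /dev/sda, is simply to sample the counter once a minute
and watch for it to tick up:

# watch -n 60 'smartctl -A /dev/sda | grep Load_Cycle_Count'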

- Mark


gandalf ~ # smartctl -a /dev/sda
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Green (Adv. Format)
Device Model: WDC WD10EARS-00Y5B1
Serial Number: WD-WCAV55464493
LU WWN Device Id: 5 0014ee 2ae6b5ffe
Firmware Version: 80.00A80
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Thu May 10 10:53:59 2012 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (19800) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 228) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3031) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
  3 Spin_Up_Time 0x0027 131 128 021 Pre-fail Always - 6441
  4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 65
  5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
  7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
  9 Power_On_Hours 0x0032 074 074 000 Old_age Always - 19316
 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 63
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 14
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 700488
194 Temperature_Celsius 0x0022 120 113 000 Old_age Always - 27
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 11655 -
# 2 Extended offline Completed without error 00% 8797 -
# 3 Short offline Completed without error 00% 8794 -
# 4 Extended offline Completed without error 00% 1009 -
# 5 Extended offline Completed without error 00% 388 -
# 6 Short offline Completed without error 00% 376 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

gandalf ~ #
Re: Are those "green" drives any good?
On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
> <invasivenorman@gmail.com> wrote:
>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>>> <SNIP>
>>
>> They have an ugly tendency to nod off at 6 second intervals.
>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>> as a few hundred thousand in a year & a million cycles is
>> getting close to the lifetime limit on most hard drives.  I end
>> up running some iteration of
>> # hdparm -B 255 /dev/sda
>> every boot.
>>
>
> Very true about the 193 count. Here's a drive in a system that was
> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
> on 24/7 and not rebooted except for more major updates, etc. My tests
> say the drive spins down and starts back up every 2 minutes and has
> been doing so for about 28 months. IIRC the 193 spec on this drive was
> something like 300000 max with the drive currently clocking in at
> 700488. I don't see any evidence that it's going to fail but I am
> trying to make sure it's backed up often. Being that it's gone >2x at
> this point I will swap the drive out in the early summer no matter
> what. This week I'll be visiting where the machine is so I'm going to
> put a backup drive in the box to get ready.
>

Yes, I just learned about this problem in 2009 or so, &
checked on my FreeBSD laptop, which turned out to be
at >400000. It only made it another month or so before
having unrecoverable errors.

Now, I can't conclusively demonstrate that the 193
Load_Cycle_Count was somehow causative, but I
gots my suspicions. Many of 'em highly suspectable.
Re: Are those "green" drives any good?
On Thu, May 10, 2012 at 11:13 AM, Norman Invasion
<invasivenorman@gmail.com> wrote:
> On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
>> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
>> <invasivenorman@gmail.com> wrote:
>>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
>>>> <SNIP>
>>>
>>> They have an ugly tendency to nod off at 6 second intervals.
>>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>>> as a few hundred thousand in a year & a million cycles is
>>> getting close to the lifetime limit on most hard drives.  I end
>>> up running some iteration of
>>> # hdparm -B 255 /dev/sda
>>> every boot.
>>>
>>
>> Very true about the 193 count. Here's a drive in a system that was
>> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
>> on 24/7 and not rebooted except for more major updates, etc. My tests
>> say the drive spins down and starts back up every 2 minutes and has
>> been doing so for about 28 months. IIRC the 193 spec on this drive was
>> something like 300000 max with the drive currently clocking in at
>> 700488. I don't see any evidence that it's going to fail but I am
>> trying to make sure it's backed up often. Being that it's gone >2x at
>> this point I will swap the drive out in the early summer no matter
>> what. This week I'll be visiting where the machine is so I'm going to
>> put a backup drive in the box to get ready.
>>
>
> Yes, I just learned about this problem in 2009 or so, &
> checked on my FreeBSD laptop, which turned out to be
> at >400000.  It only made it another month or so before
> having unrecoverable errors.
>
> Now, I can't conclusively demonstrate that the 193
> Load_Cycle_Count was somehow causative, but I
> gots my suspicions.  Many of 'em highly suspectable.
>

It's part of the 'Wear Out Failure' part of the Bathtub Curve posted
in the last few days. That said, some Toyotas go 100K miles, and
others go 500K miles. Same car, same spec, same production line,
different owners, different roads, different climates, etc.

It's not possible to absolutely know when any drive will fail. I
suspect that the 300K spec is just that, a spec. They'd replace the
drive if it failed at 299,999 and wouldn't replace it at 300,001. That
said, they don't want to spec things too tightly, and I doubt many
people make a purchasing decision on a spec like this, so the vast
majority of drives will most likely do far more than 300K.

At 2 minutes per count on that specific WD Green Drive, if a home
machine is turned on for instance 5 hours a day (6PM to 11PM) then
300K count equates to around 6 years. To me that seems pretty generous
for a low cost home machine. However for a 24/7 production server it's
a pretty fast replacement schedule.
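
(Checking the arithmetic: one cycle every 2 minutes is 30 cycles per
hour, so 5 hours a day gives 150 cycles a day, and 300,000 / 150 =
2,000 days, i.e. roughly five and a half years, in the ballpark of the
6 above.)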

Here's data for my 500GB WD RAID Edition drives in my compute server
here. It's powered down almost every night but doesn't suffer from the
same firmware issues. The machine was built in April, 2010, so it's a
bit over 2 years old. Note that it's been powered on less than 1/2 the
number of hours but only has a 193 count of 907 vs > 700000!

Cheers,
Mark


c2stable ~ # smartctl -a /dev/sda
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.12-gentoo] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Western Digital RE3 Serial ATA
Device Model: WDC WD5002ABYS-02B1B0
Serial Number: WD-WCASYA846988
LU WWN Device Id: 5 0014ee 2042c3477
Firmware Version: 02.03B03
User Capacity: 500,107,862,016 bytes [500 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Thu May 10 11:45:45 2012 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 9480) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection
on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 112) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
  3 Spin_Up_Time 0x0027 239 235 021 Pre-fail Always - 1050
  4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 935
  5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
  7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
  9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 7281
 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 933
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 27
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 907
194 Temperature_Celsius 0x0022 106 086 000 Old_age Always - 41
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
Re: Are those "green" drives any good?
Hello,

On Thu, 10 May 2012, Mark Knecht wrote:
>On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
><invasivenorman@gmail.com> wrote:
>> They have an ugly tendency to nod off at 6 second intervals.
>> This runs up "193 Load_Cycle_Count" unacceptably: as many
>> as a few hundred thousand in a year & a million cycles is
>> getting close to the lifetime limit on most hard drives.  I end
>> up running some iteration of
>> # hdparm -B 255 /dev/sda
>
>Very true about the 193 count.

There was some bug, IIRC.
http://jeanbruenn.info/2011/01/23/wd-green-discs-and-the-problem-in-linux-load-cycle-count/

and search for 'linux Load_Cycle_Count' using your favorite search site.

HTH,
-dnh

--
Well, merry frelling christmas! -- Aeryn Sun, Farscape - 4x13 - Terra Firma
Re: Are those "green" drives any good?
Hello,

On Wed, 09 May 2012, Dale wrote:
>As some know, I'm planning to buy me a LARGE hard drive to put all my
>videos on, eventually. The prices are coming down now. I keep seeing
>these "green" drives that are made by just about every company nowadays.
> When comparing them to a non "green" drive, do they hold up as good?
>Are they as dependable as a plain drive? I guess they are more
>efficient and I get that but do they break quicker, more often or no
>difference?

Basically: they run at 5400 min^-1, the "normal" ones at 7200 min^-1,
and the green ones use less power. Years ago, a normal drive took 10-13W
running, up to 27W during spinup. Now it's IIRC 4-6W running and some
more during spinup (haven't seen any figures lately).

>I have noticed that they tend to spin slower and are cheaper. That much
>I have figured out. Other than that, I can't see any other difference.
> Data speeds seem to be about the same.

Yes.

>Please, no brand wars. I may get a WD, Maxtor, Samsung or some other
>brand.

Hm. You've been out of the loop. Of those 3, only one remains.
Maxtor was bought by Seagate some years ago and Samsung this year;
the first Samsung-built drives from Seagate are now appearing (I got one
of those; odd labeling, sold as
2000GB Seagate Barracuda Green ST2000DL004 (HD204UI)).

So now there are only 3.5 to 4 manufacturers left: WD, Seagate, Hitachi
and Toshiba (and Fujitsu?, which manufactures only 2.5" laptop drives).

Other sellers like cnMemory etc. used to repackage Samsung drives
(IIRC the other manufacturers did not allow that); I wonder what
those will do now that Samsung has been bought up by Seagate.

HTH,
-dnh

--
I am supposed to be the info provider, so here is my answer:
42
By the way: What is the question? -- Johannes Meixner
in https://bugzilla.novell.com/show_bug.cgi?id=190173
Re: Are those "green" drives any good?
Hello,

On Wed, 09 May 2012, Alan McKinnon wrote:
>One thing we have noticed is that Samsung's recent models are not very
>"green", they spin up slowly, use lots of power and make a racket when
>spinning. But they do work.

Which ones? I've got one of each model from the last few years, and
what you're saying applies to none of them.

-dnh

--
If breathing required conscious thought, the world population would be
on a sharp decline. -- Greg Andrews
Re: Are those "green" drives any good?
Hello,

On Wed, 09 May 2012, Dale wrote:
>While on the thread. Has anyone had any sort of luck with the
>recertified drives?

Avoid them.

-dnh

--
Well I wish you'd just tell me rather than try to engage my enthusiasm.
-- Marvin
Re: Are those "green" drives any good?
On Wed, May 9, 2012 at 1:47 AM, Dale <rdalek1967@gmail.com> wrote:
> Hi,
>
> As some know, I'm planning to buy me a LARGE hard drive to put all my
> videos on, eventually.

Hi Dale,
One thing I wanted to point out about the task you have in front of
you. There is a problem in your work statement here, and it really
comes down to a single one-letter word. That word is 'a', as in
"buy me _a_ LARGE hard drive".

No matter what drive you purchase, and no matter how well you treat
it, they all fail eventually and you lose your movies & all the time
it takes to put it back together again. At a minimum, if you plan on
buying one to use then you need to buy a _second_ drive to do backups
of the first. You need to rsync that second drive on a regular basis
and then disconnect it and put it in a different place in the house,
or even better, store it in a safety deposit box to protect against
theft or your house burning down, etc.
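
A minimal sketch of such a backup run (the mount points are made-up
examples, and --delete mirrors deletions to the backup, so use it
deliberately):

# mount /dev/sdb1 /mnt/backup
# rsync -a --delete /mnt/videos/ /mnt/backup/videos/
# umount /mnt/backup

The trailing slash on the rsync source matters: it copies the
directory's contents rather than nesting a second videos/ inside the
target.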

This sort of comment certainly goes for the system as a whole, but
as a seasoned Gentoo user I'm sure you're doing that already. ;-) Just
don't forget to do the same for this new drive.

Have fun,
Mark
Re: Are those "green" drives any good?
On Thu, 10 May 2012 21:38:20 +0200
David Haller <gentoo@dhaller.de> wrote:

> Hello,
>
> On Wed, 09 May 2012, Alan McKinnon wrote:
> >One thing we have noticed is that Samsung's recent models are not very
> >"green", they spin up slowly, use lots of power and make a racket
> >when spinning. But they do work.
>
> Which ones? I've got one of each model from the last few years, and
> what you're saying applies to none of them.
>
> -dnh
>

I wasn't talking from my experience, I was talking from my developer
colleagues' experience. I'll find out which drive models they used.

--
Alan McKinnon
alan.mckinnon@gmail.com
Re: Are those "green" drives any good?
On Thu, 10 May 2012 21:36:46 +0200, David Haller wrote:

> Basically: they run at 5400 min^-1, the "normal" ones at 7200 min^-1,
> and the green ones use less power.

Some green drives run at 5900 rpm.


--
Neil Bothwick

WinErr 01E: Timing error - Please wait. And wait. And wait. And wait.
Re: Are those "green" drives any good?
napalm@squareownz.org wrote:
> On Thu, May 10, 2012 at 07:38:34AM -0500, Dale wrote:
>>
>> The default is to keep them all running and to not spin them down. I
>> have never had a Linux OS spin down a drive unless I set it up to or
>> told it to. You can do this, though. The command and option is:
>>
>> hdparm -S <timeout> /dev/sdX
>>
>> X would be the drive letter. There is also the -s option, but it is not
>> recommended.
>>
>> There are also the -y and -Y options. Before using ANY of these, read
>> the man page. Each one has its uses and you need to know for sure which
>> one does what you want.
>>
>> Dale
>>
>
> Awesome, thanks very much. If I need to power down one of my drives I
> shall use hdparm!
>
> Does the kernel keep even unmounted drives spinning by default?
>
> Thank you Dale!


From my experience, as I posted, I have never had Linux spin down a drive
without me telling it to or setting it up to do so. If you want that
behavior disabled, as you have it in Windows, the default settings should
be fine.

If you have a drive that is not being used, then you can use one of
those commands to shut it down to save power, wear and tear or whatever.

Dale

:-) :-)

--
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!

Miss the compile output? Hint:
EMERGE_DEFAULT_OPTS="--quiet-build=n"
Re: Are those "green" drives any good?
On Thu, May 10, 2012 at 6:55 AM, <napalm@squareownz.org> wrote:
>
> hum hum!
> I know that Windows does this by default (it annoys me so I disable it)
> but does Linux disable or stop spinning the disks if they're inactive?
> I'm assuming there's an option somewhere - maybe just `umount`!

Some drives cannot have this spindown "feature" disabled, because it
is a fixed value in their firmware in order to be "green"...

You can adjust the power management setting with hdparm, and on some
drives this allows disabling the spindown or disabling power
management altogether.

On my HDDs, I cannot disable APM but I can disable spindown by
changing the power-saving level to 254. I have a script in
/etc/local.d/ which calls:

hdparm -B 254 /dev/sd[abcdef]

at boot time.

To quote the hdparm manpage:

"A low value means aggressive power management and a high value means
better performance. Possible settings range from values 1 through 127
(which permit spin-down), and values 128 through 254 (which do not
permit spin-down). The highest degree of power management is
attained with a setting of 1, and the highest I/O performance with a
setting of 254. A value of 255 tells hdparm to disable Advanced Power
Management altogether on the drive (not all drives support disabling
it, but most do)."
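
In case it's useful, a minimal sketch of that kind of script (the file
name and device list are just examples; local.d scripts must be
executable and end in .start):

# cat /etc/local.d/hdparm-apm.start
#!/bin/sh
# Raise the APM level so the drives never park/spin down on their own.
hdparm -B 254 /dev/sd[abcdef]

# chmod +x /etc/local.d/hdparm-apm.start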
Re: Are those "green" drives any good?
On Thu, 10 May 2012 17:53:27 -0500, Paul Hartman wrote:

> On my HDDs, I cannot disable APM but I can disable spindown by
> changing the power-saving level to 254. I have a script in
> /etc/local.d/ which calls:

You don't need a script; add the options you need to /etc/conf.d/hdparm
and add hdparm to the default runlevel.
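
Something like this, as a sketch (the variable name assumes the stock
Gentoo hdparm init script, which also understands per-device variables
such as sda_args):

all_args="-B 254"

in /etc/conf.d/hdparm, followed by:

# rc-update add hdparm default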

> hdparm -B 254 /dev/sd[abcdef]

That doesn't work with my WD WD20EARX drives, which just report APM
disabled when I run it.


--
Neil Bothwick

Eagles may soar, but Wombles don't get sucked into jet engines
Re: Are those "green" drives any good?
Hello,

On Fri, 11 May 2012, Neil Bothwick wrote:
>On Thu, 10 May 2012 17:53:27 -0500, Paul Hartman wrote:
>> On my HDDs, I cannot disable APM but I can disable spindown by
>> changing the power-saving level to 254. I have a script in
>> /etc/local.d/ which calls:
>
>You don't need a script; add the options you need to /etc/conf.d/hdparm
>and add hdparm to the default runlevel.
>
>> hdparm -B 254 /dev/sd[abcdef]
>
>That doesn't work with my WD WD20EARX drives, which just report APM
>disabled when I run it.

Oh boy, we did get confused in this thread, didn't we?

RTFM hdparm.

a) Disk APM usually has only 3 settings, and only controls the
"aggressiveness" or speed of seeks, i.e. how fast
the head moves when seeking from track to track.
0-127 slow
128-254 fast
255 default
At least some manufacturers disable this (IIRC e.g. Seagate
locks it to "slow" on the "green" disks and to "fast" on enterprise
disks).

b) spindown is a totally unrelated feature, which can be set by
using 'hdparm -S'. I have about 20 disks in two boxen, one of them
a WD 20xxEARS, and _NONE_ spin down (until shutdown).

Have a look into your /etc/pm-profiler/{YOUR_PROFILE} (not sure if
that's Gentoo standard, I only have a very minimal Gentoo
installed). I've e.g. copied the "Balanced Low Latency" profile but
set
SATA_ALPM="max_performance"
In the "powersaving" profile you get
SATA_ALPM="min_power"
which sets (via hdparm -S) the disks to spin down after some number
of seconds (20s? I don't know).

Anyway, there is some stuff setting disk-spindown timeouts. So, choose
and/or adjust pm/upower config and/or set spindown time via 'hdparm
-S', with pm-profiler, upower, init-script, whatever.

BTW: 'hdparm -S 0' disables spindown.
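
And to see what power state a disk is in right now (device name just
as an example):

# hdparm -C /dev/sda

which reports active/idle, standby, or sleeping.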

HTH,
-dnh, with a seriously outdated gentoo installed only in parallel, but
I have a lot of disks and know hdparm a bit ;)

--
When the SysAdmin answers the phone politely, say "sorry", hang up and
run awaaaaay!
Informal advice to users at Karolinska Institutet, 1993-1994
Re: Are those "green" drives any good?
On Thu, 2012-05-10 at 12:20 -0400, Norman Invasion wrote:
> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> > <SNIP>
>
> They have an ugly tendency to nod off at 6 second intervals.
> This runs up "193 Load_Cycle_Count" unacceptably: as many
> as a few hundred thousand in a year & a million cycles is
> getting close to the lifetime limit on most hard drives. I end
> up running some iteration of
> # hdparm -B 255 /dev/sda
> every boot.
>

hdparm installs an init script with an /etc/conf.d/hdparm file which
allows you to set things up at whatever runlevel you are using. Also
beware of things like "laptop-mode", which takes over rewriting the kernel
and hard drive parameters for dynamic power saving (i.e., different
settings when running on battery versus on mains). It really does work,
but it can kill a drive with Load_Cycle_Counts, so drive life can be
foreshortened if you get too zealous (i.e., very short spindown times
combined with a journalled file system).

BillK
Re: Are those "green" drives any good?
On Thursday 10 May 2012 19:51:14 Mark Knecht wrote:
> On Thu, May 10, 2012 at 11:13 AM, Norman Invasion
>
> <invasivenorman@gmail.com> wrote:
> > On 10 May 2012 14:01, Mark Knecht <markknecht@gmail.com> wrote:
> >> On Thu, May 10, 2012 at 9:20 AM, Norman Invasion
> >>
> >> <invasivenorman@gmail.com> wrote:
> >>> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> > >>>> <SNIP>
> >>>
> >>> They have an ugly tendency to nod off at 6 second intervals.
> >>> This runs up "193 Load_Cycle_Count" unacceptably: as many
> >>> as a few hundred thousand in a year & a million cycles is
> >>> getting close to the lifetime limit on most hard drives. I end
> >>> up running some iteration of
> >>> # hdparm -B 255 /dev/sda
> >>> every boot.
> >>
> >> Very true about the 193 count. Here's a drive in a system that was
> >> built in Jan., 2010 so it's a bit over 2 years old at this point. It's
> >> on 24/7 and not rebooted except for more major updates, etc. My tests
> >> say the drive spins down and starts back up every 2 minutes and has
> >> been doing so for about 28 months. IIRC the 193 spec on this drive was
> >> something like 300000 max with the drive currently clocking in at
> >> 700488. I don't see any evidence that it's going to fail but I am
> >> trying to make sure it's backed up often. Being that it's gone >2x at
> >> this point I will swap the drive out in the early summer no matter
> >> what. This week I'll be visiting where the machine is so I'm going to
> >> put a backup drive in the box to get ready.
> >
> > Yes, I just learned about this problem in 2009 or so, &
> > checked on my FreeBSD laptop, which turned out to be
> > at >400000. It only made it another month or so before
> > having unrecoverable errors.
> >
> > Now, I can't conclusively demonstrate that the 193
> > Load_Cycle_Count was somehow causative, but I
> > gots my suspicions. Many of 'em highly suspectable.
>
> It's part of the 'Wear Out Failure' part of the Bathtub Curve posted
> in the last few days. That said, some Toyotas go 100K miles, and
> others go 500K miles. Same car, same spec, same production line,
> different owners, different roads, different climates, etc.
>
> It's not possible to absolutely know when any drive will fail. I
> suspect that the 300K spec is just that, a spec. They'd replace the
> drive if it failed at 299,999 and wouldn't replace it at 300,001. That
> said, they don't want to spec things too tightly, and I doubt many
> people make a purchasing decision on a spec like this, so the vast
> majority of drives will most likely do far more than 300K.
>
> At 2 minutes per count on that specific WD Green Drive, if a home
> machine is turned on for instance 5 hours a day (6PM to 11PM) then
> 300K count equates to around 6 years. To me that seems pretty generous
> for a low cost home machine. However for a 24/7 production server it's
> a pretty fast replacement schedule.
>
> Here's data for my 500GB WD RAID Edition drives in my compute server
> here. It's powered down almost every night but doesn't suffer from the
> same firmware issues. The machine was built in April, 2010, so it's a
> bit over 2 years old. Note that it's been powered on less than 1/2 the
> number of hours but only has a 193 count of 907 vs > 700000!
>
> Cheers,
> Mark
>
>
> <SNIP full smartctl output, identical to the listing earlier in the thread>

Is this 193 Load_Cycle_Count an issue only on the green drives?

I have a very old Compaq laptop here that shows:

# smartctl -A /dev/sda | egrep "Power_On|Load_Cycle"
  9 Power_On_Hours 0x0012 055 055 000 Old_age Always - 19830
193 Load_Cycle_Count 0x0012 001 001 000 Old_age Always - 1739734

Admittedly, there are some 60 errors on it (having been used extensively on
bouncy trains, buses, aeroplanes, etc) but it is still refusing to die ...
O_O

It is a Hitachi 20G

=== START OF INFORMATION SECTION ===
Model Family: Hitachi Travelstar 80GN
Device Model: IC25N020ATMR04-0
Serial Number: MRX107K1DS623H
Firmware Version: MO1OAD5A
User Capacity: 20,003,880,960 bytes [20.0 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 6
ATA Standard is: ATA/ATAPI-6 T13 1410D revision 3a
Local Time is: Sat May 12 10:30:13 2012 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===

--
Regards,
Mick
Re: Are those "green" drives any good?
On Saturday, 12 May 2012, 10:34:12, Mick wrote:
> On Thursday 10 May 2012 19:51:14 Mark Knecht wrote:
> > <SNIP>
>
> Is this 193 Load_Cycle_Count an issue only on the green drives?
>
> I have a very old Compaq laptop here that shows:
>
> # smartctl -A /dev/sda | egrep "Power_On|Load_Cycle"
>   9 Power_On_Hours 0x0012 055 055 000 Old_age Always - 19830
> 193 Load_Cycle_Count 0x0012 001 001 000 Old_age Always - 1739734
>
> Admittedly, there are some 60 errors on it (having been used extensively on
> bouncy trains, buses, aeroplanes, etc) but it is still refusing to die ...
> O_O
>
> It is a Hitachi 20G
>
> === START OF INFORMATION SECTION ===
> Model Family: Hitachi Travelstar 80GN
> Device Model: IC25N020ATMR04-0
> Serial Number: MRX107K1DS623H
> Firmware Version: MO1OAD5A
> User Capacity: 20,003,880,960 bytes [20.0 GB]
> Sector Size: 512 bytes logical/physical
> Device is: In smartctl database [for details use: -P show]
> ATA Version is: 6
> ATA Standard is: ATA/ATAPI-6 T13 1410D revision 3a
> Local Time is: Sat May 12 10:30:13 2012 BST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> === START OF READ SMART DATA SECTION ===

SAMSUNG HD502IJ
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0
  3 Spin_Up_Time 0x0007 085 085 011 Pre-fail Always - 5480
  4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 1864
  5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
  7 Seek_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0
  8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 10814
  9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 20178
 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0
 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0
 12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 1854
 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 8
184 End-to-End_Error 0x0033 100 100 099 Pre-fail Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 082 065 000 Old_age Always - 18 (Min/Max 13/18)
194 Temperature_Celsius 0x0022 079 061 000 Old_age Always - 21 (Min/Max 13/21)
195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 2587
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 100 100 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0
201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0


SAMSUNG HD753LJ:

ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0
  3 Spin_Up_Time 0x0007 075 075 011 Pre-fail Always - 8330
  4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 1399
  5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
  7 Seek_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0
  8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 10041
  9 Power_On_Hours 0x0032 097 097 000 Old_age Always - 16941
 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0
 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0
 12 Power_Cycle_Count 0x0032 099 099 000 Old_age Always - 1397
 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 081 065 000 Old_age Always - 19 (Min/Max 12/19)
194 Temperature_Celsius 0x0022 077 064 000 Old_age Always - 23 (Min/Max 12/23)
195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 1412
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 100 100 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0
201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0

I have some more disks, but those clocked the most hours.


--
#163933
Re: Are those "green" drives any good?
On Thu, May 10, 2012 at 12:20:57PM -0400, Norman Invasion wrote:
> On 9 May 2012 04:47, Dale <rdalek1967@gmail.com> wrote:
> > <SNIP>
>
> They have an ugly tendency to nod off at 6 second intervals.
> This runs up "193 Load_Cycle_Count" unacceptably: as many
> as a few hundred thousand in a year & a million cycles is
> getting close to the lifetime limit on most hard drives. I end
> up running some iteration of
> # hdparm -B 255 /dev/sda
> every boot.

I bought my current internal laptop disk for Christmas 2008. It's a Samsung
HM500JI (with 500 GB). Early on I noticed that, according to smartctl, its
Load_Cycle_Count is increasing every 2 or 3 seconds. I even asked Samsung
about this, but they either couldn't give any clue or didn't want to, b/c the
Serial Number is from Turkey, so not from the European market.

Anyhoo... I just checked the values:
Power on hours: 11500
Start/stop count: 2797
Power cycle count: 2197

But the load cycle count is at almost 12.3 million(!). That just can't be
right. I stopped believing that number a good while ago.


OTOH, I just became a bit nervous when looking at smartctl's output...
Reallocated sectors: 7 (threshold 10)
Calibration retry count: 1631
Load retry count: 1631
--
Gruß | Greetings | Qapla'
Please do not share anything from, with or about me with any Facebook service.

Humans lose most of their time trying to gain time.
Re: Are those "green" drives any good?
Hello,

On Sat, 12 May 2012, Mick wrote:
>Is this 193 Load_Cycle_Count an issue only on the green drives?

AFAIK it was a firmware bug on some models.

>I have a very old Compaq laptop here that shows:
>
># smartctl -A /dev/sda | egrep "Power_On|Load_Cycle"
>  9 Power_On_Hours 0x0012 055 055 000 Old_age Always - 19830
>193 Load_Cycle_Count 0x0012 001 001 000 Old_age Always - 1739734

Laptop drives are _built_ for unloading frequently to protect the
drive from bumps and also to save power. Desktop drives are _not_
built for that.

So, don't worry.

HTH,
-dnh

--
Death: I am last minute stuff!
Re: Are those "green" drives any good?
Hello,

On Sat, 12 May 2012, Frank Steinmetzger wrote:
>On Thu, May 10, 2012 at 12:20:57PM -0400, Norman Invasion wrote:
>> <SNIP>
>
>I bought my current internal laptop disk for Christmas 2008. It's a Samsung
>HM500JI (with 500 GB). Early on I noticed that, according to smartctl, its
>Load_Cycle_Count is increasing every 2 or 3 seconds. I even asked Samsung
>about this, but they either couldn't give any clue or didn't want to, b/c the
>Serial Number is from Turkey, so not from the European market.
>
>Anyhoo... I just checked the values:
>Power on hours: 11500
>Start/stop count: 2797
>Power cycle count: 2197
>
>But the load cycle count is at almost 12.3 million(!). That just can't be
>right. I stopped believing that number a good while ago.

As I said in another mail: laptop drives are built for frequent
unloading. Your number does seem a bit high though; that's about 1000
load cycles per hour...

>OTOH, I just became a bit nervous when looking at smartctl's output...
>Reallocated sectors: 7 (threshold 10)
>Calibration retry count: 1631
>Load retry count: 1631

That's not healthy. Cf. http://en.wikipedia.org/wiki/S.M.A.R.T.

HTH,
-dnh

--
To resist the influence of others, knowledge of one's self is
most important. -- Teal'C, Stargate SG-1, 9x14 - Stronghold
Re: Are those "green" drives any good?
On Sun, May 13, 2012 at 11:38:34AM +0200, David Haller wrote:

> >I bought my current internal laptop disk for Christmas 2008. It's a Samsung
> >HM500JI (with 500 GB). Early on I noticed that, according to smartctl, its
> >Load_Cycle_Count is increasing every 2 or 3 seconds. I even asked Samsung
> >about this, but they either couldn't give any clue or didn't want to, b/c the
> >Serial Number is from Turkey, so not from the European market.
> >
> >Anyhoo... I just checked the values:
>>[…]
> >But the load cycle count is at almost 12.3 million(!). That just can't be
> >right. I stopped believing that number a good while ago.
>
> As I said in another mail: laptop drives are built for frequent
> unloading. Your number does seem a bit high though, that's about 1000
> load cycles per hour...

My Pa bought the same HDD model for his laptop a few months back. Last weekend
I visited him and loaded a diag tool on his Windows. It showed 20,000 or
30,000 cycle counts. So I guess my model just has bad firmware or summit like that.
Perhaps that's why it was so cheap back then (only ~62€ for a 500 GB drive by
the end of 2008).

Oh well, I'll just have to remember to do backups a bit more often.
--
Gruß | Greetings | Qapla'
Please do not share anything from, with or about me with any Facebook service.

A boss is a human just like everyone else, he just doesn’t know.
