[Tech] Server hardware information required!
I want to get two of these for home use. Aside from SAS-to-SATA breakout cables, does anyone here with the appropriate knowledge have any information about things I'd need to buy to make these work, or reasons I don't want to mess with these at all?
For reference I'm trying to create a massive JBOD setup for my Windows Home Server box to spread dozens of terabytes of storage across dozens of hard drives, and I want the most space and money-efficient way to do that.

And I thought I was bad with just four drives~
Or: stockpiling for the zombie apocalypse. When there won't be any new TV shows airing.
Stay away from the JMicron chipsets for controller cards.
Then there is the other Server 08 machine that hosts things like the VPN, dedicated game servers, etc.
The other two are file servers: one for temporary massive amounts of data, and all the stuff that's a keeper makes it to the second file server.
I think all the ISPs on the planet would think 11.5 TB is a crazy amount, though it does tend to add up to those amounts after a while.
*stares at fiber node out in front of house*
I'm not saying you couldn't do it this way; it just wouldn't be a very good implementation of what you're trying to do. JBOD doesn't have any real redundancy built into it.
I always make sure there's some form of reliability built in, so if I were to stuff 40 or so drives into some kind of array I wouldn't have to spend much time trying to figure out what could be gumming things up if something went wrong.
That, and 2 TB drives so far have been somewhat flaky -_-, and they are just starting to switch over to drives with 4K sectors as well.
You might find the answer you're looking for in the forums at http://www.storagereview.com/; you'll pretty much get an answer within a day of posting :)
-Acru
o.o; It does too have redundancy. I tell it "this share needs to be redundant!" and it stores a copy of it on another hard drive. Unless I'm not understanding something correctly?
I'm not using 2 TB drives for just that reason. We're sticking with 1.5s because those are the sweet spot with better reliability.
I've yet to hear about any solution with a better balance of simplicity, redundancy, and expandability. Pro users will say "Don't use WHS, use Server 2xxx or Linux" because it gives more options and such, but they're missing the point that I want something which is effortless to use and manages the storage pool in an almost mom-friendly manner.
Ah to each their own. hehe
CPU resources.
Saturation of the local bus.
RAM: the system will attempt to cache the volume, and opening and closing the volume in Windows Explorer will become increasingly painful as more data is added.
Power requirements will be huge.
Unless this is a rack-mounted hardware solution with a dedicated server for file serving, the system will just be too unmanageable. You already take a loss in performance when you use software-managed RAID.
While JBOD should give you the relative speed of each and every individual drive, you would still see decreases in performance as the system will need to refresh the cache when mounting the volume.
A better alternative, rather hard to explain in a limited amount of space, would be to create one drive and have folders on that drive for the other drives.
Instead of drive letters for each drive in your system, you would have a folder on a single drive for each additional drive. It would look something like C:\Drive 1, C:\Drive 2 and so on. But to create one 50 TB volume with today's management technology, without exploring a server hardware option, would be too ambitious. This is a big idea you can't save money on or it will fail.
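What I'm describing is basically an NTFS volume mount point: each extra disk gets mounted into an empty folder on one drive instead of taking a drive letter. A minimal sketch using Windows' stock mountvol tool, driven from Python just for illustration; the folder name and the volume GUID below are placeholders, so list your real volumes with a bare mountvol first, and run it from an elevated prompt.

# Sketch only (Windows): mount an extra disk at C:\Drive1 instead of a drive letter.
# The volume GUID is a placeholder; run "mountvol" with no arguments to see the
# real \\?\Volume{...}\ names on your machine. Needs admin rights.
import os
import subprocess

mount_folder = r"C:\Drive1"  # hypothetical folder; must exist, be empty, and sit on NTFS
volume_name = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder GUID

os.makedirs(mount_folder, exist_ok=True)
subprocess.run(["mountvol", mount_folder, volume_name], check=True)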
CPU usage on a 1.8 GHz single-core Celeron hasn't been high enough for me to ever notice.
The primary purpose is to stream media, which has bitrates from 0.1 to 30 Mbps, so I don't see the local bus being saturated any time soon.
Can't comment on the RAM usage, as I've never seen the resource utilization inside the server during normal use.
How are the power requirements huge? Having a low-power system doing very little work 24/7 is preferable to a full-on desktop doing very little work 24/7.
It is a rack mounted hardware solution/dedicated server for file sharing. That's... what it is. It's already a dedicated file server, and I'm buying the rack mount hardware for it.
What about WHS's use of a software RAID-like solution is taxing on resources and limits performance, specifically?
Forgive me if I'm just really poorly informed on this. Maybe it's because of my computer background, but the way I see it, I've got a reasonably capable computer processor which only has to maintain a database table of what drive(s) a file is on. "Oh, he wants the file at \\fuzzyserver\software\games\Morrowind\Morrowind.iso? Let's look that up, ah, it's actually on fuzzyserver's H drive, at... software\games\Morrowind\Morrowind.iso." Or maybe "Oh shit, the H drive isn't reporting to me. I need to look up the files that were on there with data duplication enabled, find out where live copies exist, copy them to other drives, and add those drive letters to my listing of where the files exist." (a more complex operation, but something which should only happen once in a blue moon).
'cuz I mean, I can see where all those files are on the individual drives if I pull them out and plug them in. It just takes all the files and directory structures and spreads them out randomly across dozens of drives. I could simulate the database functionality in a very simple Excel file--I don't see why it's such a challenge for a computer which literally has nothing else that it does all day.
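To make the mental model concrete, here's a toy sketch of the kind of lookup I'm picturing--not WHS's actual Drive Extender internals, just made-up paths and drives: a share path goes in, the physical drive holding a live copy comes out, and duplicated files are listed on more than one drive.

# Toy illustration of a share-path -> physical-drive table; paths and drives are made up.
file_table = {
    r"software\games\Morrowind\Morrowind.iso": ["H:"],   # single copy
    r"documents\taxes\2009.pdf": ["D:", "K:"],            # duplication enabled
}

def resolve(share_path, failed_drives=()):
    """Return a drive that still holds a live copy of the file, or None."""
    for drive in file_table.get(share_path, []):
        if drive not in failed_drives:
            return drive
    return None  # no surviving copy; time to re-duplicate from wherever one still exists

print(resolve(r"software\games\Morrowind\Morrowind.iso"))          # -> H:
print(resolve(r"documents\taxes\2009.pdf", failed_drives={"D:"}))  # -> K: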
I just see that you wanna save money, yet buying/planning for that amount of drives is still a rather large cost. I know it's just going to be a dumb massive network storage device of sorts; it just seems counterproductive skimping on the board/CPU, or using something like WHS to accomplish it. WHS V1 doesn't have all the goodies like a reworked network stack; WHS V2 does, but it's basically just a stripped-down Server 08 R2 system.
There is simplifying things, and then there is oversimplifying things. Working with Microsoft's latest and greatest in Server 08 and Windows 8 technologies, I never got the impression during the WHS beta that anyone else testing it tested it to be used in such a manner. People were JBOD-spanning 300GB-500GB drives at the time; nobody ever really tested it for bugs at 50TB capacities.
I mean, just the whole concept of what you will be doing isn't exactly what I would call something for beginners, heh
Until someone comes up with a better deal than 1.5 TB for <$100, I will end up spending a lot on the hard drives, but that's a cost I'm willing to live with because there's simply no way around it if I want files. Which I do.
I don't really know why I'd need a reworked network stack, or what's wrong with a stripped down Server 08 (or even 03) system. I do not need a full-fledged general-purpose server. WHS does everything I want, more easily, and for less money, than those enterprise-level solutions.
And I still don't see why WHS should have a problem with 40 drives compared to 4 drives. As I add more drives the bus bandwidth isn't more strained because I'm not increasing the amount of data I access, and I really hope the processor has enough oomph to consult a database to see what drive the file I want is stored on--seeing as the 1980s ended two decades ago, it has no excuse not to.
I really appreciate your concerns and taking the time to talk to me, but you must understand that with nothing other than "I've never heard of anyone using more than X drives for a WHS setup" and "it would take too long to explain", I can't let your opinions weigh too heavily in my decisions. Hell, one of the reviews for the case I want said it was the perfect WHS case.
Oh yeah, don't forget to take into consideration the 4K sector format being phased in on newer drives; try to avoid drives with this or you might run into quite a bit of hassle with compatibility.
Give it a shot, it's your time and your money.
We have a Windows Home Server system spec'd out to be as power-efficient as possible without dipping into Atom territory. Motherboard and CPU were selected for low power draw. Currently it connects to 11 hard drives, 11.75ish TB, and is performing happily... for practical reasons we want to expand that in the future.

Its primary use is a massive file dump, sitting on 11.5 TB of assorted files... personal documents, disc images, photos, but primarily music and video. As such, it's only being accessed sometimes, and usually only at the rate media needs to be streamed (max of ~30 Mbps for Blu-ray content). It's used for convenience (we never have to move around hard drives or optical discs) and clean organization (we don't have tons of content spread out across half a dozen computers in different folders, then have to labor over permissions and leave all the computers on to be assured we have access to the content). It also has the supplementary role of performing and managing system backups once a day.

We are not literally making a JBOD for the system, WHS just treats the drives like a single volume which we all access as a network share. We have selective data redundancy enabled for our more important files.
It is on a UPS battery backup thingy and we'll be getting a new one because the old one doesn't hold much charge. On the power side of things we'll also be getting a new PSU to improve efficiency. I don't understand what you mean about the power being accumulative.
Does that make any more sense?
That is along the lines of what you want to do, but you will still need to consider a RAID card that can handle up to 24 drives if you want to do JBOD.
This isn't so much an argument that you can't do it, but most of these setups are real workhorses for very good reason. Take CPU usage into account: if one drive utilizes 4% and you have 40 drives total, then to access content across all 40 drives, worst case scenario, you are looking at 160% CPU utilization.
As you can see from the link these units require a lot of power and usually have their own motherboards and processors. As I said, give it a shot. See what works and what doesn't.
Curious though, what kind of controllers are you using to add more drives?
Normal servers are workhorses for a reason, correct. And that reason is they're reading and writing lots of things concurrently as fast as possible with crazy RAID out the ass. If we had a RAID 5 setup across 24 drives and there were 100 clients at a time who were accessing data and making changes, then yes, we'd want a RAID card and nice CPU. If we were recording and broadcasting live video streams for a TV network then it'd absolutely be important to have redundant power supplies. And if we were doing those things, we'd have thousands of dollars to throw around to make sure things worked smoothly. But that's just not the case.
As I explained to you, we have very modest access requirements. If a single drive takes 4% to access, then 40 drives don't take 40 times as much, because whether we have 1 drive or 1 million drives, we still only have 4 people in the apartment to access content--and half of us are gone or sleeping at any given time. The idle performance and power overhead of adding another drive is important, for sure; in fact, that's probably the most important thing.
I'll certainly keep you in the loop about how it works out for us, though so far, WHS has scaled perfectly up to 11 drives.
Cheapo 4-SATA 1X PCI-Express controllers. Cheapest things we can find that have good ratings.
So I'm rock hard, dripping with lust, loaded and ready to blow, who wants it?
He might be able to help you though.
Any reason you want to do it with SAS cables? If you just reduced the complexity and got a couple of PCI-Express SATA controllers and stuck them into a cheap-as-chips motherboard (e.g. Intel G41 with something like a Celeron or Pentium Dual Core in it) you'd be able to easily power that number of disks, and you don't need a lot of processing power, as Windows Home Server will easily run on things like an Atom.
Enough connectors to get all the drives working
A moderately good PSU (roughly 400 W for the drives + about 100 W for the CPU + the rest of the system), so about 600-700 W (rough numbers sketched after this list)
If you want fast access to this, at least one 1 Gbps network interface
You shouldn't need anything with grunt since you are just using it as one huge backup drive.
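Rough numbers behind that PSU estimate; the per-drive wattages here are assumptions on my part, not measurements, and staggered spin-up is what keeps the worst case from blowing past a 600-700 W unit.

# Rough PSU sizing sketch; the 8 W idle / 25 W spin-up per-drive figures are assumptions.
drives = 40
base = 100                          # CPU, board, fans (assumed)
idle_watts = drives * 8 + base      # ~420 W with everything spun up and idling
spinup_watts = drives * 25 + base   # ~1100 W if all 40 drives spin up at once
print(idle_watts, spinup_watts)
# With staggered spin-up (common on SAS backplanes/HBAs), a 600-700 W unit is plausible.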
What's grunt?
Video: http://drobo.com/resources/droboFSdemo.php
wow that is impressive.
I'll call and see what my friend has to say.
He did say that he would buy one of the servers if he had the money.
What power management options are you using?
I would expect that spinning down most of the drives would have a significant power-saving advantage in this application, since a drive would only need to be rotating when you're actually streaming something to or from it.
We're using... whatever Windows Home Server uses. As far as power consumption goes, our first time starting it up after migrating to the new case, with a very low-end graphics card and 11 hard drives, it spiked up to 300W on startup and settled down to about 190 idle. With the graphics card removed after we were sure it was working happily, and the addition of the 1 TB drive that came with the case, it doesn't get higher than 250 during startup and it idles closer to 150. During use it seems to go up by about 5 watts per hard drive active, so 150-170 watts is about as high as it goes.
I'm pretty sure the drives spin all the time, which I might wanna look into. I'm not so much worried about the power consumption (I figure I could save about 50 W around the clock, call it $50/year), more so the drives dying and the data being lost. Although it has file redundancy on the important data, I'd hate to have to track down the "unimportant" data.
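Quick sanity check on that $50/year figure, assuming a hypothetical rate of about $0.11 per kWh (the rate is my assumption, not something I've measured):

# 50 W saved around the clock, at an assumed ~$0.11 per kWh.
watts_saved = 50
kwh_per_year = watts_saved * 24 * 365 / 1000   # ~438 kWh
print(round(kwh_per_year * 0.11))               # ~$48 a year, so "$50/year" is about right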
Overall the system is really quiet, 6x80mm fans at 14 decibels and 32 CFM, but it's a huge step down in airflow compared to our previous setup--10x120mm fans at 24 decibels and ~70CFM apiece. So the drives tend to run about 15 F hotter, 100-115 F as opposed to 85-100 F. Other than that though, the idle refrigerator at the same distance makes more noise.
Heat- and power-wise I think we're in a pretty good place, and data density is great (20 drives at 2 TB apiece = up to 40 TB of managed space in 2 cubic feet). At our minimum data consumption rates we're looking at about 2.5 years before we need drives with higher capacity than 2 TB and/or a second case, so I'd say it's a good investment. My biggest concern is all the people who were saying that Windows Home Server shouldn't be managing dozens of hard drives in what has the end appearance of a JBOD. Why, they couldn't say. I didn't think it was too much to ask a 1.8 GHz processor with 1-2 GB of DDR2 to manage a database that records what drive each file is on >_>;
I agree with your CPU performance evaluation. A worst case of four streams of high-def video isn't much of a load for a modern system. It'd be different if you were running a video server for your whole neighborhood! (Although really high-def BD video could get to be an issue. My understanding is that the initial release of Avatar uses almost the entire 50GB of a BD.)
I think being worried about capacity limitations is reasonable, though. Too often designs have built-in limits because the authors thought "They could never, ever, possibly use more than that!" only to run into that brick wall in just a few years. I don't know anything about the internal design of the home server. If it's based on their commercial/enterprise software, though, I would expect it to be fine. Any capacity limits would have to be added intentionally, but MS has been known to do that, unfortunately.
At some point, you might want to consider putting all your hardware in a cooled closet. Running the disks that much hotter worries me a little: cooler equipment lasts longer, and copying terabytes takes a while. Also, when trying to relax and listen to quiet music (but do you ever do that? ;3 ), background noise can get to be quite annoying.
High-def streams are, for now, limited to 72 Mbps because that's the peak read speed of a 2X Blu-ray drive. In reality it'll be lower because you can't count on that sort of bitrate on the inside tracks of the disc, and if you have a 2.5 hour movie (Avatar) the 50 GB cap means you can't have higher than 45 Mbps anyway. Streaming that sort of data is no problem for the server; it's got a 1 Gbps limit at the Ethernet port, obviously, but that should be able to handle 20 Avatar streams. Bandwidth off the drives and from the drives to the system isn't an issue either. I'd say the biggest limitation after that Gbit Ethernet connection would be a drive trying to stream multiple video files at once. Poor read heads...
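The arithmetic behind that 20-streams claim, for the curious; real-world Ethernet overhead would trim it a little:

# Back-of-the-envelope for the gigabit bottleneck, using the numbers above.
link_mbps = 1000                               # gigabit Ethernet, ignoring protocol overhead
avatar_mbps = 50 * 8 * 1000 / (2.5 * 3600)     # 50 GB over 2.5 hours ~= 44 Mbps
print(int(link_mbps // avatar_mbps))           # -> 22 streams, so "about 20" checks out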
WHS is based on Server 2k3 (the next version will be based on 2k8) so I assume any limitations would be inherited from there. I can't imagine them actively adding further restrictions to it, because most people who would want 2k3 are companies with the budgets and the legal sense not to skimp on a home license for work.
We'll definitely put the hardware in a cooler location when we have one available. We're moving everything, including eventually gaming desktops, in the direction of being rack mounted. That rack will go in a dedicated server room or closet, kept colder than the rest of the house. We'll probably water-cool everything we can, too. We don't care for background noise, fortunately the server from 10 feet away makes less noise than my laptop.