CAUTION The answer about changing the UNIX password for "postgres" through "$ sudo passwd postgres" is not preferred, and can even be DANGEROUS!
This is why: by default, the UNIX account "postgres" is locked, which means you cannot log in to it with a password. If you use "sudo passwd postgres", the account is immediately unlocked. Worse, if you set the password to something weak like "postgres", you are exposed to a serious security risk. For example, there are a number of bots out there trying the username/password combo "postgres/postgres" to log into your UNIX system.
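To check (or restore) the locked state of the account, the standard shadow-utils commands can be used - a quick sketch (requires root):

```
# Show the password status of the account: "L" (or "LK") means locked
sudo passwd -S postgres

# If you already set a password by accident, re-lock the account
sudo passwd -l postgres
```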
What you should do is follow Chris James's answer:
sudo -u postgres psql postgres
# \password postgres
Enter new password:
To explain it a little bit: there are usually two default ways to log in to a PostgreSQL server:
By running the "psql" command as a UNIX user (so-called ident/peer authentication), e.g.: sudo -u postgres psql. Note that sudo -u does NOT unlock the UNIX user.
By TCP/IP connection using PostgreSQL's own managed username/password (so-called TCP authentication), i.e. NOT the UNIX password.
So you never want to set the password for UNIX account "postgres". Leave it locked as it is by default.
Of course things can change if you configure it differently from the default setting. For example, one could sync the PostgreSQL password with UNIX password and only allow local logins. That would be beyond the scope of this question.
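For reference, those two login paths map onto entries in pg_hba.conf. A typical default looks roughly like this (the file's location and the exact methods vary by distribution - on Debian/Ubuntu it lives under /etc/postgresql/<version>/main/):

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer   # psql as the UNIX user, no password
host    all       all       127.0.0.1/32  md5    # TCP/IP, PostgreSQL's own password
```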
Let's start from the top.
I've got a large database for a
telephony application, about 60GiB or
so
Rephrase that: I have a pretty small database. Seriously, the days when 60 GiB was large were about 10 years ago. Compare that to: I have a financial data database that is 800 GB and growing, with 95% of the data in one table ;)
It'll have four internal HDs,
probably for OS and backup, but the
biggest change is the attached storage
- 12 x 15k drives on an external SAS interface.
Here is what I would do:
- Mirror two discs for boot. Carve off a 64 GB partition for the OS, and use the rest for TEMP. You do not want to see a lot of IO there.
- Mirror the next 2 discs for log files. If those run high on IO, replace them with SSDs. Given the small amount of changes you have, a small 80 GB SSD should be enough.
- The rest (12 discs): put up a huge RAID 10.
More important is that you reconfigure your server to use:
- Minimum 12 data and log files for tempdb. Do NOT autogrow those. Fix them.
- Minimum 12 log files. No joke. Do not autogrow here, either.
- Minimum 12 database files. Did I say - no autogrow?
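As a sketch of what that looks like in T-SQL (the file name, path and size here are made up - adjust them to your layout):

```sql
-- Add a fixed-size tempdb data file; repeat for each additional file,
-- up to one per logical processor. FILEGROWTH = 0 disables autogrow.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 4GB,
          FILEGROWTH = 0);
```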
Then, of course, there's always RAID
10 vs RAID 5/6, vs RAID 50/60 to
consider.
What, please, is there to consider, given the HUGE performance difference between RAID 10 and the others - RAID 10 blows RAID 5/6/50/60 out of the water for anything requiring high IO. RAID 5/6 only make sense if you put in SSD drives - then the significant IO loss is totally absorbed. Actually, given your trivial database size, it may be financially idiotic to even go with 2x15 SAS discs. Get 2 x 200 GB RealSSD drives and you will have about 100 times the IO performance of a RAID 10 over your 30 drives. Given the significant cost of the infrastructure, you may save a LOT of money on the way.
Actually, the smartest thing would be to not order the whole SAS thingy - you have 4 drive slots: put the OS on two drives, and use 200 GB SSDs in a mirror on the other two. Finished. And a LOT faster than your SAS stuff, too ;) The joy of having a trivial database size. Check http://www.fastestssd.com for the current state. A modern SSD will reach 200 MB/s sustained random rates in that setup, even if not top of the line. This will seriously wipe the floor with the mediocre IO you get from your SAS setup.
Or: 30 SAS discs are maybe 4,800 IOPS. A RealSSD gets up to 50,000 - on one disc, with "weak times" of around 36,000 IOPS. That means that ONE SSD is about 7.5 times as fast - in slow moments - as your 30 disc setup. Around 10 times as fast in good times. Ouch.
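The arithmetic behind those ratios is simple (all figures are the answer's own estimates, assuming roughly 160 random IOPS per 15k SAS disc):

```shell
sas_iops=$((30 * 160))                       # ~160 IOPS x 30 discs = 4800
echo "SAS array: ${sas_iops} IOPS"
# SSD at its weak point (36,000 IOPS) vs the array, scaled x10 to keep
# one decimal place in integer arithmetic:
echo "ratio x10: $((36000 * 10 / sas_iops))"   # 75, i.e. ~7.5x
```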
Be careful to properly align the partitions and properly format the file system (hint: do not use the standard 4 KB allocation unit size - stupid for SQL Server).
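On Windows, the allocation unit size is chosen at format time; 64 KB is the usual recommendation for SQL Server data volumes. A sketch (the drive letter is an assumption):

```
REM Check the current allocation unit ("Bytes Per Cluster") of drive E:
fsutil fsinfo ntfsinfo E:

REM Reformat with a 64 KB allocation unit (destroys all data on E:)
format E: /FS:NTFS /A:64K /Q
```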
I could do a massive RAID 10 of 6
disks and throw the entire DB onto it,
but I've been considering breaking up
TRAFFIC and it's index files onto
separate partitions - and possible
BILLING as well. In addition, the
Everything Else might do well in it's
own File.
That would be stupid abuse of SQL Server. Given that it does load balancing between files and wants/asks for multiple files per filegroup (one per logical processor), it would not gain anything - au contraire. Separating files and indices achieves NOTHING if they end up on the same discs anyway. In your case you are better off with one filegroup, 12 files. If you want later scalability, you may want to go with 48 data files to start with - that gives you room for up to 48 processor cores.
You may want to use two filegroups to split off the billing data from the less volatile / less requested data - not for direct speed, but for the privilege of possibly moving it off completely later without reorganization - that is what I did with my financial database.
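A sketch of that filegroup split in T-SQL (the database name Telephony, the path and the size are placeholders):

```sql
-- Put billing data in its own filegroup so it can later be moved or
-- restored independently, without reorganizing the rest of the database.
ALTER DATABASE Telephony ADD FILEGROUP Billing;
ALTER DATABASE Telephony
ADD FILE (NAME = billing1,
          FILENAME = 'E:\data\billing1.ndf',
          SIZE = 10GB)
TO FILEGROUP Billing;
-- New billing tables then go there explicitly:
-- CREATE TABLE dbo.BillingDetail (...) ON Billing;
```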
Last words: whoever purchased the server made a bad hardware decision. There is no reason to have an external SAS tray for something that small. My database server is from SuperMicro and has 24 disc slots in 2 rack units of height - without an external cage. I don't really want to compare the numbers here - but I bet a lot of money was wasted.
Best Answer
This sounds a little worrisome to me, it sounds like a low-end RAID controller. You want a good RAID controller that can keep up with 8 fast HDDs (that's actually not a given). If you have a fair amount of writes to your DB, then you really want a Battery Backup Unit, and to enable battery-protected write caching on the RAID controller.
As for RAID disk layout, there are 2 common schools of thought: dedicate separate arrays to data, log files and tempdb, or pool all the disks into one big RAID 10 and put everything on it.
I would rather not take sides on the RAID volume design, it tends to become a bit of a fact-light discussion. Ideally you should experiment with different storage layouts and benchmark them for your specific workload. My gut feel is that all disks in RAID10 is faster and more robust over multiple workloads.
One last thing: make sure that OS partitions and RAID stripe boundaries are aligned (see here - Windows-centric, but the principle is general). You can do this when you create the partitions.
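On older Windows versions (pre-2008) the default partition offset was not aligned to common stripe sizes; diskpart can set the offset explicitly at creation time. A sketch (the 1024 KB alignment is an example - match it to your stripe size):

```
REM Inside diskpart, after "select disk N":
create partition primary align=1024
```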