Question: Recommend File System for RHEL?

LarryD

Active Member
We are in the process of assisting a customer with migrating to a new server, and I'm looking for any recommendations for file systems that work best with OE databases.

Existing Server:

RHEL 6.8
OE 10.2B08
Databases on SSDs, RAID 10
32 GB of memory

New server:

RHEL 7.x (I'm assuming 7.2)
OE 11.6 (or 11.7 depending on availability)
Databases will be on SSDs, RAID 10
32 GB (or more) of memory

Any and all recommendations / advice welcome.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I'm going through a similar process myself right now with a customer. If you're deploying anytime soon, then I think your target will be 11.6.3, unless a later service pack drops soon. At this point it looks like 11.7 will ship in Q1 2017.

I typically use ext3 on RHEL, with the noatime mount option on the db partition(s). I have also tried ext4 but I haven't done or seen any side-by-side benchmarks. Keep the AI extents logically, and if possible physically, segregated from the rest of the database. So if you lose the DB volume, you have backups and AI. If you lose the AI volume, you still have the DB; neither is catastrophic.
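
For illustration, here's a minimal /etc/fstab sketch of that layout; the device names and mount points are hypothetical, and noatime just stops reads from generating access-time metadata writes:

```
# Hypothetical layout: DB and AI archive on separate volumes, both noatime.
/dev/mapper/vg_db-lv_db   /db         ext3  defaults,noatime  0 2
/dev/mapper/vg_ai-lv_ai   /aiarchive  ext3  defaults,noatime  0 2
```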

Have a separate volume for scratch space with a good chunk of free space; I would say at least 5x the size of your DBs. That space can be your target location for local backups (until they are shipped off-site), dumps, idxbuild temp files, etc. You may also need this space to completely rebuild the DB. Last year I had a client with database corruption and we had to rebuild the DB in a new location, preserving the old DB to be able to extract data from it. So we had old DB, dump files, new DB, idxbuild scratch files, post-load backup; the space gets chewed up pretty quickly.
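
As a rough sanity check against that 5x guideline, something along these lines works; /db and /scratch are hypothetical paths:

```
#!/bin/sh
# Compare 5x the DB footprint against free space on the scratch volume.
DB_BYTES=$(du -sb /db | awk '{print $1}')
NEED_GB=$((DB_BYTES * 5 / 1024 / 1024 / 1024))
HAVE_GB=$(df -BG /scratch | awk 'NR==2 {gsub("G","",$4); print $4}')
echo "Need ~${NEED_GB} GB of scratch space; ${HAVE_GB} GB available."
```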

I'd want to buy several spare disks to be able to swap one in immediately if there's a failure.

How large are these databases? Will 32 GB be enough RAM for effective caching? The best way to optimize I/O is to prevent it from happening in the first place, via -B/-B2/-Bt/-Bp/-omsize/-mmax etc.
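
To make the -B arithmetic concrete: it's specified in database blocks, so with an 8 KB block size a -B of 2,000,000 pins roughly 16 GB. A sketch of a broker startup along those lines follows; the values are placeholders, not tuning advice, and -Bt/-Bp/-mmax are client-side parameters set separately:

```
# Hypothetical broker startup: ~16 GB primary buffer pool at 8 KB blocks,
# plus a ~2 GB alternate buffer pool (-B2) for hot tables and indexes.
proserve /db/proddb -B 2000000 -B2 262144 -spin 10000
```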
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Other stuff:
  • write backups to a different array (and a different controller, if you can) from the database, so you aren't splitting your I/O bandwidth between reads and writes; better yet, do your backups on the DR box if possible;
  • keep server-side client temp-file I/O and application logging away from the database volume, both for performance and to avoid quickly filling the DB partition with a runaway client log (it happens... :();
  • if using the AI file management daemon, archive the AI extents locally, not across a network share; create a cron job on the source side to push the files to DR (a rough sketch follows below).
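
A minimal sketch of that cron entry, assuming a hypothetical local archive directory /aiarchive and a DR host reachable as drhost over ssh:

```
# Hypothetical crontab entry: every 5 minutes, push newly archived AI
# extents to the DR box; rsync skips files that already made it across.
*/5 * * * * rsync -a --ignore-existing /aiarchive/ drhost:/aiarchive/ >> /var/log/ai-ship.log 2>&1
```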
 

LarryD

Active Member
Thanks Rob! I had already found and included that kBase article in our recommendation document.

As for all of your other recommendations... I've actually paid attention over the years to all of the expert and non-expert advice, but reminders are always good. :)

Our existing server has pretty much all of what you recommend:

- DBs are about 200 GB
- BI and AI are on a physically separate disk from the DBs
- All user data/files/logs, etc. are on other non-SSD physical disks (2 TB); will probably be more on the new server
- Additional 16 TB file server (for various uses)
- Backups are performed locally, then immediately moved to a separate file system and from there off-site (and yes, we do test them on a regular basis!)
- The DB partition on the SSDs is currently ext3, and all other partitions (including BI/AI) are ext4

We may go to 64 GB or more (up to the customer), but even utilizing -B2, the existing 32 GB performs very well (and I have to say that SSDs have been a massive speed improvement for the DB over the olden days).

I was mainly wondering if any file system has better performance and reliability since we are starting with a new box.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I was mainly wondering if any file system has better performance and reliability since we are starting with a new box.
Good question, and this is the time to change if you're going to.

Progress used to benchmark such things, but I think it's been a while since they've done so. It would be interesting to see the results of a database filesystem performance test for ext3 vs. ext4 vs. ZFS vs. Btrfs vs. XFS (the new default in RHEL 7). It would also be interesting to see a comparison test of NTFS vs. ReFS.
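
For anyone tempted to run a rough comparison themselves, a starting point might be fio with an 8 KB random read/write mix to loosely approximate database block I/O; the mount point, sizes, and mix below are placeholders:

```
# Hypothetical fio run against the filesystem under test, mounted at
# /mnt/fstest: 8 KB random I/O, 70/30 read/write, O_DIRECT, 2 minutes.
fio --name=dbsim --directory=/mnt/fstest --size=4G \
    --rw=randrw --rwmixread=70 --bs=8k --direct=1 \
    --ioengine=libaio --iodepth=16 --numjobs=4 \
    --runtime=120 --time_based --group_reporting
```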

This kind of testing is not trivial though in terms of cost, and we all have other fish to fry.
 

LarryD

Active Member
Agreed wholeheartedly. TBH, I don't have the time or permission to do it on the customer's dime.

At one time in the distant past, if I'm not mistaken, one of the incarnations of the 'bunker crew' did some benchmarking of the then-existing Linux file systems, but alas, they've gone on to other pursuits and interests.
 

TheMadDBA

Active Member
I did a fair amount of testing with ext3 and ext4 on Linux (Amazon Linux, the Red Hat-derived branch). From an OE perspective, I didn't see any real performance benefit that would justify the perceived "risk" of using ext4. Of course, YMMV. I doubt you would run into any actual issues with ext4, though, as it is quite widely used.
 