Question: OE 11.6 Database Vs Server

Hi All,
As part of our migration project we have the option to choose either NFS, NetApp iSCSI, or San Pure Array. Our DB size is 2 TB. Which one of these would you recommend from a performance standpoint?

Regards,
Saravanakumar B
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Don't put your database on an NFS share. That's asking for trouble and I don't know if it's even supported anymore.

Don't put your database on a NetApp if you care at all about performance.

I don't know what a "San Pure Array" is or who makes it so I can't comment on that.

For pure performance I'd go with direct-attached enterprise SSDs in RAID 10. Some people have reported very good performance with FusionIO flash storage and with Violin flash storage. At 2 TB you need to consider the time it takes to do maintenance tasks along with application performance; it's pretty large for an OpenEdge database and I think pretty much anyone running one that large is on a SAN. If possible keep the database physically segregated from application code, log files, OS files, etc. If possible, keep the database segregated from AI files and backups. Allocate and keep free enough space to create dump files from the existing database and build a new database without having to remove the existing one.
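On the segregation point, the cleanest way to express it is in the structure (.st) file, so that data, BI and AI extents land on separate file systems from the start. A very rough sketch - the mount points, area names, extent counts and sizes below are invented, not a recommendation - might look like this:

    # Rough sketch only: /u1, /u2 and /u3 are assumed to be separate file systems.
    cat > mydb.st <<'EOF'
    b /u2/bi/mydb.b1
    d "Schema Area":6,32;1 /u1/data/mydb.d1
    d "Data":7,64;8 /u1/data/mydb_7.d1 f 1048576
    d "Data":7,64;8 /u1/data/mydb_7.d2
    a /u3/ai/mydb.a1 f 524288
    a /u3/ai/mydb.a2 f 524288
    a /u3/ai/mydb.a3 f 524288
    EOF

    prostrct create mydb mydb.st -blocksize 8192   # build the physical structure
    procopy $DLC/empty8 mydb                       # copy in an empty 8 KB-block database

Backups would then go to yet another file system so they never compete with BI or AI writes.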

I've heard of a 2 TB database where probkup took more than 24 hours to run, so the DBA had no choice but to back up via SAN snapshot. Whatever you select, and it sounds like you have only one viable choice, make sure you test and benchmark not only your application but also any expected maintenance and recovery tasks: dump & load, idxbuild, backup and restore, AI roll forward, OE Replication roll forward, etc. If you're on a platform with more than one file system choice (so not Windows, hopefully ;)), test with the file system/configuration you intend to use in production. Hardware that is good on paper can run slowly with the wrong configuration.
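For the timing part, something as crude as the sketch below is usually enough to get first wall-clock numbers. The database name, paths and AI extent are placeholders, and all of it should be run against a copy of production, never the live database:

    # Crude timing harness - all names and paths are placeholders.
    DB=/dbtest/mydb
    RESTORE=/dbtest/restore/mydb

    time probkup $DB /backup/mydb.bak                         # offline full backup
    time prorest $RESTORE /backup/mydb.bak                    # restore into a scratch area
    time rfutil  $RESTORE -C roll forward -a /aitest/mydb.a1  # apply one archived AI extent
    time proutil $DB -C idxbuild all                          # full index rebuild (db offline)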
 

RealHeavyDude

Well-Known Member
As Rob already pointed out, the problem with any SAN is that random i/o is just slow compared to local disks. While you can still achieve decent overall application performance ( query response times and transaction throughput ) where you can benefit from a huge buffer pool, most DBA tasks are just darn slow.

This is what pains me the most in our setup:
Solaris 64-bit, 64 GB memory, 4 CPUs, ZFS file systems on SAN:
  • Backup or restore of the nearly 1 TB database takes about 12 hours.
  • Index rebuild takes some 20 hours.
  • Dump & load of the whole database takes up to 48 hours.
That means you need to carefully plan and performance-test any DBA task so that you are able to meet your downtime windows.

While system admins love VMs ( Solaris zones in our case ) and SAN storage, IMHO this is a compromise between availability, ease of administration and the price tag. As there is no focus on performance, you get what you get.

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
Internal SSD is incredibly fast and much, much cheaper than a SAN. It is hard to imagine why people still put databases on SANs.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
This is what pains me the most in our setup:
Solaris 64-bit, 64 GB memory, 4 CPUs, ZFS file systems on SAN:
  • Backup or restore of the nearly 1 TB database takes about 12 hours.
  • Index rebuild takes some 20 hours.
  • Dump & load of the whole database takes up to 48 hours.
No doubt that is painful. I have a customer on SAN storage who backs up two databases, about 265 GB in all, in about 1.25 hours via probkup. So 12 hours seems a very long time for 1 TB.
 
Internal SSD is incredibly fast and much, much cheaper than a SAN. It is hard to imagine why people still put databases on SANs.

@TomBascom - What happens in case of a system crash if we go with internal SSDs or disks? Basically, how do we handle recovery from a system failure, or building a new one, or attaching to some standby system?
 

RealHeavyDude

Well-Known Member
The bank has two data centers located roughly 50 miles apart.

Instead of letting us run on bare metal with directly attached, incredibly fast SSDs and using OpenEdge Replication for redundancy, we are forced to run on the bank's standard solution for redundant systems. That is Solaris Zones ( with no guaranteed resources like CPU and memory, but we just happen to be the only system on the host - which might be subject to change in the future ) and ZFS file systems residing on an EMC "budget" SAN which itself is mirrored to the other data center.

Detaching the mirror on the SAN would double our throughput ...

Therefore, instead of just streaming AI blocks over the wire, the system actually mirrors every bit & byte that gets written on any SAN file system to the other data center. Plus, the standby system is really passive - meaning the file systems are only attached to the active system.

Every time I complain about bad performance I get the following answer from the storage team: the SAN provides roughly 8 ms response time for sequential i/o, therefore the storage is fine.

What SAN operators ( and vendors ) don't tell you is that you share that throughput with everyone else using the SAN, and that you barely have any sequential i/o when running a database. And even where you could have sequential i/o - for example when writing a backup - then, well, my dear friend, ZFS comes into play and its storage optimization characteristics will ensure that there is hardly any sequential i/o left.
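For what it's worth - and this is generic ZFS, not anything specific to our SAN - the properties that drive that behaviour can at least be inspected:

    # Generic ZFS check; the pool/filesystem name is a placeholder.
    zfs get recordsize,compression,primarycache tank/db
    # Matching recordsize to the database block size (e.g. 8K) is the usual advice,
    # but it only affects newly written blocks and should be tested first:
    # zfs set recordsize=8K tank/db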

I think this is one of the most hostile environments in which you might be forced to run any RDBMS - not just Progress. If it weren't for Progress we would never stand a chance of processing 30 to 40 GB worth of transaction load every night.

IMHO, there is a lot of potential to transform these systems to save a lot of money and get a lot more performance. But what do I know ...

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
Regarding recovery... you have a serious system. You need a serious high-availability plan. *And* a serious DR plan.

After-imaging and OE Replication with redundant, non-local servers are what you should be focused on. Swapping failed parts then becomes a much less stressful event.
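To make that concrete - the names below are illustrative only - once AI extents exist and a backup has been taken, after-imaging is switched on and archived extents get applied to the standby copy, roughly like this:

    # Illustrative only; database and extent names are placeholders.
    rfutil proddb -C aimage begin          # enable after-imaging (AI extents added and db backed up first)

    # ... ship each full AI extent to the standby host, then on the standby:
    rfutil standbydb -C roll forward -a /ai_in/proddb.a1   # apply extents in sequence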
 