Don't put your database on an NFS share. That's asking for trouble and I don't know if it's even supported anymore.
Don't put your database on a NetApp if you care at all about performance.
I don't know what a "San Pure Array" is or who makes it so I can't comment on that.
For pure performance I'd go with direct-attached enterprise SSDs in RAID 10. Some people have reported very good results with FusionIO and with Violin flash storage.

At 2 TB you need to weigh the time maintenance tasks take alongside application performance; that's pretty large for an OpenEdge database, and I think pretty much everyone running one that size is on a SAN.

If possible, keep the database physically segregated from application code, log files, OS files, etc. Likewise, keep the database segregated from AI files and backups. And allocate and keep free enough space to dump the existing database and build a new one without having to remove the existing one first.
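That kind of segregation is usually expressed in the structure (.st) file. Here's a minimal sketch of what I mean, assuming Unix-style mount points; the paths (/bi, /ai, /data), area names, numbers, and extent sizes are all illustrative, so check the exact syntax against your OE version's docs:

```
b /bi f 1048576
b /bi
a /ai f 524288
a /ai
d "Schema Area":6,32;1 /data
d "Data":7,64;512 /data f 1024000
d "Data":7,64;512 /data
d "Index":8,64;8 /data f 512000
d "Index":8,64;8 /data
```

The point is that BI extents, AI extents, and data extents each live on their own filesystem/LUN, and each area ends with a variable extent so it can grow. You'd build the database from this with something like prostrct create mydb mydb.st.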
I've heard of a 2 TB database where probkup took more than 24 hours to run, so the DBA had no choice but to back up via SAN snapshot. Whatever you select (and it sounds like you have only one viable choice), make sure you test and benchmark not only your application but also any expected maintenance and recovery tasks: dump & load, idxbuild, backup and restore, AI roll forward, OE Replication roll forward, etc. If you're on a platform with more than one file system choice (so not Windows, hopefully), test with the file system and configuration you intend to use in production. Hardware that looks good on paper can run slowly with the wrong configuration.
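To put actual numbers on those maintenance tasks, I'd time each one on the candidate storage before go-live. A rough sketch of the checklist I mean (this needs a real OpenEdge install; the db name, table name, and paths are illustrative, and you should verify the flags against your version's documentation):

```
# Backup and restore
time probkup online mydb /backup/mydb.pbk
time prorest mydb2 /backup/mydb.pbk

# Binary dump & load of a representative large table
time proutil mydb -C dump customer /dump
time proutil mydb2 -C load /dump/customer.bd

# Index rebuild
time proutil mydb2 -C idxbuild all

# AI roll forward against the restored copy
time rfutil mydb2 -C roll forward -a /ai/mydb.a1
```

If any of these takes longer than your maintenance window or your recovery-time target allows, you've learned that before it matters, not after.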