There is no simple formula for determining the storage cost of auditing, for a couple of reasons. First, the choices you make when creating audit policies affect how much data is stored for a given audit event. Second, the volume of audit data you write isn't a function of something you can pre-calculate, such as schema or database size; it is a function of the activity on the objects and events being audited. You can get a rough idea by looking at your current run-time metrics, but of course past activity doesn't necessarily predict future activity.
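As a rough illustration of why activity, not schema size, drives the cost, you can sketch a back-of-envelope estimate from your own run-time metrics. The figures below (events per day, average audit record size, retention period) are placeholders for illustration, not OpenEdge defaults; substitute numbers observed on your system:

```shell
# Hypothetical figures -- replace with values measured on your own system.
EVENTS_PER_DAY=500000      # audited events per day, from run-time metrics
BYTES_PER_EVENT=400        # assumed average size of one audit record
RETENTION_DAYS=30          # days audit data stays in the application database

# Daily growth and total held before archiving (integer arithmetic, in MB)
DAILY_MB=$(( EVENTS_PER_DAY * BYTES_PER_EVENT / 1024 / 1024 ))
TOTAL_MB=$(( DAILY_MB * RETENTION_DAYS ))
echo "~${DAILY_MB} MB/day, ~${TOTAL_MB} MB held over ${RETENTION_DAYS} days"
```

Doubling the activity doubles the estimate, which is exactly why a quiet test system tells you little about production volumes.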
If you decide to use OpenEdge auditing, planning is key. That planning needs to include the creation of a data-retention policy and the use of an audit archive database. The data-retention policy will dictate, among other things, how long the audit data remains in your application database(s) before you archive it and how long archived audit data will remain in the audit archive database until deletion.
I recommend that you run proutil auditarchive on your application databases at least daily and deactivate non-essential audit indexes in those databases (see Knowledge Article for details). Do your audit data reporting only against the audit archive database, where all indexes are active. These practices will minimize the performance and storage impact of enabling auditing in production.
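A nightly job along these lines can drive the archive-and-load cycle. The database paths, archive directory, file name, and credentials below are placeholders, and option spellings can vary by release, so confirm them against the proutil reference documentation for your version of OpenEdge:

```shell
# Nightly audit archiving -- a sketch, not a drop-in script.
# All paths, database names, and credentials are hypothetical.

# 1. Extract audit records from the application database and write
#    them to an archive file, removing them from the source database.
proutil /db/appdb -C auditarchive -directory /audit/archive \
        -userid auditadmin -password XXXX

# 2. Load that archive file into the dedicated audit archive database,
#    where all audit indexes stay active for reporting.
proutil /db/auditdb -C auditload /audit/archive/appdb.abd \
        -userid auditadmin -password XXXX
```

Scheduled through cron or a similar scheduler during a quiet period, this keeps the application database lean while reporting runs against the fully indexed archive database.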
One other piece of advice: start small. Audit one thing, or a few, and measure the effect on performance and storage before expanding. If you're not sure whether you want to audit a table or event, don't. There is no point in capturing data that no one will read.