Fixed vs Variable

ForEachInvoiceDelete

Active Member
Anyone got any metrics for Fixed/Variable extent performance on different storage media under different loads?

I always thought it was a good idea back in the days of spinning disks to have fixed extents, but I have been wondering if it actually matters nowadays?

If not, I'll collect some data and put it up here for reference later.
 

Cringer

ProgressTalk.com Moderator
Staff member
I'm pretty certain Paul Koufalis has done the numbers. And with a decently configured disk system it makes no difference.
In practice, in my experience, fixed is still better, because not many folks have a decently configured disk system. People would rather listen to the snake oil of high-performance SAN vendors than listen to the guys who actually have to work with the disk in the end.
 

TomBascom

Curmudgeon
From a performance perspective it simply does not matter.

Even in practice. Even on craptastic SANs.

We often have "spirited discussions" on this topic at PUG talks, Mike Furgal is the big advocate of fixed extents.

Anyhow - our side of the argument goes like this:

Back in the dark ages the difference was a lot more significant. A couple of things have changed over the years:

1) Type 2 storage areas expand a cluster at a time. Growth events are much less frequent and the IO impact is greatly reduced.

2) Progress has improved the way that they add space by using better OS calls (so much so that they had to add the -zExtendSyncIO parameter so that people doing disk benchmarks could get the old slow behavior back!)

3) Disk technology has improved greatly -- even without considering the impact of SSDs -- rotating rust disks (and even SANs) are much faster than they used to be.

Take some numbers: if your database is growing 1GB per day that would have been 65,536 extend operations in the old days (they used to expand extents 16K at a time). Today, with a type 2 area, 8k blocks and 512 blocks per cluster that will be just 256 extend operations. In the old days if that all happened during the 8 "working hours" of the day you would be looking at roughly 2 extends per second. Today you would see one every 2 or 3 minutes.

1GB per week works out to an extend operation every 10 minutes or so.

1GB per month works out to 1.5 extends per hour.

(Feel free to check my arithmetic - I've always had a weakness with simple math... but the big picture is correct.)
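
If you want to check it yourself, here is the same math as a quick awk script (my assumptions, not gospel: an 8 hour working day, a 5 day week and 22 working days in a month):

awk 'BEGIN {
  gb_kb = 1024 * 1024                # 1GB of growth, expressed in KB
  old   = gb_kb / 16                 # old behaviour: extents grew 16K at a time
  type2 = gb_kb / (8 * 512)          # type 2: 8K blocks * 512 blocks/cluster = 4MB clusters
  printf "old-style extends per day: %d (about %.1f per second over 8 hours)\n", old, old / (8 * 3600)
  printf "type 2 extends per day:    %d (roughly one every %.0f minutes)\n", type2, (8 * 60) / type2
  printf "1GB/week:  one extend about every %.0f minutes\n", (5 * 8 * 60) / type2
  printf "1GB/month: about %.1f extends per hour\n", type2 / (22 * 8)
}'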

The only time that you can still measure a difference is when doing the "load" phase of d&l (and we get around that by loading into a voided copy of a previously loaded database, so the first dry run is slow but subsequent runs are fast).

So, in a nutshell, the benefits of fixed extents have almost entirely vanished.

Almost all of our customers run all of their extents as variable extents. It is much simpler to manage and has no perceptible performance disadvantage. (One of those customers happens to have a 600GB variable extent and 4,000 users. I'd like to break that one up just to make it easier to work with, but getting downtime is a problem.)

In our customer base almost everyone we see who has started with fixed extents soon stops staying on top of adding extents. They almost always have a mix of odd-sized extents from "oops" events where they forgot to add one in time, or they have just given up and allowed everything to flow into the variables. So, from observation, we think that it is simply too much trouble for most people. They don't actually stay on top of it and they never notice a problem when they forget. We don't see any value in making people feel bad about that.

If you are setting up extents for someone else then it is basically a certainty that they will not stay on top of it.

Mike's strongest argument is that there is a hard limit of 1TB on the extent size. If you hit that limit an unrecoverable error will occur and your db will have to go through restore and roll forward. There are probably fewer than a dozen sites that need to worry about that. (And ProTop has an alert for that which fires long before it becomes an issue.)

Mike's other good argument is that it is helpful to have "reasonable" extent sizes if you ever need to run scans or perform surgery with dbrpr. (This is why I add initial fixed extents for the larger areas when dumping & loading a really big db.) The definition of "reasonable" varies over time and with the capabilities of the hardware. 8GB is actually pretty small these days - 16GB or 32GB extent sizes are not unusual and I've been using 64GB extents at sites with very large databases.

The paid version of ProTop with monitoring and alerting supports alerts on fixed extents that will help you stay on top of it if you'd like. There is even a "script" alert type that can create the .st file and potentially even do the "prostrct add" automatically. (Personally, I prefer not to do it 100% automatically and instead queue it for review prior to actually adding the new extent.)
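
For what it's worth, the manual version of that is tiny. The .st file (call it add.st) is one line per new extent -- the area name, number, records per block, blocks per cluster and path below are placeholders, copy the real ones from "prostrct list" output for your db, and the size is in KB so 16777216 = 16GB:

d "Order Data":8,64;512 /db/prod/orderdata_8.d4 f 16777216

and then, with the database offline:

prostrct add /db/prod/mydb add.st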

Ultimately it is up to you -- I recommend "all variable" but if you prefer to have "all fixed" it is easy to set it up that way. If you do set up fixed extents I will not be at all surprised when, 6 months from now, it turns out that your variable extents are in use and that nobody has noticed or done anything about it.
 

Cringer

ProgressTalk.com Moderator
Staff member
(and we get around that by loading into a voided copy of a previously loaded database, so the first dry run is slow but subsequent runs are fast).
By voided do you mean you just delete all the records in the DB?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
By voided do you mean you just delete all the records in the DB?
It has the same effect as deleting all the data, but you don't have to write any code and it's very quick to do (which doing it in the language would not be). The end result is that you should have no extend operations in your second and subsequent dry runs of your load, so all of your I/O bandwidth goes to doing data and BI writes, which is what you want. Once you've finished dry runs and you're ready to do your production load, you use your large empty DB (with schema) from the dry runs as the starting point.
 

Cringer

ProgressTalk.com Moderator
Staff member
So essentially,
procopy empty8
Load DF
Backup.
Do dry runs.
Prorest over dry run with backup?
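
i.e. something along these lines? (db name and paths invented, and the .df load could be via the Data Dictionary or a batch load)

procopy $DLC/empty8 bigdb
# load the schema .df into bigdb
probkup bigdb bigdb_schema.bak
# do the dry-run load(s) -- the extents grow to their full size here
prorest bigdb bigdb_schema.bak
# ^ this is the bit I'm asking about: does restoring the schema-only backup over the
#   grown copy leave the extents at their full size, ready for the next run?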
 