BULKLOAD Across Servers

KMoody

Member
Progress: 11.7.5
Server OS: SUSE Linux Enterprise Server 15 SP1

Let's say we have an empty database on Server 1 and an FD file on Server 2. Is it possible to issue a BULKLOAD command to the empty database on Server 1 to load the FD file on Server 2? (It may not be fast or practical, but is it possible?) If not, are there other ways to load bulk loader description files?
 

TomBascom

Curmudgeon
Why on Earth would you ever use bulk load?

But, to answer your question, no.

At least not if you are thinking in terms of "proutil dbname -H hostname -S port ..." sorts of commands. Proutil does not support that.

The bulkload command needs to *run* in the context of the server being loaded.

You _could_ use a shared drive or an NFS mount or something of that ilk to dump on one server and load on another.

You could also use "ssh" to start the command from one server and run it on the other.
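
For example, something along these lines - every host name, path, and database name below is just a placeholder:

# Copy the description file and its data file to the server that owns the database.
scp /data/export/customer.fd /data/export/customer.d user@server1:/data/import/

# proutil must run on Server 1, so drive it over ssh.
ssh user@server1 "proutil /db/mydb -C bulkload /data/import/customer.fd"

# Bulk load leaves the indexes inactive, so rebuild them afterwards.
ssh user@server1 "proutil /db/mydb -C idxbuild all"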
 

RealHeavyDude

Well-Known Member
You _could_ use a shared drive or an NFS mount or something of that ilk to dump on one server and load on another.

I like rsync to copy (binary) dump files from one machine to another - in my case, from a "very" old Solaris SPARC system to a RHEL 7 x86 system. Works really great, and it turned out to be much faster than using a shared network file system or NFS mount to dump to and load from.
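
For example (hosts and paths are placeholders; -a preserves permissions and timestamps, -z compresses over the wire, --partial lets an interrupted transfer resume):

rsync -az --partial /data/dump/ user@rhel7host:/data/load/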
 

KMoody

Member
Maybe it would be best to start with the problem description or business objective rather than the proposed solution.

What are you trying to do?

We have two servers. The first is the live server, which our employees use. The second is the dev server, which we use to develop, test, and debug programs. We sometimes need to dump database information from the live server to the dev server.

I like rsync to copy (binary) dump files from one machine to another - in my case, from a "very" old Solaris SPARC system to a RHEL 7 x86 system. Works really great, and it turned out to be much faster than using a shared network file system or NFS mount to dump to and load from.

Yep, in the end, we decided to use rsync to copy the FD file.

Why on Earth would you ever use bulk load?

What's wrong with bulk load, aside from being slower than a binary load?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
We have two servers. The first is the live server, which our employees use. The second is the dev server, which we use to develop, test, and debug programs. We sometimes need to dump database information from the live server to the dev server.
Okay. You said "SUSE Linux Enterprise Server 15 SP1". Do both servers use this OS? If so, couldn't you just get prod data into dev by database backup and restore?
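
If so, it can be as simple as this - all names below are placeholders:

# On the live server: take an online backup, no downtime needed.
probkup online /db/proddb /backup/proddb.bck

# Copy the backup file across, then on the dev server:
prorest /db/devdb /backup/proddb.bck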

If they are not on the same OS, a binary dump/copy/load/idxbuild solution can be fully scripted.
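
A rough skeleton of such a script (the table list, paths, and hosts are placeholders):

DB=/db/proddb

# Binary dump each table; proutil writes one .bd file per table.
for TABLE in customer order order-line; do
    proutil $DB -C dump $TABLE /data/dump
done

# Copy the dump files to the dev server.
rsync -az /data/dump/ user@devserver:/data/load/

# Load and rebuild indexes on the dev side.
ssh user@devserver '
    for F in /data/load/*.bd; do
        proutil /db/devdb -C load $F
    done
    proutil /db/devdb -C idxbuild all
'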
 

TomBascom

Curmudgeon
Slower, uses more disk space. What more do you need?

I wasn’t saying anything is necessarily *wrong*, just wondering *why* you would deliberately use it? In 30+ years I have never found a reason to use it. So I’m curious - what benefit are you getting from using it?
 

KMoody

Member
Slower, uses more disk space. What more do you need?

I wasn’t saying anything is necessarily *wrong*, just wondering *why* you would deliberately use it? In 30+ years I have never found a reason to use it. So I’m curious - what benefit are you getting from using it?
Honestly, we don't really benefit from it. :confused: This is just how our company has done it for years, before I even came on board. I wasn't aware there was a better way to dump and load an entire database at once.
 

TomBascom

Curmudgeon
As Rob pointed out -- if you are transferring the entire database and the data is going from Linux to Linux the simplest solution is to just backup and restore.

I could _almost_ see a reason to use bulk load if there was some automated filtering or masking of the data happening between source and target. In theory that might be easier with text files rather than binary files.

But in this day and age, with data privacy and security considerations, the idea of wholesale moving production data to development is kind of questionable to start with.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
But in this day and age, with data privacy and security considerations, the idea of wholesale moving production data to development is kind of questionable to start with.
Good point. Though with the right processes in place, this approach can still be viable for use cases where you want a full-size data set, like a production-support environment. What my clients do is restore a prod backup to a temporary location, run a utility to anonymize the data in place, then back that up and restore it to the needed non-prod location.
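
In outline (the anonymization utility is site-specific, and every name here is a placeholder):

# Restore the prod backup into a scratch database.
prorest /db/scratchdb /backup/proddb.bck

# Run your in-place masking/anonymization utility against it.
anonymize_db /db/scratchdb   # placeholder for the site-specific tool

# Back up the sanitized copy and restore it wherever it's needed.
probkup /db/scratchdb /backup/scratchdb-anon.bck
prorest /db/suppdb /backup/scratchdb-anon.bck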

For other use cases, e.g. development, unit-test automation, etc., it may be more appropriate to create a "starting point" database containing just the reference tables required to make the application work, then write programs to create arbitrary numbers of entities (transactions, history records, whatever data the client typically creates). These programs can be re-used to create production-like databases of arbitrary size.
 