You've connected to the remote database (via dbuser/dbpsw@rdsdblink), but your DB link is created in your local database. At the moment you're trying to run the import on the remote DB, with a network link also pointing to the remote DB, and that network link is trying to use a DB link that doesn't exist on that remote DB.
The tnsnames.ora entry and the DB link are completely separate things.
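To make the distinction concrete, here's a minimal sketch; the host, port and service name are placeholders, and the two names only match here by coincidence. The tnsnames.ora entry lives on the client machine and just tells the network layer how to reach a server:

rdsdblink =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = remote-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = remote_service))
  )

The DB link, on the other hand, is a schema object that only exists in the database you create it in, and happens to use that alias to find the remote one:

-- run in the LOCAL database
CREATE DATABASE LINK rdsdblink
  CONNECT TO dbuser IDENTIFIED BY dbpsw
  USING 'rdsdblink';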
You need to connect normally, locally - probably using whichever credentials you used to create the DB link. The network_link parameter will then make your local database session, started by impdp, act against the remote server, so your local directory can be used.
Except... it doesn't work like that. The remote database identified by the network_link can be used as the source of the import, without a dump file at all; but it can't be the target for an import from a file.
From the impdp documentation:
The NETWORK_LINK parameter initiates an import via a database link. This means that the system to which the impdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data directly to the database on the connected instance. There are no dump files involved.
If you really wanted to go down this route, I think you would need a link from remote to local, and to run the import against the remote (as you are now), but pulling directly from your schema - not from a previous export. You'd still need access to a DIRECTORY object on the remote server, as logs etc. would be written there, even if you weren't copying your dump file over. Even with nologfile I believe it will error if you don't specify a directory or don't have permissions on it.
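A rough sketch of what that route would look like - the link name, schema and directory object here are placeholders, not things from your setup:

impdp dbuser/dbpsw@rdsdblink \
  network_link=link_back_to_local \
  schemas=your_schema \
  directory=remote_dir \
  logfile=net_import.log

where link_back_to_local would be a DB link created in the remote database pointing at your local one, and remote_dir a directory object defined on the remote server. Notice there is no dumpfile parameter at all; the data travels over the link.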
The article you linked to in your previous question said the same thing:
For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation.
I also can't figure out how to use DBMS_DATAPUMP.GET_STATUS when the job doesn't even get defined, let alone run.
The job is defined - you have a handle by the time you call add_file. So you can add an exception handler to call get_status and output the results (after doing set serveroutput on, of course):
...
EXCEPTION
  WHEN OTHERS THEN
    DECLARE
      job_state varchar2(4000);
      status    ku$_Status;
    BEGIN
      -- ask the Data Pump job for any errors it has accumulated
      DBMS_DATAPUMP.GET_STATUS(
        handle    => jobhandle,
        mask      => dbms_datapump.KU$_STATUS_JOB_ERROR,
        timeout   => null,
        job_state => job_state,
        status    => status);
      -- print each error line from the returned status object
      FOR i IN 1..status.error.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(status.error(i).errorNumber
          || ': ' || status.error(i).logText);
      END LOOP;
    END;
    -- re-raise so the original failure isn't swallowed
    RAISE;
END;
/
(Yes, I know when others is bad, even when re-raising, but this is a temporary thing...)
Given what you've shown and the call that's erroring, I expect that will show something like:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/home/achim/temp/0003759/T0987654321/Q01DED3D.dmp"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Presumably when this is run as a callable statement it uses a different path for the directory object. Or it doesn't include the create/drop at all, which you wouldn't normally do at run-time. The 12c version either doesn't have the directory defined, or, if it's on a different server, the path being used isn't valid there. Or, perhaps, it's RAC and the path is only valid on one node, which might make the error intermittent.
In what you've shown though, the /home/achim part looks suspect. The Oracle process owner - usually oracle - has to be able to read and write files in the operating system directory, and unless you've opened your home directory up for the whole world to see, you're probably getting a failure because oracle cannot create the dump file where you asked it to.
If that is the case then change the directory path to point somewhere you are sure oracle does have the necessary permissions; if you have access to that account, try to create a file manually from the command line to verify. If not, you'll just need to check the permissions on the operating system directory and its hierarchy carefully.
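As a quick sanity check, something like this as the oracle OS user (the path is just the one from the error above):

ls -ld /home/achim/temp/0003759/T0987654321
touch /home/achim/temp/0003759/T0987654321/probe.tmp &&
  rm /home/achim/temp/0003759/T0987654321/probe.tmp

If the ls complains that the directory doesn't exist, or the touch fails, you've found your problem.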
You might see a different error of course, but it looks like it has to be something related to the directory or file, not the other arguments.
Also, for removing orphan jobs, see My Oracle Support document 336014.1.
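The gist of that note - do check it yourself before dropping anything - is to find the stuck job in dba_datapump_jobs and drop its master table, which has the same name as the job:

SELECT owner_name, job_name, state
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';

-- owner and job name here are illustrative; use what the query reports
DROP TABLE dbuser."SYS_IMPORT_TABLE_01" PURGE;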
Best Answer
From the 12c documentation:

The default Data Pump directory object, DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are exporting or importing.
You will need to define your own directory object in your PDB, which your user (system here) has read/write privileges against.
It can be the same operating system directory that DATA_PUMP_DIR points to; you just need a separate directory object. But I've used the path you said you'd prefer, from a comment on a previous question.
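A sketch of what that looks like - the object name is made up here, and the path stands in for the one you preferred, which isn't shown:

-- run in the PDB, as a suitably privileged user
CREATE OR REPLACE DIRECTORY my_dmp_dir AS '/path/you/preferred';
GRANT READ, WRITE ON DIRECTORY my_dmp_dir TO system;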
Then the import is modified to use the new directory object - presumably (with my_dmp_dir being the illustrative name from the sketch above) along the lines of:
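directory=my_dmp_dir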