ORA-39126: Worker Unexpected Fatal Error in KUPW$WORKER

Error Text: ORA-39126: Worker unexpected fatal error in KUPW$WORKER.MAIN [<context>]

ORA-39126 occurs when a Data Pump worker process crashes unexpectedly during an export (expdp) or import (impdp) operation. The worker process KUPW$WORKER is the internal PL/SQL-based engine that drives object-level processing in Data Pump. When a worker dies with an unhandled exception, Oracle raises ORA-39126 along with contextual information in brackets that identifies what the worker was doing when it failed.

The error terminates the worker but does not necessarily abort the entire Data Pump job — other workers may continue. However, objects being processed by the failed worker will be skipped or partially processed. ORA-39126 always appears alongside other errors that identify the root cause; the bracketed context (e.g., [PARSE], [FETCH], [WRITE]) points to the phase that failed.
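Because the root cause always rides along as additional ORA- errors, it can help to pull every error code and its bracketed context out of the log in one pass. A minimal Python sketch (the regex and sample line format are assumptions based on typical expdp output, not part of Data Pump itself):

```python
import re

# Sketch (not part of Data Pump): extract each ORA- error code, plus the
# bracketed KUPW$WORKER context when present, from Data Pump log text.
ORA_PATTERN = re.compile(r"(ORA-\d{5})(?::.*?KUPW\$WORKER\.\w+\s*\[([^\]]+)\])?")

def extract_errors(log_text):
    """Return (error_code, context) tuples; context is None when absent."""
    results = []
    for line in log_text.splitlines():
        m = ORA_PATTERN.search(line)
        if m:
            results.append((m.group(1), m.group(2)))
    return results

sample = (
    "ORA-39126: Worker unexpected fatal error in KUPW$WORKER.MAIN [TABLE_DATA]\n"
    "ORA-31693: Table data object failed to load/unload\n"
)
print(extract_errors(sample))
# [('ORA-39126', 'TABLE_DATA'), ('ORA-31693', None)]
```

Scanning the whole log this way surfaces the KUPW context and the companion errors together, which is usually enough to pick the next diagnostic step.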

Common Causes of ORA-39126

1. Corrupt or Inconsistent Database Object
  • A table, index, or other object has internal corruption that prevents Data Pump from reading it
  • An object’s metadata (stored in the data dictionary) is inconsistent with its physical structure
  • A partially created or dropped object was left in an inconsistent state

2. Worker Process Ran Out of PGA Memory
  • The worker process ran out of PGA memory while processing a large or complex object
  • PGA_AGGREGATE_LIMIT was hit during a complex LOB or XMLType export
  • SORT or HASH operations during an index-organized table export consumed all available PGA

3. Known Oracle Bug
  • A known Oracle bug in the specific release affects a particular object type
  • A combination of object features (compression + LOB + partitioning) triggers a worker crash
  • Applying the relevant database patch typically resolves the crash

4. Missing Privileges or Broken Dependencies
  • The export user lacks privileges to read a dependent object referenced in the DDL
  • A view definition references an object the exporting user cannot access
  • A database link referenced by an object is invalid and causes the worker to fail

5. Invalid Objects or Corrupt Metadata
  • An object in the export scope is INVALID and its DDL cannot be generated
  • A package or trigger references a table that no longer exists
  • Object statistics or metadata accessed through the export path are corrupted

Find ORA-39126 Details in the Data Pump Log and Trace Files

-- Find recent Data Pump jobs and their status
-- (DBA_DATAPUMP_JOBS has no timestamp columns; check the job log file
--  or V$SESSION_LONGOPS for timing):
SELECT
    owner_name,
    job_name,
    operation,
    job_mode,
    state,
    degree,
    attached_sessions
FROM dba_datapump_jobs
ORDER BY job_name;
-- Detailed messages (including ORA- errors) are written to the job's log
-- file in the DIRECTORY location, not to a dictionary view; search it
-- from the shell:
-- grep -E "ORA-|error" <directory_path>/export.log

Identify the Problem Object From the Trace File

-- Find the Data Pump trace file directory:
SELECT value FROM v$parameter WHERE name = 'diagnostic_dest';
-- Trace files are in: <diagnostic_dest>/diag/rdbms/<db_name>/<instance>/trace/
-- Look for files named: dp*.trc or kupw*.trc created around the failure time
-- Check for invalid objects in the export schema:
SELECT
    owner,
    object_name,
    object_type,
    status,
    last_ddl_time
FROM dba_objects
WHERE owner = UPPER('&schema_name')
  AND status = 'INVALID'
ORDER BY object_type, object_name;
-- Narrow to the object types Data Pump most often trips over:
SELECT
    owner,
    object_name,
    object_type,
    status
FROM dba_objects
WHERE owner = UPPER('&schema_name')
  AND object_type IN ('TABLE', 'INDEX', 'VIEW', 'PACKAGE', 'PROCEDURE', 'FUNCTION')
  AND status = 'INVALID'
ORDER BY object_type;
-- Look for orphaned segments in the export schema (a TABLE or INDEX
-- segment with no matching dictionary entry usually indicates dictionary
-- inconsistency; other segment types are excluded to avoid false positives):
SELECT
    owner,
    segment_name,
    segment_type,
    partition_name
FROM dba_segments
WHERE owner = UPPER('&schema_name')
  AND segment_type IN ('TABLE', 'INDEX')
  AND segment_name NOT IN (
        SELECT table_name FROM dba_tables WHERE owner = UPPER('&schema_name')
        UNION ALL
        SELECT index_name FROM dba_indexes WHERE owner = UPPER('&schema_name')
      );
-- If corruption is suspected, verify with DBMS_SPACE_ADMIN or
-- ANALYZE TABLE ... VALIDATE STRUCTURE.
-- Check data dictionary consistency for the schema:
SELECT
    table_name,
    num_rows,
    last_analyzed,
    status
FROM dba_tables
WHERE owner = UPPER('&schema_name')
  AND status != 'VALID'
ORDER BY table_name;
-- Identify LOB segments and their sizes:
SELECT
    l.owner,
    l.table_name,
    l.column_name,
    l.segment_name,
    s.bytes / 1024 / 1024 AS lob_size_mb
FROM dba_lobs l
JOIN dba_segments s ON l.segment_name = s.segment_name AND l.owner = s.owner
WHERE l.owner = UPPER('&schema_name')
ORDER BY s.bytes DESC;
-- Current PGA usage and limits:
SELECT
    name,
    value / 1024 / 1024 AS value_mb
FROM v$pgastat
WHERE name IN (
    'total PGA inuse',
    'total PGA allocated',
    'maximum PGA allocated',
    'PGA memory freed back to OS'
)
ORDER BY name;
-- PGA parameters:
SELECT name, value
FROM v$parameter
WHERE name IN ('pga_aggregate_target', 'pga_aggregate_limit', 'workarea_size_policy')
ORDER BY name;
-- Sessions consuming large PGA (during active Data Pump job):
SELECT
    s.sid,
    s.serial#,
    s.username,
    s.program,
    p.pga_used_mem / 1024 / 1024 AS pga_used_mb,
    p.pga_alloc_mem / 1024 / 1024 AS pga_alloc_mb
FROM v$session s
JOIN v$process p ON s.paddr = p.addr
WHERE s.username IS NOT NULL
ORDER BY p.pga_alloc_mem DESC
FETCH FIRST 20 ROWS ONLY;
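If you script monitoring around these views, the core check reduces to comparing allocated PGA against pga_aggregate_limit. A hedged Python sketch (the values are assumed to be fetched in MB from the queries above; the 90% threshold is an arbitrary illustration, not an Oracle-documented cutoff):

```python
# Rough headroom check, assuming allocated-PGA and pga_aggregate_limit
# values (in MB) were read from v$pgastat / v$parameter as queried above.
def pga_headroom_ok(allocated_mb, limit_mb, threshold=0.9):
    """False when allocated PGA is close enough to the limit to worry."""
    if limit_mb == 0:  # pga_aggregate_limit = 0 disables the limit entirely
        return True
    return allocated_mb < threshold * limit_mb

print(pga_headroom_ok(3500, 8192))   # plenty of headroom
print(pga_headroom_ok(7800, 8192))   # workers may be about to hit the limit
```

When the check fails during an active Data Pump job, the PGA-related causes above become the prime suspects.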
Terminal window
# Find trace files generated around the time of the failure:
ls -lt $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/*.trc | head -20
# Search for ORA-39126 context in trace files:
grep -lE "ORA-39126|KUPW" $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/*.trc | head -5
# View the relevant trace file:
tail -200 /path/to/relevant_trace.trc | grep -E -A 20 "ORA-39126|fatal"

If the trace or log identifies the failing object, exclude it from the export:

Terminal window
# Run expdp excluding the problematic objects. The quotes in EXCLUDE are
# consumed by the shell, so put the parameters in a parameter file:
# --- exclude.par ---
# DIRECTORY=DATA_PUMP_DIR
# DUMPFILE=export_%U.dmp
# LOGFILE=export.log
# SCHEMAS=SCHEMA_NAME
# EXCLUDE=TABLE:"IN ('PROBLEM_TABLE1', 'PROBLEM_TABLE2')"
expdp system/password PARFILE=exclude.par
# Or exclude specific object types:
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=export_%U.dmp \
SCHEMAS=SCHEMA_NAME \
EXCLUDE=STATISTICS
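Because the EXCLUDE=TABLE:"IN (...)" syntax is fragile on the shell command line, a parameter file sidesteps shell quoting entirely, and generating it can be scripted. A hypothetical helper (schema, dump file, and table names are placeholders; only the EXCLUDE syntax itself comes from Data Pump):

```python
# Hypothetical helper: build PARFILE content for an export that skips
# known-bad tables, avoiding shell quoting of the EXCLUDE clause.
def exclude_table_clause(tables):
    """Data Pump EXCLUDE value for a list of table names."""
    quoted = ", ".join("'{}'".format(t.upper()) for t in tables)
    return 'EXCLUDE=TABLE:"IN ({})"'.format(quoted)

def parfile_lines(schema, dumpfile, tables):
    """Lines to write into a file passed to expdp as PARFILE=exclude.par."""
    return [
        "DIRECTORY=DATA_PUMP_DIR",
        "DUMPFILE={}".format(dumpfile),
        "SCHEMAS={}".format(schema),
        exclude_table_clause(tables),
    ]

print("\n".join(parfile_lines("SCHEMA_NAME", "export_%U.dmp", ["problem_table1"])))
```

Writing the lines to a file and passing PARFILE=exclude.par keeps the quotes intact regardless of the shell in use.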
-- Temporarily increase PGA for Data Pump workers (run as SYSDBA in SQL*Plus):
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE=MEMORY;
ALTER SYSTEM SET pga_aggregate_limit = 8G SCOPE=MEMORY;
-- Or increase sort memory for the specific session (if connecting interactively):
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600; -- 100 MB

Then retry the Data Pump job:

Terminal window
expdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=export.dmp SCHEMAS=SCHEMA_NAME
-- Recompile all invalid objects in the schema:
BEGIN
  DBMS_UTILITY.COMPILE_SCHEMA(
    schema      => 'SCHEMA_NAME',
    compile_all => FALSE  -- FALSE recompiles only INVALID objects
  );
END;
/
-- Or use utlrp.sql for all schemas:
@?/rdbms/admin/utlrp.sql
-- Verify no invalid objects remain:
SELECT object_name, object_type, status
FROM dba_objects
WHERE owner = 'SCHEMA_NAME' AND status = 'INVALID';

5. Export Only Specific Object Types (Divide and Conquer)

Terminal window
# Export metadata only first to identify what fails:
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=metadata_only.dmp \
SCHEMAS=SCHEMA_NAME \
CONTENT=METADATA_ONLY
# Then export data only:
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=data_only.dmp \
SCHEMAS=SCHEMA_NAME \
CONTENT=DATA_ONLY
# Narrow down: export table by table
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=single_table.dmp \
TABLES=SCHEMA_NAME.SPECIFIC_TABLE
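Going table by table is tedious by hand; the divide-and-conquer step can be scripted by emitting one expdp command per small batch of tables, so a crashing worker narrows the suspects quickly. A sketch (credentials, schema, and table names are placeholders, not from the original text):

```python
# Divide-and-conquer sketch: one expdp command per small batch of tables.
def batched_expdp_commands(schema, tables, batch_size=2):
    commands = []
    for i in range(0, len(tables), batch_size):
        batch = tables[i:i + batch_size]
        table_list = ",".join("{}.{}".format(schema, t) for t in batch)
        commands.append(
            "expdp system/password DIRECTORY=DATA_PUMP_DIR "
            "DUMPFILE=batch_{}.dmp TABLES={}".format(i // batch_size + 1, table_list)
        )
    return commands

for cmd in batched_expdp_commands("SCHEMA_NAME", ["T1", "T2", "T3"]):
    print(cmd)
```

The batch whose export raises ORA-39126 contains the problem object; halve the batch size and repeat until a single table remains.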

6. Attach to a Failed Job and Restart or Kill It

Terminal window
# Attach to the failed Data Pump job:
expdp system/password ATTACH=JOB_NAME
# At the Export> prompt:
# STATUS -- Check current status
# CONTINUE_CLIENT -- Resume if paused
# KILL_JOB -- Terminate if unrecoverable
-- Or kill from SQL*Plus if the job is stuck:
SELECT job_name, state FROM dba_datapump_jobs WHERE state != 'NOT RUNNING';
-- Kill the job (use ATTACH to get a handle to the existing job;
-- DBMS_DATAPUMP.OPEN would create a new job instead):
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'JOB_NAME', job_owner => 'SYSTEM');
  DBMS_DATAPUMP.STOP_JOB(
    handle      => h,
    immediate   => 1,
    keep_master => 0
  );
END;
/
Terminal window
# Check current patch level:
$ORACLE_HOME/OPatch/opatch lsinventory | grep -iE "patch|PSU|RU"
# Search My Oracle Support for the specific ORA-39126 context string
# (the text in brackets) to find known bug fixes.
# Common fix: apply the latest Database Release Update (RU).

Prevent ORA-39126 in Future Exports

1. Validate Schema Objects Before Running Data Pump

-- Pre-export validation script:
SELECT
    object_type,
    COUNT(*) AS invalid_count
FROM dba_objects
WHERE owner = 'SCHEMA_NAME'
  AND status = 'INVALID'
GROUP BY object_type;
-- Expect zero rows. Fix any invalid objects before exporting.
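Automating that check as a go/no-go gate before every export is straightforward. A minimal sketch, assuming the (object_type, invalid_count) rows from the query above have been fetched into Python:

```python
# Minimal pre-export gate: proceed only when every invalid count is zero.
def export_is_safe(invalid_counts):
    """invalid_counts: iterable of (object_type, invalid_count) rows."""
    blockers = [(t, n) for t, n in invalid_counts if n > 0]
    for obj_type, n in blockers:
        print("BLOCKED: {} invalid {} object(s) -- recompile first".format(n, obj_type))
    return not blockers

print(export_is_safe([]))             # nothing invalid: safe to export
print(export_is_safe([("VIEW", 3)]))  # reports the blocker and refuses
```

Wiring this into the export job script stops expdp from even starting against a schema that is likely to crash a worker.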
2. Reduce the Parallel Degree

Terminal window
# Reduce parallel degree when workers are crashing:
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=export_%U.dmp \
SCHEMAS=SCHEMA_NAME \
PARALLEL=1 # Single worker — easier to diagnose, no concurrent worker crashes
3. Export Statistics Separately

Terminal window
# Exclude statistics from the main export (avoids stats-related worker crashes):
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=export.dmp \
SCHEMAS=SCHEMA_NAME \
EXCLUDE=STATISTICS
-- Back up statistics separately with DBMS_STATS (run in SQL*Plus;
-- the statistics table must exist before exporting into it):
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stattab => 'STATS_BACKUP');
  DBMS_STATS.EXPORT_SCHEMA_STATS(
    ownname => 'SCHEMA_NAME',
    stattab => 'STATS_BACKUP',
    statid  => 'PRE_EXPORT',
    statown => 'SCHEMA_NAME'
  );
END;
/

4. Test Data Pump on Dev Before Production Exports

Terminal window
# Run a metadata-only export on dev after every schema change:
expdp system/password \
DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=dev_meta_check.dmp \
SCHEMAS=SCHEMA_NAME \
CONTENT=METADATA_ONLY \
LOGFILE=dev_meta_check.log
# Verify no errors in the log before running full production export.

Related Errors

  • ORA-39083 - Object type failed to create (import side)
  • ORA-39166 - Object was not found or could not be exported
  • ORA-31693 - Table data object failed to load/unload
  • ORA-06512 - At line (PL/SQL stack trace)

Quick Workaround — Skip the Problem Object

Terminal window
# Identify the failing object from the Data Pump log, then exclude it.
# Use a parameter file so the shell does not consume the quotes:
# --- skip.par ---
# DIRECTORY=DATA_PUMP_DIR
# DUMPFILE=export_skip.dmp
# SCHEMAS=SCHEMA_NAME
# EXCLUDE=TABLE:"= 'PROBLEM_TABLE'"
expdp system/password PARFILE=skip.par
Terminal window
# Kill and restart:
expdp system/password ATTACH=JOB_NAME
Export> KILL_JOB
# Then re-run with the problematic object excluded.
Terminal window
# After a successful export, verify the dump file contains expected objects.
# Use impdp with SQLFILE to generate DDL from the dump (no actual import):
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=export.dmp SQLFILE=check.sql
# Count objects per type in the generated script, e.g. tables:
grep -c "^CREATE TABLE" check.sql
# Review check.sql for the expected CREATE statements.