regarding deleting 80 million records (11.2.0.4)
regarding deleting 80 million records [message #634897] Tue, 17 March 2015 17:19
varunvir
Messages: 389
Registered: November 2007
Senior Member
Experts,
I want to delete 80 million records from a database table in one go.
Please suggest what I could do to avoid the "snapshot too old" (ORA-01555) error.
Can I commit after every 5 million records?

Thanks,

Varun
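
For illustration, the batched delete the question proposes might look like the following PL/SQL sketch; the table name and purge condition are placeholders, and (as the replies below argue) this pattern is risky, not recommended:

```sql
-- Hypothetical sketch of "delete with a commit every 5 million rows".
-- big_table and <purge condition> are placeholders.
begin
  loop
    delete from big_table
     where <purge condition>
       and rownum <= 5000000;   -- cap each batch at 5 million rows
    exit when sql%rowcount = 0; -- stop when nothing left to purge
    commit;                     -- commit between batches
  end loop;
  commit;
end;
/
```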
Re: regarding deleting 80 million records [message #634898 is a reply to message #634897] Tue, 17 March 2015 17:30
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
varunvir wrote on Tue, 17 March 2015 15:19
Experts,
I want to delete 80 million records from a database table in one go.
Please suggest what I could do to avoid the "snapshot too old" (ORA-01555) error.
Can I commit after every 5 million records?

Thanks,

Varun


If you COMMIT every 5 million records, you will INCREASE the probability of ORA-01555.

>I want to delete 80 million records in database at once.
How many rows in the table before you start the DELETE?

What problem are you actually trying to solve?
How will you & I know when the correct answer has been posted here?

Re: regarding deleting 80 million records [message #634901 is a reply to message #634898] Tue, 17 March 2015 17:39
varunvir
Messages: 389
Registered: November 2007
Senior Member
Hi BlackSwan,

1) Wouldn't frequent commits clear the undo tablespace and thereby avoid ORA-01555?
2) Number of rows in the table before I start the delete: 84,949,637.
3) I am not trying to solve the problem but to avoid it, since we have to purge that many records soon.

Thanks,

Varun


[EDITED by LF: formatted number]

[Updated on: Wed, 18 March 2015 01:09] by Moderator


Re: regarding deleting 80 million records [message #634905 is a reply to message #634901] Tue, 17 March 2015 18:12
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>Wouldn't frequent commits clear the undo tablespace and thereby avoid ORA-01555?
Frequent COMMITs increase the chances that ORA-01555 gets thrown.

A COMMIT means that both REDO and UNDO blocks can be reused and overwritten.
When that happens, the long-running statement (the DELETE itself, or a concurrent SELECT) can no longer build a read-consistent view of the table as it existed prior to the DELETE, and Oracle throws ORA-01555.
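
The failure mode described here is the classic fetch-across-commit pattern. A minimal PL/SQL sketch (table name and purge condition are placeholders) of how committing inside an open cursor loop invites ORA-01555:

```sql
-- Hypothetical illustration of fetch-across-commit. The cursor needs a
-- read-consistent view of big_table as of its OPEN, but each COMMIT
-- allows the undo that view depends on to be overwritten. With modest
-- undo retention, this loop can fail with ORA-01555 partway through.
declare
  cursor c is
    select rowid rid from big_table where <purge condition>;
begin
  for r in c loop
    delete from big_table where rowid = r.rid;
    if mod(c%rowcount, 5000000) = 0 then
      commit;  -- releases undo the still-open cursor may need
    end if;
  end loop;
  commit;
end;
/
```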
Re: regarding deleting 80 million records [message #634917 is a reply to message #634905] Wed, 18 March 2015 01:11
Littlefoot
Messages: 21806
Registered: June 2005
Location: Croatia, Europe
Senior Member
Account Moderator
As you want to delete 80 million records out of 85 million, perhaps you could use CTAS (CREATE TABLE AS SELECT) to save the records you want to keep, truncate the original table, and restore the saved rows. Something like:
create table t_backup as
select * from big_table
where <condition that keeps 5 million records>;

truncate table big_table;

insert into big_table select * from t_backup;
Re: regarding deleting 80 million records [message #634922 is a reply to message #634917] Wed, 18 March 2015 01:17
Michel Cadot
Messages: 68624
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator

... in addition, you can use a PARALLEL hint on the SELECT, and APPEND and PARALLEL hints on the INSERT, to improve performance if your hardware and concurrent load allow it.
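
Applied to Littlefoot's sketch, a hinted version might look like the following; the degree of parallelism (8) is an arbitrary example, and parallel DML must also be enabled at the session level:

```sql
-- Hedged sketch: CTAS + INSERT with PARALLEL and APPEND hints.
-- Table names, the WHERE placeholder, and the degree (8) are examples;
-- pick a degree your hardware and concurrent load can sustain.
alter session enable parallel dml;

create table t_backup parallel 8 as
select /*+ parallel(big_table, 8) */ *
  from big_table
 where <condition that keeps 5 million records>;

truncate table big_table;

insert /*+ append parallel(big_table, 8) */ into big_table
select /*+ parallel(t_backup, 8) */ * from t_backup;

commit;  -- required before querying big_table after a direct-path insert
```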

Re: regarding deleting 80 million records [message #634992 is a reply to message #634922] Wed, 18 March 2015 13:04
varunvir
Messages: 389
Registered: November 2007
Senior Member
Thanks, experts, for explaining this in such a detailed manner.

-Varun