Migration question about using archivelog files [message #315062] Fri, 18 April 2008 15:08
Nasiobo
Messages: 10
Registered: August 2007
Location: Atlanta, GA
Junior Member
Oracle 10.2.0.3.0
OS: Solaris 5.9

I'm hoping someone can help or point me in the right direction. We're doing a big data migration into our production server, so it will need to be down for a couple of days. During that time, a few select users will need to work in a "stage" environment. After the migration we need to move all the data they entered or changed in that staging area back into the production database.

So:
Database A is production. It has some data in it.
Database B is an exact copy of Database A.

Both "A" and "B" are same Oracle version and OS.

"A" will be taken offline from all users and will have data loaded into it, lots of data.

"B" will be put online for some users to access.

After two days I need to move the data from "B" to "A", maintaining all the data that was loaded into "A".

Make sense? We are currently working on a custom solution for this, but I'm also hoping there is an easier way to handle it, maybe using archive logs or something.

Could anyone please point me in a direction, or give me any tips or suggestions?

Thanks!
Re: Migration question about using archivelog files [message #315070 is a reply to message #315062] Fri, 18 April 2008 16:57
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Depending upon actual application, it may NOT be possible.
Even if it is possible, it won't be easy unless the application was actually designed to support this type of activity.

Good Luck!
Re: Migration question about using archivelog files [message #315199 is a reply to message #315062] Sat, 19 April 2008 17:55
Nasiobo
Yes, I know it will be difficult and complex, but I was wondering if anyone had suggestions or solutions they have experience with.

I have looked at GoldenGate as a solution, but it is priced around $7k per processor. We have 4 processors on the source and 4 on the target, so that's a bit on the expensive side for two or three days of data.
Re: Migration question about using archivelog files [message #315201 is a reply to message #315062] Sat, 19 April 2008 19:05
BlackSwan
IMO, nobody but you & your co-workers can help because any solution will be application dependent & we (TINW) don't know the details of your application.

In way too many cases, table PKs result from a SEQUENCE.
Let's speculate that your application has at least one table like that, & assume that table will be part of this bulk load while database B gets "manual updates".
Now you have duplicate PK values across both database A & B.
Each database will likely have FKs referencing the duplicate PKs.
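One common way to reduce the sequence-collision risk, sketched here as a hypothetical example (the sequence name appuser.order_seq and the offset are illustrative, not from this thread), is to jump database B's sequences far past any value database A's bulk load can reach before opening B to users:

```sql
-- Hypothetical sketch: push database B's sequence well beyond the range
-- the bulk load into A will consume, so rows created in B get
-- non-colliding keys. Sequence name and offset are illustrative only.
ALTER SEQUENCE appuser.order_seq INCREMENT BY 10000000;
SELECT appuser.order_seq.NEXTVAL FROM dual;  -- consume one value to apply the jump
ALTER SEQUENCE appuser.order_seq INCREMENT BY 1;
```

Note this only addresses new inserts; it does nothing for updates and deletes against pre-existing rows, which is the larger problem described above.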

How do you plan on handling UPDATEs against database B during the "merge" back into database A?
How do you plan on handling DELETEs against database B; let alone CASCADE DELETE constraints?

If data only got inserted & there were NO constraints, it might be solvable.

I am a betting person, I bet you & your co-workers will abandon this approach for one that has a chance to actually work.

Keep in mind that nothing is impossible for the person who does not actually have to implement & test the results.

I suggest that you ask whoever deemed this "solution" workable to provide detailed implementation algorithms & test plans, so you can code up their solution.

Re: Migration question about using archivelog files [message #315276 is a reply to message #315062] Sun, 20 April 2008 14:17
Nasiobo
OK, fair enough. I didn't provide enough details to get a detailed answer, and that's OK. Let's just assume that we solved the cascade problem (no cascade deletes) and also the primary key and sequence issues. Let's just say that all I need are the actual inserts, updates and deletes that Oracle processes, in the order that they are processed. Is this something I could get from LogMiner, CDC (Change Data Capture), or Streams? I'm just not educated enough in these to understand exactly what I can extract from them.
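For what it's worth, a minimal LogMiner sketch for pulling ordered DML out of archived logs looks roughly like this (the log file paths and the APPUSER schema are placeholders; supplemental logging must be enabled on B before the stage window opens for the redo to be usefully replayable):

```sql
-- Run as a privileged user with access to database B's archived logs.
-- Register the archived logs covering the stage window:
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/arch/arch_1_1234.arc',
                          Options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/arch/arch_1_1235.arc',
                          Options     => DBMS_LOGMNR.ADDFILE);
  -- Use the online catalog as the dictionary (the objects must still exist):
  DBMS_LOGMNR.START_LOGMNR(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- DML in SCN order, with the reconstructed SQL:
SELECT scn, timestamp, operation, seg_owner, table_name, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner  = 'APPUSER'
AND    operation IN ('INSERT', 'UPDATE', 'DELETE')
ORDER  BY scn;

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

One caveat: SQL_REDO identifies rows by ROWID unless supplemental logging of key columns was enabled beforehand (ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;), and ROWIDs from B are meaningless on A, so that setting has to be in place before the window starts.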