Oracle standard edition one limitations for huge database [message #516252]
Fri, 15 July 2011 13:51
Registered: July 2008
Good day friends,
In my current project we are trying to implement a new feature and are considering using BLOBs to store 200 MB metadata files. We want to build the system so that it can store up to 200,000 of these BLOBs, so we estimate the database could grow to about 40 TB (200,000 × 200 MB). Isn't that too much for Standard Edition One? Our current biggest customer's database is only 20 GB.
Right now our hands are tied to Oracle Standard Edition One due to budget and other logistics.
Currently we keep all the tables in DP_TABLE_TABLESPACE and all the indexes in DP_INDEX_TABLESPACE. For BLOB storage we are planning to put the new table with the BLOB column in DP_TABLE_TABLESPACE, while the BLOBs themselves will be stored in a new tablespace, DP_LOB_TABLESPACE.
We are planning a new strategy that works at the tablespace level. We will limit the size of each LOB tablespace and maintain a series of them: DP_LOB_TABLESPACE_1 ... DP_LOB_TABLESPACE_n. Let's assume we cap each tablespace at 50 GB.
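A rough sketch of the DDL this layout implies (all object names, file paths, and sizes here are illustrative, not our actual schema):

```sql
-- First LOB tablespace in the series, capped at 50 GB (illustrative path/sizes)
CREATE TABLESPACE dp_lob_tablespace_1
  DATAFILE '/u02/oradata/DP/dp_lob_01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE 50G;

-- Table rows live in DP_TABLE_TABLESPACE; the BLOB segment is directed
-- to the LOB tablespace via the LOB storage clause.
CREATE TABLE dp_metadata_file (
  file_id    NUMBER        PRIMARY KEY,
  file_name  VARCHAR2(255),
  content    BLOB,
  is_deleted CHAR(1) DEFAULT 'N' CHECK (is_deleted IN ('N', 'Y'))
) TABLESPACE dp_table_tablespace
  LOB (content) STORE AS (TABLESPACE dp_lob_tablespace_1);
```

One caveat: a non-partitioned table's LOB segment lives in a single tablespace, and Standard Edition One has no partitioning option, so redirecting new BLOB writes to DP_LOB_TABLESPACE_2 would likely mean creating a new table per tablespace (or moving the entire LOB segment).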
Initially the database will have only one LOB tablespace, DP_LOB_TABLESPACE_1. Once it reaches the 50 GB cap, we will make DP_LOB_TABLESPACE_1 read-only and create a new LOB tablespace, DP_LOB_TABLESPACE_2, for future BLOB writes. After making DP_LOB_TABLESPACE_1 read-only, a full backup of the database (DP_TABLE_TABLESPACE, DP_INDEX_TABLESPACE, DP_LOB_TABLESPACE_2) will be taken, plus a separate backup of DP_LOB_TABLESPACE_1. From then on DP_LOB_TABLESPACE_1 will be removed from our daily backup routine. We will keep adding a new DP_LOB_TABLESPACE whenever the current one reaches 50 GB.
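In SQL*Plus/RMAN terms the switchover step might look like this (tablespace name is the hypothetical one above; verify the exclusion behaviour against your RMAN version):

```sql
-- SQL*Plus: freeze the full tablespace
ALTER TABLESPACE dp_lob_tablespace_1 READ ONLY;

-- RMAN: take one final backup of the frozen tablespace,
-- then drop it from future whole-database backups
BACKUP TABLESPACE dp_lob_tablespace_1;
CONFIGURE EXCLUDE FOR TABLESPACE dp_lob_tablespace_1;

-- Subsequent daily backups skip the excluded read-only tablespace
BACKUP DATABASE;
```

The final backup of each read-only tablespace would need to be retained indefinitely, since a full restore of the database will still require it.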
I think this strategy will reduce our backup time, backup size, and restore time. Correct me if I am wrong.
The above strategy also requires the following application design: whenever a BLOB located in DP_LOB_TABLESPACE_1 (or any other read-only tablespace) is deleted, its record is merely marked as deleted in the BLOB table in DP_TABLE_TABLESPACE; "marked as deleted" will be an additional column in that table. All our current workflows require only BLOB writes and deletes, never updates.
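Since the row itself sits in the read-write DP_TABLE_TABLESPACE, flagging it does not touch the read-only LOB tablespace. A soft delete against the hypothetical table sketched earlier would be just:

```sql
-- Flag the record; the BLOB bytes remain untouched in the
-- read-only tablespace until the next maintenance window.
UPDATE dp_metadata_file
   SET is_deleted = 'Y'
 WHERE file_id = :id;
COMMIT;
```

Application queries would then filter on is_deleted = 'N'.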
Monthly or bimonthly, a scheduled maintenance script will run against the read-only LOB tablespaces to physically delete the objects that are marked as deleted, defragment the tablespace, and back it up again.
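The maintenance pass could be sketched like this (illustrative names again; the defragment step here uses a LOB segment rebuild, whose exact syntax, cost, and side effects depend on the LOB type and database version):

```sql
-- Open the tablespace for the maintenance window
ALTER TABLESPACE dp_lob_tablespace_1 READ WRITE;

-- Physically remove the soft-deleted rows and their BLOBs
DELETE FROM dp_metadata_file WHERE is_deleted = 'Y';
COMMIT;

-- Rebuild (and thereby defragment) the LOB segment in place;
-- this needs temporary free space, and any indexes on the table
-- are invalidated by the move and must be rebuilt afterwards
ALTER TABLE dp_metadata_file MOVE LOB (content)
  STORE AS (TABLESPACE dp_lob_tablespace_1);

-- Freeze it again and refresh the standalone backup
ALTER TABLESPACE dp_lob_tablespace_1 READ ONLY;
```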
But I am not very clear on the following things, as I am not a DBA.
1) Can I follow the above strategy and successfully manage a 40 TB database on Oracle Standard Edition One, or does this strategy only hold up to a certain limit?
2) I see the following risks:
• Hitting the limits of Oracle Standard Edition One.
• Not many companies in the industry run a 40 TB database on Oracle Standard Edition One.
• No support for advanced features that would make the job quicker or easier, such as table partitioning and block change tracking for backups.
• Increasing database backup and restore times as the database grows.
• Hardware limits imposed by Oracle Standard Edition One: it only supports servers with up to two processor sockets.
• Heavy database administration workload.
What are the other risks?
3) What other challenges could we face in the stability, supportability, and maintenance of the database?
Any input is highly appreciated. Thanks in advance.