getting bad Performance during use [message #123918] Wed, 15 June 2005 09:50
hartmut.stumpf@stratos.ag
Messages: 2
Registered: June 2005
Location: Germany
Junior Member
Hello,

we use a quite small database with only 160 MB of data (Oracle 8.1.7 on Windows 2000).
The application that uses the database is server-based and shares up to 32 cursors among all connected clients.
We use the Microsoft ODBC driver (version 2.5) to connect to the database.

After we start the database instance, performance is good and we reach a cache hit ratio close to 94%. The SGA can use up to 800 MB of RAM and the transactions don't need any disk I/O.

Unfortunately, system performance degrades within only a few hours, so the application needs up to 5 seconds to show a simple query grid (1 second after an Oracle restart).

If I run the same statement in SQL*Plus I get a really fast response (normally about 0.1 sec).

After restarting the Oracle instance the performance is good again.

Restarting the application server has no effect on performance, so we think the database must be the problem.


Is it possible that the database responds significantly slower to a client using ODBC? Or could the instance have problems with long-lived connections?

Can anyone help?

Thanks a lot for your help.

Hartmut

PS: For more details, see the pfile listing below:

db_files = 1024
control_files = ("C:\oradata\SAPERION\control01.ctl", "D:\ORADATA\SAPERION\control02.ctl", "D:\ORADATA\SAPERION\control03.ctl")
open_cursors = 300
max_enabled_roles = 30
db_file_multiblock_read_count = 16
db_block_buffers = 69664
shared_pool_size = 188743680
large_pool_size = 614400
java_pool_size = 62914560
log_checkpoint_interval = 10000
log_checkpoint_timeout = 1800
processes = 150
parallel_max_servers = 5
log_buffer = 32768
max_dump_file_size = 10240
global_names = true
oracle_trace_collection_name = ""
background_dump_dest = C:\oracle\admin\SAPERION\bdump
user_dump_dest = C:\oracle\admin\SAPERION\udump
db_block_size = 8192
remote_login_passwordfile = exclusive
os_authent_prefix = ""
distributed_transactions = 10
mts_dispatchers =
compatible = 8.1.0
aq_tm_processes = 1
sort_area_size = 1048576
sort_area_retained_size = 1048576
JOB_QUEUE_PROCESSES = 4
JOB_QUEUE_INTERVAL = 10
UTL_FILE_DIR = C:\oracle\ora81\wf\res



Re: getting bad Performance during use [message #123924 is a reply to message #123918] Wed, 15 June 2005 10:34
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
So immediately after a restart, everything runs fine. But after a while, things start to slow down? And it gets worse and worse the longer it goes? Is this correct?

My initial thought is that cursors are not being closed properly to free up their resources. If you have any ref cursors (result sets) make sure they get closed after use.
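
A quick way to see whether that is happening (just a sketch, assuming you can query the v$ views as a suitably privileged user) is to watch the open-cursor count per session from SQL*Plus:

select s.sid, s.username, st.value as open_cursors
  from v$session s, v$sesstat st, v$statname sn
 where st.sid = s.sid
   and st.statistic# = sn.statistic#
   and sn.name = 'opened cursors current'
 order by st.value desc;

If a few sessions keep climbing towards your open_cursors limit of 300 while the instance stays up, cursors are probably not being closed.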

It might also be that transactions are not being ended when they could be (committed or rolled back) so the transaction is just hanging out there longer than needed.
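
You can check for transactions that have been left open with something like this (again only a sketch against the v$ views):

select s.sid, s.username, t.start_time
  from v$session s, v$transaction t
 where s.saddr = t.ses_addr;

Any row that sticks around for hours points to a transaction that was never committed or rolled back.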

Anything in the alert log to indicate a specific problem resulting from a specific action or point in time in your daily activities?
Re: getting bad Performance during use [message #123940 is a reply to message #123924] Wed, 15 June 2005 11:28
hartmut.stumpf@stratos.ag
Messages: 2
Registered: June 2005
Location: Germany
Junior Member
Hello smartin,

thanks for your fast reply.

You understood the problem correctly. That is exactly what happens.
It seems to reach its worst level after one or two days and then stays there (it doesn't take 20 sec even after a week or so).

About the cursors:
Actually there are 10 open cursors across 25 connected sessions, but there are no open transactions. I've never found locked rows in the tables we use, so I don't think there are any uncommitted updates or inserts. The number of open cursors has stayed at 10 over the last 2 hours (see the attached screenshot).
The application will reuse these cursors for subsequent requests.

I'm quite sure that the cursors are closed if I stop the service of our application. Unfortunately, the performance does not get better after that. Only restarting the Oracle instance helps (Oracle is installed on another server).

Any ideas?

  • Attachment: screen.png
    (Size: 87.79KB, Downloaded 873 times)
Re: getting bad Performance during use [message #123948 is a reply to message #123918] Wed, 15 June 2005 11:59
Frank
Messages: 7901
Registered: March 2000
Senior Member
Are you familiar with statspack? It might be interesting to create a 10-minute snapshot right after database startup and another 10-minute snapshot once things start to slow down.
If you are not familiar, take a look here
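
For reference, the basic sequence is roughly this (assuming statspack has already been installed with spcreate.sql and you run it as the PERFSTAT user):

-- right after startup
execute statspack.snap;
-- wait about 10 minutes, then
execute statspack.snap;
-- repeat the pair later when things have slowed down, then build a report
-- between two snapshot ids with
@?/rdbms/admin/spreport.sql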

hth
Re: getting bad Performance during use [message #123971 is a reply to message #123918] Wed, 15 June 2005 15:08
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
Frank makes an excellent suggestion.

Also, you can use things like sql_trace and tkprof to closely examine a statement. Perhaps examine it once when things are running fine right after startup, then examine the same statement again later during the slowdown and look for differences.
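
A minimal way to do that from SQL*Plus (a sketch; the exact trace file name under your user_dump_dest will vary):

alter session set timed_statistics = true;
alter session set sql_trace = true;
-- run the slow statement here
alter session set sql_trace = false;

Then format the trace file from the OS prompt, e.g.

tkprof C:\oracle\admin\SAPERION\udump\<your_trace_file>.trc C:\temp\slow.txt sys=no

and compare the parse/execute/fetch times between the "fast" and "slow" runs.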

Also, how big is your shared pool and are you using bind variables? You mention a very large SGA size given the total size of your data and number of users (although I don't know what those users are doing).

I wonder if the following is happening: you are executing a bunch of different queries, made even "more different" by not using bind variables, and you have a large place for Oracle to store those parsed queries, which gets fuller and fuller as the instance stays up, so the amount of time Oracle spends looking for an already parsed query gets longer and longer.

That is pure speculation though; I don't know whether Oracle performs an exhaustive search or times out in its search. But if that were the case, I would think you could solve it by reducing the SGA (specifically the shared pool) as a short-term fix and living with the cost of the hard parses. Then, as a long-term fix, rewrite your code to use bind variables.
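
One way to test that theory (a rough check, assuming you can query v$sqlarea and can afford the brief hit of flushing the pool) is:

-- how many distinct parsed statements are in the shared pool?
select count(*) from v$sqlarea;

-- statements executed only once are a typical sign of missing bind variables
select count(*) from v$sqlarea where executions = 1;

-- short-term workaround only; it forces everything to be re-parsed
alter system flush shared_pool;

If flushing the shared pool brings response times back to the post-restart level, that would support the parsing/bind-variable theory.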

Also, your SGA sounds big regardless (again, depending on your usage details) compared with your total database size. You may want to decrease it and dedicate the extra memory to the PGA instead.
Re: getting bad Performance during use [message #123975 is a reply to message #123918] Wed, 15 June 2005 15:24
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
Following up on my last post, after looking more closely at your SGA init params (but still guessing as to the nature of your users and queries), I'd suggest a couple of changes (IN DEV FIRST ALWAYS):

You've got around 550 MB of RAM for database buffers (69664 buffers x 8 KB blocks is roughly 545 MB), with only a 160 MB database. I'd cut that by at least 2/3.

Unless you are using Java (as in java stored procedures) I'd cut your java_pool_size down to close to 0, maybe 512K.

I'd also check the two bitmap index memory parameters and cut them to 0 if you are not using bitmap indexes.

What is your hash area size? Your sort area size is pretty low compared to the number of users you have and the amount of RAM you have just freed up in the steps above. Maybe make both the sort area and the hash area 16 MB each, instead of the 1 MB the sort area is now.
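
Put together, a rough starting point for the init.ora along those lines might look like this (the numbers are guesses that would need to be tested in DEV first, and the Java/bitmap entries assume you use neither Java stored procedures nor bitmap indexes):

# ~192 MB buffer cache instead of the current 69664 * 8 KB = ~545 MB
db_block_buffers = 24576
# no Java stored procedures assumed
java_pool_size = 524288
# no bitmap indexes assumed
create_bitmap_area_size = 0
bitmap_merge_area_size = 0
# larger per-session work areas
sort_area_size = 16777216
sort_area_retained_size = 16777216
hash_area_size = 16777216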

All of these are just rough starting point guesses that would need to be tested and then fine tuned using whatever monitors, dictionary queries, and tools like statspack are available.

All of this is in addition to my earlier speculation on shared pool size.

I'd love some feedback from others, and look forward to you posting back with what you find in statspack and an update on your server's performance after trying these or other suggestions.