Last week, one of our clients' production databases was upgraded from 9i to 10g. Since then they had been facing a lot of performance problems. They could not pinpoint the problem to any particular place, so the problem seemed to be "overall". I did not have access to the production box; all I had was an AWR report generated during the period of performance degradation. When I dug into the report, I found that instance performance was generally OK. The problem was with most of the application queries: too many physical reads.
I remembered reading an article by Jonathan Lewis on his blog about performance degradation after a 10g upgrade. I quickly went there, did a search, and found this URL: http://jonathanlewis.wordpress.com/2007/02/02/10g-upgrade/, which excellently explains the change in the behaviour of the dbms_stats package. In short:
“If you didn’t specify a method_opt in your scripts under 9i you were not generating histograms; but under 10g you will be collecting histograms on any columns that Oracle thinks might be suitable candidates.”
I then checked the AWR report to see if there was anything recorded regarding DBMS_STATS, and indeed I found that they were using the default method_opt:
OwnName          => 'ADMIN',
TabName          => 'T2838',
Estimate_Percent => dbms_stats.auto_sample_size,
Granularity      => 'DEFAULT',
Degree           => 4)
Which means they were generating histograms, whereas on 9i the same script would not have generated them. This could be having a major impact on the execution plans generated on 10g. I replied to them with this explanation, along with Jonathan Lewis' URL.
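To confirm this on a 10g system, you can check the instance-wide method_opt default and look for columns that have quietly picked up histograms. A quick sketch (the table name T2838 comes from the snippet above; substitute your own owner/table):

```sql
-- Show the current default for method_opt
-- (a fresh 10g install returns 'FOR ALL COLUMNS SIZE AUTO')
SELECT dbms_stats.get_param('METHOD_OPT') FROM dual;

-- List columns on the table that now carry histograms
SELECT column_name, num_buckets, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T2838'
AND    histogram != 'NONE';
```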
Later, they reset METHOD_OPT back to the 9i behaviour:
exec dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
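Note that set_param only affects future statistics gathering; any histograms already collected stay in place until the statistics are re-gathered (or the column statistics are deleted). A sketch of the re-gather, reusing the owner and table from the snippet above and passing method_opt explicitly so no histograms are created:

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'ADMIN',
    tabname          => 'T2838',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',
    degree           => 4);
END;
/
```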
I am waiting to hear from them on whether there is any improvement in performance.
Note: I later found that Richard Foote has also written an excellent article on this: http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/