Jonathan Lewis

Just another Oracle weblog

ORA-29857

Mon, 2022-06-20 09:30

This is another little “case study” on solving a problem. It’s based on a question on the MOSC DBA Forum (needs an account) about trying to drop a tablespace and seeing the error “ORA-29857: domain indexes and/or secondary objects exist in the tablespace” when (apparently) there are no domain indexes or secondary objects in the tablespace according to the queries that have been used to try and locate the problem objects.

Not being completely familiar with all the possibilities that might be relevant to that particular error, but having a vague memory of seeing an oddity once before, I thought I’d do a quick check on a couple of possible causes.

  • Case 1, which I assumed ought to raise the error because the tablespace held some objects that met the requirements for the error: I created a table in a tablespace called users and, for the domain indexes, a couple of Context (Oracle Text) indexes in a tablespace called test_8k_assm. Then I tried to drop the tablespace holding the context indexes. Here’s the relevant cut-n-paste (19.11.0.0):
SQL> drop tablespace test_8k_assm including contents and datafiles;
drop tablespace test_8k_assm including contents and datafiles
*
ERROR at line 1:
ORA-29857: domain indexes and/or secondary objects exist in the tablespace

  • Case 2, which I thought would drop the tablespace because the model didn’t meet the requirement of the error message: I swapped the locations of the table and its indexes, creating the table in the tablespace called test_8k_assm and the Context indexes in the tablespace called users, then I tried to drop the tablespace holding only the table. Here’s the relevant cut-n-paste (19.11.0.0):
SQL> drop tablespace test_8k_assm including contents and datafiles;
drop tablespace test_8k_assm including contents and datafiles
*
ERROR at line 1:
ORA-29857: domain indexes and/or secondary objects exist in the tablespace
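
If you want to reproduce the tests, here’s a sketch of the kind of setup involved – the table and index names match the output further down, but the column definitions and the storage preference are illustrative guesses; it’s the storage preference that directs the physical pieces of a Context index into a chosen tablespace:

create table script_handles (
        name            varchar2(60),
        handle          varchar2(60)
)
tablespace users
;

begin
        ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
        ctx_ddl.set_attribute('my_storage', 'I_TABLE_CLAUSE', 'tablespace test_8k_assm');
        ctx_ddl.set_attribute('my_storage', 'K_TABLE_CLAUSE', 'tablespace test_8k_assm');
        ctx_ddl.set_attribute('my_storage', 'N_TABLE_CLAUSE', 'tablespace test_8k_assm');
        ctx_ddl.set_attribute('my_storage', 'I_INDEX_CLAUSE', 'tablespace test_8k_assm');
end;
/

create index sh_ix_name on script_handles(name)
indextype is ctxsys.context
parameters ('storage my_storage')
;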

So, the error message is misleading and, possibly, this might explain why the queries the OP had posted weren’t reporting any guilty objects. Whatever the error message says, the domain indexes that you need to deal with may not be in the tablespace you want to drop.

I also have a question-mark over “secondary objects” – I don’t know what objects (other than the bits and pieces that make up domain indexes) are deemed to be secondary.

Investigating

It’s at this point that the purpose of the blog note starts to appear – I don’t know the answer to the OP’s question and I’ve discovered a gap in my knowledge, so I’m willing to spend a little time extending my knowledge and trying to give a little help to the OP.

We can start with the view dba_secondary_objects (based on secobj$ from script dcore.bsq) which tells us that there are two possible sources of secondary objects – index types and statistics types:

SQL> select dbms_metadata.get_ddl('VIEW','DBA_SECONDARY_OBJECTS') view_text from dual;

-- with a little cosmetic tidying:

VIEW_TEXT
-------------------------------------------------------------------------------------------
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."DBA_SECONDARY_OBJECTS" ("INDEX_OWNER", "INDEX_NAME", "SECONDARY_OBJECT_OWNER", "SECONDARY_OBJECT_NAME", "SECONDARY_OBJDATA_TYPE")
AS
select u.name, o.name, u1.name, o1.name, 
       decode(s.spare1, 0, 'FROM INDEXTYPE',1, 'FROM STATISTICS TYPE')
from   sys.user$ u, sys.obj$ o, sys.user$ u1, sys.obj$ o1, sys.secobj$ s
where  s.obj# = o.obj# and o.owner# = u.user# and
       s.secobj# = o1.obj#  and  o1.owner# = u1.user#;

With this view definition I’d be tempted to do a little drilling down, with queries that return increasing amounts of data, and then pick subsets to expand to higher precision, e.g.:

select  secondary_objdata_type, count(*)
from    dba_secondary_objects
group by
        secondary_objdata_type
/

Are there any secondary objects that aren’t domain indexes? If there are only a few then query the view for just those rows to find out what they are and who owns them.

select  *
from    dba_secondary_objects
where   secondary_objdata_type = 'FROM STATISTICS TYPE'
/

In fact, if the total number of secondary objects was small I might report the details about all of them and try to track them down; or, if there was only one schema using that tablespace, I might restrict my queries to the subset of secondary objects owned by that schema:

select  distinct index_owner, index_name
from    dba_secondary_objects
where   secondary_object_owner = 'TEST_USER'
;

The OP had a rather larger task than my little model – there were 14,499 segments in the tablespace, and a query for (segment_owner, segment_type, count(*)) returned 332 rows – so they had taken a similar approach to mine to try and minimise the amount of work needed, starting with the following queries (which I’ve enhanced slightly and given a cosmetic makeover):

prompt  ==================================
prompt  Domain indexes (in the tablespace)
prompt  ==================================

select  owner, index_name, table_owner, table_name, tablespace_name
from    dba_indexes 
where   index_type = 'DOMAIN'
-- and  tablespace_name = 'TEST_8K_ASSM'        -- error: domain indexes don't have tablespaces
;

prompt  ==========================================
prompt  Domain indexes on tables in the tablespace
prompt  ==========================================

select  t.table_name, t.tablespace_name, i.index_name, i.tablespace_name
from    dba_indexes i, dba_tables t
where   t.table_name = i.table_name
and     t.owner = i.owner
and     i.index_type = 'DOMAIN'
and     t.tablespace_name = 'TEST_8K_ASSM'
;

prompt  ===================================================
prompt  Secondary objects that are tables in the tablespace
prompt  ===================================================

select  index_owner, index_name, secondary_object_owner, secondary_object_name, secondary_objdata_type
from    dba_secondary_objects
where   secondary_object_name in (
                select table_name 
                from dba_tables 
                where tablespace_name = 'TEST_8K_ASSM'
        )
;

prompt  =====================================================
prompt  Indexes in the tablespace that have secondary objects
prompt  =====================================================

select  index_owner, index_name, secondary_object_owner, secondary_object_name, secondary_objdata_type
from    dba_secondary_objects
where   index_name in (
                select index_name 
                from dba_indexes 
                where tablespace_name = 'TEST_8K_ASSM'
        )
;

The first query had an error – domain indexes do not report a tablespace_name in dba_indexes, so this query inevitably returned no rows.

The second query would report any domain indexes on tables that were in the given tablespace – which would be useful for the simple case where everything was created in the user’s default tablespace.

The third query reports any tables in the given tablespace that are the physical tables instantiating the “logical” domain index – which is particularly important for my example of the indexes being in the given tablespace when the table itself was in some other tablespace.

The final query is trying to report domain indexes that have corresponding physical objects in the given tablespace – but this isn’t going to help (at least with my Context indexes) because the index_name in dba_secondary_objects will be a name that appears in dba_indexes as index_type = 'DOMAIN' and won’t have a value stored in the tablespace_name column.
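
One improvement worth considering – and this is only a sketch – is to go via dba_segments instead, since every physical piece of a domain index that occupies space has a segment and therefore a tablespace:

select  distinct
        so.index_owner, so.index_name
from
        dba_secondary_objects   so,
        dba_segments            s
where
        s.owner           = so.secondary_object_owner
and     s.segment_name    = so.secondary_object_name
and     s.tablespace_name = 'TEST_8K_ASSM'
/

Even this would miss the indexes and LOB segments belonging to the secondary tables, of course, because their segment names don’t appear in dba_secondary_objects.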

Here’s the output I get (preceded by a full report of the view dba_secondary_objects in my otherwise empty little database) when I had the table in test_8k_assm and the indexes in tablespace users.

SQL> select * from dba_secondary_objects;

INDEX_OWNER     INDEX_NAME           SECONDARY_OBJECT_OWNER           SECONDARY_OBJECT_NAME            SECONDARY_OBJDATA_TY
--------------- -------------------- -------------------------------- -------------------------------- --------------------
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$I                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$K                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$N                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$U                  FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$I                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$K                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$N                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$U                FROM INDEXTYPE

8 rows selected.

==================================
Domain indexes (in the tablespace)
==================================

OWNER           INDEX_NAME           TABLE_OWNER     TABLE_NAME                TABLESPACE_NAME
--------------- -------------------- --------------- ------------------------- ------------------------------
TEST_USER       SH_IX_NAME           TEST_USER       SCRIPT_HANDLES
TEST_USER       SH_IX_HANDLE         TEST_USER       SCRIPT_HANDLES

2 rows selected.

==========================================
Domain indexes on tables in the tablespace
==========================================

TABLE_NAME                TABLESPACE_NAME                INDEX_NAME           TABLESPACE_NAME
------------------------- ------------------------------ -------------------- ------------------------------
SCRIPT_HANDLES            TEST_8K_ASSM                   SH_IX_NAME
SCRIPT_HANDLES            TEST_8K_ASSM                   SH_IX_HANDLE

2 rows selected.

===================================================
Secondary objects that are tables in the tablespace
===================================================

no rows selected

=====================================================
Indexes in the tablespace that have secondary objects
=====================================================

no rows selected

As you can see, I have two Context indexes, each represented by 4 secondary tables (and that’s not the complete story because those tables have their own indexes, a couple of them have LOB columns, and one of the secondary tables is actually an index-organized table).
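
If you want to see all those extra pieces, a first pass is simply to pattern-match the names in dba_objects, which conveniently includes a “secondary” flag – though, as the final listing further down shows, this sketch will miss the system-generated names (the LOB indexes and IOT “top” indexes):

select  object_name, object_type, secondary
from    dba_objects
where   owner = 'TEST_USER'
and     object_name like 'DR$SH_IX_NAME%'
order by
        object_name
/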

And this is the result when the base table is in tablespace users and the indexes are in test_8k_assm:

==================================
Domain indexes (in the tablespace)
==================================

OWNER           INDEX_NAME           TABLE_OWNER     TABLE_NAME                TABLESPACE_NAME
--------------- -------------------- --------------- ------------------------- ------------------------------
TEST_USER       SH_IX_NAME           TEST_USER       SCRIPT_HANDLES
TEST_USER       SH_IX_HANDLE         TEST_USER       SCRIPT_HANDLES

2 rows selected

==========================================
Domain indexes on tables in the tablespace
==========================================

no rows selected

===================================================
Secondary objects that are tables in the tablespace
===================================================

INDEX_OWNER     INDEX_NAME           SECONDARY_OBJECT_OWNER           SECONDARY_OBJECT_NAME            SECONDARY_OBJDATA_TY
--------------- -------------------- -------------------------------- -------------------------------- --------------------
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$I                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$K                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$U                  FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$I                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$K                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$U                FROM INDEXTYPE

6 rows selected

=====================================================
Indexes in the tablespace that have secondary objects
=====================================================

no rows selected

So one of the middle two queries ought to return some rows if there are any Context indexes in the tablespace or on tables in the tablespace. (And for the final option – i.e. the table and its Context indexes being in the tablespace – both queries will return rows). The OP got no rows returned on all 4 queries.

What next?

It looks like the OP must have some secondary objects that aren’t playing by the rules if they’re “from index types”, or there are some secondary objects that are “from statistics types” and don’t get picked up by the queries. There is a view dba_ustats for “user statistics types” which has interesting column names, but at this point I decided to follow a strategy that I’ve used many times in the past when Oracle raises an unexpected error on a repeatable process: repeat the process but enable SQL trace and see what Oracle was doing just before it raised the error.
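
In case it’s useful, this is the sort of thing I mean – one of several ways of enabling the trace; level 4 of the 10046 event captures bind values, which matters here because (as we’ll see) the critical recursive statement uses a bind variable for the tablespace number:

alter session set tracefile_identifier = 'drop_ts';
alter session set events '10046 trace name context forever, level 4';

drop tablespace test_8k_assm including contents and datafiles;

alter session set events '10046 trace name context off';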

Here’s the critical line in the trace file I produced, with a couple of the preceding lines:

CLOSE #140708355472384:c=1860,e=1512,dep=1,type=0,tim=31701605233
EXEC #140708354845632:c=68018,e=68006,p=0,cr=1276,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=31701605472
ERROR #140708354845632:err=29857 tim=31701605485

A search backwards up the trace file showed that cursor #140708354845632 was for the “drop tablespace” statement. Just above the ERROR and EXEC lines we see cursor #140708355472384 closing. It’s reasonably likely (though not 100% certain) that this is the cursor representing the last SQL statement that Oracle executed before raising the error, so searching backwards for “PARSING IN CURSOR #140708355472384” was my next step (and if that statement didn’t look promising I’d have searched backwards from the ERROR line to find the cursor number from the most recent “EXEC” line). In this case (with my data) the CLOSE took me to the following (cosmetically cleaned) SQL:

select null from  (
             select o.obj# from obj$ o, tab$ t      where t.ts#=:1 and t.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, tabpart$ tp      where tp.ts#=:1 and tp.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, tabsubpart$ tsp      where tsp.ts#=:1 and tsp.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, ind$ i      where i.ts#=:1 and i.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, indpart$ ip      where ip.ts#=:1 and ip.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, indsubpart$ isp      where isp.ts#=:1 and isp.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select o.obj# from obj$ o, clu$ c      where c.ts#=:1 and c.obj#=o.obj# and            (decode(bitand(o.flags, 16), 16, 1, 0)=1)    
union all    select i.obj# from ind$ i, tab$ t, tabpart$ tp      where tp.ts#=:1 and tp.bo# = t.obj# and t.obj#=i.bo# and i.type#=9    
union all    select i.obj# from ind$ i, tab$ t, tabcompart$ tcp,tabsubpart$ tsp      where tsp.ts#=:1 and tsp.pobj#=tcp.obj# and tcp.bo#=t.obj# and            t.obj#=i.bo# and i.type#=9    
union all    select i.obj# from ind$ i, tab$ t      where t.ts#=:1 and t.obj#=i.bo# and i.type#=9
)

Every single line of the UNION ALL is addressing the same tablespace by number (:1) looking for objects where bit 5 is set in the obj$.flags column (i.e. secondary object) or indexes where the ind$.type# is 9 (“cooperative index method”).
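
The bind value :1 is the tablespace number, which is easy to look up:

select ts# from v$tablespace where name = 'TEST_8K_ASSM';       -- 6 in my database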

That looks like a good bet for being the code that Oracle uses to check if there are any objects that should block the drop, so I changed the bind variable to the tablespace number (from v$tablespace), changed the “select null” to “select obj#”, and stuck the resulting query block into an IN subquery against dba_objects (this may not be the most efficient of queries):

select 
        secondary, owner, object_name, object_type, object_id
from    dba_objects
where   object_id in (
                select obj# from  (
                                  select o.obj# from obj$ o, tab$ t where t.ts#=6 and t.obj#=o.obj# and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
                        union all ...
                        union all select i.obj# from ind$ i, tab$ t where t.ts#=6 and t.obj#=i.bo# and i.type#=9
                )
        )
order by secondary, owner, object_name
/

S OWNER           OBJECT_NAME                      OBJECT_TYPE              OBJECT_ID
- --------------- -------------------------------- ----------------------- ----------
N TEST_USER       SH_IX_HANDLE                     INDEX                       123956
N TEST_USER       SH_IX_NAME                       INDEX                       123944
Y TEST_USER       DR$SH_IX_HANDLE$I                TABLE                       123957
Y TEST_USER       DR$SH_IX_HANDLE$K                TABLE                       123960
Y TEST_USER       DR$SH_IX_HANDLE$KD               INDEX                       123966
Y TEST_USER       DR$SH_IX_HANDLE$KR               INDEX                       123967
Y TEST_USER       DR$SH_IX_HANDLE$U                TABLE                       123963
Y TEST_USER       DR$SH_IX_HANDLE$X                INDEX                       123965
Y TEST_USER       DR$SH_IX_NAME$I                  TABLE                       123945
Y TEST_USER       DR$SH_IX_NAME$K                  TABLE                       123948
Y TEST_USER       DR$SH_IX_NAME$KD                 INDEX                       123954
Y TEST_USER       DR$SH_IX_NAME$KR                 INDEX                       123955
Y TEST_USER       DR$SH_IX_NAME$U                  TABLE                       123951
Y TEST_USER       DR$SH_IX_NAME$X                  INDEX                       123953
Y TEST_USER       SYS_C0016528                     INDEX                       123952
Y TEST_USER       SYS_C0016536                     INDEX                       123964
Y TEST_USER       SYS_IL0000123945C00006$$         INDEX                       123947
Y TEST_USER       SYS_IL0000123957C00006$$         INDEX                       123959
Y TEST_USER       SYS_IOT_TOP_123949               INDEX                       123950
Y TEST_USER       SYS_IOT_TOP_123961               INDEX                       123962

20 rows selected.

This is the list I got when the base table and the Context indexes were all created in the tablespace I wanted to drop – notice that the “logical” indexes (the first two items, flagged as secondary=’N’) are reported along with all the physical objects relating to the context indexes.

When I created the indexes in the target tablespace but the base table in an alternative tablespace the query reported only the physical objects making up the Context indexes; when I created the base table in the target tablespace and the Context indexes in an alternative tablespace the query reported only the “logical” indexes.

Conclusion

As a last resort, when the simple queries you find in the manuals or on MOS (or elsewhere on the internet) don’t seem to supply all the answers, the strategy of finding and hacking the query that Oracle runs just before raising the error is a good generic method for getting a better handle on what’s causing the problem.

In this example of Oracle error ORA-29857 you can get a list of the objects (logical or physical or both) that are blocking a “drop tablespace” command, and that makes it easier to find out what else you need to drop before you can drop the tablespace.

Bear in mind that I’ve just demonstrated the approach while experimenting with a couple of Context indexes, where the information I needed was available through a couple of ordinary data dictionary views – I only dived down into the internals because the data dictionary views weren’t giving the OP a complete answer.

I don’t know what was in the OP’s database to make this problem happen (it may simply be that the database was version 11.2.0.4 while mine was 19.11.0.0 and there’s a bug that’s been fixed since 11g), so the OP may find that after they’ve cleared all domain indexes indicated by this query they may still see ORA-29857 and find that there’s another query that Oracle uses to check for other conditions (maybe those “from statistics type” secondary objects) that also needs to be hijacked and subverted to complete the job. I know that when I traced a successful “drop tablespace” there were 13 more statements after the one I’ve shown that looked as if they might be checking for the (non-)existence of other types of object before the tablespace could be dropped.

Case Study

Fri, 2022-06-17 07:25

The question “How do you trouble-shoot a performance problem” came up in an online session I did for the AIOUG (All-India OUG) today. It’s a very broad question and the only possible answers are either extremely generic, or very specific – so here’s a specific example that I talked about to give some idea of the processes I go through. It’s an example from the Oracle-l list server asking the (paraphrased) question:

I’ve got a parallel query that takes 24 seconds to return 2,500 rows for 2018. The same query for 2019 should return a similar amount of data but consumes a lot of TEMP space before failing; it takes 45 minutes to complete if I remove the parallel hint. The only difference between the two queries is a change to a single predicate: “opclf.year_number = to_number(‘YYYY’)” and the statistics are up to date – what’s going on?

The ease of diagnosing a problem is dependent on the ease of access to all the right information, and you don’t always know initially what the right information might be. In this case the OP had started by posting the SQL Monitor reports for the two queries to github; these reports had been generated automatically because the queries were using parallel execution.
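
(For reference: if you ever need to pull such a report yourself – and you’re suitably licensed for the tuning pack – something like the following, using the SQL_ID that appears in the report, produces the text version:)

set long 1000000 longchunksize 1000000 pagesize 0

select  dbms_sqltune.report_sql_monitor(
                sql_id  => '3cjg20q2vw41f',
                type    => 'TEXT'
        )
from    dual
/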

The first few sections of the 2019 output:
SQL Text
------------------------------
SELECT /*+ PARALLEL(8) */ DISTINCT rd.document_id AS doc_id ,'firm_id' AS criteria_key ,opf.ultimate_parent_firm_id AS series_id ,period_number FROM ( SELECT /*+ PARALLEL(8) */ DISTINCT rd.document_id ,rd.client_role_id ,( CASE WHEN MONTHS_BETWEEN(TO_DATE('04/28/2022', 'MM/DD/YYYY'), TO_DATE('04/01/2017', 'MM/DD/YYYY')) > 12 THEN TRUNC(dc.date_value, 'YEAR') ELSE TRUNC(dc.date_value, 'MONTH') END ) period_number FROM REPORT_ENGINE.date_code dc ,REPORT_ENGINE.lit_fact_bd rd INNER JOIN
report_engine.firm FIRM ON rd.firm_id = FIRM.firm_id WHERE dc.date_value BETWEEN TO_DATE('04/01/2017', 'MM/DD/YYYY') AND TO_DATE('04/28/2022', 'MM/DD/YYYY') AND rd.publication_date_id = dc.date_id AND rd.year_number = to_number('2019') AND (FIRM.ultimate_parent_firm_id IN (to_number('252094'))) ) rd INNER JOIN report_engine.opposing_counsel op ON rd.client_role_id = op.legal_role_id INNER JOIN report_engine.lit_fact_bd opclf ON opclf.document_id = rd.document_id AND op.opposing_counsel_role_id
= opclf.client_role_id AND opclf.year_number = to_number('2019') INNER JOIN report_engine.firm opf ON opclf.firm_id = opf.firm_id AND opf.firm_id >= 1000 WHERE 1 = 1

Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  1
 Session             :  SYS (647:28741)
 SQL ID              :  3cjg20q2vw41f
 SQL Execution ID    :  16777216
 Execution Started   :  06/09/2022 05:08:24
 First Refresh Time  :  06/09/2022 05:08:25
 Last Refresh Time   :  06/09/2022 05:08:27
 Duration            :  221s
 Module/Action       :  sqlplus@c111dhw (TNS V1-V3)/-
 Service             :  SYS$USERS
 Program             :  sqlplus@c111dhw (TNS V1-V3)

Global Stats
====================================================================
| Elapsed |   Cpu   |    IO    |  Other   | Buffer | Write | Write |
| Time(s) | Time(s) | Waits(s) | Waits(s) |  Gets  | Reqs  | Bytes |
====================================================================
|     222 |     179 |       28 |       15 |    49M | 11624 |   2GB |
====================================================================

Parallel Execution Details (DOP=8 , Servers Allocated=16)
=====================================================================================================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   |    IO    |  Other   | Buffer | Write | Write |         Wait Events         |
|                |       |         | Time(s) | Time(s) | Waits(s) | Waits(s) |  Gets  | Reqs  | Bytes |         (sample #)          |
=====================================================================================================================================
| PX Coordinator | QC    |         |    0.26 |    0.26 |          |     0.00 |  12144 |       |     . |                             |
| p000           | Set 1 |       1 |         |         |          |          |        |       |     . |                             |
| p001           | Set 1 |       2 |         |         |          |          |        |       |     . |                             |
| p002           | Set 1 |       3 |         |         |          |          |        |       |     . |                             |
| p003           | Set 1 |       4 |         |         |          |          |        |       |     . |                             |
| p004           | Set 1 |       5 |         |         |          |          |        |       |     . |                             |
| p005           | Set 1 |       6 |         |         |          |          |        |       |     . |                             |
| p006           | Set 1 |       7 |         |         |          |          |        |       |     . |                             |
| p007           | Set 1 |       8 |         |         |          |          |        |       |     . |                             |
| p008           | Set 2 |       1 |     220 |     177 |       28 |       15 |    48M | 11624 |   2GB | direct path write temp (28) |
| p009           | Set 2 |       2 |         |         |          |          |        |       |     . |                             |
| p010           | Set 2 |       3 |         |         |          |          |        |       |     . |                             |
| p011           | Set 2 |       4 |    1.71 |    1.70 |          |     0.01 |   595K |       |     . |                             |
| p012           | Set 2 |       5 |         |         |          |          |        |       |     . |                             |
| p013           | Set 2 |       6 |         |         |          |          |        |       |     . |                             |
| p014           | Set 2 |       7 |         |         |          |          |        |       |     . |                             |
| p015           | Set 2 |       8 |         |         |          |          |        |       |     . |                             |
=====================================================================================================================================

You’ll note that I haven’t got as far as the body of the execution plan yet; the line to look at is the one for p008, in the middle of the summary of activity for the parallel execution processes. There are 8 servers in each of two sets (we’re running parallel 8) and p008 is the first server of the second set. The reason it stands out is that it’s the one server that’s doing almost all the work – effectively the query (at some point in the plan) is running serially.

So, a first thought, maybe we’ve just been unlucky and running parallel 7 (say) would result in a better distribution of data across parallel servers and allow each of 7 processes to do whatever they had to do to a much smaller amount of data. Maybe a change to the distribution method (pq_distribute() hint) would spread the workload more evenly. In either case the N smaller workloads might still use a lot of TEMP in total, but possibly no individual process would fail, and the job would complete nearly N times as quickly.
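
Purely as an illustration of the hint syntax – the tables here are hypothetical and choosing a sensible distribution would need testing against the real data – a hash/hash distribution of a join can be dictated like this:

select  /*+
                parallel(8)
                leading(t1 t2)
                use_hash(t2)
                pq_distribute(t2 hash hash)
        */
        t1.owner, count(*)
from    big_t1  t1,
        big_t2  t2
where   t2.id = t1.id
group by
        t1.owner
/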

Note: the 2018 Monitor report showed an equivalent skew in the data distribution, but the amount of work needed was much smaller with a read/write load of only 143MB compared to 2GB for the 2019 report. The OP did try running parallel 7, but with no change to the overall effect.

Let’s take a quick glance at the plan body:
SQL Plan Monitoring Details (Plan Hash Value=1862006233)
=========================================================================================================================================================================================================================
| Id    |                     Operation                      |           Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Write | Write | Mem | Temp | Activity |        Activity Detail       |
|       |                                                    |                           | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes |     |      |   (%)     |         (# samples)         |
=========================================================================================================================================================================================================================
|     0 | SELECT STATEMENT                                   |                           |         |      |           |        |     3 |          |       |       |     |      |   |                             |
|     1 |   PX COORDINATOR                                   |                           |         |      |           |        |     3 |          |       |       |     |      |   |                             |
|     2 |    PX SEND QC (RANDOM)                             | :TQ10003                  |     781 |  153 |           |        |       |          |       |       |     |      |   |                             |
|     3 |     HASH UNIQUE                                    |                           |     781 |  153 |           |        |       |          |       |       |     |      |   |                             |
|     4 |      PX RECEIVE                                    |                           |     781 |  151 |           |        |       |          |       |       |     |      |   |                             |
|     5 |       PX SEND HASH                                 | :TQ10002                  |     781 |  151 |           |        |     2 |          |       |       |     |      |   |                             |
|  -> 6 |        BUFFER SORT                                 |                           |     781 |  153 |       219 |     +3 |     2 |        0 | 11624 |   2GB | 61M |   2G |    26.21 | Cpu (26)                     |
|       |                                                    |                           |         |      |           |        |       |          |       |       |     |      |   | direct path write temp (28) |
|  -> 7 |         NESTED LOOPS                               |                           |     781 |  151 |       219 |     +3 |     2 |      39M |       |       |     |      |   |                             |
|  -> 8 |          NESTED LOOPS                              |                           |     781 |  151 |       219 |     +3 |     2 |      39M |       |       |     |      |     0.49 | Cpu (1)                      |
|  -> 9 |           NESTED LOOPS                             |                           |     777 |   44 |       219 |     +3 |     2 |      39M |       |       |     |      |   |                             |
| -> 10 |            NESTED LOOPS                            |                           |      41 |   26 |       219 |     +3 |     2 |     6463 |       |       |     |      |   |                             |
| -> 11 |             HASH JOIN                              |                           |      41 |   21 |       219 |     +3 |     2 |     6463 |       |       |  6M |      |   |                             |
|    12 |              BUFFER SORT                           |                           |         |      |         1 |     +3 |     2 |    36855 |       |       |     |      |   |                             |
|    13 |               PX RECEIVE                           |                           |      87 |   19 |         1 |     +3 |     2 |    36855 |       |       |     |      |   |                             |
|    14 |                PX SEND HASH                        | :TQ10000                  |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    15 |                 NESTED LOOPS                       |                           |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    16 |                  NESTED LOOPS                      |                           |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    17 |                   TABLE ACCESS BY INDEX ROWID      | FIRM                      |       1 |    2 |         1 |     +3 |     1 |       43 |       |       |     |      |   |                             |
|    18 |                    INDEX RANGE SCAN                | FIRM_ULT_PARENT_FIRM_IDX1 |       1 |    1 |         1 |     +3 |     1 |       43 |       |       |     |      |   |                             |
|    19 |                   PARTITION RANGE SINGLE           |                           |         |      |         1 |     +3 |    43 |    38694 |       |       |     |      |   |                             |
|    20 |                    BITMAP CONVERSION TO ROWIDS     |                           |         |      |         1 |     +3 |    43 |    38694 |       |       |     |      |   |                             |
|    21 |                     BITMAP INDEX SINGLE VALUE      | LIT_FACT_BD_IDX09         |         |      |         1 |     +3 |    43 |       49 |       |       |     |      |   |                             |
|    22 |                  TABLE ACCESS BY LOCAL INDEX ROWID | LIT_FACT_BD               |      63 |   19 |         3 |     +1 | 38694 |    38694 |       |       |     |      |     0.49 | Cpu (1)                      |
|    23 |              PX RECEIVE                            |                           |      20 |    2 |         1 |     +3 |     2 |        2 |       |       |     |      |   |                             |
|    24 |               PX SEND HASH                         | :TQ10001                  |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
|    25 |                PX BLOCK ITERATOR                   |                           |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
|    26 |                 TABLE ACCESS FULL                  | OPPOSING_COUNSEL          |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
| -> 27 |             TABLE ACCESS BY INDEX ROWID            | DATE_CODE                 |       1 |      |       219 |     +3 |  6465 |     6463 |       |       |     |      |   |                             |
| -> 28 |              INDEX UNIQUE SCAN                     | PK_DATE_CODE              |       1 |      |       219 |     +3 |  6465 |     6465 |       |       |     |      |   |                             |
| -> 29 |            PARTITION RANGE SINGLE                  |                           |      19 |      |       219 |     +3 |  6465 |      39M |       |       |     |      |   |                             |
| -> 30 |             TABLE ACCESS BY LOCAL INDEX ROWID      | LIT_FACT_BD               |      19 |      |       220 |     +2 |  6465 |      39M |       |       |     |      |    35.92 | Cpu (74)                     |
| -> 31 |              INDEX RANGE SCAN                      | LIT_FACT_BD_IDX20         |       1 |      |       219 |     +3 |  6465 |      39M |       |       |     |      |     9.22 | Cpu (19)                     |
| -> 32 |           INDEX UNIQUE SCAN                        | PK_FIRM                   |       1 |      |       219 |     +3 |   39M |      39M |       |       |     |      |    10.68 | Cpu (22)                     |
| -> 33 |          TABLE ACCESS BY INDEX ROWID               | FIRM                      |       1 |      |       219 |     +3 |   39M |      39M |       |       |     |      |    16.99 | Cpu (35)                     |
=========================================================================================================================================================================================================================

You can see from the “->” symbols at the left hand side of the plan that this report was generated while the plan was still running. The thing that leaps out as you glance down the page is the value in the “Rows (Actual)” column at operations 7-9 (which show the rowsources generated by some nested loop joins) and operations 29, 32 and 33 of the plan that tell us something about how those rowsources were generated.

Operation 29 has executed (Execs) 6,465 times so far, producing a total of 39M rows, and operations 32 and 33 have each executed 39M times, producing a total of 39M rows apiece by index unique scan and table access by rowid respectively.

The plan for the 2018 data was similar though the join order for DATE_CODE, LIT_FACT_BD and FIRM was different (and it was the join to LIT_FACT_BD that increased the row count dramatically – so hinting it to be the last table in the join might help a bit), but the largest rowcount for the 2018 query was only 3M rows, not the 39M that had appeared after only 6,465 rows of a possible driving 39,855 in the 2019 query.

So it’s not surprising that the query could take much longer for 2019. It’s not the volume of output that matters, it’s the volume of input (or, more accurately, intermediate) data that matters.

Let’s think about that volume, though: the 2018 plan generated 3M rows and then crunched them down to 2,500 rows and the 2019 plan was supposed to produce a similar sized output (from 39M+ rows). Could we rewrite the query in some way that made it do some intermediate aggregation so that the volume of data to be aggregated was never enormous?

Let’s take a look at the plan from the 2018 Monitor report:
SQL Plan Monitoring Details (Plan Hash Value=472871521)
=======================================================================================================================================================================================================================================
| Id |                     Operation                      |           Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Write | Write |  Mem  | Temp  | Activity |       Activity Detail       |
|    |                                                    |                           | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | (Max) | (Max) |   (%)     |        (# samples)         |
=======================================================================================================================================================================================================================================
|  0 | SELECT STATEMENT                                   |                           |         |      |         1 |    +24 |    17 |     2613 |      |       |       |       |       |       |   |                            |
|  1 |   PX COORDINATOR                                   |                           |         |      |         1 |    +24 |    17 |     2613 |      |       |       |       |       |       |   |                            |
|  2 |    PX SEND QC (RANDOM)                             | :TQ10003                  |       1 |   39 |         1 |    +24 |     8 |     2613 |      |       |       |       |       |       |   |                            |
|  3 |     HASH UNIQUE                                    |                           |       1 |   39 |         9 |    +16 |     8 |     2613 |      |       |       |       |    9M |       |     6.90 | Cpu (2)                     |
|  4 |      PX RECEIVE                                    |                           |       1 |   38 |         9 |    +16 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  5 |       PX SEND HASH                                 | :TQ10002                  |       1 |   38 |        12 |    +14 |     8 |       3M |      |       |       |       |       |       |     3.45 | Cpu (1)                     |
|  6 |        BUFFER SORT                                 |                           |       1 |   39 |        23 |     +2 |     8 |       3M | 4584 | 143MB |   703 | 143MB |  151M |  151M |    34.48 | Cpu (2)                     |
|    |                                                    |                           |         |      |           |        |       |          |      |       |       |       |       |       |   | direct path read temp (6)  |
|    |                                                    |                           |         |      |           |        |       |          |      |       |       |       |       |       |   | direct path write temp (2) |
|  7 |         NESTED LOOPS                               |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  8 |          NESTED LOOPS                              |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  9 |           NESTED LOOPS                             |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
| 10 |            NESTED LOOPS                            |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
| 11 |             HASH JOIN                              |                           |      41 |   21 |        15 |     +2 |     8 |    19334 |      |       |       |       |    7M |       |   |                            |
| 12 |              BUFFER SORT                           |                           |         |      |        13 |     +2 |     8 |    19233 |      |       |       |       |    1M |       |   |                            |
| 13 |               PX RECEIVE                           |                           |      89 |   19 |        13 |     +2 |     8 |    19233 |      |       |       |       |       |       |   |                            |
| 14 |                PX SEND HASH                        | :TQ10000                  |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 15 |                 NESTED LOOPS                       |                           |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 16 |                  NESTED LOOPS                      |                           |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 17 |                   TABLE ACCESS BY INDEX ROWID      | FIRM                      |       1 |    2 |         1 |     +1 |     1 |       43 |      |       |       |       |       |       |   |                            |
| 18 |                    INDEX RANGE SCAN                | FIRM_ULT_PARENT_FIRM_IDX1 |       1 |    1 |         1 |     +1 |     1 |       43 |      |       |       |       |       |       |   |                            |
| 19 |                   PARTITION RANGE SINGLE           |                           |         |      |         1 |     +1 |    43 |    19233 |      |       |       |       |       |       |   |                            |
| 20 |                    BITMAP CONVERSION TO ROWIDS     |                           |         |      |         1 |     +1 |    43 |    19233 |      |       |       |       |       |       |   |                            |
| 21 |                     BITMAP INDEX SINGLE VALUE      | LIT_FACT_BD_IDX09         |         |      |         1 |     +1 |    43 |       51 |      |       |       |       |       |       |   |                            |
| 22 |                  TABLE ACCESS BY LOCAL INDEX ROWID | LIT_FACT_BD               |      64 |   19 |         1 |     +1 | 19233 |    19233 |      |       |       |       |       |       |   |                            |
| 23 |              PX RECEIVE                            |                           |      20 |    2 |        15 |     +2 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 24 |               PX SEND HASH                         | :TQ10001                  |      20 |    2 |         1 |    +14 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 25 |                PX BLOCK ITERATOR                   |                           |      20 |    2 |         1 |    +14 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 26 |                 TABLE ACCESS FULL                  | OPPOSING_COUNSEL          |      20 |    2 |         1 |    +14 |     3 |       20 |      |       |       |       |       |       |   |                            |
| 27 |             PARTITION RANGE SINGLE                 |                           |       1 |      |        15 |     +2 | 19334 |       3M |      |       |       |       |       |       |   |                            |
| 28 |              TABLE ACCESS BY LOCAL INDEX ROWID     | LIT_FACT_BD               |       1 |      |        16 |     +1 | 19334 |       3M |      |       |       |       |       |       |    17.24 | Cpu (5)                     |
| 29 |               INDEX RANGE SCAN                     | LIT_FACT_BD_IDX20         |       1 |      |        15 |     +2 | 19334 |       3M |      |       |       |       |       |       |   |                            |
| 30 |            TABLE ACCESS BY INDEX ROWID             | DATE_CODE                 |       1 |      |        15 |     +2 |    3M |       3M |      |       |       |       |       |       |    10.34 | Cpu (3)                     |
| 31 |             INDEX UNIQUE SCAN                      | PK_DATE_CODE              |       1 |      |        16 |     +1 |    3M |       3M |      |       |       |       |       |       |     6.90 | Cpu (2)                     |
| 32 |           INDEX UNIQUE SCAN                        | PK_FIRM                   |       1 |      |        23 |     +2 |    3M |       3M |      |       |       |       |       |       |     6.90 | Cpu (2)                     |
| 33 |          TABLE ACCESS BY INDEX ROWID               | FIRM                      |       1 |      |        16 |     +1 |    3M |       3M |      |       |       |       |       |       |    13.79 | Cpu (4)                     |
=======================================================================================================================================================================================================================================

We see from operations 3 – 7 that the 3M rows generated from the nested loop joins pass up through a buffer sort operation before being crunched down to 2,613 rows. It’s probably the buffer sort (which is buffering but not sorting) that has mostly passed through a single server and spilled to disc in the 2019 report. We just don’t want that 39M+ rows ever to exist.

So how easy will it be to change the SQL?
SELECT
        /*+ PARALLEL(8) */
        DISTINCT rd.document_id AS doc_id
        ,'firm_id' AS criteria_key
        ,opf.ultimate_parent_firm_id AS series_id
        ,period_number
FROM (
        SELECT
                /*+ PARALLEL(8) */
                DISTINCT rd.document_id
                ,rd.client_role_id
                ,(
                        CASE 
                                WHEN MONTHS_BETWEEN(TO_DATE('04/28/2022', 'MM/DD/YYYY'), TO_DATE('04/01/2017', 'MM/DD/YYYY')) > 12
                                        THEN TRUNC(dc.date_value, 'YEAR')
                                ELSE TRUNC(dc.date_value, 'MONTH')
                                END
                        ) period_number
        FROM REPORT_ENGINE.date_code dc
                ,REPORT_ENGINE.lit_fact_bd rd
        INNER JOIN report_engine.firm FIRM ON rd.firm_id = FIRM.firm_id
        WHERE dc.date_value BETWEEN TO_DATE('04/01/2017', 'MM/DD/YYYY')
                        AND TO_DATE('04/28/2022', 'MM/DD/YYYY')
                AND rd.publication_date_id = dc.date_id
                AND rd.year_number = to_number('2019')
                AND (FIRM.ultimate_parent_firm_id IN (to_number('252094')))
        ) rd
INNER JOIN report_engine.opposing_counsel op ON rd.client_role_id = op.legal_role_id
INNER JOIN report_engine.lit_fact_bd opclf ON opclf.document_id = rd.document_id
        AND op.opposing_counsel_role_id = opclf.client_role_id
        AND opclf.year_number = to_number('2019')
INNER JOIN report_engine.firm opf ON opclf.firm_id = opf.firm_id
        AND opf.firm_id >= 1000
WHERE 1 = 1;

Looking at the text of the query, we already have an inline view (aliased as rd) where we’re doing a “select distinct” and, unwinding the mix of “Oracle” and “ANSI” syntax, we can see that it joins DATE_CODE, LIT_FACT_BD and FIRM, and we know that one of those tables explodes the intermediate data size to something enormous. So it looks like the original author of this code had already worked out that the query needed to aggregate early.

Checking back to the original plans we note that there’s only one “hash unique” operation, and there’s no sign of a “view” operation, so maybe the performance problem is a result of the optimizer suddenly deciding it can do complex view merging with this inline view, and perhaps all we need to do is add the hint /*+ no_merge */ to the inline view and see what happens.
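
If that hypothesis is right the change is minimal – presumably something like editing the start of the inline view so that its SELECT becomes:

        SELECT
                /*+ no_merge PARALLEL(8) */
                DISTINCT rd.document_id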

Here’s the plan after adding the hint:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----
| Id  | Operation                                            | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----
|   0 | SELECT STATEMENT                                     |                           |       |       |   314 (100)|          |       |       |        |      | |
|   1 |  PX COORDINATOR                                      |                           |       |       |            |          |       |       |        |      | |
|   2 |   PX SEND QC (RANDOM)                                | :TQ10005                  |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | P->S | QC (RAN D)  |
|   3 |    HASH UNIQUE                                       |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | PCWP | |
|   4 |     PX RECEIVE                                       |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | PCWP | |
|   5 |      PX SEND HASH                                    | :TQ10004                  |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,04 | P->P | HASH |
|   6 |       HASH UNIQUE                                    |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   7 |        NESTED LOOPS                                  |                           |    23 |  2599 |   313   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   8 |         NESTED LOOPS                                 |                           |    23 |  2599 |   313   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   9 |          NESTED LOOPS                                |                           |    23 |  2323 |   310   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|* 10 |           HASH JOIN                                  |                           |   388 | 21340 |   148   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  11 |            PX RECEIVE                                |                           |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,04 | PCWP | |
|  12 |             PX SEND BROADCAST                        | :TQ10002                  |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | P->P | BROADCA ST  |
|  13 |              PX BLOCK ITERATOR                       |                           |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWC | |
|* 14 |               TABLE ACCESS FULL                      | OPPOSING_COUNSEL          |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP | |
|  15 |            VIEW                                      |                           |   835 | 39245 |   146   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  16 |             HASH UNIQUE                              |                           |   835 | 63460 |   146   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  17 |              PX RECEIVE                              |                           |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  18 |               PX SEND HASH                           | :TQ10003                  |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,03 | P->P | HASH |
|* 19 |                HASH JOIN BUFFERED                    |                           |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,03 | PCWP | |
|  20 |                 BUFFER SORT                          |                           |       |       |            |          |       |       |  Q1,03 | PCWC | |
|  21 |                  PX RECEIVE                          |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |  Q1,03 | PCWP | |
|  22 |                   PX SEND HASH                       | :TQ10000                  |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        | S->P | HASH |
|  23 |                    NESTED LOOPS                      |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        |      | |
|  24 |                     NESTED LOOPS                     |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        |      | |
|  25 |                      TABLE ACCESS BY INDEX ROWID     | FIRM                      |     1 |    12 |     2   (0)| 00:00:01 |       |       |        |      | |
|* 26 |                       INDEX RANGE SCAN               | FIRM_ULT_PARENT_FIRM_IDX1 |     1 |       |     1   (0)| 00:00:01 |       |       |        |      | |
|  27 |                      PARTITION RANGE SINGLE          |                           |       |       |            |          |    30 |    30 |        |      | |
|  28 |                       BITMAP CONVERSION TO ROWIDS    |                           |       |       |            |          |       |       |        |      | |
|* 29 |                        BITMAP INDEX SINGLE VALUE     | LIT_FACT_BD_IDX09         |       |       |            |          |    30 |    30 |        |      | |
|* 30 |                     TABLE ACCESS BY LOCAL INDEX ROWID| LIT_FACT_BD               |   598 | 30498 |   136   (0)| 00:00:03 |    30 |    30 |        |      | |
|  31 |                 PX RECEIVE                           |                           |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,03 | PCWP | |
|  32 |                  PX SEND HASH                        | :TQ10001                  |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | P->P | HASH |
|  33 |                   PX BLOCK ITERATOR                  |                           |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | PCWC | |
|* 34 |                    TABLE ACCESS FULL                 | DATE_CODE                 |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | PCWP | |
|  35 |           PARTITION RANGE SINGLE                     |                           |     1 |    46 |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 36 |            TABLE ACCESS BY LOCAL INDEX ROWID         | LIT_FACT_BD               |     1 |    46 |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 37 |             INDEX RANGE SCAN                         | LIT_FACT_BD_IDX20         |     1 |       |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 38 |          INDEX UNIQUE SCAN                           | PK_FIRM                   |     1 |       |     0   (0)|          |       |       |  Q1,04 | PCWP | |
|  39 |         TABLE ACCESS BY INDEX ROWID                  | FIRM                      |     1 |    12 |     0   (0)|          |       |       |  Q1,04 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Note particularly that operations 15 and 16 tell us that we’ve forced the optimizer into treating the inline view as a separate query block and we now have two aggregation steps, one inside the view, and another after joining FIRM (again) and LIT_FACT_BD (again) to the inline view.

Unfortunately the plan shown here was pulled from memory using dbms_xplan.display_cursor() after execution, so it includes the various parallel execution columns (TQ, IN-OUT, PQ Distrib) but doesn’t have the rowsource execution stats enabled, which means we can’t see the actual workload and data volume. In the words of the OP, though: “adding no_merge hint did the trick and now the SQL is just executing fine”.
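For reference – and this is a sketch rather than the OP’s actual command – the rowsource execution stats could have been captured by adding the gather_plan_statistics hint to the query (or setting statistics_level to all for the session) and then calling dbms_xplan.display_cursor() with a suitable format option:

select  *
from    table(dbms_xplan.display_cursor(null, null, 'allstats last'));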

Summary

The steps for solving the performance problems of a specific SQL statement are very fluid. For a long-running or parallel statement the SQL Monitor report will automatically be created (though there are limits on the size of the plan that may disable the feature) and this is the easiest source of useful information, though you might also need to pull the execution plan from v$sql_plan to get details about parallel execution and partitioning at the same time.

If you’re not licensed for the diagnostic and performance packs, though, enabling SQL Trace to get the plan and waits gets you a lot of information, and querying (g)v$pq_tqstat immediately after running the query can fill in the parallel traffic details.
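As a sketch of that last step – the view has to be queried from the same session, immediately after the parallel query completes, because its contents relate to the most recent parallel execution in that session:

select
        dfo_number, tq_id, server_type, instance, process, num_rows, bytes
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
/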

In this example the SQL Monitor report showed a highly skewed distribution, which might have been fixable by changing the PQ distribution, or even by simply changing the degree of parallelism.

Further examination of the report showed that the query generated an enormous rowsource which it then crunched down to a small result set. Comparing the 2018 and 2019 plans (which were not quite identical, but were sufficiently similar to justify comparison) the same skew and explosion of rowsource were visible in both, though the data size involved in the 2018 plan made it seem that the plan was a “good” one when really it wasn’t.

The obvious target for tuning was to see if the explosion in volume could be reduced or eliminated by writing the query with some intermediate non-mergeable view(s), and it turned out that the query had been written with that intent in its original form but without a hint to block complex view merging. After adding the hint the performance was acceptable.
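As a minimal sketch of the mechanism (invented tables, not the OP’s query) – a no_merge hint inside an inline view stops the optimizer from combining the view’s aggregation with the outer query block, so the data is crunched down before the remaining joins take place:

select
        v.status, count(*)
from
        t1,
        (
        select  /*+ no_merge */
                id, max(status) status
        from    t2
        group by
                id
        ) v
where
        v.id = t1.id
group by
        v.status
;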

Join View delete

Tue, 2022-05-31 11:39

I’ve written a couple of notes about deleting from join views and (ultimately) the fact that if you have multiple key-preserved tables in a join view then the first key-preserved table in the from clause is the one where the delete is applied. The topic came up on the Oracle developer forum again today, and closed with a funny little quirk.

Here’s a little data model followed by one query and two deletes that I’ve just run on Oracle 19.11.0.0 and 21.3.0.0. The query and first delete feature a join using “ANSI” syntax, the second delete translates the join into “traditional” Oracle syntax.

rem
rem     Script:         delete_join_2.sql       
rem     Dated:          May 2022
rem     Author:         J P Lewis
rem
rem     Last tested
rem             21.3.0.0
rem             19.11.0.0
rem

create table t_parent (
        id      number 
                        constraint t_parent_pk primary key,
        val     varchar2(10)
);

create table t_child (
        id      number 
                        constraint t_child_fk references t_parent(id),
        val     varchar2(10)
);

alter table t_child add constraint t_child_pk primary key(id);

insert into t_parent values(1,rpad('parent',10));
insert into t_child  values(1,rpad('child', 10));

commit;

execute dbms_stats.gather_table_stats(user,'t_parent',cascade=>true)
execute dbms_stats.gather_table_stats(user,'t_child', cascade=>true)

set autotrace on explain

select t_child.* from t_parent join t_child on t_child.id = t_parent.id;

delete
        (select t_child.* from t_parent join t_child on t_child.id = t_parent.id);

delete (select t_child.* from t_parent, t_child where t_child.id = t_parent.id);

set autotrace off

The table definitions and content (with a little modification) come from the forum posting, and I have to admit that the “one-to-one” referential integrity from parent to child is a little unusual (though occasionally a very effective strategy). I’ve also enabled autotrace to get the (probable) execution plans for the three statements, but in the script on my test machine I’ve also enabled the CBO (10053) trace to pick up a few extra details.
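If you want to see the same extra detail, here’s a minimal sketch of wrapping a statement in the CBO trace – the trace is written during optimisation, so even an explain plan of the failing delete is enough to trigger it:

alter session set events '10053 trace name context forever, level 1';

explain plan for
delete (select t_child.* from t_parent join t_child on t_child.id = t_parent.id);

alter session set events '10053 trace name context off';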

The query is there to show us that the join will return a single row, and the row content will be from the t_child table. What happens, though, when we try to use that query as an inline view in the delete?

It is an updateable join view – and both tables are, in fact, key-preserved thanks to the primary key constraints at both ends of the join. Here’s what we get (with the execution plans removed from the output):

        ID VAL
---------- ----------
         1 child

1 row selected.



delete
*
ERROR at line 1:
ORA-02292: integrity constraint (TEST_USER.T_CHILD_FK) violated - child record found


1 row deleted.

Line 5 tells us that we selected one row – as expected.

Line 11 tells us that we’ve raised an error because we’re trying to break the foreign key constraint on the t_child table – and that error shouldn’t be a surprise because the t_parent table appears first in the from clause, so that’s the table the optimizer wants to delete rows from.

However, line 15 tells us that if we change to “traditional” Oracle syntax the delete works – so which of the two delete statements is displaying buggy behaviour?

Execution plans

There’s a clue in the execution plans (when we get them – autotrace doesn’t give a plan for the delete that raised the error). So here are the plans for the select and the “traditional” delete, in that order:

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |    14 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T_CHILD |     1 |    14 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------



-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | DELETE STATEMENT |            |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  DELETE          | T_CHILD    |       |       |            |          |
|   2 |   INDEX FULL SCAN| T_CHILD_PK |     1 |     3 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------

With assistance from the foreign key constraint, Join Elimination has occurred in both cases leaving t_child as the candidate for deletion. You’ll also note that the optimizer has recognised that while the view’s select list is “all columns” it only needs to optimize for “select rowid” to drive the delete, so produces an index-only access path to get the data it needs.

If we use explain plan, or check the 10053 trace file, for the ANSI delete that failed we’ll find the following:

-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT    |             |     1 |     6 |     1   (0)| 00:00:01 |
|   1 |  DELETE             | T_PARENT    |       |       |            |          |
|   2 |   NESTED LOOPS      |             |     1 |     6 |     1   (0)| 00:00:01 |
|   3 |    INDEX FULL SCAN  | T_CHILD_PK  |     1 |     3 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN| T_PARENT_PK |     1 |     3 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$4396EC5C
   3 - SEL$4396EC5C / T_CHILD@SEL$1
   4 - SEL$4396EC5C / T_PARENT@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T_CHILD"."ID"="T_PARENT"."ID")

There’s no Join Elimination, and Oracle is following the rule that “the first key-preserved table in the from clause should be the target of the delete”.

Where’s the bug?

This is a question that doesn’t have an obvious answer.

The traditional syntax gives you the result you probably wanted, but it’s deleting from the second table in the from clause of the join view. So is that a documentation bug or a coding bug? If the documentation is correct, the statement should have raised the error that the ANSI syntax gave.

Conversely, is there an optimizer bug (or limitation) because table elimination could have been used in the ANSI case but wasn’t? You could argue that the ANSI code is doing the right thing (following the documentation) by raising an error, and that the traditional syntax is doing the wrong thing.

If we search the CBO (10053) traces of the select and the ANSI join, looking for sections where the optimizer considers Join Elimination, the results are helpful:

grep -nT "^JE" or19_ora_30861_*.trc >temp.txt

With a little cosmetic editing here’s the output from the trace of the select statement:

or19_ora_30861_select.trc:  78  JE - Join Elimination
or19_ora_30861_select.trc: 807  JE:[V2]   Considering Join Elimination on query block SEL$1 (#0)
or19_ora_30861_select.trc: 813  JE:[V2] Query block (0x7fd3a152f5c0) before join elimination:
or19_ora_30861_select.trc: 816  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_select.trc: 817  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 818  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 819  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - predicate column refs.
or19_ora_30861_select.trc: 820  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using NTSID
or19_ora_30861_select.trc: 821  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 822  JE:[V2] Query block (0x7fd3a152f5c0) after join elimination:

or19_ora_30861_select.trc: 844  JE:[V2]   Considering Join Elimination on query block SEL$58A6D7F6 (#0)
or19_ora_30861_select.trc: 850  JE:[V2] Query block (0x7fd39c09ee60) before join elimination:
or19_ora_30861_select.trc: 853  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_select.trc: 854  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 855  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 856  JE:[V2]: Can eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 857  JE:[V2] Eliminate table: T_PARENT (T_PARENT)
or19_ora_30861_select.trc: 858  JE:[V2] Query block (0x7fd39c09ee60) after join elimination:

We can see that the optimizer has considered join elimination twice (lines 2 and 12), once for each of two query blocks (sel$1 – a query block embedded in the text – and sel$58a6d7f6, a query block name generated during some query transformation).

At lines 6 and 7 we can see that Oracle failed to take advantage of the referential integrity constraint to eliminate t_parent with the explanation “predicate column refs”; but at lines 16 and 17 we see that there’s a change of heart and the optimizer does take advantage of the referential integrity to eliminate t_parent. So let’s look at the text of the “query block before join elimination” in both cases and see what’s changed:

First pass
----------
SELECT 
        "T_PARENT"."ID"  "QCSJ_C000000000300000",
        "T_PARENT"."VAL" "QCSJ_C000000000300002",
        "T_CHILD"."ID"   "QCSJ_C000000000300001",
        "T_CHILD"."VAL"  "QCSJ_C000000000300003" 
FROM 
        "TEST_USER"."T_PARENT" "T_PARENT",
        "TEST_USER"."T_CHILD" "T_CHILD" 
WHERE 
        "T_CHILD"."ID"="T_PARENT"."ID"


Second Pass
-----------
SELECT 
        "T_CHILD"."ID" "ID",
        "T_CHILD"."VAL" "VAL" 
FROM 
        "TEST_USER"."T_PARENT" "T_PARENT",
        "TEST_USER"."T_CHILD" "T_CHILD" 
WHERE 
        "T_CHILD"."ID"="T_PARENT"."ID"

The first pass seems to have expanded the two tables in the initial JOIN, ignoring the limited select list, so join elimination is blocked by the presence of the (non-PK) column t_parent.val (aliased as qcsj_c000000000300002); but when we get to the second pass the select list has been limited to the minimum necessary set of columns that would be needed by a pure select statement, and join elimination is possible.

The clue to how/why this has happened is in the few lines between the two passes; the trace file also reports: “CVM: Merging SPJ view SEL$1 (#0) into SEL$2 (#0)” and when we look for the point where the optimizer registered query block sel$2 it turns out to have one item in its from clause, an object called from$_subquery$_003 (which tells us where the 3 came from in all those column aliases QCSJ_C0000000003nnnnn). In effect it looks like the optimizer started by rewriting the ANSI select in Oracle syntax as (with a little poetic licence for clarity):

select  child_id, child_val, parent_id
from    (
         select  t_child.*, t_parent.*
         from    t_child, t_parent
         where   t_child.id = t_parent.id
        ) from$_subquery$_003
/ 

So the first pass was the optimizer trying to do join elimination on the inner query block, and the second pass happened after the inner block was merged with the outer block; the resulting query block (called sel$58a6d7f6) allowed the join elimination to take place.

So what happened with the delete statement? Here’s the result of the call to grep:

or19_ora_30861_delete.trc:  90  JE - Join Elimination
or19_ora_30861_delete.trc: 837  JE:[V2]   Considering Join Elimination on query block SEL$1 (#0)
or19_ora_30861_delete.trc: 843  JE:[V2] Query block (0x7fd3a152f758) before join elimination:
or19_ora_30861_delete.trc: 846  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_delete.trc: 847  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_delete.trc: 848  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_delete.trc: 849  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - predicate column refs.
or19_ora_30861_delete.trc: 850  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using NTSID
or19_ora_30861_delete.trc: 851  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - no constr. based join pred(s)
or19_ora_30861_delete.trc: 852  JE:[V2] Query block (0x7fd3a152f758) after join elimination:

That’s the lot – there is no attempt to do join elimination on a query block called sel$58a6d7f6, though a few lines after this extract the trace file reports:

CVM:   Merging SPJ view SEL$1 (#0) into SEL$2 (#0)
Registered qb: SEL$58A6D7F6 0x9c0a3038 (VIEW MERGE SEL$2; SEL$1; SEL$2)

That’s exactly the same view merging that was present in the select – but the optimizer doesn’t attempt join elimination before moving on to report, a couple of lines later:

CVM:   Merging SPJ view SEL$58A6D7F6 (#0) into DEL$1 (#0)
Registered qb: SEL$4396EC5C 0x9c09dde0 (VIEW MERGE DEL$1; SEL$58A6D7F6; DEL$1)

It looks like this is a case of the optimizer missing a cycle in its transform/optimize loopback. The delete went through exactly the same process as the select, but failed to optimize the intermediate query block (which happened to be called from$_subquery$_004 in this case, being one layer deeper thanks to the extra level introduced by the delete).

Workaround

I tried two or three things to get the ANSI-style code to work without changing the table order in the from clause – extra layers of subqueries, explicit aliases – but Oracle kept coming back to a point where it had the full projection of all the columns in the two tables. (This reminded me of an old bug in ANSI select expansion that was fixed in 12cR2. It made me wonder if this was a code path where the same bug had been overlooked – it also made me wonder if there would be any problems if the two tables in the join totalled more than 1,000 columns – but that’s a test for another day.)

I did eventually find a fix (one that didn’t involve switching the table order in the inline view):

delete
        (select t_child.id from t_parent join t_child on t_child.id = t_parent.id);

Note that I now have only the primary key of t_child in the select list.

The section in the CBO trace file for Join Elimination looked like this:

JE:[V2] Query block (0x7f93996231f8) before join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T_PARENT"."ID" "QCSJ_C000000000400000","T_CHILD"."ID" "QCSJ_C000000000400001" FROM "TEST_USER"."T_PARENT" "T_PARENT","TEST_USER"."T_CHILD" "T_CHILD" WHERE "T_CHILD"."ID"="T_PARENT"."ID"
JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
JE:[V2]: Can eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
JE:[V2] Eliminate table: T_PARENT (T_PARENT)
JE:[V2] Replaced column: T_PARENT.ID with column: T_CHILD.ID
JE:[V2] Query block (0x7f93996231f8) after join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T_CHILD"."ID" "QCSJ_C000000000400000","T_CHILD"."ID" "QCSJ_C000000000400001" FROM "TEST_USER"."T_CHILD" "T_CHILD"
Registered qb: SEL$E703A888 0x996231f8 (JOIN REMOVED FROM QUERY BLOCK SEL$1; SEL$1; "T_PARENT"@"SEL$1")

You may decide that this is an obvious workaround, but it’s not particularly helpful. In the general case I would expect to see a stored view that joined the two tables, with the expectation that this type of delete was just one of many operations the view was used for. Rewriting a view to use the increasingly common ANSI syntax shouldn’t result in some statements crashing “for no apparent reason”.
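To put that comment in context, the sort of stored view I have in mind would look something like the following (illustrative only – whether the delete survives depends on exactly the same join elimination behaviour described above):

create or replace view child_join_v as
select
        t_child.id, t_child.val
from
        t_parent join t_child on t_child.id = t_parent.id
;

delete from child_join_v where id = 1;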

The last laugh

Given that the select and delete both included a merge of sel$1 into sel$2 to produce a query block called sel$58a6d7f6 I wondered what would happen if I blocked the merge:

SQL>  delete  /*+  no_merge (@sel$1) */
  2          (select t_child.id from t_parent join t_child on t_child.id = t_parent.id);
	(select t_child.id from t_parent join t_child on t_child.id = t_parent.id)
        *
ERROR at line 2:
ORA-01752: cannot delete from view without exactly one key-preserved table

Isn’t that the error message that should have been coming out the entire time – according to the original documentation for updateable join views?

Quiz Night

Fri, 2022-05-20 12:09

It’s a long time since I’ve done a quiz night – but here’s something that fooled me (briefly) when it appeared on the Oracle Developers’ Forum. Here’s a table definition – and I’m not going to make it easy by giving you a “create table” statement, but it’s just a simple heap table:

sql> desc interr_skuplannparam
 name                             null?    type
 -------------------------------- -------- -------------------
 atpdur                                    number(38)
 depdmdopt                                 number(38)
 externalskusw                             number(1)
 firstreplendate                           date
 lastfrzstart                              date
 lastplanstart                             date
 plandur                                   number(38)
 planleadtime                              number(38)
 planleadtimerule                          number(38)
 planshipfrzdur                            number(38)
 restrictdur                               number(38)
 allocbatchsw                              number(1)
 cmpfirmdur                                number(38)
 custservicelevel                          float(126)
 maxchangefactor                           float(126)
 mfgleadtime                               number(38)
 recschedrcptsdur                          number(38)
 cpppriority                               number(38)
 cpplocksw                                 number(1)
 criticalmaterialsw                        number(1)
 aggexcesssupplyrule                       number(38)
 aggundersupplyrule                        number(38)
 bufferleadtime                            number(38)
 maxoh                                     float(126)
 maxcovdur                                 number(38)
 drpcovdur                                 number(38)
 drpfrzdur                                 number(38)
 drprule                                   number(38)
 drptimefencedate                          date
 drptimefencedur                           number(38)
 incdrpqty                                 float(126)
 mindrpqty                                 float(126)
 mpscovdur                                 number(38)
 mfgfrzdur                                 number(38)
 mpsrule                                   number(38)
 mpstimefencedate                          date
 mpstimefencedur                           number(38)
 incmpsqty                                 float(126)
 minmpsqty                                 float(126)
 shrinkagefactor                           number(38)
 item                                      varchar2(50 char)
 loc                                       varchar2(50 char)
 expdate                                   date
 atprule                                   number(38)
 prodcal                                   varchar2(50 char)
 prodstartdate                             date
 prodstopdate                              date
 orderingcost                              float(126)
 holdingcost                               float(126)
 eoq                                       float(126)
 ff_trigger_control                        number(38)
 workingcal                                varchar2(50 char)
 lookaheaddur                              number
 orderpointrule                            number
 orderskudetailsw                          number(1)
 supsdmindmdcovdur                         number(38)
 orderpointminrule                         number(38)
 orderpointminqty                          float(126)
 orderpointmindur                          number(38)
 orderuptolevelmaxrule                     number(38)
 orderuptolevelmaxqty                      float(126)
 orderuptolevelmaxdur                      number(38)
 aggskurule                                number(38)
 fwdbuymaxdur                              number(38)
 costuom                                   number(38)
 cumleadtimedur                            number(38)
 cumleadtimeadjdur                         number(38)
 cumleadtimerule                           number(38)
 roundingfactor                            float(126)
 limitplanarrivpublishsw                   number(1)
 limitplanarrivpublishdur                  number
 maxohrule                                 number(1)
 integration_stamp                         date
 integration_jobid                not null varchar2(32 char)
 error_str                                 varchar2(2000 char)
 error_stamp                               date

The column integration_jobid (the single “not null” column) has been defined with the default value of “INT_JOB”, which takes 7 bytes to store. What’s the result of the query at the end of this little script:

truncate table interr_skuplannparam;

insert into interr_skuplannparam (atpdur) 
select   0 
from     all_objects 
where    rownum <= 10000
/

commit;

execute dbms_stats.gather_table_stats(user,'interr_skuplannparam')

select avg_row_len from user_tables;

Hint: the value zero is represented internally as a single byte holding the value 0x80 (decimal 128).
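If you want to see where the bytes come from, a couple of illustrative queries may help after you’ve run the script – dump() shows the internal representation of a stored value, and avg_col_len reports the average column length (which includes the length byte) that feeds into avg_row_len:

select
        dump(atpdur)                    atpdur_dump,
        vsize(integration_jobid)        jobid_bytes
from
        interr_skuplannparam
where
        rownum = 1
/

select
        column_name, avg_col_len
from
        user_tab_cols
where
        table_name = 'INTERR_SKUPLANNPARAM'
and     avg_col_len > 0
order by
        column_name
/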

Lag/Lead slow

Thu, 2022-05-05 04:05

This note is about a surprising performance difference between the lead() and lag() analytic functions (which turns out to be due to the behaviour of the nth_value() function) when the option to “ignore nulls” is included in their use (jump to conclusion). The detail I’ll be writing about was highlighted in a thread on the Oracle developer forum about a requirement to add a number of analytic columns to a dataset of 156 million rows using a statement of the following shape:

create table tb_target_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

You’ll notice that I’ve introduced a row_number(), and both a lead() and a lag() of a column called ed_ucc_dt. All three analytic columns use the same partitioning and ordering, though, so Oracle will only be doing one “window sort” in the execution plan. Part of the performance problem, of course, was that with 156M rows of a couple of dozen existing columns and adding a dozen new columns, the workload due to sorting was bound to be very large, so there were various suggestions of how to minimise that part of the workload.

However Solomon Yakobson pointed out that the code was using the “ignore nulls” option and there was a bug in 11g that made lead() and lag() very slow when this option was used. He subsequently reported that this defect was still present in 19c, restricted it to just the lag() function, and cited a MOS document ID referencing the problem: LAG Function is slow when using ignore nulls (Doc ID 2811596.1). The implication of the MOS note is that we shouldn’t expect this to change.

A follow-up posting by User_H3J7U gave us a reason for the slowness of the lag() function by running a sample query through dbms_utility.expand_sql(). Oracle rewrites the query to use variants of the nth_value() function when you use “ignore nulls”, but rewrites it to use variants of first_value() when you aren’t using the “ignore nulls” option. This isn’t a complete explanation of why lag() should be slow while lead() is not – but it’s a significant pointer towards a possible implementation issue and is a good clue about working around the problem. So let’s build a model of the situation.

The basic model
rem
rem     Script:         ignore_nulls.sql
rem     Author:         Jonathan Lewis / Sam P
rem     Dated:          May 2022
rem     Purpose:     
rem
rem     Last tested
rem             19.11.0.0
rem

create table tb_source (
        pat_id,
        visit_date_dt,
        ed_ucc_dt
)
as
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 1e4    --> comment to avoid wordpress format issue
)
select
        g1.id,
        to_date('01-Apr-2022') + dbms_random.value(0,100),
        to_date('01-Apr-2022') + dbms_random.value(5,105)
--      to_date(null)
from
        generator g1,
        generator g2
where
        g2.id <= 20     --> comment to avoid wordpress format issue
order by
        dbms_random.value
/


spool ignore_nulls.lst

set serveroutput off
set timing on

prompt  ======================
prompt  Without "ignore nulls"
prompt  (My time 0.61 seconds)
prompt  ======================

create table tb_target_no_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

prompt  ======================
prompt  With "ignore nulls"
prompt  (My time 0.88 seconds)
prompt  ======================

create table tb_target_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

I’ve created a source table with 200,000 rows, consisting of 10,000 pat_id values, and 20 rows per pat_id. The 20 rows for a pat_id (probably) each have a different visit_date_dt and a different ed_ucc_dt.

After creating the data set I’ve created two more tables using the lead() and lag() functions to generate a previous (lag) and next (lead) ed_ucc_dt for each row, partitioning by pat_id, ordering by visit_date_dt. One statement includes the “ignore nulls” option the other doesn’t and, as you can see, the time to create the “no ignore” table was 0.61 seconds while the time to create the “ignore null” table was 0.88 seconds.

The variation isn’t dramatic – but this is just 200,000 rows, in memory, with only a few columns and only two columns added through lead and lag.

After the baseline test I tweaked the statement that created the table with the “ignore nulls” option to get three more times.

  • With neither lead() nor lag() the time was 0.29 seconds
  • With just the lead() column the time was 0.46 seconds – an increase of 0.17 seconds
  • With just the lag() column the time was 0.71 seconds – an increase of 0.42 seconds

You might note that 0.29 + 0.17 + 0.42 = 0.88 (the time I got for adding both columns) – it’s a little lucky that it looks like a perfect match, but even matching within a couple of hundredths of a second would have been a nice detail. It certainly seems that lag() – with my test data – consumes more resources than lead() for a pair of operations that look as if they should produce the same workloads.

Internal Rewrite

The next step was to check what the internal rewrite of the code looked like, so I passed the select part of the statements (the procedure won’t accept “create as select”) through dbms_utility.expand_sql() and reformatted the results. Here are the two rewritten statements – first without “ignore nulls”:

select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        decode(
                count(*) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 preceding and 1 preceding
                        ),
                 1,     first_value(a1.ed_ucc_dt) over (
                                partition by a1.pat_id order by a1.visit_date_dt
                                rows between 1 preceding and 1 preceding
                        ),
                        null
        ) prev_ed_ucc_dt,
        decode(
                count(*) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and 1 following
                        ),
                 1,     first_value(a1.ed_ucc_dt) over (
                                partition by a1.pat_id order by a1.visit_date_dt
                                rows between 1 following and 1 following
                        ),
                        null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1


The code looks a little long and messy, but that’s mainly because just about everything it does happens twice. The lag() function (prev column) turns into a first_value() function that looks at the row preceding the current row in the partition (rows between 1 preceding and 1 preceding). However it first has to count over the same clause to see if a row exists, and then either report its value or report a null – hence the structure decode(count(), 1, first_value(), null).

Note: the full lag() function call is: “lag(expression, offset, default)” where the offset (number of rows to lag) defaults to 1 and the default is the value you want reported when there is no matching row, and defaults to null.
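For example, here’s an illustrative variant of the original call that supplies all three parameters explicitly (the date literal is just an arbitrary placeholder for “no previous row”):

select
        pat_id,
        visit_date_dt,
        lag(ed_ucc_dt, 1, date'1900-01-01') over (
                partition by pat_id order by visit_date_dt
        )       prev_ed_ucc_dt
from
        tb_source
/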

The call to lead() basically does the same thing, but uses (rows between 1 following and 1 following) to access the next row in the partition.

On the other hand this is the SQL that Oracle generates when we include the “ignore nulls” clause (which means Oracle can’t restrict the row range to just one preceding or following row):

select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        nvl(
                nth_value(a1.ed_ucc_dt, 1) from last ignore nulls over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between unbounded preceding and 1 preceding
                ),
                 null
        ) prev_ed_ucc_dt,
        nvl(
                nth_value(a1.ed_ucc_dt, 1)           ignore nulls over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and unbounded following
                ),
                null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1

Both lag() and lead() turn into nth_value() with a second parameter of 1 (i.e. nth == 1st … which makes you wonder why Oracle isn’t using first_value()), and we can also see the “ignore nulls” still being used.

The lag() call now uses the range (rows between unbounded preceding and 1 preceding) i.e. everything from the start of partition to the previous row, while the lead() call uses the range (rows between 1 following and unbounded following) i.e. from the next row to the end of partition.

The other important detail to note is that the translation of the lag() call also includes the clause “from last” – in other words we want the first row when reading the partition in reverse order, and that might have something to do with the extra time it takes to operate the (translated) lag() function.

Workaround

Oracle is using a generic nth_value() to translate a generic lead()/lag(), but we’re in the special case where we know n = 1, which means we (and Oracle) could use first_value()/last_value(). It’s perfectly reasonable for Oracle’s internal code to avoid special cases if it makes no difference to performance, of course, but maybe in this case we could imitate Oracle’s rewrite to get some benefit.

  • Step 1 – change nth_value() to the appropriate first/last.
  • Step 2 – get rid of the “from last” which won’t be needed with last_value()
  • Step 3 – move the “ignore nulls” to the spot that Oracle wants to see it with first/last

Here’s the resulting SQL – I’ve left the nvl(expression, null) wrapper in place, but if you wanted a null as the default return value for the original lead()/lag() calls you could simplify the code a little further.

create table tb_target 
as
select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        nvl(
                last_value(a1.ed_ucc_dt ignore nulls) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between unbounded preceding and 1 preceding
                ),
                 null
        ) prev_ed_ucc_dt,
        nvl(
                first_value(a1.ed_ucc_dt ignore nulls) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and unbounded following
                ),
                null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1
/

  • Run-time: 0.61 seconds.
  • Run-time with just last_value() / preceding: 0.47 seconds
  • Run time with just first_value() / following: 0.43 seconds

There still seems to be a little discrepancy between accessing the preceding data and accessing the following data, but there’s a much better balance than before.

One more edit – taking out the nvl() construct because the original lead()/lag() calls didn’t have a non-null default supplied:

create table tb_target 
as
select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
--
        last_value(a1.ed_ucc_dt ignore nulls) over (
                partition by a1.pat_id order by a1.visit_date_dt
                rows between unbounded preceding and 1 preceding
        ) prev_ed_ucc_dt,
--
        first_value(a1.ed_ucc_dt ignore nulls) over (
                partition by a1.pat_id order by a1.visit_date_dt
                rows between 1 following and unbounded following
        ) next_ed_ucc_dt,
--
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1
/

With this simplification the time dropped by a further couple of hundredths of a second hovering between 0.57 and 0.58 seconds.

There was one last detail about the test code that should be mentioned – do the changes in code still produce the same results? As a quick and dirty check I ran the following query after each variant of creating the tb_target table (tb_target_ignore is the table created using the original lead()/lag() code with “ignore nulls”):

select  *
from   (
        select * from tb_target minus select * from tb_target_ignore
        union all
        select * from tb_target_ignore minus select * from tb_target
        )
/

The result was always zero rows.

Hypothesis

I raised the idea that the difference in timing for the lead() and lag() functions might have something to do with the volume of data that Oracle could be processing to find the one row it needed.

My thinking was that the lead() rewrite – the call to nth_value(ed_ucc_dt,1) – would simply look at the next row in the partition (if it existed) because my data has no nulls that need to be ignored, while the rewrite of the lag() function with its “from last” requirement could be making Oracle re-read the entire preceding section of the partition before starting to process it backwards.

As a possible check to see if this was a viable hypothesis I ran one more test – visible in the initial declaration of tb_source – I created the data with ed_ucc_dt set to null in every single row, so that Oracle would be forced to process from the current position to whichever end of the partition was relevant regardless of whether it was calling lead() or lag().

With this change in place the timings for the lead()-only and lag()-only statements were nearly identical – which is weak support for the hypothesis.

And once I’d done that test the next obvious test was to increase the size of each partition (using non-null values for ed_ucc_dt) to see if larger partitions would increase the difference between the forward and backward tests. To do this I changed the script that created the tb_source table to produce 5,000 pat_id values with 40 rows per pat_id by changing the where clause to:

where
        g2.id <= 40     --> comment to avoid wordpress format issue
and     g1.id <= 5e3    --> comment to avoid wordpress format issue

With this change in place the timings for the original form of the lead()/lag() statement were:

  • With both lead() and lag() in place the time was 1.05 seconds
  • With neither lead() nor lag() the time was 0.25 seconds
  • With just the lead() column the time was 0.41 seconds – an increase of 0.16 seconds
  • With just the lag() column the time was 0.98 seconds – an increase of 0.73 seconds

So the lag() time (ballpark figures) nearly doubles as the partition size doubles but the lead() time stays pretty much the same.

The results of these two tests do tend to suggest that the generic nth_value() implementation can do some short-circuiting when working “forwards”, using a mechanism that isn’t available when the “from last” clause requires it to work “backwards”.

Writing the previous paragraph prompted me to do one last test. It wouldn’t produce the same results, of course, but I ought to check the performance when I moved the “from last” clause out of the “prev”/lag() column expression into the “next”/lead() column expression in Oracle’s original translation, to confirm that the problem was associated with the “from last” clause and not with the choice of “preceding” or “following” in the row range section of the over() clause. (It was the “from last” that made the difference.)

tl;dr

If you’re using the lag() or lead() functions with “ignore nulls” on a very large dataset you may find that you can rewrite the code with first_value() or last_value() calls that use less CPU. It’s probably only significant on fairly large data sets, and may be particularly noticeable for cases where the same over() clause is used many times.

The potential for excess CPU usage comes from the effect of a generic internal rewrite using the nth_value() function with the “from last” clause when your lead()/lag() use the special case of n = 1.

To get the correct rewrite you can use dbms_utility.expand_sql() to generate a suitable statement from which you can extract and edit the relevant pieces of text.
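As a sketch of the call – in recent versions of Oracle the procedure is actually dbms_utility.expand_sql_text(), which takes an input and an output CLOB:

set serveroutput on

declare
        m_sql_out       clob;
begin
        dbms_utility.expand_sql_text(
                input_sql_text  => q'{
                        select
                                lag(ed_ucc_dt ignore nulls, 1) over (
                                        partition by pat_id order by visit_date_dt
                                ) prev_ed_ucc_dt
                        from    tb_source
                }',
                output_sql_text => m_sql_out
        );
        dbms_output.put_line(m_sql_out);
end;
/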

redefinition error

Wed, 2022-05-04 06:22

Here’s a note about a data error generated by using (possibly mis-using) the dbms_redefinition package. The original code to demonstrate the problem comes from a note on the Oracle Developer forum, and was brought to my attention by a tweet from Daniel Stein.

The critical feature of the demo is that we can end up with a column containing nulls despite being declared NOT NULL (and I don’t mean by that a simple “is not null” check constraint – which is not quite the same as a NOT NULL declaration).

Here’s the first part of a script, mostly copied from the forum post, that I’ve been running on 19.11.0.0:

rem
rem     Script:         redef_bug.sql
rem     Author:         Jonathan Lewis / Sebastian (User_6AT2M)
rem     Dated:          May 2022
rem
rem     Last tested 
rem             19.11.0.0
rem

create table test_nulls (
        id      number (8)      constraint tn_pk primary key,
        text    varchar2 (25)   constraint tn_nn_tx not null
)
/

create unique index i_test_nulls on test_nulls(text);

insert into test_nulls (id, text) 
select rownum, 'some text '||rownum from dual connect by level <= 50;

-- create an empty copy, without the constraints or indexes

create table test_nulls_interim (
        id      number (8),
        text    varchar2 (25)
);

begin
        dbms_redefinition.start_redef_table(
                uname           => user,
                orig_table      => 'test_nulls',
                int_table       => 'test_nulls_interim',
                col_mapping     => 'id id, substr(text, id, 2) text'
        );
end;
/

The script creates a table with a primary key declared on an id column, a not null declaration on a text column and a unique index on the text column, then populates the table with 50 rows that have the form (N, “some text N”) where N is a number between 1 and 50.

After creating an empty copy of the table with no constraints or indexes I start an online redefinition – modifying the content of the text column as part of the redefinition. If you check the col_mapping carefully you will realise that when id reaches 13 the result from the substr() function becomes null.
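You can check the effect of the column mapping with a stand-alone query that mimics the expression (nothing to do with the redefinition machinery itself):

select
        rownum                                          id,
        substr('some text '||rownum, rownum, 2)         text
from
        dual
connect by
        level <= 15
/

Rows 1 to 12 return a short fragment of the text; from row 13 onwards the start position is beyond the end of the string and the result is null.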

This is where the trouble starts. If I now call dbms_redefinition.copy_table_dependents() to add the original constraints and indexes to the interim table, what’s Oracle going to do about the not null declaration on the text column?

declare
        error_ct pls_integer;
begin
        dbms_redefinition.copy_table_dependents(
                uname           => user,
                orig_table      => 'test_nulls',
                int_table       => 'test_nulls_interim',
                num_errors      => error_ct,
                copy_indexes    => 1,
                copy_constraints=> true,
                ignore_errors   => false,
--
                copy_triggers   => false,
                copy_privileges => false,
                copy_statistics => false,
                copy_mvlog      => false
        );  

        dbms_output.put_line('error count: '||error_ct);
end;
/

begin
        dbms_redefinition.finish_redef_table(user, 'test_nulls', 'test_nulls_interim');
end;
/

drop table TEST_NULLS_INTERIM purge;

I’ve exposed all the parameters to the copy_table_dependents() procedure call in my code, and you can see that I’ve chosen to copy only the constraints and indexes, and I don’t want to ignore errors.

The PL/SQL anonymous block terminates successfully, doesn’t report any errors, and outputs an error count of zero. So let’s see what we’ve got as the final result of the redefinition.

column search_condition_vc format a20
break on table_name skip 1 on index_name
set echo on

select
         table_name, index_name, column_name 
from 
        user_ind_columns 
order by 
        1,2,column_position
/

desc test_nulls

select count(*) from test_nulls;
select /*+ full(test_nulls) */ count(*) from test_nulls;

select
        constraint_name,
        constraint_type,
        search_condition_vc,
        status,
        deferrable,
        deferred,
        validated
from
        user_constraints
where
        table_name = 'TEST_NULLS'
/

set echo off

Here are the results – my login.sql has a lot of column format commands so your results may look a lot messier if you run this bit of code. Here’s the echoed output:

SQL> select
  2           table_name, index_name, column_name
  3  from
  4          user_ind_columns
  5  order by
  6          1,2,column_position
  7  /

TABLE_NAME                INDEX_NAME           COLUMN_NAME
------------------------- -------------------- --------------------
TEST_NULLS                I_TEST_NULLS         TEXT
                          TN_PK                ID


2 rows selected.

SQL> 
SQL> desc test_nulls
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -------------------------------------------------
 ID                                                                       NOT NULL NUMBER(8)
 TEXT                                                                     NOT NULL VARCHAR2(25)

SQL> 
SQL> select count(*) from test_nulls;

  COUNT(*)
----------
        12

1 row selected.

SQL> select /*+ full(test_nulls) */ count(*) from test_nulls;

  COUNT(*)
----------
        50

1 row selected.

SQL> 
SQL> select
  2          constraint_name,
  3          constraint_type,
  4          search_condition_vc,
  5          status,
  6          deferrable,
  7          deferred,
  8          validated
  9  from
 10          user_constraints
 11  where
 12          table_name = 'TEST_NULLS'
 13  /

CONSTRAINT_NAME      C SEARCH_CONDITION_VC  STATUS   DEFERRABLE     DEFERRED  VALIDATED
-------------------- - -------------------- -------- -------------- --------- -------------
TN_NN_TX             C "TEXT" IS NOT NULL   ENABLED  NOT DEFERRABLE IMMEDIATE VALIDATED
TN_PK                P                      ENABLED  NOT DEFERRABLE IMMEDIATE VALIDATED

Both indexes have arrived; both columns have NOT NULL declarations.

When you count the number of rows in the table it’s only 12 – unless you force a full tablescan in which case it’s 50. This happens because the NOT NULL declaration of column text allows the optimizer to use an index-only plan on the index i_test_nulls and there are 38 nulls in the table that don’t appear in the index.
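If you want to see the two plans behind those counts, a quick illustrative check with autotrace should show an index-only plan (on i_test_nulls) for the first query and a tablescan for the second:

set autotrace traceonly explain

select count(*) from test_nulls;
select /*+ full(test_nulls) */ count(*) from test_nulls;

set autotrace off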

The check on user_constraints shows that both the primary key constraint and the “is not null” check constraint are enabled and validated.

Conclusion

There is code in the copy_table_dependents() procedure that is defective, deficient and dangerous. Clearly I’ve done something that I probably shouldn’t have done (and, historically, I wouldn’t have done) but the code should still have protected me from an error that leaves the database in a state that is not internally consistent and can produce incorrect results.

Notes

If you want to repeat this test and try some variations on the theme you will need a few non-trivial privileges (which can be granted through a role). For the basic dbms_redefinition package you will need: execute on dbms_redefinition, select any table, create any table, alter any table, lock any table, drop any table; and to execute the copy_table_dependents() procedure you will also need create any index, create any trigger.
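As an illustration (the role name is invented) the grants might be packaged like this – remember that the grant on dbms_redefinition has to come from a suitably privileged account:

create role redef_admin;

grant select any table   to redef_admin;
grant create any table   to redef_admin;
grant alter any table    to redef_admin;
grant lock any table     to redef_admin;
grant drop any table     to redef_admin;
grant create any index   to redef_admin;
grant create any trigger to redef_admin;

grant execute on dbms_redefinition to redef_admin;

grant redef_admin to test_user;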

The copy_table_dependents() procedure appeared (I think) in the 10g timeline; prior to that you had to sort out all the constraints and dependencies “by hand” – which you would probably manage online through the sync_interim_table() procedure (viz: create an index, sync interim table, create next index, sync, etc.) before calling the finish_redef_table() procedure. That being the case my immediate response to this issue was that if you don’t want the not null declaration on text then you have to exclude the copy_constraints option when copying the table dependents; if you did want the not null declaration then you should have included it in the initial definition of the interim table, because the start_redef_table() call would then have failed, raising:

ORA-12008: error in materialized view or zonemap refresh path
ORA-01400: cannot insert NULL into ("{schema}"."TEST_NULLS_INTERIM"."TEXT")

Adaptive Joins

Wed, 2022-04-13 07:53

There’s a question on the Oracle Forums at the moment about a query that’s taking a long time to parse. Even after being reduced to one prebuilt (currently remote) table with two non-correlated outer joins to it, the parse time is several hundred seconds. This seems fairly bizarre – I have seen some very long parse times from Oracle, but 10 minutes for 3 tables is well over the top; it did remind me, though, of a note I started a few years ago about a 4 table join taking 4 seconds to parse, so I thought I’d present the query, the plan, and a little chat on debugging. Here’s the query:

select
        /*+ parallel(t4,3) */
        t1.small_vc,
        t2.small_vc,
        t3.small_vc,
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
and     t1.small_vc in (1,2,3)
and     t2.small_vc in (1,2,3,4)
and     t3.small_vc in (1,2,3,4,5)
group by
        t1.small_vc,
        t2.small_vc,
        t3.small_vc
;

I’m expecting a simple cascade of hash joins, with t1, t2 and t3 – the “small” tables – turning into “build” tables, then t4 – the “large” table – passing through each of them in turn until the penultimate rowsource is aggregated.

Here’s the execution plan – which looks pretty much as I expected it to – but there’s something wrong about it that isn’t visible in the output. Why is the query (plan) saying it took 0.07 seconds to complete (A-time), returning only 60 rows, when my SQL*Plus session didn’t return any data for 4 seconds?

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| E-Time   |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |   300 (100)|          |        |      |            |     60 |00:00:00.07 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR                |          |      1 |        |            |          |        |      |            |     60 |00:00:00.07 |       5 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)          | :TQ10004 |      0 |      1 |   300   (4)| 00:00:01 |  Q1,04 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    HASH GROUP BY               |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,04 | PCWP |            |     60 |00:00:00.01 |       0 |      0 |  1394K|  1394K|     3/0/0|
|   4 |     PX RECEIVE                 |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,04 | PCWP |            |    180 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND HASH              | :TQ10003 |      0 |      1 |   300   (4)| 00:00:01 |  Q1,03 | P->P | HASH       |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY            |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,03 | PCWP |            |    180 |00:00:00.14 |    6114 |   6018 |  1394K|  1394K|     3/0/0|
|*  7 |        HASH JOIN               |          |      3 |   8460 |   299   (4)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.14 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|   8 |         JOIN FILTER CREATE     | :BF0000  |      3 |     22 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     75 |00:00:00.02 |       0 |      0 |       |       |          |
|   9 |          PX RECEIVE            |          |      3 |     22 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     75 |00:00:00.02 |       0 |      0 |       |       |          |
|  10 |           PX SEND BROADCAST    | :TQ10000 |      0 |     22 |     2   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  11 |            PX SELECTOR         |          |      3 |        |            |          |  Q1,00 | SCWC |            |     25 |00:00:00.01 |       3 |      0 |       |       |          |
|* 12 |             TABLE ACCESS FULL  | T3       |      1 |     22 |     2   (0)| 00:00:01 |  Q1,00 | SCWP |            |     25 |00:00:00.01 |       3 |      0 |       |       |          |
|* 13 |         HASH JOIN              |          |      3 |  27322 |   297   (4)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.10 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|  14 |          JOIN FILTER CREATE    | :BF0001  |      3 |     21 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     72 |00:00:00.01 |       0 |      0 |       |       |          |
|  15 |           PX RECEIVE           |          |      3 |     21 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     72 |00:00:00.01 |       0 |      0 |       |       |          |
|  16 |            PX SEND BROADCAST   | :TQ10001 |      0 |     21 |     2   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  17 |             PX SELECTOR        |          |      3 |        |            |          |  Q1,01 | SCWC |            |     24 |00:00:00.01 |       3 |      0 |       |       |          |
|* 18 |              TABLE ACCESS FULL | T2       |      1 |     21 |     2   (0)| 00:00:01 |  Q1,01 | SCWP |            |     24 |00:00:00.01 |       3 |      0 |       |       |          |
|* 19 |          HASH JOIN             |          |      3 |  92953 |   294   (3)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.10 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|  20 |           JOIN FILTER CREATE   | :BF0002  |      3 |     19 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     63 |00:00:00.01 |       0 |      0 |       |       |          |
|  21 |            PX RECEIVE          |          |      3 |     19 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     63 |00:00:00.01 |       0 |      0 |       |       |          |
|  22 |             PX SEND BROADCAST  | :TQ10002 |      0 |     19 |     2   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  23 |              PX SELECTOR       |          |      3 |        |            |          |  Q1,02 | SCWC |            |     21 |00:00:00.01 |       3 |      0 |       |       |          |
|* 24 |               TABLE ACCESS FULL| T1       |      1 |     19 |     2   (0)| 00:00:01 |  Q1,02 | SCWP |            |     21 |00:00:00.01 |       3 |      0 |       |       |          |
|  25 |           JOIN FILTER USE      | :BF0000  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  26 |            JOIN FILTER USE     | :BF0001  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  27 |             JOIN FILTER USE    | :BF0002  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  28 |              PX BLOCK ITERATOR |          |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWC |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|* 29 |               TABLE ACCESS FULL| T4       |     48 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.06 |    6114 |   6018 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
  12 - SEL$1 / T3@SEL$1
  18 - SEL$1 / T2@SEL$1
  24 - SEL$1 / T1@SEL$1
  29 - SEL$1 / T4@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      OPT_PARAM('_fix_control' '16923858:5')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T4"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      FULL(@"SEL$1" "T3"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T4"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$1")
      USE_HASH(@"SEL$1" "T4"@"SEL$1")
      USE_HASH(@"SEL$1" "T2"@"SEL$1")
      USE_HASH(@"SEL$1" "T3"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T4"@"SEL$1" BROADCAST NONE)
      PX_JOIN_FILTER(@"SEL$1" "T4"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T2"@"SEL$1" NONE BROADCAST)
      PX_JOIN_FILTER(@"SEL$1" "T2"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T3"@"SEL$1" NONE BROADCAST)
      PX_JOIN_FILTER(@"SEL$1" "T3"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$1" "T2"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$1" "T3"@"SEL$1")
      GBY_PUSHDOWN(@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("T3"."ID"="T4"."ID3")
  12 - filter((TO_NUMBER("T3"."SMALL_VC")=1 OR TO_NUMBER("T3"."SMALL_VC")=2 OR TO_NUMBER("T3"."SMALL_VC")=3 OR TO_NUMBER("T3"."SMALL_VC")=4 OR TO_NUMBER("T3"."SMALL_VC")=5))
  13 - access("T2"."ID"="T4"."ID2")
  18 - filter((TO_NUMBER("T2"."SMALL_VC")=1 OR TO_NUMBER("T2"."SMALL_VC")=2 OR TO_NUMBER("T2"."SMALL_VC")=3 OR TO_NUMBER("T2"."SMALL_VC")=4))
  19 - access("T1"."ID"="T4"."ID1")
  24 - filter((TO_NUMBER("T1"."SMALL_VC")=1 OR TO_NUMBER("T1"."SMALL_VC")=2 OR TO_NUMBER("T1"."SMALL_VC")=3))
  29 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0002,"T4"."ID1"),SYS_OP_BLOOM_FILTER(:BF0001,"T4"."ID2"),SYS_OP_BLOOM_FILTER(:BF0000,"T4"."ID3")))


One possible trouble-shooting step is simply to re-run the query, taking a snapshot of the session activity stats (v$mystat) and the session events (v$session_event) to see what they tell you (if anything) – here’s the critical extract from the session stats:

Name                                      Value
----                                      -----
CPU used when call started                  423
CPU used by this session                    429
parse time cpu                              411
parse time elapsed                          419

Most of the time is parse time, spent on the CPU. (If this had been a much larger scale problem and had occurred in the recent past I might have looked at ASH (v$active_session_history) for any samples for the correct SQL_ID, and seen the problem revealed in the in_parse and in_hard_parse columns.)
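If you don't have a snapshot package to hand, the raw figures are easy enough to pull directly – a minimal sketch, run just before and just after the query so you can subtract the values (the session needs select privileges on the two views):

select  sn.name, ms.value
from    v$statname      sn,
        v$mystat        ms
where   ms.statistic# = sn.statistic#
and     sn.name in (
                'CPU used when call started',
                'CPU used by this session',
                'parse time cpu',
                'parse time elapsed'
        )
;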

So let’s enable event 10053 and run the query again – but since it’s “only” 4 seconds, let’s tweak the timer option to report any step that took longer than 0.1 seconds. The default timer setting is a whole second (10^6 microseconds), so we set the fix-control to 5 to get 0.1 seconds (10^5 microseconds).

alter session set "_fix_control"='16923858:5';
alter session set events '10053 trace name context forever';

-- run the query, find the trace file

 grep TIMER or19_ora_23370.trc 

Here’s the output from the call to grep: it looks like group by placement (GBP) is causing a problem.

TIMER:  GBP: costing SEL$1 cpu: 0.303 sec elapsed: 0.309 sec
TIMER: GBP: iteration (#1) SEL$1 cpu: 0.303 sec elapsed: 0.309 sec
TIMER:  GBP: costing SEL$1565E019 cpu: 0.293 sec elapsed: 0.298 sec
TIMER: GBP: iteration (#2) SEL$1565E019 cpu: 0.294 sec elapsed: 0.299 sec
TIMER:  GBP: costing SEL$23EAFE84 cpu: 0.528 sec elapsed: 0.533 sec
TIMER: GBP: iteration (#3) SEL$23EAFE84 cpu: 0.528 sec elapsed: 0.533 sec
TIMER:  GBP: costing SEL$B5D97CA0 cpu: 0.533 sec elapsed: 0.540 sec
TIMER: GBP: iteration (#4) SEL$B5D97CA0 cpu: 0.534 sec elapsed: 0.540 sec
TIMER:  GBP: costing SEL$6C9B46B6 cpu: 0.531 sec elapsed: 0.531 sec
TIMER: GBP: iteration (#5) SEL$6C9B46B6 cpu: 0.531 sec elapsed: 0.532 sec
TIMER:  GBP: costing SEL$ED1298E3 cpu: 0.522 sec elapsed: 0.523 sec
TIMER: GBP: iteration (#8) SEL$ED1298E3 cpu: 0.523 sec elapsed: 0.524 sec
TIMER:  GBP: costing SEL$5968095A cpu: 0.523 sec elapsed: 0.523 sec
TIMER: GBP: iteration (#9) SEL$5968095A cpu: 0.524 sec elapsed: 0.523 sec
TIMER:  GBP: costing SEL$4CA81688 cpu: 0.525 sec elapsed: 0.527 sec
TIMER: GBP: iteration (#12) SEL$4CA81688 cpu: 0.526 sec elapsed: 0.528 sec
TIMER: Group-By Placement SEL$1 cpu: 3.766 sec elapsed: 3.793 sec
TIMER: Cost-Based Transformations (Overall) SEL$1 cpu: 3.769 sec elapsed: 3.795 sec
TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.288 sec elapsed: 0.289 sec
TIMER: SQL Optimization (Overall) SEL$1 cpu: 4.072 sec elapsed: 4.108 sec

If you check further up the page, though, you'll see in the Outline Data that Oracle has not used group by placement (it has done a "group by pushdown", but that's different, and relates to aggregation in parallel execution). So one quick hack we could try is to add the hint /*+ no_place_group_by(@sel$1) */ to the query just to see what happens – and here's the effect on the parse time:

Name                                      Value
----                                      -----
parse time cpu                               33
parse time elapsed                           34

Problem solved – provided we can get the hint into the code (by hand, or via a SQL Patch – there's a sketch of that later in this note – etc.). But the question still remains: where did the time go? The trace file was fairly long (375,000 lines for the original, compared to 32,000 for the hinted version) but a rapid scan seemed in order – and something very quickly caught my attention. It was pretty easy to spot because something big and nasty had happened 8 times.

The answer was in “Adaptive Plans”, which (mostly) get flagged with the label “AP:” in the 10053 trace file, for example:

AP: Computing costs for inflection point at min value 0.00
AP: Using binary search for inflection point search
AP: Costing Join for ADM inflection point at card 0.00
AP: Costing Join for ADM inflection point at card 0.00

You can see here that the optimizer is searching for an “inflection point”, that means it’s “thinking about” an adaptive join, and searching for the number of rows where a switch between a nested loop join and a hash join makes sense.

Notice particularly the comment about “using binary search”. After calculating the cost of the “best” join using the current estimates of cardinality for the incoming rowsource the optimizer starts calculating the possible costs (nested loop or hash, it ignores merge) for a set of alternative cardinalities until it finds the cardinality where the nested loop join and hash join have the same cost. That’s the number of rows that Oracle will use at run time to decide whether it should switch from its initial selection of join method to the alternative.

If the initial join selection was a hash join (i.e. large volume strategy) Oracle will use an alternative cardinality of 1 to start its list of estimates – on the assumption that that would produce a nested loop plan – and then keep doubling the estimate until the resulting plan switches to a hash join, then close in on the break point by halving and doubling the remaining gap between the NL estimate and the HJ estimate.
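To visualise that doubling and bisecting pattern, here's a toy PL/SQL sketch. The two cost functions are entirely invented stand-ins for the optimizer's nested loop and hash join costings – nothing here is the real arithmetic, it just shows the shape of the search:

set serveroutput on

declare
        lo      number := 1;
        hi      number := 1;
        mid     number;

        -- invented cost models: NL cost is linear in rows,
        -- HJ has a fixed setup cost plus a small cost per row
        function nl_cost(n number) return number is
        begin
                return 2 * n;
        end;

        function hj_cost(n number) return number is
        begin
                return 500 + 0.1 * n;
        end;

begin
        -- starting from 1 row, keep doubling while the NL join is cheaper
        while nl_cost(hi) <= hj_cost(hi) loop
                lo := hi;
                hi := hi * 2;
        end loop;

        -- then bisect the remaining gap to find the break point
        while hi - lo > 1 loop
                mid := trunc((lo + hi) / 2);
                if nl_cost(mid) <= hj_cost(mid) then
                        lo := mid;
                else
                        hi := mid;
                end if;
        end loop;

        dbms_output.put_line('Inflection point at roughly ' || hi || ' rows');
end;
/

With these invented numbers the block reports an inflection point of roughly 264 rows (the true crossover of the two functions is at about 263.2).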

If the initial join selection was a nested loop join Oracle will use a "worst case scenario" for the incoming estimate (acting, broadly speaking, as if every filter at that point had had a selectivity of 1 – i.e. 100% of the available data), and start by halving the estimate. This is fine when the starting estimate is (for example) for the first table in the join order and the estimate is just a few million rows.

Here’s the first estimate in my case (you’ll need a wide screen, or very small print):

6772:AP: Computing costs for inflection point at max value 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00 

After you’ve halved that number about 1,000 times you’re down to values in the region of a few thousand. And when you have to cost a couple of joins every time you halve, and when you’ve gone through the whole process 8 times that’s a lot of CPU.
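That enormous starting value is, by the way, the IEEE double-precision maximum, which is fractionally below 2^1024 – which is why it takes roughly a thousand halvings to get anywhere near sensible cardinalities. A quick sanity check using binary_double arithmetic, which should report 1024:

select  round(ln(1.7976931348623157e308d) / ln(2d))    log2_max
from    dual;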

In fact, with the no_place_group_by hint there was still one point where the optimizer did this adaptive join work – which probably accounts for most of the final 0.33 CPU seconds – but it didn’t show up in any separately reported timer events.

Of course the final little test of this parse time hypothesis is to add the hint /*+ no_adaptive_plan */ – so I did that, and the parse time was still about 0.3 seconds! Investigation of the 10053 trace showed that even with the hint in place the optimizer still went through that one huge binary chop – but when it had worked out the inflection point it printed the message:

AP: Adaptive plans bypassed for query block SEL$1 due to disabled by hint (no_adaptive_plan)
AP: Adaptive joins bypassed for query block SEL$1 due to adaptive plans disabled

According to the hint report the hint was valid, however, so that behaviour looks a little buggy. It then occurred to me that maybe I could have hinted /*+ no_adaptive_plan(@sel$1) */ – and that worked properly with the trace reporting:

AP: Checking validity for query block SEL$1, sqlid=7fjtvwktcmsgq
AP: Adaptive plans bypassed for query block SEL$1 due to disabled by hint (no_adaptive_plan)
AP: Adaptive joins bypassed for query block SEL$1 due to adaptive plans disabled

Had I not realised that this was possible I could have fallen back on the hint /*+ opt_param() */ to set the parameter optimizer_adaptive_plans to false for the query, in which case the following message (and no other "AP:" message) appeared near the start of the trace:

AP: Adaptive joins bypassed for query block SEL$1 due to disabled by adaptive join parameter
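Going back to the earlier "quick hack": if you can't edit the source code, a SQL Patch is one way to attach the no_place_group_by() hint. Here's a minimal sketch for 19c – the patch name and description are my own inventions, the sql_id is the one reported in the trace extract above, and you should check the dbms_sqldiag documentation for your version since the create_sql_patch() interface only became public in 12.2 (before that it was the undocumented dbms_sqldiag_internal.i_create_patch()):

declare
        v_patch_name    varchar2(128);
begin
        v_patch_name := dbms_sqldiag.create_sql_patch(
                sql_id      => '7fjtvwktcmsgq',
                hint_text   => 'no_place_group_by(@sel$1)',
                name        => 'no_gbp_patch',
                description => 'Stop group by placement costing'
        );
        dbms_output.put_line('Patch created: ' || v_patch_name);
end;
/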

If you want to experiment on your own system here’s the script to create the data – the script name reflects the fact that I found this example by accident while working on something completely different:

rem
rem     Script:         12c_vector_transform_c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2015
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem             12.2.0.1
rem             12.1.0.2

drop table t1;
drop table t2;
drop table t3;
drop table t4;
purge recyclebin;

create table t1 
as
select
        rownum          id,
        to_char(mod(rownum,10)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(mod(rownum,12)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(mod(rownum,14)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

spool 12c_vector_transform_c

set timing on
set linesize 255
set trimspool on
set pagesize 90
set arraysize 100
set serveroutput off

execute snap_my_stats.start_snap

alter session set statistics_level = all;

alter session set "_fix_control"='16923858:5';
alter session set events '10053 trace name context forever';

select
        /*+ 
--              opt_param('optimizer_adaptive_plans' 'false')
--              no_adaptive_plan
--              no_adaptive_plan(@sel$1)
--              no_place_group_by(@sel$1)
                parallel(t4,3)
        */
        t1.small_vc,
        t2.small_vc,
        t3.small_vc,
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
and     t1.small_vc in (1,2,3)
and     t2.small_vc in (1,2,3,4)
and     t3.small_vc in (1,2,3,4,5)
group by
        t1.small_vc,
        t2.small_vc,
        t3.small_vc
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline allstats all note -bytes'));

alter session set events '10053 trace name context off';

set serveroutput on
execute snap_my_stats.end_snap

spool off

Index Upgrade

Mon, 2022-04-11 10:55

Sometimes wishes come true and in 19c – with fix_control QKSFM_DBMS_STATS_27268249 – one of mine did. The description of this fix (which is enabled by default) is: “use approximate ndv for computing leaf blocks and distinct keys”.

Here’s a key item in the output file from running tkprof against the trace file generated by a simple call to:

execute dbms_stats.gather_index_stats(user,'t1_i2')

The index is a two-column index on t1(x1, x2) with a size of roughly 16,000 blocks on a table of approximately 6 million rows.

select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,
  approx_count_distinct(sys_op_lbid(106818,'L',t.rowid)) as nlb,
  approx_count_distinct(sys_op_combined_hash("X1","X2")) as ndk,
  sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
 "TEST_USER"."T1" t where "X1" is not null or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE APPROX (cr=15821 pr=0 pw=0 time=2812116 us starts=1)
   6018750    6018750    6018750   INDEX FAST FULL SCAN T1_I2 (cr=15821 pr=0 pw=0 time=894658 us starts=1 cost=2117 size=192000000 card=6000000)(object id 106818)


The first point of interest is the appearance of the approx_count_distinct() function calls used for the nlb (number of leaf blocks) and ndk (number of distinct keys) columns. It's also worth noting that the ndk value is derived from a call to sys_op_combined_hash() applied to the two base columns, which means the number of distinct keys for a multi-column index is calculated in exactly the same way as the number of distinct values for a column group.
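If you want to check the equivalence you can create the matching column group and compare its num_distinct with the index's distinct_keys – a sketch, assuming the t1(x1, x2) / t1_i2 setup above (SYS_STU% is the typical prefix Oracle generates for a column group's hidden virtual column):

select  dbms_stats.create_extended_stats(user, 't1', '(x1, x2)') extension_name
from    dual;

execute dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all hidden columns size 1')

select  column_name, num_distinct
from    user_tab_cols
where   table_name = 'T1'
and     column_name like 'SYS_STU%'
;

select  index_name, distinct_keys
from    user_indexes
where   index_name = 'T1_I2'
;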

There are two more important details though: first, that the mechanism uses a fast full scan of the whole index; secondly, that the size of this index is about 16,000 blocks.

A final (unrelated) point is the little reminder in the hints that 19c includes an automatic indexing mechanism. It’s easy to forget such things when your overnight batch job takes longer than usual.

For comparison purposes, the following shows the effect of disabling the feature:

alter session set "_fix_control"='27268249:0';


select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,count(distinct sys_op_lbid(106818,
  'L',t.rowid)) as nlb,count(distinct hextoraw(sys_op_descend("X1")
  ||sys_op_descend("X2"))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as
  clf
from
 "TEST_USER"."T1" sample block (  7.0114135742,1)  t where "X1" is not null
  or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT GROUP BY (cr=1132 pr=0 pw=0 time=460459 us starts=1)
    421761     421761     421761   INDEX SAMPLE FAST FULL SCAN T1_I2 (cr=1132 pr=0 pw=0 time=67203 us starts=1 cost=150 size=8413700 card=420685)(object id 106818)

The calculations for nlb and ndk are simple count()s, and the thing that ndk counts is a messy concatenation of the columns – hextoraw(sys_op_descend("X1") || sys_op_descend("X2")) – that Oracle has used to ensure that counts for values like 'AB' || 'CD' and 'ABC' || 'D' don't get combined.

Perhaps most significantly for some people, the execution plan shows us that the old-style index fast full scan was a SAMPLE and analyzed only (a fairly typical) 1,132 blocks out of 16,000 and about 400,000 rows out of 6 million. The extra work of the new full scan looks a bit of a threat, of course; but there may be a few critical indexes where this extra workload will stop random variations in execution plans when it really matters.

As with so many details of Oracle there are likely to be cases where the new method is hugely beneficial, and some where it’s a nuisance, so it’s good to know that you can be a little selective about when it gets used.

Footnote

Don't forget that it's a good idea to think about setting the table preference "table_cached_blocks" to allow Oracle to produce a better value for the clustering_factor. This is another mechanism that increases the CPU required to gather index stats.
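A sketch of setting the preference – the value 16 here is purely illustrative, the appropriate value depends on (for example) how many concurrent processes typically insert into the table:

execute dbms_stats.set_table_prefs(user, 't1', 'table_cached_blocks', '16')
execute dbms_stats.gather_index_stats(user, 't1_i2')

select  index_name, clustering_factor
from    user_indexes
where   index_name = 'T1_I2'
;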

It's an odd little detail that the fix control appeared in 19.3.0.0 according to my history of v$system_fix_control and certainly wasn't in 18.3.0.0 – but the entry in the view thinks that it was available from Oracle 8.0.0.0.

Parallel Inactivity

Thu, 2022-04-07 11:08

This is a simple script that I wrote more than 20 years ago (for 8.1.7.4 apparently) to get the session wait event figures from parallel query slaves as they were running. I'm posting it now because there's a question on the Oracle Database Forum where it would be useful as a way of checking for performance problems in a long-running CTAS.

Oracle gives us the view v$px_session to link together the query co-ordinator with the parallel query slaves, and it gives us v$px_sesstat to report the linked session activity stats, but it doesn't give us a view to link together the session events for the multiple sessions, so this query simply joins v$px_session to v$session_event to produce that result:

rem
rem     Script:         px_waits.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2001
rem     Purpose:        Report events for PX execution
rem
rem     Last tested:
rem             19.3.0.0
rem
rem     Notes:
rem     Note that the co-ordinator can be on a different instance
rem     in parallel server systems.  To cater for an oddity of the
rem     qcinst report, we need the current instance number in case
rem     the QC is on the current machine.
rem
rem     If the co-ordinator is present, then it appears last on the list
rem     for a server group.
rem

column instance_number new_value m_inst
select instance_number from v$instance;

set linesize 120
set pagesize 60
set trimspool on
set tab off
set verify off

spool px_waits

break -
        on qcsid skip 1 -
        on server_group -
        on degree -
        on server_set -
        on sid -
        on server# skip 1

column event format a32

column  qcsid           format a6       heading 'Coord'
column  server_group    format 999      heading 'Grp'
column  degree          format a5       heading 'Deg'   noprint
column  server_set      format 999      heading 'Set'   noprint
column  server#         format 999      heading 'Sno'
column  sid             format 9999     heading 'SID'
column  name            format a32      heading 'Statistic'
column  value           format 99,999,999       heading 'value'

select
        ss.qcsid || '/' || nvl(ss.qcinst_id,&m_inst)            qcsid,
        ss.server_group,
        decode(degree,
                null,null,
                ss.degree || '/' || ss.req_degree
        )       degree,
        ss.server_set,
        ss.server#,
        ss.sid,
        se.event,
        se.total_waits,
        se.time_waited
from
        v$px_session            ss,
        v$session_event         se
where
        se.sid = ss.sid
order by
        ss.qcsid,
        ss.server_group,
        ss.server_set,
        ss.server#,
        se.event
;

clear columns
clear breaks
set verify on

spool off

See also: Parallel Activity for the session activity stats report.

Parallel Activity

Thu, 2022-04-07 10:49

This is a simple script that I wrote more than 20 years ago (for 8.1.7.4 apparently) to get the session activity from parallel query slaves as they were running. I'm posting it now because there's a question on the Oracle Database Forum where it would be useful as a way of checking for a skewed data distribution in a long-running query.

Oracle gives us the view v$px_sesstat, which connects PX slaves with their query co-ordinator (QC) and reports the session activity for all of them. The view only reports statistic numbers, though, so this little query joins the view to v$statname to report the names. It eliminates stats where the value is zero, and orders by QC (though you could add a line to restrict the query to a single QC), parallel server group, then process, then statistic number.

rem
rem     Script:         px_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2001
rem     Purpose:        Report statistics for PX slaves on the fly
rem
rem     Last tested
rem             19.3.0.0
rem     Notes:
rem     Note that the co-ordinator can be on a different instance
rem     in parallel server systems.  To cater for an oddity of the
rem     qcinst report, we need the current instance number in case
rem     the QC is on the current machine.
rem
rem     If the co-ordinator is present, then it appears last on the list
rem     for a server group.
rem
rem     This code really needs to be enhanced to do a proper job on OPS/RAC
rem     by accessing gv$px_sesstat, and handling sids and instances correctly
rem
rem     The user running this query has to have SELECT privileges 
rem     on the views v$instance, v$statname and v$px_sesstat
rem

column instance_number new_value m_inst
select instance_number from v$instance;

spool px_stats

set tab off
set pagesize 60
set linesize 156
set trimspool on
set verify off

break -
        on qcsid skip 1 -
        on server_group -
        on degree -
        on server_set -
        on sid -
        on server# skip 1


column  qcsid           format a6               heading 'Coord'
column  server_group    format 999              heading 'Grp'
column  degree          format a5               heading 'Deg'
column  server_set      format 999              heading 'Set'
column  server#         format 999              heading 'Sno'
column  sid             format 9999             heading 'SID'
column  name            format a52              heading 'Statistic'
column  value           format 99,999,999,999   heading 'value'


select
        st.qcsid || '/' || nvl(st.qcinst_id,&m_inst)            qcsid,
        st.server_group,
        decode(degree,
                null,null,
                st.degree || '/' || st.req_degree
        )       degree,
        st.server_set,
        st.server#,
        st.sid,
        sn.name,
        st.value
from
        v$px_sesstat    st,
        v$statname      sn
where
        sn.statistic# = st.statistic#
and     st.value != 0
order by
        st.qcsid,
        st.server_group,
        st.server_set,
        st.server#,
        st.statistic#
;

clear columns
clear breaks
set verify on

spool off

See also: Parallel Inactivity for a simple script reporting wait events.

PLSQL_WARNINGS

Mon, 2022-04-04 04:45

I don't suppose many people edit and compile PL/SQL code from the SQL*Plus command line any more, but the following reminder about avoiding mistakes is probably valid in other development environments, even though the feature may be enabled through a different mechanism.

You can set the level of warning that you get from the PL/SQL compiler – and the default value isn’t the best value to use if you want your PL/SQL to be efficient. Here’s a demonstration based on a common, trivial, but annoying error – it starts with an indexed table:

rem
rem     Script:         plsql_warning_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2017
rem
rem     Last tested 
rem             19.11.0.0
rem             11.2.0.4
rem

create table t1(
        id      varchar2(10),
        v1      varchar2(32)
);

create index t1_i1 on t1(id);

insert into t1 
select  rownum, object_name 
from    all_objects
where   rownum <= 1000
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

Note that although I’ve declared the id column as a varchar2() type I’m populating it with a number – that will, of course, be coerced into a character format. This is an example of something I’ve seen quite often in production systems – a character column storing something that (always) looks like a number – and it leads to the following type of coding defect:

create or replace function f1 (i_in number)
return varchar2
authid definer
as
        m_v1 t1.v1%type;
begin
        select  t1.v1
        into    f1.m_v1
        from    t1
        where   t1.id = f1.i_in ;

        return  f1.m_v1;

end;
/

show errors

The error should be quite easy to see in this example – but in a more complex example with a longer piece of code it might not be so visible. Still, I’ve got the “show errors” call immediately after my “create or replace” so that should help. Here’s the output from running the above:

Function created.

No errors.

There is an error in the code – but it's not one that leads to a PL/SQL compilation error. My incoming parameter is numeric, and I'm using it in the predicate checking t1.id – which is an indexed character column – but that means the CBO will modify the predicate internally to to_number(t1.id) = :b1, disabling the index. The function will work (provided the full tablescan doesn't find a value in the table that raises a conversion error), but the performance probably won't be what I'd hoped for.
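You don't even need to compile the function to see the problem coming – a quick sketch with a numeric literal standing in for the parameter:

explain plan for
select  v1
from    t1
where   id = 99
;

select * from table(dbms_xplan.display(format=>'basic +predicate'));

The Predicate Information section should show filter(TO_NUMBER("ID")=99) against a full tablescan of t1 – no sign of the index t1_i1.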

Let’s recompile the procedure, but precede it with an alter session statement:

alter session set plsql_warnings = 'enable:all';
alter function f1 compile;
show errors

And here’s the resulting output:

Session altered.

SP2-0807: Function altered with compilation warnings

Errors for FUNCTION F1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/8     PLW-07204: conversion away from column type may result in
         sub-optimal query plan

The compile succeeded (“function altered …warnings”), so the function will still execute and return the correct result, but we’ve had an explicit warning of exactly what we’ve done wrong and the effect it will have. But you can go one better – if you know your PLSQL error numbers:

alter session set plsql_warnings = 'enable:all','error:7204';
alter function f1 compile;
show errors

In this example I’ve added an extra option to the plsql_warnings parameter – I’ve told it to treat PLW-7204 as an error, not just as a warning, so this is what we see:

Session altered.


Warning: Function altered with compilation errors.

Errors for FUNCTION F1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/8     PLS-07204: conversion away from column type may result in
         sub-optimal query plan

The difference doesn't leap out and hit you in the eye, but instead of "with compilation warnings" the error message says "with compilation errors". (It's also lost the SP2-0807 prefix, but that's not important.)

Since PLW-07204 is now an error the function state is invalid, and if you try using it in a query the query will raise the Oracle error ORA-06575: Package or function F1 is in an invalid state.

If you want to see how your packages, procedures and functions were last compiled you can query one of two views:

column param_name  format a32
column param_value format a32

select  param_name, param_value
from    user_stored_settings
where   object_name = 'F1'
and     object_type = 'FUNCTION'
/

PARAM_NAME                       PARAM_VALUE
-------------------------------- --------------------------------
plsql_optimize_level             2
plsql_code_type                  INTERPRETED
plsql_debug                      FALSE
nls_length_semantics             BYTE
plsql_warnings                   ENABLE:ALL,ERROR:  7204
plsql_ccflags
plscope_settings                 IDENTIFIERS:NONE
plsql_compiler_flags             INTERPRETED,NON_DEBUG

8 rows selected.

(The other view is a denormalized/pivoted version of this view, giving you one row and 8 columns for each object.)

If you want to make sure that you recompile the function with the same settings that you used previously you can add the clause "reuse settings" to the "alter function f1 compile" call; if you don't do this the function will compile with whatever your current session settings happen to be (which may have been dictated by the system settings).
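For example, to recompile f1 with the plsql_warnings value recorded at its previous compilation rather than the current session setting:

alter function f1 compile reuse settings;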

There are variations on this theme – if you check the “alter compile” syntax you’ll see that you can include “parameter = value” clauses in the call to compile so, for example, I could start a new session and issue:

alter function f1 compile plsql_warnings='error:7204';

This would give me exactly the same warning, and the same output on a subsequent “show errors” – though in this case the output from user_stored_settings would be: “DISABLE:ALL,ERROR: 7204”.

If you want a complete list of all the possible pl/sql warnings you can find them in $ORACLE_HOME/plsql/mesg/plwus.msg. The warnings fall into three categories: Severe, Informational, and Performance, and there is a short note in the message file giving the ranges:

/   SEVERE -- For this category the warning number should be between
/             5000 and 5999.
/   INFORMATIONAL - For this category the warning number should be between
/                   6000 and 6249.
/   PERFORMANCE   - For this category the warning number should be between
/                   7000 and 7249.

It's worth taking a look at the possible errors – even if you don't take advantage of the feature. There aren't very many, but one I particularly like is Informational 6002: "Unreachable code", which tells you exactly which bits of your PL/SQL are never going to run. (I wonder how many people would find code that failed to recompile if they set plsql_warnings to 'error:6002'.)
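Here's a throwaway sketch of that last point – the function f2 exists purely for this demo:

alter session set plsql_warnings = 'enable:all';

create or replace function f2 return varchar2
authid definer
as
begin
        return 'reachable';
        return 'unreachable';   -- can never run
end;
/

show errors

The show errors call should report PLW-06002: Unreachable code against the second return statement.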

Summary

It’s worth knowing about the option to set the parameter plsql_warnings as it may avoid accidental inefficiency in SQL that’s embedded in PL/SQL, and it may highlight coding errors (like “unreachable code”) that otherwise manage to slip past test suites.

Personally I'd be inclined to set it to something quite aggressive on development systems to help developers spot errors as early and as easily as possible.

Drop column bug

Mon, 2022-03-28 05:29

In the previous note about a problem dropping virtual columns, the "guilty party" that made it impossible to drop any columns was a virtual column based on a complex data type owned by the MDSYS (Spatial) schema. This note demonstrates the same problem with a very simple example created from scratch in an ordinary user schema.

rem
rem     Script:         object_virtual_col.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2022
rem
rem     Last tested 
rem             19.11.0.0
rem

create type point_type as object(x_coord number, y_coord number);
/

create or replace function my_point(inpoint in point_type)
return point_type
deterministic as 
begin
        return inpoint;
end;
/

show errors

create table t1 (id, n1, p1, n2, v1, padding)
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4    -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n1,
        point_type(rownum, rownum)      p1,
        rownum                          n2,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1
where
        rownum <= 100   -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(id);

So I've declared a point_type, which is an object with two attributes of type number, and I've created a function that takes a point as its input parameter and returns a point. Then I've created a table which includes a column of that type.

Let’s start with a little reminder of what a pain it is to use even simple object types correctly. What’s going to happen with the following three SQL statements:

select    p1.x_coord from t1    where rownum <= 4;
select t1.p1.x_coord from t1    where rownum <= 4;
select t1.p1.x_coord from t1 t1 where rownum <= 4;

The first two will fail – the first one shouldn't be too surprising, but the second does seem a little unreasonable (the attributes of an object column can be referenced only through a table alias, which is why the third statement works):

ORA-00904: "P1"."X_COORD": invalid identifier
ORA-00904: "T1"."P1"."X_COORD": invalid identifier

So let’s try adding some virtual columns to pick out the X value:

alter table t1 add x_val generated always as (p1.x_coord) virtual;
alter table t1 add c_val generated always as (cast(p1.x_coord as number)) virtual;

The first call will fail (ORA-54016: Invalid column expression was specified) but the second will succeed. What if we try to hide our point column behind a call to our function:

alter table t1 add fp_val generated always as (my_point(p1)) virtual;
alter table t1 add fx_val generated always as (my_point(p1).x_coord) virtual;

Again the first call will fail (ORA-54004: resultant data type of virtual column is not supported) but that’s a documented restriction – a user-defined type may not be used as the type of a virtual column and I wasn’t at that point trying to return just the one attribute.

The second call, however, will succeed. So I can’t create a virtual column p1.x_coord, but I can create a virtual column my_point(p1).x_coord.

We now have two virtual columns that should return the required values, so let's do a quick check with a couple of simple queries – cut and paste:

SQL> select fx_val "my_point(p1).x_coord" from t1 where rownum <= 4;

my_point(p1).x_coord
--------------------
                   1
                   2
                   3
                   4

4 rows selected.

SQL> select c_val  "cast(p1.x_coord as -" from t1 where rownum <= 4;

cast(p1.x_coord as -
--------------------
                   1
                   2
                   3
                   4

4 rows selected.

Finally we’ll finish off by demonstrating that I’ve just created a problem that no-one will notice until long after I’ve left the site (maybe):

SQL> alter table t1 drop column n1;
alter table t1 drop column n1
 *
ERROR at line 1:
ORA-00904: "TEST_USER"."MY_POINT": invalid identifier

After creating (and using successfully) the virtual column that calls my function, I can’t drop any of the columns in the table.

Summary

The manuals have a stated restriction for virtual columns that they cannot be a user-defined type, and this restriction seems to carry forward to an attribute of a user-defined type unless the attribute has been cast() to a base type.

The same restriction seems to apply to functions returning a user-defined type, but not to the individual attributes of the returned value – it is not necessary to cast() them to a base type. However, if you (accidentally) take advantage of this relaxation of the restriction you will be unable to drop any columns from the table in the future.

Drop column bug

Thu, 2022-03-24 06:07

Here's a problem that appeared recently on the Oracle Developer forum showing one of the classic symptoms of new features, namely that "mix and match" often runs into problems. This example has been a long time coming so "new" is something of a misnomer, but the alternative source of unlucky collisions is in the "rare" features – in this case Spatial. (It's possible that the problem is not restricted to just Spatial but it probably does require a somewhat exotic data type.)

The problem appeared in a thread on the Oracle Developer Forum from someone who was trying to drop a pair of columns and finding that the statement failed with a surprising choice of error: ORA-00904: Invalid Identifier. The surprising thing about this error was that the named identifier was clearly not invalid. The reason for suspecting that this was an example of "new features colliding" was that the columns to be dropped were virtual columns based on a real column of the table that had been declared as an object type defined in the MDSYS (Spatial) schema.

Conveniently the author had supplied a short, simple, script to demonstrate the issue, so I copied it and modified it a bit to do a few tests around the edges of the problem. Here’s the code that I used to start my investigation:

rem
rem     Script:         drop_col_bug.sql
rem     Author:         Jonathan Lewis/User_77G7L
rem     Dated:          Mar 2022
rem     Purpose:        
rem
rem     Last tested 
rem             21.3.0.0        Still broken
rem             19.11.0.0
rem

create table xxx (
        v1              varchar2(10),
        n1              number,
        shape           mdsys.sdo_geometry,
        x_value         number generated always as (mdsys.sdo_geom.sdo_pointonsurface(shape,0.005).sdo_point.x) virtual,
        y_value         number generated always as (mdsys.sdo_geom.sdo_pointonsurface(shape,0.005).sdo_point.y) virtual,
        v2              varchar2(10),
        n2              number,
        n3              number
)
segment creation immediate
;

insert into xxx(v1, n1, v2, n2, n3) values('z',1,'y',2,3);
update xxx  set
        shape = sdo_geometry(
                2003,                           -- two-dimensional polygon
                null,
                null,
                sdo_elem_info_array(1,1003,3),  -- one rectangle (1003 = exterior)
                sdo_ordinate_array(1,1, 5,7)    -- only 2 points needed to define rectangle
        )
;


commit;

alter table xxx drop (x_value, y_value) ;

The modifications I made from the original code are:

  • I’ve removed a couple of redundant sets of parentheses from the virtual column definitions
  • I’ve added a few columns before, in between, and after the virtual columns
  • I’ve used “segment creation immediate”
  • I’ve inserted a row into the table

The last two are simply to ensure that I have data segments and at least one item for the drop to work on – just in case it’s a run-time problem being reported as a parse time issue.

The extra columns are to test whether the type and position of the column I drop affects the outcome, and the change in parentheses is purely aesthetic.

Here’s the result of the attempt to drop the virtual columns:

alter table xxx drop (x_value, y_value)
  *
ERROR at line 1:
ORA-00904: "MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE": invalid identifier

This is a little strange since I have used the packaged function mdsys.sdo_geom.sdo_pointonsurface() to define the virtual columns and Oracle didn't complain when I created the columns. (Part of the reason I had reduced the original parentheses was to check that the compiler hadn't got confused by an excess of parentheses.)

As a quick “what if” test I tried using the alternative syntax for drop column that you can use with just one column:

SQL> alter table xxx drop column x_value;
alter table xxx drop column x_value
  *
ERROR at line 1:
ORA-00904: "MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE": invalid identifier

What about trying to set the column unused before dropping all unused columns?

SQL> alter table xxx set unused column x_value;
alter table xxx set unused column x_value
  *
ERROR at line 1:
ORA-00904: "MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE": invalid identifier

So is the problem restricted to the virtual columns – what happens if I try to drop a column from the end of the table, what about the one between the two virtual columns, how about a column that appears before even the shape column? Nothing changes:

SQL> alter table xxx drop column v1;
alter table xxx drop column v1
  *
ERROR at line 1:
ORA-00904: "MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE": invalid identifier


SQL> alter table xxx set unused column v1;
alter table xxx set unused column v1
  *
ERROR at line 1:
ORA-00904: "MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE": invalid identifier

What if I have only one of the virtual columns? No difference.

What if I don’t have either of the virtual columns? Finally I can drop any column I like from the table (including the shape column). Not that that’s much use to the user.

You can’t set unused or drop any columns in the table thanks to an error that looks as if it’s associated with the definition of those virtual columns.

Workaround

Is there any way to bypass the problem and still store the information we need (until we want to drop it)? Let's start by taking a look at the way Oracle has used our table definition to create column definitions, just in case that gives us a clue:

select 
        column_id id, segment_column_id seg_id, internal_column_id int_id, 
        column_name, data_type, data_default  
from 
        user_tab_cols 
where 
        table_name = 'XXX' 
order by 
        column_id,
        internal_column_id
/

        ID     SEG_ID     INT_ID COLUMN_NAME          DATA_TYPE                 DATA_DEFAULT
---------- ---------- ---------- -------------------- ------------------------- --------------------------------------------------------------------------------
         1          1          1 V1                   VARCHAR2
         2          2          2 N1                   NUMBER
         3          3          3 SHAPE                SDO_GEOMETRY
         3          4          4 SYS_NC00004$         NUMBER
         3          5          5 SYS_NC00005$         NUMBER
         3          6          6 SYS_NC00006$         NUMBER
         3          7          7 SYS_NC00007$         NUMBER
         3          8          8 SYS_NC00008$         NUMBER
         3          9          9 SYS_NC00009$         SDO_ELEM_INFO_ARRAY
         3         10         10 SYS_NC00010$         SDO_ORDINATE_ARRAY
         4                    11 X_VALUE              NUMBER                    (("MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE"("SHAPE",0.005))."SDO_POINT")."X"
         5                    12 Y_VALUE              NUMBER                    (("MDSYS"."SDO_GEOM"."SDO_POINTONSURFACE"("SHAPE",0.005))."SDO_POINT")."Y"
         6         11         13 V2                   VARCHAR2
         7         12         14 N2                   NUMBER
         8         13         15 N3                   NUMBER

15 rows selected.

There’s quite a lot going on there in terms of columns hidden behind the sdo_geometry type. In fact internal columns 9 and 10 might prompt you to look for other objects like table types or LOBs:

SQL> select column_name, segment_name, index_name from user_lobs where table_name = 'XXX';

COLUMN_NAME                    SEGMENT_NAME                   INDEX_NAME
------------------------------ ------------------------------ ------------------------------
"SHAPE"."SDO_ELEM_INFO"        SYS_LOB0000100168C00009$$      SYS_IL0000100168C00009$$
"SHAPE"."SDO_ORDINATES"        SYS_LOB0000100168C00010$$      SYS_IL0000100168C00010$$

2 rows selected.

But the interesting detail is the data_default column for our two virtual columns – which have more parentheses than the original definitions. Perhaps the storage of the expression has gone wrong (as happened in an older version of Oracle with case expressions) and is causing the ORA-00904 error to appear. So let's try selecting data from the table using the expression stored in the data dictionary:

select
        ((MDSYS.SDO_GEOM.SDO_POINTONSURFACE(SHAPE,0.005)).SDO_POINT).X          old_x,
          mdsys.sdo_geom.sdo_pointonsurface(shape,0.005).sdo_point.x            new_x,
        ((MDSYS.SDO_GEOM.SDO_POINTONSURFACE(SHAPE,0.005)).SDO_POINT).Y          old_y
from
        xxx
/

     OLD_X      NEW_X      OLD_Y
---------- ---------- ----------
         1          1          1

1 row selected.


No syntax error there – as far as a simple select is concerned. I’ve included my tidier format for the x_value column aligned with the resulting stored value (with all the double quotes removed – though I’ve also tested it with the quotes in place) – and the only significant visual difference is the number of parentheses, so maybe that’s a clue. In particular we note that the error reports “MDSYS”.”SDO_GEOM”.”SDO_POINTONSURFACE” as the invalid identifier and the first time an extra (close) parenthesis appears is just after that function call. Maybe (for no good reason) the code path involved with handling column data during a drop/set unused call is getting confused by parentheses. So let’s try to reduce the complexity of the expression by hiding it inside a local function.

First attempt – create a function to return an sdo_point_type and define the virtual columns to expose the X and Y values from the point:

create or replace function my_point(
        inshape         in mdsys.sdo_geometry,
        intolerance     in number
)
return mdsys.sdo_point_type
deterministic
as
begin
        return mdsys.sdo_geom.sdo_pointonsurface(inshape, intolerance).sdo_point;
end;
/

        x_value         number generated always as (my_point(shape,0.005).x) virtual,
        y_value         number generated always as (my_point(shape,0.005).y) virtual,

This approach still produces an ORA-00904, though the invalid identifier becomes “TEST_USER”.”MY_POINT”.

Second attempt – two separate functions, one for the x value, one for the y value:

create or replace function my_x(
        inshape         in mdsys.sdo_geometry,
        intolerance     in number
)
return number
deterministic
as
begin
        return   mdsys.sdo_geom.sdo_pointonsurface(inshape, intolerance).sdo_point.x;
end;
/

show errors

create or replace function my_y(
        inshape         in mdsys.sdo_geometry,
        intolerance     in number
)
return number
deterministic
as
begin
        return   mdsys.sdo_geom.sdo_pointonsurface(inshape, intolerance).sdo_point.y;
end;
/

        x_value         number generated always as (my_x(shape, 0.005)) virtual,
        y_value         number generated always as (my_y(shape, 0.005)) virtual,

This worked, so finally I looked at the SQL Language Reference manual to see if there were any restrictions on virtual columns that might explain the problem I had had with all the previous definitions (and, yes, I know I should have done that right after the first failure) and I found the following:

  • The virtual column cannot be an Oracle supplied data type, a user-defined type, or LOB or LONG RAW.

None of my virtual column definitions returned an Oracle supplied data type or a user-defined data type. But would the restriction also apply to single attributes of such a data type, or has the complexity of spatial data types found a hole in the code? The answer to that question is a whole new blog note waiting to be finished (because I'd forgotten what a pain it is to use object types in simple SQL).

Upgrade Surprise

Tue, 2022-03-22 05:04

Here’s a little surprise that showed up in the most recent (March 2022) article that I sent to Simpletalk for the series on transformations. I had been using 19c (19.11.0.0) to create and run my little demos but the editor had used 12.2.0.1 to check the examples and questioned a comment I had made about a “default plan”.

Here’s the query in question. I was using the emp and dept tables from the Scott schema to demonstrate a point about subquery execution:

rem
rem     Script:         unnest_demo_simpletalk_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2022
rem 

break on deptno skip 1

select
        /*+ 
                qb_name(main)
                gather_plan_statistics 
        */
        *
from    emp e1
where   e1.sal + nvl(e1.comm,0) > (
                select  /*+ qb_name(subq) */
                        avg(e2.sal + nvl(e2.comm,0))
                from    emp e2
                where   e2.deptno = e1.deptno
        )
order by
        e1.deptno, e1.empno
/

As you can see, I’ve used a correlated aggregate subquery to report all employees who earned more than the average for their department, where “earnings” is calculated as the sum of salary and commission.

Here’s the plan I got when I ran this query under 19c:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      6 |00:00:00.01 |      24 |       |       |          |
|   1 |  SORT ORDER BY       |      |      1 |      1 |      6 |00:00:00.01 |      24 |  2048 |  2048 | 2048  (0)|
|*  2 |   FILTER             |      |      1 |        |      6 |00:00:00.01 |      24 |       |       |          |
|   3 |    TABLE ACCESS FULL | EMP  |      1 |     14 |     14 |00:00:00.01 |       6 |       |       |          |
|   4 |    SORT AGGREGATE    |      |      3 |      1 |      3 |00:00:00.01 |      18 |       |       |          |
|*  5 |     TABLE ACCESS FULL| EMP  |      3 |      5 |     14 |00:00:00.01 |      18 |       |       |          |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("E1"."SAL"+NVL("E1"."COMM",0)>)
   5 - filter("E2"."DEPTNO"=:B1)

When my editor ran the query on 12.2.0.1, and when I started up an instance of 12.2.0.1 and ran the query, the plan looked like this:

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      6 |00:00:00.02 |      29 |      6 |       |       |          |
|*  1 |  FILTER              |      |      1 |        |      6 |00:00:00.02 |      29 |      6 |       |       |          |
|   2 |   SORT GROUP BY      |      |      1 |      4 |     14 |00:00:00.02 |      29 |      6 |  2048 |  2048 | 2048  (0)|
|*  3 |    HASH JOIN         |      |      1 |     70 |     70 |00:00:00.02 |      29 |      6 |  1922K|  1922K| 1053K (0)|
|   4 |     TABLE ACCESS FULL| EMP  |      1 |     14 |     14 |00:00:00.01 |       7 |      6 |       |       |          |
|   5 |     TABLE ACCESS FULL| EMP  |      1 |     14 |     14 |00:00:00.01 |       7 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$9E18A930")
      MERGE(@"SEL$AA0D0E02" >"SEL$B4BE209F")
      OUTLINE(@"SEL$B4BE209F")
      UNNEST(@"SUBQ")
      OUTLINE(@"SEL$AA0D0E02")
      OUTLINE(@"SEL$D6166863")
      OUTLINE(@"SUBQ")
      OUTLINE(@"MAIN")
      FULL(@"SEL$9E18A930" "E2"@"SUBQ")
      FULL(@"SEL$9E18A930" "E1"@"MAIN")
      LEADING(@"SEL$9E18A930" "E2"@"SUBQ" "E1"@"MAIN")
      USE_HASH(@"SEL$9E18A930" "E1"@"MAIN")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("E1"."SAL"+NVL("E1"."COMM",0)>SUM("E2"."SAL"+NVL("E2"."COMM",0))/COUNT("E2"."SAL"+NVL("E2"."COMM",0))
              )
   3 - access("E2"."DEPTNO"="E1"."DEPTNO")

(I’ve added in a request for the ‘outline’ in the display_cursor() format.) The correlated subquery has been unnested and the resulting inline view has been subject to complex view merging. It was only at this point that I realised that the 19c plan was a little surprising and not what I should have expected.

After checking that the configuration and statistics (including the system stats) were the same on the two instances I re-ran the query on 12c with the /*+ no_unnest */ hint to make it use the plan that 19c had produced and I found (as expected) that the plan with filter subquery produced by 19c had a higher cost than the unnesting plan produced by 12c.

Next I re-ran the query on 19c with the /*+ unnest */ hint to make it use the plan that 12c had produced – but it didn’t! 19c “ignored” the hint and carried on using the filter subquery plan. It did, however, supply the following extra information when I added the ‘hint_report’ option to the display_cursor() format:

Total hints for statement: 3 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  MAIN
           -  qb_name(main)

   4 -  SUBQ
         U -  unnest / Failed basic validity checks
           -  qb_name(subq)

The code in 19c thinks that it would be illegal to unnest the subquery that 12c was unnesting (does this mean that some people may have been getting wrong results in 12c?). So I checked the 10053 (CBO) trace file to see if there was any further information there that would “explain” the problem. This is what I found:

SU: Checking validity for Null Accepting Semi JoinUnnesting for query block MAIN(#1)
SU:   Checking validity of unnesting subquery SUBQ (#2)
SU:     SU bypassed: More than 1 column in connect condition.
SU:     SU bypassed: Failed basic validity checks.
SU:   Validity checks failed.

The reference to “Null accepting” looks a little suspect but prompted a few experiments (modifying the data to eliminate nulls, adding not null declarations to columns, simplifying the query etc.) that suggested that the problem was essentially that the optimizer did not want to unnest when the comparison was with the expression (sal + comm) regardless of the operator, and even when all the relevant columns had been populated, declared not null, and the nvl() function had been removed.

It doesn’t seem reasonable in this case, but possibly the block is a generic strategy to avoid errors in some more subtle cases, and perhaps the block will be refined and relaxed in future versions. (Or maybe it’s a bug that the wrong test is being called at this point – there was nothing in the query requiring “null acceptance” by the time I got through the last test.)

I did find a workaround that you could use to avoid any code rewrite:

alter table emp add nvl_earnings 
        invisible 
        generated always as (sal + nvl(comm,0)) 
        virtual
;

There seems to be no good reason why this should work – but it does. The subquery unnests and the Predicate Information in the plan doesn’t give any clue that it’s using a virtual column.

Summary:

When you upgrade from 12c there are some queries involving correlated subqueries that no longer unnest the subquery. This may have a significant impact on performance, and it may not be possible to bypass the problem unless you rewrite the query to do a manual unnest – although I did find a virtual column workaround for my specific example. So far I’ve tested the query on 19.11.0.0 and 21.3.0.0, and the behaviour is the same in both versions.

Footnote:

After failing to find anything on MOS about the problem I emailed a draft of this note to Nigel Bayliss at Oracle – who did find a promising match on MOS.

The failure to unnest may be the consequence of the fix for bug 30593046: “A query having a scalar subquery returned a wrong result”. The fix was introduced in 19.9.0.0 but was too restrictive, leading to the creation of bug 33325981: “Query Loses Subquery Unnesting From Plan in 19.9 and Above”.

The fix for 33325981 was distributed in 19.13.0.0 and 21.4.0.0 (plus a couple of earlier RURs, with patches available for various versions back to 12.2.0.1). Unfortunately the newer fix still doesn’t go far enough in reducing the restrictions and my example still doesn’t unnest.

Make sure you check any code that depends on “expression-based” subquery unnesting before you upgrade to 19.9: the plan may change, which may make a difference to performance and create a requirement for a workaround.

v$_fixed_view_definition

Mon, 2022-03-21 14:08

In one of my notes about excavating the x$ tables I said that there was a problem “hidden in plain sight” with the dynamic performance view (g)v$fixed_view_definition. If you haven’t previously spotted the problem here’s what the view looks like when you describe it:

SQL> desc gV$fixed_view_definition
 Name                                       Null?    Type
 ----------------------------------------- -------- ----------------------------
 INST_ID                                            NUMBER
 VIEW_NAME                                          VARCHAR2(128)
 VIEW_DEFINITION                                    VARCHAR2(4000)
 CON_ID                                             NUMBER

and if that doesn’t make you suspicious, here’s a clue:

SQL> select  substr(view_definition,3950,50) tail_end
  2  from    gv$fixed_view_definition
  3  where   view_name = 'GV$SESSION'
  4  /

TAIL_END
--------------------------------------------------------------------------------
tand(s.ksuseflg2,64),64,'TRUE','FALSE'),decode(bit

Once upon a time the dynamic performance views were much simpler and their definitions would all have fitted inside a varchar2(4000) – but those were the good old days and things have moved on since then.

The view gv$session is one of the biggest offenders as you can’t even get as far as the list of x$ structures in the from clause within the first 4,000 bytes. If you can’t see the entire definition it can be a little difficult to work out why a query against one of these views is performing badly; it is still possible, of course, but harder than it needs to be.

If you need to work around this issue one strategy would be to ask Oracle where it’s hidden the view definition in shared memory. It’s not the easiest option but it may be the only one available to you. You start by modifying the defining query for gv$fixed_view_definition into something that will report the address of the row in x$kqfvt that holds the view text:

select 
        i.addr, t.addr 
from 
        x$kqfvi i, 
        x$kqfvt t
where 
        i.kqfvinam = 'GV$SESSION'
and     t.indx = i.indx
/

ADDR             ADDR
---------------- ----------------
00000000143490A0 00000000143660C0

If you use the x_rowsize.sql script I published a little while ago to see what it says about the lengths of x$kqfvt rows you’ll find that the row length is 32 bytes, though if you check the definition of x$kqfvt using the x_desc.sql script from the original “excavating x$” notes you find the following:

Column                      Ext Type   Int Type  Array Max   Len Size Len Offset   Col Size     Offset      Index    Idx Col
------------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
ADDR                              23          9          0          0          0          8          0          1          0
INDX                               2         11          0          0          0          4          0          2          0
INST_ID                            2         11          0          0          0          4          0          0          0
CON_ID                             2         11          0          0          0          2          0          0          0
KQFTPSEL                           1          6          0          0          0       4000          0          0          0

The only real column in the structure is KQFTPSEL and the “type” columns tell us that it is a pointer (8 bytes) to a string of 4000 bytes – which makes the 32 byte gap between rows a little odd, so let’s “peek” the 32 bytes starting at address 0x143660C0 to see what else (if anything) we can find in that row:

SQL> oradebug setmypid
Statement processed.

SQL> oradebug peek 0x143660C0 32 
[0143660C0, 0143660E0) = 14598D40 00000000 143754A0 00000000 00000000 00000000 00000000 00000000

That looks like 2 addresses (of 8 bytes each) and 16 null bytes which I can’t explain. The first address should get us to the view text, and I’m curious about the second address, so I’ll do a couple of large peeks for these two addresses, and dump the results into my session trace file by appending a ‘1’ to the command:

SQL> oradebug peek 0x14598D40 256 1
[014598D40, 014598E40) = 656C6573 73207463 736E692E 64695F74 612E732C 2C726464 6E692E73 732C7864 75736B2E 65736573 2E732C72 7575736B 73657364 6B2E732C ...

SQL> oradebug peek 0x143754A0 256 1
[0143754A0, 0143755A0) = 00000007 00000000 139779D0 00000000 00000005 00000000 1455A464 00000000 00000003 00000000 14BD2FAC 00000000 00000007 00000000 ...

The first peek looks promising – the hex values are in the “ASCII alphanumerics” range and reading each block of 4 bytes from right to left we get: ‘sele’ ‘ct s’ ‘.ins’ ‘t_id’ which, when we string it together, starts us off with “select s.inst_id” – the start of the view text.
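
If you’d rather not do the byte-reversal in your head, utl_raw can do it for you – a quick sanity check on the first two words of the peek:

select
        utl_raw.cast_to_varchar2(utl_raw.reverse(hextoraw('656C6573'))) w1,     -- 'sele'
        utl_raw.cast_to_varchar2(utl_raw.reverse(hextoraw('73207463'))) w2      -- 'ct s'
from
        dual
/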

The second peek looks as if it’s reporting pairs of (length of string, pointer to string) so I peeked at the first few addresses and found the strings: “INST_ID“, “SADDR“, “SID“, “SERIAL#” which looks like the first few names of the columns in gv$session.

Of course we haven’t yet seen anything telling us the length of the view text or the number of columns in the view – but maybe that’s in the x$kqfvi structure, and you’ll notice that I included the address (addr) from that structure in my query. Here’s what I get from peeking the 80 bytes (the row size indicated by x_rowsize.sql) from that address:

0143490A0 0000000A 00000000 145528BC 00000000  [.........(U.....]
0143490B0 00000004 00000000 12F5D384 00000000  [................]
0143490C0 00000003 00000000 00000000 00000000  [................]
0143490D0 0000119F 00000000 00000068 00000000  [........h.......]
0143490E0 FFFFC15A 00000022 00000000 00000000  [Z..."...........]

The first line is a pointer to the character string “GV$SESSION” (0xA is the length), the second line is a pointer to the character string “NULL” (0x4 is the length) – but I don’t know why that’s there (yet). I don’t know what the third line is about. The fourth line holds the numbers 4511 (dec) and 104 (dec). Since the latter is the number of columns in gv$session, would anyone care to guess that the former is the length of the view text? The last line reports the kqfviobj and kqfviver as seen in the view definition.

The interesting question is whether we can find the full text of the view at the address we peeked because it’s possible that the presentation of the view definition has simply been stopped at the 4,000 bytes mark when it does actually continue in memory. So let’s do a bigger peek of the first address we found in x$kqfvt and check the trace file:

SQL> oradebug peek 0x14598D20 4600 1
[014598D20, 014599F18) = 74736E49 65636E61 00202927 00000000 00000000 00000000 00000000 00000000 656C6573 73207463 736E692E 64695F74 612E732C 2C726464 ...

SQL> ed or19_ora_28809.trc 

I’ve actually started my peek 32 (0x20) bytes earlier than the target address because of the way that Oracle fails to format the first few peeked bytes in the trace file. By peeking a little early I’ll get the whole section that I want to see formatted into readability:

Processing Oradebug command 'peek 0x14598D20 4600 1'
[014598D20, 014599F18) = 74736E49 65636E61 00202927 00000000 00000000 ...
Dump of memory from 0x014598D34 to 0x014599F18
014598D30          00000000 00000000 00000000      [............]
014598D40 656C6573 73207463 736E692E 64695F74  [select s.inst_id]
014598D50 612E732C 2C726464 6E692E73 732C7864  [,s.addr,s.indx,s]
014598D60 75736B2E 65736573 2E732C72 7575736B  [.ksuseser,s.ksuu]
014598D70 73657364 6B2E732C 65737573 2C6F7270  [dses,s.ksusepro,]
014598D80 736B2E73 6C647575 732C6975 75736B2E  [s.ksuudlui,s.ksu]
...
014599E50 20292745 6D6F7266 6B247820 65737573  [E') from x$ksuse]
014599E60 202C7320 736B2478 2064656C 78202C65  [ s, x$ksled e, x]
014599E70 6C736B24 77207477 65687720 62206572  [$kslwt w where b]
014599E80 6E617469 2E732864 7073736B 676C6661  [itand(s.ksspaflg]
014599E90 2129312C 6120303D 6220646E 6E617469  [,1)!=0 and bitan]
014599EA0 2E732864 7375736B 676C6665 2129312C  [d(s.ksuseflg,1)!]
014599EB0 6120303D 7320646E 646E692E 2E773D78  [=0 and s.indx=w.]
014599EC0 776C736B 64697374 646E6120 6B2E7720  [kslwtsid and w.k]
014599ED0 74776C73 3D747665 6E692E65 00007864  [slwtevt=e.indx..]
014599EE0 656C6573 20207463 44444153 202C2052  [select  SADDR , ]
014599EF0 20444953 4553202C 4C414952 202C2023  [SID , SERIAL# , ]
014599F00 53445541 2C204449 44415020 2C205244  [AUDSID , PADDR ,]
014599F10 45535520 2C202352                    [ USER# ,]

I got a little lucky with my guestimate of how much to peek and, as you can see, I’ve got the whole of the view definition (which means I can now see all the tables in the from clause and all the predicates in the where clause).

I can now do some tedious work to edit the trace file to extract the full view and make it look pretty. On the other hand you may think that’s too much like hard work so, as an alternative, you might note that the view definition references x$ksuse, x$ksled and x$kslwt in that order, which means you might get lucky if you just search the executable to see if you can find those three words very close to each other:

[oracle@linux183 trace]$ cd $ORACLE_HOME/bin

[oracle@linux183 bin]$ strings -a oracle | grep -n "x\$ksuse.*x\$ksled.*x\$kslwt"  >temp.txt

[oracle@linux183 bin]$ wc temp.txt
   1  142 4519 temp.txt

We’ve found one line of 4,511 characters (plus 8 due to the -n option) matching our grep search criteria – and it’s the full text of the view.

If you want to find which other views lose the ends of their definitions a simple query will tell you:

set pagesize 60
set linesize 100
set trimspool on
set tab off

column view_name format a32
column view_definition format a60 wrap word

select  view_name, view_definition
from    v$fixed_view_definition
where   length(view_definition) = 4000
/

There were 18 of them in my 19.11.0.0 instance.

Summary

If you’re after the select statements defining any of the dynamic performance views they can be found in the view (g)v$fixed_view_definition. Unfortunately this will report only the first 4,000 bytes of the definition as will x$kqfvt, the x$ structure that exposes the defining text. (This doesn’t change, by the way, if you have set max_string_size to EXTENDED.)

The 4,000 byte limit is an artificial limit imposed by the limits on the external varchar2() type, and the information in the x$kqfvt structure actually points to a string holding the entire view definition. We can use oradebug to peek at the contents of a row in x$kqfvt using the addr column as the target address for peeking. The first 8 bytes will report the address where the view text is located, so we can use oradebug again to peek at that address (less 32 to get the whole string converted to character format) to tell Oracle to dump the entire string into the trace file.

Since it’s likely to be a tedious job to stitch the view text into a single piece from the 16 byte fragments dumped by Oracle you could use the strings o/s command on the oracle executable and search the result for a recognizable piece of text that will pick out just the view definition. If you can’t access the executable, though, the trace file may be the only option – and don’t forget that you may be able to read your trace file from v$diag_trace_file_contents through an SQL*Plus connection.
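
A hedged sketch of that last option – the filename comes from the example above, and you should check the view’s column names on your version:

select  payload
from    v$diag_trace_file_contents
where   trace_filename = 'or19_ora_28809.trc'
order by
        line_number
/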

X$ row sizes

Mon, 2022-03-21 05:59

Here’s a script I’ve used from time to time to help me get some ideas about how to hack through some x$ tables. It’s obviously something that can only be run by the SYS schema and it’s not something to apply arbitrarily to every x$ table as the side effects could be nasty but, on a sandbox instance with enough CPUs and a large enough SGA, it might give you some interesting clues about how Oracle is defining or accessing some of its internal structures.

The first query simply walks through the specified x$ table in index (indx) order comparing the address (addr) of the preceding row with that of the current row and reporting the difference. Some of the x$ tables are likely to give you a random collection of large numbers, which means there is no well-structured row involved. Others will give very regular patterns – though not necessarily a constant value as you move from row to row (take a look at x$ksuru, for example).

The second query takes the lowest and highest addresses of rows in the table, finds the difference and divides by one less than the number of rows in the table (i.e. allowing for the fact that the highest address is the start of a final row, not the end of a row). If the average size value matches the value reported by every other row in the output that’s a fairly strong clue that the x$ table may be a simple array of fixed sized rows.

rem
rem     Script:         x_rowsize.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2002 (probably)
rem
rem     Usage:
rem             Connect as SYS (due to x$)
rem             start x_rowsize {x tablename} {number of rows to list}
rem

set linesize 120
set pagesize  60
set trimspool on
set tab off

prompt  ================================================
prompt  Usage:  @x_rowsize {table_name} {number_of_rows}
prompt  ================================================
pause   Press return to continue

define m_xtab  = &1
define m_rowct = &2

column addr_dec         format 999,999,999,999,999
column lag_addr_dec     format 999,999,999,999,999
column row_size         format 999,999,999,999,999

spool x_rowsize

select 
        addr,
        to_number(addr,'XXXXXXXXXXXXXXXX') addr_dec, 
        to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx') lag_addr_dec, 
        to_char(
                to_number(addr,'XXXXXXXXXXXXXXXX') -
                        to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx'),
                        'xxxxxxxxxxxxxxxx'
        ) row_size_hex,
        to_number(addr,'XXXXXXXXXXXXXXXX') -
                to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx') row_size,
        indx 
from    &m_xtab
where   indx <= &m_rowct
order by 
        indx
/

prompt  ========================================================
prompt  Average row size will (almost certainly) not match the
prompt  row gap listed in the above if the x$ table is segmented
prompt  ========================================================

select
        min(addr), max(addr), count(*),
        (to_number(max(addr),'xxxxxxxxxxxxxxxx') -
                to_number(min(addr),'xxxxxxxxxxxxxxxx')
        ) / (count(*) - 1) avg_row_size
from    &m_xtab
/

spool off

It’s a good idea to start with a small sample size in case a complete scan of a structure crashes the session or even the instance – so here’s the result of calling the script with the parameters x$ksuse 10:

================================================
Usage:  @x_rowsize {table_name} {number_of_rows}
================================================
Press return to continue


ADDR                         ADDR_DEC         LAG_ADDR_DEC ROW_SIZE_HEX                  ROW_SIZE       INDX
---------------- -------------------- -------------------- ----------------- -------------------- ----------
000000008E713678        2,389,784,184                                                                      1
000000008E715E60        2,389,794,400        2,389,784,184              27e8               10,216          2
000000008E718648        2,389,804,616        2,389,794,400              27e8               10,216          3
000000008E71AE30        2,389,814,832        2,389,804,616              27e8               10,216          4
000000008E71D618        2,389,825,048        2,389,814,832              27e8               10,216          5
000000008E71FE00        2,389,835,264        2,389,825,048              27e8               10,216          6
000000008E7225E8        2,389,845,480        2,389,835,264              27e8               10,216          7
000000008E724DD0        2,389,855,696        2,389,845,480              27e8               10,216          8
000000008E7275B8        2,389,865,912        2,389,855,696              27e8               10,216          9
000000008E729DA0        2,389,876,128        2,389,865,912              27e8               10,216         10

10 rows selected.

========================================================
Average row size will (almost certainly) not match the
row gap listed in the above if the x$ table is segmented
========================================================

MIN(ADDR)        MAX(ADDR)          COUNT(*) AVG_ROW_SIZE
---------------- ---------------- ---------- ------------
000000008E713678 000000008EDEDD70        704   10222.1451

1 row selected.

At first sight it looks as if the rowsize for an x$ksuse (v$session) row is 10,216 bytes – but then the average rowsize turns out to be different, so there’s some inconsistency somewhere in x$ksuse that we can’t see in the first few rows.

After running the report for the whole x$ksuse structure I realised that a quick and easy edit would report just the interesting bits: I folded the original query into an inline view and selected only the rows where “row_size != 10216”.
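
A minimal sketch of that edit, using the same expressions as the script above:

select  addr, addr_dec, lag_addr_dec, row_size_hex, row_size, indx
from    (
        select
                addr,
                to_number(addr,'XXXXXXXXXXXXXXXX') addr_dec,
                to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx') lag_addr_dec,
                to_char(
                        to_number(addr,'XXXXXXXXXXXXXXXX') -
                                to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx'),
                        'xxxxxxxxxxxxxxxx'
                ) row_size_hex,
                to_number(addr,'XXXXXXXXXXXXXXXX') -
                        to_number(lag(addr,1) over (order by indx),'xxxxxxxxxxxxxxxx') row_size,
                indx
        from    x$ksuse
        )
where   row_size != 10216
order by
        indx
/

And here's the result: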

ADDR                         ADDR_DEC         LAG_ADDR_DEC ROW_SIZE_HEX                  ROW_SIZE       INDX
---------------- -------------------- -------------------- ----------------- -------------------- ----------
000000008E8CAB98        2,391,583,640        2,391,571,984              2d88               11,656        177
000000008EA820B8        2,393,383,096        2,393,371,440              2d88               11,656        353
000000008EC395D8        2,395,182,552        2,395,170,896              2d88               11,656        529

That looks like an interesting pattern – especially if you’re good at mental arithmetic (or have a calculator handy) and I tell you that there are 704 rows in x$ksuse.

The way I’ve written my code the row size reported for indx = n is the length of row n-1, so out of the 704 rows in x$ksuse, the “interesting” rows are 176, 352, and 528 and when you check the arithmetic you realise:

  • 176 = 704 / 4,
  • 352 = 704 / 2,
  • 528 = 704 * 3/4

Maybe it’s not a coincidence that this instance is running on 4 CPUs; moreover, with that thought in mind, if I look at v$latch_children to see if there are any latches with 4 children I find (amongst others) the following latch names:

client/application info
session allocation
session idle bit
session switching

There may be other latches that are also relevant but these 4 latches have names that suggest you might want to acquire one of them if you wanted to modify the contents of a session-related row – in other words if you wanted to modify the contents of a row in x$ksuse. So maybe Oracle has checked the cpu_count on startup and created the v$session/x$ksuse array in equal sized pieces, one for each CPU, so that it can minimise contention when trying to modify data in x$ksuse. And following that thought I’ll just add that I had set sessions = 702 in my parameter file, and Oracle had rounded that up to a multiple of 4.
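
In case you want to reproduce the check, the list of names came from a query something like this:

select  name, count(*) child_count
from    v$latch_children
group by
        name
having  count(*) = 4
order by
        name
/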

The hypothesis is easy to check – just bounce the database a few times, using a pfile rather than an spfile, change the cpu_count and sessions parameter each time and see if Oracle always rounds sessions up (if needed) to a multiple of cpu_count and then splits the array into cpu_count equal pieces.
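
A sketch of one iteration – the parameter values are purely illustrative:

# hypothetical pfile settings for one restart
*.cpu_count=6
*.sessions=100

-- after startup: has sessions been rounded up to 102,
-- the next multiple of cpu_count, if the hypothesis holds?
select  name, value
from    v$parameter
where   name in ('cpu_count','sessions')
/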

And that certainly seemed to be the case on my little machine. (Obviously I’m not in a position to see what would happen on a machine with 256 CPUs and sessions set to 16,000 – so there might be some sanity checks for extreme cases.)

Jumping to conclusions

Going back to the output from my initial setup, it looks as if each row in x$ksuse is going to demand 10,216 bytes, and that there may be 1,440 (viz: 11,656 – 10,216) bytes needed as some sort of “header” for each segment of the array, but what sort of header could need 1,440 bytes? I can’t help noticing that if you wanted to store 176 pointers of 8 bytes each that’s a total of 1,408 bytes – so maybe the “header” is mostly a list of pointers to the rows – just like the row directory in a data block!

It’s a thought that might be heading in the right direction, but perhaps it’s a little premature. Fortunately on one of my restarts one of the gaps showed up as roughly minus 30,000,000. So I need to find an explanation that covers this case as well – but that’s okay because I soon came up with a hypothesis (that I haven’t actually confirmed with a reproducible model yet).

I know that my memory granule size is 16MB (select * from V$sgainfo). What if Oracle had allocated the first few sections of x$ksuse and then found that it didn’t have enough room left in the current granule for the next piece of the array? It would have to start using another granule. What if the granules were attached to the shared pool “from the top downwards” – the next granule would have a starting address that was 16MB lower than the current granule – so the step from the end of the current granule to the beginning of the next granule would be around 32MB backwards. Allowing for the fact that I was experimenting with numbers that produced sizes of about 2MB (200 rows @ 10,216 bytes) for each piece of the array the hypothesis is reasonably consistent with the step size of minus 30 million.
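
For reference, the granule check is just (values will vary with your SGA size):

select  name, bytes
from    v$sgainfo
where   name = 'Granule Size'
/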

Further experimentation is left as an exercise for the interested reader, but if I wanted to test the granule hypothesis I could probably experiment with carefully selected values for sessions, maybe starting the instance with a reduced granule size (parameter _ksmg_granule_size) to see if I could get two or more backwards jumps in the array. But I think I’ve pursued x$ksuse far enough for the moment. Except I have one more (generic) problem that it highlights.

It’s not that simple

If I query x$kqfta – the list of x$ tables, I see the following for x$ksuse:

SQL> execute print_table('select * from x$kqfta where kqftanam = ''X$KSUSE''')
ADDR                           : 00000000143D1DD0
INDX                           : 43
INST_ID                        : 1
CON_ID                         : 0
KQFTAOBJ                       : 4294951004
KQFTAVER                       : 42
KQFTANAM                       : X$KSUSE
KQFTATYP                       : 2
KQFTAFLG                       : 1
KQFTARSZ                       : 8
KQFTACOC                       : 113

-----------------

1 row(s) selected

The column kqftarsz is the “Row SiZe” and 8 is a long way off the 10,216 that I’ve derived from the addresses of consecutive rows in the table. So how does a reported row size of 8 bytes turn into a 10,216 byte row? (I wonder if it’s describing the rows in the table of pointers to the rows, not the length of the rows themselves). But (apparently) it’s neither 8 nor 10,216 anyway, because if I look at x$kqfco (the list of x$ columns) for x$ksuse I see the following:

select
        co.indx,
        co.kqfconam     column_name,
        co.kqfcosiz     column_size,
        co.kqfcooff     column_offset
from
        x$kqfta ta,
        x$kqfco co
where
        co.kqfcotab = ta.indx
and     ta.kqftanam = 'X$KSUSE'
order by
        co.kqfcooff,
        co.indx
/

      INDX COLUMN_NAME                      COLUMN_SIZE COLUMN_OFFSET
---------- -------------------------------- ----------- -------------
       503 ADDR                                       8             0
       504 INDX                                       4             0
       505 INST_ID                                    4             0
       506 CON_ID                                     2             0
       507 KSSPAFLG                                   4             0
       508 KSSPAOWN                                   8             0
...
       603 KSUSELTXIDFLG                              4          9636
       604 KSUSELTXID                                 8          9648
       615 KSUSEFLG3                                  4         10000

According to this the last column of the row starts at byte 10,000 and ends at 10,003, giving us a third possible length for the row and a gap of 212 bytes to explain; and even though there are several x$ tables (x$ksuse, x$ksusio, x$ksusecst, x$ksuru, x$ksusm) starting at the same address on the same internal structure, none of them has a column definition explaining the gap.

It looks as if any further investigation (of x$ksuse at least) will have to depend on dumping memory to trace files and chasing pointers.

Summary

When you try to find out how big a “row” in an x$ structure is you may find several pieces of contradictory information:

  1. The rowsize (kqftarsz) reported in x$kqfta.
  2. The offset + length of the final column in the description in x$kqfco.
  3. The “gap” between consecutive rows when you query addr in the object itself.

Using oradebug to peek a couple of rows may help you uncover the truth. But you may end up chasing pointers to pointers to pointers to get to the ultimate source.

ANSI catalogue

Sun, 2022-03-20 03:19

This is a list of articles I’ve written that pick up some detail about using features of the ANSI syntax that are (or were) less commonly used in traditional Oracle SQL. Each entry has a date stamp and a short note of the contents. The articles are generally listed most-recent first.

v$ v_$ v$

Fri, 2022-03-18 12:33

One of the “curiosity” questions that comes up from time to time on the various forums is the one about the cascade of names from synonym to view to object that Oracle uses to provide access to the dynamic performance views. Typically we see the pattern:

  • v$XXX is a public synonym for a stored view v_$XXX
  • v_$XXX is defined as ‘select {list of columns} from v$XXX’

So, after two steps, we seem to be back where we started. Why has Oracle done this? I came up with the hypothesis a long time ago that there were two parts to the answer; the first is a privilege requirement, the second is for convenience.

SYS is a very special user (as many people have found when running SQL in the sys schema and finding that it doesn’t behave in exactly the same way it does in every other schema) and access to the dynamic performance views is one of the most exotic features of the SYS account. Underneath the respectable veneer of the dynamic performance (v$) views lies the murky horror that is the x$ layer where a “table” may turn out to be a function call, some form of memory array, a linked list, or a hideously complex structure that needs to be filtered, fragmented, reassembled and sorted to yield something that looks like a row containing useful information – an “object” may even be a function that reads an array of pointers to linked lists of hideously complex structures.

So the dynamic performance views hide nasty things and SYS is the only schema allowed to execute the Oracle kernel code to interpret and display those things. But the developers of tools like EM may find it highly desirable to get access to the contents of the dynamic performance views – so Oracle creates “real” (i.e. stored) views in the SYS schema to expose the contents of the dynamic performance views and may grant select privileges on those stored views to ordinary users. Of course, since the views are created by SYS the stored queries defining the view contents operate with the privileges of SYS, which means the views can execute the kernel code and return the correct results.

Everyone happy so far?

So now a developer writes a really terrific query that they test very carefully against the dynamic performance views to make sure it’s producing the right results. And when it’s working they have to write a new version of the query using the names of the stored views rather than the names of the dynamic performance views because they also want to include their query in an Enterprise Manager / Grid Control monitor screen.

But why have two versions of a query when, by creating a suitable set of synonyms, a single version of the query will work? Choose your synonym names carefully and when the super query is run by SYS the parser will interpret the names as direct references to the dynamic performance views, and when it’s run by any other user (who’s had select granted on the stored views) it will translate the names of synonyms to names of stored views and onwards to the names of dynamic performance views.

So: the stored views make it possible for non-SYS users to run SQL with the privileges of SYS; the cunning use of synonyms means the same SQL text can be run by SYS and non-SYS users and mean the same thing.
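
In concrete terms the pattern for (say) v$session boils down to a few statements executed by SYS – this is a paraphrase of what you’ll find in catalog.sql, not a verbatim extract:

create or replace view v_$session as select * from v$session;
grant select on v_$session to select_catalog_role;
create or replace public synonym v$session for v_$session;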

Lagniappe

There’s one dynamic performance view that breaks the general pattern, the one about sequences. Try running the following as SYS – the first triplet demonstrates the general pattern – the second doesn’t:

select 'dynamic' object_type, kqfvinam     object_name from x$kqfvi       where kqfvinam     like 'V%SQLAREA'
union all
select 'real view',           view_name                from dba_views     where view_name    like 'V%SQLAREA'
union all
select 'synonym',             synonym_name             from dba_synonyms  where synonym_name like 'V%SQLAREA';

select 'dynamic' object_type, kqfvinam     object_name from x$kqfvi       where kqfvinam     like 'V%SEQUENCES'
union all
select 'real view',           view_name                from dba_views     where view_name    like 'V%SEQUENCES'
union all
select 'synonym',             synonym_name             from dba_synonyms  where synonym_name like 'V%SEQUENCES';

The dynamic performance view v$_sequences doesn’t follow the pattern. There is no associated stored view, hence no possibility of a working synonym. (The dynamic performance view is also unusual in having an underscore after the v$, maybe that’s why some automatic code to generate the view and synonym didn’t work for it ;)

You could argue that v$_lock and v$_lock1 also break the pattern, but they’re just Oracle flailing around frantically but deliberately invisibly under v$lock, which does follow the pattern.

You might also point out that the one pair of dynamic performance views that start with GO$/O$ also breaks the pattern as the stored view and synonym change the ‘O’s to ‘V’s.

Excavating x$

Wed, 2022-03-16 18:13

If you ever feel the need to dig down into the “dynamic performance views (v$)”, and then go a little deeper to get to the “internal structures (x$)” there are 5 objects in the list of x$ that you will eventually want to know about.

  • x$kqfta – headline information about the x$ structures – name, size, column count.
  • x$kqfvi – a list of the names of the dynamic performance views (mostly pairs of v$/gv$)
  • x$kqfvt – a list of the select statements that define the views in x$kqfvi
  • x$kqfdt – a list of “derived tables”: a cross between synonyms and views of a few of the structures in x$kqfta
  • x$kqfco – a list of the “columns” in each of the tables in x$kqfta (but not x$kqfvi and x$kqfdt)

I’ve listed the tables in this order because that’s the way they’re indexed (0 – 4) in x$kqfta, but your journey is likely to start from the view v$fixed_table, or the view v$fixed_view_definition. From a session connected as SYS to a cdb$root in 19.11.0.0:

SQL> describe v$fixed_view_definition
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 VIEW_NAME                                          VARCHAR2(128)
 VIEW_DEFINITION                                    VARCHAR2(4000)
 CON_ID                                             NUMBER

SQL> select view_definition
  2  from v$fixed_view_definition
  3  where view_name = 'V$FIXED_VIEW_DEFINITION'
  4  /

VIEW_DEFINITION
--------------------------------------------------------------------------------
select  VIEW_NAME , VIEW_DEFINITION, CON_ID from GV$FIXED_VIEW_DEFINITION where
inst_id = USERENV('Instance')


1 row selected.

SQL> select view_definition
  2  from v$fixed_view_definition
  3  where view_name = 'GV$FIXED_VIEW_DEFINITION'
  4  /

VIEW_DEFINITION
--------------------------------------------------------------------------------
select i.inst_id,kqfvinam,kqftpsel, i.con_id   from x$kqfvi i, x$kqfvt t where i
.indx = t.indx

There’s a fairly common pattern in the fixed view definitions – they often come in pairs, one starting GV$, the other starting V$, and the V$ is usually just “select most of the columns from the gv$ for this instance”. The GV$ views are the “Global” views relevant to RAC, the V$ views are local to the current instance. (Just pretend you haven’t noticed that Oracle uses the deprecated userenv() function to find the instance id rather than the sys_context(‘userenv’,{variable}) function.)

If you are running RAC and query one of the GV$ views you’ll find that every other instance starts a Parallel Execution process to create its local results for the calling instance. This PX process doesn’t come from (and will ignore the limit on) the normal pool of PX processes and will have a name like PZnn rather than the traditional Pnnn.

You’ll note in the result from the last query that gv$fixed_view_definition is a simple join of x$kqfvi and x$kqfvt on the indx column. There is a problem with this view – hidden in plain sight – but I’ll have to come back to that later.

We can go through a similar set of queries to find the definition of v$fixed_table:

SQL> select view_definition
  2  from v$fixed_view_definition
  3  where view_name = 'V$FIXED_TABLE'
  4  /

VIEW_DEFINITION
--------------------------------------------------------------------------------
select  NAME , OBJECT_ID , TYPE , TABLE_NUM, CON_ID from GV$FIXED_TABLE where in
st_id = USERENV('Instance')


1 row selected.

SQL> select view_definition
  2  from v$fixed_view_definition
  3  where view_name = 'GV$FIXED_TABLE'
  4  /

VIEW_DEFINITION
--------------------------------------------------------------------------------
select inst_id,kqftanam, kqftaobj, 'TABLE', indx, con_id from x$kqfta union all
select inst_id,kqfvinam, kqfviobj, 'VIEW', 65537, con_id from x$kqfvi union all
select inst_id,kqfdtnam, kqfdtobj, 'TABLE', 65537, con_id from x$kqfdt


1 row selected.

So the “fixed tables” are really the combination of the basic structures, views on the structures, and the “derived tables”. The objects in the UNION ALL each have a unique object_id in a range close to power(2,32) and (for the tables in x$kqfta) if you make a call to dbms_stats.gather_table_stats() that object_id will be used to store the statistics in the data dictionary.
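
For example – a hedged sketch, to be run as SYS on a sandbox only:

execute dbms_stats.gather_table_stats('SYS','X$KSLED')

select  owner, table_name, object_type, num_rows
from    dba_tab_statistics
where   table_name = 'X$KSLED'
/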

So what are the “derived tables”? There are two ways to address this question – query the list, or find a place where one is used. Here’s a full listing of x$kqfdt (still 19.11.0.0):

SQL> select * from x$kqfdt;

ADDR                   INDX    INST_ID     CON_ID   KQFDTOBJ KQFDTNAM                         KQFDTEQU
---------------- ---------- ---------- ---------- ---------- -------------------------------- ---------------------------
000000001454E6C0          0          1          0 4294952735 X$KSLLTR_CHILDREN                X$KSLLTR
000000001454E6E8          1          1          0 4294952736 X$KSLLTR_PARENT                  X$KSLLTR
000000001454E710          2          1          0 4294956013 X$KSLLTR_OSP                     X$KSLLTR
000000001454E738          3          1          0 4294956014 X$KSLWSC_OSP                     X$KSLWSC
000000001454E760          4          1          0 4294951073 X$KCVFHONL                       X$KCVFH
000000001454E788          5          1          0 4294951074 X$KCVFHMRR                       X$KCVFH
000000001454E7B0          6          1          0 4294951440 X$KCVFHALL                       X$KCVFH
000000001454E7D8          7          1          0 4294951056 X$KGLTABLE                       X$KGLOB
000000001454E800          8          1          0 4294951057 X$KGLBODY                        X$KGLOB
000000001454E828          9          1          0 4294951058 X$KGLTRIGGER                     X$KGLOB
000000001454E850         10          1          0 4294951059 X$KGLINDEX                       X$KGLOB
000000001454E878         11          1          0 4294951060 X$KGLCLUSTER                     X$KGLOB
000000001454E8A0         12          1          0 4294951061 X$KGLCURSOR                      X$KGLOB
000000001454E8C8         13          1          0 4294952684 X$KGLCURSOR_CHILD_SQLID          X$KGLOB
000000001454E8F0         14          1          0 4294952680 X$KGLCURSOR_CHILD_SQLIDPH        X$KGLOB
000000001454E918         15          1          0 4294952683 X$KGLCURSOR_CHILD                X$KGLOB
000000001454E940         16          1          0 4294953372 X$KGLCURSOR_PARENT               X$KGLOB
000000001454E968         17          1          0 4294953759 X$KGLSQLTXL                      X$KGLOB
000000001454E990         18          1          0 4294956135 X$ALL_KQLFXPL                    X$KQLFXPL
000000001454E9B8         19          1          0 4294953124 X$KKSSQLSTAT_PLAN_HASH           X$KKSSQLSTAT
000000001454E9E0         20          1          0 4294953231 X$ZASAXTD1                       X$ZASAXTAB
000000001454EA08         21          1          0 4294953232 X$ZASAXTD2                       X$ZASAXTAB
000000001454EA30         22          1          0 4294953233 X$ZASAXTD3                       X$ZASAXTAB
000000001454EA58         23          1          0 4294951597 X$JOXFS                          X$JOXFT
000000001454EA80         24          1          0 4294951598 X$JOXFC                          X$JOXFT
000000001454EAA8         25          1          0 4294951599 X$JOXFR                          X$JOXFT
000000001454EAD0         26          1          0 4294951621 X$JOXFD                          X$JOXFT
000000001454EAF8         27          1          0 4294952364 X$JOXOBJ                         X$JOXFT
000000001454EB20         28          1          0 4294952365 X$JOXSCD                         X$JOXFT
000000001454EB48         29          1          0 4294952366 X$JOXRSV                         X$JOXFT
000000001454EB70         30          1          0 4294952367 X$JOXREF                         X$JOXFT
000000001454EB98         31          1          0 4294952368 X$JOXDRC                         X$JOXFT
000000001454EBC0         32          1          0 4294952369 X$JOXDRR                         X$JOXFT
000000001454EBE8         33          1          0 4294952370 X$JOXMOB                         X$JOXFM
000000001454EC10         34          1          0 4294952371 X$JOXMIF                         X$JOXFM
000000001454EC38         35          1          0 4294952372 X$JOXMIC                         X$JOXFM
000000001454EC60         36          1          0 4294952373 X$JOXMFD                         X$JOXFM
000000001454EC88         37          1          0 4294952374 X$JOXMMD                         X$JOXFM
000000001454ECB0         38          1          0 4294952375 X$JOXMAG                         X$JOXFM
000000001454ECD8         39          1          0 4294952376 X$JOXMEX                         X$JOXFM
000000001454ED00         40          1          0 4294956138 X$ALL_ASH                        X$ASH
000000001454ED28         41          1          0 4294956123 X$ALL_KESWXMON                   X$KESWXMON
000000001454ED50         42          1          0 4294956126 X$ALL_KESWXMON_PLAN              X$KESWXMON_PLAN

43 rows selected.

What it comes down to is that there are several different ways of addressing some of the base objects. x$kglob is the nicest example of this with 11 “equivalences”. I don’t know why Oracle has taken this approach – but it may reflect the fact that some of the underlying structures don’t look anything like “rows” and different derived tables expose different components of the same underlying structure in mutually incompatible ways. Here’s an example showing one of these derived tables making an appearance:

SQL> explain plan for
  2  select  child_number, plan_hash_value, sql_text 
  3  from V$sql where sql_id = '7m239n32gftgh';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------
Plan hash value: 1346707802

-----------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                           |     1 |   549 |     0   (0)| 00:00:01 |
|*  1 |  FIXED TABLE FIXED INDEX| X$KGLCURSOR_CHILD (ind:2) |     1 |   549 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("KGLOBT03"='7m239n32gftgh' AND "INST_ID"=USERENV('INSTANCE'))

13 rows selected.

When you try to find x$kglcursor_child in the fixed tables (x$kqfta) it’s not there – but a check against x$kqfdt tells us that it’s just another way of addressing x$kglob.

This plan also shows an interesting feature in the table access Operation, which is reported as “Fixed table fixed index” with “ind:2” appended when we check the Name column of the plan.

In this context “index” has a non-standard meaning – x$kglob is a memory structure helping to describe the contents of the Library Cache, so we shouldn’t expect to see anything that looks like a traditional index. Here it just means that Oracle has a particularly efficient access path if your predicate is of the right type. We could guess in this case that it’s the predicate “kglobt03 = '7m239n32gftgh'” that Oracle is using, but it would be nice to check, and nice to know what other “indexed” access paths there are into the structure (this is, after all, “ind:2”, so there ought to be an “ind:1”). Conveniently Oracle has a dynamic performance view called v$indexed_fixed_column that we can query:

SQL> select * from v$indexed_fixed_column where table_name = 'X$KGLOB';

TABLE_NAME                INDEX_NUMBER COLUMN_NAME          COLUMN_POSITION     CON_ID
------------------------- ------------ -------------------- --------------- ----------
X$KGLOB                              1 KGLNAHSH                           0          0
X$KGLOB                              2 KGLOBT03                           0          0

2 rows selected.

Note that the Index_Number column tells us that kglobt03 is defined as the 2nd index on the table – which is consistent with the “ind:2” in the plan. Of course, it’s not immediately obvious, and can take a bit of effort to work out, that kglnahsh corresponds to v$sql.hash_value and kglobt03 corresponds to v$sql.sql_id; but it’s nice to know that we can find all the efficient access paths if we need to tune a bit of “expert” SQL that we’ve discovered on the internet.
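
So a query that wants the efficient path into x$kglob simply supplies a predicate on one of those two columns – a hedged sketch (kglnaobj, which I believe holds the object name, is my assumption):

select  kglnaobj, kglobt03 sql_id, kglnahsh hash_value
from    x$kglob
where   kglobt03 = '7m239n32gftgh'
/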

Even more exciting – we can go back to v$fixed_view_definition and find out what structure(s) are sitting underneath v$indexed_fixed_column:

SQL> select view_definition
  2  from v$fixed_view_definition
  3  where view_name = 'GV$INDEXED_FIXED_COLUMN'
  4  /

VIEW_DEFINITION
--------------------------------------------------------------------------------
select c.inst_id,kqftanam, kqfcoidx, kqfconam, kqfcoipo, c.con_id  from x$kqfco
c, x$kqfta t where t.indx = c.kqfcotab and kqfcoidx != 0

It’s a join between x$kqfta and x$kqfco – the fifth item on our list: the complete list of x$ columns. And you just can’t help wondering what results you would get if you removed the filter on column kqfcoidx. So here’s a little script I wrote a long time ago to do just that:

rem
rem     Script:         x_desc.sql
rem     Author:         Jonathan Lewis
rem     Dated:          September 2002
rem     Purpose:        Describe the X$ objects
rem
rem     Notes:
rem     This script can only be run by SYS
rem

set newpage 0
set pagesize 60
set linesize 132
set trimspool on
set tab off

break on kqftanam skip page
column kqftanam new_value m_table noprint

column  kqfconam        heading "Column" format a34
column  kqfcodty        heading "Ext Type" 
column  kqfcotyp        heading "Int Type"
column  kqfcomax        heading "Array Max"
column  kqfcolsz        heading "Len Size"
column  kqfcolof        heading "Len Offset"
column  kqfcosiz        heading "Col Size"
column  kqfcooff        heading "Offset"
column  kqfcoidx        heading "Index"
column  kqfcoipo        heading "Idx Col"

ttitle -
        skip 1 -
        center m_table -
        skip 2

spool x_desc

select
        ta.kqftanam,
        co.kqfconam,
        co.kqfcodty,
        co.kqfcotyp,
        co.kqfcomax,
        co.kqfcolsz,
        co.kqfcolof,
        co.kqfcosiz,
        co.kqfcooff,
        co.kqfcoidx,
        co.kqfcoipo
from
        x$kqfta ta,
        x$kqfco co
where
        co.kqfcotab = ta.indx
order by
        ta.kqftanam,
        co.kqfcooff,
        co.indx
;

spool off

And here’s the result for just one “table” in 19.11.0.0 – x$ksled (corresponding to v$event_name) together with a query to report its column count and row size from x$kqfta.

Column                       Ext Type   Int Type  Array Max   Len Size Len Offset   Col Size     Offset      Index    Idx Col
-------------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
ADDR                               23          9          0          0          0          8          0          1          0
INDX                                2         11          0          0          0          4          0          2          0
INST_ID                             2         11          0          0          0          4          0          0          0
CON_ID                              2         11          0          0          0          2          0          0          0
KSLEDNAM                            1          6          0          0          0         64          0          0          0
KSLEDCLASSID                        2          7          0          0          0          4          0          0          0
KSLEDCLASS                          1          7          0          0          0         64          0          0          0
KSLEDP1                             1          6          0          0          0         64          8          0          0
KSLEDP2                             1          6          0          0          0         64         16          0          0
KSLEDP3                             1          6          0          0          0         64         24          0          0
KSLEDFLG                            2         11          0          0          0          4         32          0          0
KSLEDCLASS#                         2          0          0          0          0          2         36          0          0
KSLEDDSP                            1          6          0          0          0         64         40          0          0
KSLEDHASH                           2         11          0          0          0          4         52          0          0

SQL> select kqftarsz, kqftacoc from X$kqfta where kqftanam = 'X$KSLED';

  KQFTARSZ   KQFTACOC
---------- ----------
        56         14

In this particular case you can see that the column count (14) matches – I haven’t checked if this is always true, but I suspect it is – and in this case we can see from a quick check of the highest offset (52) plus its column size (4) that the row size (56) matches as well, but that’s not always the case as we shall see in other examples.

There are, inevitably, some puzzles in this output.

  • How can we have 7 columns all starting at the same zero offset?
  • How can you have a column with an offset of 16 (ksledp2) when the previous column has an offset of 8 and a length of 64?
  • Might there be an interesting 2 bytes of information between ksledclass# (offset 36 length 2) and ksleddsp (offset 40)?

I won’t attempt to unravel the mystery any further at this point, but I will just point out that there are two “indexes” on this “table”. To access a row efficiently you can either query by addr (its 64-bit address) or by indx (which suggests it’s stored in memory as a simple, non-segmented array – which is another detail I’ll explain at a later date).

Coming soon

The problem with v$fixed_view_definition – and how to circumvent it.

Notes on interpreting the columns in x$kqfco.

Three ways of viewing row sizes – which give different results!

I will leave you with one last thought – the X$ “tables” fall into 4 different categories and you can identify the category by checking the address (addr) of the first row of each table (provided the table has some data in it, of course).

Session count

Fri, 2022-03-11 06:38

This note is an extended answer to a question that Kyle Hailey raised on the oracle-l list server a couple of days ago. Rather than repeat the question immediately, I’ll start with a little script (to be run by a suitably privileged account – which eventually will have to be SYS) and then tell you the question.

rem
rem     Script:         xksuse.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2022
rem     Purpose:        
rem
rem     Last tested 
rem             21.3.0.0
rem             19.11.0.0
rem
rem     Notes:
rem     Has to be run by sys in the cdb$root
rem
rem     See also:
rem     https://www.freelists.org/post/oracle-l/sysmetric-Session-Count-Count-from-vsession
rem

select value from v$sysmetric where metric_name = 'Session Count';

select current_utilization from v$resource_limit where resource_name = 'sessions';

select count(*) from v$session;

Here’s a set of results taken from a test instance of 19.11.0.0

     VALUE
----------
	78

CURRENT_UTILIZATION
-------------------
		 78

  COUNT(*)
----------
	54

The question on oracle-l was: “why doesn’t v$sysmetric agree with the count of v$session?”

This raises an interesting question about licensing. I don’t really pay much attention to licensing since I don’t run a production Oracle database any more, but I have a vague memory that for some versions there were things like “named user” licenses and “concurrent sessions” licenses – and there are a couple of parameters you can set to avoid the risk of breaching the license agreement:

Parameter                       Description   
-------------------------------------------------------------------------------------------------
license_max_sessions            maximum number of non-system user sessions allowed
license_max_users               maximum number of named users that can be created in the database

Do these license agreements still exist? Has anyone signed up for them?

I haven’t set either of these parameters in my database so they’ve defaulted to show zero, but by running up several extra sessions and checking the v$sysmetric entry “Session Limit %” I could infer that the default value for license_max_sessions was 1,000 (for my instance on my machine, possibly affected by other parameter settings), and that the limit percentage was based on the v$sysmetric value not a count from v$session. (And if it had been based on the v$session count that would have been almost as bad since the description of the parameter is “non-system user sessions” – and of the 54 sessions reported above only 2 of them were of type “USER”.)
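
For reference, the inference came from watching the relevant metric as I added sessions – something like:

select  metric_name, value
from    v$sysmetric
where   metric_name like 'Session Limit%'
/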

The Geek Bit

Before Kyle mentioned the license issue I was only interested in finding out why there was a difference in the counts. The starting point for this was to look at the x$ structure underlying v$session – and the critical one is x$ksuse, although for some time the view has included a join to two other x$ structures: x$kslwt (v$session_wait) and x$ksled (v$event_name).

If you want to see the definition of v$session (or, rather, gv$session) you will run into a problem if you query v$fixed_view_definition because the definition of v$session is longer than the varchar2(4000) limit.

A workaround to this problem is to extract the full definition from the oracle executable which, for Unix systems you can do with a command like:

strings -a oracle | grep -inT "x\$ksuse" | fold >temp.txt

You’ll have to search the (fairly small) text file to find the right definition, of course, as there are several dynamic performance views that include x$ksuse or x$ things with names that start the same way. When you’ve identified the correct definition you’ll note that there are two predicates applied to the structure to generate v$session (and these have not changed since at least 8.1.7.4 – which is the oldest listing of the views I have stored):

where
        bitand(s.ksspaflg,1)!=0 
and     bitand(s.ksuseflg,1)!=0

Based on these predicates I wrote a simple script to report sessions (x$ entries) that matched each of the predicates separately:

select 'U' flag, count(*) from x$ksuse where bitand(ksuseflg,1)!=0
union all
select 'S' flag, count(*) from x$ksuse where bitand(ksspaflg,1)!=0
/

F   COUNT(*)
- ----------
U         61
S         78

The count of 78 ‘S’ type rows matches (maybe coincidentally) the v$resource_limit and v$sysmetric figures, but there’s an interesting excess built into the ‘U’ type count when compared with the count from v$session. So the next step is to drill into the detail a bit.

To drill down I wrote a query to list the program column (ksusepnm) with a count of how many rows there were for that program name in the union all. For many program names we could expect to see the answer 2 – one row of type U and one of type S – but there are likely to be many end-user sessions showing some format of connection mechanism such as (TNS V1-V3). Here’s the script with a predicate showing only the “interesting” rows (i.e. the ones where the result is not 2):

set null n/a

column ksusepnm format a40

select ksusepnm, count(*) from (
select 'U' flag, ksusepnm from x$ksuse where bitand(ksuseflg,1)!=0
union all
select 'S' flag, ksusepnm from x$ksuse where bitand(ksspaflg,1)!=0
)
group by ksusepnm
having count(*) != 2
order by count(*)
/

KSUSEPNM                                   COUNT(*)
---------------------------------------- ----------
oracle@linux183.localdomain (P001)                1
oracle@linux183.localdomain (P002)                1
oracle@linux183.localdomain (J00B)                1
oracle@linux183.localdomain (P000)                1
oracle@linux183.localdomain (MZ00)                1
oracle@linux183.localdomain (MZ01)                1
oracle@linux183.localdomain (P003)                1
oracle@linux183.localdomain (SCMN)                4
sqlplus@linux183.localdomain (TNS V1-V3)          4
n/a                                              24

10 rows selected.

We can ignore the sqlplus rows, they correspond to the two sessions (reported twice each) that are visible in v$session. Similarly we can ignore the SCMN rows, of which there are two in my v$session at present, but we have 24 rows with no program, and 7 rows that appear only in one of the two classes.

If we take the 7 rows where the count is only 1, we note that they are all named for “slave” processes: Jnnn are job queue slaves, Pnnn are parallel execution slaves, and MZnn are manageability monitor (MMON) slaves which, the operating system tells me, don’t currently exist.

The interesting thing is the set of 24 rows that have no associated program; and to look at those I’m just going to list the whole dataset rather than counting it and (since the volume is small in my case) examine the results by eye.

select 'U' flag, ksusepnm from x$ksuse where bitand(ksuseflg,1)!=0
union all
select 'S' flag, ksusepnm from x$ksuse where bitand(ksspaflg,1)!=0
order by 1,2
/

I won’t reproduce the results but the key detail was that all the rows where the program name was null were from the S class (and that’s not too surprising, really).
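
Rather than eyeballing the listing you could, of course, get SQL to confirm the point with a trivial variation on the previous query:

select  flag, count(*)
from    (
        select 'U' flag, ksusepnm from x$ksuse where bitand(ksuseflg,1) != 0
        union all
        select 'S' flag, ksusepnm from x$ksuse where bitand(ksspaflg,1) != 0
        )
where   ksusepnm is null
group by
        flag
/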

The next (and final, so far) check was to add a few columns to the simple query and change the sort order to see if anything stood out:

break on flag 
compute count of ksspatyp on flag

select 
        'U' flag, indx, ksuseser, ksspatyp, ksusepnm, bitand(ksspaflg,1) spaflg, bitand(ksuseflg,1) useflag, con_id, ksusepro 
from    x$ksuse 
where   bitand(ksuseflg,1)!=0
union all
select 
        'S' flag, indx, ksuseser, ksspatyp, ksusepnm, bitand(ksspaflg,1) spaflg, bitand(ksuseflg,1) useflag, con_id, ksusepro 
from    x$ksuse 
where   bitand(ksspaflg,1)!=0
order by 
        flag, ksusepro, ksusepnm
/

The break/compute commands allowed me to check that the totals for the S class and U class still matched the original 78 and 61 – dynamic structures can change very quickly, and some of the oddities were about background processes. The interesting rows in the U class were as follows:

F       INDX   KSUSESER   KSSPATYP KSUSEPNM                                     SPAFLG    USEFLAG     CON_ID KSUSEPRO
- ---------- ---------- ---------- ---------------------------------------- ---------- ---------- ---------- ----------------
U        533          0          0 oracle@linux183.localdomain (J00B)                0          1          1 00
         546          0          0 oracle@linux183.localdomain (MZ00)                0          1          0 00
         539          0          0 oracle@linux183.localdomain (MZ01)                0          1          0 00
         537          0          0 oracle@linux183.localdomain (P000)                0          1          0 00
         525          0          0 oracle@linux183.localdomain (P001)                0          1          0 00
         544          0          0 oracle@linux183.localdomain (P002)                0          1          0 00
          49          0          0 oracle@linux183.localdomain (P003)                0          1          0 00

Basically this looks like a set of processes which used to be “user” processes, which have terminated and shut down, but whose entries in x$ksuse have not been completely “cleaned” out. The numbers are nice because we have 61 rows in this class, 7 of which show “no serial#, no process”, and v$session is reporting 54 sessions – which is 61 minus 7.
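
As a quick cross-check – and this is only a sketch based on my results, since I’m assuming that a zero serial# is a safe signature for these stale entries – you can split the U class count and arrive back at the v$session figure:

select
        count(*)                                   u_class,
        count(case when ksuseser  = 0 then 1 end)  stale,
        count(case when ksuseser != 0 then 1 end)  live
from    x$ksuse
where   bitand(ksuseflg,1) != 0
/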

So what about the S class rows? I’ve cunningly sorted by ksusepro which, if you hadn’t guessed, is the paddr of the process supporting the session. Do we have any cases where multiple sessions seem to be supported by a single process? This is where I viewed the output and noticed that I could show the interesting detail with a simple call to grep plus a little cosmetic editing:

grep -nT -B+1 "                                " xksuse.lst >temp.txt

                 INDX   KSUSESER   KSSPATYP KSUSEPNM                                     SPAFLG    USEFLAG     CON_ID KSUSEPRO
                 ----   --------   -------- -------------------------------------------- ------    -------     ------ ----------------
 270    -           3      62786          1 oracle@linux183.localdomain (GEN0)                1          1          0 000000008E567480
 271    :          26       4581          1                                                   1          0          0 000000008E567480
 274    -         504       2420          1 oracle@linux183.localdomain (GEN1)                1          1          0 000000008E56B4D0
 275    :         519      53839          1                                                   1          0          0 000000008E56B4D0
 278    -           6      44715          1 oracle@linux183.localdomain (OFSD)                1          1          0 000000008E56F520
 279    :          27      60151          1                                                   1          0          0 000000008E56F520
 281    -           4      65197          1 oracle@linux183.localdomain (DBRM)                1          1          0 000000008E572000
 282    :          25      41180          1                                                   1          0          0 000000008E572000
 294    -          13      25844          1 oracle@linux183.localdomain (W000)                1          1          0 000000008E582140
 295    :          14      42325          1                                                   1          0          0 000000008E582140
 296    -         513      33860          1 oracle@linux183.localdomain (LREG)                1          1          0 000000008E5836B0
 297    :         520      56123          1                                                   1          0          0 000000008E5836B0
 298    -          28       5509          1 oracle@linux183.localdomain (W005)                1          1          0 000000008E584C20
 299    :          29      49967          1                                                   1          0          0 000000008E584C20
 301    -         515      19909          1 oracle@linux183.localdomain (MMON)                1          1          0 000000008E588C70
 302    :         516       3981          1                                                   1          0          0 000000008E588C70
 303    -          17       5100          1 oracle@linux183.localdomain (MMNL)                1          1          0 000000008E58A1E0
 304    :          32      46728          1                                                   1          0          0 000000008E58A1E0
 306    -          37      30525          1 oracle@linux183.localdomain (M000)                1          1          0 000000008E58F7A0
 307    :          31      18290          1                                                   1          0          0 000000008E58F7A0
 308    -         523       2817          1 oracle@linux183.localdomain (M002)                1          1          0 000000008E590D10
 309    :         524      58038          1                                                   1          0          0 000000008E590D10
 310    -          19      30164          1 oracle@linux183.localdomain (M001)                1          1          0 000000008E592280
 311    :          35      25427          1                                                   1          0          0 000000008E592280
 315    -         518      20043          1 oracle@linux183.localdomain (AQPC)                1          1          0 000000008E598DB0
 316    :         530      24619          1                                                   1          0          0 000000008E598DB0
 317    -          15      27947          1 oracle@linux183.localdomain (M004)                1          1          0 000000008E59A320
 318    :          45      37304          1                                                   1          0          0 000000008E59A320
 319    -         521      58123          1 oracle@linux183.localdomain (CJQ0)                1          1          0 000000008E59E370
 320    :         522      32179          1                                                   1          0          0 000000008E59E370
 321    -          23      17837          1 oracle@linux183.localdomain (W003)                1          1          0 000000008E59F8E0
 322    :          30      24152          1                                                   1          0          0 000000008E59F8E0
 326    -         529      54638          1 oracle@linux183.localdomain (M003)                1          1          0 000000008E5A0E50
 327    :         526       9248          1                                                   1          0          0 000000008E5A0E50
 328    -          16      23626          1 oracle@linux183.localdomain (W001)                1          1          0 000000008E5A23C0
 329    :          20      52434          1                                                   1          0          0 000000008E5A23C0
 332    -         531      31198          1 oracle@linux183.localdomain (QM02)                1          1          0 000000008E5A6410
 333    :         532      53916          1                                                   1          0          0 000000008E5A6410
 334    -          38      28119          1 oracle@linux183.localdomain (W006)                1          1          0 000000008E5A7980
 335    :          39      55963          1                                                   1          0          0 000000008E5A7980
 336    -          42      46149          1 oracle@linux183.localdomain (W007)                1          1          0 000000008E5AA460
 337    :          43      45287          1                                                   1          0          0 000000008E5AA460
 341    -          47      36878          1 oracle@linux183.localdomain (W002)                1          1          0 000000008E5BD080
 342    :          48      26536          1                                                   1          0          0 000000008E5BD080
 343    -         538      50713          1 oracle@linux183.localdomain (Q002)                1          1          0 000000008E5BE5F0
 344    :         517      40400          1                                                   1          0          0 000000008E5BE5F0
 345    -          34      27163          1 oracle@linux183.localdomain (W004)                1          1          0 000000008E5C5120
 346    :          44      19000          1                                                   1          0          0 000000008E5C5120

Every single one of the S class rows with a blank program name is “sharing” a process with one of the background processes. In many of the pairs the row with a named program is a slave process (Mnnn, Wnnn etc.) but that’s far from a uniform pattern. I could imagine an argument that says Oracle wants to “pre-allocate” a spare session linked to an existing process in some special cases so that a program can switch to the alternate session (memory structures) if the first session gets corrupted in some way – but I can’t really see that as relevant to any slave processes, while I can think of a couple of other programs where this strategy would be more important.
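
If you don’t want to spool and grep the report, a simple aggregate (again, just a sketch) will list the processes that are supporting more than one S class session:

select  ksusepro, count(*)
from    x$ksuse
where   bitand(ksspaflg,1) != 0
group by
        ksusepro
having  count(*) > 1
order by
        ksusepro
/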

Adding a little weight to the “shadow” idea (or maybe just highlighting a bug), it’s interesting to note that a number of the pairs show consecutive values for the indx (SID) – as if Oracle had created the two sessions at the same time.
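
A self-join (sketch only – it will catch only the pairs that happen to be adjacent) makes those consecutive SIDs easy to spot:

select  a.indx sid_1, b.indx sid_2, a.ksusepro
from    x$ksuse a, x$ksuse b
where   b.indx     = a.indx + 1
and     b.ksusepro = a.ksusepro
and     bitand(a.ksspaflg,1) != 0
and     bitand(b.ksspaflg,1) != 0
order by
        a.indx
/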

All I can say at the moment, though, is that we have a consistent pattern for all the excess “invisible” sessions, and the strange entries make up the difference between the v$session count and the v$sysmetric report.

Summing up:

v$session shows a count of 54, but digging into x$ksuse for “user sessions” I can see a further 7 sessions; the pattern for these sessions, though, is simply “memory structure not yet cleaned out after the process terminated”, so they have nothing to do with any counting anomaly.

v$resource_limit and v$sysmetric report a value that is larger than the v$session count by 24; and we can find 24 rows in x$ksuse which (a) “mirror” 24 background sessions and (b) are linked to processes (v$process.addr) but don’t have a program name (v$session.program) and aren’t flagged as user processes. So it looks as if we can (partially) explain why the two views are over-reporting.

There are some indications that if you are licensing by sessions (if that’s still possible) then the number used for checking the limit is not consistent with the description of the relevant parameter. A large number of background sessions is included in the v$sysmetric figure that seems to be the reference record.

Footnote:

I have a little oddity to pursue in 21.3.0.0, where my v$sysmetric.value was one less than the v$resource_limit current_utilization. The same “shadow” process strategy was visible though, and v$session count plus the “shadow” count agreed with v$resource_limit.
