Feed aggregator

Automating Index Rebuild

Michael Dinh - 4 hours 48 min ago

IMPORTANT: This is not a recommendation to rebuild indexes.

This post outlines the SQL used to determine which indexes to rebuild.

PL/SQL is used to check for a table lock on the index's underlying table: if there is no lock, the index is rebuilt; otherwise the rebuild is skipped.

1. Download Index Sizing and create a copy, index_est_proc_2.sql.org

2. Create table index_rebuild.

SQL> desc index_rebuild
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TABLE_OWNER                               NOT NULL VARCHAR2(30)
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 INDEX_NAME                                NOT NULL VARCHAR2(20)
 LEAF_BLOCKS                                        NUMBER
 TARGET_SIZE                                        NUMBER

SQL>
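
For reference, a CREATE TABLE statement matching this description might look like the following (a sketch; add storage clauses as required):

create table index_rebuild (
  table_owner varchar2(30) not null,
  table_name  varchar2(30) not null,
  index_name  varchar2(20) not null,
  leaf_blocks number,
  target_size number
);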

3. Update index_est_proc_2.sql to include the following insert into table index_rebuild.

if m_leaf_estimate < &m_scale_factor * r.leaf_blocks then
  dbms_output.put_line(
    to_char(sysdate,'hh24_mi_ss') || '|table|' ||
    trim(r.table_name) || '|index|' ||
    trim(r.index_name) || '|' || 'Current Leaf blocks|' || trim(to_char(r.leaf_blocks,'999,999,999')) || '|Target size|' || 
    trim(to_char(m_leaf_estimate,'999,999,999'))
  );

  -- Insert data into table index_rebuild as well as output to terminal.
  insert into index_rebuild(table_owner,table_name,index_name,leaf_blocks,target_size)
  values
  (UPPER('&m_owner'),trim(r.table_name),trim(r.index_name),r.leaf_blocks,m_leaf_estimate);
  dbms_output.new_line;
end if;

4. Create plsql_rebuild_idx.sql

set timing on time on serveroutput on size unlimited trimsp on tab off lines 200
col TABLE_OWNER for a30
col TABLE_NAME for a30
col INDEX_NAME for a35
col USERNAME for a10
col MACHINE for a10
col MODULE for a30
-- Display current user session info.
select s.username as Username,
       s.machine as Machine,
       s.module as Module,
       s.sid as SessionID,
       p.pid as ProcessID,
       p.spid as "UNIX ProcessID"
from
v$session s, v$process p
where s.sid = sys_context ('userenv','sid')
and s.PADDR = p.ADDR
;
set echo on
-- Indexes with LEAF_BLOCKS < 16000000 will be rebuilt; edit the threshold as required.
select * from index_rebuild where LEAF_BLOCKS < 16000000;
lock table index_rebuild in EXCLUSIVE mode WAIT 120;
DECLARE
  l_sql varchar2(1000);
  l_ct  number;
BEGIN
FOR d in (
  select TABLE_OWNER, TABLE_NAME, INDEX_NAME, LEAF_BLOCKS from index_rebuild where LEAF_BLOCKS < 16000000 order by leaf_blocks asc
)
LOOP
  select count(*) into l_ct
  from v$locked_object a, v$session b, dba_objects c
  where b.sid = a.session_id
  and a.object_id = c.object_id
  and c.object_type='TABLE'
  and c.owner=d.TABLE_OWNER
  and c.object_name=d.TABLE_NAME;
  IF l_ct = 0 THEN
    dbms_output.put_line( '-- Check lock for owner|table|index : ' ||d.TABLE_OWNER||'.'||d.TABLE_NAME||'.'||d.INDEX_NAME||'='||l_ct );
    l_sql := 'alter index '||d.TABLE_OWNER||'.'||d.INDEX_NAME||' rebuild online parallel 4';
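    -- Caveat: rebuilding with PARALLEL 4 leaves the index with a permanent
    -- degree of 4; if the original degree matters, follow the rebuild with
    -- something like ALTER INDEX ... NOPARALLEL.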
    dbms_output.put_line (l_sql);
    execute immediate l_sql;
    delete from index_rebuild where TABLE_OWNER=d.TABLE_OWNER and TABLE_NAME=d.TABLE_NAME and INDEX_NAME=d.INDEX_NAME;
  END IF;
END LOOP;
END;
/
delete from index_rebuild;
commit;
exit

5. Run plsql_rebuild_idx.sql using nohup

nohup sqlplus "/ as sysdba" @plsql_rebuild_idx.sql > plsql_rebuild_idx.log 2>&1 &

6. Review

$ cat plsql_rebuild_idx.log
nohup: ignoring input

SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 24 14:13:00 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

14:13:00 SQL> select * from index_rebuild;

TABLE_OWNER                    TABLE_NAME                     INDEX_NAME           LEAF_BLOCKS TARGET_SIZE
------------------------------ ------------------------------ -------------------- ----------- -----------
XXXX                           YYYYYYYYYY1                    ZZZZZZZZZZZ_M            9721430     4328586
XXXX                           YYYYYYYYYY2                    ZZZZZZZZZZZ_MP          15865953     5848673

Elapsed: 00:00:00.00
14:13:00 SQL> lock table index_rebuild in EXCLUSIVE mode WAIT 120;

Table(s) Locked.

Elapsed: 00:00:00.00
14:13:00 SQL> DECLARE
14:13:00   2    l_sql varchar2(1000);
14:13:00   3    l_ct  number;
14:13:00   4  BEGIN
14:13:00   5  FOR d in (
14:13:00   6    select TABLE_OWNER, TABLE_NAME, INDEX_NAME, LEAF_BLOCKS from index_rebuild order by leaf_blocks asc
14:13:00   7  )
14:13:00   8  LOOP
14:13:00   9    select count(*) into l_ct
14:13:00  10    from v$locked_object a, v$session b, dba_objects c
14:13:00  11    where b.sid = a.session_id
14:13:00  12    and a.object_id = c.object_id
14:13:00  13    and c.object_type='TABLE'
14:13:00  14    and c.owner=d.TABLE_OWNER
14:13:00  15    and c.object_name=d.TABLE_NAME;
14:13:00  16    IF l_ct = 0 THEN
14:13:00  17      dbms_output.put_line( '-- Check lock for owner|table|index : ' ||d.TABLE_OWNER||'.'||d.TABLE_NAME||'.'||d.INDEX_NAME||'='||l_ct );
14:13:00  18      l_sql := 'alter index '||d.TABLE_OWNER||'.'||d.INDEX_NAME||' rebuild online parallel 4';
14:13:00  19      dbms_output.put_line (l_sql);
14:13:00  20      execute immediate l_sql;
14:13:00  21      delete from index_rebuild where TABLE_OWNER=d.TABLE_OWNER and TABLE_NAME=d.TABLE_NAME and INDEX_NAME=d.INDEX_NAME;
14:13:00  22    END IF;
14:13:00  23  END LOOP;
14:13:00  24  END;
14:13:00  25  /
-- Check lock for owner|table|index : XXXX.YYYYYYYYYY1.ZZZZZZZZZZZ_M=0
alter index XXXX.ZZZZZZZZZZZ_M rebuild online parallel 4
-- Check lock for owner|table|index : XXXX.YYYYYYYYYY2.ZZZZZZZZZZZ_MP=0
alter index XXXX.ZZZZZZZZZZZ_MP rebuild online parallel 4

PL/SQL procedure successfully completed.

Elapsed: 04:00:23.08
18:13:23 SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
18:13:23 SQL> exit

7. Run index_est_proc_2.sql.org (screen output only) or index_est_proc_2.sql (screen output plus insert into the index_rebuild table) to determine whether any more indexes are listed for rebuild.

Note: The first rebuild pass contained a few dozen indexes for rebuild but was not automated.

Later, only 2 indexes were listed for rebuild, as shown above from a real production environment, before minor improvements such as displaying the current user session info.

Q.E.D.

Oracle App Cloud and Incorta

Dylan's BI Notes - 9 hours 40 min ago
OTBI is great. But when people migrating from Oracle EBS to Oracle Cloud Apps would like to view the data from both EBS and Oracle Cloud, Incorta becomes a cost-saving and quick-to-implement solution that avoids building a data warehouse. Incorta is not a data warehouse, although it does have the data […]
Categories: BI & Warehousing

Oracle 20c release timeframe

Tom Kyte - 14 hours 10 min ago
Hello masters, Do you know when Oracle v20 will be officially released for production use? It is almost October the first, winter is coming in France, and I have no news... I am worried about a production release in 2020. Here https://docs.oracle.com/en/database/oracle/oracle-database/20/index.html I can read "Oracle Database 20c is available only for preview. It is not available for production use." Best regards, David D.
Categories: DBA Blogs

Hide link or item by role

Tom Kyte - 14 hours 10 min ago
Hey! I have a number of permissions in the app that I can give users. One of the permissions is manager, and I want to give it different access to some of the functions in the app. I wanted to ask: how can a dynamic action or process hide an item by role? For example: I want to hide a field from someone who is not a manager. I guess I need to check the role after the login process, but I thought maybe there is something ready-made that does the check for me: what is the role of the user who is now connected to the app?
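
One common approach, assuming the application uses APEX's built-in Application Access Control roles, is an authorization scheme of type "PL/SQL Function Returning Boolean" that you then attach to the item or page (the role name here is illustrative):

-- Returns TRUE only when the logged-in user holds the MANAGER role,
-- so the item stays hidden from everyone else.
return apex_acl.has_user_role(
         p_application_id => :APP_ID,
         p_user_name      => :APP_USER,
         p_role_static_id => 'MANAGER');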
Categories: DBA Blogs

Changing Workspace name

Tom Kyte - 14 hours 10 min ago
Would you please help me to change my Workspace ID without deleting (removing) the workspace. Thank you
Categories: DBA Blogs

Oracle Cloud Free Tier Apex Public Page Document upload

Tom Kyte - 14 hours 10 min ago
Sorry I didn't put this through LiveSQL but it is not applicable for this question. I have created an OCI Free Tier ATP DB and I have installed Oracle Apex. I have created an application and a public page in the application with a form based on a table that has a blob column. If the page is not public I can successfully upload a document and open it. If I change the page to public and try to upload a document again, when I click submit it says 'it is not possible to upload a file on a public page on this instance'. I have checked this document but I don't see anything that says I should not be able to do this. https://docs.oracle.com/en/cloud/paas/atp-cloud/atpug/apex-restrictions.html#GUID-E13D5044-B9DD-4168-8A12-C99532940DA9 So my next step was to check the APEX Instance ALLOW_PUBLIC_FILE_UPLOAD parameter using an admin account via SQL Dev and via my Apex Admin login and the sql commands screen (this latter option was never going to work but I tried it anyway). This is the output from running it using my ATP DB admin account via SQL Dev: <code> BEGIN APEX_INSTANCE_ADMIN.SET_PARAMETER('ALLOW_PUBLIC_FILE_UPLOAD', 'Y'); END; </code> Error report - ORA-20987: APEX - Instance parameter not found - Contact your application administrator. Details about this incident are available via debug id "125044". ORA-06512: at "APEX_200100.WWV_FLOW_ERROR", line 1132 ORA-06512: at "APEX_200100.WWV_FLOW_ERROR", line 1499 ORA-06512: at "APEX_200100.WWV_FLOW_INSTANCE_ADMIN", line 87 ORA-06512: at "APEX_200100.WWV_FLOW_INSTANCE_ADMIN", line 190 ORA-06512: at line 2 So all I am basically asking is: is it possible to upload a file publicly on the OCI Free Tier via a public apex page, or is it just a missing parameter? I can understand the reasons if it is not possible to do this, but I am trying to understand how I can check what is and isn't possible, as the documentation that I have found doesn't appear to say it isn't possible. Any information is greatly appreciated.
Categories: DBA Blogs

Modelling question

Tom Kyte - 14 hours 10 min ago
Hi TOM, I need to accommodate this kind of object (subscription) in the database (not necessarily JSON): <code> { "subscriber": 12343, --user_id "subscr_data_start": 20200901, "subscr_data_end": "", "object_name": "a very fancy name", "object_event_type": "create", "object_type": "product", "object_event_match": "exact", "conditions": { "countries": ["IT", "DE"], "categories": ["computers", "HI-FI"] } }</code> That would translate as: starting from `20200901` with no end date, the user `12343` would like to be alerted every time we `create` a `product` of kind (`computers` or `HI-FI`) in either country (`IT` or `DE`) whose name matches `exact`ly `a very fancy name`. Data modelling details in the LiveSQL link. Is it a correct/valid approach later on when I need to query it (using the `subscriber` column as an index)? If not, should I go with the standard relational approach instead? Thanks, Alex
Categories: DBA Blogs

Serial Bloom

Jonathan Lewis - 15 hours 41 min ago

Following the recent note I wrote about an enhancement to the optimizer’s use of Bloom filters, I received a question by email asking about the use of Bloom filters in serial execution plans:

I’m having difficulty understanding the point of a Bloom filter when used in conjunction with a hash join where everything happens within the same process.

I believe you mentioned in your book (Cost Based Oracle) that hash joins have a mechanism similar to a Bloom filter where a row from the probe table is checked against a bitmap, where each hash table bucket is indicated by a single bit. (You have a picture on page 327 of the hash join and bitmap, etc).

The way that bitmap looks and operates appears to be similar to a Bloom filter to me…. So it looks (to me) like hash joins have a sort of “Bloom filter” already built into them.

My question is… What is the advantage of adding a Bloom filter to a hash join if you already have a type of Bloom filter mechanism thingy built in to hash joins?

I can understand where it would make sense with parallel queries having to pass data from one process to another, but if everything happens within the same process I’m just curious where the benefit is.

 

The picture on page 327 of CBO-F is a variation on the following, which is the penultimate snapshot of the sequence of events in a multi-pass hash join. The key feature is the in-memory bitmap at the top of the image describing which buckets in the (partitioned and spilled) hash table hold rows from the build table. I believe that it is exactly this bitmap that is used as the Bloom filter.

The question of why it might be worth creating and using a Bloom filter in a simple serial hash join is really a question of scale. What is the marginal benefit of the Bloom filter when the basic hash join mechanism is doing all the hash arithmetic and comparing with a bitmap anyway?

If the hash join is running on an Exadata machine then the bitmap can be passed as a predicate to the cell servers and the hash function can be used at the cell server to minimise the volume of data that has to be passed back to the database server – with various optimisations dependent on the version of the Exadata software. Clearly minimising traffic through the interconnect is going to have some benefit.

Similarly, as the email suggests, for a parallel query where (typically) one set of parallel processes will read the probe table and distribute the data to the second set of parallel processes which then do the hash join, it's clearly sensible to allow the first set of processes to apply the hash function and discard as many rows as possible before distributing the survivors – minimising inter-process communication.

In both these cases, of course, there’s a break point to consider of how effective the Bloom filter needs to be before it’s worth taking advantage of the technology. If the Bloom filter allows 99 rows out of every hundred to be passed to the database server / second set of parallel processes then Oracle has executed the hash function and checked the bitmap 100 times to avoid sending one row (and it will (may) have to do the same hash function and bitmap check again to perform the hash join); on the other hand if the Bloom filter discards 99 rows and leaves only one row surviving then that’s a lot of traffic eliminated – and that’s likely to be a good thing. This is why there are a few hidden parameters defining the boundaries of when Bloom filters should be used – in particular there’s a parameter “_bloom_filter_ratio” which defaults to 35 and is, I suspect, a figure which says something like “use Bloom filtering only if it’s expected to reduce the probe data to 35% of the unfiltered size”.
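
Hidden parameters aren't exposed through v$parameter, so if you want to check the value on your own system you have to query the underlying fixed tables while connected as SYS. A sketch, for investigation on a test system only:

select i.ksppinm  as parameter_name,
       v.ksppstvl as parameter_value
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_bloom_filter_ratio';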

So the question then becomes: “how could you benefit from a serial Bloom filter when it’s the same process doing everything and there’s no “long distance” traffic going on between processes?” The answer is simply that we’re operating at a much smaller scale. I’ve written blog notes in the past where the performance of a query depends largely on the number of rows that are passed up a query plan before being eliminated (for example here, where the volume of data moving results in a significant fraction of the total time).

If you consider a very simple hash join its plan is going to be shaped something like this:


-----------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    45 |   720 |    31 |
|*  1 |  HASH JOIN         |      |    45 |   720 |    31 |
|*  2 |   TABLE ACCESS FULL| T2   |    15 |   120 |    15 |
|   3 |   TABLE ACCESS FULL| T1   |  3000 | 24000 |    15 |
-----------------------------------------------------------

If you read Tanel Poder’s article on execution plans as a tree of Oracle function calls you’ll appreciate that you could translate this into informal English along the lines of:

  • Operation 1 calls a function (at operation 2) to do a tablescan of t2 and return all the relevant rows, building an in-memory hash table by applying a hashing function to the join column(s) of each row returned by the call to the tablescan. As the hash table is populated the operation also constructs a bitmap to flag buckets in the hash table that have been populated.
  • Operation 1 then calls a function (at operation 3) to start a tablescan and then makes repeated calls for it to return one row (or, in newer versions, a small rowset) at a time from table t1. For each row returned, operation 1 applies the same hash function to the join column(s) and checks the bitmap to see if there's a potential matching row in the relevant bucket of the hash table; if there's a potential match, Oracle examines the actual contents of the bucket (which will be stored as a linked list) to see if there's an actual match. (A toy sketch of this bitmap check follows below.)
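
To make the bitmap check concrete, here is a toy PL/SQL sketch of the build and probe phases. This is an illustration of the concept, not Oracle's internal code; it uses the t1/t2 tables created further down this note, with dbms_utility.get_hash_value as a stand-in hash function:

declare
  type bitmap_t is table of boolean index by pls_integer;
  l_bitmap   bitmap_t;
  l_buckets  constant pls_integer := 8192;
  l_survived pls_integer := 0;
begin
  -- "Build" phase: hash the join value of each build-table row and
  -- flag the corresponding bucket.
  for r in (select id from t2 where n1 = 0) loop
    l_bitmap(dbms_utility.get_hash_value(to_char(r.id), 0, l_buckets)) := true;
  end loop;
  -- "Probe" phase: a row whose bucket is not flagged cannot possibly
  -- match, so it is discarded before any walk of a real hash bucket.
  for r in (select id from t1 where rownum <= 100000) loop
    if l_bitmap.exists(dbms_utility.get_hash_value(to_char(r.id), 0, l_buckets)) then
      l_survived := l_survived + 1;   -- "might match": check the real bucket
    end if;
  end loop;
  dbms_output.put_line('Rows surviving the bitmap check: ' || l_survived);
end;
/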

Taking the figures above, let’s imagine that Oracle is using a rowset size of 30 rows. Operation 1 will have to make 100 calls to Operation 3 to get all the data, and call the hashing function 3,000 times.  A key CPU component of the work done is that the function represented by operation 3 is called 100 times and (somehow) allocates and fills an array of 30 entries each time it is called.

Now assume operation 1 passes the bitmap to operation 3 as an input and it happens to be a perfect bitmap. Operation 3 starts its tablescan and will call the hash function 3,000 times, but at most 45 rows will get past the bitmap. So operation 1 will only have to call operation 3 twice.  Admittedly operation 1 will (possibly) call the hash function again for each row – but maybe operation 3 will supply the hash value in the return array. Clearly there’s scope here for a trade-off between the reduction in work due to the smaller number of calls and the extra work needed to take advantage of the bitmap technology.

Here’s an example that shows the potential for savings – if you want to recreate this test you’ll need about 800MB of free space in the database, the first table takes about 300MB and the second about 450MB.


rem
rem     Script:         bloom_filter_serial_02.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,30,'0')             v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7 -- > comment to avoid WordPress format issue
;

create table t2
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        round(rownum + 0.5,2)           id,
        mod(rownum,1e5)                 n1,
        lpad(rownum,10)                 v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7 -- > comment to avoid WordPress format issue
;


prompt  =================
prompt  With Bloom filter
prompt  =================

select 
        /*+ 
                px_join_filter(t1) 
                monitor
        */
        t1.v1, t2.v1
from 
        t2, t1
where 
        t2.n1 = 0
and 
        t1.id = t2.id
/

prompt  ===============
prompt  No Bloom filter
prompt  ===============

select 
        /*+
                monitor
        */
        t1.v1, t2.v1
from 
        t2, t1
where 
        t2.n1 = 0
and 
        t1.id = t2.id
/

I’ve created tables t1 and t2 with an id column that never quite matches, but the range of values is set so that the optimizer thinks the two tables might have a near-perfect 1 to 1 match. I’ve given t2 an extra column with 105 distinct values in its 107 rows, so it’s going to have 100 rows per distinct value. Then I’ve presented the optimizer with a query that looks as if it’s going to find 100 rows in t2 and needs to find a probable 100 rows of matches in t1. For my convenience, and to highlight a couple of details of Bloom filters, it’s not going to find any matches.

In both runs I’ve enabled the SQL Monitor feature with the /*+ monitor */ hint, and in the first run I’ve also hinted the use of a Bloom filter. Here are the resulting SQL Monitor outputs. Bear in mind we’re looking at a reasonably large scale query – volume of input data – with a small result set.

First without the Bloom filter:


Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    3.00 |    2.24 |     0.77 |     1 |  96484 |  773 | 754MB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=2959412835)
==================================================================================================================================================
| Id |      Operation       | Name |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity | Activity Detail |
|    |                      |      | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |   (# samples)   |
==================================================================================================================================================
|  0 | SELECT STATEMENT     |      |         |       |         2 |     +2 |     1 |        0 |      |       |     . |          |                 |
|  1 |   HASH JOIN          |      |     100 | 14373 |         2 |     +2 |     1 |        0 |      |       |   2MB |          |                 |
|  2 |    TABLE ACCESS FULL | T2   |      99 |  5832 |         2 |     +1 |     1 |      100 |  310 | 301MB |     . |          |                 |
|  3 |    TABLE ACCESS FULL | T1   |     10M |  8140 |         2 |     +2 |     1 |      10M |  463 | 453MB |     . |          |                 |
==================================================================================================================================================

According to the Global Stats the query has taken 3 seconds to complete, of which 2.24 seconds is CPU. (The 750MB read in 0.77 seconds would be due to the fact that I'm running off SSD, and I've got a 1MB read size that helps.) A very large fraction of the CPU appears because of the number of calls from operation 1 to operation 3 (the projection information pulled from memory reports a rowset size of 256 rows, so that's roughly 40,000 calls to the function).

When we force the use of a Bloom filter the plan doesn’t change much (though the creation and use of the Bloom filter has to be reported) – but the numbers do change quite significantly.

Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    1.97 |    0.99 |     0.98 |     1 |  96484 |  773 | 754MB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=4148581417)
======================================================================================================================================================
| Id |       Operation       |  Name   |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity | Activity Detail |
|    |                       |         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |   (# samples)   |
======================================================================================================================================================
|  0 | SELECT STATEMENT      |         |         |       |         1 |     +1 |     1 |        0 |      |       |     . |          |                 |
|  1 |   HASH JOIN           |         |     100 | 14373 |         1 |     +1 |     1 |        0 |      |       |   1MB |          |                 |
|  2 |    JOIN FILTER CREATE | :BF0000 |      99 |  5832 |         1 |     +1 |     1 |      100 |      |       |     . |          |                 |
|  3 |     TABLE ACCESS FULL | T2      |      99 |  5832 |         1 |     +1 |     1 |      100 |  310 | 301MB |     . |          |                 |
|  4 |    JOIN FILTER USE    | :BF0000 |     10M |  8140 |         1 |     +1 |     1 |    15102 |      |       |     . |          |                 |
|  5 |     TABLE ACCESS FULL | T1      |     10M |  8140 |         1 |     +1 |     1 |    15102 |  463 | 453MB |     . |          |                 |
======================================================================================================================================================


In this case, the elapsed time dropped to 1.97 seconds (depending on your viewpoint that's either a drop of “only 1.03 seconds” or a drop of “an amazing 34.3%”), with the CPU time dropping from 2.24 seconds to 0.99 seconds (a 55.8% drop!).

In this case you’ll notice that the tablescan of t1 produced only 15,102 rows to pass up to the hash join at operation 1 thanks to the application of the predicate (not reported here): filter(SYS_OP_BLOOM_FILTER(:BF0000,”T1″.”ID”)). Instead of 4,000 calls for the next rowset the hash function has been applied during the tablescan and operation 5 has exhausted the tablescan after only about 60 calls. This is what has given us the (relatively) significant saving in CPU.

This example of the use of a Bloom filter highlights the two points I referred to earlier.

  • First, although we see operations 4 and 5 as Join (Bloom) filter use and Table access full respectively, I don't think the data from the tablescan is being “passed up” from operation 5 to 4; I believe operation 4 can be viewed as a “placeholder” in the plan to allow us to see the Bloom filter in action, with the hashing and filtering actually happening during the tablescan.
  • Secondly, we know that there are ultimately no rows in the result set, yet the application of the Bloom filter has not eliminated all the data. Remember that the bitmap that Oracle constructs of the hash table identifies used buckets, not actual values. Those 15,102 rows are rows that “might” find a match in the hash table because they belong in buckets that are flagged. A Bloom filter won’t discard any data that is needed, but it might fail to eliminate data that subsequently turns out to be unwanted.
How parallel is parallel anyway?

I’ll leave you with one other thought. Here’s an execution plan from 12c (12.2.0.1) which joins three dimension tables to a fact table. There are 343,000 rows in the fact table and the three joins individually identify about 4 percent of the data in the table. In a proper data warehouse we might have been looking at a bitmap star transformation solution for this query, but in a mixed system we might want to run warehouse queries against normalised data – this plan shows what Bloom filters can do to minimise the workload. The plan was acquired from memory after enabling rowsource execution statistics:

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |      1 |        |        |      |            |      1 |00:00:00.05 |      22 |      3 |       |       |          |
|   1 |  SORT AGGREGATE              |          |      1 |      1 |        |      |            |      1 |00:00:00.05 |      22 |      3 |       |       |          |
|   2 |   PX COORDINATOR             |          |      1 |        |        |      |            |      2 |00:00:00.05 |      22 |      3 | 73728 | 73728 |          |
|   3 |    PX SEND QC (RANDOM)       | :TQ10000 |      0 |      1 |  Q1,00 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   4 |     SORT AGGREGATE           |          |      2 |      1 |  Q1,00 | PCWP |            |      2 |00:00:00.09 |    6681 |   6036 |       |       |          |
|*  5 |      HASH JOIN               |          |      2 |     26 |  Q1,00 | PCWP |            |     27 |00:00:00.09 |    6681 |   6036 |  2171K|  2171K|     2/0/0|
|   6 |       JOIN FILTER CREATE     | :BF0000  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|*  7 |        TABLE ACCESS FULL     | T3       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|*  8 |       HASH JOIN              |          |      2 |    612 |  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6634 |   6026 |  2171K|  2171K|     2/0/0|
|   9 |        JOIN FILTER CREATE    | :BF0001  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 10 |         TABLE ACCESS FULL    | T2       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 11 |        HASH JOIN             |          |      2 |  14491 |  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6614 |   6022 |  2171K|  2171K|     2/0/0|
|  12 |         JOIN FILTER CREATE   | :BF0002  |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|* 13 |          TABLE ACCESS FULL   | T1       |      2 |      3 |  Q1,00 | PCWP |            |      6 |00:00:00.01 |      20 |      4 |       |       |          |
|  14 |         JOIN FILTER USE      | :BF0000  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  15 |          JOIN FILTER USE     | :BF0001  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  16 |           JOIN FILTER USE    | :BF0002  |      2 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|  17 |            PX BLOCK ITERATOR |          |      2 |    343K|  Q1,00 | PCWC |            |     27 |00:00:00.08 |    6594 |   6018 |       |       |          |
|* 18 |             TABLE ACCESS FULL| T4       |     48 |    343K|  Q1,00 | PCWP |            |     27 |00:00:00.05 |    6594 |   6018 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------

It’s a parallel plan, but it’s used the 12c “PQ_REPLICATE” strategy. The optimizer has decided that all the dimension tables are so small that it’s going to allow every PX process to read every (dimension) table through the buffer cache and build its own hash tables from them. (In earlier versions you might have seen the query coordinator scanning and broadcasting the three small tables, or one set of PX processes scanning and broadcasting to the other set).

So every PX process has an in-memory hash table of all three dimension tables and then (operation 17) they start a tablescan of the fact table, picking non-overlapping rowid ranges to scan. But since they've each created three in-memory hash tables they've also been able to create three Bloom filters each, which can all be applied simultaneously as the tablescan takes place; so instead of 343,000 rows being passed up the plan and through the first hash join (where we see from operation 11 that the number of surviving rows would have been about 14,500) we see all but 27 rows discarded very early on in the processing. Like bitmap indexes, part of the power of Bloom filters lies in the fact that with the right plan the optimizer can combine them and identify a very small data set very precisely, very early.

The other thing I want you to realise about this plan, though, is that it's not really an “extreme” parallel plan. It's effectively running as a set of concurrent, non-interfering, serial plans. Since I was running (parallel 2) Oracle started just 2 PX processes: they both built three hash tables from the three dimension tables, then split the fact table in half and took half each to do all the joins, and passed the nearly complete result to the query co-ordinator at the last moment. That's as close as you can get to two serial, non-interfering, queries and still call it a parallel query. So, if you wonder why there might be any benefit in serial Bloom filters – Oracle has actually been benefiting from them under the covers for several years.

Summary

Bloom filters trade a decrease in messaging against an increase in preparation and hashing operations. For Exadata systems with predicate offloading it's very easy to see the potential benefit; for parallel query execution it's also fairly easy to see the potential benefit, since inter-process messaging between two sets of PX processes can be resource-intensive; but even for serial queries there can be some benefit, though in absolute terms it is likely to be only a small saving in CPU.

 

Partner Webcast – Migrating Oracle Java applications to Weblogic on OCI

Oracle WebLogic Server for Oracle Cloud Infrastructure (OCI) lets you deploy Java applications to the cloud with just a few clicks. Quickly create your Java Enterprise Edition (Java EE) application...

We share our skills to maximize your revenue!
Categories: DBA Blogs

about wm_concat() in 12c

Tom Kyte - Tue, 2020-09-29 15:06
Hi Tom, when I tried to use the wm_concat() function in 12c I got this error; can you briefly explain? ORA-00904: "WM_CONCAT": invalid identifier 00904. 00000 - "%s: invalid identifier
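
For context, WM_CONCAT was an undocumented function and it was removed in 12c; the documented replacement for simple string aggregation is LISTAGG. A minimal sketch, with illustrative table and column names:

select deptno,
       listagg(ename, ',') within group (order by ename) as enames
from   emp
group  by deptno;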
Categories: DBA Blogs

Custom domain for APEX

Tom Kyte - Tue, 2020-09-29 15:06
Hey! I'm connected to Oracle Cloud and using the latest version of APEX. I want to know how I can exchange the domain for a unique one I have purchased. This means that where it is currently written, for example: https://xxxxxxxxx.adb.eu-frankfurt-1.oraclecloudapps.com/ords/r/workspace/appname/ I want it to be: https://www.my-domain.com/ords/r/workspace/appname/l Where do you do this from? Will it affect other things to consider?
Categories: DBA Blogs

Truncating digits before decimal in a decimal number

Tom Kyte - Tue, 2020-09-29 15:06
Hi, I am facing a problem in one of my update queries. There is a column I am updating whose datatype is NUMBER(9,5), so it can hold 5 places after the decimal and 4 before it. I am using a ROUND function, so that takes care of the places after the decimal. But for some of the records the value is getting computed as, say, 123546.12345, and so I am getting a -1438 overflow error while updating. I want a quick way (or function) to just get rid of the extra digits before the decimal point in such cases. So, in the example above, I want the output as 1235.12345 Please suggest a suitable way. Thanks!
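
(One common approach, assuming the intent is to keep only the low-order integer digits so the value fits NUMBER(9,5), is MOD. Note that for the example value this yields 3546.12345, i.e. the last four digits before the decimal point, rather than the first four:)

select mod(round(123546.12345, 5), 10000) as fitted_value
from   dual;   -- 3546.12345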
Categories: DBA Blogs

My CLOB is occupying more than double the size it should take.

Tom Kyte - Tue, 2020-09-29 15:06
<code>CREATE TABLESPACE av_ag_temp_tablespace LOGGING DATAFILE 'C:\Oracle\oradata\DEMO1\temptablespace.dbf' SIZE 500M AUTOEXTEND ON NEXT 200M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO / create table c ( x int, y clob ) TABLESPACE AV_AG_TEMP_TABLESPACE;</code> I inserted 1000 rows (each row of size 500000) into table C and then checked the size occupied versus the size expected. Then I dropped table C and the tablespace and recreated both; this time I inserted 1000 rows, deleted them, and inserted them again. Then I dropped and recreated the tablespace and table once more and performed an insert-update sequence.

Rows   Clob Size   DML                    Total Size (GB)   Expected Size (GB)
1000   500000      Insert                 1.14              0.5
1000   500000      Insert-Delete-Insert   1.81              0.5
1000   500000      Insert-Update          2.01              0.5

I got the total size using bytes from user_extents for that CLOB column. Why is the space occupied more than double what it needs to be?
Categories: DBA Blogs

How to improve Oracle data extraction throughput rate

Tom Kyte - Tue, 2020-09-29 15:06
For the last 17 years I've been focused on real-time apps where the number of rows retrieved from Oracle is usually 1 and almost always less than 10. I have a good reputation for writing very performant real-time applications, but suddenly I'm faced with reading millions (and possibly billions) of rows from a massive batch database server and am looking for ways to improve the speed. So here's the big question: what is hands down the absolute fastest way to extract data from an Oracle table or view, given these constraints: 1) There is exactly one query being executed, which traverses the entire table or view. 2) The rows being extracted from Oracle need to be pre-processed by a formatting routine that will essentially convert the entire row into a fixed-length record. The layout for the fixed-length record is similar to a Cobol data layout; however, no packed binary, just good old plain ASCII. 3) The fixed-length records will be grouped into bundles and the bundles will be handed off to background threads that will further process the data and produce output files and/or summary information. Note that for our benchmark speed test, this last piece is omitted. We found this blurb on the Driver page stating that using OCI over IPC is faster than a network connection, and we are trying to test the premise using a home-made Java-based speed-test/benchmark program that just does steps 1 & 2 from the above constraints. https://docs.oracle.com/cd/E11882_01/appdev.112/e13995/oracle/jdbc/OracleDriver.html <code> An IPC connection is much faster than a network connection. </code> Our database hardware consists of a bare-metal AIX database server. We also have several Linux-based VMs in the same datacenter. The remote client for our testing purposes is just one of the Linux VMs in the same datacenter as the AIX database server. In the Java speed-test app, we've set the statement fetch size to 4196 rows. The results are confusing me, see the table below. According to our results, using JDBC's thin driver (over the network) is faster than OCI over IPC. Note, we set a max records value to stop the benchmark at a fixed point rather than traversing the entire table.

<code>
Library  Protocol  Connect String                            Client  Records    Seconds  Mib/Second
jdbc     ojdbc     jdbc:oracle:thin:@MyDbServer:1521/MySID   remote  3,629,265  453.569  9.195
oci      sql*net   jdbc:oracle:oci8:@MyDbServer:1521/MySID   remote  3,629,265  631.424  6.605
oci      ipc       jdbc:oracle:oci8:@                        local   3,629,265  667.554  6.248
</code>

Due to the poor Mib/Sec rate we see using the Java app, I decided to write a C++ OCCI program and it is much, much faster, but alas, the database server is AIX and I can't for the life of me figure out how to compile the program on AIX. Will the program be faster if I run it on the database server? Is there conceivably another method of data extraction that might be faster than the Java or C++ programs we've written? In the C++ program, I've set the prefetch memory size to 1GiB and have played with different fetch row sizes. The client is a remote Linux server in the same datacenter as the database server itself.

<code>
Prefetch Memory  Max Rows    Prefetch Rows  Mib/Second
1,073,741,824    100,000     10             2.529
1,073,741,824    100,000     100            11.135
1,073,741,824    100,000     1,000          29.115
1,073,741,824    100,000     10,000         32.064
1,073,741,824    100,000     100,000        23.067
1,073,741,824    1,000,000   100,000        32.970
1,073,741,824    10,000,000  10,000         33.358
1,073,741,824    3,000,00...
</code>
Categories: DBA Blogs

Procedure Inserting Dupes Despite Test for Existing Record Prior to Insert

Tom Kyte - Tue, 2020-09-29 15:06
Hi, I have a fairly simple pl/sql procedure invoked by a java program that is itself invoked when a user clicks a link on a website. The procedure takes in 2-3 parameters, and its main purpose is to insert 1 record (based on the params) into a table. (If needed, a version of the proc is in the LiveSQL link. Table and column names changed to protect the innocent.) Prior to the insert, the procedure tests whether the row it's about to insert already exists. This is necessary because we're only avoiding dupes for the specific values passed in by this program, whereas dupes involving other values are OK, so there is no unique constraint. If the row already exists, then the proc inserts this "failed" attempt into a logging table (this part is done by error handling, which I'm somewhat regretting and thinking of rewriting as a simple ELSE branch of an IF, but it's unlikely this is relevant to my question). If the row does not yet exist, then it gets inserted, but I should note that there's a quick SELECT INTO between the test and the insert. The vast majority of the time this works fine, but a handful of duplicates have gotten into the table! The timestamps on the date_inserted field indicate that the dupes were inserted anywhere from 0 to a whole 18 seconds apart from one another. I have some ideas on why this may be happening (multiple clicks & network latency causing multiple sessions & procedure calls to fire simultaneously... perhaps the first session committing between the time the second session tests and inserts), but outside a unique constraint, is there anything I can do within the procedure to stop these dupes from sneaking in? Would it stop the dupes entirely if I test for the row's existence within the insert? Something like: <code>insert into user_demo with new_rec as ( select 1 as internal_id, 'ABC' as demo_code, 'blah' as user_demo_comment, sysdate as date_inserted from dual ) select * from new_rec where not exists (select 'x' from user_demo ud2 where ud2.internal_id = new_rec.internal_id and ud2.demo_code = demo_code); </code> Thank you! Phil PS. If you look at the proc in LiveSQL, please excuse the gross varchar2 type on what should probably be a date parameter. I'm aware this is not a best practice, but I'm letting you know it's there in the interest of full disclosure. Thank you in advance, Phil
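
(One way to close the race window inside the procedure, without a unique constraint, is to serialize callers on the key values with DBMS_LOCK so the existence test and the insert happen atomically per key. A sketch; the procedure and parameter names are illustrative:)

create or replace procedure add_user_demo (
  p_internal_id in number,
  p_demo_code   in varchar2
) as
  l_handle varchar2(128);
  l_status integer;
begin
  -- Serialize concurrent callers on the key values, so two sessions
  -- cannot both pass the "does the row exist?" test before inserting.
  dbms_lock.allocate_unique(
    lockname   => 'USER_DEMO|' || p_internal_id || '|' || p_demo_code,
    lockhandle => l_handle);
  l_status := dbms_lock.request(
    lockhandle        => l_handle,
    lockmode          => dbms_lock.x_mode,
    timeout           => 10,
    release_on_commit => true);
  -- ... the existing existence test and insert go here; the lock is
  -- released automatically when the transaction commits.
end;
/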
Categories: DBA Blogs

Oracle 18c DBMS_MVIEW.REFRESH_DEPENDENT "number_of_failures" OUT parameter is not returning value

Tom Kyte - Tue, 2020-09-29 15:06
We are using Oracle 18c, and when I use the DBMS_MVIEW.REFRESH_DEPENDENT procedure, the "number_of_failures" OUT parameter is not returning any value. It is failing, as I manage to catch the error in the exception handler. Any idea please? Does it return number_of_failures on earlier Oracle versions? <code>select * from v$version;</code>

BANNER:        Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
BANNER_FULL:   Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.5.1.0.0
BANNER_LEGACY: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
CON_ID:        0

<code>set serverout on
DECLARE
  n_failures NUMBER(12):=0;
BEGIN
  dbms_mview.refresh_dependent(number_of_failures => n_failures, list => 'MV_TABLE1', atomic_refresh => TRUE, nested => TRUE);
  dbms_output.put_line('Number of failures: '||n_failures);
EXCEPTION WHEN OTHERS THEN
  dbms_output.put_line('Error: '||SQLERRM||', Number of failures in EXCEPTION: '||n_failures);
END;
/</code>

Error: ORA-12008: error in materialized view or zonemap refresh path ORA-01427: single-row subquery returns more than one row, Number of failures in EXCEPTION: 0

PL/SQL procedure successfully completed.
Categories: DBA Blogs

Verifying an RMAN Backup -- Part 2

Hemant K Chitale - Tue, 2020-09-29 10:06

Continuing from my previous blog post, the question being: "when you receive an RMAN Backup from another DBA, how do you confirm that the database can be restored and recovered to a Consistent Point In Time?"

The quick steps, without actually running a RESTORE DATABASE command, are:

1. Create a dummy parameter file with
    a. DB_NAME the same as the source database
    b. A different DB_UNIQUE_NAME (particularly if you have an existing database on the target server with the same DB_NAME)
    c. CONTROL_FILES specifying a "temporary" location -- you will be removing these control files and restoring them to the actual desired target location when you choose to do a Full Restore
2. Restore the Controlfile
3. Remove all entries about RMAN Backups from the Controlfile (as it has a history of recent backups, and may even be from a Controlfile backup newer than the Database backup provided to you, capturing more recent backups)
4. Catalog the set of Backup Pieces that you received
5. Query the catalog that you have now created in the Controlfile to check the ArchiveLogs vis-a-vis the Datafiles in the set of Backup Pieces.


So, I'll demonstrate them again here, in 19c and a Non-CDB database. The source Database DB_NAME is "HEMANT", so I create an RTST parameter file with DB_NAME='HEMANT' and DB_UNIQUE_NAME='RTST'.

I then restore the Controlfile, remove all entries of previous backups, CATALOG the Backup Pieces that I have received, and then query the Controlfile. (The CATALOG START WITH command updates the Controlfile with information from the Backup Pieces, although the REPORT SCHEMA command reports the database structure recorded in the controlfile.)



oracle19c>echo $ORACLE_SID
RTST
oracle19c>cat $ORACLE_HOME/dbs/initRTST.ora
db_name = 'HEMANT'
db_unique_name = 'RTST'
control_files='/tmp/RTST_control.ctl'
#enable_pluggable_database=true
oracle19c>
oracle19c>cd HEMANT_DB_Backup
oracle19c>pwd
/home/oracle/HEMANT_DB_Backup
oracle19c>ls -l
total 140504
-rw-r-----. 1 oracle oinstall 4390912 Sep 29 22:01 0cvbkgui_1_1
-rw-r-----. 1 oracle oinstall 58507264 Sep 29 22:01 0evbkh0d_1_1
-rw-r-----. 1 oracle oinstall 6381568 Sep 29 22:01 0fvbkh0k_1_1
-rw-r-----. 1 oracle oinstall 51978240 Sep 29 22:01 0gvbkh0r_1_1
-rw-r-----. 1 oracle oinstall 2179072 Sep 29 22:01 0hvbkh12_1_1
-rw-r-----. 1 oracle oinstall 1622016 Sep 29 22:01 0ivbkh13_1_1
-rw-r-----. 1 oracle oinstall 4863488 Sep 29 22:01 0jvbkh14_1_1
-rw-r-----. 1 oracle oinstall 2187264 Sep 29 22:01 0kvbkh14_1_1
-rw-r-----. 1 oracle oinstall 11763712 Sep 29 22:01 c-432411782-20200929-06
oracle19c>
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Sep 29 22:05:03 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup nomount;

Oracle instance started

Total System Global Area 268434280 bytes

Fixed Size 8895336 bytes
Variable Size 201326592 bytes
Database Buffers 50331648 bytes
Redo Buffers 7880704 bytes

RMAN> restore controlfile from '/home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06';

Starting restore at 29-SEP-20
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/RTST_control.ctl
Finished restore at 29-SEP-20

RMAN>
RMAN> delete backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
12 12 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1
13 13 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03
14 14 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1
15 15 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1
16 16 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1
17 17 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1
18 18 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1
19 19 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1
20 20 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1
21 21 1 1 AVAILABLE DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05

Do you really want to delete the above objects (enter YES or NO)? YES

RMAN-06207: warning: 10 objects could not be deleted for DISK channel(s) due
RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212: Object Type Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1
RMAN-06214: Backup Piece /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05


RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1 RECID=12 STAMP=1052394450
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03 RECID=13 STAMP=1052394452
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1 RECID=14 STAMP=1052394509
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1 RECID=15 STAMP=1052394516
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1 RECID=16 STAMP=1052394523
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1 RECID=17 STAMP=1052394530
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1 RECID=18 STAMP=1052394532
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1 RECID=19 STAMP=1052394532
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1 RECID=20 STAMP=1052394533
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05 RECID=21 STAMP=1052394534
Crosschecked 10 objects


RMAN> delete expired backup;

using channel ORA_DISK_1

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
12 12 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1
13 13 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03
14 14 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1
15 15 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1
16 16 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1
17 17 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1
18 18 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1
19 19 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1
20 20 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1
21 21 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1 RECID=12 STAMP=1052394450
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03 RECID=13 STAMP=1052394452
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1 RECID=14 STAMP=1052394509
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1 RECID=15 STAMP=1052394516
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1 RECID=16 STAMP=1052394523
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1 RECID=17 STAMP=1052394530
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1 RECID=18 STAMP=1052394532
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1 RECID=19 STAMP=1052394532
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1 RECID=20 STAMP=1052394533
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05 RECID=21 STAMP=1052394534
Deleted 10 EXPIRED objects


RMAN>
RMAN> list backup;

specification does not match any backup in the repository

RMAN> catalog start with '/home/oracle/HEMANT_DB_Backup/';

searching for all files that match the pattern /home/oracle/HEMANT_DB_Backup/

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06

RMAN>
RMAN> report schema;

RMAN-06139: warning: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name RTST

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 400 SYSTEM *** /opt/oracle/oradata/HEMANT/system.dbf
2 400 SYSAUX *** /opt/oracle/oradata/HEMANT/sysaux.dbf
3 200 UNDOTBS1 *** /opt/oracle/oradata/HEMANT/undotbs.dbf
4 10 USERS *** /opt/oracle/oradata/HEMANT/users01.dbf
5 10 INDX *** /opt/oracle/oradata/HEMANT/indx01.dbf
6 10 USERS *** /opt/oracle/oradata/HEMANT/users02.dbf
7 10 USERS *** /opt/oracle/oradata/HEMANT/users03.dbf
8 10 USERS *** /opt/oracle/oradata/HEMANT/users04.dbf
9 10 USERS *** /opt/oracle/oradata/HEMANT/users05.dbf
10 10 INDX *** /opt/oracle/oradata/HEMANT/indx02.dbf
11 10 INDX *** /opt/oracle/oradata/HEMANT/indx03.dbf

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Sep 29 22:07:56 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL>
SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select file#, checkpoint_change#, checkpoint_time, completion_time
2 from v$backup_datafile
3 order by 1
4 /

FILE# CHECKPOINT_CHANGE# CHECKPOINT_TIME COMPLETION_TIME
---------- ------------------ ------------------ ------------------
0 463290 29-SEP-20 11:49:14 29-SEP-20 11:49:15
1 456249 29-SEP-20 11:48:29 29-SEP-20 11:48:34
2 457590 29-SEP-20 11:48:36 29-SEP-20 11:48:39
3 458680 29-SEP-20 11:48:43 29-SEP-20 11:48:47
4 459759 29-SEP-20 11:48:50 29-SEP-20 11:48:51
5 459765 29-SEP-20 11:48:51 29-SEP-20 11:48:52
6 459779 29-SEP-20 11:48:53 29-SEP-20 11:48:53
7 459759 29-SEP-20 11:48:50 29-SEP-20 11:48:51
8 459765 29-SEP-20 11:48:51 29-SEP-20 11:48:52
9 459779 29-SEP-20 11:48:53 29-SEP-20 11:48:53
10 458680 29-SEP-20 11:48:43 29-SEP-20 11:48:43
11 456249 29-SEP-20 11:48:29 29-SEP-20 11:48:29

12 rows selected.

SQL>
SQL> select file#, checkpoint_change#, checkpoint_time, completion_time
2 from v$backup_datafile
3 order by 2
4 /

FILE# CHECKPOINT_CHANGE# CHECKPOINT_TIME COMPLETION_TIME
---------- ------------------ ------------------ ------------------
1 456249 29-SEP-20 11:48:29 29-SEP-20 11:48:34
11 456249 29-SEP-20 11:48:29 29-SEP-20 11:48:29
2 457590 29-SEP-20 11:48:36 29-SEP-20 11:48:39
3 458680 29-SEP-20 11:48:43 29-SEP-20 11:48:47
10 458680 29-SEP-20 11:48:43 29-SEP-20 11:48:43
4 459759 29-SEP-20 11:48:50 29-SEP-20 11:48:51
7 459759 29-SEP-20 11:48:50 29-SEP-20 11:48:51
5 459765 29-SEP-20 11:48:51 29-SEP-20 11:48:52
8 459765 29-SEP-20 11:48:51 29-SEP-20 11:48:52
6 459779 29-SEP-20 11:48:53 29-SEP-20 11:48:53
9 459779 29-SEP-20 11:48:53 29-SEP-20 11:48:53
0 463290 29-SEP-20 11:49:14 29-SEP-20 11:49:15

12 rows selected.

SQL>
SQL> select sequence#, first_change#, next_change#-1, next_time
2 from v$backup_archivelog_details
3 order by sequence#
4 /

SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#-1 NEXT_TIME
---------- ------------- -------------- ------------------
170 442901 448665 29-SEP-20 11:45:17
171 448666 450050 29-SEP-20 11:45:51
172 450051 451367 29-SEP-20 11:47:30
173 451368 454869 29-SEP-20 11:48:19
174 454870 457557 29-SEP-20 11:48:33
175 457558 457611 29-SEP-20 11:48:41
176 457612 459744 29-SEP-20 11:48:48
177 459745 459767 29-SEP-20 11:48:52

8 rows selected.

SQL>


In this case, file#=0 is actually the Controlfile, so it has the highest Checkpoint SCN and Time.  As I noted in my previous post, it doesn't matter that the Controlfile is "newer" than the Datafiles.  What we need to compare is the Datafiles against the ArchiveLogs. We see that the Datafiles have slightly different Checkpoint SCNs (the backup was created with FILESPERSET=2, so each pair of Datafiles gets its own Checkpoint).  The highest Datafile Checkpoint SCN is 459779, but the ArchiveLogs end at 459767.  Therefore, this database cannot be RECOVERed to a Consistent Point In Time.
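
The same comparison can be made in a single query against the two views used above. This is only a sketch (the column aliases are mine), but it puts the two numbers side by side:

-- Highest Datafile Checkpoint SCN in the backup (file#=0, the Controlfile,
-- is excluded) versus the highest SCN covered by the backed-up ArchiveLogs.
select
  (select max(checkpoint_change#)
     from v$backup_datafile
    where file# > 0)                    as max_datafile_ckpt_scn,
  (select max(next_change#) - 1
     from v$backup_archivelog_details)  as max_archivelog_scn
from dual;

If the first number exceeds the second, as it does here (459779 versus 459767), a consistent recovery is not possible from this set of backups.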

Should I try doing a RESTORE and RECOVER, nevertheless?

I first revert to ORACLE_SID=HEMANT and use the initHEMANT.ora parameter file that I obtained from the source server.

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>ORACLE_SID=HEMANT;export ORACLE_SID
oracle19c>ls -l $ORACLE_HOME/dbs/initHEMANT.ora
-rw-r--r--. 1 oracle oinstall 693 Sep 28 23:05 /opt/oracle/product/19c/dbhome_1/dbs/initHEMANT.ora
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Sep 29 22:25:13 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN>
RMAN> startup nomount;

Oracle instance started

Total System Global Area 1207958960 bytes

Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes

RMAN> restore controlfile from '/home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06';

Starting restore at 29-SEP-20
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/oracle/oradata/HEMANT/control01.ctl
output file name=/opt/oracle/oradata/HEMANT/control02.ctl
Finished restore at 29-SEP-20

RMAN>
RMAN> alter database mount;

released channel: ORA_DISK_1
Statement processed

RMAN> crosscheck backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1 RECID=12 STAMP=1052394450
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03 RECID=13 STAMP=1052394452
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1 RECID=14 STAMP=1052394509
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1 RECID=15 STAMP=1052394516
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1 RECID=16 STAMP=1052394523
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1 RECID=17 STAMP=1052394530
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1 RECID=18 STAMP=1052394532
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1 RECID=19 STAMP=1052394532
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1 RECID=20 STAMP=1052394533
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05 RECID=21 STAMP=1052394534
Crosschecked 10 objects


RMAN> delete noprompt expired backup;

using channel ORA_DISK_1

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
12 12 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1
13 13 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03
14 14 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1
15 15 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1
16 16 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1
17 17 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1
18 18 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1
19 19 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1
20 20 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1
21 21 1 1 EXPIRED DISK /opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0cvbkgui_1_1 RECID=12 STAMP=1052394450
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-03 RECID=13 STAMP=1052394452
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0evbkh0d_1_1 RECID=14 STAMP=1052394509
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0fvbkh0k_1_1 RECID=15 STAMP=1052394516
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0gvbkh0r_1_1 RECID=16 STAMP=1052394523
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0hvbkh12_1_1 RECID=17 STAMP=1052394530
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0ivbkh13_1_1 RECID=18 STAMP=1052394532
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0jvbkh14_1_1 RECID=19 STAMP=1052394532
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/0kvbkh14_1_1 RECID=20 STAMP=1052394533
deleted backup piece
backup piece handle=/opt/oracle/product/19c/dbhome_1/dbs/c-432411782-20200929-05 RECID=21 STAMP=1052394534
Deleted 10 EXPIRED objects


RMAN>
RMAN> catalog start with '/home/oracle/HEMANT_DB_Backup/';

searching for all files that match the pattern /home/oracle/HEMANT_DB_Backup/

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1
File Name: /home/oracle/HEMANT_DB_Backup/c-432411782-20200929-06

RMAN>
RMAN> restore database;

Starting restore at 29-SEP-20
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /opt/oracle/oradata/HEMANT/system.dbf
channel ORA_DISK_1: restoring datafile 00011 to /opt/oracle/oradata/HEMANT/indx03.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0evbkh0d_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to /opt/oracle/oradata/HEMANT/sysaux.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0fvbkh0k_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to /opt/oracle/oradata/HEMANT/undotbs.dbf
channel ORA_DISK_1: restoring datafile 00010 to /opt/oracle/oradata/HEMANT/indx02.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0gvbkh0r_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /opt/oracle/oradata/HEMANT/users01.dbf
channel ORA_DISK_1: restoring datafile 00007 to /opt/oracle/oradata/HEMANT/users03.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0hvbkh12_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /opt/oracle/oradata/HEMANT/indx01.dbf
channel ORA_DISK_1: restoring datafile 00008 to /opt/oracle/oradata/HEMANT/users04.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0ivbkh13_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /opt/oracle/oradata/HEMANT/users02.dbf
channel ORA_DISK_1: restoring datafile 00009 to /opt/oracle/oradata/HEMANT/users05.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0kvbkh14_1_1 tag=TAG20200929T114829
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 29-SEP-20

RMAN>
RMAN> list backup of archivelog all;


List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
22 4.19M DISK 00:00:00 29-SEP-20
BP Key: 22 Status: AVAILABLE Compressed: YES Tag: TAG20200929T114730
Piece Name: /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1

List of Archived Logs in backup set 22
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 170 442901 29-SEP-20 448666 29-SEP-20
1 171 448666 29-SEP-20 450051 29-SEP-20
1 172 450051 29-SEP-20 451368 29-SEP-20

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
28 4.64M DISK 00:00:01 29-SEP-20
BP Key: 28 Status: AVAILABLE Compressed: YES Tag: TAG20200929T114852
Piece Name: /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1

List of Archived Logs in backup set 28
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 173 451368 29-SEP-20 454870 29-SEP-20
1 174 454870 29-SEP-20 457558 29-SEP-20
1 175 457558 29-SEP-20 457612 29-SEP-20
1 176 457612 29-SEP-20 459745 29-SEP-20
1 177 459745 29-SEP-20 459768 29-SEP-20

RMAN>
RMAN> recover database until sequence 178;

Starting recover at 29-SEP-20
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/29/2020 22:31:03
RMAN-06556: datafile 6 must be restored from backup older than SCN 459768

RMAN>
RMAN> restore archivelog all;

Starting restore at 29-SEP-20
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=255 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/29/2020 22:35:48
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 178 and starting SCN of 459768 found to restore

RMAN> restore archivelog until sequence 178;

Starting restore at 29-SEP-20
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/29/2020 22:36:17
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 178 and starting SCN of 459768 found to restore

RMAN>
RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name HEMANT
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
9 1 178 A 29-SEP-20
Name: /opt/oracle/archivelog/HEMANT/1_178_1052392838.dbf


RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=255 device type=DISK
validation failed for archived log
archived log file name=/opt/oracle/archivelog/HEMANT/1_178_1052392838.dbf RECID=9 STAMP=1052394543
Crosschecked 1 objects


RMAN> delete expired archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=255 device type=DISK
List of Archived Log Copies for database with db_unique_name HEMANT
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
9 1 178 X 29-SEP-20
Name: /opt/oracle/archivelog/HEMANT/1_178_1052392838.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/HEMANT/1_178_1052392838.dbf RECID=9 STAMP=1052394543
Deleted 1 EXPIRED objects


RMAN>
RMAN> restore archivelog all;

Starting restore at 29-SEP-20
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/29/2020 22:37:59
RMAN-06026: some targets not found - aborting restore
RMAN-06025: no backup of archived log for thread 1 with sequence 178 and starting SCN of 459768 found to restore

RMAN> restore archivelog until sequence 177;

Starting restore at 29-SEP-20
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=170
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=171
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=172
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0cvbkgui_1_1 tag=TAG20200929T114730
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=173
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=174
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=175
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=176
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=177
channel ORA_DISK_1: reading from backup piece /home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1
channel ORA_DISK_1: piece handle=/home/oracle/HEMANT_DB_Backup/0jvbkh14_1_1 tag=TAG20200929T114852
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 29-SEP-20

RMAN>
RMAN> exit


Recovery Manager complete.
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Sep 29 22:48:53 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover using backup controlfile until cancel;
alter database recover using backup controlfile until cancel
*
ERROR at line 1:
ORA-00279: change 456249 generated at 09/29/2020 11:48:29 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf
ORA-00280: change 456249 for thread 1 is in sequence #174


SQL>
SQL> alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf';
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf'
*
ERROR at line 1:
ORA-00279: change 457558 generated at 09/29/2020 11:48:33 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf
ORA-00280: change 457558 for thread 1 is in sequence #175
ORA-00278: log file '/opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf' no longer needed for this recovery


SQL> alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf';
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf'
*
ERROR at line 1:
ORA-00279: change 457612 generated at 09/29/2020 11:48:41 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf
ORA-00280: change 457612 for thread 1 is in sequence #176
ORA-00278: log file '/opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf' no longer needed for this recovery


SQL> alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf';
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf'
*
ERROR at line 1:
ORA-00279: change 459745 generated at 09/29/2020 11:48:48 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf
ORA-00280: change 459745 for thread 1 is in sequence #177
ORA-00278: log file '/opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf' no longer needed for this recovery


SQL> alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf';
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf'
*
ERROR at line 1:
ORA-00279: change 459768 generated at 09/29/2020 11:48:52 needed for thread 1
ORA-00289: suggestion : /opt/oracle/archivelog/HEMANT/1_178_1052392838.dbf
ORA-00280: change 459768 for thread 1 is in sequence #178
ORA-00278: log file '/opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf' no longer needed for this recovery


SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-10877: error signaled in parallel recovery slave
ORA-10877: error signaled in parallel recovery slave
ORA-01153: an incompatible media recovery is active


SQL> alter database recover cancel;
alter database recover cancel
*
ERROR at line 1:
ORA-01112: media recovery not started


SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01152: file 6 was not restored from a sufficiently old backup
ORA-01110: data file 6: '/opt/oracle/oradata/HEMANT/users02.dbf'


SQL>



ArchiveLog Sequence#178 had been created on the source server before the controlfile backup, but it is not in the Backup Pieces I received.  So, Oracle refuses to allow me to RECOVER the database.
The RESTORE is successful, but the RECOVER fails and the database cannot be OPENed.
Datafiles 6 and 9 have a higher Checkpoint SCN than the highest SCN available in the ArchiveLogs.

Unfortunately, the default behaviour of Oracle is to report only the first Datafile that has a higher SCN; it doesn't report all of them.  The database might have had 10 or 100 Datafiles that are "newer" than the ArchiveLogs.  That is why the SQL queries on V$BACKUP_DATAFILE and V$BACKUP_ARCHIVELOG_DETAILS that I demonstrated earlier in this post are useful; a single query can also list all the offending Datafiles at once, as sketched below.
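
Such a query is only illustrative, but it lists every backed-up Datafile whose Checkpoint SCN lies beyond the ArchiveLog coverage, not just the first one that the OPEN attempt complains about:

-- Every Datafile in the backup that is "newer" than the highest SCN
-- available in the backed-up ArchiveLogs.
select file#, checkpoint_change#, checkpoint_time
  from v$backup_datafile
 where file# > 0
   and checkpoint_change# >
       (select max(next_change#) - 1
          from v$backup_archivelog_details)
 order by file#;

Against this backup it returns Datafiles 6 and 9, both at Checkpoint SCN 459779.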


This is what the alert log shows:


2020-09-29T22:29:17.560085+08:00
Full restore complete of datafile 1 /opt/oracle/oradata/HEMANT/system.dbf. Elapsed time: 0:00:06
checkpoint is 456249
2020-09-29T22:29:23.415906+08:00
Full restore complete of datafile 2 /opt/oracle/oradata/HEMANT/sysaux.dbf. Elapsed time: 0:00:05
checkpoint is 457590
last deallocation scn is 450639
2020-09-29T22:29:25.874043+08:00
Full restore complete of datafile 10 /opt/oracle/oradata/HEMANT/indx02.dbf. Elapsed time: 0:00:00
checkpoint is 458680
last deallocation scn is 3
2020-09-29T22:29:29.812208+08:00
Full restore complete of datafile 3 /opt/oracle/oradata/HEMANT/undotbs.dbf. Elapsed time: 0:00:04
checkpoint is 458680
last deallocation scn is 3
2020-09-29T22:29:33.129942+08:00
Full restore complete of datafile 4 /opt/oracle/oradata/HEMANT/users01.dbf. Elapsed time: 0:00:01
checkpoint is 459759
last deallocation scn is 3
Full restore complete of datafile 7 /opt/oracle/oradata/HEMANT/users03.dbf. Elapsed time: 0:00:01
checkpoint is 459759
last deallocation scn is 3
Full restore complete of datafile 5 /opt/oracle/oradata/HEMANT/indx01.dbf. Elapsed time: 0:00:00
checkpoint is 459765
last deallocation scn is 3
Full restore complete of datafile 8 /opt/oracle/oradata/HEMANT/users04.dbf. Elapsed time: 0:00:01
checkpoint is 459765
last deallocation scn is 3
2020-09-29T22:29:35.182200+08:00
Full restore complete of datafile 6 /opt/oracle/oradata/HEMANT/users02.dbf. Elapsed time: 0:00:01
checkpoint is 459779
last deallocation scn is 3
Full restore complete of datafile 9 /opt/oracle/oradata/HEMANT/users05.dbf. Elapsed time: 0:00:01
checkpoint is 459779
last deallocation scn is 3
2020-09-29T22:34:44.026271+08:00
alter database recover using backup controlfile
2020-09-29T22:34:44.026373+08:00
Media Recovery Start
Started logmerger process
2020-09-29T22:34:44.322629+08:00
Parallel Media Recovery started with 2 slaves
ORA-279 signalled during: alter database recover using backup controlfile...
2020-09-29T22:35:34.263529+08:00
*************************************************************


2020-09-29T22:49:27.004784+08:00
alter database recover using backup controlfile until cancel
2020-09-29T22:49:27.004864+08:00
Media Recovery Start
Started logmerger process
2020-09-29T22:49:27.132583+08:00
Parallel Media Recovery started with 2 slaves
ORA-279 signalled during: alter database recover using backup controlfile until cancel...
2020-09-29T22:50:52.692824+08:00
alter database recover using backup controlfile
2020-09-29T22:50:52.692943+08:00
Media Recovery Start
ORA-275 signalled during: alter database recover using backup controlfile...
2020-09-29T22:52:19.356431+08:00
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf'
2020-09-29T22:52:19.356498+08:00
Media Recovery Log /opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf
ORA-279 signalled during: alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_174_1052392838.dbf'...
2020-09-29T22:52:36.435252+08:00
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf'
2020-09-29T22:52:36.435374+08:00
Media Recovery Log /opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf
ORA-279 signalled during: alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_175_1052392838.dbf'...
2020-09-29T22:52:51.906865+08:00
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf'
2020-09-29T22:52:51.906956+08:00
Media Recovery Log /opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf
ORA-279 signalled during: alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_176_1052392838.dbf'...
2020-09-29T22:53:18.228572+08:00
alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf'
2020-09-29T22:53:18.228668+08:00
Media Recovery Log /opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf
ORA-279 signalled during: alter database recover logfile '/opt/oracle/archivelog/HEMANT/1_177_1052392838.dbf'...
2020-09-29T22:53:25.958701+08:00
alter database open resetlogs
2020-09-29T22:53:26.113916+08:00
Recovery interrupted!
ORA-10877 signalled during: alter database open resetlogs...
2020-09-29T22:53:35.846419+08:00
alter database recover cancel
ORA-1112 signalled during: alter database recover cancel...
2020-09-29T22:54:03.274546+08:00
alter database open resetlogs
2020-09-29T22:54:03.306918+08:00
Signalling error 1152 for datafile 6!
ORA-1152 signalled during: alter database open resetlogs...


So, even if I manually RESTORE the ArchiveLogs and then apply each one with the RECOVER LOGFILE command, Oracle still doesn't allow an OPEN RESETLOGS because Sequence#178 is missing.
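
A related check, while the restored database is still MOUNTed, is against V$DATAFILE_HEADER. This is a sketch (the exact output depends on how far recovery progressed), but any Datafile marked FUZZY, or whose header Checkpoint SCN is ahead of the redo actually applied, is what prevents the OPEN RESETLOGS:

-- Datafile header status after the partial recovery; FUZZY='YES' means the
-- file still needs redo before the database can be opened.
select file#, fuzzy, checkpoint_change#, checkpoint_time
  from v$datafile_header
 order by checkpoint_change#, file#;

Here, Datafiles 6 and 9, restored at Checkpoint SCN 459779, are the ones to look for.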


Categories: DBA Blogs

Network Policies In Kubernetes

Online Apps DBA - Tue, 2020-09-29 06:10

Security is one of the important parts of Kubernetes. Network Policies are Kubernetes resources that control the traffic between pods. A Kubernetes Network Policy lets developers secure access to and from their applications. To find out all about Network Policies in Kubernetes, check out the blog at https://k21academy.com/kubernetes29. The blog will cover: ▪️What are Network […]

The post Network Policies In Kubernetes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Labels And Annotations In Kubernetes

Online Apps DBA - Tue, 2020-09-29 04:43

Labels and Annotations are among the main foundations of Kubernetes. They both provide a way of adding extra metadata to our Kubernetes Objects. Labels are used for grouping, viewing, and operating. Each object can have a set of key/value labels defined, and each key must be unique for a given object. Annotations provide a place […]

The post Labels And Annotations In Kubernetes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Case Study: 5 Pillars Of AWS Well-Architected Framework

Online Apps DBA - Tue, 2020-09-29 04:39

Essentially, the AWS Well-Architected Framework is a concept for architecting cloud infrastructure that is performant, resilient, and efficient. The 5 pillars are the deciding factor that makes applications and workloads well architected. It seems simple and unimportant, but trust us, this is what separates an expert from the rest. Read the blog post at […]

The post Case Study: 5 Pillars Of AWS Well-Architected Framework appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
