DBA Blogs

Regarding the Current Role of PL/SQL in Modern Technology Stacks

Tom Kyte - 6 hours 49 min ago
Dear Team, I hope this message finds you well. I have been reflecting on the current landscape of PL/SQL and its role in contemporary technology stacks, and I would greatly appreciate your insights on a few points that have been on my mind.

PL/SQL for business logic? While it is widely acknowledged that "as long as there is Oracle, there will be PL/SQL," I am eager to explore forward-looking scenarios where PL/SQL remains a prominent choice for business logic. In today's context, business logic is predominantly implemented in modern object-oriented languages such as Java or .NET, leveraging features like streams and lambda functions. Could you provide examples or use cases where PL/SQL excels and is considered integral, especially in comparison to these object-oriented approaches?

PL/SQL for data engineering? The ETL landscape has shifted significantly towards technologies like Spark for seamless integration with data warehouses and data lakes. In this evolving scenario, I am curious how PL/SQL continues to play a vital role in ETL processes. Are there specific use cases or examples where PL/SQL is still the preferred choice in modern data engineering stacks?

I understand the historical significance of PL/SQL in minimizing network calls and keeping code close to the data, as highlighted in research papers advocating a thick-database approach. However, I am keen to bridge the gap between theoretical advantages and practical implementations. Are enterprise projects aligning with this approach, or is the trend shifting towards business logic predominantly residing in Java/.NET environments?

In essence, could you kindly furnish examples and use cases illustrating where PL/SQL stands out as a core, integral component in modern data engineering or application development stacks? I am particularly interested in understanding whether PL/SQL is now primarily considered a supplementary or exception-use language, driven by compliance requirements rather than intrinsic value in data movement scenarios. I appreciate your time and insights into this matter, and I look forward to hearing from you soon.
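For illustration, a minimal sketch of the thick-database pattern the question refers to (all table and procedure names are hypothetical): one set-based call inside the database replaces per-row network round trips from a middle tier.
<code>
-- Hypothetical schema: orders(order_id, status, amount) and order_archive.
-- A middle-tier loop would fetch and update each row individually, paying
-- network round trips per order; inside the database the same business
-- rule is two set-based statements:
create or replace procedure archive_closed_orders as
begin
  insert into order_archive (order_id, status, amount)
    select order_id, status, amount
    from   orders
    where  status = 'CLOSED';

  delete from orders where status = 'CLOSED';
end archive_closed_orders;
/
</code>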
Categories: DBA Blogs

Query taking very long.

Tom Kyte - Tue, 2024-02-20 10:46
Hi Tom, I'm facing an issue that is somewhat strange and to which I have no clear answer. The database version I'm using is 12.1.0.1.0 on Windows x64 (both Standard and EE). With 12c (I believe) Oracle started using the Unified Audit Trail. The default "rules" for my version insert a new record in the audit table every time a user connects to the DB. Because these databases are not mine, the size of the tablespace (SYSAUX) was not under surveillance and it has grown to a considerable size without anyone noticing. At the moment the tablespace and the table are around 16 GB and 14 GB respectively. We have tried to remove the information from the table and that process is ongoing.

My question is: in the DBs where this table and tablespace have grown to such sizes, any (or many) query run against an object in that tablespace takes a huge amount of time. Of course, I know that if an object is very large it is going to take a long time to read, but here we are talking about a really large amount of time. When I'm executing a query against that tablespace, the disk subsystem (SSD disks) reads at about 130 MB/s. In a simplistic way, one could say that it should read the necessary 16 GB in a bit less than 200 seconds, but the system takes more than 10 minutes (I never allowed it to actually finish because these are PROD systems).

What I would like to know is whether there is anything specific about objects inside the SYSAUX tablespace (namely the unified audit trail objects and the scheduler job objects) that could explain such a delay in execution while showing such heavy disk access (reads). Thank you very much, hugo
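Not an answer to the timing question, but for the ongoing cleanup: the supported purge route for the unified audit trail is DBMS_AUDIT_MGMT. A minimal sketch (the 90-day retention window is an assumption):
<code>
begin
  -- mark everything older than 90 days as archived...
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - interval '90' day);

  -- ...then purge up to that timestamp
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);
end;
/
</code>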
Categories: DBA Blogs

APEX feature request, support needed!

Flavio Casetta - Tue, 2024-02-20 01:10
Categories: DBA Blogs

High db block gets for inserting into reference partitioned table

Tom Kyte - Thu, 2024-02-15 01:46
Hello Tom, Could you please advise why I'm seeing such a huge difference in db block gets and redo for an insert into a range-partitioned table versus a reference-partitioned table? Db block gets are about 100x higher for the reference-partitioned table, and the insert is 2-3 times slower.
<code>
DB01> create table t1 (id number(19) primary key, ts date)
  2  partition by range (ts) interval (numtodsinterval(1, 'DAY')) (partition P0001 values less than (to_date('2024-01-01' ,'YYYY-MM-DD')));

Table created.

DB01> insert into t1 (id, ts) values (1, sysdate);

1 row created.

DB01> -- range interval
DB01> create table t2 (id number(19), t1_id number(19) not null, constraint t2_fk foreign key (t1_id) references t1 (id))
  2  partition by range (t1_id) interval (1) (partition values less than (1));

Table created.

DB01> set autotrace trace exp stat
DB01> insert into t2 (id, t1_id) select level, 1 from dual connect by level <= 2000000;

2000000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T2   |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=2000000)

Statistics
----------------------------------------------------------
        105  recursive calls
      51252  db block gets
       7237  consistent gets
          0  physical reads
  147628492  redo size
        123  bytes sent via SQL*Net to client
        391  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
    2000000  rows processed

DB01> set autotrace off
DB01> commit;

Commit complete.

DB01> -- reference
DB01> create table t3 (id number(19), t1_id number(19) not null, constraint t3_fk foreign key (t1_id) references t1 (id))
  2  partition by reference (t3_fk);

Table created.

DB01> set autotrace trace exp stat
DB01> insert into t3 (id, t1_id) select level, 1 from dual connect by level <= 2000000;

2000000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id | Operation | Name | Rows ...
</code>
Categories: DBA Blogs

Adding a PDB (and Service) to a RAC database -- 1 - service running on only 1 Instance

Hemant K Chitale - Tue, 2024-02-13 02:10
On my existing RAC database I have:

[oracle@node1 ~]$ srvctl status database -db DB21CRAC
Instance DB21CRAC1 is running on node node1
Instance DB21CRAC2 is running on node node2
[oracle@node1 ~]$
[oracle@node1 ~]$ srvctl status service -d DB21CRAC -s hemantpdb
PRKO-2017 : Service hemantpdb does not exist for database DB21CRAC.
[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service hemantpdb -pdb HEMANTPDB
PRKO-2017 : Service hemantpdb does not exist for database DB21CRAC.
[oracle@node1 ~]$
[grid@node1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 21.0.0.0.0 - Production on 13-FEB-2024 15:06:11

Copyright (c) 1991, 2021, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 21.0.0.0.0 - Production
Start Date                13-FEB-2024 14:51:13
Uptime                    0 days 0 hr. 14 min. 57 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/21.3.0.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/node1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.115)(PORT=1521)))
Services Summary...
Service "0f488ad896262f80e0636f38a8c0fc18" has 2 instance(s).
  Instance "DB21CRAC1", status READY, has 1 handler(s) for this service...
  Instance "DB21CRAC2", status READY, has 1 handler(s) for this service...
Service "DB21CRAC" has 2 instance(s).
  Instance "DB21CRAC1", status READY, has 1 handler(s) for this service...
  Instance "DB21CRAC2", status READY, has 1 handler(s) for this service...
Service "DB21CRACXDB" has 2 instance(s).
  Instance "DB21CRAC1", status READY, has 1 handler(s) for this service...
  Instance "DB21CRAC2", status READY, has 1 handler(s) for this service...
Service "hemantpdb" has 2 instance(s).
  Instance "DB21CRAC1", status READY, has 1 handler(s) for this service...
  Instance "DB21CRAC2", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@node1 ~]$
[oracle@node1 ~]$
SQL> select inst_id, name, network_name from gv$services order by 1;

   INST_ID NAME             NETWORK_NAME
---------- ---------------- ----------------
         1 hemantpdb        hemantpdb
         1 DB21CRAC         DB21CRAC
         1 SYS$BACKGROUND
         1 DB21CRACXDB      DB21CRACXDB
         1 SYS$USERS
         2 hemantpdb        hemantpdb
         2 DB21CRAC         DB21CRAC
         2 SYS$BACKGROUND
         2 DB21CRACXDB      DB21CRACXDB
         2 SYS$USERS

10 rows selected.

SQL>
SQL> select inst_id, con_id, name, open_mode from gv$pdbs order by 1,2;

   INST_ID     CON_ID NAME             OPEN_MODE
---------- ---------- ---------------- ----------
         1          2 PDB$SEED         READ ONLY
         1          3 HEMANTPDB        READ WRITE
         2          2 PDB$SEED         READ ONLY
         2          3 HEMANTPDB        READ WRITE

SQL>


This shows that I have created a custom PDB called "HEMANTPDB" and that there is a default service called "hemantpdb" on each instance.  However, this service is NOT listed when I check via srvctl. 

This is because srvctl queries the cluster for information about services.  The service "hemantpdb" was created by default when I ran CREATE PLUGGABLE DATABASE HEMANTPDB, so it is not registered with the cluster.


So, I can add a new service and configure Transparent Application Failover (TAF) with SELECT failover, specifying the first instance (DB21CRAC1) as the preferred instance on which the service starts and the second instance (DB21CRAC2) as the available alternate:

[oracle@node1 ~]$ srvctl add service -db DB21CRAC  -service newservice -preferred DB21CRAC1 -available DB21CRAC2 -tafpolicy BASIC -failovertype SELECT -pdb HEMANTPDB
[oracle@node1 ~]$
[oracle@node1 ~]$ srvctl config service -db DB21CRAC  -service newservice
Service name: newservice
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Reset State: NONE
Failover type: SELECT
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Pluggable database name: HEMANTPDB
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Failback :  no
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: DB21CRAC1
Available instances: DB21CRAC2
CSS critical: no
[oracle@node1 ~]$
[oracle@node1 ~]$ srvctl start service -db DB21CRAC -service newservice
[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service newservice
Service newservice is running on instance(s) DB21CRAC1
[oracle@node1 ~]$




If database instance DB21CRAC1 fails (crashes), Clusterware starts the service on DB21CRAC2.  Here I kill the SMON process of DB21CRAC1 and then verify that the service has restarted on DB21CRAC2:

[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service newservice
Service newservice is running on instance(s) DB21CRAC1
[oracle@node1 ~]$ ps -ef |grep smon
oracle    2951     1  0 16:05 ?        00:00:00 ora_smon_DB21CRAC1
root      3521     1  1 14:50 ?        00:00:48 /u01/app/21.3.0.0/grid/bin/osysmond.bin
grid      4068     1  0 14:50 ?        00:00:00 asm_smon_+ASM1
oracle    4146 25526  0 16:05 pts/0    00:00:00 grep --color=auto smon
[oracle@node1 ~]$ kill -9 2951
[oracle@node1 ~]$ srvctl status service -db DB21CRAC -service newservice
Service newservice is running on instance(s) DB21CRAC2
[oracle@node1 ~]$


In this case, connections using this service name ("newservice") will connect (fail over) to the second database instance, running on node2 of the cluster.
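For reference, a client-side tnsnames.ora entry for such a TAF service might look like this sketch (the SCAN name rac-scan.example.com is an assumption; TYPE and METHOD mirror the service's SELECT/BASIC configuration, and RETRIES/DELAY are illustrative values):

NEWSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = newservice)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )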

Categories: DBA Blogs

SQL*Loader not loading all the needed rows due to newline and enclosure characters

Tom Kyte - Mon, 2024-02-12 18:06
I have a problem with how SQL*Loader manages the end of a column value. I was hoping to handle CR LF, the enclosure character and the separator character, but it seems I can't find a solution! The data I receive from the .csv file looks like this:
<code>
"C","I","FLAGS","LASTUPDATEDATE","BOEVERSION","C_OSUSER_UPDATEDBY","I_OSUSER_UPDATEDBY","C_OSUSER_PWF","DESCRIPTION","DURATION","ENDDATE","I_OSUSER_PWF","LASTSTATUSCHA","STARTDATE","DURATIONUNIT","TYPE","STATUS","C_BNFTRGHT_CONDITIONS","I_BNFTRGHT_CONDITIONS","C_CNTRCT1_CONDITION","I_CNTRCT1_CONDITION","EXTBLOCKTYPE","EXTBLOCKDURATIONUNIT","EXTBLOCKDURATION","EXTBLOCKDESCRIPTION","PARTITIONID"
"7680","423","PE","2015-07-06 11:42:10","0","1000","1506","","No benefits are payable for a Total Disability period during a Parental or Family-Related Leave, for a Total Disability occurring during this period. ","0","","","","","69280000","69312015","71328000","7285","402","","","","","","","1"
"7680","426","PE","2015-07-06 11:42:10","0","1000","1506","","""Means to be admitted to a Hospital as an in-patient for more than 18 consecutive hours. "" ","0","","","","","69280000","69312021","71328000","7285","402","","","","","","","1"
</code>
My ctl file is as follows:
<code>
Load Data
infile 'C:\2020-07-29-03-04-48-TolCondition.csv'
CONTINUEIF LAST != '"'
into table TolCondition
REPLACE
FIELDS TERMINATED BY ","
ENCLOSED by '"'
(
C,
I,
FLAGS,
LASTUPDATEDATE DATE "YYYY-MM-DD HH24:MI:SS",
BOEVERSION,
C_OSUSER_UPDATEDBY,
I_OSUSER_UPDATEDBY,
C_OSUSER_PWF,
DESCRIPTION CHAR(1000),
DURATION,
ENDDATE DATE "YYYY-MM-DD HH24:MI:SS",
I_OSUSER_PWF,
LASTSTATUSCHA DATE "YYYY-MM-DD HH24:MI:SS",
STARTDATE DATE "YYYY-MM-DD HH24:MI:SS",
DURATIONUNIT,
TYPE,
STATUS,
C_BNFTRGHT_CONDITIONS,
I_BNFTRGHT_CONDITIONS,
C_CNTRCT1_CONDITION,
I_CNTRCT1_CONDITION,
EXTBLOCKTYPE,
EXTBLOCKDURATIONUNIT,
EXTBLOCKDURATION,
EXTBLOCKDESCRIPTION,
PARTITIONID
)
</code>
Here is what I tried in the control file:
CONTINUEIF LAST != '"'
CONTINUEIF THIS PRESERVE (1:2) != '",'
"str X'220D0A'"
Here is the result I currently have with CONTINUEIF LAST != '"':
<code>
Record 2: Rejected - Error on table FNA_FNTFO2.TOLCONDITION, column DESCRIPTION.
second enclosure string not present
Record 3: Rejected - Error on table FNA_FNTFO2.TOLCONDITION, column C.
no terminator found after TERMINATED and ENCLOSED field

Table FNA_FNTFO2.TOLCONDITION:
  1 Row successfully loaded.
  2 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
</code>
Is there any way to manage line breaks and the enclosure character in SQL*Loader? I don't understand why we can't change how it sees rows. Instead of seeing a new row when there is a CR LF, can we tell it to concatenate values until the last enclosure character (chr(34) in my case) plus the separator character (a comma, in my case) has been seen? I really ho...
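One approach often suggested for embedded line breaks (a sketch, not verified against this exact file): if the file can be produced with a distinctive end-of-record marker (a pipe before the CR LF is assumed here), the "str" clause declares that marker as the physical record terminator, so CR LF inside quoted fields becomes ordinary data and CONTINUEIF is no longer needed:
<code>
Load Data
infile 'C:\2020-07-29-03-04-48-TolCondition.csv' "str X'7C0D0A'"
into table TolCondition
REPLACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
C,
I,
FLAGS,
LASTUPDATEDATE DATE "YYYY-MM-DD HH24:MI:SS",
BOEVERSION,
C_OSUSER_UPDATEDBY,
I_OSUSER_UPDATEDBY,
C_OSUSER_PWF,
DESCRIPTION CHAR(1000),
DURATION,
ENDDATE DATE "YYYY-MM-DD HH24:MI:SS",
I_OSUSER_PWF,
LASTSTATUSCHA DATE "YYYY-MM-DD HH24:MI:SS",
STARTDATE DATE "YYYY-MM-DD HH24:MI:SS",
DURATIONUNIT,
TYPE,
STATUS,
C_BNFTRGHT_CONDITIONS,
I_BNFTRGHT_CONDITIONS,
C_CNTRCT1_CONDITION,
I_CNTRCT1_CONDITION,
EXTBLOCKTYPE,
EXTBLOCKDURATIONUNIT,
EXTBLOCKDURATION,
EXTBLOCKDESCRIPTION,
PARTITIONID
)
</code>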
Categories: DBA Blogs

Playlist of Oracle RAC Videos

Hemant K Chitale - Sat, 2024-02-10 02:23

 This is a link to my YouTube playlist of demonstrations on Oracle RAC (12c and 21c)



Categories: DBA Blogs

Generate Java code in SQL*Plus

Tom Kyte - Thu, 2024-02-08 22:26
Hi Tom, how are you? I have the below source code:
<code>
create or replace and compile java source named "Something" as
import oracle.sql.*;
public class Something {
...............
}
</code>
When I wrote the above Java source under SQL*Plus, I got the following errors:
ERROR at line 1: ORA-29536: badly formed source: Encountered "<EOF>" at line 1, column 20. Was expecting: ";" ...
It complains about the ";" after "import oracle.sql.*", so do I need to grant some privileges or something else? Thanks
============================
Hi Tom, my database is Oracle 8i, and the banner under SQL*Plus is:
<code>
SQL*Plus: Release 8.0.6.0.0 - Production on Fri Jun 22 10:12:30 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to:
Oracle8i Enterprise Edition Release 8.1.6.1.0 - Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production
</code>
But when I executed the following, I got:
<code>
dummy@someserver> create or replace and compile java source named
  2  "Somthing"
  3  as
  4  import oracle.sql.*;
"Somthing"
*
ERROR at line 2:
ORA-29536: badly formed source: Encountered "<EOF>" at line 1, column 20. Was expecting: ";" ...
</code>
What should I do? Thanks
Categories: DBA Blogs

Autonomous transactions and commits

Tom Kyte - Mon, 2024-02-05 02:46
Hi, We are using autonomous transactions for logging and debugging purposes. In PL/SQL code there are calls to packages which log information using autonomous transactions. Do autonomous transactions use commit point optimization? Are there waits happening for writing data to the redo logs? Do commit write options (WAIT/NOWAIT or IMMEDIATE/BATCH) have any relevance when using autonomous transactions? If autonomous transactions are used solely for debugging purposes, which commit parameters are best regarding performance? lh
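For context, a typical autonomous logging procedure of the kind described, using the asynchronous commit options the question asks about (the log_table name and columns are hypothetical):
<code>
create or replace procedure log_msg (p_msg in varchar2) as
  pragma autonomous_transaction;
begin
  insert into log_table (ts, msg) values (systimestamp, p_msg);
  -- batched, asynchronous commit: the session does not wait on the redo write
  commit write batch nowait;
end log_msg;
/
</code>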
Categories: DBA Blogs

dealing with the word "group" in selecting json values

Tom Kyte - Mon, 2024-02-05 02:46
I have the following json string in mytbl.json_data column: {"resourceType":"QuestionnaireResponse","extension":[{"url":"ppp","valueCode":"en"}],"identifier":{"value":"222222"},"status":"completed","subject":{"reference":"Patient/12345"},"authored":"2024-01-17T20:13:46+00:00","group":{"linkId":"root_group","title":"Demographics","question":[{"linkId":"104573","text":"What is the highest grade of school you have completed","answer":[{"valueInteger":2}]},{"linkId":"333","text":"What describes your current marital status? ","answer":[{"valueInteger":1}]}]}} When I query the "group" field I get null: <code> SELECT s.json_data.group FROM mytbl s; </code> When I rename the "group" field to "group_1" I get the correct value. I need to be able to use the "group" field as this is what we get from the vendor. How can I do it? I am using SQL Developer. Thank you!!!
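A sketch of one workaround: GROUP is a SQL keyword, so simple dot notation trips over it, but the field can be addressed with an explicit JSON path expression:
<code>
-- the whole "group" object, returned as JSON text
SELECT JSON_QUERY(s.json_data, '$.group') AS grp
FROM   mytbl s;

-- a scalar inside it
SELECT JSON_VALUE(s.json_data, '$.group.title') AS title
FROM   mytbl s;
</code>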
Categories: DBA Blogs

Why is plan from xplan different from sql monitor report

Tom Kyte - Mon, 2024-02-05 02:46
Hi Gurus, I have the below query running on Oracle 12c. I am not sure why the plan I got from DBMS_XPLAN.DISPLAY_CURSOR differs from the one in DBMS_SQLTUNE.REPORT_SQL_MONITOR; details are below. As you can see, in the xplan output the INS_PT table was read and joined once, but in the monitor report this table was read and joined twice. Please share your thoughts. Thanks in advance.

Plan from SQL monitor report:
<code>
SQL Monitoring Report

SQL Plan Monitoring Details (Plan Hash Value=2657002414)
===================================================================================================================================================================================================================================================
| Id |          Operation           | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Write | Write |  Cell   | Mem | Activity | Activity Detail |
|    |                              |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | Offload |     |   (%)    |   (# samples)   |
===================================================================================================================================================================================================================================================
|  0 | SELECT STATEMENT             |       |         |      |           |        |     1 |          |      |       |       |       |         |     |          |                 |
|  1 |  TEMP TABLE TRANSFORMATION   |       |         |      |           |        |     1 |          |      |       |       |       |         |     |          |                 |
|  2 |   LOAD AS SELECT             |       |         |      |         1 |     +4 |     1 |        2 |      |       |     1 | 32768 |         |     |          |                 |
|  3 |    SORT AGGREGATE            |       |       1 |      |         1 |     +4 |     1 |        1 |      |       |       |       |         |     |          |                 |
|  4 |     TABLE ACCESS STORAGE FULL| B_CTL |       1 |   13 |         1 |     +4 |     1 |        1 |      |       |       |       |         |     |          |                 |
|  9 |   LOAD AS SELECT             |       |         |      |         1 |     +4 |     1 |        2 |      |       |    15 |  15MB |         |     |          |                 |
| 10 |    HASH UNIQUE ...
</code>
Categories: DBA Blogs

Oracle’s Vector Datatype

DBASolved - Fri, 2024-02-02 12:33

At Oracle Cloud World 2023, Oracle announced they were moving toward enabling Artificial Intelligence (AI) within many of their products. […]

The post Oracle’s Vector Datatype appeared first on DBASolved.

Categories: DBA Blogs

Step by Step Guide to Configure Amazon Bedrock with VPC Endpoints and PrivateLink

Pakistan's First Oracle Blog - Thu, 2024-02-01 04:20

This video is a step-by-step tutorial on setting up AWS Bedrock with VPC endpoints and PrivateLink to build secure and private generative AI applications.



 Steps:


Step 0: Make sure a private subnet exists, with a private route table that has no route to the internet.

Step 1: Create two security groups: Bedrock-Endpoint-SG and Bedrock-Lambda-SG.

Step 2: In Bedrock-Lambda-SG, add Bedrock-Endpoint-SG for all traffic INBOUND, and allow OUTBOUND to 0.0.0.0/0.

Step 3: In Bedrock-Endpoint-SG, add Bedrock-Lambda-SG for all traffic INBOUND and OUTBOUND.

Step 4: Create two VPC endpoints (bedrock and bedrock-runtime) in the private subnet and attach Bedrock-Endpoint-SG to both.

Step 5: Create the Lambda function, set its timeout to 15 seconds, and attach Bedrock-Lambda-SG; the Lambda execution role should have Bedrock permissions.


Lambda Code:


import boto3
import json

def lambda_handler(event, context):
    # Bedrock control-plane client, used here to list the foundation models
    bedrock = boto3.client(
        service_name='bedrock',
        region_name='us-east-1'
    )

    # Bedrock Runtime client used to invoke and question the models
    bedrock_runtime = boto3.client(
        service_name='bedrock-runtime',
        region_name='us-east-1'
    )

    models = bedrock.list_foundation_models().get('modelSummaries')

    for model in models:
        print(model['modelName']
              + ', Input=' + '-'.join(model['inputModalities'])
              + ', Output=' + ''.join(model['outputModalities'])
              + ', Provider=' + model['providerName'])

    return {
        'statusCode': 200
    }

Categories: DBA Blogs

How to Identify Oracle Database Orphan Sessions

Pakistan's First Oracle Blog - Fri, 2024-01-26 00:17

 In the world of database management, particularly with Oracle databases, "orphan sessions" are a common issue that can affect performance and resource utilization. 

In Oracle databases, an orphan session, sometimes known as a "zombie session," is a session that remains in the database even though its corresponding client process has terminated. These sessions no longer have a user actively interacting with them, yet they consume system resources and can hold locks, leading to performance degradation and blocking issues.

Orphan sessions can occur due to various reasons such as:

  • Network issues that disrupt the connection between the client and the server.
  • Application or client crashes that terminate the session abnormally.
  • Database bugs or misconfigurations.

Queries to Identify Orphan Sessions:

SELECT s.sid, s.serial#, p.spid, s.username, s.program
FROM v$session s
JOIN v$process p ON p.addr = s.paddr
WHERE s.type != 'BACKGROUND';

This query lists active sessions, excluding background processes. It provides session identifiers (sid, serial#), the operating system process identifier (spid), and the username and program name. Orphan sessions often show NULL or unusual entries in the program column.

SELECT s.sid, s.serial#, p.spid, s.username, s.program
FROM v$session s
JOIN v$process p ON p.addr = s.paddr
WHERE s.type != 'BACKGROUND'
AND NOT EXISTS (SELECT NULL FROM v$process WHERE spid = s.process);

This query filters the sessions where the client process (spid) associated with the session does not exist in the v$process view, indicating a potential orphan.


SELECT s.sid, s.serial#, l.id1 AS object_id, o.object_name, o.object_type
FROM v$session s
JOIN v$lock l ON l.sid = s.sid AND l.type = 'TM'
JOIN dba_objects o ON o.object_id = l.id1
WHERE s.sid IN (SELECT sid FROM v$session WHERE ... /* Conditions from above queries */);


This query identifies locks held by sessions suspected to be orphans, which is useful for understanding the impact of these sessions on the database.

How to Manage Orphan Sessions:

Manual Termination: Use the ALTER SYSTEM KILL SESSION command to terminate the identified orphan sessions, or kill the server process at the OS level with kill -9 <spid>, as shown below.
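For example (the sid and serial# values are hypothetical, taken from the queries above):

ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;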

Automated Monitoring and Cleanup: Implementing automated scripts or database jobs to periodically identify and clean up orphan sessions.

Prevention: Addressing the root causes, such as network stability and application robustness, can reduce the occurrence of orphan sessions.

Categories: DBA Blogs

Oracle OCI's Generative AI Service: A New Era in Cloud Computing

Pakistan's First Oracle Blog - Thu, 2024-01-25 23:47

 The world of cloud computing is witnessing a revolutionary change with the introduction of Oracle Cloud Infrastructure's (OCI) Generative AI Service. This innovative offering from Oracle is a testament to the rapidly evolving field of artificial intelligence (AI), particularly in the realm of generative models. As businesses and developers seek more efficient and creative solutions, Oracle's new service stands out as a significant milestone.


What is Oracle OCI's Generative AI Service?

Oracle's OCI Generative AI Service is a cloud-based platform that provides users with access to powerful generative AI models. These models are capable of creating a wide range of content, including text, images, and possibly even audio or video in the future. The service is designed to integrate seamlessly with other OCI offerings, ensuring a cohesive and efficient cloud computing experience.


Key Features and Capabilities

Advanced AI Models

At the heart of OCI's Generative AI Service are state-of-the-art AI models that have been trained on vast datasets. These models can generate high-quality, original content based on user inputs, making them invaluable for a variety of applications.


Scalability and Performance

Oracle's robust cloud infrastructure ensures that the Generative AI Service can scale to meet the demands of any project, big or small. This scalability is crucial for handling large-scale AI tasks without compromising on performance or speed.


Integration with OCI Ecosystem

The service is designed to work seamlessly with other OCI products, such as data storage, analytics, and security services. This integration allows for a more streamlined workflow, as users can easily access and combine different OCI services.


Use Cases

The potential applications of Oracle OCI's Generative AI Service are vast and varied. Here are a few examples:


Content Creation

For marketers and content creators, the service can generate written content, images, and potentially other forms of media. This capability can significantly speed up the content creation process and inspire new ideas.


Business Intelligence

Businesses can leverage the AI's ability to analyze and synthesize information to gain insights from data. This can aid in decision-making, trend analysis, and strategy development.

Research and Development

In the R&D sector, the service can assist in generating hypotheses, modeling complex systems, and even predicting outcomes, thereby accelerating the pace of innovation.


Security and Ethics

Oracle recognizes the importance of ethical AI use and has implemented measures to ensure the responsible deployment of its Generative AI Service. This includes safeguards against generating harmful or biased content and maintaining user privacy and data security.


Getting Started with OCI Generative AI Service

To start using the service, users need to have an Oracle Cloud account. Oracle provides comprehensive documentation and support to help users integrate the AI service into their projects.


Conclusion

Oracle OCI's Generative AI Service is a groundbreaking addition to the cloud computing landscape. It offers immense potential for businesses, developers, and creators to harness the power of AI for generating content and gaining insights. As the technology continues to evolve, it will be exciting to see the innovative applications that emerge from this platform.


Oracle's commitment to integrating advanced AI capabilities into its cloud services is a clear indicator of the transformative impact AI is set to have across industries. The OCI Generative AI Service is not just a tool; it's a gateway to a future where AI and cloud computing work hand in hand to unlock new possibilities.

Categories: DBA Blogs

Use nonprefix local index

Tom Kyte - Thu, 2024-01-25 10:06
Tom, Could you please explain in which scenarios I should consider a non-prefixed local index over a prefixed local index? Thank you very much for your time.
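For reference, a minimal sketch of the distinction on a hypothetical range-partitioned table: a local index is prefixed when its leftmost column(s) match the partitioning key, and non-prefixed otherwise:
<code>
create table sales (sale_date date, cust_id number, amount number)
partition by range (sale_date)
( partition p2023 values less than (date '2024-01-01'),
  partition p2024 values less than (date '2025-01-01') );

-- prefixed local index: leading column is the partition key
create index sales_pref_ix on sales (sale_date, cust_id) local;

-- non-prefixed local index: partition key is not the leading column
create index sales_nonpref_ix on sales (cust_id) local;
</code>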
Categories: DBA Blogs

How does one convert a column of values of different timezone to a common timezone?

Tom Kyte - Thu, 2024-01-25 10:06
I have a column with different timezone data. I need to convert every row to PST.

----------------------
|       COLUMN       |
----------------------
|01/17/2024 18:00 PST|
|01/16/2024 18:00 CST|
|01/18/2024 12:00 IST|
|01/18/2024 07:00 -05|
|01/16/2024 14:00 -05|
|01/17/2024 18:00 IST|
|01/18/2024 17:00 IST|
|01/17/2024 16:00 GMT|
|01/18/2024 14:00 EST|
|01/17/2024 16:00 -05|
----------------------
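One possible approach, sketched under assumptions: the column (col in table mytab, both hypothetical names) is a VARCHAR2 in exactly the format shown, IST means India time, and each abbreviation is mapped to an unambiguous region before parsing; rows with a numeric offset are parsed directly:
<code>
SELECT col,
       (CASE
          WHEN REGEXP_LIKE(col, '[+-][0-9]+$') THEN
            -- offset rows: '-05' becomes '-05:00', parsed with TZH:TZM
            TO_TIMESTAMP_TZ(col || ':00', 'MM/DD/YYYY HH24:MI TZH:TZM')
          ELSE
            -- abbreviation rows: map to a full region, parsed with TZR
            TO_TIMESTAMP_TZ(SUBSTR(col, 1, 17) ||
              CASE SUBSTR(col, 18)
                WHEN 'PST' THEN 'America/Los_Angeles'
                WHEN 'CST' THEN 'America/Chicago'
                WHEN 'EST' THEN 'America/New_York'
                WHEN 'IST' THEN 'Asia/Kolkata'
                WHEN 'GMT' THEN 'GMT'
              END, 'MM/DD/YYYY HH24:MI TZR')
        END) AT TIME ZONE 'America/Los_Angeles' AS col_pst
FROM   mytab;
</code>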
Categories: DBA Blogs

Sessions waiting forever on SQL*Net message from db link but no session in link target db

Tom Kyte - Thu, 2024-01-25 10:06
Dear AskTom Team, one of my customers has multiple Standard Edition databases worldwide (different continents), and data is transferred via database links in a PL/SQL fashion through a job scheduler framework. There is one "management db", which is used as the scheduler, and this db has links to all databases. It then performs DML statements like:

INSERT INTO table@target_db SELECT ... FROM table@source_db WHERE ... ;

It turns out that every couple of weeks a session hangs waiting on "SQL*Net message from db link", with SECONDS_IN_WAIT increasing but SEQ# staying the same. When I check the sessions on the database the link is pointing to, I do not see a session. Both databases (Windows) have SQLNET.EXPIRE_TIME set to 10. SQL_ID only shows the top-level PL/SQL scheduler call, not the actual statement. I am having a hard time troubleshooting this any further. Enabling SQL*Net tracing would be one option, but the issue only happens sporadically.

1) Do you recommend adapting the TCP send/receive windows in tnsnames.ora/listener.ora?
2) Is there sample code on how to handle exceptions regarding distributed transactions? Is there a way to let the PL/SQL code break out of the "SQL*Net message from db link" wait?
3) Would it be useful to check the db link response with a dummy query (select null from dual@<target|source>_db) and only perform the DML statement if this was successful?
4) Should we proactively close the db link after use, and if yes, after every query or at the end of the job?

Best regards, Martin
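A sketch of the probe proposed in question 3, with explicit link closure afterwards (link and table names are placeholders; note this only verifies the link responds before the DML starts, it cannot interrupt a wait that develops mid-statement):
<code>
declare
  l_dummy varchar2(1);
begin
  -- lightweight probe over the link before the real work
  select 'x' into l_dummy from dual@target_db;

  insert into some_table@target_db
    select * from some_table@source_db;   -- placeholder for the real DML

  commit;

  -- release the remote sessions held by the links
  dbms_session.close_database_link('target_db');
  dbms_session.close_database_link('source_db');
exception
  when others then
    rollback;
    raise;
end;
/
</code>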
Categories: DBA Blogs

ORA-12569: TNS:Packet Checksum Failure - How is this error even possible?

Tom Kyte - Thu, 2024-01-25 10:06
Hello masters. Here I am again with another mystery from our favorite DBMS (no, that was not sarcasm, I do love Oracle in fact). In my company we are facing the referred error, ORA-12569: TNS:Packet Checksum Failure While Trying To Connect Through Client. The message itself is kind of self-explanatory: apparently a TCP packet failed the checksum. There's a document for this error specifically, Doc ID 257793.1, which states it plainly: <code>There is a mismatch in the header of the tcp packet between the client and the server. In other words, the "Packet Header" leaves the DB server but by the time the client gets the packet on the other end, the header has changed (or visa versa).</code> The doc even gives an example of captured packets with Net tracing from both the server and the client, demonstrating corruption of the TCP header. And the proposed solution does not involve the database/client itself, but rather analysis of the network: <code>Check what components may be interfering with TCP Packet information, such as Firewalls or other NIPS (Network Intrusion Prevention Systems). Contact the Systems / Network Administrators for this and have them fix the underlying tcp/ip packets problem or disable any "SQL altering" programs / settings.</code> Well, all this kind of makes sense to me. But then, after thinking a little about the situation, something struck me: the Oracle client (the application at the higher level) should not receive any bad TCP packets, because they are checked by the network stack (the transmission itself at the lower level). I mean, the Transmission Control Protocol has its own checksum at the transport layer; before handing the data up to the application layer, the transport layer validates the packet and, in case of errors, asks the peer to resend that specific bad segment. How is it possible that corrupted TCP packets arrive at the application level? I'm not a network specialist, but this situation (and the Oracle error) is really puzzling me. So riddle me this, masters: how come the Oracle client detects a bad TCP packet when the network level doesn't?
Categories: DBA Blogs

Coalesce Behavior

Tom Kyte - Wed, 2024-01-24 15:46
Here's the simplest test case I could come up with that demonstrates what I'm seeing. The actual query I'm trying to write is against a couple of v$ views, and LiveSQL doesn't seem to come with privileges on those.
<code>
create table mytab ( name varchar2(30), value varchar2(60) );
insert into mytab values( 'home_phone', null );
insert into mytab values( 'cell_phone', '867-5309' );
commit;

select coalesce( home.value, cell.value, 'None' )
from ( select value from mytab where name = 'home_phone' ) home,
     ( select value from mytab where name = 'cell_phone' ) cell;

select coalesce( home.value, cell.value, 'None' )
from ( select value from mytab where name = 'bad_phone' ) home,
     ( select value from mytab where name = 'cell_phone' ) cell;
</code>
The first query behaves exactly as expected; because the value for home_phone is null and the value for cell_phone isn't, it returns the value for cell_phone. The second one is the one that threw me: I would expect that if there are no rows matching 'bad_phone', coalesce would move to the second result and return the value for cell_phone again. So my question is two-fold. Is this expected and correct behavior? I think there's an argument to be made that it doesn't match the sort of intuitive understanding most people will have of "returns the first non-null expression". And if this is correct, what's the best way to write a query that will return the value for home_phone if there's a row that matches, and move on to cell_phone if there isn't a home_phone record?
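For the second question, one common rewrite (a sketch using the table from the post): scalar subqueries return NULL when no row matches, so COALESCE then falls through the way intuition expects:
<code>
select coalesce(
         (select value from mytab where name = 'home_phone'),
         (select value from mytab where name = 'cell_phone'),
         'None') as phone
from dual;
</code>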
Categories: DBA Blogs
