Tuesday, October 16, 2018

ORA-30009: Not enough memory for CONNECT BY operation


ORA-30009: Not enough memory for CONNECT BY operation
Cause: The memory size was not sufficient to process all the levels of the hierarchy specified by the CONNECT BY clause.
Action: In WORKAREA_SIZE_POLICY=AUTO mode, set PGA_AGGREGATE_TARGET to a reasonably larger value. Or, in WORKAREA_SIZE_POLICY=MANUAL mode, set SORT_AREA_SIZE to a reasonably larger value.
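
For example, the target can be raised dynamically at the system level, or manual workarea sizing can be enabled for just the current session (the values below are only illustrative, not recommendations):

SQL> alter system set pga_aggregate_target = 2G;

-- or, for the current session only, switch to manual sizing:
SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 104857600;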



SQL> select count(*) from dual connect by level <= 100000000;
select count(*) from dual connect by level <= 100000000
                     *
ERROR at line 1:
ORA-30009: Not enough memory for CONNECT BY operation



A workaround is to split the row generation into two smaller CONNECT BY row sources and take their Cartesian product: 10,000 x 10,000 rows gives the same 100,000,000 rows without a single deep CONNECT BY hierarchy.

SQL> select count(*) from
  2  (select level from dual connect by level <= 10000),
  3  (select level from dual connect by level <= 10000);

  COUNT(*)
----------
 100000000


Order By and Null values

SQL*Loader Concepts

SQL*Loader Features

Bad, log, and discard files are generated automatically by Oracle.
Load data across a network. This means that you can run the SQL*Loader client on a different system from the one that is running the Oracle database.
Specify the character set of the data.
Load data from multiple datafiles during the same load session.
Load data into multiple tables during the same load session.
Manipulate the data before loading it, using SQL functions (see the control-file sketch after this list).
Generate unique sequential key values in specified columns.
Use the operating system's file system to access the datafiles.
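
For example, the SQL-function and sequential-key capabilities can be combined in a control file along these lines (table, file, and column names here are hypothetical, not from this post):

-- hypothetical control file: applies a SQL function and generates a key
LOAD DATA
INFILE 'emp_data.txt'
APPEND
INTO TABLE emp_stage
FIELDS TERMINATED BY ','
(
  empno    SEQUENCE(MAX,1),
  ename    "UPPER(:ename)",
  hiredate "TO_DATE(:hiredate,'DD-MM-YYYY')"
)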



SQL*Loader Command-Line Reference
       userid -- Oracle username/password
      control -- Control file name
          log -- Log file name
          bad -- Bad file name
         data -- Data file name
      discard -- Discard file name
   discardmax -- Number of discards to allow
                (Default all)
         skip -- Number of logical records to skip
                (Default 0)
         load -- Number of logical records to load
                (Default all)
       errors -- Number of errors to allow
                (Default 50)
         rows -- Number of rows in conventional path bind array
                or between direct path data saves
                (Default: Conventional Path 64, Direct path all)
     bindsize -- Size of conventional path bind array in bytes
                (System-dependent default)
       silent -- Suppress messages during run
                (header, feedback, errors, discards, partitions, all)
       direct -- Use direct path
                (Default FALSE)
      parfile -- Parameter file: name of file that contains
                parameter specifications
     parallel -- Perform parallel load
                (Default FALSE)
     readsize -- Size (in bytes) of the read buffer
         file -- File to allocate extents from


SQL*Loader Parameters

Parameters can be grouped together in a parameter file. You can then specify the name of the parameter file on the command line using the PARFILE parameter.

Certain parameters can also be specified within the SQL*Loader control file by using the OPTIONS clause.
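
For example, a hypothetical parameter file and its use from the command line (file names and values are illustrative):

emp.par:
userid=scott/tiger
control=emp.ctl
log=emp.log
errors=10

C:\> sqlldr parfile=emp.par

Inside the control file, an OPTIONS clause such as OPTIONS (SKIP=1, ERRORS=10) can be placed before the LOAD DATA keywords to the same effect for those parameters.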

SQL*Loader Control File
Contains the mapping between the input data file and the target table.
CONTROL specifies the name of the control file that describes how to load data.
If a file extension or file type is not specified, it defaults to CTL. If omitted, SQL*Loader prompts you for the file name.

Input Data and Datafiles

Data Conversion and Datatype Specification

Discarded and Rejected Records

Log File and Logging Information

Conventional Path Loads, Direct Path Loads, and External Table Loads

Partitioned Object Support

Application Development: Direct Path Load API

Wednesday, October 10, 2018

What's an explain plan?

An explain plan is a representation of the access path that is taken when a query is executed within Oracle.

Query processing can be divided into 7 phases:

[1] Syntactic - checks the syntax of the query
[2] Semantic - checks that all objects exist and are accessible
[3] View Merging - rewrites the query as a join on the base tables instead of using views
[4] Statement Transformation - rewrites the query, transforming some complex constructs into simpler ones where appropriate (e.g. subquery merging, IN/OR transformation)
[5] Optimization - determines the optimal access path for the query. The Rule Based Optimizer (RBO) uses a set of heuristics; the Cost Based Optimizer (CBO) uses statistics to compare the relative costs of accessing objects.
[6] QEP Generation - generates the Query Evaluation Plan (QEP)
[7] QEP Execution - executes the Query Evaluation Plan

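In SQL*Plus, a plan can be generated with EXPLAIN PLAN and displayed with DBMS_XPLAN (a minimal sketch; it assumes the default PLAN_TABLE is available):

SQL> explain plan for
  2  select * from emp where deptno = 10;

Explained.

SQL> select * from table(dbms_xplan.display);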

Join Types

Sort Merge Join (SMJ) - both row sources are sorted on the join key and then merged
Nested Loops (NL) - for each row of the outer row source, the inner row source is probed for matching rows
Hash Join - a hash table is built on the smaller row source and probed with rows from the other



Sort Merge Join

SQL> explain plan for
select /*+ ordered */ e.deptno,d.deptno
from emp e,dept d
where e.deptno = d.deptno
order by e.deptno,d.deptno;

Query Plan
-------------------------------------
SELECT STATEMENT [CHOOSE] Cost=17
  MERGE JOIN
    SORT JOIN
      TABLE ACCESS FULL EMP [ANALYZED]
    SORT JOIN
      TABLE ACCESS FULL DEPT [ANALYZED]



Nested Loops

SQL> explain plan for
select a.dname,b.sal
from dept a,emp b
where a.deptno = b.deptno;

Query Plan
-------------------------
SELECT STATEMENT [CHOOSE] Cost=5
  NESTED LOOPS
    TABLE ACCESS FULL DEPT [ANALYZED]
    TABLE ACCESS FULL EMP [ANALYZED]

Hash Join

SQL> explain plan for
select /*+ use_hash(emp) */ empno
from emp,dept
where emp.deptno = dept.deptno;

Query Plan
----------------------------
SELECT STATEMENT  [CHOOSE] Cost=3
  HASH JOIN
    TABLE ACCESS FULL DEPT
    TABLE ACCESS FULL EMP





SQL> set autotrace traceonly explain;
SQL> select * from test;

Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    20M|    95M|  8480   (3)| 00:01:42 |
|   1 |  TABLE ACCESS FULL| TEST |    20M|    95M|  8480   (3)| 00:01:42 |
--------------------------------------------------------------------------

SQL> select /*+ PARALLEL(4) */ *
  2  from test
  3  /

Execution Plan
----------------------------------------------------------
Plan hash value: 3388271637

--------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    20M|    95M|  2351   (3)| 00:00:29 |        |      |         |
|   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |         |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |    20M|    95M|  2351   (3)| 00:00:29 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |    20M|    95M|  2351   (3)| 00:00:29 |  Q1,00 | PCWC |         |
|   4 |     TABLE ACCESS FULL| TEST     |    20M|    95M|  2351   (3)| 00:00:29 |  Q1,00 | PCWP |         |
--------------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 4 because of hint

SQL> select /*+ PARALLEL(2) */ *
  2  from test ;

Execution Plan
----------------------------------------------------------
Plan hash value: 3388271637

--------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    20M|    95M|  4703   (3)| 00:00:57 |        |      |         |
|   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |         |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |    20M|    95M|  4703   (3)| 00:00:57 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX BLOCK ITERATOR |          |    20M|    95M|  4703   (3)| 00:00:57 |  Q1,00 | PCWC |         |
|   4 |     TABLE ACCESS FULL| TEST     |    20M|    95M|  4703   (3)| 00:00:57 |  Q1,00 | PCWP |         |
--------------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 2 because of hint

SQL>


Tuesday, October 9, 2018

how to delete duplicate records from a table in Oracle

4 ways to delete duplicate records Oracle

SQL> select empno,count(empno) from emp21 group by empno
  2  having count(empno)>1;

     EMPNO COUNT(EMPNO)
---------- ------------
      7782            2
      7844            2
      7698            2
      7902            2
      7566            2
      7788            2
      7654            2
      7934            2
      7876            2
      7900            2

10 rows selected.


1. Using rowid

  SQL> delete from emp21 e1
       where rowid not in
         (select max(rowid) from emp21 e2
          where e1.empno = e2.empno);



2. Using GROUP BY with MAX(ROWID)

  SQL> delete from emp21
       where rowid not in
         (select max(rowid) from emp21 group by empno);


3. Using row_number()

SQL> delete from emp21 where rowid in
       (
         select rid from
           (
             select rowid rid,
                    row_number() over(partition by empno order by empno) rn
             from emp21
           )
         where rn > 1
       );



4. Using dense_rank()

SQL> delete from emp21 where rowid in
       (
         select rid from
           (
             select rowid rid,
                    dense_rank() over(partition by empno order by rowid) rn
             from emp21
           )
         where rn > 1
       );
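
Whichever method is used, re-running the duplicate check should now return no rows (a quick sanity check; it assumes the delete has not been rolled back):

SQL> select empno,count(empno) from emp21 group by empno
  2  having count(empno)>1;

no rows selected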

Monday, October 8, 2018

sql loader in oracle example



SQL-Loader: The Step by Step Basics - Example




SQL> create table customer_ldr (cid number primary key, cname varchar2(20),
loc varchar2(20), type char(1), load_date date);

Table created.

*******************data.txt*************************

1,prabhakar,bidar,y,sysdate,
2,manju,bang,y,sysdate,
3,daivik,mysore,y,sysdate,


*********************controlfile*******************

 options  ( skip=1 )
load data
infile 'D:\sql_loader\lod\data.txt'
TRUNCATE
into table  customer_ldr
fields terminated by "," optionally enclosed by "#" TRAILING NULLCOLS
(cid ,cname,loc,type, load_date  sysdate)



C:\Users\Admin>sqlldr scott/tiger control='D:\sql_loader\lod\sysdate_ldr.txt' log='D:\sql_loader\lod\log.txt'

SQL*Loader: Release 11.2.0.1.0 - Production on Sun Oct 7 22:21:50 2018

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 4




SQL> select * from customer_ldr;

       CID CNAME                LOC                  T LOAD_DATE
---------- -------------------- -------------------- - ---------
         1 prabhakar            bidar                y 07-OCT-18
         2 manju                bang                 y 07-OCT-18
         3 daivik               mysore               y 07-OCT-18





Friday, October 5, 2018

Mutating triggers / Mutating errors in triggers


A mutating table error (ORA-04091) is raised when a row-level trigger queries or modifies the very table that fired it. The row-level trigger below inserts into EMP10 from inside a trigger on EMP10, so a multi-row INSERT into EMP10 fails:

create or replace trigger trig_mut
    before insert on emp10
    for each row
    begin
    insert into emp10(empno,sal) values (:new.empno,:new.sal);
    end;
    /
Trigger created.



SQL> insert into emp10 select * from emp;
insert into emp10 select * from emp
            *
ERROR at line 1:
ORA-04091: table SCOTT.EMP10 is mutating, trigger/function may not see it
ORA-06512: at "SCOTT.TRIG_MUT", line 2
ORA-04088: error during execution of trigger 'SCOTT.TRIG_MUT'


Monday, October 1, 2018

Why I don't need to COMMIT in database trigger?


We can't COMMIT or ROLLBACK inside a DML trigger: the trigger runs as part of the triggering statement's transaction, and that transaction is committed or rolled back by the calling session after the statement completes. For example, suppose there's a trigger that tries to commit:

SQL> CREATE TABLE log (timestamp DATE, operation VARCHAR2(2000));

Table created.

SQL>  CREATE TABLE  honnikery (col1 NUMBER);

Table created.

SQL> create or replace trigger honni_trig
  2      after insert on honnikery
  3      begin
  4      insert into log values(sysdate,'insert on tab1');
  5      commit;
  6      end;
  7  /

Trigger created.

SQL> INSERT INTO honnikery VALUES (1);
INSERT INTO honnikery VALUES (1)
            *
ERROR at line 1:
ORA-04092: cannot COMMIT in a trigger
ORA-06512: at "SCOTT.HONNI_TRIG", line 3
ORA-04088: error during execution of trigger 'SCOTT.HONNI_TRIG'


As a workaround, one can use an autonomous transaction. An autonomous transaction executes independently of the main transaction and can commit on its own.


create or replace trigger honnikery_trig
    after insert on honnikery
    declare
      PRAGMA AUTONOMOUS_TRANSACTION;
    begin
      insert into log values(sysdate,'insert on honnikery');
      commit;
    end;
    /

Trigger created.

SQL> INSERT INTO honnikery VALUES (1);
1 row created.
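
A quick way to see what the autonomous transaction buys you (a hypothetical follow-up, not part of the original session): roll back the main transaction and the row written by the trigger is still in LOG, because it was committed independently.

SQL> rollback;

Rollback complete.

SQL> select operation from log;

The INSERT into honnikery is undone, but the 'insert on honnikery' row written by the trigger remains in LOG.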