About
The array size is a SQL*Plus configuration variable which sets the fetch size (the number of rows fetched from the database in a single round trip).
Valid values are 1 to 5000. A large value increases the efficiency of queries and subqueries that fetch many rows, but requires more memory. Values over approximately 100 provide little added performance. ARRAYSIZE has no effect on the results of SQL*Plus operations other than increasing efficiency.
A nice array size is between 100 and 500.
When this parameter increases, it decreases:
- the number of buffer gets (consistent gets)
- the number of SQL*Net round trips
Syntax
SET ARRAY[SIZE] {15 | n}
where:
- 15 is the default
- n is a value between 1 and 5000
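For example, a minimal sketch of checking and changing the value in a session (the value 100 is only an illustration):
gerardnico@orcl>show arraysize
arraysize 15
gerardnico@orcl>set arraysize 100
gerardnico@orcl>show arraysize
arraysize 100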
Effect
Setting up the test:
gerardnico@orcl>create table t as select * from all_objects;
Table created.
gerardnico@orcl>set autotrace TRACEONLY
where:
- TRACEONLY executes the query but suppresses the display of the result rows, so that only the statistics are shown
With an array size of 2
gerardnico@orcl>set arraysize 2
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
288 recursive calls
0 db block gets
33899 consistent gets
918 physical reads
0 redo size
7201134 bytes sent via SQL*Net to client
366960 bytes received via SQL*Net from client
33326 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
33899 consistent gets is around half of the number of rows (66650).
So with an array size of 2, for every 2 rows of data, the database does a logical I/O to get the data.
Oracle got a block, took two rows from it, and sent them to SQL*Plus. Then SQL*Plus asked for the next two rows, and Oracle got that block again (or got the next block) and returned the next two rows, and so on.
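The SQL*Net round trips tell the same story (a rough check against the statistics above; the one extra round trip is presumably cursor open/close overhead):
66650 rows / 2 rows per fetch = 33325 fetches, which matches the 33326 SQL*Net roundtrips to/from client.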
Let's increase it to 5.
With an array size of 5
gerardnico@orcl>set arraysize 5
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14054 consistent gets
0 physical reads
0 redo size
4661769 bytes sent via SQL*Net to client
147015 bytes received via SQL*Net from client
13331 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
66650 / 14054 = 4.74, which is close to 5.
The ratio is not exactly 5 (but close) because sometimes, in order to get the next 5 rows, Oracle:
- needs to get two blocks
- doesn't need to get another block
With an array size of 10
gerardnico@orcl>set arraysize 10
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
7476 consistent gets
0 physical reads
0 redo size
3815314 bytes sent via SQL*Net to client
73700 bytes received via SQL*Net from client
6666 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
66650 / 7476 = 8.92
With an array size of 15
gerardnico@orcl>set arraysize 15
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
5294 consistent gets
0 physical reads
0 redo size
3533247 bytes sent via SQL*Net to client
49269 bytes received via SQL*Net from client
4445 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
66650 / 5294 = 12.59
With an array size of 100
gerardnico@orcl>set arraysize 100
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1578 consistent gets
0 physical reads
0 redo size
3053568 bytes sent via SQL*Net to client
7722 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
66650 / 1578 = 42.24
With an array size of 5000
gerardnico@orcl>set arraysize 5000
gerardnico@orcl>select * from t;
66650 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
937 consistent gets
0 physical reads
0 redo size
2970637 bytes sent via SQL*Net to client
539 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
66650 rows processed
66650 / 937 = 71.13
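At this point the number of consistent gets is approaching the number of blocks in the table (compare it with the 918 physical reads of the first run), so it cannot drop much further whatever the array size. A small sketch to check the block count of the test table (dbms_rowid is a standard package; the result depends on your own copy of all_objects):
gerardnico@orcl>select count(distinct dbms_rowid.rowid_block_number(rowid)) as blocks from t;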
Conclusion
As the array size goes up, the number of consistent gets goes down.
Does it mean that we must set the array size to 5000? No:
- The overall number of consistent gets does not drop dramatically between 100 and 5000 (1578 versus 937).
- The amount of RAM needed on the client and server goes up with the increased value.
- The client must be able to cache 5000 rows.
- The server works really hard to process 5000 rows, then the client works really hard to process 5000 rows, then the server, then the client, and so on. It would be better to have more of a stream of information flowing: ask for 100, get 100, ask for 100, get 100. That way, both the client and the server are more or less continuously processing data, rather than having the processing occur in bursts.
A nice array size is between 100 and 500.
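To avoid typing the command in every session, the setting can be placed in the SQL*Plus user profile script (a minimal sketch; login.sql is picked up from the current directory or from SQLPATH, and glogin.sql is the site-wide equivalent):
-- login.sql: executed automatically when SQL*Plus starts
set arraysize 100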
A criticism of PL/SQL is its slow performance. This is not a PL/SQL problem, but is instead due to the fact that people typically code PL/SQL in a very row-at-a-time oriented fashion. Given that even native dynamic SQL can be bulk-based since Oracle9i, there is no longer any reason for this approach.
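As an illustration of the bulk approach, here is a minimal PL/SQL sketch that fetches the test table in batches of 100 rows with BULK COLLECT ... LIMIT instead of row by row (the table t and the limit of 100 are taken from the examples above; the loop body is just a placeholder):
declare
  cursor c is select * from t;
  type t_tab is table of t%rowtype;
  l_rows t_tab;
begin
  open c;
  loop
    -- fetch 100 rows per call instead of one row at a time
    fetch c bulk collect into l_rows limit 100;
    exit when l_rows.count = 0;
    for i in 1 .. l_rows.count loop
      null; -- process l_rows(i) here
    end loop;
  end loop;
  close c;
end;
/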