Data Warehouse - Fast Table Scan

Ways to Scan a Fact Table Faster

If you have the resources, then divide the table into chunks and use all of your system resources to scan it in parallel. Unfriendly in an OLTP environment, mind you.
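In Oracle this is what parallel query does; a minimal sketch, assuming a hypothetical `sales` fact table and eight available parallel execution servers:

```sql
-- Request a parallel full scan for this query only, via hints.
SELECT /*+ FULL(s) PARALLEL(s, 8) */
       SUM(s.amount)
FROM   sales s;

-- Alternatively, set a default degree of parallelism on the table itself.
ALTER TABLE sales PARALLEL 8;
```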

Make the table smaller! Block-level compression can shrink that data like a plunge into cold water. Very OLTP unfriendly.
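A sketch of applying basic table compression, with hypothetical table and index names, on a version that supports it:

```sql
-- Rewrite the table's blocks in compressed form. Conventional DML against
-- compressed blocks is costly, which is why this is so OLTP-unfriendly.
ALTER TABLE sales MOVE COMPRESS;

-- The move changes rowids, so existing indexes must be rebuilt afterwards.
ALTER INDEX sales_pk REBUILD;
```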

PCTFREE = very small ⇒ more rows per block ⇒ smaller table. And potentially a higher chance of row migration, of course.

PCTUSED = large ⇒ less likely that blocks retain free space following deletes ⇒ more rows per block ⇒ smaller table.
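Both settings can be applied together; a sketch with a hypothetical `sales` table and deliberately aggressive values (PCTFREE plus PCTUSED must total less than 100, and PCTUSED only applies to manually managed segments):

```sql
-- Leave almost no free space per block for row growth (PCTFREE 1), and
-- return a block to the free list as soon as deletes create usable space
-- (PCTUSED 98), keeping blocks as full as possible.
ALTER TABLE sales PCTFREE 1 PCTUSED 98;

-- The new values affect only newly formatted blocks; MOVE rewrites
-- existing blocks under the new settings.
ALTER TABLE sales MOVE;
```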

For significantly long rows you may see reduced empty space, and thus a smaller table to scan.

Reorder The Columns: If you are commonly interested in just a subset of columns (for example, the metrics in a fact table) then consider making them the first columns of the table definition - for tables with lots of columns there is measurable overhead in finding the trailing columns (I'm not talking about row chaining here). Hmmm, maybe I'll post something else about this.
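A sketch of that column ordering, with entirely hypothetical names; the point is simply that the frequently scanned metrics lead the definition:

```sql
CREATE TABLE sales (
  amount      NUMBER,        -- metrics first: cheapest to locate in the row
  quantity    NUMBER,
  product_id  NUMBER,        -- dimension keys and attributes afterwards
  customer_id NUMBER,
  sale_date   DATE,
  note        VARCHAR2(200)  -- rarely read columns last
);
```

Oracle locates a column by walking the row piece from the start, so trailing columns in wide tables cost more to reach.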

An index can be treated as a skinny table, and your query might be satisfied by a fast full or full index scan. The usual comments about NULL values in indexes apply here. Don't neglect consideration of index size either - index key compression and use of bitmap indexes provide smaller structures to scan.
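A sketch of the index-as-skinny-table idea, assuming a hypothetical `sales` table in which `product_id` is declared NOT NULL (so every row appears in the index):

```sql
-- A composite index covering the whole query, with key compression
-- on the leading column to keep the structure small.
CREATE INDEX sales_prod_amt
  ON sales (product_id, amount)
  COMPRESS 1;

-- This can be answered by an INDEX FAST FULL SCAN of the smaller
-- structure, without touching the table at all.
SELECT /*+ INDEX_FFS(s sales_prod_amt) */
       product_id, SUM(amount)
FROM   sales s
GROUP BY product_id;
```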

Materialized Views: Is there anything they're not good for? This could be a genuine skinny table, or an aggregation of selected columns.

Ensure Table Is Spread Over Available Devices: With a consequent reduced likelihood of encountering an i/o choke point.
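A sketch of the aggregation route, with hypothetical names throughout; with query rewrite enabled, queries against the base table can be redirected to the much smaller summary:

```sql
CREATE MATERIALIZED VIEW sales_by_product
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT product_id,
       SUM(amount) AS total_amount,
       COUNT(*)    AS sale_count
FROM   sales
GROUP BY product_id;
```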
