The dd utility is a common Unix program whose primary purpose is low-level copying and conversion of raw data. The name dd is usually traced to the DD (Data Definition) statement of IBM's Job Control Language.
The most important dd options are:

- if=file : read input from file (default: standard input)
- of=file : write output to file (default: standard output)
- bs=bytes : read and write up to this many bytes per block
- count=n : copy only n input blocks
- skip=n : skip n input blocks before copying
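As a minimal illustration of these options, the following writes a small scratch file and then reads back only its second half; /tmp/dd_opts_demo is a placeholder path, not a name from the original example:

```shell
# Write 10 MiB of zeroes to a scratch file (placeholder path).
dd if=/dev/zero of=/tmp/dd_opts_demo bs=1048576 count=10
# Read the second half back: skip the first 5 input blocks.
dd if=/tmp/dd_opts_demo of=/dev/null bs=1048576 count=5 skip=5
rm -f /tmp/dd_opts_demo
```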
A very basic way to validate the operating system throughput on UNIX or Linux systems is to use the dd utility. Because there is almost no overhead involved, the output from the dd utility provides a reliable calibration.
Oracle Database and other applications can reach a maximum throughput of approximately 90 percent of what the dd utility can achieve.
To estimate the maximum throughput, you can mimic the workload of a typical application, which consists of large sequential reads issued concurrently at several offsets.
In your test, you should include all the storage devices that you plan to include for your database storage. When you configure a clustered environment, you should run dd commands from every node.
The following dd commands perform concurrent sequential reads at different offsets across two devices, reading a total of 2 GB (ten commands of 200 MiB each). The aggregate throughput is 2 GB divided by the time it takes for all of the commands to finish:
dd bs=1048576 count=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_2 of=/dev/null &
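The ten commands above can be wrapped in a small POSIX shell sketch that times them and prints the aggregate throughput. measure_read_throughput is a hypothetical helper name; the device paths are the ones from the example and must be replaced with your own storage devices:

```shell
#!/bin/sh
# Sketch: run concurrent sequential dd reads at five offsets per device
# and report aggregate throughput in MiB/s.
measure_read_throughput() {
  count=$1; shift                      # MiB read per dd command
  start=$(date +%s)
  for dev in "$@"; do
    for off in 0 200 400 600 800; do
      dd bs=1048576 count="$count" skip="$off" if="$dev" of=/dev/null 2>/dev/null &
    done
  done
  wait                                 # block until every dd process finishes
  elapsed=$(( $(date +%s) - start ))
  [ "$elapsed" -gt 0 ] || elapsed=1    # guard against sub-second runs
  total=$(( count * 5 * $# ))          # MiB read in total
  echo "$(( total / elapsed )) MiB/s"
}

# As in the example: 200 MiB per command across two raw devices.
# measure_read_throughput 200 /raw/data_1 /raw/data_2
```

In a clustered environment, run the same script on every node and compare the per-node figures.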
The following command measures write throughput by writing a 100 MB file of random data:
dd if=/dev/urandom of=testIODd bs=1024 count=102400
102400+0 records in
102400+0 records out
104857600 bytes (105 MB) copied, 17.131 seconds, 6.1 MB/s
where 1024 bytes x 102400 blocks = 104,857,600 bytes, that is, a 100 MiB file (dd reports it as 105 MB because it uses decimal megabytes).