What is Oracle ASM?
Oracle® Automatic Storage Management (ASM) was developed 10 years ago to make it much easier for database administrators (DBAs) to use and tune database storage. Oracle ASM enables DBAs to:

  • Automatically stripe data across the raw devices in a disk group to improve database storage performance
  • Mirror data for greater fault tolerance
  • Simplify the management and extension of database storage for the cloud and, with the ASM Cluster File System (ACFS), use the snapshot and replication functionality to increase availability
  • Add the Oracle Real Application Clusters (RAC) capability to help reduce total cost of ownership (TCO), expand scalability and increase availability, among other benefits
  • Easily move data from one device to another while the database is active with no performance degradation
  • Reduce or eliminate storage or Linux administrator time for configuring database storage
  • Use ASM as a Linux®/Unix operating system file system called ACFS. (You may wonder: since Oracle Grid must be up and running to mount and use ASM, how can an ACFS device be available to the operating system at system boot? The answer is that the kernel has been extended to allow this functionality. See Oracle’s ACFS documentation to learn more.)
  • Do all of this for free: ASM comes with Oracle Grid

The drawbacks of using Oracle ASM:

  • DBAs now control the storage they are using, so they need to know more about it: how the logical unit numbers (LUNs) are used by Oracle ASM, and how to create ASM disk groups for higher performance.
  • Most ASM commands are executed through SQL*Plus, not the Linux command line. Storage is therefore accessed through SQL*Plus and sometimes ASMCMD, which isolates it and makes it harder for Linux admins to identify storage issues.
  • Recovery Manager (RMAN) is the only guaranteed/supported method of backing up databases stored on ASM (a minimal example follows this list).
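For instance, an RMAN backup of a database stored on ASM can be as simple as this sketch (it assumes the target database’s environment is already set in your shell):

rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;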

What will be covered in this blog and what won’t
ASM is quite complex to learn and to set up properly for both performance and high availability. I won’t be going over all the commands and configurations of ASM, but I will cover how to set up an aligned LSI Nytro WarpDrive or Nytro MegaRAID PCIe® card and create an ASM disk that can be assigned to an ASM disk group. There are many websites and books that cover Oracle ASM in full detail; the most current book I would recommend is “Database Cloud Storage: The Essential Guide to Oracle Automatic Storage Management.” Or visit Oracle’s docs.oracle.com website.

Setting up ASM
The following steps cover configuring a LUN for ASM. To use ASM, you will need to install the Oracle Grid software from otn.oracle.com. I prefer using Oracle ASMLIB when configuring ASM. ASMLIB ships in the box with the latest version of Oracle Linux and offers an easier way to configure ASM; if you are running an older release, you will need to install the ASMLIB RPMs from support.oracle.com.
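Before ASMLIB can stamp disks, its kernel driver needs to be configured and loaded. A minimal sketch (the grid owner and group, oracle and dba, are assumptions; substitute your own accounts at the prompts):

/etc/init.d/oracleasm configure

This prompts for the default user and group that own the driver interface, and whether to load the driver on boot.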

Step 1: Create aligned partition
Refer to Part 1 of this series to create a LUN on a 1M boundary. Oracle recommends using the full disk for ASM, so just create one large aligned partition. I suggest using this command:

echo "2048,," | sfdisk -uS /dev/sdX --force
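
To confirm that the partition starts on the 1M boundary, list the partition table in sectors and check that the start sector is 2048:

sfdisk -uS -l /dev/sdX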

Step 2: Create an ASM disk
Once the device has an aligned partition on it, we can assign it to ASM using the ASMLIB createdisk command, which takes two parameters, the ASM disk name and the partitioned PCIe flash device name, as follows:

/etc/init.d/oracleasm createdisk ASMDISK1 /dev/sda1

To verify that the disk was created successfully and the device is marked as an ASM disk, enter the following commands:

/etc/init.d/oracleasm querydisk /dev/sda1

(the output should state: “/dev/sda1 is an Oracle ASM disk [OK]”)

/etc/init.d/oracleasm listdisks

(the output should state: ASMDISK1)
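
If the disks will be shared across RAC nodes, the stamp is written once, but each of the other nodes needs to rescan before it can see the new ASM disk:

/etc/init.d/oracleasm scandisks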

Step 3: Assign ASM disk to disk group
The ASM disk group is the primary component of ASM as well as its highest-level data structure. A disk group is a container of multiple ASM disks, and it is the disk group that the database references when creating Oracle tablespaces.

There are multiple ways to create an ASM disk group. The easiest way is to use ASM Configuration Assistant (ASMCA), which walks you through the creation process. See Oracle ASM documentation on how to use ASMCA.

Here are the steps for creating a disk group:

a: Log in to the Grid instance as the ASM administrator:

sqlplus / as sysasm

b: List the candidate disks and confirm their header status and state:

SELECT name, path, header_status, state FROM v$asm_disk;

c: Create disk group DG1 with external redundancy, referencing the ASM disk created in Step 2:

CREATE DISKGROUP DG1 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASMDISK1';
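
As a quick sanity check, you can confirm the disk group was created and mounted by querying v$asm_diskgroup:

SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;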

The disk group is now ready to be used in creating an Oracle database tablespace. To use this disk group in an Oracle database, please refer to Oracle’s database documentation at docs.oracle.com.
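
For example, a tablespace can be placed in the new disk group by prefixing the disk group name with a plus sign (the tablespace name and size below are only illustrations):

CREATE TABLESPACE app_data DATAFILE '+DG1' SIZE 10G;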

In Part 4, the final installment of this series, I’ll discuss how to persist assignment to dynamically changing Nytro WarpDrive and Nytro MegaRAID PCIe cards.


Customer dilemma: I just purchased PCIe® flash cards to increase performance of my enterprise applications that run on Linux® and Unix®. How do I set them up to get the best performance?

Good question. I wish there were a simple answer, but each environment is different. There is no cookie-cutter configuration that fits all, though a few questions will reveal how the PCIe flash cards should be configured for optimum performance.

Most of the popular relational and non-relational databases run on many different operating systems. I will be describing Linux-specific configurations, but most of them should also work on Unix systems that are supported by the PCIe flash card vendor. I’m a database guy, but the same principles and techniques I’ll be covering apply to other applications like mail servers, web servers, application servers and, of course, databases.

Aligning PCIe flash devices
The most important step to perform on each PCIe flash card is to create a partition that is aligned on a specific boundary (such as 4k or 8k) so that each read and write to the flash device requires only one physical input/output (IO) operation. If the card is not partitioned on such a boundary, reads and writes will span sector groups, doubling the IO latency of each request.

To align a partition, I use the sfdisk command to start the partition on a 1M boundary (sector 2048, since 2048 sectors × 512 bytes = 1M). Aligning to a 1M boundary automatically satisfies any requirement to align to a 4k, 8k or even a 64k boundary. But before I do this, I need to know how I am going to use this device. Will this be a standalone partition? Part of a logical volume? Or part of a RAID group?

Which one is best?
If I were deploying the PCIe flash device for database caching, and I knew its capacity would meet my needs now and over the next 5 years, I would use a single-partitioned PCIe flash card. (Oracle databases, for example, have provided this caching functionality for years through the Database Smart Flash Cache feature, and Facebook created the open-source Flashcache used in MySQL databases.) If I selected this configuration, the sfdisk command to create the partition would be:

echo "2048,," | sfdisk -uS /dev/sdX --force

This single partitioning is also required with Oracle® Automatic Storage Management (ASM). Oracle has provided ASM for many years, and I will go over how to use this storage feature in Part 3 of this series.
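
As a sketch of the caching use case, the Database Smart Flash Cache feature mentioned above is enabled with two initialization parameters; the device path and size here are only illustrations:

ALTER SYSTEM SET db_flash_cache_file='/dev/sdX1' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size=300G SCOPE=SPFILE;

(With SCOPE=SPFILE, the instance picks these up at the next restart.)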

If I need to deploy multiple PCIe flash cards for database caching, I would create a Logical Volume Manager (LVM) volume over all the flash devices to simplify administration. The sfdisk command to create a partition on each PCIe flash card would be:

echo "2048,,8e" | sfdisk -uS /dev/sdX --force

“8e” is the system partition type for a Linux logical volume.
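
From there, a striped logical volume over two cards takes only three commands. A minimal sketch, assuming the partitions are /dev/sda1 and /dev/sdb1 and using flashvg/flashlv as placeholder names:

pvcreate /dev/sda1 /dev/sdb1
vgcreate flashvg /dev/sda1 /dev/sdb1
lvcreate -i 2 -I 1024 -l 100%FREE -n flashlv flashvg

The -i 2 option stripes across both cards, and -I 1024 sets a 1M stripe size in kilobytes, matching the stripe-width guidance at the end of this post.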

Neither of these solutions needs fault tolerance, since the cards will be used for write-through caching. My recent blog “How to optimize PCIe flash cards – a new approach to creating logical volumes” covers this process in detail.

If I want to use PCIe flash cards for persisting data, I need to make them fault tolerant, using two or more cards to build a RAID array and eliminate any single point of failure. There are a number of ways to create a RAID array over multiple PCIe flash cards, two of which are:

  • Use LVM with the RAID option.
  • Use the software RAID utility MDADM (multiple device administration) to create the RAID array.

But what type of RAID setup is best to use?
Oracle coined the term S.A.M.E. (Stripe And Mirror Everything) in 1999 and popularized the practice, which many database administrators (DBAs) and storage administrators have followed ever since. I follow this practice and suggest you do the same.

First, you need to determine how these cards will be accessed:

  • Small random reads and writes
  • Larger sequential reads
  • Hybrid (mix of both)

In database deployments, your choice is usually among online transaction processing (OLTP) applications, such as airline and hotel reservation systems and corporate financial or enterprise resource planning (ERP) applications; data warehouse/data mining/data analytics applications; or a mix of both environments. OLTP applications involve small random reads and writes as well as many sequential writes for log files. Data warehouse/data mining/data analytics applications involve mostly large sequential reads with very few sequential log writes.

Before setting up one or many PCIe flash cards in a RAID array either using LVM on RAID or creating a RAID array using MDADM, you need to know the access pattern of the IO, capacity requirements and budget. These requirements will dictate which RAID level will work best for your environment and fit your budget.

I would pick either a RAID 1/RAID 10 configuration (mirroring without striping, or striping and mirroring respectively), or RAID 5 (striping with parity). RAID 1/RAID 10 costs more but delivers the best performance, whereas RAID 5 costs less but imposes a significant write penalty.

Optimizing OLTP application performance
To optimize performance of an OLTP application, I would implement either a RAID 1 or RAID 10 array. If I were budget-constrained, or implementing a data warehouse application, I would use a RAID 5 array. Normally a RAID 5 array will produce the higher throughput (megabytes per second) appropriate for a data warehouse/data mining application.

In a nutshell, knowing how to tune the configuration to the application is key to reaping the best performance.

For either RAID array, you need to create an aligned partition using sfdisk:

echo "2048,,fd" | sfdisk -uS /dev/sdX --force

“fd” is the system identifier for a Linux RAID autodetect device.
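
As one sketch of the mdadm route (the four-device RAID 10 layout and device names are assumptions for illustration), the array can then be built with a 1M chunk:

mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=1024 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

The --chunk value is in kilobytes, so 1024 matches the 1M stripe width discussed below.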

Keep in mind that it is not mandatory to create a partition for LVM or RAID arrays; you can assign raw devices instead. If you combine raw and partitioned devices, or just create a basic partition, remember to keep the sectors aligned. It’s sound practice to always create an aligned partition when using PCIe flash cards.

At this point, aligned partitions have been created and are now ready to be used in LVMs or RAID arrays. Instructions for creating these are on the web or in Linux/Unix reference manuals. Here are a couple of websites that go over the process of creating LVM, RAID, or LVM on RAID:

https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID
http://www.gagme.com/greg/linux/raid-lvm.php

Specifying a stripe width value
Also remember that, when creating LVMs with striping or RAID arrays, you’ll need to specify a stripe width value. Many years ago, Oracle and EMC conducted a number of studies on this and concluded that a 1M stripe width performed best as long as the database IO request was equal to or less than 1M. When implementing Oracle ASM, Oracle’s standard is to use 1M allocation units, which matches its coarse striping size of 1M.

Part 2 of this series will describe how to create RAW devices or file systems.

Part 3 of this series will describe how to use Oracle ASM when deploying PCIe flash cards.

Part 4 of this series will describe how to persist assignment to dynamically changing NWD/NMR devices.
