When implementing an LSI Nytro WarpDrive (NWD) or Nytro MegaRAID (NMR) PCIe flash card in a Linux server, you need to tune a number of kernel block-device settings to get the best performance out of these cards.

In a Linux server, device assignments can change after a reboot. Sometimes the PCIe flash card is assigned /dev/sda; other times it comes up as /dev/sdd, or any other device name. This variability can wreak havoc when tuning the Linux block-device performance parameters, because the settings may land on the wrong device. To get around this issue, identify the card by its SCSI address so all of the performance settings persist properly across reboots. If you are using a filesystem, reference the device by its UUID in the mount entry in /etc/fstab so the mount also persists across reboots.
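For example, you can look up the UUID of the card's partition with blkid and then reference it in /etc/fstab. The entry below is only an illustration; the UUID, mount point and options are placeholders:

blkid /dev/sdX1

# /etc/fstab entry (illustrative values)
UUID=3e6be9de-8139-4a1b-9106-a43f08d823a6  /u01  ext4  defaults,noatime  0 0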

Cut and paste the script
The first step is to copy the following script into a file that will be called from /etc/rc.local. Before executing it, you'll need to replace the SCSI address in the grep pattern with the SCSI address of your PCIe card.

nwd_getdevice.sh
#!/bin/bash
# Look up the /dev/sd? name currently assigned to the card with this SCSI address.
# Keep the single space between the SCSI address and the closing quote.
ls -al /dev/disk/by-id | grep 'scsi-3600508e07e726177965e06849461a804 ' | grep /sd > nwddevice.txt
# The symlink target (field 11) is of the form ../../sdX; extract the sdX part.
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
# Apply the performance settings to that device.
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
# Disable read-ahead on the device.
blockdev --setra 0 /dev/$variable1

The SCSI address inside the single quotes above needs to be replaced with the SCSI address of your PCIe flash card. To get the address, issue this command:

ls -al /dev/disk/by-id

When you install the Nytro PCIe flash card, Linux assigns a device name to it; for example, the device may be listed as /dev/sdX, where X can be any letter. The output of the ls command above shows the SCSI address for this PCIe device. Don't use the entries containing "-partX", as those refer to individual partitions. Be sure to note this SCSI address, since you will need it for the nwd_getdevice.sh script, and include a single space between the SCSI address and the closing single quote in the script.
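For illustration only (the device name and timestamps below are examples), the relevant output lines look like this; the SCSI address is the part of the symlink name after "scsi-", and the arrow shows the /dev/sdX name it currently maps to:

lrwxrwxrwx 1 root root  9 Jun 10 09:15 scsi-3600508e07e726177965e06849461a804 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jun 10 09:15 scsi-3600508e07e726177965e06849461a804-part1 -> ../../sdc1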

Create nwd_getdevice.sh file
Next, create a file called "nwd_getdevice.sh" containing the script above with your card's SCSI address filled in.

After saving this file, make it executable and then place this command in the /etc/rc.local file:

/path/nwd_getdevice.sh
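For example, assuming the script was saved as /path/nwd_getdevice.sh, you would make it executable with:

chmod +x /path/nwd_getdevice.sh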

Test the script
To test the script, execute it on the command line exactly as you entered it in the rc.local file. On every subsequent reboot, the settings will be applied to the correct device no matter which /dev/sdX name it is assigned.
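To spot-check that the settings took effect, you can read a few values back from sysfs. Run this from the directory where you executed the script, since that is where it writes nwd1device.txt:

device=$(cat nwd1device.txt)
cat /sys/block/$device/queue/scheduler      # the selected scheduler appears in brackets, e.g. [deadline]
cat /sys/block/$device/queue/nr_requests    # should print 4096
blockdev --getra /dev/$device               # should print 0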

Multiple PCIe flash cards
If you plan to deploy multiple LSI PCIe flash cards in the server, the easiest approach is to duplicate all of the commands in nwd_getdevice.sh, paste the copy at the end of the script, and change the SCSI address in the newly pasted block to that of the next card. Repeat this for as many LSI PCIe flash cards as are installed in the server. For example:

nwd_getdevice.sh
#!/bin/bash
# First card: replace with the first card's SCSI address.
ls -al /dev/disk/by-id | grep 'scsi-1stscsiaddr83333365e06849461a804 ' | grep /sd > nwddevice.txt
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
blockdev --setra 0 /dev/$variable1

# Second card: replace with the second card's SCSI address.
ls -al /dev/disk/by-id | grep 'scsi-2ndscsiaddr1234566666654444444444 ' | grep /sd > nwddevice.txt
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
blockdev --setra 0 /dev/$variable1
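As an alternative to duplicating the block for every card, the same settings can be applied in a loop over the SCSI addresses. This is only a sketch of that approach, not part of the original script; the addresses shown are placeholders:

#!/bin/bash
# Placeholder SCSI addresses; replace with the addresses of your cards.
for scsi_addr in scsi-1stscsiaddr83333365e06849461a804 scsi-2ndscsiaddr1234566666654444444444
do
    # Resolve the by-id symlink to the current /dev/sdX name.
    device=$(basename "$(readlink -f /dev/disk/by-id/$scsi_addr)")
    echo "4096"     > /sys/block/$device/queue/nr_requests
    echo "512"      > /sys/block/$device/device/queue_depth
    echo "deadline" > /sys/block/$device/queue/scheduler
    echo "2"        > /sys/block/$device/queue/rq_affinity
    echo 0          > /sys/block/$device/queue/rotational
    echo 0          > /sys/block/$device/queue/add_random
    echo 1024       > /sys/block/$device/queue/max_sectors_kb
    echo 0          > /sys/block/$device/queue/nomerges
    blockdev --setra 0 /dev/$device
done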

Final thoughts
The most important step in implementing Nytro PCIe flash cards under Linux is aligning the card's partition on a sector boundary, which I cover in Part 1 of this series. This step alone can deliver a 3x performance gain or more, based on our in-house tests as well as testing by some of our customers. The rest of this series walks you through setting up these aligned flash cards with a file system, ASM or RAW device and, finally, persisting all the Linux performance settings for the card so they survive reboots.

Links to the other posts in this series:

How to maximize PCIe flash performance under Linux

Part 1: Aligning PCIe flash devices
Part 2: Creating the RAW device or filesystem
Part 3: Oracle ASM



My first blog in this series, “How to maximize performance of PCIe flash for enterprise applications running on Linux,” describes the steps for aligning PCIe® flash devices. This blog covers the next stage of setting up the PCIe flash device when using the Linux® operating system: creating a RAW device or a file system.

At this point, one or more PCIe flash cards have been partitioned on a sector boundary. Depending on their use, these partitioned devices are either set up as a single RAW device or as part of a logical volume or RAID array.

The next step is to determine how these devices will be used. Most administrators will create file systems on these partitions. Some Oracle administrators will use them as RAW devices and assign them to Automatic Storage Management (ASM). Still others, looking for the best possible performance from the device, will stick with a RAW device. For many years the general recommendation was to avoid RAW devices, because the complexity of managing them outweighed their small potential performance gains.

ASM uses RAW devices but makes administration of these devices much easier. More on ASM in Part 3 of this series.

Building a file system
The next step is to build a file system on the RAW device, logical volume or RAID array. But first we need to determine the best type of file system to use. There are many to choose from, including:

  • EXT-2
  • EXT-3
  • EXT-4
  • XFS
  • BTRFS
  • ZFS

To keep this brief, I will only cover EXT-4. It is the most current of the EXT family and provides enhancements such as larger capacities, the option to disable journaling and many other capabilities, though XFS can be a higher-performance alternative.

To create an EXT-4 file system on the aligned partition (shown here as /dev/sdX1), use this command:

mkfs.ext4 /dev/sdX1

You can now turn certain features of the EXT-4 file system on or off by using "tune2fs". Here are a couple of examples of using tune2fs:

  • To list all file system features for /dev/sdX1, use this tune2fs command:

tune2fs -l /dev/sdX1 | grep 'Filesystem features'

  • To disable journaling on /dev/sdX1, use this tune2fs command:

tune2fs -O ^has_journal /dev/sdX1
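Note that the file system must be unmounted (or mounted read-only) when the journal is removed, and it is good practice to force a file system check afterward:

e2fsck -f /dev/sdX1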

Mounting the file system
The next step is to mount the file system and assign the owner:group to the mount point. There are also many tuning options that can be added to the mount command when using PCIe flash cards. The mount options I use are:

  • NOATIME
  • NODIRATIME
  • MAX_BATCH_TIME=0
  • NOBARRIER
  • DISCARD

The mount command for /dev/sda1 to /u01 would be:

mount -o noatime,nodiratime,max_batch_time=0,nobarrier,discard /dev/sda1 /u01

To make these mount points persistent across reboots, add matching entries to /etc/fstab. Finally, you need to give a user read and write rights to this mount point by assigning ownership of /u01, for example to the oracle userid and the dba group. To do this, use the "chown" command:

chown oracle:dba /u01
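For reference, a corresponding /etc/fstab entry might look like the line below. The UUID is a placeholder; find your partition's actual UUID with blkid. Mounting by UUID keeps the entry valid even if the /dev/sdX name changes between reboots:

UUID=0a1b2c3d-1111-2222-3333-444455556666  /u01  ext4  noatime,nodiratime,max_batch_time=0,nobarrier,discard  0 0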

The PCIe flash device is now ready to be used.

Part 3 of this series will describe how to use Oracle ASM when deploying PCIe flash cards.

Part 4 of this series will describe how to persist assignment to dynamically changing NWD/NMR devices.



Customer dilemma: I just purchased PCIe® flash cards to increase performance of my enterprise applications that run on Linux® and Unix®. How do I set them up to get the best performance?

Good question. I wish there were a simple answer but each environment is different. There is no cookie-cutter configuration that fits all, though a few questions will reveal how the PCIe flash cards should be configured for optimum performance.

Most of the popular relational and non-relational databases run on many different operating systems. I will be describing Linux-specific configurations, but most of them should also work on Unix systems that are supported by the PCIe flash card vendor. I'm a database guy, but the same principles and techniques I'll be covering apply to other applications such as mail servers, web servers and application servers as well as, of course, databases.

Aligning PCIe flash devices
The most important step to perform on each PCIe flash card is to create a partition that is aligned on a specific boundary (such as 4k or 8k) so each read and write to the flash device will require only one physical input/output (IO) operation. If the card is not partitioned on such a boundary, then reads and writes will span the sector groups, which doubles the IO latency for each read or write request.

To align a partition, I use the sfdisk command to start the partition on a 1M boundary (sector 2048). Aligning to a 1M boundary also satisfies any 4k, 8k or even 64k alignment requirement. But before I do this, I need to know how I am going to use this device. Will it be a standalone partition? Part of a logical volume? Or part of a RAID group?

Which one is best?
If I were deploying the PCIe flash device for database caching (for example, the Oracle database has provided this caching functionality for years using the Database Smart Flash Cache feature, and Facebook created the open source Flashcache used in MySQL databases), I would use a single-partitioned PCIe flash card if I knew the capacity would meet my needs now and over the next 5 years. If I selected this configuration, the sfdisk command to create the partition would be:

echo "2048,," | sfdisk -uS /dev/sdX --force

This single partitioning is also required with the Oracle® Automatic Storage Management system (ASM). Oracle has provided ASM for many years and I will go over how to use this storage feature in Part 3 of this series.
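Whichever layout you choose, a quick way to confirm that the new partition really starts on sector 2048 is to list the partition table in sector units and check the Start column:

sfdisk -uS -l /dev/sdX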

If I need to deploy multiple PCIe flash cards for database caching, I would create Logical Volume Manager (LVM) over all the flash devices to simplify administration. The sfdisk command to create a partition for each PCIe flash card would be:

echo "2048,,8e" | sfdisk -uS /dev/sdX --force

“8e” is the system partition type for creating a logical volume.

Neither of these solutions needs fault tolerance since they will be used for write-thru caching. My recent blog “How to optimize PCIe flash cards – a new approach to creating logical volumes” covers this process in detail.

If I want to use the PCIe flash card for persisting data, I would need to make the PCIe flash cards fault tolerant, using two or more cards to build the RAID array and eliminate any single point of failure. There are a number of ways to create a RAID over multiple PCIe flash cards, two of which are:

  • Use LVM with the RAID option.
  • Use the software RAID utility MDADM (multiple device administration) to create the RAID array.

But what type of RAID setup is best to use?
Oracle coined the term S.A.M.E. – Stripe And Mirror Everything – in 1999 and popularized the practice, which many database administrators (DBA) and storage administrators have followed ever since. I follow this practice and suggest you do the same.

First, you need to determine how these cards will be accessed:

  • Small random reads and writes
  • Larger sequential reads
  • Hybrid (mix of both)

In database deployments, the workload usually falls into one of three categories: online transaction processing (OLTP) applications such as airline and hotel reservation systems, corporate financials or enterprise resource planning (ERP); data warehouse/data mining/data analytics applications; or a mix of both. OLTP applications generate small random reads and writes as well as many sequential writes for log files. Data warehouse/data mining/data analytics applications generate mostly large sequential reads with very few sequential log writes.

Before setting up one or many PCIe flash cards in a RAID array either using LVM on RAID or creating a RAID array using MDADM, you need to know the access pattern of the IO, capacity requirements and budget. These requirements will dictate which RAID level will work best for your environment and fit your budget.

I would pick either a RAID 1/RAID 10 configuration (mirroring without striping, or striping and mirroring respectively), or RAID 5 (striping with parity). RAID 1/RAID 10 costs more but delivers the best performance, whereas RAID 5 costs less but imposes a significant write penalty.

Optimizing OLTP application performance
To optimize performance of an OLTP application, I would implement either a RAID 1 or RAID 10 array. If I were budget constrained, or implementing a data warehouse application, I would use a RAID 5 array. Normally a RAID 5 array will produce the higher sequential throughput (megabytes per second) appropriate for a data warehouse/data mining application.

In a nutshell, knowing how to tune the configuration to the application is key to reaping the best performance.

For either RAID array, you need to create an aligned partition using sfdisk:

echo "2048,,fd" | sfdisk -uS /dev/sdX --force

“fd” is the system identifier for a Linux RAID auto device.

Keep in mind that it is not mandatory to create a partition for LVMs or RAID arrays. Instead, you can assign RAW devices. It’s important to remember to align the sectors if combining RAW and partitioned devices or just creating a basic partition. It’s sound practice to always create an aligned partition when using PCIe flash cards.

At this point, aligned partitions have been created and are now ready to be used in LVMs or RAID arrays. Instructions for creating these are on the web or in Linux/Unix reference manuals. Here are a couple of websites that go over the process of creating LVM, RAID, or LVM on RAID:

https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID
http://www.gagme.com/greg/linux/raid-lvm.php

Specifying a stripe width value
Also remember that, when creating LVMs with striping or RAID arrays, you'll need to specify a stripe width value. Many years ago, Oracle and EMC conducted a number of studies on this and concluded that a 1M stripe width performed best as long as the database IO request was equal to or less than 1M. When implementing Oracle ASM, Oracle's standard is to use 1M allocation units, which matches its coarse striping size of 1M.
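As a rough sketch of both approaches, assuming four aligned partitions /dev/sdb1 through /dev/sde1 (hypothetical device names) and the 1M stripe width discussed above:

# Software RAID 10 with mdadm, using a 1024 KB (1M) chunk size
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=1024 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Striped logical volume with LVM; -I 1024 sets a 1M stripe size (value in KB).
# Plain striping is not fault tolerant on its own; use LVM's RAID options or
# build the volume group on top of an MD array if the data must persist.
pvcreate /dev/sdb1 /dev/sdc1
vgcreate flashvg /dev/sdb1 /dev/sdc1
lvcreate -i 2 -I 1024 -l 100%FREE -n flashlv flashvg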

Part 2 of this series will describe how to create RAW devices or file systems.

Part 3 of this series will describe how to use Oracle ASM when deploying PCIe flash cards.

Part 4 of this series will describe how to persist assignment to dynamically changing NWD/NMR devices.
