When implementing an LSI Nytro WarpDrive (NWD) or Nytro MegaRAID (NMR) PCIe card in a Linux server, you’ll need to modify quite a few variables to produce the best performance.
In the Linux server, device assignments sometimes change after reboots. Sometimes the PCIe card is assigned /dev/sda; other times it might be assigned /dev/sdd, or any other device name. This variability can wreak havoc when you are setting the Linux performance variables. To get around this issue, make the device assignments by SCSI address so that all of the Linux performance variables persist properly across reboots. If you’re using a file system, be sure to include the device UUID in the mount statement in /etc/fstab so the mount command also persists across reboots.
Cut and paste the script
The first step to solving this issue is to cut and paste the following script, substituting the SCSI address of your PCIe card where indicated, so that it runs from /etc/rc.local. You’ll need to enter the SCSI address before executing the script.
The SCSI address needs to be modified with the address of the PCIe card. To get this value, issue this command:
ls -al /dev/disk/by-id
When you install the NWD/NMR PCIe card, Linux will assign a name to the device. For example, the device name can be listed as /dev/sdX, and X can be any letter. The output from the “ls” command above will show the SCSI address for this PCIe device. Don’t use the address containing “-partX.” Be sure to note this SCSI address since you will need it to create the script.
Copy the code
Next, copy the code and create a file called “nwd_getdevice.sh” with the modified SCSI address.
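The script itself will vary by environment, but here is a minimal sketch of what nwd_getdevice.sh might look like. The SCSI address is a placeholder, and the sysfs settings and values are examples only, not vendor-recommended tuning:
#!/bin/bash
# nwd_getdevice.sh - resolve the PCIe flash card by its persistent SCSI address
# and apply performance settings to whichever /dev/sdX name it received on this boot.
SCSI_ADDR="scsi-XXXXXXXXXXXXXXXXXXXX"   # placeholder - use the address noted from /dev/disk/by-id
# Follow the persistent symlink back to the current kernel device name (e.g. sdc)
DEVICE=$(basename $(readlink -f /dev/disk/by-id/${SCSI_ADDR}))
# Example performance variables - adjust the settings and values to your environment
echo noop > /sys/block/${DEVICE}/queue/scheduler
echo 0 > /sys/block/${DEVICE}/queue/rotational
echo 4096 > /sys/block/${DEVICE}/queue/nr_requests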
After saving this file, change permission of the file to “execute” and then place this command in the /etc/rc.local file:
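Assuming the script was saved as /root/nwd_getdevice.sh (the path is an assumption; use whatever location you chose), the permission change would be:
chmod +x /root/nwd_getdevice.sh
and the line to add to /etc/rc.local would simply be:
/root/nwd_getdevice.sh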
Test the script
To test this script, execute it on the command line exactly as you entered it in the rc.local file. The next time the system is rebooted, the settings will be applied to the appropriate device.
If you plan to deploy multiple LSI PCIe cards, you will have to perform the prior steps for each PCIe card, creating a new nwd_getdeviceX.sh script for each. For example, I use script names like nwd_getdevice1.sh, nwd_getdevice2.sh and so on, and then include separate execute statements in the rc.local file for each script.
The most important step in implementing NWD/NMR PCIe flash cards is aligning the card on a boundary, which I cover in Part 1 of this series. This step alone can deliver a 3x performance gain or more based on our in-house tests as well as testing from some of our customers. The rest of this series walks you through the process of setting up these aligned flash cards using a file system, ASM or RAW device and, finally, persisting all the Linux performance variables to the card so these settings are persisted across reboots.
What is Oracle ASM?
The Oracle® Automatic Storage Management (ASM) system was developed 10 years ago to make it much easier for database administrators (DBAs) to use and tune database storage. Oracle ASM enables DBAs to:
The drawbacks of using Oracle ASM:
What will be covered in this blog and what won’t
ASM is quite complex to learn and to set up properly for both performance and high availability. I won’t be going over all the commands and configurations of ASM, but I will cover how to set up an aligned LSI Nytro WarpDrive and Nytro MegaRAID PCIe® card and create an ASM disk to be assigned to an ASM disk group. There are many websites and books that go over all the details of Oracle ASM, and the most current book that I would recommend is “Database Cloud Storage: The Essential Guide to Oracle Automatic Storage Management.” Or visit Oracle’s docs.oracle.com website.
Setting up ASM
The following steps cover configuring a LUN for ASM. In order to use ASM, you will need to install the Oracle Grid software from otn.oracle.com. I prefer using Oracle ASMLIB when configuring ASM. ASMLIB is included with the latest version of Oracle Linux and offers an easier way to configure ASM. If you are using an older version of ASM, you will need to install the RPMs for ASM from support.oracle.com.
Step 1: Create aligned partition
Refer to Part 1 of this series to create a LUN on a 1M boundary. Oracle recommends using the full disk for ASM, so just create one large aligned partition. I suggest using this command:
echo "2048,," | sfdisk -uS /dev/sdX --force
Step 2: Create an ASM disk
Once the device has an aligned partition created on it, we can assign it to ASM by using the ASM createdisk command with two input parameters – ASM disk name and the PCIe flash partitioned device name – as follows:
/etc/init.d/oracleasm createdisk ASMDISK1 /dev/sda1
To verify that the createdisk process was successful and that the device was marked as an ASM disk, enter the following commands:
/etc/init.d/oracleasm querydisk /dev/sda1
(the output should state that /dev/sda1 is marked as an Oracle ASM disk [OK])
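The second command lists the disk labels that ASMLIB has marked (this assumes the standard ASMLIB listdisks command):
/etc/init.d/oracleasm listdisks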
(the output should state: ASMDISK1)
Step 3: Assign ASM disk to disk group
The ASM disk group is the primary component of ASM as well as the highest level data structure in ASM. A disk group is a container of multiple ASM disks, and it is the disk group that the database references when creating Oracle Tablespaces.
There are multiple ways to create an ASM disk group. The easiest way is to use ASM Configuration Assistant (ASMCA), which walks you through the creation process. See Oracle ASM documentation on how to use ASMCA.
Here are the steps for creating a disk group:
a: Log in to GRID using sqlplus / as sysasm.
b: Select the name, path, header_status and state columns from v$asm_disk to confirm the disk is visible, as shown in the sketch after these steps.
c: Create disk group DG1 with external redundancy on that disk, also shown in the sketch below.
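Here is a sketch of steps b and c in SQL; the disk group name and the ASMLIB disk path (ORCL:ASMDISK1) are assumptions based on the disk created in Step 2:
SQL> SELECT name, path, header_status, state FROM v$asm_disk;
SQL> CREATE DISKGROUP DG1 EXTERNAL REDUNDANCY DISK 'ORCL:ASMDISK1';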
The disk group is now ready to be used in creating an Oracle database Tablespace. To use this disk group in an Oracle database, please refer to Oracle’s database documentation at docs.oracle.com.
In Part 4, the final installment of this series, I’ll discuss how to persist assignment to dynamically changing Nytro WarpDrive and Nytro MegaRAID PCIe cards.
Tags: ACFS, ASM Cluster File System, database, database administrator, DBA, disk group, Linux, Logical Unit Number, LUN, Nytro MegaRAID card, Nytro WarpDrive card, Oracle, Oracle ASM, Oracle ASMLIB, Oracle Automatic Storage Management System, Oracle Grid, Oracle RAC, Oracle Real Application Clusters, Oracle Tablespaces, partition, PCI Express, PCIe flash, RAW, Recover Manager, RMAN, Unix
One of the coolest parts of my job is talking with customers and partners about their production environment challenges around database technology. A topic of particular interest lately is in-memory database (IMDB) systems and their integration into an existing environment.
The need for speed
Much of the media coverage of IMDB integrations is heavily focused on speed and loaded with terms like real-time processing, on-demand analytics and memory speed. But zeroing in on the performance benefits overlooks many other key aspects of IMDBs. The technology needs to be evaluated as a whole.
Granted, in-memory databases can store data structures in DRAM with latency that is measured in nanoseconds. (Latency of disk-based technology, comparatively, is glacial – clocked in milliseconds.) Depending on the workload and the vendor’s database engine architecture, DRAM processing can improve database performance by as much as 50X-100X.
How durable is it?
Keep in mind that most relational database systems conform to the ACID (Atomicity, Consistency, Isolation, and Durability) properties of transactions. (You can find a more thorough investigation of these properties in the paper “The Transaction Concept: Virtues and Limitations,” authored by database pioneer Jim Gray.) The durability requirement naturally raises the question: how is data protected from DRAM failures when things go haywire, and what is the recovery experience like? Relational databases implement the durability property to ensure transaction information is permanently captured despite power loss or hardware failure.
The commonly used WAL (Write Ahead Logging) method ensures that the transaction data is written to a log file (persisted on non-volatile storage) before it is committed and subsequently written to a data file (persisted on non-volatile storage). When the database engine restarts after a failure, it switches to recovery mode to read the log file and determine if the transactions should be rolled forward (committed) or rolled back (cancelled), depending on their state at the time of failure.
Current in-memory database systems do support durability, though implementations vary by vendor. Here is a sampling of the durability techniques they use: write-ahead transaction logging to non-volatile storage, periodic snapshots or checkpoints, and replication across nodes.
Shopping tip: Consider durability when evaluating your options
If changes in your data environment are frequent and require greater persistence and consistency, be sure to also consider durability when evaluating and comparing vendor implementations. Durability is no less important than query speed. Different implementations may or may not be a good fit and in some cases might require additional hardware that can increase cost.
It’s easy to get swept away by all the media attention about how in-memory databases deliver blazing performance, but customers often tell me they would gladly give up some performance for rock-solid stability and interoperability.
For our part, LSI enterprise PCIe® flash storage solutions not only perform well but also include DuraClass™ technology, which can increase the endurance, reliability and power efficiency of non-volatile storage used for in-memory database systems.
Tags: ACID, data tiering, database, DRAM, durability, DuraClass, flash storage, IMDB, in-memory database systems, PCI Express, PCIe flash, relational database, replication, snapshots, WAL, write ahead logging
My first blog in this series, “How to maximize performance of PCIe flash for enterprise applications running on Linux,” describes the steps for aligning PCIe® flash devices. This blog covers the next stage of setting up the PCIe flash device when using the Linux® operating system: creating a RAW device or a file system.
At this point, one or more PCIe flash cards have been partitioned on a sector boundary. Depending on their use, these partitioned devices are either set up as a single RAW device or as part of a logical volume or RAID array.
The next step is to determine how these devices will be used. Most administrators will create file systems on these partitions. Some Oracle administrators will use them as RAW devices and assign them to Automatic Storage Management (ASM). Still others, looking for the best performance possible from the device, will stick with a RAW device. For many years, the recommendation was not to use RAW devices because the complexity of managing them outweighed their small potential gains in performance.
ASM uses RAW devices but makes administration of these devices much easier. More on ASM in Part 3 of this series.
Building a file system
The next task is to build a file system on the RAW device, LVM or RAID. But first we need to determine the best type of file system to use. There are many to choose from, including EXT-3, EXT-4 and XFS.
To keep this brief, I will only go over EXT-4. This type of file system is the most current and provides the latest enhancements for increasing capacity, disabling journaling and many other capabilities, though XFS can be a higher performance alternative.
To create an EXT-4 file system, use this command:
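For example, assuming the aligned partition is /dev/sdX1 (substitute your device name):
mkfs.ext4 /dev/sdX1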
You can now turn certain features of the EXT-4 file system on or off by using tune2fs. Here are a couple of examples of using tune2fs:
tune2fs -l /dev/sdX1 | grep 'Filesystem features'
tune2fs -O ^has_journal /dev/sdX1
Mounting the file system
The next step is to mount the file system and assign the owner:group to the mount point. There are also many tuning options that can be added to the mount command when using PCIe flash cards. The mount options I use are noatime, nodiratime, max_batch_time=0, nobarrier and discard.
The mount command for /dev/sda1 to /u01 would be:
mount -o noatime,nodiratime,max_batch_time=0,nobarrier,discard /dev/sda1 /u01
To make these mount points persistent over reboots, add them to the mount entries in /etc/fstab. Finally, you need to give a user rights to read and write to this mount point by assigning ownership of /u01, for example, to the oracle userid and the dba group. To do this, use the “chown” command:
chown oracle:dba /u01
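For the /etc/fstab entry mentioned above, here is a sketch that uses the device UUID so the entry survives device name changes across reboots; find the UUID with blkid, as the one below is only a placeholder:
UUID=0a1b2c3d-0000-0000-0000-000000000000  /u01  ext4  noatime,nodiratime,max_batch_time=0,nobarrier,discard  0 0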
The PCIe flash device is now ready to be used.
Part 3 of this series will describe how to use Oracle ASM when deploying PCIe flash cards.
Part 4 of this series will describe how to persist assignment to dynamically changing NWD/NMR devices.
Customer dilemma: I just purchased PCIe® flash cards to increase performance of my enterprise applications that run on Linux® and Unix®. How do I set them up to get the best performance?
Good question. I wish there were a simple answer but each environment is different. There is no cookie-cutter configuration that fits all, though a few questions will reveal how the PCIe flash cards should be configured for optimum performance.
Most of the popular relational and non-relational databases run on many different operating systems. I will be describing Linux-specific configurations, but most of them should also work with Unix systems that are supported by the PCIe flash card vendor. I’m a database guy, but these same principles and techniques that I’ll be covering apply to other applications like mail servers, web servers, application servers and, of course, databases.
Aligning PCIe flash devices
The most important step to perform on each PCIe flash card is to create a partition that is aligned on a specific boundary (such as 4k or 8k) so each read and write to the flash device will require only one physical input/output (IO) operation. If the card is not partitioned on such a boundary, then reads and writes will span the sector groups, which doubles the IO latency for each read or write request.
To align a partition, I use the sfdisk command to start a partition on a 1M boundary (sector 2048). Aligning to a 1M boundary resolves the dependency to align to a 4k, 8k, or even a 64k boundary. But before I do this, I need to know how I am going to use this device. Will this be a standalone partition? Part of a logical volume? Or part of a RAID group?
Which one is best?
If I were deploying the PCIe flash device for database caching (for example, the Oracle database has provided this caching functionality for years using the Database Smart Flash Cache feature, and Facebook created the open source Flashcache used in MySQL databases), I would use a single-partitioned PCIe flash card if I knew the capacity would meet my needs now and over the next 5 years. If I selected this configuration, the sfdisk command to create the partition would be:
echo "2048,," | sfdisk -uS /dev/sdX --force
This single partitioning is also required with the Oracle® Automatic Storage Management system (ASM). Oracle has provided ASM for many years and I will go over how to use this storage feature in Part 3 of this series.
If I need to deploy multiple PCIe flash cards for database caching, I would create Logical Volume Manager (LVM) over all the flash devices to simplify administration. The sfdisk command to create a partition for each PCIe flash card would be:
echo "2048,,8e" | sfdisk -uS /dev/sdX --force
“8e” is the system partition type for creating a logical volume.
Neither of these solutions needs fault tolerance since they will be used for write-thru caching. My recent blog “How to optimize PCIe flash cards – a new approach to creating logical volumes” covers this process in detail.
If I want to use the PCIe flash card for persisting data, I would need to make the PCIe flash cards fault tolerant, using two or more cards to build the RAID array and eliminate any single point of failure. There are a number of ways to create a RAID over multiple PCIe flash cards, two of which are creating the array directly with MDADM or building an LVM on top of an MDADM RAID array.
But what type of RAID setup is best to use?
Oracle coined the term S.A.M.E. – Stripe And Mirror Everything – in 1999 and popularized the practice, which many database administrators (DBA) and storage administrators have followed ever since. I follow this practice and suggest you do the same.
First, you need to determine how these cards will be accessed:
In database deployments, your choice is usually among online transaction processing (OLTP) applications like airline and hotel reservation systems and corporate financial or enterprise resource planning (ERP) applications, or data warehouse/data mining/data analytics applications, or a mix of both environments. OLTP applications involve small random reads and writes as well as many sequential writes for log files. Data warehouse/data mining/data analytics applications involve mostly large sequential reads with very few sequential log writes.
Before setting up one or many PCIe flash cards in a RAID array either using LVM on RAID or creating a RAID array using MDADM, you need to know the access pattern of the IO, capacity requirements and budget. These requirements will dictate which RAID level will work best for your environment and fit your budget.
I would pick either a RAID 1/RAID 10 configuration (mirroring without striping, or striping and mirroring respectively), or RAID 5 (striping with parity). RAID 1/RAID 10 costs more but delivers the best performance, whereas RAID 5 costs less but imposes a significant write penalty.
Optimizing OLTP application performance
To optimize performance of an OLTP application, I would implement either a RAID 1 or RAID 10 array. If I were budget constrained, or implementing a data warehouse application, I would use a RAID 5 array. Normally a RAID 5 array will produce a higher throughput (megabits per second) appropriate for a data warehouse/data mining application.
In a nutshell, knowing how to tune the configuration to the application is key to reaping the best performance.
For either RAID array, you need to create an aligned partition using sfdisk:
echo "2048,,fd" | sfdisk -uS /dev/sdX --force
“fd” is the system identifier for a Linux RAID auto device.
Keep in mind that it is not mandatory to create a partition for LVMs or RAID arrays. Instead, you can assign RAW devices. It’s important to remember to align the sectors if combining RAW and partitioned devices or just creating a basic partition. It’s sound practice to always create an aligned partition when using PCIe flash cards.
At this point, aligned partitions have been created and are now ready to be used in LVMs or RAID arrays. Instructions for creating these are available on the web and in Linux/Unix reference manuals, including a number of websites that walk through the process of creating LVM, RAID, or LVM on RAID.
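As a rough sketch of the RAID path (not a substitute for those references), an MDADM RAID 10 array over four aligned partitions might be created like this; the device names, array name and chunk size are placeholders for illustration:
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=1024 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
The --chunk value is in kilobytes, so 1024 gives a 1M chunk, which matches the stripe width guidance in the next section.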
Specifying a stripe width value
Also remember that, when creating LVMs with striping or RAID arrays, you’ll need to specify a stripe width value. Many years ago, Oracle and EMC conducted a number of studies on this and concluded that a 1M stripe width performed the best as long as the database IO request was equal to or less than 1M. When implementing Oracle ASM, Oracle’s standard is to use 1M allocation units, which matches its coarse striping size of 1M.
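With LVM striping, for example, the stripe count and stripe size are given on the lvcreate command line. A one-line sketch, assuming a volume group named vg_flash has already been created over two aligned flash partitions (the names are placeholders; -I takes kilobytes, so 1024 yields a 1M stripe):
lvcreate -i 2 -I 1024 -l 100%FREE -n lv_data vg_flash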
Part 2 of this series will describe how to create RAW devices or file systems.
Part 3 of this series will describe how to use Oracle ASM when deploying PCIe flash cards.
Part 4 of this series will describe how to persist assignment to dynamically changing NWD/NMR devices.
Tags: ASM, automatic storage management, data analytics, data mining, data warehouse, Database Smart Cache, EMC, enterprise resource planning, ERP, Facebook, flash storage, Flashcache, Linux, logical volume, Logical Volume Manager, LVM, MDADM, multiple device administration, MySQL, non-relational database, OLTP, online transaction processing, Oracle, partition, PCI Express, PCIe flash, performance, RAID, RAW, relational database, SAME, sector, Stripe and Mirror Everything, Unix
A major reason enterprise customers see high latency and poorer than expected performance when implementing flash technology is that the flash partition is not aligned on a sector boundary that allows the flash device to access its data efficiently. When creating a Logical Volume (LVM), things can get even more complicated. Proper partition alignment is critical to performance when implementing flash in your enterprise.
An aligned partition is one that starts on a byte offset that is evenly divisible by 4k or 8k, which with 512-byte sectors means a starting sector number divisible by eight. Aligned input/output (IO) operations will start at sector 8 for 4k alignment, sector 16 for 8k alignment, and so forth, up to sector 2048 for 1M alignment.
If a flash partition is unaligned – its IO operations start at a sector number not divisible by eight – the device will perform two IOs over adjacent blocks instead of one. These extra IOs will degrade performance of the flash device. In our testing, we have seen up to 4x performance gains by properly aligning the flash device.
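A quick way to check the alignment of an existing partition is to look at its starting sector and confirm that it is divisible by eight (this sanity check assumes 512-byte logical sectors; substitute your device name for sdX):
sfdisk -d /dev/sdX
or
cat /sys/block/sdX/sdX1/start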
Out with the old … in with the new
There are many articles, websites, and Linux system administrators best practice documents describing how to create a logical volume (LVM) – an abstraction of a number of flash devices into a single storage volume that enables dynamic volume resizing and makes it easier to replace, re-partition and back up individual devices in Linux. However, most of these practices were developed before the advent of PCIe® flash devices. I have worked with customers who have used these old practices of creating LVMs and some of them are seeing very poor performance when implementing flash in their environments.
My conversations with customers and documents I’ve read on creating LVMs have revealed that the first step in creating a LVM – to create a physical volume (PV) – needs refinement. The reason is the PV create process can use a raw device, a partitioned device, or a mix. I would suggest getting into the habit of aligning all flash devices on a physical sector boundary so that all PVs are aligned. The PV command is typically specified as either “pvcreate /dev/sdX,” which allocates the whole device (non-partitioned) to the PV, or “pvcreate /dev/sdX1,” which uses a partition to create the PV. If the PV is created using a mix of raw devices and partitioned devices, or multiple partitioned devices, is there alignment over all the PVs? Maybe! Maybe not! That’s the problem!
Aligning for higher speed
I recommend a new approach to creating LVMs when using flash technology. My suggestion is to align each of the flash devices on a 1M boundary before creating the PV. Here are the steps to help make sure that you are using boundary-aligned devices when creating an LVM:
echo "2048,,8e" | sfdisk -uS /dev/sdX --force
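From there, a minimal sketch of the remaining LVM steps over the aligned partitions might look like the following; the device, volume group and logical volume names are placeholders:
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg_flash /dev/sdb1 /dev/sdc1
lvcreate -l 100%FREE -n lv_flash vg_flash
Because the physical volumes sit on partitions that all start at sector 2048, every PV in the volume group is aligned the same way.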
Implementing flash in the enterprise is a great way to produce low latencies while providing high IOPs and throughput. By following these steps, you will successfully set up an LVM over multiple flash devices that are aligned on a proper boundary to get the best performance.
The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren’t infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government’s halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn’t solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.
In hyperscale datacenters, any number of problems, including network issues, insufficient scaling and inconsistent management, can undermine end users’ experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.
Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using some kind of open-source RDB such as MySQL or other variations. Keep in mind that Oracle 11gR2 likely retails for around $30K per core, but MySQL is free. The performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.
Almost everyone in the industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to the storage – typically 70% reads and 30% writes. These are also typically low queue-depth accesses (less than 4). It is those writes that can really slow things down.
PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM and in particular NVDRAM are being deployed as a tier in front of flash to really tackle those nasty write latencies.
Among other demos, we’re showing how a Nytro™ 6000 series PCIe flash card helps solve MySQL database performance issues. The typical response time for a small data read (what the database will see for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50μs on average during testing, an improvement of several orders of magnitude in response time. That response time translates into many more transactions from the same infrastructure – but with less space (flash is denser) and a lot less power (flash consumes far less power than HDDs).
We’re also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series and very low q-depths. The 7000 series cards also provide DRAM buffering while maintaining data-integrity even in the event of a power loss.
For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.
Tags: AIS, database, DRAM, e-commerce, flash, flash memory, hard disk drive, HDD, hyperscale datacenter, latency, MySQL, NVDRAM, Nytro 6000, Nytro 7000, Nytro WarpDrive, Oracle, PCIe flash, relational database, storage latency, web 2.0, write latency