
A couple of years ago I got a DSLR (digital single-lens reflex) camera.  After using a compact digital camera, the DSLR opened a new world of photography for me. It was great to have the option to shoot six frames per second, use different lenses and fine-tune shutter speed, exposure and other parameters.

Learning to take my photography to a higher level, from auto to manual settings, was quite an experience.  Through research and talking to friends and photographers, I discovered that I needed to learn these fundamentals:

  • Shutter Speed (time the sensor is exposed to light)
  • Aperture (how much light the lens will allow in)
  • ISO (sensor sensitivity to light)

Experimenting with each of these variables was a frustrating test of Murphy’s Law. Just when I thought I had found the right setting for, say, shutter speed, the other two would be thrown out of whack. You start to get used to shooting in some conditions, like low light, but it’s always a balancing act to understand the relationship among the three settings and how they influence each other. Should I go auto, do what the camera dictates, and settle for unsatisfying results, or go manual and struggle to get the results I want?

Fundamentals of database performance
At LSI I do a lot of database performance testing of systems, and it reminds me of photography. Many factors can affect system performance, but these are the fundamental ones:

  • Processor (instruction execution)
  • Memory (volatile data storage)
  • Storage (non-volatile data storage)

Consider the many systems that sport fast multi-core processors but use slow non-volatile data storage that bottlenecks performance. As for memory, this IT maxim comes to mind: “Memory is like time. You can never have enough.” It would be great to always have enough memory for your working set, but this is seldom the case for large, high-transaction systems. When a database system cannot hold its working set in memory, it must either retrieve data from slower non-volatile data storage or, depending on the operating system resources available, reduce the working set of data that resides in memory.

Striking the right balance among the workload performance of the processor, memory and storage is key to optimizing database system performance.


Server components: Working together to optimize system performance
Non-volatile storage traditionally has been the system bottleneck, but this is changing with the growing adoption of fast PCIe® NAND flash, as is evident with in-memory database (IMDB) systems. I recently tested the customer preview of SQL Server 2014 In-Memory OLTP (online transaction processing), code-named Hekaton, with its durable tables feature, on Windows Server® 2012 R2.

With SQL Server 2014, tables and indexes can reside in memory rather than in disk-based storage, significantly increasing the performance of short, highly concurrent transactions. Even with this fast in-memory processing capability, there is a performance tradeoff to maintain durability. (See my post “In-memory shopping tip: Look for durability as much as speed.”) Keep in mind that SQL Server logging still needs to be performed on non-volatile storage, so the faster the storage responds, the better.
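
To make this concrete, here is a minimal sketch of how a durable memory-optimized table is declared in SQL Server 2014. This is my own illustrative example, not from my test setup: the table, columns and bucket count are assumptions you would tune for a real workload.

-- Minimal sketch of a durable memory-optimized table (illustrative names).
-- Assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.OrderEntries (
    OrderID    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      -- SCHEMA_AND_DATA makes the table durable: committed changes are
      -- still logged to non-volatile storage, which is why log latency matters.
      DURABILITY = SCHEMA_AND_DATA);

It is that last DURABILITY setting that ties in-memory speed back to the performance of your log storage.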

Figure: Average processor utilization during workload testing with the log file on various non-volatile data storage technologies.

With SQL Server logging, if the log file on non-volatile data storage is not fast enough, the processor will be underutilized. One way to shift more work to the processor is to deploy the LSI® Nytro™ WarpDrive® card for faster log writes. WarpDrive is one of several LSI non-volatile NAND-flash storage products that can help boost the performance and efficiency of your server components.

System performance optimization is an art that involves testing workload types. Changing one component affects the others, so striking the right balance among the server components is key to getting the results you want. Deploying the fastest processor available might seem like the best, most obvious way to goose system performance, but it’s even more important to optimize processor, memory and storage performance to the specific workload.



What is Oracle ASM?
The Oracle® automatic storage management system (ASM) was developed 10 years ago to make it much easier for database administrators (DBAs) to use and tune database storage. Oracle ASM enables DBAs to:

  • Automatically stripe data across the raw devices in a disk group to improve database storage performance
  • Mirror data for greater fault tolerance
  • Simplify the management and extension of database storage for the cloud and, with the ASM Cluster File System (ACFS), use the snapshot and replication functionality to increase availability
  • Add the Oracle Real Application Clusters (RAC) capability to help reduce total cost of ownership (TCO), expand scalability and increase availability, among other benefits
  • Easily move data from one device to another while the database is active with no performance degradation
  • Reduce or eliminate storage or Linux administrator time for configuring database storage
  • Use ASM as a Linux®/Unix operating system file system called ACFS. (I know what you are thinking. Since you need Oracle Grid up and running to mount and use ASM, how can an ACFS device be available to the operating system at system boot? The reason is that the kernel has been modified to allow this functionality. Learn more about ACFS here.)
  • Do all of this for free – ASM comes with Oracle Grid

The drawbacks of using Oracle ASM:

  • DBAs now control the storage they are using. Therefore, they need to know more about the storage and how the logical unit numbers (LUNs) are being used by Oracle ASM, and how to create ASM disk groups for higher performance.
  • Most ASM commands are executed through SQLPlus, not through the command line. That means storage is accessed through SQLPlus and sometimes ASMCMD, isolating the storage and making it harder for Linux admins to identify storage issues (see the example query after this list).
  • Recovery Manager (RMAN) is the only guaranteed/supported method of backing up databases on ASM.
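
As a quick illustration of that SQLPlus-centric workflow, here is the kind of query a DBA would run from the Grid instance to check disk group health and capacity – a minimal sketch, assuming a connection made with sqlplus / as sysasm:

-- Disk group state and capacity are visible only from the ASM instance,
-- not from the Linux file-system tools an admin would normally reach for:
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;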

What will be covered in this blog and what won’t
ASM is quite complex to learn and to set up properly for both performance and high availability. I won’t be going over all the commands and configurations of ASM, but I will cover how to set up an aligned LSI Nytro WarpDrive or Nytro MegaRAID PCIe® card and create an ASM disk to be assigned to an ASM disk group. There are many websites and books that cover all the details of Oracle ASM; the most current book I would recommend is “Database Cloud Storage: The Essential Guide to Oracle Automatic Storage Management.” Or visit Oracle’s docs.oracle.com website.

Setting up ASM
The following steps cover configuring a LUN for ASM. In order to use ASM, you will need to install the Oracle Grid software from otn.oracle.com. I prefer using Oracle ASMLIB when configuring ASM. Included in the box with the latest version of Oracle Linux, ASMLIB offers an easier way to configure ASM. If you are using an older version of Oracle Linux, you will need to install the ASM RPMs from support.oracle.com.

Step 1: Create aligned partition
Refer to Part 1 of this series to create a LUN on a 1M boundary. Oracle recommends using the full disk for ASM, so just create one large aligned partition. I suggest using this command:

echo "2048,," | sfdisk -uS /dev/sdX --force

Step 2: Create an ASM disk
Once the device has an aligned partition created on it, we can assign it to ASM by using the oracleasm createdisk command with two input parameters – the ASM disk name and the partitioned PCIe flash device name – as follows:

/etc/init.d/oracleasm createdisk ASMDISK1 /dev/sda1

To verify that the createdisk process was successful and the device was marked as an ASM disk, enter the following commands:

/etc/init.d/oracleasm querydisk /dev/sda1

(the output should state: “/dev/sda1 is an Oracle ASM disk” [OK])

/etc/init.d/oracleasm listdisks

(the output should state: ASMDISK1)

Step 3: Assign ASM disk to disk group
The ASM disk group is the primary component of ASM as well as the highest level data structure in ASM. A disk group is a container of multiple ASM disks, and it is the disk group that the database references when creating Oracle Tablespaces.

There are multiple ways to create an ASM disk group. The easiest way is to use ASM Configuration Assistant (ASMCA), which walks you through the creation process. See Oracle ASM documentation on how to use ASMCA.

Here are the steps for creating a disk group manually from SQLPlus:

a: Log in to the Grid instance using sqlplus / as sysasm.

b: Verify that the new ASM disk is visible with this query:

SELECT name, path, header_status, state FROM v$asm_disk;

c: Create the disk group with this command (the disk path is the ASM disk created in Step 2):

CREATE DISKGROUP DG1 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ASMDISK1';

The disk group is now ready to be used in creating an Oracle database Tablespace. To use this disk group in an Oracle database, please refer to Oracle’s database documentation at docs.oracle.com.
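
As a brief sketch of that last step (the tablespace name and sizes below are placeholders, not recommendations), creating a tablespace on the new disk group looks like this from the database instance:

-- The '+DG1' prefix places the datafile in ASM disk group DG1 and lets
-- Oracle Managed Files name and manage the underlying file.
CREATE TABLESPACE nytro_data
    DATAFILE '+DG1' SIZE 10G
    AUTOEXTEND ON NEXT 1G;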

In Part 4, the final installment of this series, I’ll discuss how to persist device name assignments for dynamically changing Nytro WarpDrive and Nytro MegaRAID PCIe cards.



How did he do that?
Growing up, I watched a little TV.  Okay, a lot of TV as I did not have my DVR or iPad and a man who would one day occupy the White House as VP had not yet invented the Internet.  Of the many shows I watched, MacGyver was one of my favorites. He would take ordinary objects and use them to solve complicated problems in a way no one could have imagined. Out of all the things he used, his trusty Swiss army knife was the most awesome.  With all its blades, tools and accessories, it could solve multiple problems at the same time.  It was easy to use, did not take up a lot of space and was very cost-effective.

Nytro MegaRAID – the Swiss Army knife of server storage
LSI has its own multi-function, get-yourself-out-of-a-fix workhorse – the Nytro MegaRAID® card, part of the Nytro product family. It combines caching intelligence, RAID protection and flash on a single PCIe® card to accelerate applications, so it can be deployed to solve problems across a broad number of applications.

A feature for every challenge!
The Nytro MegaRAID card is built on the same trusted technology as the MegaRAID cards deployed in datacenters worldwide. That means it is enterprise-architected, hardened and datacenter-tested. Its Swiss Army knife-like features include, as I mentioned, on-board flash storage that can be configured to monitor the flow of data from an application to the attached RAID-protected storage, intelligently identify hot (most frequently accessed) data, and automatically move a copy of that data to the flash storage to accelerate applications. The next time the application needs that data, the information is fetched from flash, not the much slower traditional hard disk drive (HDD) storage.

Hard drives can lead to slowdowns in another way, too, when the mechanics wear out and fail. When they do, your storage (and application) performance can decrease dramatically – in a RAID storage environment, this is called degraded mode. The good news is that the Nytro MegaRAID card stores much of an application’s frequently used data in its intelligent flash-based cache, boosting the performance of a connected HDD in degraded mode by as much as 10x, depending on the configuration. The Swiss Army knife follow-on benefit is that when you replace the failed drive, Nytro MegaRAID speeds RAID storage rebuilds by as much as 4x. RAID rebuilds add to IT admin time, and IT time is money, so that’s money you get to keep in your pocket.

The Nytro MegaRAID card also can be configured so you can use half of its onboard flash as a pair of mirrored boot drives.  In big data environments, this mirroring frees up two boot drives for use as data storage to help increase your server storage density (aka available storage capacity), often significantly, while dramatically improving boot time.  What’s more, that same flash can be deployed instead as primary storage to complement your secondary HDD storage with higher speeds, providing a superfast repository for key files like virtual desktop infrastructure (VDI) golden images or key database log files.

One MacGyver Swiss Army knife, one Nytro MegaRAID card – both easy-to-use solutions for a number of complex problems.



One of the coolest parts of my job is talking with customers and partners about their production environment challenges around database technology.  A topic of particular interest lately is in-memory database (IMDB) systems and their integration into an existing environment.

The need for speed
Much of the media coverage of IMDB integrations is heavily focused on speed and loaded with terms like real-time processing, on-demand analytics and memory speed.  But zeroing in on the performance benefits comes at the expense of so many other key aspects of IMDBs. The technology needs to be evaluated as a whole.

Granted, in-memory databases can store data structures in DRAM with latency that is measured in nanoseconds. (Latency of disk-based technology, comparatively, is glacial – clocked in milliseconds.)  Depending on the workload and the vendor’s database engine architecture, DRAM processing can improve database performance by as much as 50X-100X.

How durable is it?
Keep in mind that most relational database systems conform to the ACID (Atomicity, Consistency, Isolation and Durability) properties of transactions. (You can find a more thorough investigation of these properties in this paper – “The Transaction Concept: Virtues and Limitations” – authored by database pioneer Jim Gray.) The durability property naturally raises the question: How is data protected from DRAM failures when things go haywire, and what is the recovery experience like? Relational databases implement the durability property to ensure transaction information is permanently captured, protecting against power loss or hardware failure.

The commonly used WAL (Write Ahead Logging) method ensures that the transaction data is written to a log file (persisted on non-volatile storage) before it is committed and subsequently written to a data file (persisted on non-volatile storage). When the database engine restarts after a failure, it switches to recovery mode to read the log file and determine if the transactions should be rolled forward (committed) or rolled back (cancelled), depending on their state at the time of failure.

Current in-memory database systems support durability, though their implementations vary by vendor. Here is a sampling of durability techniques they use (a concrete example follows the list):

  • WAL (Write Ahead Logging)
    • Traditional method described above using a log file.
    • Changes are persisted to non-volatile storage that is used for recovery.
  • Replication
    • Data is copied to more than one location, and can be across different nodes.
    • Recovery can be handled using failover to alternate nodes.
  • Snapshots
    • Database snapshots are taken at intervals.
    • Previous snapshots can be used for recovery.
  • Data Tiering
    • Frequently accessed data resides only in in-memory DRAM structures.
    • Archival or less frequently accessed data resides only on non-volatile storage.
    • Replication can be used as well.

Shopping tip: Consider durability when evaluating your options
If changes in your data environment are frequent and require greater persistence and consistency, be sure to also consider durability when evaluating and comparing vendor implementations.  Durability is no less important than query speed.  Different implementations may or may not be a good fit and in some cases might require additional hardware that can increase cost.

It’s easy to get swept away by all the media attention about how in-memory databases deliver blazing performance, but customers often tell me they would gladly give up some performance for rock-solid stability and interoperability.

For our part, LSI enterprise PCIe® flash storage solutions not only perform well but also include DuraClass™ technology, which can increase the endurance, reliability and power efficiency of non-volatile storage used for in-memory database systems.




The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren’t infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government’s halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn’t solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.  

In hyperscale datacenters, any number of problems including network issues, insufficient scaling and inconsistent management can undermine end users’ experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.

Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using some kind of open-source RDB such as MySQL or other variations. Keep in mind that Oracle 11gR2 likely retails around $30K per core, but MySQL is free. But the performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.

Almost everyone in industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to the storage – typically around 70% reads and 30% writes. These are also typically low queue-depth accesses (less than 4). It is those writes that can really slow things down.
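
As a rough sketch of where those write latencies surface in MySQL, these are the InnoDB knobs most often revisited when moving a database onto flash; the values shown are illustrative assumptions, not tuning advice:

-- Full ACID durability: flush the redo log to storage at every commit,
-- so commit latency tracks the storage's write latency directly.
SET GLOBAL innodb_flush_log_at_trx_commit = 1;

-- Tell InnoDB roughly how many IOPS the storage can absorb for background
-- flushing; flash sustains far more than the HDD-era default.
SET GLOBAL innodb_io_capacity = 20000;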

PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM and in particular NVDRAM are being deployed as a tier in front of flash to really tackle those nasty write latencies.

Among other demos, we’re showing how a Nytro™ 6000 series PCIe flash card helps solve MySQL database performance issues. The typical response time for a small data read (this is what the database will see for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50μs on average during testing – a roughly 100x (two orders of magnitude) improvement in response time. This faster response translates to getting many more transactions out of the same infrastructure – but with less space (flash is denser) and much lower power consumption than HDDs.

We’re also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series, even at very low queue depths. The 7000 series cards also provide DRAM buffering while maintaining data integrity even in the event of a power loss.

For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.
