Monday, July 6, 2009

Storage Design for Datawarehousing

With all the talk about Exadata and Netezza going around, I gave a presentation to our team on conventional storage design for Datawarehousing. For a small to medium corporate DW, conventional storage arrays perform adequately well (if properly designed).

Storage Design for Datawarehousing

Storage Subsystem Design for Datawarehousing - Array, Drive and RAID Selection
Krishna Manoharan
krishmanoh@gmail.com
http://dsstos.blogspot.com

Again, what is this about?

An attempt to show how to design a storage subsystem for a Datawarehouse environment from a physical perspective. It is aimed at conventional environments using standard devices such as Fibre Channel SAN arrays for Oracle databases. The presentation demonstrates the array and drive selection process using real life examples. You will be in for a few surprises!

Enterprise Business Intelligence (EBI)

Most companies have multiple Oracle instances (such as ODS and DW) with an ETL engine (Informatica) and a reporting tool (Business Objects), all rolled into an Enterprise Business Intelligence (EBI) environment. The ODS is the Operational Data Store (a near real time copy of the company's transactional data) and the DW is the Datawarehouse (a collection of aggregated corporate data). The ETL engine (such as Informatica) transforms and loads data contained in the ODS into the DW. The reporting engine (such as Business Objects) reports off data from both the ODS and the DW. This presentation covers the storage design for the DW. The typical size of a DW is around 5-10TB for a large software company. Though the typical enterprise warehouse is small in size, it is by no means less busy.

Enterprise Business Intelligence (EBI) - contd.

[Diagram: users run reports against a reporting engine sitting on the database layer (ODS and DW); the ODS receives one-way replication from the source systems, and the ETL engine extracts from the transaction systems (HR, Online Sales, Click Stream, ERP, CRM) and loads the DW.]

Datawarehousing and the Storage Subsystem

One of the biggest factors affecting performance in Datawarehousing is the storage subsystem. Once the environment is live, it becomes difficult to change the storage subsystem or the layers within it. So it is important to design, size and configure the storage subsystem appropriately for any Datawarehousing environment.

What is the Storage Subsystem?

The physical components of a conventional storage subsystem are the host system (CPU, memory, PCI interface), the SAN fabric (SAN switches) and the array (front end ports to the host, CPU, cache and drives). In this presentation, we talk about the array component of the storage subsystem.

IO Fundamentals from the Storage Subsystem

IO, in the simplest of terms, is a combination of reads and writes - reads are serviced from the storage subsystem and writes are sent to it. Reads and writes can be random or sequential.

IO Fundamentals - contd.

Random or sequential is meaningfully determined at the array level.

Random Reads
- Random read hit - if the data is present in the array cache, the read occurs at wire speed.
- Random read miss - if the data is not present in the array cache, the read hits the drives directly. This is the slowest IO operation.

Sequential Reads
- The first few reads are treated by the array as random. Judging by the incoming requests (if they are determined to be sequential), data is pre-fetched from the drives in fixed sized chunks and stored in cache. Subsequent reads are met from cache at wire speed to the requestor.

Random/Sequential Writes
- Normally staged to cache and then written to disk. They occur at wire speed to the requestor.

IO Metrics

Key IO metrics are:
- IOPS - the number of IO requests per second issued by the application.
- IO size - the size of the IO requests as issued by the application.
- Latency - the time taken to complete a single IO operation (IOP).
- Bandwidth - the bandwidth that the IO operations are expected to consume.

Latency or response time is the time it takes for one IO operation (IOP) to complete. Bandwidth is the total capacity of the pipe; bandwidth capabilities are fixed. [Diagram: a mix of 16K, 264K and 1024K sized IOPs flowing from source to destination through a fixed-capacity pipe.]
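These metrics are related to each other. A minimal back-of-the-envelope sketch in Python, using illustrative numbers that are not from the presentation, of how IOPS, IO size, latency and bandwidth interact:

```python
# Rough relationships between the key IO metrics.
# The numbers below are illustrative, not measurements from the presentation.

def bandwidth_mb_per_sec(iops: float, io_size_kb: float) -> float:
    """Bandwidth consumed = IOPS x IO size."""
    return iops * io_size_kb / 1024.0

def max_iops_for_latency(outstanding_ios: int, latency_ms: float) -> float:
    """With a fixed number of IOs in flight, achievable IOPS is capped by
    latency (Little's Law): IOPS = outstanding IOs / latency."""
    return outstanding_ios / (latency_ms / 1000.0)

# A single-threaded reader issuing 16K IOs at 10 ms per IO:
print(max_iops_for_latency(1, 10))            # ~100 IOPS
print(bandwidth_mb_per_sec(100, 16))          # ~1.6 MB/sec
# The same reader issuing 1024K IOs at 30 ms per IO:
print(bandwidth_mb_per_sec(max_iops_for_latency(1, 30), 1024))  # ~33 MB/sec
```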
Datawarehousing Storage Challenges

Storage design in a corporate environment is typically storage centric - based on capacity requirements, not application requirements. When applied to Datawarehousing, this results in a sub-standard user experience, as Datawarehousing is heavily dependent on IO performance.

Profiling Datawarehousing IO - Reads

From an IO performance perspective, array capabilities along with the RAID and drive configuration determine read performance in a Datawarehouse.
- Normally, in a conventional DW, you will notice many reports running against the same set of objects by different users, for different requirements, at the same time.
- Since the DW is not very big (~5-10TB) and the objects are therefore relatively small, it is a normal tendency to place these objects on the same set of spindles (also given the fact that today's drives are geared for capacity, not performance).
- Due to the high concurrency of the requests, about 60% of these read requests end up as random read misses at the array. A random read miss is the slowest operation on an array and requires the read to be met from the disks.
- Such random reads can be big (1MB sized IOPs) or as small as the database block size. To accommodate both, throughput and latency need to be taken into consideration.
- A high degree of random concurrency (along with write intensive loads) against a single set of disks will absolutely kill your user experience.

Profiling Datawarehousing IO - Writes

From an IO performance perspective, cache sizing and cache reservation, along with the RAID and disk configuration, determine write performance in a Datawarehouse.
- In a typical DW, different write operations (loads from the ETL engine, plus temp tablespace writes from users) occur at different intervals - 24*7*365. These writes can be direct path or conventional. Fast loading of data is important to be able to present the latest information to your customer, and these loads are normally driven by rigid SLAs.
- In an array environment, writes are staged to cache on the array and then written to the disks. Write performance depends on the size of the cache and the speed at which data can be destaged to the disks.
- If the cache overflows (the array cannot keep up with the writes), you will see an immediate spike in write response times and a corresponding impact on your write operation.
- The speed at which data can be written to disks depends on drive busyness. A combination of reads and writes occurring simultaneously against a single set of spindles results in a poor user experience. This normally happens when you place objects on the same set of spindles without regard to their usage patterns.
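As a rough illustration of why read concurrency hurts so much, the sketch below estimates how many spindles a random-read-miss workload needs. The 60% miss ratio is from the slide above; the peak read IOPS and the per-drive figure are taken from later in the deck (5200 peak single block read IOPS, ~167 random IOPS per 15K RPM drive), and the helper itself is mine:

```python
import math

def drives_needed(read_iops: float, miss_ratio: float, iops_per_drive: float) -> int:
    """Read misses bypass the array cache and must be served by the drives."""
    miss_iops = read_iops * miss_ratio
    return math.ceil(miss_iops / iops_per_drive)

# 5200 peak read IOPS, 60% of them misses, ~167 random IOPS per drive:
print(drives_needed(5200, 0.60, 167))   # ~19 drives just to absorb read misses
```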
Profiling Datawarehousing IO - Summary

To summarize (in storage terminology), Enterprise Datawarehousing is an environment in which performance is important, not just capacity.
- Read and write intensive (typically a 70:30 ratio).
- Small (KB) to large sized IOPs (> 1MB) - for both reads and writes.
- Latency is very important, and the IO operations can consume a significant amount of bandwidth.

In order to make these requirements more meaningful, you need to put numbers against each of these terms - IOPS, bandwidth and latency - so that a solution can be designed to meet them.

Starting the Design

Okay, I get the idea - so where do I begin?

Storage Subsystem Design Process

Requirements gathering phase:
1. If you have an existing warehouse, collect stats from Oracle, from the system and from the storage.
2. Correlate the stats from all sources (to ensure you are reading them correctly) and summarize/forward project.
3. If such stats are not available, document your requirements as best as you can, based on an understanding of how your environment will be used, and proceed to the design phase.

Infrastructure design phase:
4. Identify suitable system(s).
5. Identify suitable SAN switches.
6. Identify a suitable array, then the drives and the RAID configuration.

Storage Subsystem Design Requirements - contd.

If using data from an existing warehouse, do a forward projection using these stats as raw data for your design requirements. The existing IO subsystem affects the quality of the stats you have gathered, and you need to factor this in. Separate out reads and writes along with the IO size. Document your average and peak numbers at the Oracle level:
- Anticipated IOPS - number of IO requests/sec.
- Anticipated IO request size - the IO request sizes as issued by the application for different operations.
- Acceptable latency per IO request.
- Anticipated bandwidth requirements as consumed by the IOPS.

A Real World Approach to the Design

In order to make the design process more realistic, let us look at the requirements for a DW for a large software company and use these requirements to build a suitable storage subsystem.

Requirements for a typical Corporate DW (assuming 5TB in size) - Performance

The requirements below are as seen by Oracle. These are today's requirements; it is expected that as the database grows, the performance requirements will scale accordingly. Read peaks need not be at the same time as the write peaks, and the same holds for multiblock versus single block traffic.

- Multi block reads: latency <= 30ms; IO size >16KB to <= 1MB (average IOP size 764K); 1200 IOPS average / 2000 IOPS peak; 918 MB/sec average / 1492 MB/sec peak bandwidth (using a 764K sized IOP).
- Single block reads: latency <= 10ms; IO size 16KB; 4000 IOPS average / 5200 IOPS peak; 62.5 MB/sec average / 81.2 MB/sec peak bandwidth.
- Multi block writes: latency < 20ms; IO size >16KB to <= 1MB (average IOP size 512K); 400 IOPS average / 525 IOPS peak; 200 MB/sec average / 262.5 MB/sec peak bandwidth (using a 512K sized IOP).
- Single block writes: latency < 5ms; IO size 16KB; 450 IOPS average / 650 IOPS peak; 7 MB/sec average / 10 MB/sec peak bandwidth.
- Total: latency 5ms to 30ms; IO sizes 16KB to 1MB; 6050 IOPS average / 8375 IOPS peak; 1.2 GB/sec average / 1.8 GB/sec peak bandwidth.

Requirements for a typical EDW (assuming 5TB in size) - contd.

Capacity: The database is 5TB in size (data/index). Provide 10TB of usable space on day 1 to ensure that sufficient space is available for future growth (filesystems at 50% capacity), and scale the performance requirements appropriately for 10TB.

Misc: IO from redo/archive/backup is not included in the above requirements. The storage subsystem needs to be able to handle a 1024K IO request size to prevent IO fragmentation.
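The totals above can be sanity checked directly from IOPS x IO size. A small sketch using the peak figures from the table (the helper is mine; the figures are from the requirements):

```python
# Sanity check of the peak requirement figures: bandwidth = IOPS x IO size.
# IO sizes in KiB, bandwidth in MiB/sec (1 MiB = 1024 KiB).

peak_profile = {
    # operation: (IOPS, IO size in KiB)
    "multi block reads":   (2000, 764),
    "single block reads":  (5200, 16),
    "multi block writes":  (525, 512),
    "single block writes": (650, 16),
}

total_iops, total_mb = 0, 0.0
for op, (iops, size_kb) in peak_profile.items():
    mb = iops * size_kb / 1024.0
    total_iops += iops
    total_mb += mb
    print(f"{op:20s}: {iops:5d} IOPS, {mb:7.1f} MB/sec")

print(f"total               : {total_iops} IOPS, {total_mb / 1024:.2f} GB/sec")
# -> 8375 IOPS and ~1.8 GB/sec, matching the stated peak totals.
```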
Conventional Storage Thinking

Let us look at a typical corporate storage design as a response to the requirements.

Requirements to Array & Drive Capabilities

The below would be a typical response to the requirements. However, as we shall see, implementing it as below would result in a failure.

- Net bandwidth consumed: requirement 1.8 GB/sec today, 3.6 GB/sec tomorrow; recommendation: 1 Hitachi modular array (AMS1000). Note: the AMS1000 has 8 x 4Gb front end ports for a total of 4 GB/sec of bandwidth.
- IOPS: requirement > 8375 IOPS; recommendation: 146GB, 15K RPM drives, 165 drives. Note: the specs of the 146GB, 15K RPM drive suggest the requirement can easily be met.
- Latency: requirement 5ms to 30ms.
- Capacity: requirement 10TB; recommendation: 10TB usable (RAID 10).
- Max IO size: requirement 1024K; recommendation 1024K. Note: the AMS1000 supports a 1024K IOP size.
- Cache (determines write performance): recommendation 16GB. Note: the maximum cache is 16GB.
- RAID level: recommendation RAID 10. Note: RAID 10 offers the best performance.
- Stripe width: recommendation 512K. Note: 512K is the maximum offered by the array.

Storage Subsystem Design Process - Array

What is required is a more thorough analysis of all the components of the storage subsystem, and then to fit the requirements appropriately to the solution. We start with the array. This is the most vital part of the solution and is not easily replaceable.

Storage Array - Enterprise or Modular?

Arrays come in different configurations - modular, enterprise, JBOD etc. Modular arrays are inexpensive and easy to manage; they provide good value for money. Enterprise arrays are extremely expensive and offer a lot more functionality geared towards enterprise needs, such as WAN replication, virtualization and vertical growth capabilities. As I will show later on, vertical scaling of an array is not really conducive to performance; adding more modular arrays is a cheaper and more flexible option. For this presentation, I am using the Hitachi modular series AMS1000 as an example.

Typical Modular Array (simplified)

Conventional array specs include:
- Number/speed of host ports (ports available for the host systems to connect to).
- Size of cache.
- Maximum number of drives.
- Number of RAID controllers.
- Number of backend loops for drive connectivity.

[Diagram: servers connect through a SAN switch to the array's front end ports; behind the ports sit the RAID controllers, cache, management CPU, disk controllers and drives.]

Oracle Requirements to Array Specs

Unfortunately, array specs as provided by the vendor do not allow us to match them with Oracle requirements (apart from capacity). So you need to ask your array vendor some questions that are relevant to your requirements.

Array Specs - contd. (Questions for the Array Vendor)

1. Front end ports: Are these ports full speed? What queue depth can they sustain? What is the maximum IO size that a port can accept?
2. Cache: Can we manipulate the cache reservation policy between reads and writes?
3. CPUs: How many CPUs in all?
4. Internal bandwidth: What is the bandwidth available between these components?
5. Drives: How many drives can this array sustain before consuming the entire bandwidth of the array? What are the optimal RAID configurations?

The HDS AMS1000 (some questions answered)

1. Front end ports: only 4 out of 8 are full speed, for a peak speed of 2048 MB/sec. The queue depth is 512 per port. The maximum IO size a port can accept is 1024K.
2. Cache: the cache reservation policy between reads and writes cannot be manipulated.
3./4. CPUs and internal bandwidth: the effective bandwidth between the components is 1066 MB/sec. [Diagram: one RAID controller CPU with 4 Tachyon chips on the front end and 4 on the back end; 2048 MB/sec (simplex) at the ports, 2132 MB/sec and 2.8 GB/sec on the internal paths, for an effective 1066 MB/sec end to end.]
5. Drives: how many drives the array can sustain depends on drive performance. The optimal RAID configurations are RAID 1 or RAID 10 (for RAID 5 there is not enough CPU/cache). For RAID 10, the stripe width is 64K by default, up to 512K with CPM (a licensed feature).
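The key implication of these answers is that the usable throughput of the array is bounded by its weakest internal path, not by the nominal front end port bandwidth on the datasheet. A minimal sketch using the AMS1000 figures quoted above (the helper itself is illustrative):

```python
def usable_array_bandwidth_mb(front_end_mb: float, internal_mb: float) -> float:
    """Net reads + writes cannot exceed the slower of the two paths."""
    return min(front_end_mb, internal_mb)

# 4 full speed 4Gb ports ~= 2048 MB/sec at the front end, but only
# 1066 MB/sec of effective bandwidth between the internal components:
print(usable_array_bandwidth_mb(front_end_mb=2048, internal_mb=1066))  # 1066
```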
Analyzing the HDS AMS1000

- Regardless of its internal capabilities, you cannot exceed 1066 MB/sec as net throughput (reads and writes).
- The limited cache (16GB) and the inability to manipulate the cache reservation mean that faster and smaller drives are required to complete writes in time.
- The 1066 MB/sec limit and the backend architecture restrict the number of drives that can be sustained by this array.
- The limited number of CPUs and the limited cache rule out RAID 5 as a viable option.

Matching the AMS1000 to our Requirements

- Net bandwidth consumed: requirement 1.8 GB/sec today, 3.6 GB/sec tomorrow; AMS1000 capability 1066 MB/sec theoretical, 750 MB/sec realistic; recommendation 5 arrays minimum, 8 arrays preferred (1 AMS1000 = 750 MB/sec, 5 AMS1000 = 3.6 GB/sec, 8 AMS1000 = 5.8 GB/sec).
- IOPS (> 8375) and latency (5ms to 30ms): depend on the type of IO operation, the RAID/drive performance and the drive capacity; we need to simulate the requirements against various drive and RAID configurations.
- Capacity and scalability: 10TB plus future growth.
- Max IO size: requirement 1024K; 1024K is supported on the AMS1000.
- Cache: 16GB maximum; the cache is preset at 50% reads / 50% writes.
- RAID levels: the array offers RAID 0, RAID 1, RAID 10 and RAID 5; recommendation RAID 1 and RAID 10 (not enough CPU for RAID 5).
- Stripe width: 64K, 256K and 512K available; test to determine the stripe width (beyond 64K requires an additional license feature).

HDS AMS1000 - Conclusions

- Bandwidth requirements: we would need a minimum of 5 arrays to meet today's plus the future requirement.
- The physical hard drives and the RAID configuration determine the storage capacity and the remaining performance requirements (IOPS/latency).
- Testing various combinations of drives and RAID levels will determine how the desired IOPS and latencies can be met.
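The array count follows directly from the bandwidth requirement and the realistic per-array figure quoted above. A small sketch (the helper is mine; the inputs are from the slides):

```python
import math

def arrays_needed(required_gb_per_sec: float, per_array_mb_per_sec: float) -> int:
    """Arrays needed purely from the bandwidth requirement."""
    return math.ceil(required_gb_per_sec * 1024 / per_array_mb_per_sec)

print(arrays_needed(1.8, 750))   # 3 arrays for today's 1.8 GB/sec peak
print(arrays_needed(3.6, 750))   # 5 arrays once the requirements double
```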
Storage Subsystem Design Process - The Drives

Now that we have established the array capabilities, we can move on to drive selection.

Hard Drives

Regardless of how capable your array is, the choice of drives will ultimately decide the performance - all IO eventually gets passed down to the physical hard drives. The performance characteristics (throughput, IOPS and latency) vary depending on the type of IO request and the drive busyness.

Hard Drives - FC or SATA or SAS?

The choice is limited by the selection of the array. The drive interface speed (2Gb/4Gb etc.) is not relevant, as the bottleneck is the media and not the interface. SAS is a more robust protocol than FC, with native support for dynamic failover; SAS is a switched, serial, point to point architecture, whereas FC is an arbitrated loop at the backend. The IDE equivalent of SAS is SATA, which offers larger capacities at slower speeds. For an enterprise DW with stringent IO requirements, SAS would be the ideal choice (if the array supports SAS). The faster the drives, the better the overall performance.

Hard Drives - Capacities - Is Bigger Better?

Bigger drives can store more objects, which attracts more concurrent requests and thus makes the drive busier. The 146GB, 300GB and 450GB drives all offer (supposedly) the same performance: 167 random IOPS at an 8K IO size and 73-125 MB/sec sustained. But if you compare IOPS/GB, the true picture is revealed:
- 146GB drive = 1.14 IOPS/GB
- 300GB drive = 0.55 IOPS/GB
- 450GB drive = 0.37 IOPS/GB

Performance of Drives vis-a-vis Active Surface Usage

As free space on the drive is consumed, performance starts to degrade. [Chart: random 8K IOPS of a 15K RPM drive falling steadily as active surface usage grows from 25% to 100%.] Smaller drives are a better choice for enterprise warehousing.

Hard Drive Specs

Hard drive specs from manufacturers typically include:
- Capacity - 146GB, 300GB, 450GB etc.
- Speed - 7.2K/10K/15K RPM
- Interface type/speed - SAS/FC/SATA, 2/3/4 Gb/sec
- Internal cache - 16MB
- Average latency - 2 ms
- Sustained transfer rate - 73-125 MB/sec

Oracle Requirements to Disk Specs

Unfortunately, disk specs as provided by the vendor do not allow us to match them with Oracle requirements (apart from capacity). Also, hard drives are always used in a RAID configuration in an array. So you need to test various RAID configurations and arrive at conclusions that are relevant to your requirements.

RAID, RAID Groups & LUNs

RAID is essentially a method to improve drive performance by splitting requests between multiple drives, reducing drive busyness, while providing redundancy at the same time. RAID groups are sets of disks in the array in pre-defined combinations (RAID 1, RAID 5, RAID 10 etc.). LUNs are carved out of RAID groups on the array, and host systems see the LUNs as individual disks presented by the array.

RAID Levels - RAID 1

- Reads can be serviced from either drive of the mirror, which helps reduce drive busyness.
- Minimal CPU utilization during routine and recovery operations; not cache intensive.
- Writes require 2 IOPs (overwrite the existing data on both drives).
- Since traditional RAID 1 is a 1D+1P combination, multiple such LUNs have to be combined on the host to create big volumes.

RAID Levels - RAID 5

- Reads are split across drives depending on the IO request size and stripe width, which helps reduce drive busyness, but each additional IO operation consumes bandwidth within the array.
- CPU intensive due to the parity calculation.
- Writes require 4 IOPs (retrieve data and parity into cache, update data and parity in cache, then write both back to disk) - a high write penalty, and hence cache intensive.
- High CPU overhead during recovery; the bigger the RAID group (more drives), the higher the penalty, especially during recovery.

RAID Levels - RAID 10

- Combines RAID 1 and RAID 0: the advantages of RAID 1 plus striping (scaling across multiple drives).
- Reads can be serviced from either of the mirrored drives, and are split across drives depending on the IO request size and stripe width.
- Writes require 2 IOPs (overwrite existing data).
- With a bigger stripe width, an IO request can be met within a single drive. Traditionally, modular arrays have offered only a 64K stripe width (on a single disk), which means an IO request exceeding 64K has to be split across drives. Splitting across drives means more IOPs and more backend capacity consumed (overall array plus drive busyness). Newer arrays (AMS2500) offer up to a 512K stripe width. You can also do RAID 1 on the array and stripe on the system (volume manager) to overcome the array's stripe width limitation.
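The write penalties above translate directly into back end drive IOPS. A small sketch (the helper and the comparison are mine; the penalties and the peak write load are from the slides) showing why RAID 5 is so much harder on the spindles for the same front end writes:

```python
# Back end drive IOs generated by front end writes, using the write penalties
# described above (RAID 1/10: 2 IOs per write, RAID 5: 4 IOs per write).

WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4}

def backend_write_iops(front_end_write_iops: float, raid_level: str) -> float:
    return front_end_write_iops * WRITE_PENALTY[raid_level]

# Peak write load from the requirements: 525 multiblock + 650 single block writes.
peak_writes = 525 + 650
for level in ("RAID1", "RAID5"):
    print(level, backend_write_iops(peak_writes, level))
# RAID1 -> 2350 back end IOs/sec, RAID5 -> 4700 back end IOs/sec for the same writes.
```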
Drive and RAID - Initial Conclusions

- Since the AMS1000 supports only FC/SATA drives, we will use FC drives, and test with 146GB 15K RPM drives.
- RAID 5 is not an option due to its high write penalty.
- RAID 10 on the array is not an option, as the array can offer at most a 512K stripe width. Our preference is a 1024K stripe width, so that a single 1024K multiblock IO request from Oracle can (at best) be met from a single drive.
- This leaves us with only RAID 1 on the array. We can test RAID 1 and RAID 10 (with the striping done on the system) under various conditions.

RAID Level Performance Requirements

The intent is to identify individual drive performance in a RAID configuration. This will allow us to determine the number of drives required to meet our requirements. We will simulate peak reads and writes to identify a worst case scenario.

Test Methodology to Determine Drive Performance

We will simulate Oracle traffic for 20 minutes using VxBench, on a subset (400GB) of the 5TB expected data volume. We will generate the required IOPS and measure latency and consumed bandwidth.

- Reads, multiblock IOP: asynchronous, 784K IO size, 156 IOPS (187,200 IOPs over 20 minutes).
- Reads, single block IOP: synchronous, 16K IO size, 406 IOPS (487,200 IOPs over 20 minutes).
- Writes, multiblock IOP: asynchronous, 512K IO size, 41 IOPS (49,200 IOPs over 20 minutes).
- Writes, single block IOP: synchronous, 16K IO size, 51 IOPS (61,200 IOPs over 20 minutes).

And the Results Are...

Each configuration was expected to deliver 654 IOPS and 147 MB/sec at 5ms to 30ms latency. The tests ran on a Linux host with the NOOP elevator and VxVM volumes (1MB stripe width for the striped configurations).

- RAID 1, 4 concat volumes across 4 RAID 1 LUNs (8 data drives, 68% active surface per drive, 400GB): 567 IOPS, 141 MB/sec, 46 ms latency.
- RAID 1, 8 concat volumes across 8 RAID 1 LUNs (16 data drives, 33% active surface per drive): 642 IOPS, 142 MB/sec, 15 ms latency.
- RAID 10, 2 striped volumes across 4 RAID 1 LUNs (RAID 0 on the system, RAID 1 on the array; 8 data drives, 68% active surface, 400GB): 509 IOPS, 136 MB/sec, 87 ms latency.
- RAID 10, 4 striped volumes across 8 RAID 1 LUNs (16 data drives, 33% active surface): 626 IOPS, 142 MB/sec, 25 ms latency.

Drive and RAID Conclusions

- RAID 1 on a Linux host outperforms the RAID 10 combination (RAID 0 on the system over RAID 1 on the array).
- To meet our requirements, the used surface area cannot exceed 33% of a single 146GB, 15K RPM FC drive.
- For 10TB (day 1 plus future growth), we would need 410 of the 146GB, 15K RPM drives.
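The drive count follows from the 33% usable-surface conclusion. A rough sketch of the arithmetic (the 33% figure and RAID 1 mirroring are from the slides; the calculation itself is mine and lands in the same ballpark as the deck's ~410 drives):

```python
import math

# 146GB drives, RAID 1 mirroring, 10TB of usable space required,
# and only ~33% of each drive's surface usable for performance reasons.
drive_gb = 146
usable_per_drive_gb = drive_gb * 0.33           # ~49GB really usable per drive
mirrored_pairs = math.ceil(10 * 1024 / usable_per_drive_gb)
data_drives = mirrored_pairs * 2                # RAID 1: two drives per pair

print(round(usable_per_drive_gb), data_drives)  # ~48GB per drive, ~426 drives
```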
Match Requirements to Array and Drive Capabilities

Now that we have established both the array and drive capabilities, we can finally match these to our requirements.

Requirements to Array & Drive Capabilities

- Net bandwidth consumed: requirement 1.8 GB/sec today, 3.6 GB/sec tomorrow. Typical storage design: 1 Hitachi modular array (AMS1000). Actual minimum requirement: 5 AMS1000 arrays; recommended: 8 AMS1000 arrays (1 AMS1000 = 750 MB/sec, 5 AMS1000 = 3.6 GB/sec, 8 AMS1000 = 5.8 GB/sec).
- IOPS (> 8375), latency (5ms to 30ms) and capacity (10TB): typical storage design 165 x 146GB, 15K RPM drives. Actual minimum requirement: 146GB, 15K RPM drives, 410 drives to meet performance and capacity, 450 drives including spares (410 + 40) - 90 drives/array and 2TB usable per array across 5 arrays, or 60 drives/array and 1.3TB usable per array across 8 arrays.
- Max IO size: requirement 1024K; the AMS1000 can meet the required 1024K IOP size.
- Cache (determines write performance): 16GB in either design; the maximum cache is 16GB.
- RAID level and stripe width: typical storage design RAID 10 with a 512K stripe width; actual recommendation RAID 1 (stripe width not applicable on the array), since RAID 1 on a Linux system performed better than RAID 10.

Final Thoughts

If we had followed the capacity method of allocating storage to the instance, a single AMS1000 would have been deemed sufficient. But as we discovered, we require at least 5 arrays to meet the requirements. Similarly, the initial recommendation was 165 x 146GB drives; however, we determined that a minimum of 410 drives is required to meet the performance requirements. Out of the 146GB of available capacity in each drive, only 49GB is really usable. RAID 1 outperforming RAID 10 is a surprise, but this may not be the case on all platforms - the choice of operating system, volume management and other configuration aspects influence the final outcome.

The Future is Bright

As always, low price does not equal low cost. If you design the environment appropriately, you will spend more initially, but the rewards are plentiful. Modular arrays are continuously improving, and the new AMS2500 from Hitachi has an internal bandwidth capability of 8 GB/sec (simplex), so a single AMS2500 would suffice for our needs from a bandwidth perspective. Solid state devices appear to be gaining momentum in the mainstream market, and hopefully within the next 2 years spinning hard drives will be history.

Questions?
