Wednesday, June 10, 2015

Why are we having the same storage discussion???


I feel that despite the efforts and advancements in the data center and virtualization space (specifically around storage), many IT professionals are still having a tough time coming to grips with capacity versus performance.

I mean, at the end of the day it really revolves around one thing: requirements. What are your requirements? This isn't always an easy question to answer, because those requirements generally come from the business or another department that doesn't really have insight into your data center technology solutions. More importantly, they often have no idea what their storage requirements are (at least in my experience). So instead of taking the time to discuss application performance needs, workload scalability, utilization times, backups, archives, high availability, etc., we revert to the capacity question.

Well how much storage do you need? STOP!!!

Capacity doesn't mean you're taking care of the big picture.

For example, I'm working with a customer that continues to deliver capacity. However, that capacity comes in the form of large-capacity 7.2k drives. When I throw a workload at it, I'm generating a metric ton of latency (latency spikes once we push closer to 6k CMDS/s, if we're looking at ESXTOP).

When I present my problem to the on-site IT staff, they're baffled that I'm generating that much workload. Is there a disconnect or did someone take my request for storage from a different perspective? Where they thought the issue was capacity, the issue is really around performance.

What am I talking about here? The issue centers around one area of focus: IOPS.

IOPS, or input/output operations per second, is basically a measure of how many read and write requests your storage can service each second. To reach a high number of IOPS with spinning disks you need multiple disks, and the number of IOPS each disk contributes depends on its speed.

For example, a 7.2k RPM spinning disk delivers about 75 IOPS, a 10k disk about 125, and a 15k disk about 175. (These are rough estimates, so take them with a grain of salt.)

SSDs don't use a motor or spinning platters, so rotational speed doesn't apply; because they're flash-based, reads and writes are serviced from memory cells and they generate far more IOPS.

SSD IOPS are a little more complicated to calculate, but they generally start in the thousands (they vary by the technology used in the drive and by vendor). This is one of the big reasons SSDs are so much more expensive than spinning disks. It's also why, when you go to a retailer and look at hard drives for your computer or server, the higher the RPM, the higher the price (when compared with drives of the same capacity).

Scott Lowe did an article for TechRepublic that does a much better job explaining how to calculate IOPS for spinning disks, so I'll include the relevant piece here:

IOPS calculations

Every disk in your storage system has a maximum theoretical IOPS value that is based on a formula. Disk performance -- and IOPS -- is based on three key factors:
  • Rotational speed (aka spindle speed). Measured in revolutions per minute (RPM), most disks you'll consider for enterprise storage rotate at speeds of 7,200, 10,000 or 15,000 RPM with the latter two being the most common. A higher rotational speed is associated with a higher performing disk. This value is not used directly in calculations, but it is highly important. The other two values depend heavily on the rotational speed, so I've included it for completeness.
  • Average latency. The time it takes for the sector of the disk being accessed to rotate into position under a read/write head.
  • Average seek time. The time (in ms) it takes for the hard drive's read/write head to position itself over the track being read or written. There are both read and write seek times; take the average of the two values.
To calculate the IOPS range, use this formula: Average IOPS = 1 / (average latency + average seek time), with both values expressed in seconds. Using the published millisecond figures, that's 1 / ((average latency in ms + average seek time in ms) / 1000).
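To make that formula concrete, here's a minimal Python sketch of the per-disk calculation. The latency and seek-time figures below are assumed, typical spec-sheet numbers, not measurements from any particular drive:

# Rough per-disk IOPS estimate from rotational latency and seek time.
# The drive figures below are illustrative assumptions, not measured values.

def disk_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    """Theoretical IOPS ~= 1 / (average latency + average seek time), in seconds."""
    return 1.0 / ((avg_latency_ms + avg_seek_ms) / 1000.0)

# Assumed spec-sheet numbers: rotational latency is half a revolution,
# so ~4.2 ms at 7.2k RPM, ~3.0 ms at 10k, ~2.0 ms at 15k.
drives = {
    "7.2k RPM": (4.2, 9.0),   # (avg latency ms, avg seek ms)
    "10k RPM":  (3.0, 4.5),
    "15k RPM":  (2.0, 3.5),
}

for name, (latency, seek) in drives.items():
    print(f"{name}: ~{disk_iops(latency, seek):.0f} IOPS")
# Prints roughly 76, 133, and 182 IOPS -- in line with the rule-of-thumb
# figures quoted above.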

Scott Lowe is a revered technologist and blogger/author from EMC. He wrote the Mastering vSphere books. I highly respect his work and knowledge.

When you spread these per-disk numbers across many disks, you get an aggregate IOPS figure for the group.

Now if you want to do this yourself, I recommend looking into "The Cloud Calculator"

This calculator takes the capacity, speed, disk count, and read/write percentages, factors them into various RAID groups (common ones are RAID 5, 10, and 6), and calculates total IOPS from those details.
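If you'd rather see the arithmetic than trust a web form, the Python sketch below mirrors the basic math these calculators use. The RAID write penalties and per-disk IOPS are the usual rules of thumb, so treat the output as a ballpark, not a vendor-validated number:

# A simplified version of what RAID/IOPS calculators do: take disk count,
# per-disk IOPS, read/write mix, and a RAID write penalty, and estimate the
# usable (front-end) IOPS of the group.

RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def usable_iops(disk_count: int, iops_per_disk: float,
                read_pct: float, raid_level: str) -> float:
    raw = disk_count * iops_per_disk           # aggregate back-end IOPS
    write_pct = 1.0 - read_pct
    penalty = RAID_WRITE_PENALTY[raid_level]
    # Every front-end write costs `penalty` back-end operations.
    return raw / (read_pct + write_pct * penalty)

# Example (assumed numbers): 24 x 7.2k drives (~75 IOPS each) in RAID 6,
# with a 70/30 read/write mix.
print(f"{usable_iops(24, 75, 0.70, 'RAID 6'):.0f} usable IOPS")   # ~720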

THIS IS HOW YOU SHOULD GATHER YOUR STORAGE REQUIREMENTS!!!!


However, it's not that simple, I'm afraid. You have to understand the performance needs of your environment to quantify the read and write activity. For example, a database is read-heavy if you have a lot of connections pulling information out of it (i.e. queries), while other databases see a lot of write activity (i.e. committing changes). Generally speaking, your application vendor will have some documented requirements detailing how the environment behaves and whether you need higher-end storage or not.
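To see why that read/write mix matters so much, here's a rough sketch that turns a made-up workload requirement into a spindle count. Notice how the same 4,000 IOPS target costs very different disk counts once writes and RAID penalties enter the picture (all inputs here are hypothetical rules of thumb):

# Working the problem in the other direction: given an application's stated
# workload (target IOPS and read/write mix), estimate how many spindles a
# given RAID level and disk type would need.

import math

RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def disks_required(target_iops: float, read_pct: float,
                   raid_level: str, iops_per_disk: float) -> int:
    write_pct = 1.0 - read_pct
    # Back-end IOPS the array must absorb once the write penalty is applied.
    backend = (target_iops * read_pct
               + target_iops * write_pct * RAID_WRITE_PENALTY[raid_level])
    return math.ceil(backend / iops_per_disk)

# A hypothetical write-heavy database: 4,000 IOPS at a 60/40 read/write mix.
for raid in ("RAID 10", "RAID 5"):
    for disk, iops in (("15k", 175), ("7.2k", 75)):
        print(f"{raid} on {disk} drives: {disks_required(4000, 0.60, raid, iops)} disks")
# RAID 10 on 15k drives needs ~32 disks; RAID 5 on 7.2k drives needs ~118.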

Some use cases have gone the other way: Microsoft restructured Exchange's architecture so you can use JBOD (just a bunch of disks) to deliver an e-mail solution on the cheap. Exchange doesn't need high-end storage, just a large quantity of disks running at slower speeds.

Other use cases, like data center virtualization or virtual desktops, require more than JBOD. This is where companies like EMC, NetApp, Pure Storage, Tegile, Dell, HP, etc. love to talk about their storage arrays, whether all-flash or a hybrid of flash and spinning disks, tailored to your organizational need.

Hopefully, next time you look at your infrastructure requirements you'll factor in storage performance needs and expectations before you jump straight to the capacity requirement.

Just like Austin Powers used to say... "It's not the size that matters... it's how you use it..." 


Thanks for reading and thanks for your time. 

Monday, March 23, 2015

VMUG UserCon 2015 - St. Louis

The St. Louis area VMUG UserCon event was held last week at the Hyatt Regency at the Arch in downtown St. Louis. This one day event allowed numerous vendors to come and speak about the trending topics in the IT space.

Overall, I felt this event lacked diversity within the vendor space but given the notice and changes, the VMUG leadership putting this event together did an amazing job.

The overall theme seemed to bounce between hyper-converged infrastructure and flash-based storage providers (I counted six companies offering flash-based arrays).

The two biggest nods go to SimpliVity and Nutanix, in my opinion. Both offer their own 2U platform that provides compute, network, and storage capabilities. However, SimpliVity impressed me slightly more by offering a UCS platform running on a C240 server that can be managed by UCS Manager (a later release managed by UCS Director is forthcoming).

With the cost of convergence being extremely high with big-name vendors and OEMs like EMC, Dell, NetApp, etc., companies like SimpliVity and Nutanix shine with a product that can deliver the capacity your organization needs with performance and cost in mind. The price will only drop as disk-based storage technologies (specifically flash) get cheaper over time.

All in all, I was really impressed with the event and the attendance. A VMUG leader coordinating the event told me over 400 people attended. Huge thanks go out to them and everyone else responsible for the setup and coordination of the event.

Below is some additional information about hyper-convergence (i.e. software defined data centers) from Matthew Brisse at Gartner that I found interesting and I think is a great read and guide.

Gartner Asset

  1. Define the application use cases and entry points for SDDC. Select use cases and entry points for SDDC such as self-service provisioning of IT infrastructure resources in support of cloud-based applications, improved IT and business process automation. Note: Software-defined security and policy-based orchestration can reside with a cloud management solution above the SDDC layer.
  2. Identify the abstracted infrastructure layers required by application and process use cases. Expose the layer of abstraction and virtualization requirements for storage, networking, compute and facilities components. Define infrastructure implementation requirements based on application and process pain points. Administrator-based requirements may be focused on a single data center pain point such as storage provisioning, while applications often require multiple data center technologies to be abstracted for end-to-end provisioning. Note that SDDC is an optional enablement architecture, and as such, not every component has to be abstracted or virtualized.
  3. Define abstracted/virtual infrastructure service. Define detailed data center services based on application and process requirements, not on current infrastructure capabilities. Examples of storage services can include provisioning, thick or thin logical unit number (LUN) assignments, snapshots, replication, cloning, and data deduplication or other data services. Implement services providing the greatest value to speed of execution with increased agility in support of cloud services provisioning and automation.
  4. Perform an infrastructure assessment focusing on use case requirements and data center services. Determine if infrastructure or architecture alternatives can fill gaps based on the ability to deliver abstraction, instrumentation, programmability (API), automation, policy-based management and orchestration capabilities. Hyper-converged integrated systems may be leveraged for faster time to value and increased agility. Facility-based operational technologies such as monitors and sensors should be integrated as part of workload placement based on power, temperature and other sensor metrics.
  5. Define abstracted/virtual policies. Define policies for infrastructure services and process requirements leveraging northbound and southbound APIs, policies and automation. Test and implement API interoperability extensively because the lack of SDDC maturity could see API instability impacting data center operations.
  6. Implement software-defined data center components. Implement the software-defined components based on use case. Test northbound and southbound APIs associated with the control and data planes to ensure infrastructure interoperability. Validate that policies and services can be automated within each infrastructure component. Perform IT service continuity and disaster recovery testing for each software-defined component.
  7. Integrate software-defined security. Identify and standardize on well-established processes and patterns that have to be secured throughout the entire SDDC environment. Each data center component will have infrastructure-specific security that must be orchestrated through API or scripting to ensure interoperability and workflow processes. Programmatically enforce security-based policies to ensure the workflow models are enforced across the infrastructure layers.
  8. Integrate policy-based orchestration and management. Select and implement an overarching policy-based solution to provide management and infrastructure orchestration. Policy-based orchestration and security requirements may be provided by a cloud management layer that resides above the SDDC. For example, OpenStack can help orchestrate the infrastructure by enabling a standard set of APIs and providing templates for common tasks.

Monday, December 22, 2014

A Quick Look at Software Defined Storage (SDS)

Looking at traditional virtualization practices compared to this newfangled software defined storage (or SDS), you would think we're just re-branding the same technology in a new way. Not so fast... 

Let's break it down briefly, looking at virtualization and storage specifically. Whether you're running technologies like Hyper-V or vSphere, you're taking raw storage (generally off a shared resource like a NAS or SAN) and carving it into a logical volume or unit (a LUN) or a file share (i.e. NFS); from there the hypervisor formats it so you have the flexibility of using that storage for virtual machine use, deployment, storage, etc. VMware calls these datastores; Microsoft calls them virtual hard drives. They serve the same premise: to store VMs. And that's really where virtualization storage ends, as a means to store VMs. 

Enter software defined storage. We're taking this process and essentially adding two things: governance and management. 

Take a look at the diagram below from VMware. We're taking those datastores and enhancing them by layering additional data services and control policies on top of them. This, in theory, allows us to better utilize, protect, and monitor them so our VMs are better sized, managed, and protected. Make sense?
(VMware, 2014)

Let's look a little further here and outline those features that VMware identified around SDS:

Benefits of Software-Defined Storage:

  • Faster provisioning. Provisioning of storage becomes extremely fast and simple. Infrastructure and data service do not require complex pre-configurations as the right storage service is created dynamically.
  • Granular dynamic control of service levels across heterogeneous resources. Adjusting service levels to an app is as simple as changing a policy. The underlying service delivery dynamically adjusts accordingly.
  • Simplified operations and troubleshooting. Storage virtualization eliminates the hard mapping of roles and responsibilities, replacing it with automatic enforcement and monitoring of policies.
  • Flexible resource management. All components are separable. For instance, the data plane could be replaced without disrupting the data services or the control plane.
  • Agility with control. Infrastructure teams can define classes of services by exposing policy tiers to end-users. Compliance and charge back are simplified.
  • Increased asset utilization. Composed services will be more precise and aligned on application boundaries. An application container’s storage services are also much better aligned, delivering exactly what’s needed, when it’s needed.

Some of the information here is a little sales-pitchy, but the key words to take away are flexibility, agility, and control. With a capability like this, you can really enhance how your storage is utilized. Factor in some of the other features your back-end storage architecture provides (i.e. caching, deduplication, etc.) and you have a very well-rounded, automated approach to storage provisioning and management. 
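To make the policy idea a bit more concrete, here's a toy sketch of policy-based placement in Python. This is not VMware's SPBM API or any vendor's interface; the datastore names, capability fields, and policy values are all made up purely to illustrate requesting service levels instead of picking LUNs by hand:

# A toy illustration of policy-driven placement: the consumer declares what
# the workload needs, and a bit of logic picks compliant backing storage.
# Every name and number here is hypothetical.

from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    free_gb: int
    max_iops: int
    replicated: bool

# Capabilities the back-end arrays advertise (hypothetical inventory).
datastores = [
    Datastore("bronze-nl-sas", free_gb=8000, max_iops=1500, replicated=False),
    Datastore("gold-flash",    free_gb=2000, max_iops=40000, replicated=True),
]

# The "policy" a VM owner requests -- service levels, not LUN names.
policy = {"min_iops": 5000, "replicated": True, "capacity_gb": 500}

def place(policy, datastores):
    for ds in datastores:
        if (ds.max_iops >= policy["min_iops"]
                and ds.free_gb >= policy["capacity_gb"]
                and (ds.replicated or not policy["replicated"])):
            return ds
    raise LookupError("no compliant datastore for this policy")

print(place(policy, datastores).name)   # -> gold-flash

The point isn't the code; it's the shift in responsibility: the consumer states what the workload needs and the platform decides (and keeps enforcing) where it lands.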

Thinking of it more like a data center service that's provided on demand and by request, it's clear there is plenty of potential in technologies like software defined storage. We've seen a similar approach with other technologies like database as a service and software defined networking. Imagine what cloud providers are going to do when they refine the process around software defined storage and networking and start providing software defined data centers on demand: with that level of policy management, security, and control, it will be a true publicly hosted, privately managed, logically dedicated data center in the cloud. 

For more information around SDS from VMware see the article here

Thanks for reading!!!

Monday, December 15, 2014

New job... New thoughts... New direction...

Well, it's clear that my availability isn't necessarily conducive to blogging. It's been over a year since my last post, so here's a quick recap on where I'm at now and what's changed in a year...

My previous employer was putting a lot of extra work on my plate, which kept me from blogging actively. On top of that, I've been teaching part time at a university here in St. Louis, covering IT courses in a variety of subjects (Active Directory, computing fundamentals, virtualization, etc.).

In March 2014, I took a new position with a utility company providing gas and electric services to customers in the bi-state area. Now I focus on technology planning and architecture. As a technical architect, I'm part of a team that is basically a technical liaison to the CIO of our company. Our range is broad and we possess skills in a myriad of different areas. While mine still center on infrastructure capabilities, my focus has expanded to a more truly holistic, enterprise (dare I say agile) model.

As a result, my focus isn't going to be driven so much by products and technologies anymore. While the inception of this blog was to discuss virtualization practices and technologies, the drive has evolved beyond conventional virtualization and into new avenues: software defined storage, networking, and data centers, as well as the Internet of Things, big data, and the continued improvement and evolution of cloud computing technologies (both managed/unmanaged and private/public).

Bottom line, a lot has changed in over a year, both in the technology space and in my role as a technology professional. As a result, this blog is going to get an attempted revival, and I hope people will see value in reading it...

We'll have to wait and see...


Wednesday, March 27, 2013

Six Months Later and I'm Still Here

It's been a grueling six months since my last update. Travel in the fall to work on a large-scale high performance computing (HPC) project, plus my regular workload, kept me away from my hobbies.

Tonight I actually have time to provide something of a technical update so I thought it would be good to spend a few moments and talk about some of the work that I'm doing with the new VMware View product (now called Horizon) v5.2.

VMware published a pretty straightforward document that covers the key differences between 5.1 and the new 5.2, but instead of going through each line of that document, I'd like to focus on the neat features that I've had the privilege of working with.

Some of the key features are:
  • Increased scalability
  • Increased support on mobile platforms (iOS, Android, etc.)
  • Support for Windows 8
  • HTML Access via HTML 5 enabled web browsers
  • Media services support for unified communications
  • Enhanced graphics capabilities
Click here for the full description from VMware.

Now, my particular favorite new capability is the enhanced graphics support. This doesn't just mean the ability to support both the DirectX and OpenGL APIs; it also means there are performance benefits to be gained using VMware Horizon View 5.2 over other products currently in operation.

One of the neat things we did was test virtual shared graphics acceleration (vSGA) as well as virtual dedicated graphics acceleration (vDGA). To accomplish this we used the onboard processing on our server nodes for vSGA and added dedicated GPUs to each node for vDGA, testing similar desktop workloads (HD video playback, Google Earth, CAD, etc.) on both.

For a complete list of supportability from NVIDIA click here.

So now to the meat and potatoes of our testing. We decided to run similar playback tests using a variety of viewer software applications on each test platform, with each platform consisting of a different VDI offering. Prior to our tests we had already narrowed our selection down to one of two solutions (Microsoft or VMware), so our tests ran on Server 2008 R2 RemoteFX, Server 2012 RemoteFX, View 5.1, and View 5.2.

We ran through our gauntlet of tests, and in the end (against our initial predictions) View 5.2 using vSGA outperformed all other platforms, delivering frame rates nearly 40% higher than the other platforms tested (even some using vDGA).

Now, we're still early in our testing and haven't settled on final averages for our reporting metrics, but the initial results look very promising for VMware Horizon View 5.2. My gut told me that Server 2012 RemoteFX with vDGA would blow everyone out of the water, but vSGA on 5.2 was slightly better (which I found very surprising).

For further information on Horizon View 5.2 review the Evaluators Guide here.

Thanks for reading!!!

Wednesday, August 29, 2012

It's been a while... Let's talk for a minute about... Everything.

So it's been a really long while (about 4 months) since my last blog post. My apologies for not getting updates out there more frequently...

Since late April, when I talked about what I saw at the MMS conference and what's been in the pipeline with Windows Server 2012 and Hyper-V, I've been working in the background with my company (NJVC) and Microsoft on their 2012 Rapid Deployment Program (RDP) for the IP Address Management (IPAM) service that will be offered natively in Server 2012. Additionally, I attended EMCWorld 2012, Microsoft TechEd, and VMworld 2012. The common thread across these three conferences was a focus on big data, desktop virtualization, and cloud computing. Don't get me wrong, there were still plenty of topics covering virtualization, infrastructure modernization, and data center centric services.

Focusing on big data, EMC (among other storage-centric companies) is looking at how big data requirements can be met using its product offerings. That means looking at frameworks like MapReduce and Hadoop, as well as deduplication technologies, to store these different types of expanding data sources as effectively and efficiently as possible. This drives down costs by reducing your storage footprint while still accommodating larger data requirements.

A couple of links on the topics can be found here

MapReduce - http://www.business2community.com/strategy/what-is-mapreduce-0264794

Hadoop - http://hadoop.apache.org/

EMC Big Data - http://www.emc.com/microsites/bigdata/index.htm
Moving on to desktop virtualization, VMware and Microsoft are running neck and neck trying to take market share from Citrix, which arguably leads the market in this particular service offering.

VMware View 5.1 will be released in the near future, with capabilities including persona management for physical desktops, operations management, and support for new devices and languages. Microsoft's RemoteFX in Server 2012 has been the talk of the town regarding the 3D rendering and graphics capabilities offered to desktop sessions without the need for high-end dedicated graphics processing units (GPUs).

More information on RemoteFX and VMware View 5.1 can be found here.

RemoteFX - http://technet.microsoft.com/en-us/library/ff817578(v=ws.10).aspx

VMware View 5.1 - http://blogs.vmware.com/euc/2012/05/vmware-announces-general-availability-of-view-51.html

Regarding cloud computing, it seems that every company and their brother is going to have a public/private cloud offering (including NJVC). At VMworld 2012, VMware, Savvis, Terremark, CSC, AT&T, and many more were showcasing their offerings. VMware's drew the most of my interest because it showcased the vSphere infrastructure stack alongside vCenter Operations Manager and the vCloud Director suite, allowing VMware to effectively present both private and public cloud designs and offerings based on best practices and use cases from many reputable organizations, including VMware itself, the Department of Veterans Affairs, and the Department of Energy. This interests me highly because we support a similar customer base and would more than likely design a similar offering for our customers based on comparable government requirements.

In addition to all this, VMworld had a great lab environment showcasing vSphere 5.1 and View 5.1 products and features, plus speaker sessions that highlighted cloud services, data center services, enterprise application virtualization designs and best practices, desktop virtualization, and a myriad of hardware and software products.

Lastly, I'm going to be studying for my VCAP-DCA5 exam again. Hopefully, I do better this go around compared to last time. Also, I'll do a better job at posting updates... Four months is way too long to go between updates... :P

Thanks for reading.

Friday, April 20, 2012

2012 is going to be a good year... for Microsoft...

I've been somewhat silent on the blog for the past couple of months, mainly for two reasons: 1. work has been keeping me too busy to update the site, and 2. I've been reading up on and researching this little-known gem called Windows Server "8".

After this week's MMS 2012 conference, hosted by Microsoft at the Venetian Hotel and Casino in Las Vegas, Microsoft announced that Windows Server "8" will now be known as Windows Server 2012. Not a huge shocker with regards to the name, but the feature sets and workloads have grown exponentially compared to Windows Server 2008 R2.

NJVC (my company) was fortunate to be invited to participate in the rapid deployment program (RDP), and a colleague and I were equally fortunate to travel to Seattle and see first hand what the hype was all about.

From a virtualization standpoint, Microsoft is going to do just about everything out of the box with its Enterprise and Datacenter licensing that VMware currently does with its Enterprise Plus licensing. To recap a previous post: Microsoft Datacenter licensing is about $5,000 per processor and VMware Enterprise Plus is about $4,500 per processor. However, if you want to host larger quantities of VMs (four or more), you need to buy Datacenter licensing in addition to your VMware licensing. This is where the costs always tend to sway in favor of Microsoft, but before 2012, the feature sets couldn't compete with VMware.

Server 2012 has two additional Hyper-V features that I am really excited to learn more about: Shared Nothing Live Migration (SNO Live Migration) and virtual networking.

  • SNO Live Migration is simply a process that doesn't require any shared resources to migrate a VM between hosts; the only thing needed is a communications connection between the hosts. It works pretty well in demonstrations, and I think it will go over really well with SMB-sized organizations, and even large enterprises that service numerous regional infrastructures.
  • Virtual networking here is not the same as what virtualization gurus usually call virtual networking. This feature allows VMs to share the same IP space while still communicating over the wire between resources. They've essentially taken the vSwitch model, where an adapter can act as a virtual switch, and extended that concept to the IP assigned to that adapter, using it as a shared switching source (how, I'm not sure) so that VMs spun up on that adapter can share the same IP while avoiding the collisions and other issues you would otherwise encounter when housing duplicate IPs.
With regards to the RDP, I will be working closely with Microsoft on another neat feature of Windows Server 2012: IP Address Management. Prior to 2012, you had to use third-party products to manage your IP space (i.e. BlueCat or Infoblox); with 2012, it's a native, in-the-box role that can be enabled and managed via Windows. I can't speak to many of the details given our NDA with Microsoft, but as details develop (and I can share them), I will.

So back to MMS 2012. This is also a big year for System Center. Microsoft announced the System Center 2012 launch, which includes extensions into its orchestration, runbook automation, patch management, image provisioning, deployment, operations management, and virtualization management products. These products, coupled with Hyper-V, have allowed Microsoft to be one of the very first vendors to offer a scalable (and somewhat affordable) software stack that lets companies easily provision and scale their own private cloud infrastructures.

I look forward to researching into this more and more (especially as cloud computing gains in popularity by the day in the IT world). Information on these products can be found below. Thanks for reading.

http://www.microsoft.com/en-us/server-cloud/readynow/ - Both Windows Server and System Center 2012