Many customers have been using Storage Spaces and Scale-Out File Servers with SMB3 since their initial release in Windows Server 2012 a few years back.
Every once in a while, someone asks me for details on how customers have deployed these technologies. The best source for such examples is the Microsoft Case Studies site.
The list below includes case studies of customers who deployed a solution using Storage Spaces, SMB3 file servers, or both combined:
- A1 Telekom Austria AG – IT Services Provider Boosts Network Availability, Resilience with Server Upgrade
- ABM – Facilities Solutions Firm Reduces IP Address Management Time by 90 Percent
- Avanade – Microsoft Co-Venture Uses Server Solution to Eliminate Latency for Field Force
- Chunghwa Telecom – Telecom Reduces Costs by 30 Percent, Downtime by 50 Percent with Server Upgrade
- ClearPointe – IT Firm Boosts Storage Performance Tenfold, Trims Costs with Software-Defined Storage
- Convergent Computing – IT Consultant Delivers Flexible, Hybrid Cloud Computing and 50 Percent Lower Costs
- Dimension Data – Hosting Provider Upgrades Software to Reduce Costs, Expand Choice and Services
- Edgenet – Data Services Provider Upgrades Operating System to Boost IT Management Efficiency
- Equifax – Financial Services Firm Improves File Server Availability by 40 Hours a Year
- Fasthosts – Hosting Provider Increases IT Efficiency and Agility with Software Upgrade
- Fujitsu – Fujitsu Reduces Cloud Storage Costs by 35 Percent, Creates More Competitive Offering
- Georgia Institute of Technology – Georgia Tech Students Gain Broad Access to Powerful Design Software with Virtualization
- ING Direct Australia – Online Bank Boosts IT Efficiency by 20 Percent with Upgrade, Strengthens Innovation
- iWeb – Hosting Provider Grows Revenue and Customer Base with Private Cloud Offering
- Kennards Hire – Equipment Rental Firm Improves IT Efficiency and Business Agility with Software Upgrade
- Lower Saxony Ministry of Justice – State Reduces IP Management Work by 25 Percent, Extends Virtualization with Upgrade
- Lufthansa Systems – Lufthansa Systems Uses Hybrid Cloud to Trim IT Delivery to Hours and Reduce Costs
- Microsoft – Microsoft Uses Operating System to Triple Storage Capacity, Reduce Time-to-Market
- nGenx Corporation – Cloud Service Provider Builds Cost-Effective Storage Solution to Support Business Growth
- NTTX Select – Hosting Provider Redesigns Data Center, Expands Offerings with Server Upgrade
- NTTX Select – Hosting Provider Uses Industry-Standard Storage to Slash Storage Costs by 30 Percent
- Pedcor Companies – Real Estate Firm Expects to Avoid $560,000 in IT Costs by Using New Operating System
- Studio Moderna – Retailer Deploys File Share Storage, Saves 50 Percent in Storage Costs
- Telekom Slovenije – Telecom Company Uses File-Based Storage to Reduce Costs, Not Performance
- TrinityComputer.de – IT Provider Gives SMB Customers Flexible Cloud Options, Lower Costs with Upgrade
- T-Systems International GmbH – Cloud Hosting Provider Aims to Improve Service, Lower Costs with Server Upgrade
- VaiSulWeb – Hosting Provider Quickly Offers Cost-Effective Cloud Services to Small Businesses
- Volkswagen Financial Services – Car Financer Improves Performance of Web Applications, Reduces Costs with Upgrade
- Western Health – Australian Healthcare Organization Gains Efficiency, Lower Costs with Private Cloud
- WorkITsafe – IT Provider Gives SMBs Greater Business Continuity, Lower Costs with Software Upgrade
You should also note that the recently released Cloud Platform System (CPS) is another example of a solution that uses both Storage Spaces and Scale-Out File Servers with SMB3:
- Main page for the Microsoft Cloud Platform System (CPS)
- Blog Post: Unveiling The Microsoft Cloud Platform System, powered by Dell
If you’re interested in performance data for Storage Spaces and Scale-Out File Servers, there are a few interesting white papers available:
- Microsoft Windows Server 2012 – Storage Performance and Cost Analysis
- Achieving over 1-Million IOPS from Hyper-V VMs in a Scale-Out File Server Cluster using Windows Server 2012 R2
- Microsoft’s Cloud Platform System (CPS) Delivers Best Price-to-Performance
For more information about Storage Spaces or SMB, you can check these blog posts:
@Wes
For your first scenario, you should be able to use the same SSDs for both WBC and tiering. However, the column count and number of data copies apply to all tiers in your virtual disk. To use 6 columns and 2 copies, you would need at least 12 SSDs and 12 HDDs. With 12 HDDs but only 2 SSDs, you would need to use 1 column and 2 copies.
For your second scenario, you would need to create the second HDD-only virtual disk without using tiers. You need to use the -Size parameter instead of the -StorageTierSizes parameter, and the -PhysicalDisks parameter (with only the list of HDDs) instead of the -StorageTiers parameter.
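Put together, that second virtual disk might be created like this. This is only a sketch: the pool name "test", the $hdds variable, the friendly name and the 500GB size are placeholders, and exact parameter names and availability can vary by Windows Server build, so check Get-Help New-VirtualDisk on your system.

```powershell
# Gather only the HDDs from the pool (pool name "test" is a placeholder)
$hdds = Get-StoragePool -FriendlyName test | Get-PhysicalDisk |
    Where-Object MediaType -eq HDD

# Non-tiered virtual disk: -Size instead of -StorageTierSizes,
# and an explicit HDD list instead of -StorageTiers
New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hddonly `
    -PhysicalDisks $hdds -Size 500GB -ResiliencySettingName Mirror `
    -ProvisioningType Fixed -NumberOfColumns 1 -NumberOfDataCopies 2
```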
Hi Jose, thank you for all the fantastic information. I am setting up a new Hyper-V host, and we have twelve 10K SAS drives and two 240 GB SSDs. I’d like to create a sizeable WBC and then set aside the rest of the SSD space as a separately addressed drive letter. Is this possible, or do I lose the extra space if I want to use these SSDs for caching?
This is successful after setting the SSDs to Journal usage, but I can’t use my extra SSD space:
New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName maindisk -UseMaximumSize -ResiliencySettingName mirror -ProvisioningType Fixed -NumberOfColumns 6 -NumberOfDataCopies 2 -WriteCacheSize 16GB
If I don’t set my SSDs to Journal usage and then try this:
New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName maindisk -StorageTiers ($tier_hdd) -StorageTierSizes (120GB) -ResiliencySettingName mirror -ProvisioningType Fixed -NumberOfColumns 6 -NumberOfDataCopies 2 -WriteCacheSize 16GB
it keeps saying "You must specify the size info (either the Size or UseMaximumSize parameter) or the tier info (the StorageTiers and StorageTierSizes parameters), but not both size info and tier info." I don’t understand why this error is coming up, since I am specifying StorageTiers and StorageTierSizes without any size parameter…
thanks!!
Wes
Although I successfully created a basic virtual disk on the SSD disks, every time I try to build something with the HDD tier it fails:
New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hddisk -StorageTiers @($tier_hdd) -StorageTierSizes @(11gb) -ResiliencySettingName mirror -WriteCacheSize 5gb
New-VirtualDisk : Failed to run CIM method CreateVirtualDisk on the MSFT_StoragePool (ObjectId =
"{1}SSTESTroot/Microsoft/Windows/Stor…) CIM object. CIM array cannot contain null elements.
Parameter name: value
At line:1 char:1
+ New-VirtualDisk -StoragePoolFriendlyName test -FriendlyName hddisk -StorageTiers …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (MSFT_StoragePoo…indows/Stor…):CimInstance) [New-VirtualDisk], CimJobE
xception
+ FullyQualifiedErrorId : CimJob_ArgumentException,New-VirtualDisk
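For what it’s worth, the “CIM array cannot contain null elements” message usually means the array passed to -StorageTiers contained a null element, i.e. $tier_hdd was empty in that session. A quick sanity check (the tier name tier_hdd is assumed from the earlier commands):

```powershell
# Re-fetch the HDD tier by name; a $null result here would explain the CIM error
$tier_hdd = Get-StorageTier -FriendlyName tier_hdd
if ($null -eq $tier_hdd) {
    Write-Warning "Tier 'tier_hdd' not found in this session; recreate it with New-StorageTier."
}
```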
Awesome stuff Jose.
It seems the Storage Spaces team wants some feedback as vNext no doubt grinds toward an interesting debut next year. I’d love to chat, but just in case we can’t, some brief thoughts.
I was skeptical that Storage Spaces was ready for primetime in 2012, and some nasty dedupe experiences reinforced that. But with the constant improvements through R2, R2 + Updates, and the ever-evolving cmdlets, my confidence in the storage product has grown to the point that I’m ready to deploy it in production scenarios, save for one small problem: lack of off-site vDisk replication.
I know that Server Technical Preview was going in that direction, and I hope it’s still in next year’s product. My sense is that virtual disk/pool replication to an off-site Storage Spaces array utilizing ODX, like a 3PAR, would be a game-changer. It’d be even more amazing if I could use ODX to replicate my vDisks to Azure and didn’t have to stress over .vhd vs. .vhdx.
Speaking of Azure, the 1023 GB .vhd limit stresses me, but I can say from experience that D-sized Azure VMs with 12 or more attached virtual disks work just fine with Storage Spaces, even if my brain starts to hurt thinking about 3-way mirrors in the context of geo-redundant sets. Cool stuff, especially the way one can use Azure-only cmdlets and commands to copy terabytes of Storage Spaces vDisk data across Azure regions. I don’t know much about object storage, but if it’s object storage that lets me do that, I want some more of it!
Some other general comments:
– SMB shares mapped to lettered drives on client PCs have changed from an annoying crutch to a top-10 attack vector in the age of increasingly sophisticated ransomware. This isn’t Microsoft’s fault, but if ever a bandaid needed tearing off…
– I think Microsoft should put more effort into explaining the benefits of File Classification Infrastructure; I see so many disorganized, lazy, terribly insecure department file shares out there that it keeps me up at night.
Nice to get a little insight into Microsoft’s Windows Build Team 🙂
Hi Jose, we met back in August/September 2014 in Redmond, after a Dell-supported Clustered Storage Spaces deployment where we experienced performance issues. When we met with you and some of your colleagues, you described our issue as “excess disk cache flushes.”
We’re now back to a clustered solution where we’re experiencing the same performance issues, with the same event log entries in the SMBServer Operational log as we had back in 2014. While researching the events that occurred previously, I found a paper concerning the PerfMon Cluster CSV File System Flushes counter, and I’m looking for your opinion on what an acceptable value is. We’re seeing values on four volumes of 419,000.000; 293,530.000; 118,125.000; and 804,080.
In Task Manager, the response times (ms) ramp up to 500-1000 ms over a period of time, then drop down to 10-50 ms, and then repeat. Also in Task Manager, disk performance shows 10-20 MB/s of disk writes. Performance on the VMs goes from OK to drastically poor. Can you shed any light on this? What would be acceptable values? We’ll be submitting a ticket with MS tomorrow.
Thanks.
Dave