Featured Post

YouTube and link library for S2D.dk

2020/08/25

DataON Azure Stack HCI - Public Preview

 Azure Stack HCI - Public Preview - Installation and Troubleshooting series with DataON


*** Disclaimer ***
s2d.dk is not responsible for any errors, or for the results obtained from the use of this information on s2d.dk. All information on this site is provided as "draft notes" and "as is", with no guarantee of completeness, accuracy, timeliness, or of the results obtained from the use of this information. Always test in a lab setup before using any of this information in a production environment.
For any reference links to other websites, we encourage you to read the privacy statements of those third-party websites.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
***

Azure Stack HCI - Public Preview - Installation and Troubleshooting series with DataON

DataON Azure Stack HCI
  • DataON Hosts
  • Intel NVMe
  • Mellanox Network

Part 1

  • Setup of a test domain and Windows Admin Center
  • Installation of the physical DataON hosts with the Azure Stack HCI Public Preview
  • Configuration of the network with one dual-port Mellanox ConnectX-4 adapter for both storage and guest traffic
  • Setup of VMs for a performance test with DiskSpd.exe
  • See the impact on the host CPUs of the storage, network, and guest workloads
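A sketch of the kind of DiskSpd run used for such a test. The parameters, file size, and path are illustrative examples, not the exact ones from the video:

```powershell
# Hypothetical DiskSpd run inside one of the test VMs:
# 8K blocks, random I/O (-r), 30% writes (-w30), 4 threads (-t4),
# 32 outstanding I/Os per thread (-o32), 60 seconds (-d60),
# caching disabled (-Sh), latency statistics (-L), 10 GB test file (-c10G).
.\DiskSpd.exe -b8K -d60 -t4 -o32 -r -w30 -Sh -L -c10G D:\diskspd-test.dat
```

While the test runs, watching the host CPUs (for example in Task Manager or Performance Monitor on the host) shows the combined cost of the storage, network, and guest workload.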

(Video will come one day... when I have time to finish the editing)

Part 2

Configuration of the network with two dual-port Mellanox ConnectX-4 adapters. The storage adapters are direct-connected, and the guest traffic uses a SET switch connected to a Mellanox SN2100 physical switch.
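A minimal PowerShell sketch of this layout. The video does it through WAC, and the adapter names and IP addresses below are placeholders for your own lab:

```powershell
# Guest traffic: a SET switch over the two ports of the second ConnectX-4,
# uplinked to the Mellanox SN2100. Management uses its own pNIC in the video,
# so no management vNIC is created on the switch.
New-VMSwitch -Name "SETswitch" -NetAdapterName "pNIC3","pNIC4" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Storage: the first ConnectX-4's ports stay outside any switch and are
# direct-connected between the hosts, each port with a static IP on its own subnet.
New-NetIPAddress -InterfaceAlias "pNIC1" -IPAddress 172.16.1.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "pNIC2" -IPAddress 172.16.2.11 -PrefixLength 24
```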


Notes and time agenda for the video:
00:20 Agenda: create one virtual switch for compute only and use direct connections for storage
00:45 Start cleaning up the previous setup
00:50 Delete all the VMs (they were exported before the recording was started)
01:20 Delete all the vDisks
01:50 Disable-ClusterS2D (the "destroy cluster" part is NOT included in this video)
02:10 Clear-ClusterNode performed on all nodes
02:44 Start "Create new" > "Server cluster" from WAC
02:56 WAC step 1.1 Check the prerequisites
03:05 WAC step 1.2 Add servers
04:35 WAC step 2.1 Verify network adapters (remove existing switches from the last demo)
04:56 WAC step 2.2 Select management adapter (use a 10 Gbps pNIC; only a 1 Gbps connection in the demo)
05:24 WAC step 2.3 Define networks (add the name, IP, subnet, and VLAN for the storage pNICs)
06:20 WAC step 2.4 Virtual switch
07:00 WAC step 3.1 Validate cluster
07:30 WAC step 3.2 Create cluster
08:30 WAC step 4.1 Clean drives
09:18 WAC step 4.4 Enable Storage Spaces Direct
More to come in the next videos about the setup of vDisks and performance.
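The cleanup steps from the agenda above correspond roughly to these PowerShell commands. This is a hedged outline of what the video shows, not a script to run blindly:

```powershell
# 00:50 Delete all the VMs (they were exported beforehand)
Get-VM | Remove-VM -Force

# 01:20 Delete all the vDisks
Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false

# 01:50 Disable S2D (Disable-ClusterS2D is the shorter alias)
Disable-ClusterStorageSpacesDirect

# 02:10 Clear the stale cluster configuration - run on every node
# (the "destroy cluster" step in between is not shown in the video)
Clear-ClusterNode -Force
```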

Part 3

More to come

2020/08/02

Rebuild Storage Spaces Direct (S2D) SOFS Cluster

Rebuild Storage Spaces Direct S2D and Scale-Out File Server (SOFS) - Troubleshooting series


Rebuild a 4-node Storage Spaces Direct (S2D) SOFS cluster:
  • Reconfigure the network and use Validate-DCB to verify the RoCE setup of DCB, PFC, and ETS
  • Enable S2D again on the "new" cluster
  • Reuse/add the "old" storage pool back to the "new" cluster
  • Add the "old" vDisks and share them again with the SOFS role

If it works, I get my VMs back... no need to restore from backup. The "old" cluster was deleted, and one host was completely reinstalled. Clear-ClusterNode was used to clear the cluster configuration from a node; this ensures that the failover cluster configuration has been completely removed. Sit back and enjoy 1 hour and 12 minutes of planned and unplanned challenges: troubleshooting with Validate-DCB, and using SCVMM/PowerShell to configure the cluster network.
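In PowerShell terms, the rebuild path looks roughly like this. The video does most of it through Failover Cluster Manager and WAC, so treat these cmdlets as approximate equivalents, with the SOFS name taken from the video:

```powershell
# Wipe the stale cluster configuration from the reinstalled node
Clear-ClusterNode -Force

# After the new cluster is created: re-enable S2D. If the surviving
# storage pool is intact, it can be reused instead of creating a new one.
Enable-ClusterStorageSpacesDirect

# Reattach the "old" vDisks in the new cluster, then recreate the SOFS role
Get-VirtualDisk | Connect-VirtualDisk
Add-ClusterScaleOutFileServerRole -Name SOFS03
```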



If you don't have time to see it all (seeing it all is recommended; it is fun to see me get into trouble), I have created a time agenda that can help you jump around in the video:
0:03:08 See the old disk pool info on one host in Disk Management
0:04:05 SCVMM: add the host to SCVMM
0:05:05 SCVMM: add the logical switch to the host
0:06:43 See the vNICs being created on the host from SCVMM
0:08:14 See the vNICs for storage
0:09:02 See the configuration of DCB, PFC, and ETS (PowerShell)
0:13:56 The new cluster doesn't use the same VLAN as the old one; this will give problems later
0:14:10 Use the wrong vNIC name when configuring the MTU (see whether Validate-DCB detects the mistake at 0:24:54)
0:16:44 See how I make a mistake: add the wrong subnet mask for my management vNIC...
0:21:59 Validate-DCB: installation of the module
0:22:55 Validate-DCB: run for the first time...
0:24:54 Validate-DCB shows the MTU error
0:26:54 Validate-DCB on all hosts (changed the script in Notepad)
0:28:18 Validate-DCB: use the wrong policy name for SMB
0:29:38 Validate-DCB: NetQosPolicy name changed
0:31:54 Validate-DCB: ETS traffic class missing
0:34:00 Validate-DCB okay
0:37:05 Found the subnet error on the management vNIC
0:40:01 Cluster wizard failed the first time
0:41:32 Cluster wizard failed the second time
0:42:02 Cluster wizard failed again
0:42:56 Cluster wizard started again; this time it works...
0:45:07 Cluster created (back in the game)
0:46:10 Enable Storage Spaces Direct (S2D) again (fingers crossed that it accepts the old pool)
0:48:08 Add the "old" storage pool back to the "new" cluster
0:48:26 Add the "old" disks back to the "new" cluster (vDisk 1 to 3)
0:49:21 Rename the vDisks after adding them to the cluster
0:51:14 Add the SOFS03 role
0:55:42 SOFS role created for "SOFS03"
0:56:09 Delegation of control to the CNO
0:56:53 Add a file share for vDisk1
0:58:00 Add a file share for vDisk2
0:59:10 Add a file share for vDisk3
1:01:35 SCVMM: remove the old SOFS provider from SCVMM
1:01:43 SCVMM: add the rebuilt SOFS
1:05:04 SCVMM: add the vDisk access control
1:06:31 SCVMM: missing the "File Share managed by Virtual Machine Manager"
1:07:02 SCVMM: add the file share storage for vDisk1
1:08:58 SCVMM: repair the vDisk2 file share storage connection
1:09:06 SCVMM: add the file share storage for vDisk3
1:09:28 SCVMM: file share storage for all vDisks is now green
1:09:50 Explain the last problems (VLAN and SOFS client access)
1:11:00 Start the first VM

*** Links:
DCB, PFC and ETS Configuration for RDMA/RoCE:
https://www.s2d.dk/2019/12/dcb-configuration.html
Validator for RDMA Configuration and Best Practices

Microsoft Docs version of the blog is now released: Validate an Azure Stack HCI cluster
https://docs.microsoft.com/en-us/azure-stack/hci/deploy/validate
Validate-DCB Disconnected installation
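For reference, the DCB, PFC, and ETS configuration that Validate-DCB checks (seen at 0:09:02 in the video) typically boils down to something like the following for RoCE with SMB Direct. These are the common default values (priority 3, 50% bandwidth reservation), not necessarily the exact ones from my lab, and the adapter names are examples:

```powershell
# Install the validation module from the PowerShell Gallery
Install-Module Validate-DCB -Force

# Tag SMB traffic (TCP/445) with priority 3 and make that priority lossless
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class with ETS
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB on the storage pNICs (names are examples)
Enable-NetAdapterQos -Name "pNIC1","pNIC2"
```

Run this consistently on every node; mismatches between hosts (or against the physical switch) are exactly the kind of error Validate-DCB is there to catch.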