Lenovo End-to-End NVMe over Fibre Channel




Abstract


This paper describes the key components required for building an end-to-end NVMe over FC deployment with Lenovo servers, Lenovo Storage, and Lenovo Fibre Channel networking from Brocade to help solve the bottleneck some businesses may be feeling with their existing storage network. Lenovo is the first in the industry to ship end-to-end NVMe over FC.

Change History

Changes in the December 27 update:

  • Added the latest Lenovo ThinkSystem offerings

Introduction


Fibre Channel is the most trusted and widely deployed purpose-built network infrastructure for storage. Decades of use supporting mission-critical applications have proven that Fibre Channel has the reliability, scalability, and performance to handle evolving, demanding storage applications.

Lenovo can provide end-to-end Fibre Channel solutions that take advantage of new All Flash Arrays (AFAs) that utilize the latest 32/64Gb Fibre Channel technologies and features including Non-Volatile Memory Express over Fibre Channel (NVMe over FC) available with the Lenovo ThinkSystem DM Series All Flash Storage Arrays, ThinkSystem DB Series FC SAN Switches and Directors from Brocade, and Emulex Host Bus Adapters.

Business problem

Data storage is consistently getting faster, delivering improved economics for data centers of all sizes. Flash memory-based storage is a key technology that has significantly increased the performance of storage systems. Solid state disks (SSDs) are now so fast that the SCSI I/O interface has become the bottleneck.

Solution

The solution is based on a modern storage network capable of adapting to the requirements of NVMe-based arrays. There are critical capabilities required to support next-generation flash that every customer needs to keep in mind:

  • First, it has to keep running no matter what. With more data moving faster, any slowdowns or disruptions in the network could be catastrophic.
  • Next, you need a network that’s designed for flash and ready for NVMe. This means support for low latency, high speed and high bandwidth.
  • Then, it needs to scale and be able to adapt to the business. For many enterprises this means petabytes of storage and thousands of servers and storage systems.
  • Finally, it has to be secure to mitigate the risks of breaches. This requires isolation and managed access for peace of mind.

NVMe over Fibre Channel is a solution that is defined by two standards: NVMe-oF and FC-NVMe. NVMe-oF is a specification from the NVM Express organization that is transport agnostic, and FC-NVMe is an INCITS T11 standard. These two standards define how NVMe leverages Fibre Channel. NVMe over Fibre Channel was designed to be backward compatible with the existing Fibre Channel technology, supporting both the traditional SCSI protocol and the new NVMe protocol using the same hardware adapters, Fibre Channel switches, and Enterprise AFAs. There is no need to 'rip and replace' the SAN infrastructure with NVMe over Fibre Channel.
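
Because NVMe/FC namespaces surface through the host's standard NVMe stack, one quick way to confirm that a Linux host is actually using the Fibre Channel transport (rather than SCSI FCP) is to inspect the transport reported for each NVMe subsystem. The following is a minimal illustrative sketch, not part of the Lenovo solution; it assumes the nvme-cli package is installed, and the JSON layout of nvme list-subsys varies between nvme-cli releases.

```python
#!/usr/bin/env python3
"""Sketch: report which NVMe subsystems a Linux host reaches over Fibre Channel.

Assumes the nvme-cli package is installed. The JSON layout of
"nvme list-subsys" differs between nvme-cli releases, so keys are
matched loosely and missing fields are tolerated.
"""
import json
import subprocess


def nvme_subsystems():
    # "nvme list-subsys -o json" prints every discovered subsystem and its paths.
    out = subprocess.run(["nvme", "list-subsys", "-o", "json"],
                         capture_output=True, text=True, check=True).stdout
    data = json.loads(out)
    if isinstance(data, dict):
        # Older builds wrap everything in {"Subsystems": [...]}.
        data = data.get("Subsystems") or data.get("subsystems") or []
    flat = []
    for entry in data:
        # Newer builds nest subsystems under per-host records; flatten if so.
        flat.extend(entry.get("Subsystems") or entry.get("subsystems") or [entry])
    return flat


for subsys in nvme_subsystems():
    name = subsys.get("NQN") or subsys.get("nqn") or subsys.get("Name", "?")
    for path in subsys.get("Paths") or subsys.get("paths") or []:
        transport = (path.get("Transport") or path.get("transport") or "").lower()
        label = "NVMe/FC" if transport == "fc" else (transport or "unknown")
        print(f"{name}: path {path.get('Name') or path.get('name', '?')} "
              f"transport={label} state={path.get('State') or path.get('state', '')}")
```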

An NVMe all-flash array connected to a storage network will help eliminate the bottleneck and deliver more value back to the business. This scalable storage network will deliver reliable flash that performs at the speed of memory.

With this newfound performance:

  • Critical applications will accelerate transactions and lead to better user experiences
  • Databases will increase the number of queries they support, leading to faster decisions and results
  • VM farms will be more efficient with higher VM densities per server, reducing infrastructure costs and simplifying IT
  • More types of workloads can be consolidated on hypervisors due to the improved storage performance

Benefits

The benefits of NVMe over FC include the following:

  • IOPS: Experience a 2.1x increase in IOPS by simply moving from the traditional SCSI FCP command set to the more efficient NVMe/FC command set.
  • Latency: Achieve up to 52% lower latency using NVMe over FC compared to traditional SCSI FCP, depending on the workload scenario.
  • Throughput: Accomplish up to a 2.1x throughput improvement over traditional SCSI FCP when using NVMe/FC.
  • Increased OLTP performance: Microsoft SQL Server 2019 for Linux delivers up to 2.7x more transactions/minute.
  • Increased OLTP performance for Oracle 19c databases: up to 2.6x more transactions/minute and up to 60% better CPU efficiency.

A comparison of performance between Fibre Channel (FCP) and NVMe over FC is shown in the following figure.


Figure 1. DM7000F Performance

The following figure shows the increased OLTP performance for Microsoft SQL Server 2019 for Linux, with up to 2.7x more transactions per minute.


Figure 2. Microsoft SQL Performance (*Emulex ECD Labs tested 9/4/2020, Microsoft SQL 2019 for Linux, RHEL 8.1 NVMe/FC, Lenovo ThinkSystem SR650 with 2x Intel Xeon 8280 Scalable Processors, Emulex LPe35002, Lenovo ThinkSystem DB620S, Lenovo DM7100F, HammerDB TPC-C Transactions per minute. 1:10 mem/dataset ratio, XFS filesystem).

The following figure shows the increased OLTP performance for Oracle 19c databases, with up to 2.6x more transactions per minute and up to 60% better CPU efficiency.


Figure 3. Oracle Performance (*Emulex ECD Labs tested 8/24/2020, Oracle 19c, RHEL 8.1 NVMe/FC, Lenovo ThinkSystem SR650 with 2x Intel Xeon 8280 Scalable Processors, Emulex LPe35002, Lenovo ThinkSystem DB620S, Lenovo DM7100F, HammerDB TPC-C Transactions per minute. 1:10 mem/dataset ratio, 8k XFS, Direct/Async IO).

Deployment Scenarios

NVMe over Fibre Channel can enhance existing SAN workloads. Enterprise applications such as Oracle, SAP, Microsoft SQL Server and others can immediately take advantage of NVMe/FC performance benefits.

NVMe over Fibre Channel can also enable new SAN workload scenarios. Big data analytics, Internet of Things (IoT) and AI/deep learning will all benefit from the faster performance and lower latency of NVMe over FC.

Lenovo Solution Components

The Lenovo components in the NVMe over FC solution are as follows.

Table 1. Servers
ThinkSystem Server | Machine Types
SR630 | 7X01, 7X02
SR635 | 7Y98, 7Y99
SR645 | 7D2Y, 7D2X
SR650 | 7X05, 7X06
SR655 | 7Y00, 7Z01
SR665 | 7D2W, 7D2V
SR850 | 7X18, 7X19
SR850P | 7D2F, 2D2G
SR850 V2 | 7D31, 7D32, 7D33
SR860 | 7X69, 7X70
SR860 V2 | 7Z59, 7Z60
SR950 | 7X11, 7X12, 7X13
SD530 | 7X21

Host Bus Adapters (apply to all server rows above):
  • 4XC7A08251 - ThinkSystem Emulex LPe35002 32Gb 2-port PCIe Fibre Channel Adapter (12.6.x.x or later)
  • 4XC7A08250 - ThinkSystem Emulex LPe35000 32Gb 1-port PCIe Fibre Channel Adapter (12.6.x.x or later)
  • 7ZT7A00519 - Emulex LPe32002-M2-L PCIe 32Gb 2-Port SFP+ Fibre Channel Adapter (12.6.x.x or later)
  • 7ZT7A00517 - Emulex LPe32000-M2-L PCIe 32Gb 1-Port SFP+ Fibre Channel Adapter (12.6.x.x or later)
  • 01CV840 - Emulex LPe31002-M6-L PCIe 16Gb 2-Port SFP+ Fibre Channel Adapter (12.6.x.x or later)
  • 01CV830 - Emulex LPe31000-M6-L PCIe 16Gb 1-Port SFP+ Fibre Channel Adapter (12.6.x.x or later)

Host Operating Systems (apply to all server rows above):
  • RHEL 8.0 and later
  • RHEL 7.7 and later
  • SLES 15 and later
  • SLES 12 SP4 and later
  • Windows Server 2016 and later
  • Windows Server 2019 and later
  • ESXi 7.0 and later
Table 2. Networking
Description | Part Numbers | Software
Lenovo B6505 24-port 16Gb FC SAN Switch | 3873AR5, 3873HC5, 3873HC8, 3873ER1 | Fabric OS 8.1.0b and later
Lenovo B6510 48-port 16Gb FC SAN Switch | 3873BR3, 3873HC6, 3873HC9, 3873IR1 | Fabric OS 8.1.0b and later
Lenovo ThinkSystem DB610S FC SAN Switch | 6559D1Y, 6559D2Y, 6559D3Y, 6559HC1, 6559HC2, 6559HC3, 6559HC4, 6559HC5, 6559HC6, 6559HC7, 6559F1A, 6559F2A, 6559F3A, 6559F4A | Fabric OS 8.1.0b and later
Lenovo ThinkSystem DB620S FC SAN Switch | 6415HC1, 6415HC2, 6415HC3, 6415HC4, 6415HC5, 6415HC6, 6415HC7, 6415HC8, 64615HC9, 6415G11, 6415G2A, 6415G3A, 6515H11, 6415H2A, 6415J1A, 6415L1A, 6415L2A, 6415L3A | Fabric OS 8.1.0b and later
Lenovo ThinkSystem DB630S FC SAN Switch | 7D1S-CTO1WW, 7D1S-CTO2WW, 7D1S-CTO3WW, 7D1S-CTO5WW, 7D1S-CTO6WW, 7D1SA001WW, 7D1SA002WW, 7D1SA003WW, 7D1SA004WW, 7D1SA005WW | Fabric OS 8.1.0b and later
Lenovo ThinkSystem DB720S FC SAN Switch | 7D5JCTO1WW, 7D5JCTO2WW, 7D5JCTO3WW, 7D5JCTO4WW | Fabric OS 9.0 and later
Lenovo ThinkSystem DB400D 4-slot FC SAN Director | 6684B2A, 6684D2A, 6684HC1, 6684HC2 | Fabric OS 8.1.0b and later
Lenovo ThinkSystem DB800D 8-slot FC SAN Director | 6682B1A, 6682D1A, 6682HC2 | Fabric OS 8.1.0b and later
Table 3. Storage Array
Description | Part Number | Adapter | Software
Lenovo ThinkSystem DM5100F | 7D3KCTO1WW | Lenovo ThinkSystem DM Series HIC, 16/32Gb FC, 4-ports (4C57A67133) | ONTAP 9.8 and later
Lenovo ThinkSystem DM7000F Unified Flash Storage Array | 7Y40CTO1WW | Emulex 32Gb Host Bus Adapter (Option PN: 4XC7A14396) | ONTAP 9.4 and later
Lenovo ThinkSystem DM7100F Unified All Flash Storage Array | 7D25CTO1WW | Lenovo ThinkSystem DM Series 32Gb 4 port Fibre Channel Card (4XC7A38326) | ONTAP 9.8 and later

The following figure shows an example of the components.


Figure 4. Lenovo End to End NVMe over FC
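
As a quick sanity check against the adapter requirements in Table 1, the installed Emulex lpfc driver level and its NVMe/FC enablement can be read from a Linux host. The sketch below is only an illustration, not a Lenovo tool: it assumes the in-box lpfc driver, and the module parameter shown (lpfc_enable_fc4_type, where a value of 3 enables both FCP and NVMe per the Emulex NVMe over FC documentation) may differ or be absent depending on the driver release.

```python
#!/usr/bin/env python3
"""Sketch: sanity-check the Emulex lpfc driver level and NVMe/FC enablement on Linux.

Illustrative only; sysfs paths and module parameters can differ between
driver releases and distributions.
"""
import subprocess
from pathlib import Path


def lpfc_version():
    # "modinfo lpfc" prints a "version:" line for the installed lpfc module.
    out = subprocess.run(["modinfo", "lpfc"], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("version:"):
            return line.split(":", 1)[1].strip()
    return "unknown"


def nvme_fc4_setting():
    # lpfc_enable_fc4_type=3 means both SCSI FCP and NVMe are enabled on the HBA;
    # the parameter may not exist on older driver builds, hence the defensive read.
    param = Path("/sys/module/lpfc/parameters/lpfc_enable_fc4_type")
    if not param.exists():
        return "parameter not present (older driver?)"
    return param.read_text().strip()


if __name__ == "__main__":
    print("lpfc driver version:", lpfc_version())
    print("lpfc_enable_fc4_type:", nvme_fc4_setting())
```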

Additional Information

For more information, see these web resources:

  • Video on the Lenovo ThinkSystem End-to-End NVMe over Fibre Channel Solution
    https://youtu.be/fwxWGAhXXmY
  • Infographic on the Lenovo ThinkSystem End-to-End NVMe over Fibre Channel Solution
    https://static.lenovo.com/ww/docs/lenovo-thinksystem-dm-series-nvme-infographic.pdf
  • ThinkSystem DM Series Storage product web page
    https://www.lenovo.com/us/en/data-center/storage-area-network/thinksystem-dm-series/c/thinksystem-dm-series
  • Lenovo ThinkSystem DM5100F Unified Flash Storage Array product guide
    https://lenovopress.com/lp1365-thinksystem-dm5100f-unified-flash-storage-array
  • Lenovo ThinkSystem DM7000F Unified Flash Storage Array product guide
    https://lenovopress.com/lp0912-lenovo-thinksystem-dm7000f-unified-flash-storage-array
  • Lenovo ThinkSystem DM7100F Unified All Flash Storage Array product guide
    https://lenovopress.com/lp1271-thinksystem-dm7100f-unified-all-flash-storage-array
  • ThinkSystem DM Series All-Flash Array datasheet
    https://lenovopress.com/ds0047-lenovo-dm-series-all-flash-array
  • Emulex Single and Dual Port 32Gb Fibre Channel Adapters product guide
    https://lenovopress.com/lp0692-emulex-32gb-fibre-channel-adapters-for-thinksystem
  • ThinkSystem SAN Switches product guides
    https://lenovopress.com/storage/switches/rack
  • Emulex NVMe over Fibre Channel User’s Guide
    https://docs.broadcom.com/docs/12380302


Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
ThinkSystem

The following terms are trademarks of other companies:

Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Microsoft®, SQL Server®, Windows Server®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.

TPC and TPC-C are trademarks of Transaction Processing Performance Council.

Other company, product, or service names may be trademarks or service marks of others.

Introduction

Such adapter cards pass signals from your M.2 NVMe device right through to your motherboard's PCIe slot, with no speed degradation. You can see these simple wire traces in the close-look video featured below, so no special drivers are needed. Modern PCs and servers simply 'see' the NVMe drive, whether it's in a motherboard socket or installed in any of these adapters. A nice perk: if your system can boot from NVMe, it can likely boot just fine from any of the M.2 NVMe devices mounted in any of these cards. Note, these items were all purchased, and some of the shopping links below are affiliate links; see the detailed disclaimer and disclosure below every article.
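
One easy way to see that for yourself on Linux is to enumerate the NVMe controllers the kernel has registered; a drive in one of these passive adapters shows up exactly like one in an onboard M.2 socket, only at a different PCI address. A minimal sketch (the sysfs attribute names used are standard, though not every kernel version exposes all of them):

```python
#!/usr/bin/env python3
"""Sketch: list NVMe controllers as the Linux kernel sees them.

A drive sitting in a passive PCIe adapter shows up here exactly like one
in an onboard M.2 socket; only the PCI address differs.
"""
from pathlib import Path


def read(path):
    try:
        return path.read_text().strip()
    except OSError:
        return "?"


for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read(ctrl / "model")
    serial = read(ctrl / "serial")
    # The controller's device symlink resolves to its PCI function (e.g. 0000:02:00.0).
    pci_addr = (ctrl / "device").resolve().name
    print(f"{ctrl.name}: {model} (SN {serial}) at PCI {pci_addr}")
```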


EZDIY M.2 SSD to PCIe 3.0 x4 Adapter

  • PCI Express M.2 SSD NGFF PCIe Card to PCIe 3.0 x4 Adapter (Support M.2 PCIe 22110, 2280, 2260, 2242)
  • Available at Amazon.

Simple and affordable (around $22 USD) way to add M.2 NVMe to your PCIe-equipped system, with best speeds if your slot is PCIe 3.0 x4. Because it's longer than the Lycom DT-120 I wrote about a year ago, it can handle not just the common 2280 length, but also those supercapacitor-equipped enterprise 22110-length drives. You know, the ones you might need for those future vSAN tests you're pondering.
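
If you want to confirm a drive in this adapter actually negotiated PCIe 3.0 x4 rather than falling back to fewer lanes or a lower generation, the negotiated link can be read from sysfs on Linux. A small sketch, assuming standard PCI sysfs attributes (very old kernels may not expose them):

```python
#!/usr/bin/env python3
"""Sketch: report the negotiated PCIe link for each NVMe controller on Linux."""
from pathlib import Path


def attr(dev, name):
    p = dev / name
    return p.read_text().strip() if p.exists() else "n/a"


for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()            # the controller's PCI function
    speed = attr(pci_dev, "current_link_speed")      # e.g. "8.0 GT/s PCIe" for Gen3
    width = attr(pci_dev, "current_link_width")      # e.g. "4" for x4
    max_speed = attr(pci_dev, "max_link_speed")
    max_width = attr(pci_dev, "max_link_width")
    print(f"{ctrl.name}: negotiated {speed} x{width} "
          f"(device max {max_speed} x{max_width})")
```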

In the video, you may notice I've covered up the too-bright LED on this card, since I sometimes run the system with the lid off at demonstrations and VMUG events. The fix was simple: LightDims Original Strength LED covers.

The Good

Simple, good-looking, and affordable, and it includes full-height and half-height backplates.

The Not So Good

That blue LED is way too bright, but that's easily fixed.

Note that the new Samsung 960 PRO/EVO (that one TinkerTry visitor already has!) is said to run cooler under load than the 950 PRO, partly because of the new Polaris controller, and partly because of the copper heat-spreader layer under the sticker, which spreads the heat out better. The idea here is to prevent thermal throttling whenever possible, and even when throttled, it's still way faster than any 2.5" SATA3 SSD. Simply having your M.2 storage up off the motherboard can mean that this affordable card lets you run cooler longer.

Angelbird Wings PX1 PCIe x4 M.2 Adapter

  • Available at Amazon and Newegg.
  • Product Page and Specs.
  • User Manual already references Samsung 960 PRO/EVO compatibility.

Haven't fully tested this yet, as the need for supplemental cooling for my beloved and (formerly) world's-fastest consumer SSD arises only rarely, such as when I want to run extended-duration benchmarks without cranking the chassis fan to max. I don't benchmark to show off, I benchmark to determine whether I'm getting roughly the expected performance from my investment. This adapter does require applying a special adhesive strip to your NVMe drive, thermally connecting it to the large aluminum passive cooling surface.

But people love this thing, and the construction and look sure seem to be of high quality; have a look at the video below.

The Good

Looks good, great construction quality, haven't fully tested.

The Not So Good

No half-height backplate included.

Amfeltec Squid PCI Express Gen3 Carrier Board for 4 M.2 SSD modules

  • SKU-086-34 SQUID PCIe Gen 3 Carrier Board for 4 M.2 SSD modules (M.2 key M)
    (full or half-height bracket; x16 or x8 PCI Express Adapter)
  • Available at the Amfeltec site, which has you contact Amfeltec sales via email to get the latest pricing.
  • Active cooling fan provides laminar airflow across all 4 slots, at the cost of a ~16dB noise increase, but that's at an 8' distance with no PC cover in place.

Now this is pretty spectacular. A way to get 4 M.2 NVMe drives into one PCIe 3.0 x16 slot. Do the math: each Samsung 960 PRO/EVO likes PCIe 3.0 x4, and we have x16 in those longer slots, such as those found in the Xeon D. No, it's not RAID, or a RAID controller; it's just a way for me personally to use my 128GB Samsung SM951 alongside my Samsung 950 PRO 512GB, and soon my Samsung 960 EVO 1TB.
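
The math works out neatly: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, roughly 985 MB/s of raw bandwidth per lane, so an x16 slot has exactly enough lanes to feed four x4 drives at full speed. A tiny worked example (theoretical ceilings only; protocol overhead and the drives themselves will lower real-world numbers):

```python
# Rough PCIe bandwidth arithmetic for a 4 x M.2 carrier in an x16 slot.
GT_PER_S = 8.0                                  # PCIe 3.0 raw rate per lane
ENCODING = 128 / 130                            # 128b/130b line encoding
MB_PER_LANE = GT_PER_S * ENCODING * 1000 / 8    # ~985 MB/s per lane

lanes_per_drive = 4
drives = 4
slot_lanes = 16

per_drive = lanes_per_drive * MB_PER_LANE       # ~3.9 GB/s per x4 drive
needed = drives * lanes_per_drive               # total lanes needed

print(f"~{MB_PER_LANE:.0f} MB/s per lane, ~{per_drive/1000:.1f} GB/s per x4 drive")
print(f"{drives} drives x {lanes_per_drive} lanes = {needed} lanes "
      f"-> {'fits' if needed <= slot_lanes else 'exceeds'} an x{slot_lanes} slot")
```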

They also make more affordable 2-slot M.2 and Gen 2 versions; see the whole product line, Squid PCI Express Carrier Boards™ for M.2 SSD modules. But if you want to be able to use several more M.2 devices over time, this device should serve you well for years to come. Note, if your motherboard supports Intel RST/RSTe, it's often for SATA only, so don't assume that means you can RAID M.2 NVMe. Check your motherboard or system documentation.

The Good

  • initial tests, seen near the end of the video below, do indicate that the speeds I'm getting from my M.2 drives are just as fast as when they were mounted right on the motherboard, which is good
  • no drivers needed, it just works
  • the cooling fan does seem like it might be effective, but I haven't really tested it yet

The Not As Good

  • unfortunately, you can't get pricing easily online, but hopefully this Canadian company will work on easier availability worldwide; see also their current list of distributors
  • it is pricey (hundreds), so you'll need to contact Amfeltec sales to ask for a quote shipped to your address
  • fan noise can be a concern for some use cases, but it's easily detached (demonstrated in the video) for users who already have adequate airflow

VMware Considerations

  • no driver is needed for any of these cards; they pass signals through seamlessly, so picky OSs like VMware 'see' each mounted M.2 NVMe device natively (a quick check is sketched below)
  • I did my brief testing under ESXi 6.5 (see video), but NVMe has been supported since 5.5, and it works fine
  • there seems to be little reason to worry about the VMware HCL not listing such adapters as compatible, because they're effectively invisible to the ESXi hypervisor, nice!
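
As a quick check of that first point, the storage adapters ESXi has claimed can be listed from the ESXi shell, where each M.2 device appears as an ordinary vmhba bound to the NVMe driver. The sketch below is illustrative only and assumes Python is available in the ESXi shell; the driver name differs by release (for example nvme on 6.x versus nvme_pcie on 7.x), and the esxcli column layout can vary:

```python
#!/usr/bin/env python
"""Sketch: list storage adapters ESXi has claimed and flag the NVMe ones.

Run in the ESXi shell, where esxcli is available. Intended only to
illustrate that adapter-mounted M.2 drives appear as ordinary vmhba
adapters; it is not an official VMware tool.
"""
import subprocess

out = subprocess.check_output(
    ["esxcli", "storage", "core", "adapter", "list"]).decode()

for line in out.splitlines():
    cols = line.split()
    # Rows look roughly like: "vmhba2  nvme  link-n/a  pcie.xxxx ..."
    if cols and cols[0].startswith("vmhba"):
        driver = cols[1] if len(cols) > 1 else "?"
        flag = "  <-- NVMe" if "nvme" in driver.lower() else ""
        print("%-8s driver=%s%s" % (cols[0], driver, flag))
```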

Video

Dec 15 2016 Update

I have found a US based reseller of Amfeltec products:

  • PCI Express Gen 3 Carrier Board for 4 M.2 SSD Modules (SKU-086-34)
    saelig.com/product/sku-086-34.htm

Mar 26 2017 Update

Without asking, Amfeltec in Canada decided to send me 2 more of their latest product offerings. Because there is no affiliate program (no commissions) and return shipping fees would be prohibitive, I decided to keep the 2 items and disclose how they were obtained.


Recording the unboxing of the items in 4K gives you a very detailed look, admittedly uncut and unedited. Since the recorded video is a massive 5GB in all, editing it for small gaffes and gaps just wasn't feasible in any reasonable amount of time.

4 M.2 slots

  • PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules - original review above

2 M.2 slots

  • PCI Express Gen 3 Carrier Board for 2 M.2 SSD modules - new unboxing video below

1 M.2 slot

  • 1U PCI Express Gen 3 Carrier Board for M.2 SSD module - new unboxing video below

This last model features a metal retainer clip that thankfully could easily be removed (off camera), since on Xeon D systems like my Supermicro SuperServer SYS-5028D-TN4T it could touch some nearby motherboard jumpers.

All of Amfeltec's PCI Express Gen 3 Carrier Boards can be found at:


May 03 2017 Update

See also:

  • AOC-SLG3-2M2 Supermicro Add-On Card for up to two M.2 NVMe SSDs - Internal, PCI-E 3.0 x8, Low-Profile, available at Wiredzone.

TinkerTry visitor Udo Keller commented:

...as soon as you change the BIOS setting for PCIe bifurcation from x16 to x4x4x4x4, both slots are recognised.

Mar 18 2018 Update

This is a big development. I have now successfully tested the Supermicro SuperServer SYS-5028D-TN4T with the HYPER M.2 X16 CARD! There's a catch: I only got this working by using a PCIe ribbon cable slot extender to run the Hyper card externally. It's a full-height PCIe card, so there's no other way.

But because this Xeon D system supports bifurcation, it doesn't need a card with a PLX switch on it; it works fine with this card by just turning on x4x4x4x4 bifurcation in the BIOS. That's right, no RAID, just JBOD-style passthrough of 4 M.2 x4 devices at full speed, seen as native NVMe devices under VMware ESXi 6.5U1, for example. It should work fine with any OS though, since it's just passing M.2 devices right through.
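
A quick way to verify that bifurcation really carved the slot into four independent endpoints is to count the NVMe-class PCI functions the OS enumerates; with four drives on the Hyper card you'd expect four separate controllers, each behind its own bus number rather than a single switch. A minimal Linux sketch using lspci (purely illustrative):

```python
#!/usr/bin/env python3
"""Sketch: count NVMe PCI functions, e.g. after enabling x4x4x4x4 bifurcation."""
import subprocess

# lspci tags NVMe controllers with the "Non-Volatile memory controller" class.
out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
nvme_funcs = [l for l in out.splitlines() if "Non-Volatile memory controller" in l]

print(f"Found {len(nvme_funcs)} NVMe PCI function(s):")
for line in nvme_funcs:
    print("  " + line)
# With a quad-M.2 card behind a bifurcated x16 slot, each drive should show up
# at a different PCI bus number rather than behind a single switch device.
```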

Here's the ASUS HYPER M.2 X16 CARD Product Page, with the list of compatible models filled with X299 gaming-centric motherboards:

With Intel® Virtual RAID on CPU (VROC), unused CPU PCIe® lanes can be assigned to storage, allowing you to create a bootable RAID array with multiple M.2 SSDs. Unlike motherboard PCH-based RAID, VROC isn't confined by the 32 Gbps DMI bottleneck, so multiple drives can be teamed to provide incredible throughput.

Shop

  • B&H.
  • Newegg.

Thermaltake TT Gaming PCI-E x16 3.0 Black Extender Riser Cable 200mm AC-053-CN1OTN-C1
I can't seem to find the exact model number of the ribbon cable I used 2 years ago, as featured in the video above. But this readily available Thermaltake ribbon cable appears to be of the same specifications, and is likely to work just fine with the ASUS HYPER M.2 adapter, or whatever other PCIe device you are interested in testing.

  • Newegg.

Mar 26 2018 Update

Full article on this project is planned, with video. For now, here's a picture of the card situated inside my SYS-5028D-TN4T, and I was able to put the cover back on without any issues. I had to remove the aluminum cover/heatsink to get it to fit, though. You'll notice the power lead for the fan is detached, since it won't do any good without the aluminum cover. I also used some 3M 03614 Scotch-Mount 1/2" x 15' Molding Tape to fasten it without any possibility of shorts, since the Hyper card's entire back surface is painted. The red foam tape cover that's visible was used to protect the capacitors on the front from any inadvertent bumps when installing/removing the lid. If you look closely, you'll see my 960 PRO still in the motherboard's M.2 slot. That's right, I now have FIVE M.2 NVMe devices, all running at full PCIe 3.0 x4 speed, at a much lower price point. This is great, and something I'll keep in mind as I begin my quest for future Xeon D-1500 and Xeon D-2100 chassis designs, and beyond.

Nov 15 2018 Update

There's a new tiny PCIe to M.2 adapter now available, as David Chung shared here. Only 4 of those 16 lanes of connections are used, the first 8 pins near the M.2 connector actually. The rest of the pins are likely there to provide some extra friction and stability, since this isn't a card that can be screwed down and fastened in place. Given the way this adapter is notched, it should fit in most modern motherboards, even if those extra pins aren't plugged into anything.

Given it's also just another pass through device, I don't see how this would affect vSphere/vSAN compatibility or support.


Jan 23 2020 Update

Great twitter conversation with William Lam about the new:

These appear to be 5 x M.2 card clones that use PCIe 3.0 x2.

See also at TinkerTry


  • Boot-from-NVMe has some special NVMe-aware BIOS/UEFI considerations that I wrote about back in November of 2015 here.
    Nov 05 2015

  • How to configure VMware ESXi 6.7 or later for VMDirectPath I/O pass-through of any NVMe SSD, such as Windows 10 1809 installed directly on an Intel 900P booted in an EFI VM
    Nov 28 2018

  • Where to buy your Samsung 960 EVO or PRO M.2 NVMe SSDs, featuring the latest ordering and availability info
    Nov 30 2016

  • First look - Samsung SM951 M.2 SSD in a Supermicro SuperServer 5028D-TN4T
    Sep 23 2015

  • Samsung announces 960 PRO, 960 EVO, and SM961 NVMe M.2 SSDs in up to 1TB capacity
    Nov 07 2015

  • World's fastest consumer SSD - Samsung 950 PRO M.2 NVMe benchmark results
    Nov 07 2015

  • How to install Samsung 950 PRO M.2 SSD in a PCIe slot - tested with Supermicro 5028D-TN4T & Lycom DT-120 M.2 to PCIe adapter
    Nov 05 2015

  • How to boot Windows 10 from NVMe based PCIe storage, featuring Samsung 950 PRO M.2 SSD in a Supermicro SYS-5028D-TN4T
    Nov 05 2015