
ARM Marvell SoCs¶

This document lists all the ARM Marvell SoCs that are currently supported in mainline by the Linux kernel. As the Marvell families of SoCs are large and complex, it is hard to understand where the support for a particular SoC is available in the Linux kernel. This document tries to help in understanding where those SoCs are supported, and to match them with their corresponding public datasheet, when available.

Orion family¶

Flavors:
  • 88F5082

  • 88F5181

  • 88F5181L

  • 88F5182

    • Datasheet: http://www.embeddedarm.com/documentation/third-party/MV88F5182-datasheet.pdf
    • Programmer’s User Guide: http://www.embeddedarm.com/documentation/third-party/MV88F5182-opensource-manual.pdf
    • User Manual: http://www.embeddedarm.com/documentation/third-party/MV88F5182-usermanual.pdf
  • 88F5281

    • Datasheet: http://www.ocmodshop.com/images/reviews/networking/qnap_ts409u/marvel_88f5281_data_sheet.pdf
  • 88F6183

Core:
Feroceon 88fr331 (88f51xx) or 88fr531-vd (88f52xx) ARMv5 compatible
Linux kernel mach directory:
arch/arm/mach-orion5x
Linux kernel plat directory:
arch/arm/plat-orion

Kirkwood family¶

Flavors:
  • 88F6282 a.k.a Armada 300

    • Product Brief : http://www.marvell.com/embedded-processors/armada-300/assets/armada_310.pdf
  • 88F6283 a.k.a Armada 310

    • Product Brief : http://www.marvell.com/embedded-processors/armada-300/assets/armada_310.pdf
  • 88F6190

    • Product Brief : http://www.marvell.com/embedded-processors/kirkwood/assets/88F6190-003_WEB.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/kirkwood/assets/HW_88F619x_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/kirkwood/assets/FS_88F6180_9x_6281_OpenSource.pdf
  • 88F6192

    • Product Brief : http://www.marvell.com/embedded-processors/kirkwood/assets/88F6192-003_ver1.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/kirkwood/assets/HW_88F619x_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/kirkwood/assets/FS_88F6180_9x_6281_OpenSource.pdf
  • 88F6182

  • 88F6180

    • Product Brief : http://www.marvell.com/embedded-processors/kirkwood/assets/88F6180-003_ver1.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/kirkwood/assets/HW_88F6180_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/kirkwood/assets/FS_88F6180_9x_6281_OpenSource.pdf
  • 88F6281

    • Product Brief : http://www.marvell.com/embedded-processors/kirkwood/assets/88F6281-004_ver1.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/kirkwood/assets/HW_88F6281_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/kirkwood/assets/FS_88F6180_9x_6281_OpenSource.pdf
Homepage:
http://www.marvell.com/embedded-processors/kirkwood/
Core:
Feroceon 88fr131 ARMv5 compatible
Linux kernel mach directory:
arch/arm/mach-mvebu
Linux kernel plat directory:
none

Discovery family¶

Flavors:
  • MV78100

    • Product Brief : http://www.marvell.com/embedded-processors/discovery-innovation/assets/MV78100-003_WEB.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/discovery-innovation/assets/HW_MV78100_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/discovery-innovation/assets/FS_MV76100_78100_78200_OpenSource.pdf
  • MV78200

    • Product Brief : http://www.marvell.com/embedded-processors/discovery-innovation/assets/MV78200-002_WEB.pdf
    • Hardware Spec : http://www.marvell.com/embedded-processors/discovery-innovation/assets/HW_MV78200_OpenSource.pdf
    • Functional Spec: http://www.marvell.com/embedded-processors/discovery-innovation/assets/FS_MV76100_78100_78200_OpenSource.pdf
  • MV76100

Core:
Feroceon 88fr571-vd ARMv5 compatible
Linux kernel mach directory:
arch/arm/mach-mv78xx0
Linux kernel plat directory:
arch/arm/plat-orion

EBU Armada family¶

Armada 370 Flavors:
  • Product Brief: http://www.marvell.com/embedded-processors/armada-300/assets/Marvell_ARMADA_370_SoC.pdf
  • Hardware Spec: http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA370-datasheet.pdf
  • Functional Spec: http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA370-FunctionalSpec-datasheet.pdf
Core:
Sheeva ARMv7 compatible PJ4B
Armada 375 Flavors:
  • 88F6720
  • Product Brief: http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA_375_SoC-01_product_brief.pdf
Core:
ARM Cortex-A9
Armada 38x Flavors:
  • 88F6810 Armada 380
  • 88F6820 Armada 385
  • 88F6828 Armada 388
  • Product infos: http://www.marvell.com/embedded-processors/armada-38x/
  • Functional Spec: https://marvellcorp.wufoo.com/forms/marvell-armada-38x-functional-specifications/
Core:
ARM Cortex-A9
Armada 39x Flavors:
  • Product infos: http://www.marvell.com/embedded-processors/armada-39x/
Core:
ARM Cortex-A9
Armada XP Flavors:
  • MV78230
  • MV78260
  • MV78460
NOTE:
not to be confused with the non-SMP 78xx0 SoCs
Product Brief:
http://www.marvell.com/embedded-processors/armada-xp/assets/Marvell-ArmadaXP-SoC-product%20brief.pdf
Functional Spec:
http://www.marvell.com/embedded-processors/armada-xp/assets/ARMADA-XP-Functional-SpecDatasheet.pdf
  • Hardware Specs:

Core:
Sheeva ARMv7 compatible Dual-core or Quad-core PJ4B-MP

Linux kernel mach directory:
arch/arm/mach-mvebu
Linux kernel plat directory:
none

EBU Armada family ARMv8¶

Armada 3710/3720 Flavors:
  • 88F3710
  • 88F3720
Core:
ARM Cortex A53 (ARMv8)
Homepage:
http://www.marvell.com/embedded-processors/armada-3700/
Product Brief:
http://www.marvell.com/embedded-processors/assets/PB-88F3700-FNL.pdf
Device tree files:
arch/arm64/boot/dts/marvell/armada-37*
Armada 7K Flavors:
  • 88F7020 (AP806 Dual + one CP110)
  • 88F7040 (AP806 Quad + one CP110)

Core: ARM Cortex A72

Homepage:
http://www.marvell.com/embedded-processors/armada-70xx/
Product Brief:
Device tree files:
arch/arm64/boot/dts/marvell/armada-70*
Armada 8K Flavors:
  • 88F8020 (AP806 Dual + two CP110)
  • 88F8040 (AP806 Quad + two CP110)
Core:
ARM Cortex A72
Homepage:
http://www.marvell.com/embedded-processors/armada-80xx/
Product Brief:
Device tree files:
arch/arm64/boot/dts/marvell/armada-80*

Avanta family¶

Flavors:

  • 88F6510
  • 88F6530P
  • 88F6550
  • 88F6560
Homepage:
http://www.marvell.com/broadband/
Product Brief:
http://www.marvell.com/broadband/assets/Marvell_Avanta_88F6510_305_060-001_product_brief.pdf

No public datasheet available.

Core:
ARMv5 compatible
Linux kernel mach directory:
no code in mainline yet, planned for the future
Linux kernel plat directory:
no code in mainline yet, planned for the future

Storage family¶

Armada SP:
  • 88RC1580
Product infos:
http://www.marvell.com/storage/armada-sp/
Core:
Sheeva ARMv7 compatible Quad-core PJ4C

(not supported in upstream Linux kernel)

Dove family (application processor)¶

Flavors:
Product Brief:
http://www.marvell.com/application-processors/armada-500/assets/Marvell_Armada510_SoC.pdf
Hardware Spec:
http://www.marvell.com/application-processors/armada-500/assets/Armada-510-Hardware-Spec.pdf
Functional Spec:
http://www.marvell.com/application-processors/armada-500/assets/Armada-510-Functional-Spec.pdf
Homepage:
http://www.marvell.com/application-processors/armada-500/
Core:
ARMv7 compatible
Directory:
  • arch/arm/mach-mvebu (DT enabled platforms)
  • arch/arm/mach-dove (non-DT enabled platforms)

PXA 2xx/3xx/93x/95x family¶

Flavors:
  • PXA21x, PXA25x, PXA26x
    • Application processor only
    • Core: ARMv5 XScale1 core
  • PXA270, PXA271, PXA272
    • Product Brief : http://www.marvell.com/application-processors/pxa-family/assets/pxa_27x_pb.pdf
    • Design guide : http://www.marvell.com/application-processors/pxa-family/assets/pxa_27x_design_guide.pdf
    • Developers manual : http://www.marvell.com/application-processors/pxa-family/assets/pxa_27x_dev_man.pdf
    • Specification : http://www.marvell.com/application-processors/pxa-family/assets/pxa_27x_emts.pdf
    • Specification update : http://www.marvell.com/application-processors/pxa-family/assets/pxa_27x_spec_update.pdf
    • Application processor only
    • Core: ARMv5 XScale2 core
  • PXA300, PXA310, PXA320
    • PXA 300 Product Brief : http://www.marvell.com/application-processors/pxa-family/assets/PXA300_PB_R4.pdf
    • PXA 310 Product Brief : http://www.marvell.com/application-processors/pxa-family/assets/PXA310_PB_R4.pdf
    • PXA 320 Product Brief : http://www.marvell.com/application-processors/pxa-family/assets/PXA320_PB_R4.pdf
    • Design guide : http://www.marvell.com/application-processors/pxa-family/assets/PXA3xx_Design_Guide.pdf
    • Developers manual : http://www.marvell.com/application-processors/pxa-family/assets/PXA3xx_Developers_Manual.zip
    • Specifications : http://www.marvell.com/application-processors/pxa-family/assets/PXA3xx_EMTS.pdf
    • Specification Update : http://www.marvell.com/application-processors/pxa-family/assets/PXA3xx_Spec_Update.zip
    • Reference Manual : http://www.marvell.com/application-processors/pxa-family/assets/PXA3xx_TavorP_BootROM_Ref_Manual.pdf
    • Application processor only
    • Core: ARMv5 XScale3 core
  • PXA930, PXA935
    • Application processor with Communication processor
    • Core: ARMv5 XScale3 core
  • PXA955
    • Application processor with Communication processor
    • Core: ARMv7 compatible Sheeva PJ4 core

Comments:

  • This line of SoCs originates from the XScale family developed by Intel and acquired by Marvell in ~2006. The PXA21x, PXA25x, PXA26x, PXA27x, PXA3xx and PXA93x were developed by Intel, while the later PXA95x were developed by Marvell.
  • Due to their XScale origin, these SoCs have virtually nothing in common with the other (Kirkwood, Dove, etc.) families of Marvell SoCs, except with the MMP/MMP2 family of SoCs.
Linux kernel mach directory:
arch/arm/mach-pxa
Linux kernel plat directory:
arch/arm/plat-pxa

MMP/MMP2/MMP3 family (communication processor)¶

Flavors:
  • PXA168, a.k.a Armada 168
    • Homepage : http://www.marvell.com/application-processors/armada-100/armada-168.jsp
    • Product brief : http://www.marvell.com/application-processors/armada-100/assets/pxa_168_pb.pdf
    • Hardware manual : http://www.marvell.com/application-processors/armada-100/assets/armada_16x_datasheet.pdf
    • Software manual : http://www.marvell.com/application-processors/armada-100/assets/armada_16x_software_manual.pdf
    • Specification update : http://www.marvell.com/application-processors/armada-100/assets/ARMADA16x_Spec_update.pdf
    • Boot ROM manual : http://www.marvell.com/application-processors/armada-100/assets/armada_16x_ref_manual.pdf
    • App node package : http://www.marvell.com/application-processors/armada-100/assets/armada_16x_app_note_package.pdf
    • Application processor only
    • Core: ARMv5 compatible Marvell PJ1 88sv331 (Mohawk)
  • PXA910/PXA920
    • Homepage : http://www.marvell.com/communication-processors/pxa910/
    • Product Brief : http://www.marvell.com/communication-processors/pxa910/assets/Marvell_PXA910_Platform-001_PB_final.pdf
    • Application processor with Communication processor
    • Core: ARMv5 compatible Marvell PJ1 88sv331 (Mohawk)
  • PXA688, a.k.a. MMP2, a.k.a Armada 610
    • Product Brief : http://www.marvell.com/application-processors/armada-600/assets/armada610_pb.pdf
    • Application processor only
    • Core: ARMv7 compatible Sheeva PJ4 88sv581x core
  • PXA2128, a.k.a. MMP3 (OLPC XO4, Linux support not upstream)
    • Product Brief : http://www.marvell.com/application-processors/armada/pxa2128/assets/Marvell-ARMADA-PXA2128-SoC-PB.pdf
    • Application processor only
    • Core: Dual-core ARMv7 compatible Sheeva PJ4C core
  • PXA960/PXA968/PXA978 (Linux support not upstream)
    • Application processor with Communication Processor
    • Core: ARMv7 compatible Sheeva PJ4 core
  • PXA986/PXA988 (Linux support not upstream)
    • Application processor with Communication Processor
    • Core: Dual-core ARMv7 compatible Sheeva PJ4B-MP core
  • PXA1088/PXA1920 (Linux support not upstream)
    • Application processor with Communication Processor
    • Core: quad-core ARMv7 Cortex-A7
  • PXA1908/PXA1928/PXA1936
    • Application processor with Communication Processor
    • Core: multi-core ARMv8 Cortex-A53

Comments:

  • This line of SoCs originates from the XScale family developed by Intel and acquired by Marvell in ~2006. All the processors of this MMP/MMP2 family were developed by Marvell.
  • Due to their XScale origin, these SoCs have virtually nothing in common with the other (Kirkwood, Dove, etc.) families of Marvell SoCs, except with the PXA family of SoCs listed above.
Linux kernel mach directory:
arch/arm/mach-mmp
Linux kernel plat directory:
arch/arm/plat-pxa

Berlin family (Multimedia Solutions)¶

  • Flavors:
    • 88DE3010, Armada 1000 (no Linux support)
      • Core: Marvell PJ1 (ARMv5TE), Dual-core
      • Product Brief: http://www.marvell.com.cn/digital-entertainment/assets/armada_1000_pb.pdf
    • 88DE3005, Armada 1500 Mini
      • Design name: BG2CD
      • Core: ARM Cortex-A9, PL310 L2CC
    • 88DE3006, Armada 1500 Mini Plus
      • Design name: BG2CDP
      • Core: Dual Core ARM Cortex-A7
    • 88DE3100, Armada 1500
      • Design name: BG2
      • Core: Marvell PJ4B-MP (ARMv7), Tauros3 L2CC
    • 88DE3114, Armada 1500 Pro
      • Design name: BG2Q
      • Core: Quad Core ARM Cortex-A9, PL310 L2CC
    • 88DE3214, Armada 1500 Pro 4K
      • Design name: BG3
      • Core: ARM Cortex-A15, CA15 integrated L2CC
    • 88DE3218, ARMADA 1500 Ultra
      • Core: ARM Cortex-A53

Homepage:
https://www.synaptics.com/products/multimedia-solutions
Directory:
arch/arm/mach-berlin

Comments:

  • This line of SoCs is based on Marvell Sheeva or ARM Cortex CPUs with Synopsys DesignWare (IRQ, GPIO, Timers, …) and PXA IP (SDHCI, USB, ETH, …).
  • The Berlin family was acquired by Synaptics from Marvell in 2017.

CPU Cores¶

The XScale cores were designed by Intel, and shipped by Marvell in the older PXA processors. Feroceon is a Marvell-designed core that was developed in-house and evolved into Sheeva. The XScale and Feroceon cores were phased out over time and replaced with Sheeva cores in later products, which were subsequently replaced with licensed ARM Cortex-A cores.

XScale 1
CPUID 0x69052xxx
ARMv5, iWMMXt
XScale 2
CPUID 0x69054xxx
ARMv5, iWMMXt
XScale 3
CPUID 0x69056xxx or 0x69056xxx
ARMv5, iWMMXt
Feroceon-1850 88fr331 “Mohawk”
CPUID 0x5615331x or 0x41xx926x
ARMv5TE, single-issue
Feroceon-2850 88fr531-vd “Jolteon”
CPUID 0x5605531x or 0x41xx926x
ARMv5TE, VFP, dual-issue
Feroceon 88fr571-vd “Jolteon”
CPUID 0x5615571x
ARMv5TE, VFP, dual-issue
Feroceon 88fr131 “Mohawk-D”
CPUID 0x5625131x
ARMv5TE, single-issue in-order
Sheeva PJ1 88sv331 “Mohawk”
CPUID 0x561584xx
ARMv5, single-issue, iWMMXt v2
Sheeva PJ4 88sv581x “Flareon”
CPUID 0x560f581x
ARMv7, idivt, optional iWMMXt v2
Sheeva PJ4B 88sv581x
CPUID 0x561f581x
ARMv7, idivt, optional iWMMXt v2
Sheeva PJ4B-MP / PJ4C
CPUID 0x562f584x
ARMv7, idivt/idiva, LPAE, optional iWMMXt v2 and/or NEON
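The CPUID values above follow the standard ARM MIDR field layout (implementer code in bits 31:24, part number in bits 15:4, and so on), which is why the Marvell-designed cores all start with 0x56 and the Intel-designed XScale cores with 0x69. A minimal Python sketch of that field split:

```python
def decode_midr(midr):
    """Split an ARM MIDR/CPUID value into its architected fields."""
    return {
        "implementer": (midr >> 24) & 0xFF,  # 0x56 = Marvell, 0x69 = Intel, 0x41 = ARM
        "variant": (midr >> 20) & 0xF,
        "architecture": (midr >> 16) & 0xF,  # 0xF = "see CPUID scheme registers"
        "part": (midr >> 4) & 0xFFF,
        "revision": midr & 0xF,
    }

# Sheeva PJ4 "Flareon" pattern from the table above: CPUID 0x560f581x
fields = decode_midr(0x560F5810)
print(hex(fields["implementer"]))  # 0x56 (Marvell)
print(hex(fields["part"]))         # 0x581
```

The trailing "x" digits in the table are the revision field, which varies between steppings of the same core.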

Long-term plans¶

  • Unify the mach-dove/, mach-mv78xx0/, mach-orion5x/ into the mach-mvebu/ to support all SoCs from the Marvell EBU (Engineering Business Unit) in a single mach-<foo> directory. The plat-orion/ would therefore disappear.
  • Unify the mach-mmp/ and mach-pxa/ into the same mach-pxa directory. The plat-pxa/ would therefore disappear.

Credits¶

  • Maen Suleiman <maen@marvell.com>
  • Lior Amsalem <alior@marvell.com>
  • Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  • Andrew Lunn <andrew@lunn.ch>
  • Nicolas Pitre <nico@fluxnic.net>
  • Eric Miao <eric.y.miao@gmail.com>

As native Non-volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new, and is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set that is specific to memory-based storage, provides increased performance that is designed to run over PCIe 3.0 or PCIe 4.0 bus architectures, and — offering 64,000 command queues with 64,000 commands per queue — can provide much more scalability than other storage protocols.
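The scale of those queue limits is easy to underestimate, so here is the arithmetic spelled out. The 64,000-queue and 64,000-command figures come from the text above; the single 32-deep queue used for comparison is the SATA NCQ limit, included here purely as an illustrative reference point:

```python
# NVMe figures quoted above: 64,000 command queues, 64,000 commands per queue.
nvme_outstanding = 64_000 * 64_000  # total commands that can be in flight

# For comparison (illustrative assumption): SATA NCQ allows a single
# queue of depth 32 per device.
ncq_outstanding = 1 * 32

print(nvme_outstanding)                     # 4096000000
print(nvme_outstanding // ncq_outstanding)  # 128000000x more parallelism
```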

Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise datacenter and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace. These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)
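On Linux, these three transports correspond to the transport tokens the nvme-cli tool accepts (fc, rdma and tcp). The sketch below is illustrative only: the target address and NQN are hypothetical, and a real FC-NVMe target is addressed by WWNN/WWPN rather than an IP address:

```python
# NVMe-oF transport choices from the list above, keyed by the Linux
# nvme-cli transport token used with `nvme connect -t <token>`.
TRANSPORTS = {
    "FC-NVMe": "fc",     # addressed by WWNN/WWPN, not IP
    "NVMe/RoCE": "rdma", # needs a lossless (DCB/PFC) Ethernet fabric
    "NVMe/TCP": "tcp",   # runs over the ordinary TCP/IP stack
}

def connect_command(transport: str, traddr: str, nqn: str) -> str:
    """Build an illustrative nvme-cli connect command line."""
    return f"nvme connect -t {TRANSPORTS[transport]} -a {traddr} -n {nqn}"

# Hypothetical target address and subsystem NQN, for illustration only.
print(connect_command("NVMe/TCP", "10.0.0.5", "nqn.2020-08.example:subsys1"))
```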

If NVMe over Fabrics is to achieve its true potential, however, there are three major elements that need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric) is made up of two main components: the host adapter that provides server connectivity to the storage fabric; and the switch infrastructure that provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s enhanced 16Gb Fibre Channel (FC) host bus adapters (HBA) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 series Enhanced 16GFC, 2740 series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is to just route FC frames from point to point and deliver them in order, with the lowest latency required. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host, and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and the 45000 Series adapters.

On the switching side, most 10/25/100GbE switches that have shipped in the past 2-3 years support data center bridging (DCB) and priority flow control (PFC), and can support the lossless Ethernet environment needed to support a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only 1 or 2 hops. For high scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places – here is a current (August 2020) summary. The major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses an SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays that support both NVMe/RoCE and FC-NVMe, and are available today. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that will support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP will become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying Enhanced 16GFC or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770-series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.

Tags: FastLinQ, FC-NVMe, Marvell, NVMe-oF, NVMe/RoCE, NVMe/TCP, QLogic