├── .gitbook └── assets │ ├── Capture.PNG │ ├── capture.PNG │ ├── storage_pyramid.png │ ├── ndctl-fsdax-config.png │ ├── ndctl - fsdax config.png │ ├── pmem_storage_pyramid.jpg │ ├── snia_programming_model.png │ ├── NVDIMM_DSM_Interface-V1.8.pdf │ ├── nvdimm_dsm_interface-v1.8.pdf │ ├── memory-storage-hierachy-default.png │ ├── Memory-Storage Hierachy - Default.png │ ├── draw.io-gitbook-interleaved-dimms-fsdax.jpg │ ├── draw.io-gitbook-interleaved-dimms-fsdax-1.jpg │ ├── memory-storage-hierachy-persistent-memory.jpg │ ├── memory-storage-hierachy-persistent-memory.png │ ├── Memory-Storage Hierachy - Persistent Memory.jpg │ ├── Memory-Storage Hierachy - Persistent Memory.png │ ├── draw.io - GitBook - Interleaved DIMMs FSDAX.jpg │ ├── draw.io-gitbook-non-interleaved-dimm-fsdax-1.jpg │ ├── draw.io-gitbook-non-interleaved-dimm-fsdax.jpg │ ├── draw.io - GitBook - Non-Interleaved DIMM FSDAX.jpg │ ├── memory-storage-hierachy-persistent-memory (1).png │ ├── memory-storage-hierachy-persistent-memory (2).png │ ├── memory-storage-hierachy-persistent-memory (3).png │ ├── Memory-Storage Hierachy - Persistent Memory (1).png │ ├── Memory-Storage Hierachy - Persistent Memory (2).png │ ├── Memory-Storage Hierachy - Persistent Memory (3).png │ ├── draw.io - GitBook - Interleaved DIMMs FSDAX (1).jpg │ ├── draw.io - GitBook - Non-Interleaved DIMM FSDAX (1).jpg │ ├── draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax.jpg │ ├── draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX.jpg │ ├── draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax (1).jpg │ └── draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX (1).jpg ├── getting-started-guide ├── creating-development-environments │ ├── linux-environments │ │ ├── using-ramdisks.md │ │ ├── using-the-xen-hypervisor.md │ │ ├── advanced-topics │ │ │ ├── README.md │ │ │ └── partitioning-namespaces.md │ │ ├── README.md │ │ └── linux-memmap.md │ ├── cloud-environments │ │ ├── google-cloud-platform-gcp.md │ │ ├── README.md │ │ └── microsoft-azure-cloud.md │ ├── virtualization │ │ ├── README.md │ │ ├── windows-server-hyper-v.md │ │ └── vmware-vsphere-esxi.md │ ├── README.md │ └── windows-environments.md ├── installing-pmdk │ ├── compiling-and-running-the-examples.md │ ├── pmdk-directory-structure.md │ ├── README.md │ ├── installing-pmdk-on-windows.md │ ├── compiling-pmdk-from-source.md │ └── installing-pmdk-using-linux-packages.md ├── installing-ndctl.md ├── README.md ├── system-requirements.md ├── what-is-ndctl.md ├── what-is-pmdk.md └── introduction.md ├── ipmctl-user-guide ├── .gitbook │ └── assets │ │ ├── image.png │ │ └── capture.PNG ├── installing-ipmctl.md ├── support-and-maintenance │ ├── README.md │ ├── version-and-firmware.md │ └── show-events.md ├── provisioning │ ├── provision-memory-mode.md │ ├── provision-mixed-mode.md │ ├── delete-memory-allocation-goal.md │ ├── dump-memory-allocation-settings.md │ ├── provision-app-direct.md │ ├── README.md │ ├── concepts.md │ ├── show-memory-allocation-goal.md │ ├── load-memory-allocation-goal.md │ └── create-memory-allocation-goal.md ├── instrumentation │ ├── README.md │ ├── change-sensor-settings.md │ ├── show-device-performance.md │ └── show-sensor.md ├── security │ ├── README.md │ ├── erase-device-data.md │ ├── enable-device-security.md │ ├── change-device-security.md │ └── change-device-passphrase.md ├── installing-ipmctl │ ├── installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md │ ├── building-and-installing-ipmctl-on-microsoft-windows-from-source.md │ └── README.md ├── debug 
│ ├── show-acpi-tables.md │ ├── show-device-platform-configuration-data.md │ ├── delete-device-platform-configuration-data.md │ ├── README.md │ ├── dump-debug-log.md │ ├── show-error-log.md │ ├── run-diagnostic.md │ └── inject-error.md ├── module-discovery │ ├── show-socket.md │ ├── README.md │ ├── show-system-capabilities.md │ ├── show-memory-resources.md │ ├── show-topology.md │ └── show-device.md ├── basic-usage.md ├── README.md └── SUMMARY.md ├── ndctl-users-guide ├── concepts │ ├── label-area.md │ ├── regions │ │ ├── README.md │ │ └── untitled.md │ ├── nvdimm-devices.md │ ├── nvdimm-namespaces.md │ ├── README.md │ └── libnvdimm-pmem-and-blk-modes.md ├── references.md ├── glossary.md ├── managing-regions.md ├── man-pages.md ├── managing-nvdimms.md └── README.md ├── LICENSE ├── SUMMARY.md └── README.md /.gitbook/assets/Capture.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Capture.PNG -------------------------------------------------------------------------------- /.gitbook/assets/capture.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/capture.PNG -------------------------------------------------------------------------------- /.gitbook/assets/storage_pyramid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/storage_pyramid.png -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/using-ramdisks.md: -------------------------------------------------------------------------------- 1 | # Using RAMDisks 2 | 3 | -------------------------------------------------------------------------------- /.gitbook/assets/ndctl-fsdax-config.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/ndctl-fsdax-config.png -------------------------------------------------------------------------------- /.gitbook/assets/ndctl - fsdax config.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/ndctl - fsdax config.png -------------------------------------------------------------------------------- /.gitbook/assets/pmem_storage_pyramid.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/pmem_storage_pyramid.jpg -------------------------------------------------------------------------------- /.gitbook/assets/snia_programming_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/snia_programming_model.png -------------------------------------------------------------------------------- /ipmctl-user-guide/.gitbook/assets/image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/ipmctl-user-guide/.gitbook/assets/image.png -------------------------------------------------------------------------------- /.gitbook/assets/NVDIMM_DSM_Interface-V1.8.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/NVDIMM_DSM_Interface-V1.8.pdf -------------------------------------------------------------------------------- /.gitbook/assets/nvdimm_dsm_interface-v1.8.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/nvdimm_dsm_interface-v1.8.pdf -------------------------------------------------------------------------------- /ipmctl-user-guide/.gitbook/assets/capture.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/ipmctl-user-guide/.gitbook/assets/capture.PNG -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-default.png -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Default.png -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax-1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax-1.jpg -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-persistent-memory.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-persistent-memory.jpg -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-persistent-memory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-persistent-memory.png -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Persistent Memory.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Persistent Memory.jpg -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Persistent Memory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Persistent Memory.png -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Interleaved DIMMs FSDAX.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Interleaved DIMMs FSDAX.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-non-interleaved-dimm-fsdax-1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-non-interleaved-dimm-fsdax-1.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-non-interleaved-dimm-fsdax.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-non-interleaved-dimm-fsdax.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Non-Interleaved DIMM FSDAX.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Non-Interleaved DIMM FSDAX.jpg -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-persistent-memory (1).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-persistent-memory (1).png -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-persistent-memory (2).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-persistent-memory (2).png -------------------------------------------------------------------------------- /.gitbook/assets/memory-storage-hierachy-persistent-memory (3).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/memory-storage-hierachy-persistent-memory (3).png -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (1).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (1).png -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (2).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (2).png -------------------------------------------------------------------------------- /.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (3).png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/Memory-Storage Hierachy - Persistent Memory (3).png -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Interleaved DIMMs FSDAX (1).jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Interleaved DIMMs FSDAX (1).jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Non-Interleaved DIMM FSDAX (1).jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Non-Interleaved DIMM FSDAX (1).jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX.jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax (1).jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io-gitbook-interleaved-dimms-with-2-namespaces-fsdax (1).jpg -------------------------------------------------------------------------------- /.gitbook/assets/draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX (1).jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pmem/docs/HEAD/.gitbook/assets/draw.io - GitBook - Interleaved DIMMs with 2 namespaces FSDAX (1).jpg -------------------------------------------------------------------------------- /ipmctl-user-guide/installing-ipmctl.md: -------------------------------------------------------------------------------- 1 | # Installing IPMCTL 2 | 3 | For latest package repos and installation instructions, please see the [README](https://github.com/intel/ipmctl#ipmctl) on the ipmctl github page. 4 | 5 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/using-the-xen-hypervisor.md: -------------------------------------------------------------------------------- 1 | # Using the XEN Hypervisor 2 | 3 | TODO: Investigate how XEN can be used to create persistent memory development environments. 4 | 5 | -------------------------------------------------------------------------------- /ndctl-users-guide/concepts/label-area.md: -------------------------------------------------------------------------------- 1 | # Label Storage Area \(LSA\) 2 | 3 | The namespace label area is a small persistent partition of capacity available on some NVDIMM devices. 
The label area is used to store the definition of NVDIMM _namespaces._ 4 | 5 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/cloud-environments/google-cloud-platform-gcp.md: -------------------------------------------------------------------------------- 1 | # Google Cloud Platform \(GCP\) 2 | 3 | On October 30th, 2018, Google announced early availability of systems with Intel\(R\) Optane\(TM\) DC Persistent Memory. The announcement and signup form can be found [here](https://cloud.google.com/blog/topics/partners/available-first-on-google-cloud-intel-optane-dc-persistent-memory). 4 | 5 | 6 | 7 | -------------------------------------------------------------------------------- /ipmctl-user-guide/support-and-maintenance/README.md: -------------------------------------------------------------------------------- 1 | # Support and Maintenance 2 | 3 | The ipmctl utility provides support and maintenance commands. Here are the articles in this section: 4 | 5 | {% content-ref url="show-events.md" %} 6 | [show-events.md](show-events.md) 7 | {% endcontent-ref %} 8 | 9 | {% content-ref url="version-and-firmware.md" %} 10 | [version-and-firmware.md](version-and-firmware.md) 11 | {% endcontent-ref %} 12 | 13 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/compiling-and-running-the-examples.md: -------------------------------------------------------------------------------- 1 | # Compiling and Running the Examples 2 | 3 | This page describes how to compile and run the examples in the pmdk and pmdk-examples repositories. 4 | 5 | ## Overview 6 | 7 | Describe the examples delivered within the main pmdk repo and additional examples in the pmdk-examples repo. 8 | 9 | ## Install Prerequisites 10 | 11 | ## Clone the Repo 12 | 13 | ## Compile 14 | 15 | ## Install 16 | 17 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/cloud-environments/README.md: -------------------------------------------------------------------------------- 1 | # Cloud Environments 2 | 3 | Several cloud service providers \(CSPs\) have announced availability of persistent memory. We will track those announcements here with additional information and resources for each CSP. 4 | 5 | * Microsoft Azure 6 | * Google Cloud Platform \(GCP\) 7 | 8 | Emulated persistent memory devices can be used for development purposes within the public cloud. See [Creating Development Environments](../). 9 | 10 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/cloud-environments/microsoft-azure-cloud.md: -------------------------------------------------------------------------------- 1 | # Microsoft Azure Cloud 2 | 3 | On May 7th, 2019, Microsoft Azure announced availability of Intel Optane DC Persistent Memory within the Azure Cloud stack.
See their announcement here: [https://azure.microsoft.com/en-us/blog/introducing-new-product-innovations-for-sap-hana-expanded-ai-collaboration-with-sap-and-more/](https://azure.microsoft.com/en-us/blog/introducing-new-product-innovations-for-sap-hana-expanded-ai-collaboration-with-sap-and-more/) 4 | 5 | 6 | 7 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/advanced-topics/README.md: -------------------------------------------------------------------------------- 1 | # Advanced Topics 2 | 3 | This section describes advanced topics relating to configuring and using either physical or emulated persistent memory devices. 4 | 5 | * [Partitioning Namespaces](partitioning-namespaces.md) - Describes how to use tools such as fdisk, parted, or gparted to partition larger namespaces into smaller ones. 6 | * [I/O Alignment Considerations](i-o-alignment-considerations.md) - Describes how to correctly align I/O for 2MiB hugepages 7 | 8 | -------------------------------------------------------------------------------- /ndctl-users-guide/concepts/regions/README.md: -------------------------------------------------------------------------------- 1 | # Regions 2 | 3 | A generic REGION device is registered for each PMEM range or BLK-aperture set. LIBNVDIMM provides a built-in driver for these REGION devices. This driver is responsible for reconciling the aliased DPA mappings across all regions, parsing the LABEL, if present, and then emitting NAMESPACE devices with the resolved/exclusive DPA-boundaries for the nd\_pmem or nd\_blk device driver to consume. 4 | 5 | Refer to [Managing Regions](../../managing-regions.md) for information on creating, deleting, and managing region configurations. 6 | 7 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/provision-memory-mode.md: -------------------------------------------------------------------------------- 1 | # Provision Memory Mode 2 | 3 | Creates a memory allocation goal for a percentage (0-100) of total capacity to be used in Memory Mode and the rest to be either reserved or used for App Direct mode. 4 | 5 | ## **Examples** 6 | 7 | Configures all the PMem module capacity in Memory Mode. 8 | 9 | ``` 10 | $ ipmctl create -goal MemoryMode=100 11 | ``` 12 | 13 | By default, if a goal is set for any percentage less than 100%, the remaining capacity will be configured to App Direct mode. See the [Mixed Mode](provision-mixed-mode.md) section for examples. 14 | -------------------------------------------------------------------------------- /ipmctl-user-guide/instrumentation/README.md: -------------------------------------------------------------------------------- 1 | # Instrumentation 2 | 3 | The instrumentation feature allows access to and monitoring of sensors on the persistent memory modules.
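For a quick look at the current readings for all manageable modules, the show sensor command (covered below) can be run with no targets:

```
$ sudo ipmctl show -sensor
```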
Here are the articles in this section: 4 | 5 | {% content-ref url="show-sensor.md" %} 6 | [show-sensor.md](show-sensor.md) 7 | {% endcontent-ref %} 8 | 9 | {% content-ref url="change-sensor-settings.md" %} 10 | [change-sensor-settings.md](change-sensor-settings.md) 11 | {% endcontent-ref %} 12 | 13 | {% content-ref url="show-device-performance.md" %} 14 | [show-device-performance.md](show-device-performance.md) 15 | {% endcontent-ref %} 16 | 17 | -------------------------------------------------------------------------------- /ipmctl-user-guide/security/README.md: -------------------------------------------------------------------------------- 1 | # Security 2 | 3 | Intel DC persistent memory modules support data-at-rest security by encrypting the data stored in the persistent regions of the DIMM. The CLI only supports transitioning to the Disabled Security State using the Change Device Security command. 4 | 5 | > NOTE: Security commands are subject to OS Vendor \(OSV\) support; if the OSV does not provide support, they will return "Not Supported." An exception: if the module is in the Unlocked Security State, transitioning to Disabled is permitted. 6 | 7 | For further security information, refer to the [Managing NVDIMM Security](../../ndctl-users-guide/managing-nvdimm-security.md) section of the NDCTL user guide. 8 | 9 | -------------------------------------------------------------------------------- /getting-started-guide/installing-ndctl.md: -------------------------------------------------------------------------------- 1 | # Installing NDCTL 2 | 3 | The `ndctl` utility is used to manage the libnvdimm \(non-volatile memory device\) sub-system in the Linux Kernel. It is required for several Persistent Memory Development Kit \(PMDK\) features if compiling from source. If ndctl is not available, the PMDK may not build all components and features. 4 | 5 | Refer to the [installation instructions](https://docs.pmem.io/ndctl-user-guide/installing-ndctl) in the NDCTL User Guide for [installing prebuilt packages](https://docs.pmem.io/ndctl-user-guide/installing-ndctl/installing-ndctl-packages-on-linux) or [building ndctl from source](https://docs.pmem.io/ndctl-user-guide/installing-ndctl/installing-ndctl-from-source-on-linux). 6 | 7 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/virtualization/README.md: -------------------------------------------------------------------------------- 1 | # Virtualization 2 | 3 | This section describes how support for physical and emulated persistent memory devices, commonly referred to as Non-Volatile DIMMs \(NVDIMMs\), is provided through various virtualization and hypervisor technologies. The table below shows the supported features of each technology.
4 | 5 | | | NVDIMM | Regions | Namespaces | FSDax | DevDax | Persistent Pools | 6 | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | 7 | | [QEMU Virtualization](qemu.md) | Yes | Yes | Yes | Yes | Yes | Yes | 8 | | Docker Containers | No | No | No | Yes\* | Yes\* | Yes | 9 | | [VMware vSphere](vmware-vsphere-esxi.md) | Yes | Yes | Yes | Yes | Yes | Yes | 10 | 11 | 12 | 13 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/provision-mixed-mode.md: -------------------------------------------------------------------------------- 1 | # Provision Mixed Mode 2 | 3 | Creates a memory allocation goal for a percentage (0-100) of total capacity to be used in Memory Mode and the rest to be either reserved or used for App Direct mode. 4 | 5 | ## **Examples** 6 | 7 | Configure 20% of the available capacity on each persistent memory module in Memory Mode, with the remainder as non-interleaved App Direct. 8 | 9 | ``` 10 | $ ipmctl create -goal MemoryMode=20 PersistentMemoryType=AppDirectNotInterleaved 11 | ``` 12 | 13 | Configure the persistent memory modules with 25% of the capacity in Memory Mode, 25% reserved, and the remaining 50% as interleaved App Direct by default. 14 | 15 | ``` 16 | $ ipmctl create -goal MemoryMode=25 Reserved=25 17 | ``` 18 | -------------------------------------------------------------------------------- /ipmctl-user-guide/installing-ipmctl/installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md: -------------------------------------------------------------------------------- 1 | # Installing IPMCTL on Microsoft Windows using the MSI Installer 2 | 3 | Step 1) The latest ipmctl Windows installer can be downloaded from the “Releases” section of the GitHub project page ([https://github.com/intel/ipmctl/releases](https://github.com/intel/ipmctl/releases)) as shown in Figure 1. 4 | 5 | ![Figure 1: ipmctl releases on GitHub](../.gitbook/assets/image.png) 6 | 7 | Download the `ipmctl_windows_install-.exe` file. 8 | 9 | Step 2) Run the `ipmctl_windows_install-.exe` file. The installer makes the ipmctl utility available in PowerShell to the system administrator. 10 | 11 | Step 3) Go to the [Basic Usage](../basic-usage.md) section of this user guide for more information. 12 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/delete-memory-allocation-goal.md: -------------------------------------------------------------------------------- 1 | # Delete Memory Allocation Goal 2 | 3 | Deletes the memory allocation goal from one or more persistent memory modules. This command deletes a memory allocation goal request that has not yet been processed by the BIOS. 4 | 5 | ``` 6 | $ ipmctl delete [OPTIONS] -goal [TARGETS] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm [(DimmIDs)]`: Restricts the operation to specific DIMMs by optionally supplying the DIMM target and one or more comma-separated DIMM identifiers. The default is all manageable modules. 12 | * `-socket (SocketIDs)`: Restricts the operation to the DIMMs installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is all sockets.
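For example, these targets can be combined with the command; the sketch below removes a pending goal from two specific modules only (the DimmIDs are illustrative):

```
$ ipmctl delete -goal -dimm 0x0001,0x0011
```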
13 | 14 | ## **Example** 15 | 16 | ``` 17 | $ ipmctl delete -goal 18 | ``` 19 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/dump-memory-allocation-settings.md: -------------------------------------------------------------------------------- 1 | # Dump Memory Allocation Settings 2 | 3 | Store the currently configured memory allocation settings for all persistent memory modules in the system to a file in order to replicate the configuration elsewhere. Apply the stored memory allocation settings using the command [Load Memory Allocation Goal](load-memory-allocation-goal.md) 4 | 5 | ``` 6 | $ ipmctl dump [OPTIONS] -destination (path) -system -config 7 | ``` 8 | 9 | ## **Example** 10 | 11 | ``` 12 | $ ipmctl dump -destination config.txt -system -config 13 | ``` 14 | 15 | ## **Limitations** 16 | 17 | Only memory allocation settings for manageable persistent memory modules that have been successfully applied by the BIOS are stored in the file. Unconfigured modules are not included, nor are memory allocation goals that have not been applied. 18 | -------------------------------------------------------------------------------- /ipmctl-user-guide/installing-ipmctl/building-and-installing-ipmctl-on-microsoft-windows-from-source.md: -------------------------------------------------------------------------------- 1 | # Building and Installing IPMCTL on Microsoft Windows from Source 2 | 3 | Step 1) Install Visual Studio 2017 (or newer). Install these optional components: 4 | 5 | * Workloads -> Desktop Development with C++ 6 | * Individual Components -> Compilers, build tools, and runtimes -> Visual C++ tools for CMake 7 | 8 | Step 2) Download the [ipmctl source code from GitHub](https://github.com/intel/ipmctl) 9 | 10 | Step 3) Open the ipmctl folder as a CMake project. See: [https://docs.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio](https://docs.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio) 11 | 12 | Step 4) Begin the build process 13 | 14 | Step 5) Once installed, go to the [Basic Usage](../basic-usage.md) section of this user guide for more information. 
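As a command-line alternative to steps 3 and 4, the same CMake project can also be configured and built from a developer prompt. This is a sketch rather than part of the official instructions; it assumes CMake 3.13 or later, and the generator name must match the installed Visual Studio version:

```
> cmake -S . -B build -G "Visual Studio 15 2017 Win64"
> cmake --build build --config Release
```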
15 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/pmdk-directory-structure.md: -------------------------------------------------------------------------------- 1 | # PMDK Directory Structure 2 | 3 | The source tree is organized as follows: 4 | 5 | * **doc** -- man pages describing each library contained here 6 | * **src** -- the source for the libraries 7 | * **src/include** -- public header files for all the libraries 8 | * **src/benchmarks** -- benchmarks used by development team 9 | * **src/examples** -- example programs demonstrating the PMDK libraries 10 | * **src/freebsd** -- FreeBSD-specific header files 11 | * **src/test** -- unit tests used by development team 12 | * **src/tools** -- various tools developed for PMDK 13 | * **src/windows** -- Windows-specific source and header files 14 | * **utils** -- utilities used during build & test 15 | * **CONTRIBUTING.md** -- instructions for people wishing to contribute 16 | * **CODING\_STYLE.md** -- coding standard and conventions for PMDK 17 | 18 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/show-acpi-tables.md: -------------------------------------------------------------------------------- 1 | # Show ACPI Tables 2 | 3 | Shows the system ACPI tables related to persistent memory modules. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -system (NFIT|PCAT|PMTT) 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-system (NFIT|PCAT|PMTT)`: The system ACPI table(s) to display. By default, both the NFIT and PCAT tables are displayed. One of: 12 | * "NFIT": The NVDIMM Firmware Interface Table 13 | * "PCAT": The Platform Capabilities Table 14 | * "PMTT": The Platform Memory Topology Table 15 | 16 | Refer to the ACPI specification for detailed information about the ACPI tables. 17 | 18 | ## **Examples** 19 | 20 | Show the ACPI NFIT 21 | 22 | ``` 23 | $ sudo ipmctl show -system NFIT 24 | ``` 25 | 26 | ## **Return Data** 27 | 28 | Returns the formatted data from the requested ACPI tables and their sub-tables. Refer to the ACPI specification for detailed information about the format of the ACPI tables. 29 | 30 | > Note: All data is presented in ACPI little-endian format. 31 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/provision-app-direct.md: -------------------------------------------------------------------------------- 1 | # Provision App Direct 2 | 3 | Creates a memory allocation goal for App Direct in either one fully interleaved region, or non-interleaved, meaning one region per module.
4 | 5 | ## **Examples** 6 | 7 | Configures all the PMem module capacity in AppDirect mode with all modules in an interleaved set: 8 | 9 | ``` 10 | $ ipmctl create -goal PersistentMemoryType=AppDirect 11 | ``` 12 | 13 | Configures all the PMem module capacity in AppDirect mode with one region per module (not interleaved): 14 | 15 | ``` 16 | $ ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved 17 | ``` 18 | 19 | Configures the PMem module capacity across the entire system with 50% in AppDirect (Interleaved) and 50% reserved: 20 | 21 | ``` 22 | $ ipmctl create -goal PersistentMemoryType=AppDirect Reserved=50 23 | ``` 24 | 25 | Create an Interleaved AppDirect goal using all modules in Socket0: 26 | 27 | ``` 28 | $ ipmctl create -goal -socket 0x0000 PersistentMemoryType=AppDirect 29 | ``` 30 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/virtualization/windows-server-hyper-v.md: -------------------------------------------------------------------------------- 1 | # Windows Server Hyper-V 2 | 3 | Microsoft added JEDEC-compliant NVDIMM-N persistent memory device support in Windows Server 2016 and Windows 10. NVDIMM-P devices, such as the Intel(R) Optane(TM) DC Persistent Memory Modules, are supported in Windows Server 2019 or later. The following link takes you to the documentation for configuring NVDIMMs within Hyper-V. 4 | 5 | {% embed url="https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/persistent-memory-cmdlets" %} 6 | Cmdlets for configuring persistent memory devices for Hyper-V VMs 7 | {% endembed %} 8 | 9 | Dell has published step-by-step guides for provisioning NVDIMMs within Windows Server 2019 for use with Hyper-V guest OSes: 10 | 11 | {% embed url="https://www.dell.com/support/article/us/en/19/how16843/configuring-nvdimm-n-on-poweredge-servers-with-windows-server-2019?lang=en" %} 12 | 13 | {% embed url="https://www.dell.com/support/article/us/en/19/how16794/how-to-configure-persistent-memory-nvdimm-on-windows-server-2019-guest-os?lang=en" %} 14 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/show-device-platform-configuration-data.md: -------------------------------------------------------------------------------- 1 | # Show Device Platform Configuration Data 2 | 3 | Shows the platform configuration data for one or more persistent memory modules. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -dimm (DimmIds) -pcd (Config|LSA) 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm (DimmIDs)`: Show the platform configuration data on specific modules by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to display the platform configuration data for all manageable modules. 12 | * `-pcd (Config|LSA)`: Restricts output to a specific partition of the platform configuration data. The default is to display both.
One of: 12 | * `Config`: Configuration management information 13 | * `LSA`: Namespace label storage area 14 | 15 | ## **Examples** 16 | 17 | Show the configuration information from the platform configuration data for all manageable modules 18 | 19 | ``` 20 | $ sudo ipmctl show -dimm -pcd 21 | ``` 22 | 23 | Show the configuration information from the platform configuration data for module 0x0001 24 | 25 | ``` 26 | $ sudo ipmctl show -dimm 0x0001 -pcd Config 27 | ``` 28 | -------------------------------------------------------------------------------- /ndctl-users-guide/concepts/nvdimm-devices.md: -------------------------------------------------------------------------------- 1 | # NVDIMM Devices 2 | 3 | A generic DIMM device object, named /dev/nmemX, is registered for each physical memory device indicated in the ACPI NFIT table, or other platform NVDIMM resource discovery mechanism. Physical NVDIMMs may have configuration options available via the BIOS. The features and available configuration options will be dependent on the NVDIMM device and BIOS. 4 | 5 | The Linux LIBNVDIMM core provides a built-in driver for these DIMM devices. The driver is responsible for determining if the DIMM implements a namespace label area, and initializing the kernel's in-memory copy of that label data. 6 | 7 | The kernel performs runtime modifications of that data when namespace provisioning actions are taken, and actively blocks user-space from initiating label data changes while the DIMM is active in any region. Disabling a DIMM, after all the regions it is a member of have been disabled, allows userspace to manually update the label data to be consumed when the DIMM is next enabled. 8 | 9 | Refer to [Managing NVDIMMs](../managing-nvdimms.md) for more information on administering NVDIMMs. 10 | 11 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/virtualization/vmware-vsphere-esxi.md: -------------------------------------------------------------------------------- 1 | # VMware vSphere/ESXi 2 | 3 | VMware vSphere 6.7 introduced a new _PMEM Datastore_ which exposes persistent memory to a VM in two different modes. PMem-aware VMs can have direct access to persistent memory, while traditional VMs can use fast virtual disks stored on the PMEM datastore. 4 | 5 | For servers that do not have physical persistent memory DIMMs, vSphere offers a PMEM emulator so the behavior of persistent memory can be evaluated. The emulator uses a chunk of volatile memory and makes it behave like persistent memory. 6 | 7 | For more information, please refer to the latest VMware [vSphere documentation](https://docs.vmware.com/en/VMware-vSphere/).
8 | 9 | ## Reference Material 10 | 11 | * VMware [Persistent Memory Initiative](https://code.vmware.com/persistent-memory-initiative) 12 | * VMware vSphere 6.7 [Using Persistent Memory](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-93E5390A-8FCF-4CE1-8927-9FC36E889D00.html) Documentation 13 | * [VMware vSphere Virtualization of PMEM](https://www.youtube.com/watch?v=r5HLmt0GVRo) \[YouTube\] from the SNIA Persistent Memory Summit 2018 14 | 15 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: Installing the Persistent Memory Development Kit 3 | --- 4 | 5 | # Installing PMDK 6 | 7 | ## Introduction 8 | 9 | The Persistent Memory Development Kit \(PMDK\) is available on Supported Operating Systems in package and source code formats. Some features of the PMDK require additional packages. Prerequisite packages and utilities are described in the [Installing PMDK from Source on Linux and FreeBSD](compiling-pmdk-from-source.md) section. 10 | 11 | ## Contents 12 | 13 | ### Linux 14 | 15 | * [Installing PMDK using Linux Packages](installing-pmdk-using-linux-packages.md) 16 | * [Installing PMDK from Source on Linux and FreeBSD](compiling-pmdk-from-source.md) 17 | 18 | ### Windows 19 | 20 | * [Installing PMDK on Windows](installing-pmdk-on-windows.md) 21 | 22 | ## PMDK Version Convention 23 | 24 | PMDK delivers stable releases for production use, release candidates for early development and testing, and development builds for early access. 25 | 26 | * **Builds** are tagged something like `0.2+b1`, which means _Build 1 on top of version 0.2_ 27 | * **Release Candidates** have a `-rc{version}` tag, e.g. `0.2-rc3`, which means _Release Candidate 3 for version 0.2_. 28 | * **Stable releases** use a _major.minor_ tag like `0.2` 29 | 30 | -------------------------------------------------------------------------------- /getting-started-guide/README.md: -------------------------------------------------------------------------------- 1 | # Getting Started Guide 2 | 3 | This document contains instructions for installing and configuring the Persistent Memory Development Kit (PMDK) and the Non-Volatile Device Control (NDCTL) software. It is designed to get users up and running quickly. It describes how to create development environments using physical NVDIMMs or emulated persistent memory devices for rapid application development. 4 | 5 | ## Reading Collection 6 | 7 | The following is a list of documents in this collection in the suggested reading order: 8 | 9 | * [**Getting Started Guide**](./) (this document): Describes how to install and configure the Persistent Memory Development Kit (PMDK) and Non-Volatile Device Control (NDCTL). It is designed to get users up and running quickly with the software. 10 | * [**NDCTL User Guide**](https://docs.pmem.io/ndctl-user-guide/): Describes how to use the ndctl, daxctl, daxio, and cxl utilities to configure, manage, and monitor real or emulated Non-Volatile Devices. 11 | * [**IPMCTL User Guide**](https://docs.pmem.io/ipmctl-user-guide/): Describes how to use the Intel Optane DC Persistent Memory utility called ipmctl to configure, manage, and monitor the modules. 12 | * [**Changelog**](broken-reference): Describes the historical log or record of all notable changes made to the projects.
13 | -------------------------------------------------------------------------------- /ipmctl-user-guide/support-and-maintenance/version-and-firmware.md: -------------------------------------------------------------------------------- 1 | # Version and Firmware 2 | 3 | ## Get Version 4 | 5 | Shows the persistent memory module host software versions 6 | 7 | ``` 8 | ipmctl version 9 | ``` 10 | 11 | ## Show Device Firmware 12 | 13 | Shows detailed information about the firmware on one or more modules 14 | 15 | ``` 16 | $ ipmctl show [OPTIONS] -firmware [TARGETS] 17 | ``` 18 | 19 | ## Update Firmware 20 | 21 | Updates the firmware on one or more modules 22 | 23 | ``` 24 | $ ipmctl load [OPTIONS] -source (path) -dimm (DimmIds) [TARGETS] 25 | ``` 26 | 27 | > NOTE: If Address Range Scrub (ARS) is in progress on any target DIMM, an attempt will be made to abort ARS and then proceed with the firmware update. 28 | > 29 | > NOTE: A reboot is required to activate the updated firmware image and is recommended to ensure ARS runs to completion. 30 | 31 | ### **Examples** 32 | 33 | Update the firmware on all modules in the system to the image in sourcefile.pkg on the next power cycle. 34 | 35 | ``` 36 | $ sudo ipmctl load -source sourcefile.pkg -dimm 37 | ``` 38 | 39 | Check the firmware image in sourcefile.pkg and retrieve the version. 40 | 41 | ``` 42 | $ sudo ipmctl load -examine -source sourcefile.pkg -dimm 43 | ``` 44 | 45 | > NOTE: Once a firmware image is staged for execution, a power cycle is required before another firmware image of the same type (production or debug) can be staged for execution using this command. 46 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/delete-device-platform-configuration-data.md: -------------------------------------------------------------------------------- 1 | # Delete Device Platform Configuration Data 2 | 3 | When `Config` is specified, the `Current`, `Input Data Size`, `Output Data Size` and `Start Offset` values in the Configuration header are set to zero, making those tables invalid. This action can be useful before moving modules from one system to another, as goal creation rules may restrict provisioning DIMMs with an existing configuration. 4 | 5 | > Warning: This command may result in data loss. Data should be backed up to other storage before executing this command. 6 | 7 | ``` 8 | ipmctl delete [OPTIONS] -dimm (DimmIds) -pcd (Config) 9 | ``` 10 | 11 | ## **Targets** 12 | 13 | * `-dimm (DimmIDs)`: Deletes the PCD data on specific persistent memory modules by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to delete the PCD data for all manageable modules. 14 | * `-pcd Config`: Clears the configuration management information 15 | 16 | ## **Examples** 17 | 18 | Clear the Cin, Cout, Ccur tables from all manageable modules 19 | 20 |
```
$ sudo ipmctl delete -dimm -pcd Config
```
22 | 23 | ## **Limitations** 24 | 25 | The specified module(s) must be manageable by the host software, and if data-at-rest security is enabled, the modules must be unlocked. Any existing namespaces associated with the requested module(s) should be deleted before running this command. 26 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/README.md: -------------------------------------------------------------------------------- 1 | # Debug 2 | 3 | The ipmctl utility provides several debugging features for persistent memory modules. 4 | 5 | For an in-depth explanation for how ipmctl works with the hardware see the [Intel® Optane™ Persistent Memory OS Provisioning Specification](https://cdrdv2.intel.com/v1/dl/getContent/634430), which describes all the firmware interface commands used for this operation. 6 | 7 | Here are the articles in this section: 8 | 9 | {% content-ref url="run-diagnostic.md" %} 10 | [run-diagnostic.md](run-diagnostic.md) 11 | {% endcontent-ref %} 12 | 13 | {% content-ref url="show-error-log.md" %} 14 | [show-error-log.md](show-error-log.md) 15 | {% endcontent-ref %} 16 | 17 | {% content-ref url="dump-debug-log.md" %} 18 | [dump-debug-log.md](dump-debug-log.md) 19 | {% endcontent-ref %} 20 | 21 | {% content-ref url="show-acpi-tables.md" %} 22 | [show-acpi-tables.md](show-acpi-tables.md) 23 | {% endcontent-ref %} 24 | 25 | {% content-ref url="show-device-platform-configuration-data.md" %} 26 | [show-device-platform-configuration-data.md](show-device-platform-configuration-data.md) 27 | {% endcontent-ref %} 28 | 29 | {% content-ref url="delete-device-platform-configuration-data.md" %} 30 | [delete-device-platform-configuration-data.md](delete-device-platform-configuration-data.md) 31 | {% endcontent-ref %} 32 | 33 | {% content-ref url="inject-error.md" %} 34 | [inject-error.md](inject-error.md) 35 | {% endcontent-ref %} 36 | 37 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/dump-debug-log.md: -------------------------------------------------------------------------------- 1 | # Dump Debug Log 2 | 3 | Dumps encoded firmware debug logs from specified persistent memory modules and optionally decodes to human readable text. 4 | 5 | ``` 6 | ipmctl dump [OPTIONS] -destination (file_prefix) [-dict (file)] -debug -dimm (DimmIDs) [PROPERTIES] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-destination (file_prefix)`: The command will create files that use the given filename as a prefix and append the DIMM UID, DIMM handle, debug log source, and the appropriate file type (.bin for encoded logs, .txt for decoded logs) onto the end. 12 | 13 | > file\_prefix\_Uid\_Handle\_logsource.\[bin,txt] 14 | * `-dict (path)`: Optional file path to the dictionary file. If supplied, the command will create both the binary debug log and a text file with decoded log data with the file prefix specified by destination. 15 | * `-dimm (DimmIDs)`: Dumps the debug logs from the specified modules. 16 | 17 | ## **Examples** 18 | 19 | Dump and decode the debug log from persistent memory modules 0x0001 and 0x0011 using the dictionary file. 
20 | 21 | ``` 22 | $ sudo ipmctl dump -destination file_prefix -dict nlog_dict.txt -debug -dimm 0x0001,0x0011 23 | ``` 24 | 25 | ## **Sample Output** 26 | 27 | ``` 28 | Dumped media FW debug logs to file 29 | (file_prefix_8089-A1-1816-00000016_0x0001_media.bin) 30 | Decoded 456 records to file (file_prefix_8089-A1-1816-00000016_0x0001_media.txt) 31 | No spi FW debug logs found 32 | ``` 33 | -------------------------------------------------------------------------------- /ipmctl-user-guide/security/erase-device-data.md: -------------------------------------------------------------------------------- 1 | # Erase Device Data 2 | 3 | Erases the persistent data on one or more modules. 4 | 5 | ```text 6 | ipmctl delete [OPTIONS] -dimm [TARGETS] Passphrase=(string) 7 | ``` 8 | 9 | ### **Targets** 10 | 11 | * `-dimm (DimmIDs)`: Erases specific module device data by supplying one or more comma-separated DimmIDs. 12 | 13 | ### **Properties** 14 | 15 | * `Passphrase`: The current passphrase \(1-32 characters\). If security state is disabled, then a passphrase is not required and will be ignored if supplied. If security state is enabled, then a passphrase must be supplied. 16 | 17 | ### **Examples** 18 | 19 | Security disabled modules: Erase all persistent data on all modules in the system 20 | 21 | ```text 22 | $ sudo ipmctl delete -dimm 23 | ``` 24 | 25 | Security enabled modules: Erase all the persistent data on all modules in the system 26 | 27 | ```text 28 | $ sudo ipmctl delete -dimm Passphrase=123 29 | ``` 30 | 31 | Erase all the persistent data on all modules using the CLI prompt for the current passphrase. 32 | 33 | ```text 34 | $ sudo ipmctl delete -dimm Passphrase="" 35 | ``` 36 | 37 | ### **Limitations** 38 | 39 | * The specified module must be manageable by the host software, have security enabled, and not be in the "Unlocked, Frozen", "Disabled, Frozen", or "Exceeded" lock states. 40 | * The command is subject to OS Vendor \(OSV\) support. If the OSV does not provide support, the command will return "Not Supported." 41 | 42 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/README.md: -------------------------------------------------------------------------------- 1 | # Provisioning 2 | 3 | Intel Optane persistent memory modules can be provisioned into three different operating modes: 4 | 5 | * **App Direct**: Persistent memory is exposed as block devices to the operating system, and DRAM is used as main memory. 6 | * **Memory Mode**: Uses persistent memory as main memory, and DRAM is used as a cache not explicitly managed by the operating system. Data placement is managed by the memory controller. 7 | * **Mixed Mode**: A combination of App Direct and Memory Modes where a percentage of total memory is allocated for Memory Mode, and the rest is used for App Direct. 8 | 9 | You can learn more about each operating mode and which is right for your application in this [video](https://www.youtube.com/watch?v=gqo3gty-R4s). 10 | 11 | To learn more about how ipmctl works with the hardware see the [Intel® Optane™ Persistent Memory OS Provisioning Specification](https://cdrdv2.intel.com/v1/dl/getContent/634430), which describes all the firmware interface commands used for this operation. 12 | 13 | {% hint style="warning" %} 14 | **WARNING:** Provisioning or changing modes may result in data loss. Data should be backed up to other storage before executing this command.
15 | 16 | Changing a memory allocation goal modifies how the platform firmware maps persistent memory in the system address space which may result in data loss or inaccessible data, but does not explicitly delete or modify user data found in persistent memory. 17 | {% endhint %} 18 | -------------------------------------------------------------------------------- /ndctl-users-guide/references.md: -------------------------------------------------------------------------------- 1 | # References 2 | 3 | The following is a list of additional reference material: 4 | 5 | ## Documents 6 | 7 | ACPI v6.0: [http://www.uefi.org/sites/default/files/resources/ACPI\_6.0.pdf ](http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf) 8 | 9 | ACPI Specification \(Latest\): [http://www.uefi.org/specifications](http://www.uefi.org/specifications) 10 | 11 | NVDIMM DSM Interface Specification \(v1.8\): [http://pmem.io/documents/NVDIMM\_DSM\_Interface-V1.8.pdf](http://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf) 12 | 13 | NVDIMM DSM Interface Example: [http://pmem.io/documents/NVDIMM\_DSM\_Interface\_Example.pdf ](http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf) 14 | 15 | NVDIMM Driver Writer's Guide: [http://pmem.io/documents/NVDIMM\_Driver\_Writers\_Guide.pdf](http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf) 16 | 17 | NVDIMM Namespace: [http://pmem.io/documents/NVDIMM\_Namespace\_Spec.pdf ](http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf) 18 | 19 | UEFI Specification \(Latest\): [http://www.uefi.org/specifications](http://www.uefi.org/specifications) 20 | 21 | ## Source Code Repositories 22 | 23 | LIBNVDIMM: [https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git ](https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git) 24 | 25 | LIBNDCTL: [https://github.com/pmem/ndctl.git ](https://github.com/pmem/ndctl.git) 26 | 27 | NDCTL: [https://github.com/pmem/ndctl](https://github.com/pmem/ndctl) 28 | 29 | PMEM: [https://github.com/01org/prd](https://github.com/01org/prd) 30 | 31 | -------------------------------------------------------------------------------- /getting-started-guide/system-requirements.md: -------------------------------------------------------------------------------- 1 | # System Requirements 2 | 3 | The following minimum system requirements are required to support either physical NVDIMMs or emulated NVDIMMs. 4 | 5 | {% tabs %} 6 | {% tab title="Linux" %} 7 | * Linux Kernel Version 4.0 or later 8 | * Kernel Version 4.19 or later is recommended for production 9 | * 4GB of DDR Memory \(Minimum\), 16GB or more recommended for Emulated NVDIMMs. 10 | 11 | The 4.15 Linux kernel introduced a new flag to mmap\(\) called MAP\_SYNC. If the mapping with MAP\_SYNC is successful, PMDK knows that flushing from user space using instructions like CLWB is safe. Otherwise, it falls back to calling msync\(\). The msync\(\) call can be slower for the PMDK-style transactions because they are fine-grained, flushing lots of little stores during a typical transaction. Calling into the kernel has an overhead, as does the code path that msync\(\) takes. 12 | 13 | Setting PMEM\_IS\_PMEM\_FORCE=1 tells the PMDK libraries "even if mmap\(\) with MAP\_SYNC was unsuccessful, pretend it worked and do user space flushing anyway, avoiding calls to msync\(\)". This environment variable is meant for testing, not for production use. Both FSDAX and DEVDAX use user space flushing when it is available. 
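To make the fallback logic concrete, the following is a minimal C sketch (not PMDK code; the file path is illustrative, and on older glibc the `MAP_SYNC` and `MAP_SHARED_VALIDATE` flags may require `<linux/mman.h>`):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    int fd = open("/mnt/pmem/file", O_RDWR);   /* illustrative path on a DAX filesystem */
    if (fd < 0) { perror("open"); return 1; }

    /* MAP_SYNC is only valid in combination with MAP_SHARED_VALIDATE (Linux 4.15+). */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p != MAP_FAILED) {
        p[0] = 42;                  /* flushing from user space (e.g., CLWB) is safe */
    } else {
        /* The kernel or filesystem lacks MAP_SYNC: map normally and call msync(). */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[0] = 42;
        msync(p, len, MS_SYNC);
    }
    munmap(p, len);
    close(fd);
    return 0;
}
```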
14 | {% endtab %} 15 | 16 | {% tab title="Windows" %} 17 | * Windows Server 2016 or later for NVDIMM support 18 | * Windows Server 2019 or later for Intel\(R\) Optane\(TM\) DC Persistent Memory Support 19 | * 4GB of DDR Memory \(Minimum\), 16GB or more recommended for Emulated NVDIMMs 20 | {% endtab %} 21 | {% endtabs %} 22 | 23 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2018, Persistent Memory Programming 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | * Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | * Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /ipmctl-user-guide/module-discovery/show-socket.md: -------------------------------------------------------------------------------- 1 | # Show Socket 2 | 3 | Shows basic information about the physical processors in the host server. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -socket [TARGETS] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-socket (SocketIDs)`: Restricts output to the DIMMs installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is to display all sockets. 12 | 13 | ## **Examples** 14 | 15 | Display information about all the processors. 16 | 17 | ``` 18 | # ipmctl show -socket 19 | 20 | SocketID | MappedMemoryLimit | TotalMappedMemory 21 | ================================================== 22 | 0x0000 | 4608.0 GiB | 851.0 GiB 23 | 0x0001 | 4608.0 GiB | 850.5 GiB 24 | ``` 25 | 26 | List all properties for socket 1. 27 | 28 | ``` 29 | # ipmctl show -socket 1 30 | 31 | SocketID | MappedMemoryLimit | TotalMappedMemory 32 | ================================================== 33 | 0x0001 | 4608.0 GiB | 850.5 GiB 34 | ``` 35 | 36 | Retrieve specific properties for each processor. 
37 | 38 | ``` 39 | # ipmctl show -d MappedMemoryLimit -socket 40 | 41 | ---SocketID=0x0000--- 42 | MappedMemoryLimit=4608.0 GiB 43 | ---SocketID=0x0001--- 44 | MappedMemoryLimit=4608.0 GiB 45 | ``` 46 | 47 | ## **Return Data** 48 | 49 | * The `MappedMemoryLimit` is the maximum amount of memory that is allowed to be mapped into the system physical address space for this processor based on its SKU. 50 | * `TotalMappedMemory` is the total amount of memory that is currently mapped into the system physical address space for this processor. 51 | -------------------------------------------------------------------------------- /ipmctl-user-guide/basic-usage.md: -------------------------------------------------------------------------------- 1 | # Basic Usage 2 | 3 | The `ipmctl` utility has many options. A complete list of commands can be shown by running `ipmctl` with no arguments, running `ipmctl help`, or reading the `ipmctl(1)` man page. Running `ipmctl` requires root privileges. `ipmctl` can also be used from UEFI. Namespaces are created using `ipmctl` at the UEFI level or with the [ndctl utility](https://github.com/sscargal/pmem-docs-ipmctl-user-guide/tree/f25a04768fa69975fc7b10ea1818b460255f1b79/getting-started-guide/what-is-ndctl.md). 4 | 5 | Usage: 6 | 7 | ``` 8 | ipmctl COMMAND [OPTIONS] [TARGETS] [PROPERTIES] 9 | ``` 10 | 11 | Items in square brackets `[..]` are optional. Options, targets, and property values are separated by a pipe `|` meaning "or", and the default value is italicized. Items in parentheses `(..)` indicate a user-supplied value. 12 | 13 | `ipmctl` commands include: 14 | 15 | * create 16 | * delete 17 | * dump 18 | * help 19 | * load 20 | * set 21 | * show 22 | * start 23 | * version 24 | 25 | More information can be shown for each command using the `-verbose` flag, which is helpful for debugging. 26 | 27 | A video recorded by Lenovo shows [How to use ipmctl commands to monitor Intel® Optane™ DC Persistent Memory Module health status on Microsoft Windows](https://www.youtube.com/watch?v=pzSsdcfL-vg). It provides an introduction to using ipmctl. The commands are the same on Linux. 28 | 29 | To learn more about how ipmctl works with the hardware, see the [Intel® Optane™ Persistent Memory OS Provisioning Specification](https://cdrdv2.intel.com/v1/dl/getContent/634430), which describes all the firmware interface commands used for this operation. 30 | -------------------------------------------------------------------------------- /ipmctl-user-guide/README.md: -------------------------------------------------------------------------------- 1 | # IPMCTL User Guide 2 | 3 | ## Introduction 4 | 5 | `ipmctl` is an open source utility created and maintained by Intel to manage Intel® Optane™ persistent memory modules. `ipmctl` works on both Linux and Windows. The full project is open source and can be seen on [GitHub](https://github.com/intel/ipmctl). In this guide we will refer to Intel® Optane™ memory modules simply as _modules_ or _persistent memory modules_. 6 | 7 | `ipmctl` refers to the following interface components: 8 | 9 | * `libipmctl`: An Application Programming Interface (API) library for managing persistent memory modules. 10 | * `ipmctl`: A Command Line Interface (CLI) application for configuring and managing persistent memory modules from the command line. 11 | * `ipmctl-monitor`: A monitor daemon/system service for monitoring the health and status of persistent memory modules.
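For example, a quick way to confirm the CLI component is installed and working is to query its version; the reported version string will vary by release:

```
$ sudo ipmctl version
```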
12 | 13 | Functionality includes: 14 | 15 | * Discover Intel Optane persistent memory modules on the platform 16 | * Provision the platform memory configuration 17 | * Learn more about operating modes in this [video](https://www.youtube.com/watch?v=gqo3gty-R4s) 18 | * View and update module firmware 19 | * Configure data-at-rest security 20 | * Monitor module health 21 | * Track performance of modules 22 | * Debug and troubleshoot modules 23 | 24 | Architecture Diagram: 25 | 26 | ![](.gitbook/assets/capture.PNG) 27 | 28 | To learn more about how ipmctl works with the hardware, see the [Intel® Optane™ Persistent Memory OS Provisioning Specification](https://cdrdv2.intel.com/v1/dl/getContent/634430), which describes all the firmware interface commands used for this operation. 29 | -------------------------------------------------------------------------------- /ipmctl-user-guide/instrumentation/change-sensor-settings.md: -------------------------------------------------------------------------------- 1 | # Change Sensor Settings 2 | 3 | Changes the non-critical threshold or enabled state for one or more persistent memory module sensors. Use the command [Show Sensor](show-sensor.md) to view the current settings. 4 | 5 | ``` 6 | $ ipmctl set [OPTIONS] -sensor (SENSORS) [TARGETS] NonCriticalThreshold=(temperature) EnabledState=(0|1) 7 | ``` 8 | 9 | ## **Sensors** 10 | 11 | * `MediaTemperature`: The current module media temperature in Celsius. Valid values: 0-2047. 12 | * `ControllerTemperature`: The current module controller temperature in Celsius. Valid values: 0-2047. 13 | * `PercentageRemaining`: Remaining module life as a percentage value of factory expected life span. Valid values: 1-99. 14 | 15 | ## **Targets** 16 | 17 | * `-dimm (DimmIDs)`: Changes the sensor settings on specific modules by supplying one or more comma-separated DimmIDs. The default is to change the settings on all manageable modules. 18 | 19 | ## **Properties** 20 | 21 | * `NonCriticalThreshold`: The upper (for temperatures) or lower (for spare capacity) non-critical alarm threshold of the sensor. If the current value of the sensor is at or above the threshold value for thermal sensors, or below it for spare capacity, the sensor will indicate a "NonCritical" state. Temperatures may be specified in degrees Celsius to a precision of 1/16 of a degree. 22 | * `EnabledState`: Enable or disable the non-critical threshold alarm. One of: 23 | * "0": Disable 24 | * "1": Enable 25 | 26 | ## **Examples** 27 | 28 | Change the media temperature threshold to 51 Celsius on the specified module and enable the alarm. 29 | 30 | ``` 31 | ipmctl set -sensor MediaTemperature -dimm 0x0001 NonCriticalThreshold=51 EnabledState=1 32 | ``` 33 | -------------------------------------------------------------------------------- /ipmctl-user-guide/instrumentation/show-device-performance.md: -------------------------------------------------------------------------------- 1 | # Show Device Performance 2 | 3 | Shows performance metrics for one or more persistent memory modules. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -performance [METRICS] [TARGETS] 7 | ``` 8 | 9 | ## **Metrics** 10 | 11 | Shows output for a specific performance metric by supplying one of the following metric names: 12 | 13 | * `MediaReads`: Number of 64 byte reads from media on the module since last AC cycle. 14 | * `MediaWrites`: Number of 64 byte writes to media on the module since last AC cycle. 15 | * `ReadRequests`: Number of DDRT read transactions the module has serviced since last AC cycle.
16 | * `WriteRequests`: Number of DDRT write transactions the module has serviced since last AC cycle. 17 | * `TotalMediaReads`: Number of 64 byte reads from media on the module over its lifetime. 18 | * `TotalMediaWrites`: Number of 64 byte writes to media on the module over its lifetime. 19 | * `TotalReadRequests`: Number of DDRT read transactions the module has serviced over its lifetime. 20 | * `TotalWriteRequests`: Number of DDRT write transactions the module has serviced over its lifetime. 21 | 22 | The default is to display all performance metrics. 23 | 24 | ## **Targets** 25 | 26 | * `-dimm (DimmIDs)`: Show the performance metrics of specific modules by supplying one or more comma-separated DimmIDs. The default is to display performance metrics for all manageable modules. 27 | 28 | ## **Examples** 29 | 30 | Show all performance metrics for all modules in the server. 31 | 32 | ``` 33 | $ ipmctl show -dimm -performance 34 | ``` 35 | 36 | Show the number of 64 byte reads since last AC cycle for all modules in the server. 37 | 38 | ``` 39 | $ ipmctl show -dimm -performance MediaReads 40 | ``` 41 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/installing-pmdk-on-windows.md: -------------------------------------------------------------------------------- 1 | # Installing PMDK on Windows 2 | 3 | The recommended and easiest way to install PMDK on Windows is to use Microsoft's vcpkg. Vcpkg is an open source tool and ecosystem created for library management. PMDK requires the following: 4 | 5 | * [MS Visual Studio 2015](https://visualstudio.microsoft.com/) or later 6 | * [Windows SDK 10.0.16299.15](https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk) or later 7 | * perl \(e.g., [ActivePerl](https://www.activestate.com/products/activeperl/) or [StrawberryPerl](http://strawberryperl.com/)\) 8 | * PowerShell 5 or later 9 | 10 | To install the latest PMDK release and link it to your Visual Studio solution, first clone and set up vcpkg on your machine as described in the **Quick Start** section of the [vcpkg GitHub page](https://github.com/Microsoft/vcpkg). Run the following within PowerShell: 11 | 12 | ```text 13 | > git clone https://github.com/Microsoft/vcpkg 14 | > cd vcpkg 15 | > .\bootstrap-vcpkg.bat 16 | > .\vcpkg integrate install 17 | > .\vcpkg install pmdk:x64-windows 18 | ``` 19 | 20 | {% hint style="info" %} 21 | **Note:** The last command can take several minutes while it builds and installs PMDK. 22 | {% endhint %} 23 | 24 | After successful completion of all of the above steps, the libraries are ready to be used in Visual Studio. No additional configuration is required. Create a new project or open an existing project within Visual Studio \(remember to use platform **x64**\), then include the PMDK headers in your project.
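To confirm the installation, vcpkg can list the packages it has installed. A hedged sketch; the exact version string in the output will vary:

```text
> .\vcpkg list pmdk
```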
25 | 26 | ## Additional Resources 27 | 28 | * [Persistent Memory Programming in Windows - NVML Integration](https://docs.microsoft.com/en-us/windows/desktop/persistent-memory-programming-in-windows---nvml-integration) \(NVML was renamed to PMDK\) 29 | 30 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/README.md: -------------------------------------------------------------------------------- 1 | # Linux Environments 2 | 3 | ## Emulating Persistent Memory NVDIMMs on Linux 4 | 5 | Support for physical and emulated persistent memory devices, commonly referred to as Non-Volatile DIMMs \(NVDIMMs\), is present in the Linux Kernel v4.0 or newer. It is recommended to use Kernel version 4.2 or later, where NVDIMM support is enabled by default. Kernel versions 4.0 and 4.1 require manual configuration and re-compiling the Kernel to enable support. 6 | 7 | Linux offers several options to emulate persistent memory for development. The features and functionality vary for each option. The following table describes which features are available for each development environment. 8 | 9 | | | NVDIMM | Regions | Namespaces | FSDax | DevDax | Persistent Pools | 10 | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | 11 | | [memmap Kernel Option](linux-memmap.md) | No | Yes | Yes | Yes | No | Yes | 12 | | [QEMU Virtualization](../virtualization/qemu.md) | Yes | Yes | Yes | Yes | Yes | Yes | 13 | | Docker Containers | No | No | No | Yes\* | Yes\* | Yes | 14 | | [VMWare VSphere](../virtualization/vmware-vsphere-esxi.md) | Yes | Yes | Yes | Yes | Yes | Yes | 15 | 16 | {% hint style="info" %} 17 | Because emulation of persistent memory uses Volatile DRAM, the performance of emulated NVDIMMs will not match the performance of physical NVDIMMs. It is not recommended to rely on performance data when using emulated NVDIMMs. 18 | {% endhint %} 19 | 20 | {% hint style="info" %} 21 | Data stored on emulated NVDIMMs is lost when the system is power-cycled. Do not store critical data on emulated persistent memory. 22 | {% endhint %} 23 | 24 | \(\*\) Devices are passed through to the guest from the host. The guest cannot directly manage the devices. 25 | 26 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/show-error-log.md: -------------------------------------------------------------------------------- 1 | # Show Error Log 2 | 3 | Shows thermal or media errors on the specified persistent memory modules. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -error (Thermal|Media) [TARGETS] [PROPERTIES] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm (DimmIDs)`: Show only the events on specific modules by supplying the DIMM target and one or more comma-separated DimmIDs. 12 | 13 | ## **Properties** 14 | 15 | * `SequenceNumber`: Error log entries are stored with a sequence number starting with 1 and rolling over back to 1 after 65535. Limit the error log entries returned by providing a sequence number. Only errors with a sequence number equal to or higher than that provided will be returned. The default is 1. 16 | * `Level`: Severity level of errors to be fetched. One of: 17 | * "High": High severity errors (default) 18 | * "Low": Low severity errors 19 | * `Count`: Max number of error entries to be fetched and printed. The default is 8 for media errors and 16 for thermal errors. These properties can be combined with a target, as shown in the sketch after this list.
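For instance, a sketch combining a target with properties to narrow the output to five media errors on one module, starting at sequence number 10:

```
$ sudo ipmctl show -error Media -dimm 0x0001 SequenceNumber=10 Count=5
```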
20 | 21 | ## **Examples** 22 | 23 | Show all high thermal error log entries 24 | 25 | ``` 26 | $ sudo ipmctl show -error Thermal Level=High 27 | ``` 28 | 29 | Show all low media error log entries 30 | 31 | ``` 32 | $ sudo ipmctl show -error Media Level=Low 33 | ``` 34 | 35 | ## **Sample Output** 36 | 37 | ``` 38 | Thermal Error occurred on Dimm (DimmID): 39 | System Timestamp : 1527273299 40 | Temperature : 88C 41 | Reported : 4 - Critical 42 | Temperature Type : 0 - Media Temperature 43 | Sequence Number : 1 44 | ``` 45 | 46 | ``` 47 | Media Error occurred on Dimm (DimmID): 48 | System Timestamp : 1527266471 49 | DPA : 0x000014c0 50 | PDA : 0x00000600 51 | Range : 4B 52 | Error Type : 4 - Locked/Illegal Access 53 | Error Flags : DPA Valid 54 | Transaction Type : 11 - CSR Write 55 | Sequence Number : 2 56 | ``` 57 | -------------------------------------------------------------------------------- /ipmctl-user-guide/module-discovery/README.md: -------------------------------------------------------------------------------- 1 | # Module Discovery 2 | 3 | Persistent memory modules are uniquely referenced by one of two IDs: `DimmHandle` or `DimmUID`. Either ID may be used for commands that utilize the `-dimm` flag to perform operations on a single module or set of modules. The default is to operate on all modules, which does not require the use of `-dimm`. 4 | 5 | For example, the following commands are all equivalent: 6 | 7 | ``` 8 | $ ipmctl show -d DimmHandle,DimmUID -dimm 8089-a2-1748-00000001 9 | $ ipmctl show -d DimmHandle,DimmUID -dimm 0x0001 10 | $ ipmctl show -d DimmHandle,DimmUID -dimm 1 11 | ``` 12 | 13 | For simplicity, this document will primarily use `DimmUID`. 14 | 15 | The `-dimm` option accepts a single DimmUID or a comma-separated list of DimmUIDs to filter the results. For example, the following `ipmctl show` command displays the `DimmHandle` and `DimmUID` properties for two modules with IDs of `0x0001` and `0x1001`: 16 | 17 | ``` 18 | $ ipmctl show -d DimmHandle,DimmUID -dimm 0x0001,0x1001 19 | 20 | ---DimmID=0x0001--- 21 | DimmHandle=0x0001 22 | DimmUID=8089-a2-1748-00000001 23 | ---DimmID=0x1001--- 24 | DimmHandle=0x1001 25 | DimmUID=8089-a2-1748-00000002 26 | ``` 27 | 28 | ## DimmHandle 29 | 30 | The `DimmHandle` may be used interchangeably with the `DimmUID` and is formatted as 0xABCD, where A, B, C, and D are defined as follows: 31 | 32 | * A = Socket 33 | * B = Memory Controller 34 | * C = Channel 35 | * D = Slot 36 | 37 | ## DimmUID 38 | 39 | The `DimmUID` is a unique identifier specific to each physical module. The unique identifier of an Intel Optane persistent memory module is formatted as VVVV-ML-MMYYSNSNSNSN or VVVV-SNSNSNSN (if the manufacturing information is not available) where: 40 | 41 | * VVVV = VendorID 42 | * ML = Manufacturing Location 43 | * MMYY = Manufacturing Date 44 | * SNSNSNSN = Serial Number 45 | -------------------------------------------------------------------------------- /ipmctl-user-guide/security/enable-device-security.md: -------------------------------------------------------------------------------- 1 | # Enable Device Security 2 | 3 | Enable data-at-rest security for the persistent memory on one or more persistent memory modules by setting a passphrase. For better passphrase protection, specify an empty string \(e.g., ConfirmPassphrase=""\) to be prompted for the passphrase, or use a file containing the passphrase with the source option.
4 | 5 | ```text 6 | ipmctl set [OPTIONS] -dimm [TARGETS] NewPassphrase=(string) ConfirmPassphrase=(string) 7 | ``` 8 | 9 | ### **Targets** 10 | 11 | * `-dimm (DimmIDs)` 12 | 13 | Set the passphrase on specific modules by supplying one or more comma-separated DimmIDs. However, this is not recommended as it may put the system in an undesirable state. The default is to set the passphrase on all manageable modules. 14 | 15 | ### **Properties** 16 | 17 | * `NewPassphrase`: The new passphrase \(1-32 characters\). 18 | * `ConfirmPassphrase`: Confirmation of the new passphrase \(1-32 characters and must match NewPassphrase\). 19 | 20 | ### **Examples** 21 | 22 | Set a passphrase on DIMM 0x0001 23 | 24 | ```text 25 | $ sudo ipmctl set -dimm 0x0001 NewPassphrase=123 ConfirmPassphrase=123 26 | ``` 27 | 28 | Set a passphrase on DIMM 0x0001 by supplying the passphrase in the file mypassphrase.file. 29 | 30 | ```text 31 | ipmctl set -source mypassphrase.file -dimm 0x0001 NewPassphrase="" ConfirmPassphrase="" 32 | ``` 33 | 34 | In the previous example, the format of the file would be: 35 | 36 | ```text 37 | #ascii 38 | NewPassphrase=myNewPassphrase 39 | ``` 40 | 41 | ### **Limitations** 42 | 43 | In order to successfully execute this command: 44 | 45 | * The caller must have the appropriate privileges. The specified module must have security disabled and be manageable by the host software. 46 | * There must not be any goal creation pending. 47 | * The command is subject to OS Vendor \(OSV\) support. If the OSV does not provide support, the command will return "Not Supported." 48 | 49 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/README.md: -------------------------------------------------------------------------------- 1 | # Creating Development Environments 2 | 3 | Providing developers and users access to specific environments is an important part of the application development lifecycle. Environments that support active development, testing, User Acceptance Testing \(UAT\), and production provide a full range of functionality. 4 | 5 | Development environments can be created on servers with and without physical NVDIMMs installed. For servers without physical NVDIMMs, NVDIMM functionality can be emulated using volatile DDR memory. Several methods exist to create environments with emulated NVDIMMs. These are described in the following sections. 6 | 7 | Application development using the PMDK can be done using traditional memory-mapped files without emulating NVDIMMs. Such files can exist on any storage media. However, the data consistency assurance embedded within PMDK requires frequent synchronization of data that is being modified. Depending on platform capabilities and the underlying device where the files reside, a different mechanism is used to facilitate synchronization: `msync(2)` for regular hard drives, or a combination of cache flush instructions followed by a memory fence instruction for real persistent memory. Calling `msync` or `fsync` frequently can cause significant I/O performance issues. For this reason, it is not recommended to use this approach for persistent memory application development.
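That said, a plain file on any filesystem remains convenient for functional (not performance) development. A minimal sketch using PMDK's `pmempool` utility, assuming PMDK is installed; the path and size are illustrative:

```text
# Create a 1GiB libpmemobj pool backed by a regular file
$ pmempool create obj --size=1G /tmp/testpool.obj

# Inspect the pool metadata
$ pmempool info /tmp/testpool.obj
```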
8 | 9 | The following sections describe how to create development environments using a variety of technologies for different operating systems: 10 | 11 | Linux 12 | 13 | * [Using the memmap Kernel Option](linux-environments/linux-memmap.md) 14 | 15 | Windows 16 | 17 | * [Windows Server 2016](windows-environments.md) 18 | 19 | [Cloud Environments](cloud-environments/) 20 | 21 | Virtualization 22 | 23 | * [Using QEMU Virtualization](virtualization/qemu.md) 24 | * [Using VMWare VSphere/ESXi](virtualization/vmware-vsphere-esxi.md) 25 | 26 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/concepts.md: -------------------------------------------------------------------------------- 1 | # Concepts 2 | 3 | To provision or change modes, create a **goal configuration** that will take effect after a system reboot. Goals specify which operating mode the modules are to be used in. The goal is stored on the persistent memory modules for the BIOS to read on the next reboot. 4 | 5 | A **region** is a grouping of one or more persistent memory modules. Regions can be created as either non-interleaved, meaning one region per persistent memory module, or interleaved, which creates one large region over all modules in a CPU socket. Regions cannot be created across CPU sockets. 6 | 7 | ![Non-interleaved](https://user-images.githubusercontent.com/21182867/59884137-2a7b8580-936c-11e9-8f6a-e16aa3efcb19.png) ![Fully interleaved](https://user-images.githubusercontent.com/21182867/59884182-5991f700-936c-11e9-88dc-5f483f1d433a.png) 8 | 9 | Many users choose to have one fully-interleaved set across their memory modules because this allows for increased bandwidth performance. Regions can be created or modified using the `ipmctl create -goal` command, or via an option in the BIOS. 10 | 11 | Regions can be divided up into one or more **namespaces**. A namespace defines a contiguously addressed range of non-volatile memory conceptually similar to a disk partition, or an NVM Express namespace. A namespace is the unit of persistent memory storage that appears in the /dev directory as a device that can be used for input and output or partitioned further. Intel recommends using the [ndctl utility](https://docs.pmem.io/ndctl-users-guide) for creating namespaces in Linux and the native PowerShell commands in Microsoft Windows. Namespaces can further be partitioned using commands such as fdisk and parted on Linux or native Microsoft Windows commands and utilities. 12 | 13 | ![Namespaces](https://user-images.githubusercontent.com/21182867/59884230-94942a80-936c-11e9-8b66-ab911a240fb0.png) 14 | 15 | A **label** is similar to a partition table. Intel persistent memory modules support labels that allow regions to be further divided into namespaces. A label contains metadata stored on a persistent memory module. 16 | -------------------------------------------------------------------------------- /ipmctl-user-guide/security/change-device-security.md: -------------------------------------------------------------------------------- 1 | # Change Device Security 2 | 3 | Changes the data-at-rest security lock state for the persistent memory on one or more persistent memory modules. 4 | 5 | ```text 6 | ipmctl set [OPTIONS] -dimm [TARGETS] LockState=(Unlocked|Disabled|Frozen) Passphrase=(string) 7 | ``` 8 | 9 | ### **Targets** 10 | 11 | * `-dimm (DimmIDs)`: Changes the lock state of specific modules by supplying one or more comma-separated DimmIDs.
However, this is not recommended as it may put the system in an undesirable state. The default is to modify all manageable modules. 12 | 13 | ### **Properties** 14 | 15 | * `LockState`: The desired lock state. One of: 16 | * `Disabled`: Removes the passphrase on a module to disable security. Permitted only when LockState is Unlocked. 17 | * `Unlocked`: Unlocks the persistent memory on a locked module. 18 | * `Frozen`: Prevents further lock state changes to the module until the next reboot. 19 | * `Passphrase`: The current passphrase \(1-32 characters\). 20 | 21 | ### **Examples** 22 | 23 | Unlock device 0x0001 24 | 25 | ```text 26 | $ sudo ipmctl set -dimm 0x0001 LockState=Unlocked Passphrase="" 27 | ``` 28 | 29 | Unlock device 0x0001 by supplying the passphrase in the file "mypassphrase.file". 30 | 31 | ```text 32 | ipmctl set -source mypassphrase.file -dimm 0x0001 LockState=Unlocked Passphrase="" 33 | ``` 34 | 35 | In the previous example, the file would be: 36 | 37 | ```text 38 | #ascii 39 | Passphrase=myPassphrase 40 | ``` 41 | 42 | ### **Limitations** 43 | 44 | To successfully execute this command: 45 | 46 | * The caller must have the appropriate privileges and the specified modules must be manageable by the host software, meaning they must: 47 | * have security enabled 48 | * not be in the `Unlocked`, `Frozen`, or `Exceeded` lock states 49 | * not be executing a long operation \(ARS, Overwrite, FWUpdate\) 50 | * The command is subject to OS Vendor \(OSV\) support. If the OSV does not provide support, the command may return "Not Supported." An exception is if the module is Unlocked \(via UEFI or OSV tools\), then transitioning to Disabled is possible regardless of OSV support. 51 | 52 | -------------------------------------------------------------------------------- /ipmctl-user-guide/module-discovery/show-system-capabilities.md: -------------------------------------------------------------------------------- 1 | # Show System Capabilities 2 | 3 | Shows the platform-supported Intel Optane DC persistent memory capabilities across the server. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -system -capabilities 7 | ``` 8 | 9 | ## **Example** 10 | 11 | ``` 12 | $ ipmctl show -system -capabilities 13 | 14 | PlatformConfigSupported=1 15 | Alignment=1.0 GiB 16 | AllowedVolatileMode=Memory Mode 17 | CurrentVolatileMode=1LM 18 | AllowedAppDirectMode=App Direct 19 | ``` 20 | 21 | ## **Return Data** 22 | 23 | * `PlatformConfigSupported`: Whether the platform level configuration of persistent memory modules can be modified with the host software. One of: 24 | * 0: Changes must be made in the BIOS. 25 | * 1: The command Create Memory Allocation Goal is supported. 26 | * `Alignment`: Capacity alignment requirement for all memory types as reported by the BIOS. 27 | * `AllowedVolatileMode`: The volatile mode allowed as determined by BIOS setup. One of: 28 | * 1LM: One-level volatile mode. All memory resources in the platform are independently accessible, and not captive of the other resources. 29 | * Memory Mode: Modules act as system memory under the control of the operating system. In Memory Mode, any DDR in the platform will act as a cache working in conjunction with the persistent memory modules. 30 | * Unknown: The allowed volatile mode cannot be determined. 31 | * `CurrentVolatileMode`: The current volatile mode. One of: 32 | * 1LM: One-level volatile mode. All memory resources in the platform are independently accessible, and not captive of the other resources.
33 | * Memory Mode: Modules act as system memory under the control of the operating system. In Memory Mode, any DDR in the platform will act as a cache working in conjunction with the persistent memory modules. 34 | * Unknown: The current volatile mode cannot be determined. 35 | * `AllowedAppDirectMode`: The App Direct mode allowed as determined by BIOS setup. One of: 36 | * Disabled: App Direct support is currently disabled by the BIOS. 37 | * App Direct: App Direct support is currently enabled by the BIOS. 38 | * Unknown: The current App Direct support cannot be determined. 39 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/show-memory-allocation-goal.md: -------------------------------------------------------------------------------- 1 | # Show Memory Allocation Goal 2 | 3 | Shows the memory allocation goal on one or more persistent memory modules. Once the goal is successfully applied by the BIOS, it is no longer displayed. Use the command Show Memory Resources to view the system-wide memory resources. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -goal [TARGETS] [PROPERTIES] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm [(DimmIDs)]`: Restricts output to specific DIMMs by optionally supplying the DIMM target and one or more comma-separated DIMM identifiers. The default is to display all manageable persistent memory modules with memory allocation goals. 12 | * `-socket (SocketIDs)`: Restricts output to the DIMMs installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is to display all sockets with memory allocation goals. 13 | 14 | ## **Examples** 15 | 16 | ``` 17 | $ ipmctl show -goal 18 | ``` 19 | 20 | ``` 21 | $ ipmctl show -goal -socket 1 22 | ``` 23 | 24 | ## **Return Data** 25 | 26 | * `SocketID`: The processor socket identifier where the module is installed. 27 | * `DimmID`: The persistent memory module identifier. 28 | * `MemorySize`: The module capacity that will be configured in Memory Mode. 29 | * `AppDirect1Size`: The persistent memory module capacity that will be configured as the first App Direct interleave set if applicable. 30 | * `AppDirect2Size`: The persistent memory module capacity that will be configured as the second App Direct interleave set if applicable. 31 | * `Status`: The status of the memory allocation goal. One of: 32 | * `Unknown`: The status cannot be determined. 33 | * `New`: A reboot is required for the memory allocation goal to be processed by the BIOS. 34 | * `Failed - Bad Request`: The BIOS failed to process the memory allocation goal because it was invalid. 35 | * `Failed - Not enough resources`: There were not enough resources for the BIOS to process the memory allocation goal. 36 | * `Failed - Firmware error`: The BIOS failed to process the memory allocation goal due to a firmware error. 37 | * `Failed - Unknown`: The BIOS failed to process the memory allocation goal due to an unknown error. 38 | -------------------------------------------------------------------------------- /ndctl-users-guide/glossary.md: -------------------------------------------------------------------------------- 1 | # Glossary 2 | 3 | **BLK:** Block Mode: A set of one or more programmable memory mapped apertures provided by an NVDIMM to access its media. This indirection precludes the performance benefit of interleaving, but enables NVDIMM-bounded failure modes. 4 | 5 | **BTT:** Block Translation Table: Persistent memory is byte addressable.
Existing software may have an expectation that the power-fail-atomicity of writes is at least one sector, 512 bytes. The BTT is an indirection table with atomic update semantics to front a PMEM/BLK block device driver and present arbitrary atomic sector sizes. 6 | 7 | **DAX:** Direct Access: File system extensions to bypass the page cache and block layer to mmap persistent memory, from a PMEM block device, directly into a process address space. 8 | 9 | **DCR:** NVDIMM Control Region Structure: It defines a vendor-id, device-id, and interface format for a given DIMM. See [ACPI v6.0](http://www.uefi.org/sites/default/files/resources/ACPI_6_0_Errata_A.PDF) Section 5.2.25.5. 10 | 11 | **DPA:** DIMM Physical Address: An NVDIMM-relative offset. With one NVDIMM in the system there would be a 1:1 system-physical-address:DPA association. Once more NVDIMMs are added, a memory controller interleave must be decoded to determine the DPA associated with a given system-physical-address. BLK capacity always has a 1:1 relationship with a single NVDIMM's DPA range. 12 | 13 | **DSM:** Device Specific Method: ACPI method to control a specific device - in this case, the firmware. 14 | 15 | **LABEL:** Metadata stored on an NVDIMM device that partitions and identifies \(persistently names\) storage between PMEM and BLK. It also partitions BLK storage to host BTTs with different parameters per BLK-partition. Note that traditional partition tables, GPT/MBR, are layered on top of a BLK or PMEM device. 16 | 17 | **NVDIMM:** A Non-Volatile DIMM device \(also called a module\) installed into a memory socket on the system board. Data is written to the device and stored persistently, meaning data is retained across power-cycles. 18 | 19 | **PMEM:** Persistent Memory Mode: A system-physical-address range where writes are persistent. A block device composed of PMEM is capable of DAX. A PMEM address range may span an interleave of several NVDIMMs. 20 | 21 | -------------------------------------------------------------------------------- /ipmctl-user-guide/security/change-device-passphrase.md: -------------------------------------------------------------------------------- 1 | # Change Device Passphrase 2 | 3 | Changes the security passphrase on one or more persistent memory modules. For better passphrase protection, specify an empty string \(e.g., Passphrase=""\) to be prompted for the current passphrase, or use a file containing the passphrases with the source option. 4 | 5 | ```text 6 | ipmctl set [OPTIONS] -dimm [TARGETS] Passphrase=(string) NewPassphrase=(string) ConfirmPassphrase=(string) 7 | ``` 8 | 9 | ### **Targets** 10 | 11 | * `-dimm (DimmIDs)`: Changes the passphrase on specific modules by supplying one or more comma-separated persistent memory module identifiers. However, this is not recommended as it may put the system in an undesirable state. The default is to change the passphrase on all manageable modules. 12 | 13 | ### **Properties** 14 | 15 | * `Passphrase`: The current passphrase \(1-32 characters\). 16 | * `NewPassphrase`: The new passphrase \(1-32 characters\). 17 | * `ConfirmPassphrase`: Confirmation of the new passphrase \(1-32 characters and must match NewPassphrase\). 18 | 19 | ### **Examples** 20 | 21 | Change the passphrase from `mypassphrase` to `mynewpassphrase` on all modules.
22 | 23 | ```text 24 | $ sudo ipmctl set -dimm Passphrase=mypassphrase NewPassphrase=mynewpassphrase ConfirmPassphrase=mynewpassphrase 25 | ``` 26 | 27 | Change the passphrase on all modules by having the CLI prompt for the current and new passphrases. 28 | 29 | ```text 30 | $ sudo ipmctl set -dimm Passphrase="" NewPassphrase="" ConfirmPassphrase="" 31 | ``` 32 | 33 | Change the passphrase on all modules by supplying the current and new passphrases from the specified file. 34 | 35 | ```text 36 | $ sudo ipmctl set -source passphrase.file -dimm Passphrase="" NewPassphrase="" ConfirmPassphrase="" 37 | ``` 38 | 39 | In the previous example, the format of the file would be: 40 | 41 | ```text 42 | #ascii 43 | Passphrase=myOldPassphrase 44 | NewPassphrase=myNewPassphrase 45 | ``` 46 | 47 | ### **Limitations** 48 | 49 | * The specified module must be manageable by the host software, have security enabled, and not be in the "Unlocked, Frozen", "Disabled, Frozen", or "Exceeded" lock states. 50 | * The command is subject to OS Vendor \(OSV\) support. If the OSV does not provide support, the command will return "Not Supported." 51 | 52 | -------------------------------------------------------------------------------- /ipmctl-user-guide/provisioning/load-memory-allocation-goal.md: -------------------------------------------------------------------------------- 1 | # Load Memory Allocation Goal 2 | 3 | Load a memory allocation goal from a file collected by the [dump command](dump-memory-allocation-settings.md) onto one or more persistent memory modules. 4 | 5 | {% hint style="warning" %} 6 | **WARNING:** Provisioning or changing modes may result in data loss. Data should be backed up to other storage before executing this command. 7 | 8 | Changing a memory allocation goal modifies how the platform firmware maps persistent memory in the system address space (SPA), which may result in data loss or inaccessible data, but does not explicitly delete or modify user data found in persistent memory. 9 | {% endhint %} 10 | 11 | ``` 12 | $ ipmctl load [OPTIONS] -source (path) -goal [TARGETS] 13 | ``` 14 | 15 | ## **Targets** 16 | 17 | * `-dimm [(DimmIDs)]`: Restricts output to specific DIMMs by optionally supplying the DIMM target and one or more comma-separated DIMM identifiers. The default is to display all manageable persistent memory modules. 18 | * `-socket (SocketIDs)`: Restricts output to the DIMMs installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is to display all sockets. 19 | 20 | ## **Examples** 21 | 22 | Load the configuration settings stored in "config.txt" onto all modules in the system as a memory allocation goal to be applied by the BIOS on the next reboot. 23 | 24 | ``` 25 | $ ipmctl load -source config.txt -goal 26 | ``` 27 | 28 | Load the configuration settings stored in "config.txt" onto modules 1, 2, and 3 in the system as a memory allocation goal to be applied by the BIOS on the next reboot. 29 | 30 | ``` 31 | $ ipmctl load -source config.txt -goal -dimm 1,2,3 32 | ``` 33 | 34 | Load the configuration settings stored in "config.txt" onto all manageable modules on sockets 1 and 2 as a memory allocation goal to be applied by the BIOS on the next reboot. 35 | 36 | ``` 37 | $ ipmctl load -source config.txt -goal -socket 1,2 38 | ``` 39 | 40 | ## **Limitations** 41 | 42 | * The caller must have appropriate privileges. 43 | * The specified modules must be manageable by the host software and must all have the same SKU.
44 | * Existing memory allocation goals that have not been applied and any namespaces associated with the requested modules must be deleted before running this command. 45 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/run-diagnostic.md: -------------------------------------------------------------------------------- 1 | # Run Diagnostic 2 | 3 | Runs a diagnostic test. 4 | 5 | ``` 6 | ipmctl start [OPTIONS] -diagnostic (Quick|Config|Security|FW) -dimm (DimmIDs) 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-diagnostic (Quick|Config|Security|FW)`: Run a specific test by supplying its name. By default, all tests are run. One of: 12 | * `Quick` - This test verifies that the persistent memory module host mailbox is accessible and that basic health indicators can be read and are currently reporting acceptable values. 13 | * `Config` - This test verifies that the BIOS platform configuration matches the installed hardware and the platform configuration conforms to best known practices. 14 | * `Security` - This test verifies that all modules have a consistent security state. It’s a best practice to enable security on all modules rather than just some. 15 | * `FW` - This test verifies that all modules of a given model have consistent firmware installed and other firmware modifiable attributes are set in accordance with best practices. 16 | * Note that the test does not have a means of verifying that the installed firmware is the optimal version for a given Intel Optane DC memory module model - just that it’s been consistently applied across the system. 17 | * `-dimm (DimmIDs)`: Runs a diagnostic test on specific modules by supplying one or more comma-separated DimmIDs. The default is to run the specified tests on all manageable modules. Only valid for the `Quick` diagnostic test. 18 | 19 | ## **Examples** 20 | 21 | Run all diagnostics 22 | 23 | ``` 24 | $ sudo ipmctl start -diagnostic 25 | ``` 26 | 27 | Run the quick check diagnostic on module 0x0001 28 | 29 | ``` 30 | $ sudo ipmctl start -diagnostic Quick -dimm 0x0001 31 | ``` 32 | 33 | ## **Return Data** 34 | 35 | Each diagnostic generates one or more log messages. A successful test generates a single log message per module indicating that no errors were found. A failed test might generate multiple log messages, each highlighting a specific error with all the relevant details. Each log contains the following information: 36 | 37 | ``` 38 | `TestName`: The test name. One of: 39 | - `Quick` 40 | - `Config` 41 | - `Security` 42 | - `FW` 43 | 44 | `State`: The severity of the error. One of: 45 | - `Ok` 46 | - `Warning` 47 | - `Failed` 48 | - `Aborted` 49 | 50 | `Message`: A free form textual description of the error. 51 | ``` 52 | -------------------------------------------------------------------------------- /ipmctl-user-guide/installing-ipmctl/README.md: -------------------------------------------------------------------------------- 1 | # Installing IPMCTL 2 | 3 | ipmctl is available for both Linux and Microsoft Windows. Follow the links below for step-by-step instructions to install ipmctl from available packages or binaries, or to build and install it from source if you require the most recent version.
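On many Linux distributions a prebuilt package is available, as detailed in the linked pages below. A hedged example for Fedora-based systems; the package name and availability may vary by distribution and release:

```text
$ sudo dnf install ipmctl
```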
4 | 5 | ### Linux 6 | 7 | {% content-ref url="installing-ipmctl-packages-on-linux.md" %} 8 | [installing-ipmctl-packages-on-linux.md](installing-ipmctl-packages-on-linux.md) 9 | {% endcontent-ref %} 10 | 11 | {% content-ref url="building-and-installing-ipmctl-from-source-on-linux.md" %} 12 | [building-and-installing-ipmctl-from-source-on-linux.md](building-and-installing-ipmctl-from-source-on-linux.md) 13 | {% endcontent-ref %} 14 | 15 | ### Microsoft Windows 16 | 17 | {% content-ref url="installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md" %} 18 | [installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md](installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md) 19 | {% endcontent-ref %} 20 | 21 | {% content-ref url="building-and-installing-ipmctl-on-microsoft-windows-from-source.md" %} 22 | [building-and-installing-ipmctl-on-microsoft-windows-from-source.md](building-and-installing-ipmctl-on-microsoft-windows-from-source.md) 23 | {% endcontent-ref %} 24 | 25 | ## Releases 26 | 27 | ipmctl supports all product generations for Intel Optane persistent memory and is backward compatible with older generations. The table describes the ipmctl version required for each product generation. 28 | 29 | | ipmctl version | Intel Optane Persistent Memory Generation | 30 | | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 31 | | 01.00.00.xxxx |
<p>1st Generation Intel Optane Persistent Memory Series 100</p><p>Code name: Apache Pass</p>
| 32 | | 02.00.00.xxxx | <p>1st Generation Intel Optane Persistent Memory Series 100</p><p>Code name: Apache Pass<br>Recommend using v02.00.00.3693 or later</p><p>2nd Generation Intel Optane Persistent Memory Series 200</p><p>Code Name: Barlow Pass</p>
| 33 | 34 | -------------------------------------------------------------------------------- /ipmctl-user-guide/SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Table of contents 2 | 3 | * [IPMCTL User Guide](README.md) 4 | * [Installing IPMCTL](installing-ipmctl/README.md) 5 | * [Installing IPMCTL packages on Linux](installing-ipmctl/installing-ipmctl-packages-on-linux.md) 6 | * [Building and Installing IPMCTL from Source on Linux](installing-ipmctl/building-and-installing-ipmctl-from-source-on-linux.md) 7 | * [Installing IPMCTL on Microsoft Windows using the MSI Installer](installing-ipmctl/installing-ipmctl-on-microsoft-windows-using-the-msi-installer.md) 8 | * [Building and Installing IPMCTL on Microsoft Windows from Source](installing-ipmctl/building-and-installing-ipmctl-on-microsoft-windows-from-source.md) 9 | * [Basic Usage](basic-usage.md) 10 | * [Module Discovery](module-discovery/README.md) 11 | * [Show System Capabilities](module-discovery/show-system-capabilities.md) 12 | * [Show Socket](module-discovery/show-socket.md) 13 | * [Show Topology](module-discovery/show-topology.md) 14 | * [Show Memory Resources](module-discovery/show-memory-resources.md) 15 | * [Show Device](module-discovery/show-device.md) 16 | * [Provisioning](provisioning/README.md) 17 | * [Concepts](provisioning/concepts.md) 18 | * [Create Memory Allocation Goal](provisioning/create-memory-allocation-goal.md) 19 | * [Provision App Direct](provisioning/provision-app-direct.md) 20 | * [Provision Memory Mode](provisioning/provision-memory-mode.md) 21 | * [Provision Mixed Mode](provisioning/provision-mixed-mode.md) 22 | * [Show Memory Allocation Goal](provisioning/show-memory-allocation-goal.md) 23 | * [Dump Memory Allocation Settings](provisioning/dump-memory-allocation-settings.md) 24 | * [Load Memory Allocation Goal](provisioning/load-memory-allocation-goal.md) 25 | * [Delete Memory Allocation Goal](provisioning/delete-memory-allocation-goal.md) 26 | * [Instrumentation](instrumentation/README.md) 27 | * [Show Sensor](instrumentation/show-sensor.md) 28 | * [Change Sensor Settings](instrumentation/change-sensor-settings.md) 29 | * [Show Device Performance](instrumentation/show-device-performance.md) 30 | * [Debug](debug/README.md) 31 | * [Run Diagnostic](debug/run-diagnostic.md) 32 | * [Show Error Log](debug/show-error-log.md) 33 | * [Dump Debug Log](debug/dump-debug-log.md) 34 | * [Show ACPI Tables](debug/show-acpi-tables.md) 35 | * [Show Device Platform Configuration Data](debug/show-device-platform-configuration-data.md) 36 | * [Delete Device Platform Configuration Data](debug/delete-device-platform-configuration-data.md) 37 | * [Inject Error](debug/inject-error.md) 38 | * [Support and Maintenance](support-and-maintenance/README.md) 39 | * [Show Events](support-and-maintenance/show-events.md) 40 | * [Version and Firmware](support-and-maintenance/version-and-firmware.md) 41 | -------------------------------------------------------------------------------- /ipmctl-user-guide/support-and-maintenance/show-events.md: -------------------------------------------------------------------------------- 1 | # Show Events 2 | 3 | Shows persistent memory module related events. The options, targets, and properties can be used to filter the events. If no filters are provided, the default is to display up to 50 events. Refer to the Event Log Specification for detailed information about events. 
4 | 5 | ``` 6 | ipmctl show [OPTIONS] -event [TARGETS] [PROPERTIES] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm [(DimmIDs)]`: Filter output to events on specific modules by optionally supplying the DIMM target and one or more DimmIDs. 12 | 13 | ## **Properties** 14 | 15 | * `Category`: Filters output to events of a specific category. One of: 16 | * `Diag`: Filters output to diagnostic events. 17 | * `FW`: Filters output to FW consistency diagnostic test events. 18 | * `Config`: Filters output to platform config diagnostic test events. 19 | * `PM`: Filters output to PM meta data diagnostic test events. 20 | * `Quick`: Filters output to quick diagnostic test events. 21 | * `Security`: Filters output to security diagnostic test events. 22 | * `Health`: Filters output to device health events. 23 | * `Mgmt`: Filters output to management software generated events. 24 | * `Severity`: Filters output of events based on the severity of the event. One of: 25 | * `Info`: (Default) Shows informational, warning, and error severity events. 26 | * `Warning`: Shows warning and error events. 27 | * `Error`: Shows error events. 28 | * `ActionRequired`: Filters output to events that require corrective action or acknowledgment. 29 | * `0`: Filters output to only show non-ActionRequired events. 30 | * `1`: Filters output to only show ActionRequired events. 31 | * `Count`: Filters output of events limited to the specified count, starting with the most recent. Count may be a value from 1 to 2,147,483,647. Default = 50. 32 | 33 | ## **Examples** 34 | 35 | Display the 50 most recent events. 36 | 37 | ``` 38 | $ sudo ipmctl show -event 39 | ``` 40 | 41 | Show the 10 most recent error events. Aside from limiting the output to 10 entries, this is equivalent to calling `ipmctl show -error`. 42 | 43 | ``` 44 | $ sudo ipmctl show -event count=10 severity=error 45 | ``` 46 | 47 | ## **Return Data** 48 | 49 | This command displays a table with a row for each event matching the provided filters, with the most recent shown first. 50 | 51 | * `Time`: Time of the event in the format MM:dd:yyyy:hh:mm:ss. 52 | * `EventID`: The event identifier 53 | * `Severity`: The severity of the event 54 | * `ActionRequired`: A flag indicating that the event requires corrective action or acknowledgment. One of: 55 | * `0`: An action is not required. 56 | * `1`: An action is required.
57 | * `Code`: The event code defined by the SW Event Log specification 58 | * `Message`: The event message 59 | -------------------------------------------------------------------------------- /SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Table of contents 2 | 3 | * [Persistent Memory Documentation](README.md) 4 | * [Getting Started Guide](getting-started-guide/README.md) 5 | * [Introduction](getting-started-guide/introduction.md) 6 | * [PMDK Introduction](getting-started-guide/what-is-pmdk.md) 7 | * [NDCTL Introduction](getting-started-guide/what-is-ndctl.md) 8 | * [System Requirements](getting-started-guide/system-requirements.md) 9 | * [Creating Development Environments](getting-started-guide/creating-development-environments/README.md) 10 | * [Linux Environments](getting-started-guide/creating-development-environments/linux-environments/README.md) 11 | * [Using the memmap Kernel Option](getting-started-guide/creating-development-environments/linux-environments/linux-memmap.md) 12 | * [Advanced Topics](getting-started-guide/creating-development-environments/linux-environments/advanced-topics/README.md) 13 | * [Partitioning Namespaces](getting-started-guide/creating-development-environments/linux-environments/advanced-topics/partitioning-namespaces.md) 14 | * [I/O Alignment Considerations](getting-started-guide/creating-development-environments/linux-environments/advanced-topics/i-o-alignment-considerations.md) 15 | * [Windows Environments](getting-started-guide/creating-development-environments/windows-environments.md) 16 | * [Virtualization](getting-started-guide/creating-development-environments/virtualization/README.md) 17 | * [Windows Server Hyper-V](getting-started-guide/creating-development-environments/virtualization/windows-server-hyper-v.md) 18 | * [Using QEMU Virtualization](getting-started-guide/creating-development-environments/virtualization/qemu.md) 19 | * [VMware VSphere/ESXi](getting-started-guide/creating-development-environments/virtualization/vmware-vsphere-esxi.md) 20 | * [Cloud Environments](getting-started-guide/creating-development-environments/cloud-environments/README.md) 21 | * [Microsoft Azure Cloud](getting-started-guide/creating-development-environments/cloud-environments/microsoft-azure-cloud.md) 22 | * [Google Cloud Platform (GCP)](getting-started-guide/creating-development-environments/cloud-environments/google-cloud-platform-gcp.md) 23 | * [Installing NDCTL](getting-started-guide/installing-ndctl.md) 24 | * [Installing PMDK](getting-started-guide/installing-pmdk/README.md) 25 | * [PMDK Directory Structure](getting-started-guide/installing-pmdk/pmdk-directory-structure.md) 26 | * [Installing PMDK using Linux Packages](getting-started-guide/installing-pmdk/installing-pmdk-using-linux-packages.md) 27 | * [Installing PMDK from Source on Linux](getting-started-guide/installing-pmdk/compiling-pmdk-from-source.md) 28 | * [Installing PMDK on Windows](getting-started-guide/installing-pmdk/installing-pmdk-on-windows.md) 29 | * [IPMCTL User Guide](https://docs.pmem.io/ipmctl-user-guide/) 30 | * [NDCTL User Guide](https://docs.pmem.io/ndctl-user-guide/) 31 | -------------------------------------------------------------------------------- /ndctl-users-guide/managing-regions.md: -------------------------------------------------------------------------------- 1 | # Managing Regions 2 | 3 | A region is a grouping of one or more NVDIMMs, or an interleaved set, that can be divided up into one or more Namespaces. 
Regions are of type PMEM or BLK. See [PMEM or BLK](concepts/libnvdimm-pmem-and-blk-modes.md) modes for more information. The type can only be changed using the vendor-specific NVDIMM utility that manages the NVDIMMs and/or a BIOS option if available. 4 | 5 | ## Disabling Regions 6 | 7 | 1\) List all active/enabled regions: 8 | 9 | ```text 10 | # ndctl list -R 11 | { 12 | "dev":"region0", 13 | "size":134217728000, 14 | "available_size":134217728000, 15 | "type":"pmem", 16 | "numa_node":0, 17 | "iset_id":-7501067817058727390, 18 | "state":"disabled", 19 | "persistence_domain":"memory_controller" 20 | } 21 | ``` 22 | 23 | A filtered list of active/enabled regions can be displayed using the `-r` or `--region` option, e.g.: 24 | 25 | ```text 26 | # ndctl list -R -r region0 27 | - or - 28 | # ndctl list -R --region region0 29 | ``` 30 | 31 | 2\) Disable the region: 32 | 33 | ```text 34 | # ndctl disable-region region0 35 | disabled 1 region 36 | ``` 37 | 38 | 3\) Verify the region is disabled by including the `-i` option: 39 | 40 | ```text 41 | # ndctl list -Ri 42 | { 43 | "dev":"region0", 44 | "size":134217728000, 45 | "available_size":134217728000, 46 | "type":"pmem", 47 | "numa_node":0, 48 | "iset_id":-7501067817058727390, 49 | "state":"disabled", 50 | "persistence_domain":"memory_controller" 51 | } 52 | ``` 53 | 54 | A filtered list of disabled/inactive regions can be displayed using the `-r` or `--region` option, e.g.: 55 | 56 | ```text 57 | # ndctl list -Ri -r region0 58 | - or - 59 | # ndctl list -Ri --region region0 60 | ``` 61 | 62 | ## Enabling Regions 63 | 64 | 1\) List disabled/inactive regions: 65 | 66 | ```text 67 | # ndctl list -Ri 68 | { 69 | "dev":"region0", 70 | "size":134217728000, 71 | "available_size":134217728000, 72 | "type":"pmem", 73 | "numa_node":0, 74 | "iset_id":-7501067817058727390, 75 | "state":"disabled", 76 | "persistence_domain":"memory_controller" 77 | } 78 | ``` 79 | 80 | A filtered list of disabled/inactive regions can be displayed using the `-r` or `--region` option, e.g.: 81 | 82 | ```text 83 | # ndctl list -Ri -r region0 84 | - or - 85 | # ndctl list -Ri --region region0 86 | ``` 87 | 88 | 2\) Enable the region: 89 | 90 | ```text 91 | # ndctl enable-region region0 92 | enabled 1 region 93 | ``` 94 | 95 | 3\) Verify the region is enabled using `ndctl list -R`. Note the 'state' field is not displayed for enabled regions. 96 | 97 | ```text 98 | # ndctl list -R 99 | { 100 | "dev":"region0", 101 | "size":134217728000, 102 | "available_size":134217728000, 103 | "type":"pmem", 104 | "numa_node":0, 105 | "iset_id":-7501067817058727390, 106 | "persistence_domain":"memory_controller" 107 | } 108 | ``` 109 | 110 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/windows-environments.md: -------------------------------------------------------------------------------- 1 | # Windows Environments 2 | 3 | ## Windows Server 2019 4 | 5 | ### Storage Spaces Direct 6 | 7 | Storage Spaces Direct has [native support for persistent memory](https://docs.microsoft.com/en-us/windows-server/storage/whats-new-in-storage#storage-spaces-direct). Unlock unprecedented performance with native Storage Spaces Direct support for persistent memory modules, including Intel® Optane™ DC PM and NVDIMM-N. Use persistent memory as cache to accelerate the active working set, or as capacity to guarantee consistent low latency on the order of microseconds.
Manage persistent memory just as you would any other drive in PowerShell or Windows Admin Center. 8 | 9 | Microsoft demonstrated Windows Server 2019 with Intel Optane DC Persistent Memory at Microsoft Ignite 2018 - [https://www.youtube.com/watch?v=8WMXkMLJORc](https://www.youtube.com/watch?v=8WMXkMLJORc) 10 | 11 | ### Hyper-V 12 | 13 | Cmdlets for configuring persistent memory devices for Hyper-V VMs - [https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/persistent-memory-cmdlets](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/persistent-memory-cmdlets) 14 | 15 | Dell has published step-by-step guides for provisioning NVDIMMs within Windows Server 2019 for use with Hyper-V guest OSes: 16 | 17 | {% embed url="https://www.dell.com/support/article/us/en/19/how16843/configuring-nvdimm-n-on-poweredge-servers-with-windows-server-2019?lang=en" %} 18 | 19 | {% embed url="https://www.dell.com/support/article/us/en/19/how16794/how-to-configure-persistent-memory-nvdimm-on-windows-server-2019-guest-os?lang=en" %} 20 | 21 | ## Windows Server 2016 22 | 23 | Step-by-step details are not available at this time. Microsoft will provide detailed documentation. See the resources listed below for information and an overview of the ecosystem from Microsoft. 24 | 25 | * [Persistent Memory Summit 2018 - Windows OS Support for PM](https://www.youtube.com/watch?v=1J3ussLcv64) \[YouTube] - Tom Talpey, Microsoft, delivers an update to Persistent Memory Support in Windows 26 | * Slides are available [here](https://www.snia.org/sites/default/files/PM-Summit/2018/presentations/06\_PM\_Summit\_2018\_Talpey-Final\_Post-CORRECTED.pdf) 27 | * [Update on Windows Persistent Memory Support](https://www.youtube.com/watch?v=bcdf7w2v\_nw) \[YouTube] - Neal Christiansen, Microsoft @ SNIA Storage Developer Conference 2017 28 | * Slides are available [here](https://www.snia.org/sites/default/files/SDC/2017/presentations/Solid\_State\_Stor\_NVM\_PM\_NVDIMM/Christiansen\_Neal\_Update\_on\_Windows\_Persistent\_Memory\_Support.pdf) 29 | * [Persistent Memory in Windows](https://www.snia.org/sites/default/files/PM-Summit/2017/presentations/Tom\_Talpey\_Persistent\_Memory\_in\_Windows\_Server\_2016.pdf) - Tom Talpey, Microsoft @ SNIA Persistent Memory Summit 2017 30 | * [Storage Spaces Direct with Persistent Memory](https://blogs.technet.microsoft.com/filecab/2016/10/17/storage-spaces-direct-with-persistent-memory/) \[Blog] 31 | -------------------------------------------------------------------------------- /getting-started-guide/what-is-ndctl.md: -------------------------------------------------------------------------------- 1 | # NDCTL Introduction 2 | 3 | The Non-Volatile Device Control utility \(ndctl\) manages the LIBNVDIMM Linux Kernel subsystem. The LIBNVDIMM subsystem defines a kernel device model and control message interface for platform NFIT \(NVDIMM Firmware Interface Table\). This interface was first defined by the [ACPI v6.0 specification](http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf). Later versions may enhance or modify this specification. The latest ACPI and UEFI specifications can be found at [http://uefi.org/specifications](http://uefi.org/specifications). 4 | 5 | The LIBNVDIMM subsystem provides support for three types of NVDIMMs, namely, PMEM, BLK, and NVDIMM devices that can simultaneously support both PMEM and BLK mode access.
## Get Started

To get started with the ndctl utility, follow the [Installing NDCTL](https://docs.pmem.io/ndctl-user-guide/installing-ndctl) document, then the [NDCTL User Guide](https://docs.pmem.io/ndctl-user-guide/) and [man pages](https://docs.pmem.io/ndctl-user-guide/man-pages).

## **Supporting Documents**

* NDCTL Project Homepage: [http://pmem.io/ndctl/](http://pmem.io/ndctl/)
* LIBNVDIMM Documentation: [https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt](https://www.kernel.org/doc/Documentation/nvdimm/nvdimm.txt)
* ACPI v6.0: [http://www.uefi.org/sites/default/files/resources/ACPI\_6.0.pdf](http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf)
* Latest ACPI & UEFI Specifications: [http://uefi.org/specifications](http://uefi.org/specifications)
* NVDIMM Namespace: [http://pmem.io/documents/NVDIMM\_Namespace\_Spec.pdf](http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf)
* DSM Interface Specification \(v1.8\): [http://pmem.io/documents/NVDIMM\_DSM\_Interface-V1.8.pdf](http://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf)
* DSM Interface Example: [http://pmem.io/documents/NVDIMM\_DSM\_Interface\_Example.pdf](http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf)
* Driver Writer's Guide: [http://pmem.io/documents/NVDIMM\_Driver\_Writers\_Guide.pdf](http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf)

**Source Code Repositories**

* LIBNVDIMM: [https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git](https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git)
* LIBNDCTL: [https://github.com/pmem/ndctl.git](https://github.com/pmem/ndctl.git)
* PMEM: [https://github.com/01org/prd](https://github.com/01org/prd)

--------------------------------------------------------------------------------
/ndctl-users-guide/concepts/nvdimm-namespaces.md:
--------------------------------------------------------------------------------

# Namespaces

The capacity of an NVDIMM REGION \(contiguous span of persistent memory\) is accessed via one or more NAMESPACE devices. A REGION is the Linux term for what ACPI and UEFI call a DIMM-interleave-set, or a system-physical-address-range that is striped \(by the memory controller\) across one or more memory modules.

The UEFI specification defines the _NVDIMM Label Protocol_ as the combination of label area access methods and a data format for provisioning one or more NAMESPACE objects from a REGION. Note that label support is optional and, if Linux does not detect the label capability, it will automatically instantiate a "label-less" namespace per region.
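Whether a given DIMM carries valid labels can be checked directly \(a sketch, assuming a DIMM device named `nmem0`; output varies by platform and ndctl version\):

```text
# ndctl check-labels nmem0    # verify a valid namespace index block exists
# ndctl read-labels -j nmem0  # dump the label area as JSON
```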
Examples of label-less namespaces are the ones created by the kernel's [memmap](../../getting-started-guide/creating-development-environments/linux-environments/linux-memmap.md) option, or NVDIMMs without a valid _namespace index_ in their label area.

A namespace can be provisioned to operate in one of four modes: fsdax, devdax, sector, or raw. Each is described below, and a short creation sketch follows the list:

* **fsdax:** Filesystem-DAX mode is the default mode of a namespace when specifying `ndctl create-namespace` with no options. It creates a block device \(/dev/pmemX\[.Y\]\) that supports the DAX capabilities of Linux filesystems \(xfs and ext4 to date\). DAX removes the page cache from the I/O path and allows mmap\(2\) to establish direct mappings to persistent memory media. The DAX capability enables workloads / working-sets that would exceed the capacity of the page cache to scale up to the capacity of persistent memory. Workloads that fit in page cache or perform bulk data transfers may not see benefit from DAX. When in doubt, pick this mode.
* **devdax:** Device-DAX mode enables similar mmap\(2\) DAX mapping capabilities as Filesystem-DAX. Instead of a block-device that can support a DAX enabled filesystem, this mode provides a single character device file \(/dev/daxX.Y\). Use this mode to assign persistent memory to a virtual-machine, register persistent memory for RDMA, or when gigantic mappings are needed.
* **sector:** Use this mode to host legacy filesystems that do not checksum metadata or applications that are not prepared for torn sectors after a crash. The expected usage for this mode is small boot volumes. This mode is compatible with other operating systems.
* **raw:** Raw mode is effectively just a memory disk that does not support DAX. Typically this indicates a namespace that was created by tooling or another operating system that did not know how to create a Linux fsdax or devdax mode namespace. This mode is compatible with other operating systems, but again, does not support DAX operation.
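A minimal provisioning sketch \(assuming an enabled region, `region0`, with available capacity; device and region names will differ on a real system\):

```text
# ndctl create-namespace --mode fsdax --region region0    # block device for a DAX-enabled filesystem
# ndctl create-namespace --mode devdax --region region0   # character device for direct mappings
```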
Figure 1 below shows a typical Filesystem-DAX \(FSDAX\) configuration with three physical NVDIMMs that have been interleaved. All available capacity has been used to create a single region \(Region0\) and namespace \(Namespace0.0\), on which a DAX-enabled filesystem has been created. The namespace naming convention is commonly X.Y, where X is the region number and Y is the namespace number.

![Figure 1: Common FSDAX Configuration](../../.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax.jpg)

Refer to [Managing Namespaces](../managing-namespaces.md) for information on how to create, delete, edit, and manage the namespace configuration.

--------------------------------------------------------------------------------
/ndctl-users-guide/concepts/README.md:
--------------------------------------------------------------------------------

# Concepts

## Persistent Memory Concepts

This section describes the basic concepts when configuring and managing NVDIMMs. The following terms are used throughout this chapter.

**Interleave Sets:** Two or more NVDIMMs create an N-Way interleave set to stripe read/write operations for increased throughput. 2-Way and 4-Way interleaving are common.

**Region:** A grouping of one or more NVDIMMs, or an interleaved set, that can be divided into one or more Namespaces. Regions are created within interleaved sets.

**Label Storage Area \(LSA\):** Namespaces are defined by Labels, which are stored in the Label Storage Area\(s\).

**Namespace:** Defines a contiguously-addressed range of Non-Volatile Memory conceptually similar to a hard disk partition, SCSI Logical Unit \(LUN\), or an NVM Express namespace. It is the unit of persistent memory storage that appears in /dev as a device usable for I/O.

**Type:** Defines the way in which the persistent memory associated with a Namespace or Region can be accessed. Valid Types are PMEM and BLK:

* **PMEM:** Direct access to the media via load/store operations. DAX is supported.
* **BLK:** Direct access to the media via Apertures \(sliding memory windows\). DAX is not supported.

**Mode:** Defines which NVDIMM software features are enabled for a given Namespace. Namespace Modes include raw, sector, fsdax \(memory\), and devdax \(dax\). Sibling Namespaces of the same parent Region will always have the same Type, but might be configured to have different Modes.

## Configuration Options

This section discusses some of the common configuration options but does not cover all supported configurations.

Using one or more physical NVDIMMs, the available capacity can be configured in many different ways. Figure 1 below shows a typical Filesystem-DAX \(FSDAX\) configuration with three physical NVDIMMs that have been interleaved. All available capacity has been used to create a single region \(Region0\) and namespace \(Namespace0.0\), on which a DAX-enabled filesystem has been created. The namespace naming convention is commonly X.Y, where X is the region number and Y is the namespace number. The persistent memory pool\(s\) are memory mapped to the application, thus allowing direct load/store access to the NVDIMMs.

![Figure 1: Interleaved NVDIMMs with a single Region and Namespace](../../.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax.jpg)

Figure 2 shows a configuration where the region has been partitioned into two namespaces. Each namespace has a single DAX Filesystem.

![Figure 2: Interleaved NVDIMMs with a single Region and two Namespaces](../../.gitbook/assets/draw.io-gitbook-interleaved-dimms-fsdax-1.jpg)

Figure 3 shows a configuration where NVDIMM interleaving has been disabled. In this mode, each NVDIMM becomes addressable with a single region per device. Regions and Namespaces cannot be striped or interleaved across NVDIMMs. It is possible to use software RAID, such as Linux's Device Mapper, to implement mirroring, striping, or concatenation of the /dev/pmem{N} devices.

![Figure 3: Non-Interleaved NVDIMMs, each with their own Region and Namespace](../../.gitbook/assets/draw.io-gitbook-non-interleaved-dimm-fsdax-1.jpg)

The following sections describe each layer in more detail.

--------------------------------------------------------------------------------
/ipmctl-user-guide/provisioning/create-memory-allocation-goal.md:
--------------------------------------------------------------------------------

# Create Memory Allocation Goal

```
ipmctl create [OPTIONS] -goal [TARGETS] [PROPERTIES]
```

The `ipmctl create -goal` command has many options. A complete list of options can be shown by executing `ipmctl create -help`, or by reading the `ipmctl(1)` man page. Once a goal is created, it does not take effect until the system is rebooted. After a reboot, the BIOS configures the requested goal and then clears it.
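A typical end-to-end flow looks like the following sketch (assuming appropriate privileges and manageable modules; output varies by platform):

```
ipmctl create -goal PersistentMemoryType=AppDirect   # stage a 100% App Direct goal
ipmctl show -goal                                    # review the staged goal before rebooting
# ... reboot so the BIOS can apply and clear the goal ...
ipmctl show -memoryresources                         # confirm the resulting allocation
```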
## **Targets**

* `-dimm [(DimmIDs)]`: Creates a memory allocation goal on specific persistent memory modules by optionally supplying one or more comma-separated DimmIDs. The default is to configure all manageable modules on all sockets.
* `-socket (SocketIds)`: Creates a memory allocation goal on the modules on specific sockets by supplying one or more comma-separated SocketIds. The default is to configure all manageable modules on all sockets.

## **Properties**

* `MemoryMode`: Percentage of the total capacity to use in Memory Mode (0-100). Default=0.
* `PersistentMemoryType`: If MemoryMode is not 100%, specify the type of persistent memory to create:
  * `AppDirect`: (default) Create App Direct capacity utilizing hardware interleaving across the requested modules.
  * `AppDirectNotInterleaved`: Create App Direct capacity that is not interleaved with any other modules.
* `NamespaceLabelVersion`: The version of the namespace label storage area (LSA) index block:
  * `1.2`: (default) Defined in UEFI 2.7a, section 13.19
  * `1.1`: Legacy 1.1 namespace label support
* `Reserved`: Reserve a percentage (0-100) of the requested persistent memory App Direct capacity that will not be mapped into the system physical address space and will be presented as `ReservedCapacity` by the Show Device and Show Memory Resources commands.

## **Limitations**

* The caller must have appropriate privileges.
* The specified modules must be manageable by the host software and must all have the same SKU.
* Existing memory allocation goals that have not been applied, and any namespaces associated with the requested modules, must be deleted before running this command.

## Examples

Configures all the PMem module capacity in Memory Mode:

```
ipmctl create -goal MemoryMode=100
```

Configures all the PMem module capacity as App Direct:

```
ipmctl create -goal PersistentMemoryType=AppDirect
```

Configures the capacity on each PMem module with 20% of the capacity in Memory Mode and the remaining as App Direct capacity that does not use hardware interleaving:

```
ipmctl create -goal MemoryMode=20 PersistentMemoryType=AppDirectNotInterleaved
```

Configures the PMem module capacity across the entire system with 50% in App Direct (interleaved) and 50% reserved:

```
ipmctl create -goal PersistentMemoryType=AppDirect Reserved=50
```

Configures the PMem module capacity across the entire system with 25% of the capacity in Memory Mode, 25% reserved, and the remaining 50% as App Direct:
```
ipmctl create -goal MemoryMode=25 PersistentMemoryType=AppDirect Reserved=25
```

Create an interleaved App Direct goal using all modules in Socket0:

```
$ ipmctl create -goal -socket 0x0000 PersistentMemoryType=AppDirect
```

--------------------------------------------------------------------------------
/ndctl-users-guide/man-pages.md:
--------------------------------------------------------------------------------

# NDCTL Man Pages

This section links to the online man pages:

* [ndctl](https://pmem.io/ndctl/ndctl.html) - Manage “libnvdimm” subsystem devices \(Non-volatile Memory\)
* [ndctl-check-labels](https://pmem.io/ndctl/ndctl-check-labels.html) - determine if the given dimms have a valid namespace index block
* [ndctl-check-namespace](https://pmem.io/ndctl/ndctl-check-namespace.html) - check namespace metadata consistency
* [ndctl-create-namespace](https://pmem.io/ndctl/ndctl-create-namespace.html) - provision or reconfigure a namespace
* [ndctl-destroy-namespace](https://pmem.io/ndctl/ndctl-destroy-namespace.html) - destroy the given namespace\(s\)
* [ndctl-disable-dimm](https://pmem.io/ndctl/ndctl-disable-dimm.html) - disable one or more idle dimms
* [ndctl-disable-namespace](https://pmem.io/ndctl/ndctl-disable-namespace.html) - disable the given namespace\(s\)
* [ndctl-disable-region](https://pmem.io/ndctl/ndctl-disable-region.html) - disable the given region\(s\) and all descendant namespaces
* [ndctl-enable-dimm](https://pmem.io/ndctl/ndctl-enable-dimm.html) - enable one or more dimms
* [ndctl-enable-namespace](https://pmem.io/ndctl/ndctl-enable-namespace.html) - enable the given namespace\(s\)
* [ndctl-enable-region](https://pmem.io/ndctl/ndctl-enable-region.html) - enable the given region\(s\) and all descendant namespaces
* [ndctl-freeze-security](https://pmem.io/ndctl/ndctl-freeze-security.html)
* [ndctl-init-labels](https://pmem.io/ndctl/ndctl-init-labels.html) - initialize the label data area on a dimm or set of dimms
* [ndctl-inject-error](https://pmem.io/ndctl/ndctl-inject-error.html) - inject media errors at a namespace offset
* [ndctl-inject-smart](https://pmem.io/ndctl/ndctl-inject-smart.html) - perform smart threshold/injection operations on an NVDIMM
* [ndctl-list](https://pmem.io/ndctl/ndctl-list.html) - print the platform nvdimm device topology and attributes
* [ndctl-load-keys](https://pmem.io/ndctl/ndctl-load-keys.html) - load the master key \(kek\) and encrypted passphrases into the keyring
* [ndctl-monitor](https://pmem.io/ndctl/ndctl-monitor.html) - Monitor the SMART events from NVDIMMs
* [ndctl-read-labels](https://pmem.io/ndctl/ndctl-read-labels.html) - read out the label area on a NVDIMM or set of NVDIMMs
* [ndctl-remove-passphrase](https://pmem.io/ndctl/ndctl-remove-passphrase.html) - Stop an NVDIMM from locking at power-loss and requiring a passphrase to access media
* [ndctl-sanitize-dimm](https://pmem.io/ndctl/ndctl-sanitize-dimm.html) - Perform a cryptographic destruction or overwrite of the contents of the given NVDIMM\(s\)
* [ndctl-setup-passphrase](https://pmem.io/ndctl/ndctl-setup-passphrase.html) - setup and enable the security passphrase for one or more NVDIMMs
* [ndctl-start-scrub](https://pmem.io/ndctl/ndctl-start-scrub.html) - start an Address Range Scrub \(ARS\) operation
* [ndctl-update-firmware](https://pmem.io/ndctl/ndctl-update-firmware.html) - update the firmware on an NVDIMM
* [ndctl-update-passphrase](https://pmem.io/ndctl/ndctl-update-passphrase.html) - update the security passphrase for one or more NVDIMMs
* [ndctl-wait-overwrite](https://pmem.io/ndctl/ndctl-wait-overwrite.html) - wait for an overwrite operation to complete
* [ndctl-wait-scrub](https://pmem.io/ndctl/ndctl-wait-scrub.html) - wait for an Address Range Scrub \(ARS\) operation to complete
* [ndctl-write-labels](https://pmem.io/ndctl/ndctl-write-labels.html) - write data to the label area on a dimm
* [ndctl-zero-labels](https://pmem.io/ndctl/ndctl-zero-labels.html) - zero out the label area on a dimm or set of dimms

--------------------------------------------------------------------------------
/ndctl-users-guide/concepts/regions/untitled.md:
--------------------------------------------------------------------------------

# Regions, Atomic Sectors, and DAX

One of the few reasons to allow multiple BLK namespaces per REGION is so that each BLK-namespace can be configured with a Block Translation Table \(BTT\) with unique atomic sector sizes. While a PMEM device can host a BTT, the LABEL specification does not provide for a sector size to be specified for a PMEM namespace. This is due to the expectation that the primary usage model for PMEM is via DAX, and the BTT is incompatible with DAX. However, for the cases where an application or filesystem still needs atomic sector update guarantees, it can register a BTT on a PMEM device or partition.
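For instance \(a sketch; the region name and sector size are illustrative\), a namespace with BTT-backed atomic sectors is created by selecting the sector mode:

```text
# ndctl create-namespace --mode sector --region region0 --sector-size 4096
```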
## Example NVDIMM Platform

Figure 1 shows an example platform with four NVDIMMs, two integrated memory controllers \(IMC\), and a single CPU.

```text
                            (a)                (b)          DIMM   BLK-REGION
          +-------------------+--------+--------+--------+
+------+  |       pm0.0       | blk2.0 | pm1.0  | blk2.1 |    0      region2
| imc0 +--+- - - region0- - - +--------+        +--------+
+--+---+  |       pm0.0       | blk3.0 | pm1.0  | blk3.1 |    1      region3
   |      +-------------------+--------v        v--------+
+--+---+                               |                 |
| cpu0 |                                     region1
+--+---+                               |                 |
   |      +----------------------------^        ^--------+
+--+---+  |           blk4.0           | pm1.0  | blk4.0 |    2      region4
| imc1 +--+----------------------------|        +--------+
+------+  |           blk5.0           | pm1.0  | blk5.0 |    3      region5
          +----------------------------+--------+--------+
```

_Figure 1: Example sysfs layout_

Each unique interface \(BLK or PMEM\) to DPA space is identified by a region device with a dynamically assigned id \(REGION0 - REGION5\).

1. The first portion of DIMM0 and DIMM1 are interleaved as REGION0. A single PMEM namespace is created in the REGION0-SPA-range that spans most of DIMM0 and DIMM1 with a user-specified name of "pm0.0". Some of that interleaved system-physical-address range is reclaimed as BLK-aperture accessed space starting at DPA-offset \(a\) into each DIMM. In that reclaimed space we create two BLK-aperture "namespaces" from REGION2 and REGION3, where "blk2.0" and "blk3.0" are just human readable names that could be set to any user-desired name in the LABEL.
2. In the last portion of DIMM0 and DIMM1 we have an interleaved system-physical-address range, REGION1, that spans those two DIMMs as well as DIMM2 and DIMM3. Some of REGION1 is allocated to a PMEM namespace named "pm1.0"; the rest is reclaimed in 4 BLK-aperture namespaces \(one for each DIMM in the interleave set\): "blk2.1", "blk3.1", "blk4.0", and "blk5.0".
3. The portions of DIMM2 and DIMM3 that do not participate in the REGION1 interleaved system-physical-address range \(i.e., the DPA addresses past offset \(b\)\) are also included in the "blk4.0" and "blk5.0" namespaces. Note that this example shows that BLK-aperture namespaces don't need to be contiguous in DPA-space.

This bus is provided by the kernel under the device `/sys/devices/platform/nfit_test.0` when CONFIG\_NFIT\_TEST is enabled and the nfit\_test.ko module is loaded. This tests not only LIBNVDIMM but also the acpi\_nfit.ko driver.
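A sketch of exercising the test bus \(assuming a kernel built with CONFIG\_NFIT\_TEST and the matching nfit\_test modules installed\):

```text
# modprobe nfit_test   # creates the nfit_test.0 platform device
# ndctl list -B        # the test bus should now appear in the bus listing
```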
Some of REGION1 is allocated to a PMEM namespace 53 | 54 | named "pm1.0", the rest is reclaimed in 4 BLK-aperture namespaces \(for 55 | 56 | each DIMM in the interleave set\), "blk2.1", "blk3.1", "blk4.0", and 57 | 58 | "blk5.0". 59 | 60 | 3. The portion of DIMM2 and DIMM3 that do not participate in the REGION1 61 | 62 | interleaved system-physical-address range \(i.e. the DPA address past 63 | 64 | offset \(b\) are also included in the "blk4.0" and "blk5.0" namespaces. 65 | 66 | Note, that this example shows that BLK-aperture namespaces don't need to 67 | 68 | be contiguous in DPA-space. 69 | 70 | This bus is provided by the kernel under the device 71 | 72 | `/sys/devices/platform/nfit_test.0` when CONFIG\_NFIT\_TEST is enabled and 73 | 74 | the nfit\_test.ko module is loaded. This not only test LIBNVDIMM but the 75 | 76 | acpi\_nfit.ko driver as well. 77 | 78 | -------------------------------------------------------------------------------- /ipmctl-user-guide/instrumentation/show-sensor.md: -------------------------------------------------------------------------------- 1 | # Show Sensor 2 | 3 | Shows health statistics for one or more persistent memory modules. 4 | 5 | ``` 6 | $ ipmctl show [OPTIONS] -sensor [SENSORS] [TARGETS] 7 | ``` 8 | 9 | ## **Sensors** 10 | 11 | * `Health`: The current module health as reported in the SMART log 12 | * `MediaTemperature`: The current module media temperature in Celsius 13 | * `ControllerTemperature`: The current module controller temperature in Celsius 14 | * `PercentageRemaining`: Remaining module life as a percentage value of factory expected life span 15 | * `LatchedDirtyShutdownCount`: The number of shutdowns without notification over the lifetime of the module 16 | * `Unlatched DirtyShutdownCount`: The number of shutdowns without notification over the lifetime of the module. This counter is the same as LatchedDirtyShutdownCount except it will always be incremented on a dirty shutdown, even if Latch System Shutdown Status was not enabled. 17 | * `PowerOnTime`: The total power-on time over the lifetime of the module 18 | * `UpTime`: The total power-on time since the last power cycle of the module 19 | * `PowerCycles`: The number of power cycles over the lifetime of the module 20 | * `FwErrorCount`: The total number of firmware error log entries 21 | 22 | ## **Targets** 23 | 24 | * `dimm (DimmIDs)`: Show only the sensors on specific modules by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to display sensors for all manageable modules. 25 | 26 | ## **Examples** 27 | 28 | Get all sensor information for all modules 29 | 30 | ``` 31 | $ ipmctl show -sensor 32 | ``` 33 | 34 | Show the media temperature sensor for the specified module 35 | 36 | ``` 37 | $ ipmctl show -sensor MediaTemperature -dimm 0x0011 38 | ``` 39 | 40 | ## **Return Data** 41 | 42 | * `DimmID`: The persistent memory module identifier 43 | * `Type`: The sensor type. Refer to the Sensors list above 44 | * `CurrentValue`: The current reading followed by the units of measurement (e.g., 57 °C or 25%) 45 | * `CurrentState`: The current value in relation to the threshold settings (if supported). One of: 46 | * `Unknown`: The state cannot be determined 47 | * `Normal`: The current reading is within the normal range. This is the default when the sensor does not support thresholds 48 | * `NonCritical`: The current reading is within the non-critical range. 
For example, an alarm threshold has been reached 49 | * `Critical`: The current reading is within the critical range. For example, the firmware has begun throttling down traffic to the persistent memory module due to the temperature 50 | * `Fatal`: The current reading is within the fatal range. For example, the firmware is shutting down the module due to the temperature 51 | * `LowerThresholdNonCritical`: The threshold value below which the state is considered "NonCritical". 52 | * `UpperThresholdNonCritical`: The threshold value at or above which the state is considered "NonCritical". 53 | * `LowerThresholdCritical`: The threshold value below which the state is considered "Critical". 54 | * `UpperThresholdCritical`: The threshold value at or above which the state is considered "Critical". 55 | * `UpperThresholdFatal`: The threshold value at or above which the state is considered "Fatal". 56 | * `SettableThresholds`: A list of user settable thresholds. Zero or more of: 57 | * `LowerThresholdNonCritical` 58 | * `UpperThresholdNonCritical` 59 | * `SupportedThresholds`: A list of supported thresholds. Zero or more of: 60 | * `LowerThresholdNonCritical` 61 | * `UpperThresholdNonCritical` 62 | * `LowerThresholdCritical` 63 | * `UpperThresholdCritical` 64 | * `UpperThresholdFatal` 65 | * `EnabledState`: Whether the critical threshold alarm is enabled, disabled or not applicable. One of: 66 | * 0: Disabled 67 | * 1: Enabled 68 | * N/A 69 | -------------------------------------------------------------------------------- /ipmctl-user-guide/module-discovery/show-memory-resources.md: -------------------------------------------------------------------------------- 1 | # Show Memory Resources 2 | 3 | Show the total Intel Optane persistent memory resource allocation across the host server. 4 | 5 | ``` 6 | ipmctl show [OPTIONS] -memoryresources 7 | ``` 8 | 9 | ## **Examples** 10 | 11 | ### AppDirect 12 | 13 | The following shows persistent memory module resource allocation from a system configured in 100% AppDirect mode: 14 | 15 | ``` 16 | $ ipmctl show -memoryresources 17 | MemoryType | DDR | PMemModule | Total 18 | ========================================================== 19 | Volatile | 381.500 GiB | 0.000 GiB | 381.500 GiB 20 | AppDirect | - | 3024.000 GiB | 3024.000 GiB 21 | Cache | 0.000 GiB | - | 0.000 GiB 22 | Inaccessible | 2.500 GiB | 5.452 GiB | 7.952 GiB 23 | Physical | 384.000 GiB | 3029.452 GiB | 3413.452 GiB 24 | ``` 25 | 26 | ### Human Readable Output 27 | 28 | Show persistent memory module resource allocation with human-readable output (MiB and TiB). 
29 | 30 | ``` 31 | # ipmctl show -units MiB -memoryresources 32 | MemoryType | DDR | PMemModule | Total 33 | =================================================================== 34 | Volatile | 390656.000 MiB | 0.000 MiB | 390656.000 MiB 35 | AppDirect | - | 3096576.000 MiB | 3096576.000 MiB 36 | Cache | 0.000 MiB | - | 0.000 MiB 37 | Inaccessible | 2560.000 MiB | 5583.000 MiB | 8143.000 MiB 38 | Physical | 393216.000 MiB | 3102159.000 MiB | 3495375.000 MiB 39 | 40 | 41 | # ipmctl show -units TiB -memoryresources 42 | MemoryType | DDR | PMemModule | Total 43 | =================================================== 44 | Volatile | 0.373 TiB | 0.000 TiB | 0.373 TiB 45 | AppDirect | - | 2.953 TiB | 2.953 TiB 46 | Cache | 0.000 TiB | - | 0.000 TiB 47 | Inaccessible | 0.002 TiB | 0.005 TiB | 0.008 TiB 48 | Physical | 0.375 TiB | 2.958 TiB | 3.333 TiB 49 | ``` 50 | 51 | ## **Return Data** 52 | 53 | * `Volatile DDR Capacity`: Total DDR capacity that is used as volatile memory. 54 | * `Volatile PMem module Capacity`: Total PMem module capacity that is used as volatile memory. 55 | * `Total Volatile Capacity`: Total DDR and PMem module capacity that is used as volatile memory. 56 | * `AppDirect PMem module Capacity`: Total PMem module capacity used as persistent memory. 57 | * `Total AppDirect Capacity`: Total DDR and PMem module capacity used as persistent memory. 58 | * `Cache DDR Capacity`: Total DDR capacity used as a cache for PMem modules. 59 | * `Total Cache Capacity`: Total DDR capacity used as a cache for PMem modules. 60 | * `Inaccessible DDR Capacity`: Total DDR capacity that is inaccessible. 61 | * `InaccessibleCapacity`: Total system persistent memory capacity that is inaccessible due to any of: 62 | * Platform configuration prevents accessing this capacity. For example, MemoryCapacity is configured but MemoryMode is not enabled by platform FW (current Memory Mode is 1LM). 63 | * Capacity is inaccessible because it is not mapped into the System Physical Address space (SPA). This is usually due to platform firmware memory alignment requirements. 64 | * Persistent capacity that is reserved. This capacity is the persistent memory partition capacity (rounded down for alignment) less any App Direct capacity. Reserved capacity typically results from a Memory Allocation Goal request that specified the Reserved property. This capacity is not mapped to System Physical Address space (SPA). 65 | * Capacity that is unusable because it has not been configured. 66 | * PMem module configured capacity but SKU prevents usage. For example, AppDirectCapacity but PMem module SKU is MemoryMode only. 67 | * `Total Inaccessible Capacity`: Total capacity of DDR and PMem module that is inaccessible. 68 | * `Physical DDR Capacity`: Total physical DDR capacity populated on the platform. 69 | * `Physical PMem module Capacity`: Total physical PMem module capacity populated on the platform. 70 | * `Total Physical Capacity`: Total physical capacity populated on the platform. 71 | -------------------------------------------------------------------------------- /ipmctl-user-guide/module-discovery/show-topology.md: -------------------------------------------------------------------------------- 1 | # Show Topology 2 | 3 | Shows the topology of DDR and persistent memory modules installed in the host server. 
4 | 5 | ``` 6 | ipmctl show [OPTIONS] -topology [TARGETS] 7 | ``` 8 | 9 | ## **Targets** 10 | 11 | * `-dimm [(DimmIDs)]`: Restricts output to specific DIMMs by optionally supplying the DIMM target and one or more comma-separated DIMM identifiers. The default is to display all DIMMs. 12 | * `-socket (SocketIDs)`: Restricts output to the DIMMs installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is to display all sockets. 13 | * If ACPI PMTT table is not present, then DDR4 memory will not be displayed in the filtered socket list. 14 | 15 | ## **Examples** 16 | 17 | Display all DDR and Optane persistent memory modules installed in the system 18 | 19 | ``` 20 | $ ipmctl show -topology 21 | 22 | DimmID | MemoryType | Capacity | PhysicalID| DeviceLocator 23 | ============================================================================== 24 | 0x0001 | Logical Non-Volatile Device | 126.3 GiB | 0x0028 | CPU1_DIMM_A2 25 | 0x0011 | Logical Non-Volatile Device | 126.3 GiB | 0x002c | CPU1_DIMM_B2 26 | 0x0021 | Logical Non-Volatile Device | 126.3 GiB | 0x0030 | CPU1_DIMM_C2 27 | 0x0101 | Logical Non-Volatile Device | 126.3 GiB | 0x0036 | CPU1_DIMM_D2 28 | 0x0111 | Logical Non-Volatile Device | 126.3 GiB | 0x003a | CPU1_DIMM_E2 29 | 0x0121 | Logical Non-Volatile Device | 126.3 GiB | 0x003e | CPU1_DIMM_F2 30 | 0x1001 | Logical Non-Volatile Device | 126.3 GiB | 0x0044 | CPU2_DIMM_A2 31 | 0x1011 | Logical Non-Volatile Device | 126.3 GiB | 0x0048 | CPU2_DIMM_B2 32 | 0x1021 | Logical Non-Volatile Device | 126.3 GiB | 0x004c | CPU2_DIMM_C2 33 | 0x1101 | Logical Non-Volatile Device | 126.3 GiB | 0x0052 | CPU2_DIMM_D2 34 | 0x1111 | Logical Non-Volatile Device | 126.3 GiB | 0x0056 | CPU2_DIMM_E2 35 | 0x1121 | Logical Non-Volatile Device | 126.3 GiB | 0x005a | CPU2_DIMM_F2 36 | N/A | DDR4 | 16.0 GiB | 0x0026 | CPU1_DIMM_A1 37 | N/A | DDR4 | 16.0 GiB | 0x002a | CPU1_DIMM_B1 38 | N/A | DDR4 | 16.0 GiB | 0x002e | CPU1_DIMM_C1 39 | N/A | DDR4 | 16.0 GiB | 0x0034 | CPU1_DIMM_D1 40 | N/A | DDR4 | 16.0 GiB | 0x0038 | CPU1_DIMM_E1 41 | N/A | DDR4 | 16.0 GiB | 0x003c | CPU1_DIMM_F1 42 | N/A | DDR4 | 16.0 GiB | 0x0042 | CPU2_DIMM_A1 43 | N/A | DDR4 | 16.0 GiB | 0x0046 | CPU2_DIMM_B1 44 | N/A | DDR4 | 16.0 GiB | 0x004a | CPU2_DIMM_C1 45 | N/A | DDR4 | 16.0 GiB | 0x0050 | CPU2_DIMM_D1 46 | N/A | DDR4 | 16.0 GiB | 0x0054 | CPU2_DIMM_E1 47 | N/A | DDR4 | 16.0 GiB | 0x0058 | CPU2_DIMM_F1 48 | ``` 49 | 50 | Display all DDR and Optane persistent memory installed in CPU Socket 0 51 | 52 | ``` 53 | # ipmctl show -topology -socket 0 54 | DimmID | MemoryType | Capacity | PhysicalID| DeviceLocator 55 | ================================================================================ 56 | 0x0001 | Logical Non-Volatile Device | 252.438 GiB | 0x0026 | CPU1_DIMM_A2 57 | 0x0011 | Logical Non-Volatile Device | 252.438 GiB | 0x0028 | CPU1_DIMM_B2 58 | 0x0021 | Logical Non-Volatile Device | 252.438 GiB | 0x002a | CPU1_DIMM_C2 59 | 0x0101 | Logical Non-Volatile Device | 252.438 GiB | 0x002c | CPU1_DIMM_D2 60 | 0x0111 | Logical Non-Volatile Device | 252.438 GiB | 0x002e | CPU1_DIMM_E2 61 | 0x0121 | Logical Non-Volatile Device | 252.438 GiB | 0x0030 | CPU1_DIMM_F2 62 | N/A | DDR4 | 32.000 GiB | 0x0025 | CPU1_DIMM_A1 63 | N/A | DDR4 | 32.000 GiB | 0x0027 | CPU1_DIMM_B1 64 | N/A | DDR4 | 32.000 GiB | 0x0029 | CPU1_DIMM_C1 65 | N/A | DDR4 | 32.000 GiB | 0x002b | CPU1_DIMM_D1 66 | N/A | DDR4 | 32.000 GiB | 0x002d | CPU1_DIMM_E1 67 | N/A | DDR4 | 32.000 GiB | 0x002f | CPU1_DIMM_F1 
```

Display information for Optane persistent memory modules 0x0001 and 0x0101

```
# ipmctl show -topology -dimm 0x0001,0x0101
DimmID | MemoryType                  | Capacity    | PhysicalID| DeviceLocator
================================================================================
0x0001 | Logical Non-Volatile Device | 252.438 GiB | 0x0026    | CPU1_DIMM_A2
0x0101 | Logical Non-Volatile Device | 252.438 GiB | 0x002c    | CPU1_DIMM_D2
```

--------------------------------------------------------------------------------
/ndctl-users-guide/concepts/libnvdimm-pmem-and-blk-modes.md:
--------------------------------------------------------------------------------

# PMEM and BLK Modes

The NFIT \(**N**VDIMM **F**irmware **I**nterface **T**able\), defined by the [ACPI v6.0](http://www.uefi.org/sites/default/files/resources/ACPI_6_0_Errata_A.PDF) specification, standardizes not only the description of Persistent Memory \(PMEM\) and Block \(BLK\) modes, but also the platform message-passing entry points for control and configuration.

## LIBNVDIMM Sub-System

The Linux LIBNVDIMM subsystem provides support for three types of NVDIMMs: PMEM, BLK, and NVDIMM. PMEM \(Persistent Memory\) devices allow byte-addressable access. BLK \(block\) devices allow sector atomicity like traditional storage devices. NVDIMM devices can simultaneously support both PMEM and BLK mode access. These three modes of operation are described by the "NVDIMM Firmware Interface Table" \(NFIT\) in [ACPI v6.0](http://www.uefi.org/sites/default/files/resources/ACPI_6_0_Errata_A.PDF) or later. While the LIBNVDIMM implementation is generic and supports pre-NFIT platforms, it was guided by the superset of capabilities needed to support this ACPI v6.0 definition for NVDIMM resources. The bulk of the kernel implementation is in place to handle the case where the DIMM Physical Address \(DPA\) accessible via PMEM is aliased with DPA accessible via BLK. When that occurs, a LABEL is needed to reserve DPA for exclusive access via one mode at a time.

For each NVDIMM access method \(PMEM, BLK\), LIBNVDIMM provides a block device driver:

1. **PMEM \(nd\_pmem.ko\):** Drives a system-physical-address range. This range is contiguous in system memory and may be interleaved \(hardware memory controller striped\) across multiple DIMMs. When interleaved, the platform may optionally provide details of which DIMMs are participating in the interleave. Note that while LIBNVDIMM describes system-physical-address ranges that may alias with BLK access as ND\_NAMESPACE\_PMEM ranges and those without alias as ND\_NAMESPACE\_IO ranges, to the nd\_pmem driver there is no distinction. The different device-types are an implementation detail that userspace can exploit to implement policies like "only interface with address ranges from certain DIMMs". It is worth noting that when aliasing is present and a DIMM lacks a label, then no block device can be created by default, as userspace needs to do at least one allocation of DPA to the PMEM range. In contrast, ND\_NAMESPACE\_IO ranges, once registered, can be immediately attached to nd\_pmem.
2. **BLK \(nd\_blk.ko\):** This driver performs I/O using a set of platform defined apertures. A set of apertures will access just one DIMM. Multiple windows \(apertures\) allow multiple concurrent accesses, much like tagged-command-queuing, and would likely be used by different threads or different CPUs. The NFIT specification defines a standard format for a BLK-aperture, but the spec also allows for vendor specific layouts, and non-NFIT BLK implementations may have other designs for BLK I/O. For this reason "nd\_blk" calls back into platform-specific code to perform the I/O. One such implementation is defined in the "Driver Writer's Guide" and "DSM Interface Example".
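As a quick check \(a sketch; device names vary, and on some kernels these drivers are built in rather than loaded as modules\):

```text
# lsmod | grep -e nd_pmem -e nd_blk   # confirm the LIBNVDIMM drivers are loaded
# ls -l /dev/pmem*                    # block devices surfaced by the drivers
```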
## Differences between PMEM and BLK Modes

While PMEM provides direct byte-addressable CPU-load/store access to NVDIMM storage, it does not provide the best system RAS \(recovery, availability, and serviceability\) model. An access to a corrupted system-physical-address causes a CPU exception, while an access to a corrupted address through a BLK-aperture causes that block window to raise an error status in a register. The latter is more aligned with the standard error model that host-bus-adapter attached disks present. Also, if an administrator ever wants to replace a memory module, it is easier to service a system at DIMM module boundaries. Compare this to PMEM, where data could be interleaved in an opaque hardware specific manner across several DIMMs.

BLK-apertures solve these RAS problems, but their presence is also the major contributing factor to the complexity of the ND subsystem. They complicate the implementation because PMEM and BLK alias in DPA space. Any given DIMM's DPA-range may contribute to one or more system-physical-address sets of interleaved DIMMs, _and_ may also be accessed in its entirety through its BLK-aperture. Accessing a DPA through a system-physical-address while simultaneously accessing the same DPA through a BLK-aperture has undefined results. For this reason, DIMMs with this dual interface configuration include a DSM function to store/retrieve a LABEL. The LABEL effectively partitions the DPA-space into exclusive system-physical-address and BLK-aperture accessible regions. For simplicity, a DIMM is allowed one PMEM "region" per each interleave set in which it is a member. The remaining DPA space can be carved into an arbitrary number of BLK devices with discontiguous extents.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# PROJECT NOT UNDER ACTIVE MANAGEMENT #
This project will no longer be maintained by Intel.
Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

# Persistent Memory Documentation

## **Introductory Materials**

This [USENIX ;login: article](https://www.usenix.org/system/files/login/articles/login\_summer17\_07\_rudoff.pdf), published in the Summer of 2017, provides an overview of the persistent memory programming model and the related software work that has been taking place. It includes a very high-level description of programming with [PMDK](http://pmem.io/pmdk/), although it refers to it by the old name, [NVM Libraries](http://pmem.io/2017/12/11/NVML-is-now-PMDK.html).
This article is actually a follow-on to an [earlier article](https://www.usenix.org/system/files/login/articles/08\_rudoff\_040-045\_final.pdf), published in the Summer 2013 issue of USENIX ;login:.

The [Persistent Memory Summit 2018](https://www.snia.org/pm-summit), held January 24th, 2018, provided a day full of informative talks and panels on persistent memory (slides and videos of the sessions are available at the above link).

The projects on this site, like [PMDK](http://pmem.io/pmdk/), are product-neutral, meant to work on any product that provides the persistent memory programming model. One such product is Intel’s [3D XPoint](https://www.youtube.com/watch?v=Wgk4U4qVpNY). Intel has assembled a collection of persistent memory programming documentation, webinars, videos, and related materials on the [Intel Developer Zone](https://software.intel.com/en-us/persistent-memory).

Dozens of companies have collaborated on a common persistent memory programming model in the SNIA NVM Programming Technical Workgroup (TWG). SNIA provides a [persistent memory page](http://www.snia.org/PM) on their web site with links to recent presentations, persistent memory standards and white papers, etc. The TWG remains very active, working on areas like remote persistent memory and security. The original programming model specification put forward the basic idea that applications access persistent memory as _memory-mapped files_, a concept which has appeared as the **DAX** (Direct Access) feature on both [Linux](https://nvdimm.wiki.kernel.org/) and [Windows](https://channel9.msdn.com/Events/Build/2016/P470).

## **Standards**

The [SNIA NVM Programming Model](https://www.snia.org/sites/default/files/technical\_work/final/NVMProgrammingModel\_v1.2.pdf) standard describes the basic programming model used for persistent memory programming.

The [ACPI Specification](http://www.uefi.org/specifications), starting with version 6.0, defines the _NVDIMM Firmware Interface Table_ (NFIT), which is how the existence of persistent memory is communicated to operating systems. The specification also describes how NVDIMMs are partitioned into _namespaces_, methods for communicating with NVDIMMs, etc.

The [UEFI Specification](http://www.uefi.org/specifications) covers other NVDIMM-related topics, such as the _Block Translation Table_ (BTT), which allows an NVDIMM to provide block device semantics.

Before the above two standards were developed, the [NVDIMM Namespace Specification](http://pmem.io/documents/NVDIMM\_Namespace\_Spec.pdf) described the namespace and BTT mechanisms. The link to the outdated document is maintained here for reference.

## **Related Specifications**

The [NVDIMM Driver Writers Guide](http://pmem.io/documents/NVDIMM\_DriverWritersGuide-July-2016.pdf) is targeted to driver writers for NVDIMMs that adhere to the NFIT tables in the Advanced Configuration and Power Interface (ACPI) V6.0 specification, the Device Specific Method (DSM) specification, and the NVDIMM Namespace Specification. This document specifically discusses the block window HW interface and persistent memory interface that Intel is proposing for NVDIMMs. A version of the document with [change bars](http://pmem.io/documents/NVDIMM\_DriverWritersGuide-July-2016\_wChanges.pdf) \[pdf] from the previous version is also available.
The [Intel Optane PMem DSM Interface](https://pmem.io/documents/IntelOptanePMem\_DSM\_Interface-V3.0.pdf), Version 3.0, describes the NVDIMM Device Specific Methods (\_DSM) that pertain to Optane PMem modules. This document is provided as a reference for BIOS and OS driver writers supporting NVDIMMs and similar devices that appear in the ACPI NFIT table.

The [Dirty Shutdown Handling guide](https://pmem.io/documents/Dirty\_Shutdown\_Handling-V1.0.pdf) describes the (hopefully rare) error case where flushes to persistent memory did not go as expected on power failure or system crash. The guide describes how this failure is detected, and how it is communicated to applications via the dirty shutdown count.

--------------------------------------------------------------------------------
/ndctl-users-guide/managing-nvdimms.md:
--------------------------------------------------------------------------------

# Managing NVDIMMs

There is no functional difference between managing physical and emulated NVDIMMs with `ndctl`. Physical NVDIMM features and options may be controlled through the system BIOS; the BIOS cannot see emulated NVDIMMs.

Observe the following restrictions when managing NVDIMMs:

* DO NOT change the memory slot of physical NVDIMMs when they are part of an Interleave Set. Doing so changes the order of interleaving, so data access will be compromised or corrupted. If interleaving is disabled, moving NVDIMMs to different slot locations is okay, but not recommended.
* DO NOT disable NVDIMMs when they are part of an active Region and/or Namespace, as this will prevent data access and may corrupt data.
* ALWAYS back up the data and make a copy of the configuration layout prior to any changes.

## Listing active/enabled NVDIMMs

The `ndctl list -D` command, or the equivalent `ndctl list --dimms`, shows active/enabled NVDIMM devices on the system, e.g.:

```text
# ndctl list -D
{
  "dev":"nmem0",
  "id":"8089-a2-1809-00000107",
  "handle":1,
  "phys_id":29
}
```

## Listing disabled/inactive NVDIMMs

By default, `ndctl` only lists enabled/active dimms, regions, and namespaces. To include previously disabled \(inactive\) NVDIMMs, include the `-i` flag to show both enabled and disabled devices, e.g.:

```text
# ndctl list -Di
{
  "dev":"nmem0",
  "id":"8089-a2-1809-00000107",
  "handle":0,
  "phys_id":29
}
{
  "dev":"nmem1",
  "id":"9759-b5-1459-00000502",
  "handle":1,
  "phys_id":30,
  "state":"disabled"
}
...
```

{% hint style="info" %}
NVDIMM vendor-specific tools can be used to display more information about the NVDIMMs from the operating system layer. For example, Intel Optane DC Persistent Memory Modules can be managed using the [ipmctl](https://github.com/intel/ipmctl) utility. These tools are outside the scope of this documentation. Refer to the vendor-specific documentation.
{% endhint %}

## Disabling NVDIMMs

NVDIMMs can only be disabled if they have no active Regions or Namespaces. If an active/enabled namespace and/or region exists, a message is displayed:

```text
# ndctl disable-dimm nmem0
nmem0 is active, skipping...
disabled 0 nmem
```

1\) List the current active/enabled configuration:

```text
# ndctl list -NRD
```

2\) Verify no fsdax or devdax namespaces are mounted or in use by running applications.

3\) Destroy or disable any active/enabled namespace\(s\):

```text
# ndctl disable-namespace
- or -
# ndctl destroy-namespace
```

See [Destroying Namespaces](managing-namespaces.md#destroying-namespaces) or [Disabling Namespaces](managing-namespaces.md#disabling-namespaces) for more information.

4\) Disable the regions used by the NVDIMM \(nmem\) that needs to be disabled:

```text
# ndctl disable-region
```

See [Disabling Regions](managing-regions.md#disabling-regions) for more information.

5\) Disable a single NVDIMM, a subset, or all NVDIMM \(nmem\) devices.

To disable a single NVDIMM, use:

```text
# ndctl disable-dimm nmem0
disabled 1 nmem
```

To disable a subset or specific list of NVDIMMs, use:

```text
# ndctl disable-dimm nmem0 nmem1 nmem5
disabled 3 nmem
```

To disable all NVDIMMs, use:

```text
# ndctl disable-dimm all
disabled 12 nmem
```

6\) Verify the NVDIMM\(s\) are disabled by listing inactive dimms and verifying the 'state':

```text
# ndctl list -Di
{
  "dev":"nmem0",
  "id":"8089-a2-1809-00000107",
  "handle":1,
  "phys_id":29,
  "state":"disabled"
}
...
```

## Enabling NVDIMMs

1\) Verify the nmem device, or list of nmem devices, that need to be enabled using the `ndctl list -Di` command:

```text
# ndctl list -Di
{
  "dev":"nmem0",
  "id":"8089-a2-1809-00000107",
  "handle":1,
  "phys_id":29,
  "state":"disabled"
}
```

2\) Enable the NVDIMM\(s\).

To enable a single NVDIMM \(nmemX\) device, use:

```text
# ndctl enable-dimm nmem0
enabled 1 nmem
```

To enable a subset of disabled NVDIMMs, use:

```text
# ndctl enable-dimm nmem0 nmem1 nmem5
enabled 3 nmem
```

To enable all disabled NVDIMMs, use:

```text
# ndctl enable-dimm all
enabled 12 nmem
```

3\) Verify the state by listing all NVDIMMs:

```text
# ndctl list -Di
```

A filtered list of NVDIMMs can be shown using the `-d` or `--dimm` option, e.g.:

```text
# ndctl list -Di -d nmem0
- or -
# ndctl list -Di --dimm nmem0
```

--------------------------------------------------------------------------------
/ipmctl-user-guide/module-discovery/show-device.md:
--------------------------------------------------------------------------------

# Show Device

Shows information about one or more Intel Optane DC memory modules.

```
ipmctl show [OPTIONS] -dimm [TARGETS]
```

## **Targets**

* `-dimm (DimmIDs)`: Restricts output to specific modules by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to display all DCPMMs.
* `-socket (SocketIDs)`: Restricts output to the modules installed on specific sockets by supplying the socket target and one or more comma-separated socket identifiers. The default is to display all sockets.
13 | * If ACPI PMTT table is not present, then DDR4 memory will not be displayed in the filtered socket list. 14 | 15 | ## **Examples** 16 | 17 | List a few key fields for each persistent memory module. 18 | 19 | ``` 20 | $ sudo ipmctl show -dimm 21 | 22 | DimmID | Capacity | HealthState | ActionRequired | LockState | FWVersion 23 | ============================================================================== 24 | 0x0001 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 25 | 0x0011 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 26 | 0x0021 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 27 | 0x0101 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 28 | 0x0111 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 29 | 0x0121 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 30 | 0x1001 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 31 | 0x1011 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 32 | 0x1021 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 33 | 0x1101 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 34 | 0x1111 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 35 | 0x1121 | 126.4 GiB | Healthy | 0 | Disabled | 01.02.00.5310 36 | ``` 37 | 38 | List all properties for module 0x0001. 39 | 40 | ``` 41 | $ sudo ipmctl show -a -dimm 0x0001 42 | ---DimmID=0x0001--- 43 | Capacity=252.454 GiB 44 | LockState=Disabled 45 | S3Resume=Unknown 46 | HealthState=Healthy 47 | HealthStateReason=None 48 | FWVersion=01.02.00.5417 49 | FWAPIVersion=01.15 50 | InterfaceFormatCode=0x0301 (Non-Energy Backed Byte Addressable) 51 | ManageabilityState=Manageable 52 | PopulationViolationState=Is in supported configuration. 53 | PhysicalID=0x0026 54 | DimmHandle=0x0001 55 | DimmUID=8089-a2-1837-00000be8 56 | SocketID=0x0000 57 | MemControllerID=0x0000 58 | ChannelID=0x0000 59 | ChannelPos=1 60 | MemoryType=Logical Non-Volatile Device 61 | Manufacturer=Intel 62 | VendorID=0x8089 63 | DeviceID=0x5141 64 | RevisionID=0x0000 65 | SubsystemVendorID=0x8089 66 | SubsystemDeviceID=0x097a 67 | SubsystemRevisionID=0x0018 68 | DeviceLocator=CPU1_DIMM_A2 69 | ManufacturingInfoValid=1 70 | ManufacturingLocation=0xa2 71 | ManufacturingDate=18-37 72 | SerialNumber=0x00000be8 73 | PartNumber=NMA1XBD256GQS 74 | BankLabel=NODE 1 75 | DataWidth=64 b 76 | TotalWidth=72 b 77 | Speed=2666 MT/s 78 | FormFactor=DIMM 79 | ManufacturerID=0x8089 80 | ControllerRevisionID=B0, 0x0020 81 | MemoryCapacity=0.000 GiB 82 | AppDirectCapacity=252.000 GiB 83 | UnconfiguredCapacity=0.000 GiB 84 | InaccessibleCapacity=0.454 GiB 85 | ReservedCapacity=0.000 GiB 86 | PackageSparingCapable=1 87 | PackageSparingEnabled=1 88 | PackageSparesAvailable=1 89 | IsNew=0 90 | ViralPolicy=0 91 | ViralState=0 92 | PeakPowerBudget=20000 mW 93 | AvgPowerLimit=18000 mW 94 | MaxAveragePowerLimit=18000 mW 95 | LatchedLastShutdownStatus=PM ADR Command Received, DDRT Power Fail Command Received, PMIC 12V/DDRT 1.2V Power Loss (PLI), Controller's FW State Flush Complete, Write Data Flush Complete 96 | UnlatchedLastShutdownStatus=PMIC 12V/DDRT 1.2V Power Loss (PLI), PM Warm Reset Received, Controller's FW State Flush Complete, Write Data Flush Complete, PM Idle Received 97 | ThermalThrottleLossPercent=N/A 98 | LastShutdownTime=Sat May 30 00:49:03 UTC 2020 99 | ModesSupported=Memory Mode, App Direct 100 | SecurityCapabilities=Encryption, Erase 101 | MasterPassphraseEnabled=0 102 | ConfigurationStatus=Valid 103 | SKUViolation=0 104 | ARSStatus=Completed 105 | OverwriteStatus=Unknown 106 | AitDramEnabled=1 107 | BootStatus=Success 
108 | BootStatusRegister=0x00000004_985d00f0 109 | ErrorInjectionEnabled=0 110 | MediaTemperatureInjectionEnabled=0 111 | SoftwareTriggersEnabled=0 112 | SoftwareTriggersEnabledDetails=None 113 | PoisonErrorInjectionsCounter=0 114 | PoisonErrorClearCounter=0 115 | MediaTemperatureInjectionsCounter=0 116 | SoftwareTriggersCounter=0 117 | MaxControllerTemperature=81 C 118 | MaxMediaTemperature=78 C 119 | MixedSKU=0 120 | 121 | ``` 122 | 123 | Retrieve specific properties for each module. 124 | 125 | ``` 126 | $ sudo ipmctl show -d HealthState,LockState -dimm 0x0001 127 | 128 | ---DimmID=0x0001--- 129 | LockState=Disabled 130 | HealthState=Healthy 131 | ``` 132 | 133 | ## **Return Data** 134 | 135 | See the ipmctl-show-device(1) man page for a detailed explanation of all fields displayed by `ipmctl-show-device`. 136 | -------------------------------------------------------------------------------- /getting-started-guide/what-is-pmdk.md: -------------------------------------------------------------------------------- 1 | # PMDK Introduction 2 | 3 | With persistent memory, applications have a new tier available for data placement as shown in Figure 1. In addition to the memory and storage tiers, the persistent memory tier offers greater capacity than DRAM and significantly faster performance than storage. Applications can access persistent memory resident data structures in-place, like they do with traditional memory, eliminating the need to page blocks of data back and forth between memory and storage. 4 | 5 | ![Figure 1: Memory-Storage Hierarchy with Persistent Memory Tier](../.gitbook/assets/pmem_storage_pyramid.jpg) 6 | 7 | To get this low-latency direct access, a new software architecture is required that allows applications to access ranges of persistent memory. 8 | 9 | The [Persistent Memory Development Kit \(PMDK\)](http://pmem.io/pmdk) is a collection of libraries and tools for System Administrators and Application Developers to simplify managing and accessing persistent memory devices. Tuned and validated on both Linux and Windows, the libraries build on the Direct Access \(DAX\) feature which allows applications to directly access persistent memory as memory-mapped files. This is described in detail in the [Storage Network Industry Association \(SNIA\) NVM Programming Model](https://www.snia.org/sites/default/files/technical_work/final/NVMProgrammingModel_v1.2.pdf). Figure 2 shows the model which describes how applications can access persistent memory devices \(NVDIMMs\) using traditional POSIX standard APIs such as read, write, pread, and pwrite, or load/store operations such as memcpy when the data is memory mapped to the application. The 'Persistent Memory' area describes the fastest possible access because the application I/O bypasses existing filesystem page caches and goes directly to/from the persistent memory media. 10 | 11 | ![Fig 2: SNIA Programming Model](../.gitbook/assets/snia_programming_model.png) 12 | 13 | Directly accessing the physical media introduces new programming challenges and paradigms. 
The PMDK offers application developers a number of libraries and features, highlighted below, that solve some of the more difficult programming issues:
14 | 
15 | **Available Libraries:**
16 | 
17 | * [**libpmem**](http://pmem.io/pmdk/libpmem/)**:** provides low-level persistent memory support.
18 | * [**libpmemobj**](http://pmem.io/pmdk/libpmemobj/)**:** provides a transactional object store, providing memory allocation, transactions, and general facilities for persistent memory programming.
19 | * [**libpmemblk**](http://pmem.io/pmdk/libpmemblk/)**:** supports arrays of pmem-resident blocks, all the same size, that are atomically updated.
20 | * [**libpmemlog**](http://pmem.io/pmdk/libpmemlog/)**:** provides a pmem-resident log file.
21 | * [**libvmem**](http://pmem.io/pmdk/libvmem/) **\[deprecated\]:** turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. Since persistent memory support has been integrated into [libmemkind](https://github.com/memkind/memkind), that library is the recommended choice for any volatile implementations. [Libmemkind](https://github.com/memkind/memkind) combines support for multiple types of volatile memory into a single, convenient API.
22 | * [**libvmmalloc**](http://pmem.io/pmdk/libvmmalloc/)**:** transparently converts all dynamic memory allocations into persistent memory allocations.
23 | * [**libpmempool**](http://pmem.io/pmdk/libpmempool/)**:** provides support for off-line pool management and diagnostics.
24 | * [**librpmem**](http://pmem.io/pmdk/librpmem/)**:** provides low-level support for remote access to persistent memory utilizing RDMA-capable RNICs.
25 | * [**libvmemcache**](https://github.com/pmem/vmemcache)**:** an embeddable and lightweight in-memory caching solution. It's designed to fully take advantage of large-capacity memory, such as persistent memory with DAX, through memory mapping in an efficient and scalable way.
26 | 
27 | **Available Utilities**:
28 | 
29 | * [**pmempool**](http://pmem.io/pmdk/pmempool/)**:** a stand-alone utility for managing and analyzing persistent memory pools.
30 | * [**pmemcheck**](http://pmem.io/2015/07/17/pmemcheck-basic.html)**:** performs dynamic runtime analysis using an enhanced version of Valgrind adapted for persistent memory.
31 | 
32 | **Supporting Documents**
33 | 
34 | * **Introduction to Programming with Persistent Memory from Intel:** [https://software.intel.com/en-us/articles/introduction-to-programming-with-persistent-memory-from-intel](https://software.intel.com/en-us/articles/introduction-to-programming-with-persistent-memory-from-intel)
35 | 
36 | **Supporting Videos**
37 | 
38 | * **Get Started with Persistent Memory Programming \(Series\):**
39 | 
40 | The non-volatile memory library \(NVML\) is now called the Persistent Memory Development Kit \(PMDK\). In these videos, its architect, Andy Rudoff, introduces you to persistent memory programming and shows you how to apply it to your application.
41 | 
42 | * Part 1: [What is Persistent Memory?](https://software.intel.com/en-us/persistent-memory/get-started/series)
43 | * Part 2: [Describing The SNIA Programming Model](https://software.intel.com/en-us/videos/the-nvm-programming-model-persistent-memory-programming-series)
44 | * Part 3: [Introduction to PMDK Libraries](https://software.intel.com/en-us/videos/intro-to-the-nvm-libraries-persistent-memory-programming-series)
45 | * Part 4: [Thinking Transactionally](https://software.intel.com/en-us/videos/thinking-transactionally-persistent-memory-programming-series)
46 | * Part 5: [A C++ Example](https://software.intel.com/en-us/videos/a-c-example-persistent-memory-programming-series)
47 | 
48 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/advanced-topics/partitioning-namespaces.md: --------------------------------------------------------------------------------
1 | # Partitioning Namespaces
2 | 
3 | It is possible to partition raw, sector, and fsdax devices \(namespaces\) using tools such as fdisk, parted, or gparted. This is useful for meeting application space requirements.
4 | 
5 | The following shows how to partition a new namespace with no existing partition table using fdisk and parted. If a partition table already exists, delete or modify its entries first. The example uses a 256GB FSDAX device \(namespace\) and creates 2 x 100GB and 1 x ~50GB partitions on which filesystems can be created.
6 | 
7 | {% hint style="danger" %}
8 | **WARNING:** This procedure can result in data loss. Back up the data before proceeding.
9 | {% endhint %}
10 | 
11 | Print the current partition table, if any, using `fdisk -l`:
12 | 
13 | ```text
14 | $ sudo fdisk -l /dev/pmem0
15 | Disk /dev/pmem0: 245.1 GiB, 263182090240 bytes, 514027520 sectors
16 | Units: sectors of 1 * 512 = 512 bytes
17 | Sector size (logical/physical): 512 bytes / 4096 bytes
18 | I/O size (minimum/optimal): 4096 bytes / 4096 bytes
19 | ```
20 | 
21 | {% tabs %}
22 | {% tab title="fdisk" %}
23 | 1\) Launch fdisk for the device \(/dev/pmem0\)
24 | 
25 | ```text
26 | $ sudo fdisk /dev/pmem0
27 | Welcome to fdisk (util-linux 2.32.1). Changes will remain in memory only, until you decide to write them. Be careful before using the write command.
28 | Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0xc637b85f.
29 | Command (m for help):
30 | ```
31 | 
32 | 2\) Create the first new 100GB partition using the '\(n\)ew' command:
33 | 
34 | ```text
35 | Command (m for help): n
36 | Partition type
37 | p primary (0 primary, 0 extended, 4 free)
38 | e extended (container for logical partitions)
39 | Select (default p): p
40 | Partition number (1-4, default 1):
41 | First sector (2048-514027519, default 2048):
42 | Last sector, +sectors or +size{K,M,G,T,P} (2048-514027519, default 514027519): +100G
43 | Created a new partition 1 of type 'Linux' and of size 100 GiB.
44 | ```
45 | 
46 | 3\) Create the second new 100GB partition using the '\(n\)ew' command:
47 | 
48 | ```text
49 | Command (m for help): n
50 | Partition type
51 | p primary (1 primary, 0 extended, 3 free)
52 | e extended (container for logical partitions)
53 | Select (default p):
54 | 
55 | Using default response p.
56 | Partition number (2-4, default 2):
57 | First sector (209717248-514027519, default 209717248):
58 | Last sector, +sectors or +size{K,M,G,T,P} (209717248-514027519, default 514027519): +100G
59 | Created a new partition 2 of type 'Linux' and of size 100 GiB.
60 | ```
61 | 
62 | 4\) Create the last partition using the remaining space:
63 | 
64 | ```text
65 | Command (m for help): n
66 | Partition type
67 | p primary (2 primary, 0 extended, 2 free)
68 | e extended (container for logical partitions)
69 | Select (default p):
70 | Using default response p.
71 | Partition number (3,4, default 3):
72 | First sector (419432448-514027519, default 419432448):
73 | Last sector, +sectors or +size{K,M,G,T,P} (419432448-514027519, default 514027519):
74 | Created a new partition 3 of type 'Linux' and of size 45.1 GiB.
75 | ```
76 | 
77 | 5\) Print the new partition table to verify the changes using the '\(p\)rint' command:
78 | 
79 | ```text
80 | Command (m for help): p
81 | Disk /dev/pmem0: 245.1 GiB, 263182090240 bytes, 514027520 sectors
82 | Units: sectors of 1 * 512 = 512 bytes
83 | Sector size (logical/physical): 512 bytes / 4096 bytes
84 | I/O size (minimum/optimal): 4096 bytes / 4096 bytes
85 | Disklabel type: dos
86 | Disk identifier: 0xc637b85f
87 | 
88 | Device Boot Start End Sectors Size Id Type
89 | /dev/pmem0p1 2048 209717247 209715200 100G 83 Linux
90 | /dev/pmem0p2 209717248 419432447 209715200 100G 83 Linux
91 | /dev/pmem0p3 419432448 514027519 94595072 45.1G 83 Linux
92 | ```
93 | 
94 | 6\) Commit the changes using the '\(w\)rite' command and return to the shell prompt:
95 | 
96 | ```text
97 | Command (m for help): w
98 | The partition table has been altered.
99 | Calling ioctl() to re-read partition table.
100 | Syncing disks.
101 | 
102 | #
103 | ```
104 | {% endtab %}
105 | 
106 | {% tab title="parted" %}
107 | 1\) Launch parted and select the device \(/dev/pmem0\)
108 | 
109 | ```text
110 | $ sudo parted /dev/pmem0
111 | GNU Parted 3.2
112 | Using /dev/pmem0
113 | Welcome to GNU Parted! Type 'help' to view a list of commands.
114 | (parted)
115 | ```
116 | 
117 | 2\) Create the first new 100GB partition using the 'mkpart' command:
118 | 
119 | ```text
120 | (parted) mkpart
121 | Partition type? primary/extended? p
122 | File system type? [ext2]? ext4
123 | Start? 2MiB
124 | End? 100GiB
125 | (parted)
126 | ```
127 | 
128 | 3\) Create the second new 100GB partition using the 'mkpart' command:
129 | 
130 | ```text
131 | (parted) mkpart
132 | Partition type? primary/extended? p
133 | File system type? [ext2]? ext4
134 | Start? 100GiB
135 | End? 200GiB
136 | ```
137 | 
138 | 4\) Create the last partition using the remaining space \(-1MiB\):
139 | 
140 | ```text
141 | (parted) mkpart
142 | Partition type? primary/extended? p
143 | File system type? [ext2]? xfs
144 | Start? 200GiB
145 | End? -1MiB
146 | (parted) p
147 | ```
148 | 
149 | 5\) Print the new partition table to verify the changes using the 'print' command:
150 | 
151 | ```text
152 | (parted) p
153 | Model: NVDIMM Device (pmem)
154 | Disk /dev/pmem0: 263GB
155 | Sector size (logical/physical): 512B/4096B
156 | Partition Table: msdos
157 | Disk Flags:
158 | 
159 | Number Start End Size Type File system Flags
160 | 1 2097kB 107GB 107GB primary ext4 lba
161 | 2 107GB 215GB 107GB primary ext4 lba
162 | 3 215GB 263GB 48.4GB primary xfs lba
163 | ```
164 | 
165 | 6\) Quit parted and return to the shell prompt:
166 | 
167 | ```text
168 | (parted) q
169 | Information: You may need to update /etc/fstab.
170 | ```
171 | {% endtab %}
172 | {% endtabs %}
173 | 
174 | The partitions can now be used with DAX-enabled filesystems such as EXT4 and XFS and mounted with the `-o dax` option. The following shows how to create and mount an EXT4 or XFS filesystem.
175 | 
176 | {% tabs %}
177 | {% tab title="EXT4" %}
178 | ```text
179 | $ sudo mkfs.ext4 /dev/pmem0p1
180 | $ sudo mkdir /pmem1
181 | $ sudo mount -o dax /dev/pmem0p1 /pmem1
182 | $ sudo mount -v | grep /pmem1
183 | /dev/pmem0p1 on /pmem1 type ext4 (rw,relatime,seclabel,dax,data=ordered)
184 | ```
185 | {% endtab %}
186 | 
187 | {% tab title="XFS" %}
188 | ```text
189 | $ sudo mkfs.xfs /dev/pmem0p1
190 | $ sudo mkdir /pmem1
191 | $ sudo mount -o dax /dev/pmem0p1 /pmem1
192 | $ sudo mount -v | grep /pmem1
193 | /dev/pmem0p1 on /pmem1 type xfs (rw,relatime,seclabel,attr2,dax,inode64,noquota)
194 | ```
195 | {% endtab %}
196 | {% endtabs %}
197 | 
198 | -------------------------------------------------------------------------------- /getting-started-guide/introduction.md: --------------------------------------------------------------------------------
1 | # Introduction
2 | 
3 | ## Persistent Memory Overview
4 | 
5 | Over the last few decades, computer systems have implemented the memory-storage hierarchy shown in Figure 1. The memory-storage hierarchy utilizes the principle of locality, which keeps frequently accessed data closest to the CPU. Successive generations of technologies have iterated on the number, size, and speed of caches to ensure the CPUs have access to the most frequently used data. CPU speeds have continued to increase, with more cores and threads added in each new CPU generation in an effort to maintain [Moore's Law](https://en.wikipedia.org/wiki/Moore's\_law). The capacity, price, and speed of volatile DRAM and non-volatile storage such as NAND SSDs or hard disk drives have not kept pace and have quickly become a bottleneck for system and application performance.
6 | 
7 | ![Figure 1: Memory Storage Hierarchy](<../.gitbook/assets/Memory-Storage Hierachy - Default.png>)
8 | 
9 | Persistent Memory (PMEM), also referred to as Non-Volatile Memory (NVM) or Storage Class Memory (SCM), provides a new entry in the memory-storage hierarchy shown in Figure 2 that fills the performance/capacity gap.
10 | 
11 | ![Figure 2: Memory-Storage Hierarchy with Persistent Memory Tier](../.gitbook/assets/pmem\_storage\_pyramid.jpg)
12 | 
13 | With persistent memory, applications have a new tier available for data placement. In addition to the memory and storage tiers, the persistent memory tier offers greater capacity than DRAM and significantly faster performance than storage. Applications can access persistent memory like they do with traditional memory, eliminating the need to page blocks of data back and forth between memory and storage.
14 | 
15 | ## SNIA Programming Model
16 | 
17 | The [Storage Network Industry Association (SNIA)](http://www.snia.org) and several technology industry companies developed several standards, including the _NVM Programming Model_, to enable application development for persistent memory. [NVM Programming Model Version 1.2](https://www.snia.org/sites/default/files/technical\_work/final/NVMProgrammingModel\_v1.2.pdf) can be found on the [SNIA Persistent Memory website](https://www.snia.org/PM). Each year, SNIA hosts a non-volatile memory conference. Papers and recordings of previous conferences can be found on the [SNIA Persistent Memory home page](https://www.snia.org/PM).
18 | 
19 | ## What does this mean for application developers?
20 | 
21 | The introduction of a persistent memory tier offers application developers a choice of where to put data and data structures. Traditionally, data was read and written in volatile memory and flushed to non-volatile persistent storage. When the application is started, data has to be read from storage into volatile memory before it can be accessed. Depending on the size of the working dataset, this can take seconds, minutes, or hours. With clever application design, developers and application architects can now take advantage of this new technology to improve performance and reduce application startup times.
22 | 
23 | Persistent memory introduces some new programming concerns that did not apply to traditional, volatile memory. These include:
24 | 
25 | * Data Persistence:
26 | * Stores are not guaranteed to be persistent until flushed. Although this is also true for the decades-old memory-mapped file APIs (like mmap() and msync() on Linux), many programmers have not dealt with the need to flush memory to persistence. Following the standard API (such as msync() to flush changes to persistence) will work as expected, but more optimal flushing, where the application flushes stores from the CPU caches directly instead of calling into the kernel, is also possible.
27 | * CPUs execute instructions and access/flush caches out of order. This means that if two values are stored by the application, the order in which they become persistent may not be the order in which the application wrote them.
28 | * Data Consistency:
29 | * 8-byte stores are powerfail atomic on the x86 architecture -- if a powerfail happens during an aligned, 8-byte store to PMEM, either the old 8 bytes or the new 8 bytes (not a combination of the two) will be found in that location after reboot.
30 | * Anything larger than 8 bytes on x86 is not powerfail atomic, so it is up to software to implement whatever transactions/logging/recovery is required for consistency. Note that this is specific to x86 -- other hardware platforms may have different atomicity sizes (PMDK is designed so applications using it don't have to worry about these details).
31 | * Memory Leaks:
32 | * Memory leaks to persistent storage are themselves persistent. Rebooting the server doesn't change the on-device contents. In the current volatile model, if an application leaks memory, restarting the application or system frees that memory.
33 | * Byte Level Access:
34 | * Application developers can read and write at the byte level according to the application requirements. Reads and writes no longer need to be aligned to or equal to storage block boundaries, e.g., 512 bytes, 4KiB, or 8KiB. The storage doesn't need to read an entire block to modify a few bytes and then write that entire block back to persistent storage. Applications are free to read/write as much or as little as required. This improves performance and reduces memory footprint overheads.
35 | * Error Handling:
36 | * Applications may need to detect and handle hardware errors directly. Since applications have direct access to the persistent memory media, any errors will be returned to the application as memory errors.
37 | 
38 | Application developers can use traditional interfaces such as mmap() and memcpy() to access persistent memory devices. This is a challenging route to take, as it requires intimate knowledge of the persistent memory device and CPU features. Applications developed for one server platform may not be portable to a different platform or CPU generation.
To help application developers with these challenges, an open source [Persistent Memory Development Kit](http://pmem.io/pmdk/) can be downloaded and freely used. More information can be found in the [PMDK Introduction](what-is-pmdk.md), [Installing PMDK](installing-pmdk/), and [PMDK Programmers Guide](introduction.md) found within this document collection. Tuned and validated on both Linux and Windows, the libraries build on the Direct Access (DAX) feature of those operating systems, which allows applications to access persistent memory as memory-mapped files, as described in the [SNIA NVM Programming Model](introduction.md#snia-programming-model).
39 | 
40 | Managing persistent memory devices on Linux is achieved using the `ndctl` utility. Read the [NDCTL Introduction](what-is-ndctl.md), [Installing NDCTL](installing-ndctl.md), then the [NDCTL User Guide](https://docs.pmem.io/ndctl-user-guide/) within this collection. Windows users can use the `Get-PhysicalDisk` PowerShell command. More information can be found on the [Microsoft Docs](https://docs.microsoft.com/en-us/windows/desktop/persistent-memory-programming-in-windows---nvml-integration) site.
41 | 
42 | -------------------------------------------------------------------------------- /ipmctl-user-guide/debug/inject-error.md: --------------------------------------------------------------------------------
1 | # Inject Error
2 | 
3 | Injects an error or clears a previously injected error on one or more persistent memory modules for testing purposes.
4 | 
5 | {% hint style="info" %}
6 | Note: Error injection is disabled by default in the BIOS and may not be available on all platforms. Consult your server documentation or vendor before proceeding.
7 | {% endhint %}
8 | 
9 | ```
10 | ipmctl set [OPTIONS] -dimm (DimmIDs) [PROPERTIES]
11 | ```
12 | 
13 | ## **Targets**
14 | 
15 | * `-dimm (DimmIDs)`: Injects or clears an error on a specified module by supplying the DIMM target and one or more comma-separated DimmIDs. The default is to inject the error on all manageable modules.
16 | 
17 | ## **Properties**
18 | 
19 | This command supports setting or clearing one type of error at a time.
20 | 
21 | * `Clear=1`: Clears a previously injected error. This property must be combined with one of the other properties indicating the previously injected error to clear.
22 | * `Temperature`: Injects an artificial media temperature in degrees Celsius into the module. The firmware that is monitoring the temperature of the module will then be alerted and take necessary precautions to preserve the module. The value is injected immediately and will override the firmware from reading the actual media temperature of the device, directing it to use this value instead. This may cause adverse reactions by the firmware and result in an alert or log.
23 | 
24 | > Note: The injected temperature value will remain until the next reboot or until it is cleared. The media temperature is an artificial temperature and will not cause harm to the part, although firmware actions due to improper temperature injections may cause adverse effects on the module. If the Critical Shutdown Temperature or higher is passed in, this may cause the module firmware to perform a shutdown in order to preserve the part and data. The temperature value will be ignored on clear.
25 | * `Poison`: The physical address to poison.
26 | 
27 | > Note: The address must be 256-byte aligned (e.g., 0x10000000, 0x10000100, 0x10000200...).
28 | >
29 | > Poison is not possible for any address in the PM region if the PM region is locked. Injected poison errors are only triggered on a subsequent read of the poisoned address, in which case an error log will be generated by the firmware. No alerts will be sent.
30 | >
31 | > This command can be used to clear non-injected poison errors. The data will be zeroed after clearing. There is no requirement to enable error injection prior to requesting to clear poison errors.
32 | >
33 | > The caller is responsible for keeping a list of injected poison errors in order to properly clear the injected errors afterwards. Simply disabling injection does not clear injected poison errors. Injected poison errors are persistent across power cycles and system resets.
34 | * `PoisonType`: The type of memory to poison. One of:
35 | * `PatrolScrub`: Injects a poison error at the specified address simulating an error found during a patrol scrub operation, which is indifferent to how the memory is currently allocated. This is the default.
36 | * `MemoryMode`: Injects a poison error at the specified address currently allocated in Memory Mode.
37 | * `AppDirect`: Injects a poison error at the specified address currently allocated as App Direct.
38 | 
39 | > Note: If the address to poison is not currently allocated as the specified memory type, an error is returned.
40 | * `PackageSparing=1`: Triggers an artificial package sparing. If package sparing is enabled and the module still has spares remaining, this will cause the firmware to report that there are no spares remaining.
41 | * `PercentageRemaining`: Injects an artificial module life remaining percentage into the persistent memory module. This will cause the firmware to take appropriate action based on the value and, if necessary, generate an error log and an alert and update the health status.
42 | * `FatalMediaError=1`: Injects a fake media fatal error, which will cause the firmware to generate an error log and an alert.
43 | 
44 | > NOTE: When a fatal media error is injected, the BSR Media Disabled status bit will be set, indicating a media error. Use the disable trigger input parameter to clear the injected fatal error.
45 | >
46 | > NOTE: Injecting a fatal media error is unsupported on Windows. Please contact Microsoft for assistance in performing this action.
47 | * `DirtyShutdown=1`: Injects an ADR failure, which will result in a dirty shutdown upon reboot.
48 | 
49 | ## **Examples**
50 | 
51 | Set the media temperature on all manageable modules to 100 degrees Celsius.
52 | 
53 | ```
54 | $ sudo ipmctl set -dimm Temperature=100
55 | ```
56 | 
57 | Clear the injected media temperature on all manageable modules.
58 | 
59 | ```
60 | $ sudo ipmctl set -dimm Temperature=1 Clear=1
61 | ```
62 | 
63 | Poison address 0x10000200 on module 0x0001.
64 | 
65 | ```
66 | $ sudo ipmctl set -dimm 0x0001 Poison=0x10000200
67 | ```
68 | 
69 | Clear the injected poison of address 0x10000200 on module 0x0001.
70 | 
71 | ```
72 | $ sudo ipmctl set -dimm 0x0001 Poison=0x10000200 Clear=1
73 | ```
74 | 
75 | Trigger an artificial package sparing on all manageable modules.
76 | 
77 | ```
78 | $ sudo ipmctl set -dimm PackageSparing=1
79 | ```
80 | 
81 | Trigger an artificial package sparing on module 0x0001.
82 | 
83 | ```
84 | $ sudo ipmctl set -dimm 0x0001 PackageSparing=1
85 | ```
86 | 
87 | Set the life remaining percentage on all manageable modules to 10%.
88 | 
89 | ```
90 | $ sudo ipmctl set -dimm PercentageRemaining=10
91 | ```
92 | 
93 | Set the life remaining percentage on module 0x0001 to 10%.
94 | 
95 | ```
96 | $ sudo ipmctl set -dimm 0x0001 PercentageRemaining=10
97 | ```
98 | 
99 | Clear the injected remaining life percentage on all manageable modules. The value of PercentageRemaining is irrelevant.
100 | 
101 | ```
102 | $ sudo ipmctl set -dimm PercentageRemaining=10 Clear=1
103 | ```
104 | 
105 | Trigger an artificial Asynchronous DRAM Refresh (ADR) failure on all manageable modules, which will result in a dirty shutdown on each module on the next reboot.
106 | 
107 | ```
108 | $ sudo ipmctl set -dimm DirtyShutdown=1
109 | ```
110 | 
111 | Trigger an artificial Asynchronous DRAM Refresh (ADR) failure on module 0x0001, which will result in a dirty shutdown on the next reboot.
112 | 
113 | ```
114 | $ sudo ipmctl set -dimm 0x0001 DirtyShutdown=1
115 | ```
116 | 
117 | Simulate a Fatal Media Error on PMem module 0x2001.
118 | 
119 | {% hint style="danger" %}
120 | **WARNING**: Injecting a Fatal Media Error may cause the host to crash. Linux hosts may enter emergency mode if there are PMem mount points in /etc/fstab. Additional recovery activities may be required.
121 | 
122 | Clearing the fault will not cause any data loss.
123 | {% endhint %}
124 | 
125 | ```
126 | $ sudo ipmctl set -dimm 0x2001 FatalMediaError=1
127 | ```
128 | 
129 | Clear the Fatal Media Error on PMem module 0x2001.
130 | 
131 | ```
132 | $ sudo ipmctl set -dimm 0x2001 FatalMediaError=1 Clear=1
133 | ```
134 | 
135 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/compiling-pmdk-from-source.md: --------------------------------------------------------------------------------
1 | # Installing PMDK from Source on Linux
2 | 
3 | ## Overview
4 | 
5 | This procedure describes how to clone the PMDK source code from its GitHub repository, compile it, and install it.
6 | 
7 | {% hint style="info" %}
8 | **Note:** We recommend [installing NDCTL](../installing-ndctl.md) first so PMDK builds all features. If the ndctl development packages and header files are not installed, PMDK will build successfully, but will disable some of the RAS \(Reliability, Availability and Serviceability\) features.
9 | {% endhint %}
10 | 
11 | If your system is behind a firewall and requires a proxy to access the Internet, configure your package manager to use a proxy.
12 | 
13 | ## Install Prerequisites
14 | 
15 | To build the PMDK libraries on Linux, you may need to install the following packages on the build system:
16 | 
17 | * autoconf
18 | * automake
19 | * gcc
20 | * gcc-c++
21 | * glib2-devel
22 | * libfabric-devel
23 | * pandoc
24 | * pkg-config
25 | * ncurses-devel
26 | 
27 | {% tabs %}
28 | {% tab title="Fedora" %}
29 | ```text
30 | $ sudo dnf install autoconf automake pkg-config glib2-devel libfabric-devel pandoc ncurses-devel
31 | ```
32 | {% endtab %}
33 | 
34 | {% tab title="RHEL/CentOS" %}
35 | Some of the required packages can be found in the EPEL repository.
Verify the EPEL repository is active:
36 | 
37 | ```text
38 | $ sudo yum repolist
39 | ```
40 | 
41 | If the EPEL repository is not listed, install and activate it using:
42 | 
43 | ```text
44 | $ sudo yum -y install epel-release
45 | ```
46 | 
47 | To install the prerequisite packages, run:
48 | 
49 | ```text
50 | $ sudo yum install autoconf automake pkgconfig glib2-devel libfabric-devel pandoc ncurses-devel
51 | ```
52 | {% endtab %}
53 | 
54 | {% tab title="Ubuntu & Debian" %}
55 | **For Ubuntu 18.04 \(Bionic\) or Debian 9 \(Stretch\) or later**
56 | 
57 | ```text
58 | $ sudo apt install autoconf automake pkg-config libglib2.0-dev libfabric-dev pandoc libncurses5-dev
59 | ```
60 | 
61 | **For Ubuntu 16.04 \(Xenial\) and Debian 8 \(Jessie\):**
62 | 
63 | {% hint style="info" %}
64 | Earlier releases of Ubuntu and Debian do not have libfabric-dev available in the repository. If this library is required, you should compile it yourself. See [https://github.com/ofiwg/libfabric](https://github.com/ofiwg/libfabric)
65 | {% endhint %}
66 | 
67 | ```text
68 | $ sudo apt install autoconf automake pkg-config libglib2.0-dev pandoc libncurses5-dev
69 | ```
70 | 
71 | 
72 | {% endtab %}
73 | 
74 | {% tab title="FreeBSD" %}
75 | To build and test the PMDK library on FreeBSD, you may need to install the following packages on the build system:
76 | 
77 | * autoconf
78 | * bash
79 | * binutils
80 | * coreutils
81 | * e2fsprogs-libuuid
82 | * gmake
83 | * glib2
84 | * glib2-devel
85 | * libunwind
86 | * ncurses\*
87 | * pandoc
88 | * pkgconf
89 | 
90 | ```text
91 | $ sudo pkg install autoconf automake bash coreutils e2fsprogs-libuuid glib2 glib2-devel gmake libunwind pandoc ncurses pkg-config
92 | ```
93 | 
94 | \(\*\) The pkg version of ncurses is required for proper operation; the base version included in FreeBSD is not sufficient.
95 | {% endtab %}
96 | {% endtabs %}
97 | 
98 | The `git` utility is required to clone the repository; alternatively, you can download the source code as a [zip file](https://github.com/pmem/pmdk/archive/master.zip) directly from the [repository](https://github.com/pmem/pmdk) on GitHub.
99 | 
100 | ### Optional Prerequisites
101 | 
102 | The following packages are required only by selected PMDK components or features. If not present, those components or features may not be available:
103 | 
104 | * **libfabric** \(v1.4.2 or later\) -- required by **librpmem**
105 | * **libndctl** and **libdaxctl** \(v60.1 or later\) -- required by **daxio** and RAS features. See [Installing NDCTL](../installing-ndctl.md)
106 | * To build PMDK without ndctl support, set 'NDCTL\_ENABLE=n' using: `$ export NDCTL_ENABLE=n`
107 | 
108 | ### Compiler Requirements
109 | 
110 | A C/C++ compiler is required. GCC/G++ is used in this documentation, but if you use a different compiler, set the `CC` and `CXX` environment variables accordingly.
111 | 
112 | {% tabs %}
113 | {% tab title="Fedora" %}
114 | ```text
115 | $ sudo dnf install gcc gcc-c++
116 | ```
117 | {% endtab %}
118 | 
119 | {% tab title="RHEL & CentOS" %}
120 | ```text
121 | $ sudo yum install gcc gcc-c++
122 | ```
123 | {% endtab %}
124 | 
125 | {% tab title="Ubuntu & Debian" %}
126 | ```text
127 | $ sudo apt install gcc g++
128 | ```
129 | {% endtab %}
130 | {% endtabs %}
131 | 
132 | ## Clone the PMDK GitHub Repository
133 | 
134 | The following uses the `git` utility to clone the repository.
135 | 
136 | ```text
137 | $ git clone https://github.com/pmem/pmdk
138 | $ cd pmdk
139 | ```
140 | 
141 | Alternatively, you may download the source code as a [zip file](https://github.com/pmem/pmdk/archive/master.zip) from the [GitHub website](https://github.com/pmem/pmdk).
142 | 
143 | ```text
144 | $ wget https://github.com/pmem/pmdk/archive/master.zip
145 | $ unzip master.zip
146 | $ cd pmdk-master
147 | ```
148 | 
149 | ## Compile
150 | 
151 | {% tabs %}
152 | {% tab title="Linux & FreeBSD" %}
153 | To build the master branch, run the `make` utility in the root directory:
154 | 
155 | ```text
156 | $ make
157 | ```
158 | 
159 | If you want to compile with a different compiler, you have to provide the `CC` and `CXX` variables. For example:
160 | 
161 | ```text
162 | $ make CC=clang CXX=clang++
163 | ```
164 | 
165 | These variables are independent; setting `CC=clang` does not set `CXX=clang++`.
166 | 
167 | {% hint style="info" %}
168 | If the `make` command returns an error similar to the following, it is caused by pkg-config being unable to find the required "libndctl.pc" file.
169 | 
170 | ```text
171 | $ make
172 | src/common.inc:370: *** libndctl(version >= 60.1) is missing -- see README. Stop.
173 | ```
174 | 
175 | This can occur when libndctl was installed in a location other than /usr.
176 | 
177 | To resolve this issue, use the PKG\_CONFIG\_PATH environment variable, which specifies additional paths in which pkg-config will search for its .pc files.
178 | 
179 | This variable augments pkg-config's default search path. On a typical Unix system, pkg-config searches the directories /usr/lib/pkgconfig and /usr/share/pkgconfig, which usually covers system-installed modules. However, some local modules may be installed under a different prefix such as /usr/local. In that case, it's necessary to prepend the search path so that pkg-config can locate the .pc files.
180 | 
181 | The pkg-config program is used to retrieve information about installed libraries in the system. The primary use of pkg-config is to provide the necessary details for compiling and linking a program to a library. This metadata is stored in pkg-config files, which have the suffix .pc and reside in specific locations known to the pkg-config tool.
182 | 
183 | To check the PKG\_CONFIG\_PATH value, use this command:
184 | 
185 | `$ echo $PKG_CONFIG_PATH`
186 | 
187 | To set the PKG\_CONFIG\_PATH value, use:
188 | 
189 | ```text
190 | $ export PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH}
191 | ```
192 | 
193 | Now execute the `make` command again.
194 | {% endhint %}
195 | {% endtab %}
196 | {% endtabs %}
197 | 
198 | ## Install
199 | 
200 | Installing the libraries is recommended, as it places the man pages and libraries in the standard system locations.
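Optionally, before installing, you can build and run the PMDK test suite to validate the build. A minimal sketch using the `test` and `check` Makefile targets provided by the PMDK repository is shown below; by default the tests run against local files, and the repository's test documentation describes how to point them at real persistent memory:

```text
$ make test
$ make check
```

`make test` builds the tests and `make check` runs them.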
201 | 
202 | {% tabs %}
203 | {% tab title="Linux & FreeBSD" %}
204 | To install the libraries to the default `/usr/local` location:
205 | 
206 | ```text
207 | $ sudo make install
208 | ```
209 | 
210 | To install the libraries into other locations, use the `prefix=path` option, e.g.:
211 | 
212 | ```text
213 | $ sudo make install prefix=/usr
214 | ```
215 | 
216 | If you installed to a non-standard directory \(anything other than /usr\), you may need to add $prefix/lib or $prefix/lib64 \(depending on the distribution you use\) to the list of directories searched by the linker:
217 | 
218 | ```text
219 | sudo sh -c "echo /usr/local/lib >> /etc/ld.so.conf"
220 | sudo sh -c "echo /usr/local/lib64 >> /etc/ld.so.conf"
221 | sudo ldconfig
222 | ```
223 | {% endtab %}
224 | {% endtabs %}
225 | 
226 | -------------------------------------------------------------------------------- /getting-started-guide/creating-development-environments/linux-environments/linux-memmap.md: --------------------------------------------------------------------------------
1 | # Using the memmap Kernel Option
2 | 
3 | The `pmem` driver allows users to begin developing software using Direct Access \(DAX\) filesystems such as EXT4 and XFS. A new `memmap` option was added that supports reserving one or more ranges of unassigned memory for use with emulated persistent memory. The `memmap` parameter documentation can be found at [https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt](https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt). This feature was upstreamed in the v4.0 Kernel. Kernel v4.15 introduced performance improvements and is recommended for production environments.
4 | 
5 | ## Quick Start
6 | 
7 | The memmap option uses a `memmap=nn[KMG]!ss[KMG]` format, where `nn` is the size of the region to reserve, `ss` is the starting offset, and `[KMG]` specifies the size in Kilobytes, Megabytes, or Gigabytes. The configuration option is passed to the Kernel using GRUB. Changing GRUB menu entries and kernel arguments varies between Linux distributions and versions of the same distro. Instructions for some of the common Linux distros can be found below. Refer to the documentation of the Linux distro and version being used for more information.
8 | 
9 | The memory region will be marked as e820 type 12 \(0xc\). This is visible at boot time. Use the dmesg command to view these messages.
10 | 
11 | ```text
12 | $ dmesg | grep e820
13 | ```
14 | 
15 | **Example:**
16 | 
17 | 1. A GRUB entry of `memmap=4G!12G` reserves 4GB of memory starting at the 12GB offset \(i.e., from 12GB to 16GB\). See [How To Choose the Correct memmap Option for Your System](linux-memmap.md#how-to-choose-the-correct-memmap-option-for-your-system) for more details. Each Linux distro has a different method for modifying the GRUB entries. Follow the documentation for your distro. A few of the common distros are provided below for quick reference.
18 | 
19 | {% tabs %}
20 | {% tab title="Fedora" %}
21 | Create a new memory mapping of 4GB starting at the 12GB boundary \(i.e., from 12GB to 16GB\)
22 | 
23 | **Fedora 23 or later:**
24 | 
25 | `$ sudo grubby --args="memmap=4G\!12G" --update-kernel=ALL`
26 | 
27 | **Fedora 22 and earlier:**
28 | 
29 | ```text
30 | $ sudo vi /etc/default/grub
31 | GRUB_CMDLINE_LINUX="memmap=4G!12G"
32 | ```
33 | 
34 | Update the grub config using one of the following methods:
35 | 
36 | On BIOS-based machines:
37 | 
38 | ```text
39 | $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
40 | ```
41 | 
42 | On UEFI-based machines:
43 | 
44 | ```text
45 | $ sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
46 | ```
47 | {% endtab %}
48 | 
49 | {% tab title="Ubuntu" %}
50 | Create a new memory mapping of 4GB starting at the 12GB boundary \(i.e., from 12GB to 16GB\)
51 | 
52 | `$ sudo vi /etc/default/grub`
53 | 
54 | Add or edit the "GRUB\_CMDLINE\_LINUX" entry to include the mapping, e.g.:
55 | 
56 | `GRUB_CMDLINE_LINUX="memmap=4G!12G"`
57 | 
58 | Then update grub and reboot:
59 | 
60 | `$ sudo update-grub2`
61 | {% endtab %}
62 | 
63 | {% tab title="RHEL & CentOS" %}
64 | Create a new memory mapping of 4GB starting at the 12GB boundary \(i.e., from 12GB to 16GB\)
65 | 
66 | ```text
67 | $ sudo vi /etc/default/grub
68 | GRUB_CMDLINE_LINUX="memmap=4G!12G"
69 | ```
70 | 
71 | Update the grub config using one of the following methods:
72 | 
73 | On BIOS-based machines:
74 | 
75 | `$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg`
76 | 
77 | On UEFI-based machines:
78 | 
79 | `$ sudo grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg`
80 | {% endtab %}
81 | {% endtabs %}
82 | 
83 | {% hint style="info" %}
84 | **Note:** If more than one persistent memory namespace is required, specify a memmap entry for each namespace. For example, "memmap=2G!12G memmap=2G!14G" will create two 2GB namespaces, one at the 12GB-14GB memory address range, the other at 14GB-16GB.
85 | {% endhint %}
86 | 
87 | 2. Reboot the host.
88 | 3. After the host has been rebooted, a new `/dev/pmem{N}` device should exist, one for each memmap region specified in the GRUB config. These can be shown using `ls /dev/pmem*`. The naming convention starts at `/dev/pmem0` and increments for each device. The `/dev/pmem{N}` devices can be used to create a DAX filesystem.
89 | 4. Create and mount a filesystem using the /dev/pmem device\(s\), then verify the `dax` flag is set for the mount point to confirm the DAX feature is enabled. The following shows how to create and mount an EXT4 or XFS filesystem.
90 | 
91 | {% tabs %}
92 | {% tab title="EXT4" %}
93 | ```text
94 | $ sudo mkfs.ext4 /dev/pmem0
95 | $ sudo mkdir /pmem
96 | $ sudo mount -o dax /dev/pmem0 /pmem
97 | $ sudo mount -v | grep /pmem
98 | /dev/pmem0 on /pmem type ext4 (rw,relatime,seclabel,dax,data=ordered)
99 | ```
100 | {% endtab %}
101 | 
102 | {% tab title="XFS" %}
103 | ```text
104 | $ sudo mkfs.xfs /dev/pmem0
105 | $ sudo mkdir /pmem
106 | $ sudo mount -o dax /dev/pmem0 /pmem
107 | $ sudo mount -v | grep /pmem
108 | /dev/pmem0 on /pmem type xfs (rw,relatime,seclabel,attr2,dax,inode64,noquota)
109 | ```
110 | {% endtab %}
111 | {% endtabs %}
112 | 
113 | {% hint style="info" %}
114 | **Note:** Refer to the [Advanced Topics](advanced-topics/) section for information on [Partitioning Namespaces](advanced-topics/partitioning-namespaces.md) and [I/O Alignment Considerations](advanced-topics/i-o-alignment-considerations.md) using hugepages.
115 | {% endhint %}
116 | 
117 | ## How To Choose the Correct memmap Option for Your System
118 | 
119 | When selecting values for the `memmap` kernel parameter, ensure that the start and end addresses fall within usable RAM. Using or overlapping with reserved memory can result in corruption or undefined behaviour. This information is available in the e820 table, which can be viewed via dmesg.
120 | 
121 | The following shows an example server with 16GiB of memory with "usable" memory between 4GiB \(0x100000000\) and ~16GiB \(0x3ffffffff\):
122 | 
123 | ```text
124 | $ dmesg | grep BIOS-e820
125 | [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
126 | [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
127 | [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
128 | [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdffff] usable
129 | [ 0.000000] BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
130 | [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
131 | [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
132 | [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x00000003ffffffff] usable
133 | ```
134 | 
135 | To reserve the 12GiB of usable space between 4GiB and 16GiB as emulated persistent memory, the syntax for this reservation is as follows:
136 | 
137 | ```text
138 | memmap=12G!4G
139 | ```
140 | 
141 | After rebooting, a new user-defined e820 table entry shows that the range is now "persistent \(type 12\)":
142 | 
143 | ```text
144 | $ dmesg | grep user:
145 | [ 0.000000] user: [mem 0x0000000000000000-0x000000000009fbff] usable
146 | [ 0.000000] user: [mem 0x000000000009fc00-0x000000000009ffff] reserved
147 | [ 0.000000] user: [mem 0x00000000000f0000-0x00000000000fffff] reserved
148 | [ 0.000000] user: [mem 0x0000000000100000-0x00000000bffdffff] usable
149 | [ 0.000000] user: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
150 | [ 0.000000] user: [mem 0x00000000feffc000-0x00000000feffffff] reserved
151 | [ 0.000000] user: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
152 | [ 0.000000] user: [mem 0x0000000100000000-0x00000003ffffffff] persistent (type 12)
153 | ```
154 | 
155 | The `fdisk` or `lsblk` utilities can be used to show the capacity, e.g.:
156 | 
157 | ```text
158 | $ sudo fdisk -l /dev/pmem0
159 | Disk /dev/pmem0: 12 GiB, 12884901888 bytes, 25165824 sectors
160 | Units: sectors of 1 * 512 = 512 bytes
161 | Sector size (logical/physical): 512 bytes / 4096 bytes
162 | I/O size (minimum/optimal): 4096 bytes / 4096 bytes
163 | ```
164 | 
165 | ```text
166 | $ sudo lsblk /dev/pmem0
167 | NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
168 | pmem0 259:0 0 12G 0 disk /pmem
169 | ```
170 | 
171 | {% hint style="info" %}
172 | **Note:** Most Linux distributions ship with Kernel Address Space Layout Randomization \(KASLR\) enabled. This is defined by `CONFIG_RANDOMIZE_BASE`. When enabled, the Kernel may potentially use memory reserved for persistent memory without warning, resulting in corruption or undefined behaviour. It is recommended to disable KASLR on systems with 16GiB of memory or less. Refer to your Linux distribution documentation for details, as the procedure varies per distro.
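On distributions that use GRUB, one common approach \(shown here as a sketch; use your distro's GRUB update procedure from the Quick Start section above\) is to add the `nokaslr` kernel parameter alongside the memmap entry:

```text
GRUB_CMDLINE_LINUX="memmap=12G!4G nokaslr"
```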
173 | {% endhint %}
174 | 
175 | -------------------------------------------------------------------------------- /getting-started-guide/installing-pmdk/installing-pmdk-using-linux-packages.md: --------------------------------------------------------------------------------
1 | # Installing PMDK using Linux Packages
2 | 
3 | PMDK is upstreamed to many Linux distro package repositories, including Fedora and Ubuntu. Older versions of PMDK in RPM and DEB format are available from the [PMDK releases repository](https://github.com/pmem/pmdk/releases).
4 | 
5 | ## Installing PMDK Using the Linux Distro Package Repository
6 | 
7 | The PMDK is a collection of libraries, each of which provides different functionality. This gives developers greater flexibility, as only the required runtime or header files need to be installed, without installing unnecessary libraries. Libraries are available in runtime, development header file \(\*-devel\), and debug \(\*-debug\) versions. For example:
8 | 
9 | | Library | Description |
10 | | :--- | :--- |
11 | | libpmem | Low-level persistent memory support library |
12 | | libpmem-debug | Debug variant of the libpmem low-level persistent memory library |
13 | | libpmem-devel | Development files for the low-level persistent memory library |
14 | 
15 | The following table shows the list of available libraries:
16 | 
17 | | Library | Description |
18 | | :--- | :--- |
19 | | libpmem | Low-level persistent memory support library |
20 | | librpmem | Remote Access to Persistent Memory library |
21 | | libpmemblk | Persistent Memory Resident Array of Blocks library |
22 | | libpmemcto | Close-to-Open Persistence library \(Deprecated in PMDK v1.5\) |
23 | | libpmemlog | Persistent Memory Resident Log File library |
24 | | libpmemobj | Persistent Memory Transactional Object Store library |
25 | | libpmempool | Persistent Memory pool management library |
26 | | pmempool | Utilities for Persistent Memory |
27 | 
28 | {% tabs %}
29 | {% tab title="Fedora" %}
30 | 1\) Query the repository to identify if pmdk is available:
31 | 
32 | Fedora 21 or earlier
33 | 
34 | ```text
35 | $ yum search pmem
36 | ```
37 | 
38 | Fedora 22 or later
39 | 
40 | ```text
41 | $ dnf search pmem
42 | ```
43 | 
44 | 2\) Install the pmdk packages
45 | 
46 | Fedora 21 or earlier
47 | 
48 | ```text
49 | $ yum install 
50 | ```
51 | 
52 | Fedora 22 or later
53 | 
54 | ```text
55 | $ dnf install 
56 | 
57 | All Runtime:
58 | $ dnf install libpmem librpmem libpmemblk libpmemlog libpmemobj libpmempool pmempool
59 | All Development:
60 | $ dnf install libpmem-devel librpmem-devel libpmemblk-devel libpmemlog-devel libpmemobj-devel libpmemobj++-devel libpmempool-devel
61 | All Debug:
62 | $ dnf install libpmem-debug librpmem-debug libpmemblk-debug libpmemlog-debug libpmemobj-debug libpmempool-debug
63 | ```
64 | {% endtab %}
65 | 
66 | {% tab title="RHEL & CentOS" %}
67 | The pmdk package is available on CentOS and RHEL 7.0 or later.
68 | 
69 | 1\) Query the repository to identify if pmdk is delivered:
70 | 
71 | ```text
72 | $ yum search pmem
73 | ```
74 | 
75 | 2\) Install the pmdk packages
76 | 
77 | ```text
78 | $ yum install 
79 | 
80 | All Runtime:
81 | $ yum install libpmem librpmem libpmemblk libpmemlog libpmemobj libpmempool pmempool
82 | All Development:
83 | $ yum install libpmem-devel librpmem-devel libpmemblk-devel libpmemlog-devel libpmemobj-devel libpmemobj++-devel libpmempool-devel
84 | All Debug:
85 | $ yum install libpmem-debug librpmem-debug libpmemblk-debug libpmemlog-debug libpmemobj-debug libpmempool-debug
86 | ```
87 | {% endtab %}
88 | 
89 | {% tab title="SLES & OpenSUSE" %}
90 | 1\) Query the repository to identify if pmdk is delivered:
91 | 
92 | ```text
93 | $ zypper search pmem
94 | ```
95 | 
96 | 2\) Install the pmdk packages
97 | 
98 | ```text
99 | $ zypper install 
100 | 
101 | All Runtime:
102 | $ zypper install libpmem librpmem libpmemblk libpmemlog libpmemobj libpmempool pmempool
103 | All Development:
104 | $ zypper install libpmem-devel librpmem-devel libpmemblk-devel libpmemlog-devel libpmemobj-devel libpmemobj++-devel libpmempool-devel
105 | All Debug:
106 | $ zypper install libpmem-debug librpmem-debug libpmemblk-debug libpmemlog-debug libpmemobj-debug libpmempool-debug
107 | ```
108 | {% endtab %}
109 | 
110 | {% tab title="Ubuntu" %}
111 | The pmdk package is available on Ubuntu 18.10 \(Cosmic Cuttlefish\) or later.
112 | 
113 | 1\) Query the repository to identify if pmdk is delivered using either the aptitude, apt-cache, or apt utilities
114 | 
115 | ```text
116 | $ aptitude search pmem
117 | $ apt-cache search pmem
118 | $ apt search pmem
119 | ```
120 | 
121 | 2\) Install the pmdk packages
122 | 
123 | ```text
124 | $ apt-get install 
125 | 
126 | All Runtime:
127 | $ sudo apt-get install libpmem1 librpmem1 libpmemblk1 libpmemlog1 libpmemobj1 libpmempool1
128 | All Development:
129 | $ sudo apt-get install libpmem-dev librpmem-dev libpmemblk-dev libpmemlog-dev libpmemobj-dev libpmempool-dev
130 | All Debug:
131 | $ sudo apt-get install libpmem1-debug librpmem1-debug libpmemblk1-debug libpmemlog1-debug libpmemobj1-debug libpmempool1-debug
132 | ```
133 | {% endtab %}
134 | {% endtabs %}
135 | 
136 | ## Installing PMDK from \*.RPM or \*.DEB
137 | 
138 | {% hint style="info" %}
139 | Since the libraries are available in most Linux distro repositories, the PMDK RPM and DEB packages are no longer built for each release. The following refers to PMDK v1.4 and earlier, available from [https://github.com/pmem/pmdk/releases](https://github.com/pmem/pmdk/releases).
140 | {% endhint %}
141 | 
142 | ### Installing \*.RPM Packages
143 | 
144 | 1\) Download the RPM bundle.
145 | 
146 | ```text
147 | $ mkdir -p /downloads/pmdk1.4
148 | $ cd /downloads/pmdk1.4
149 | $ wget 
150 | $ tar zxf pmdk-1.4-rpms.tar.gz
151 | ```
152 | 
153 | 2\) The download bundle includes an rpm with the source code and an `x86_64` sub-directory with installable packages
154 | 
155 | ```text
156 | $ ls -1R
157 | .:
158 | pmdk-1.4-1.fc25.src.rpm
159 | pmdk-1.4-rpms.tar.gz
160 | x86_64
161 | 
162 | ./x86_64:
163 | libpmem-1.4-1.fc25.x86_64.rpm
164 | libpmemlog-devel-1.4-1.fc25.x86_64.rpm
165 | <...snip...>
166 | ```
167 | 
168 | 3\) Install the rpm packages with dependencies
169 | 
170 | ```text
171 | $ cd x86_64
172 | $ dnf install *.rpm
173 | ```
174 | 
175 | ### Upgrading Installed \*.RPM Packages
176 | 
177 | If PMDK was previously installed using the downloaded rpm packages, use the following to upgrade the installed packages.
178 | 
179 | {% hint style="info" %}
180 | If you are upgrading from PMDK v1.3.1 \(formerly NVML\) to PMDK 1.4 or later, the name change may cause package conflicts, which causes some packages to fail. It is recommended to remove all nvml\* packages before trying to upgrade/install pmdk.
181 | 
182 | ```text
183 | $ dnf remove nvml*
184 | ```
185 | {% endhint %}
186 | 
187 | 1\) Download the latest RPM bundle from [https://github.com/pmem/pmdk/releases](https://github.com/pmem/pmdk/releases), e.g., 1.4.1
188 | 
189 | ```text
190 | $ mkdir -p /downloads/pmdk1.4.1
191 | $ cd /downloads/pmdk1.4.1
192 | $ wget 
193 | $ tar zxf pmdk-1.4.1-rpms.tar.gz
194 | ```
195 | 
196 | 2\) The download bundle includes an rpm with the source code and an `x86_64` sub-directory with installable packages
197 | 
198 | ```text
199 | $ ls -1R
200 | .:
201 | pmdk-1.4.1-1.fc25.src.rpm
202 | pmdk-1.4.1-rpms.tar.gz
203 | x86_64
204 | 
205 | ./x86_64:
206 | libpmem-1.4.1-1.fc25.x86_64.rpm
207 | libpmemlog-devel-1.4.1-1.fc25.x86_64.rpm
208 | <...snip...>
209 | ```
210 | 
211 | 3\) Upgrade the packages using the `upgrade` or `install` sub-command
212 | 
213 | ```text
214 | $ cd x86_64
215 | $ dnf upgrade *.rpm
216 | - or -
217 | $ dnf install *.rpm
218 | ```
219 | 
220 | ### Installing \*.DEB Packages
221 | 
222 | 1\) Download the pmdk-{version}-dpkgs.tar.gz from [https://github.com/pmem/pmdk/releases](https://github.com/pmem/pmdk/releases), e.g., to download PMDK v1.4:
223 | 
224 | ```text
225 | $ mkdir -p /downloads/pmdk1.4
226 | $ cd /downloads/pmdk1.4
227 | $ wget 
228 | $ tar zxf pmdk-1.4-dpkgs.tar.gz
229 | ```
230 | 
231 | 2\) The download bundle includes installable packages in the root
232 | 
233 | ```text
234 | $ ls -1
235 | libpmem_1.4-1_amd64.deb
236 | libpmemblk_1.4-1_amd64.deb
237 | <...snip...>
238 | ```
239 | 
240 | 3\) Install the packages
241 | 
242 | ```text
243 | $ sudo dpkg -i *.deb
244 | ```
245 | 
246 | ### Upgrading Installed \*.DEB Packages
247 | 
248 | If PMDK was previously installed using the downloaded deb packages, use the following to upgrade the installed packages.
249 | 
250 | {% hint style="info" %}
251 | If you are upgrading from PMDK v1.3.1 \(formerly NVML\) to PMDK 1.4 or later, the name change may cause package conflicts, which causes some packages to fail. It is recommended to remove all nvml\* packages before trying to upgrade/install pmdk.
252 | 
253 | ```text
254 | $ dpkg -r nvml*
255 | ```
256 | {% endhint %}
257 | 
258 | 1\) Download the pmdk-{version}-dpkgs.tar.gz from [https://github.com/pmem/pmdk/releases](https://github.com/pmem/pmdk/releases), e.g., to download PMDK v1.4.1:
259 | 
260 | ```text
261 | $ mkdir -p /downloads/pmdk1.4.1
262 | $ cd /downloads/pmdk1.4.1
263 | $ wget 
264 | $ tar zxf pmdk-1.4.1-dpkgs.tar.gz
265 | ```
266 | 
267 | 2\) The download bundle includes installable packages in the root
268 | 
269 | ```text
270 | $ ls -1
271 | libpmem_1.4.1-1_amd64.deb
272 | libpmemblk_1.4.1-1_amd64.deb
273 | <...snip...>
274 | ```
275 | 
276 | 3\) Install the packages
277 | 
278 | ```text
279 | $ sudo dpkg -i *.deb
280 | ```
281 | 
282 | -------------------------------------------------------------------------------- /ndctl-users-guide/README.md: --------------------------------------------------------------------------------
1 | # NDCTL User Guide
2 | 
3 | ## Introduction
4 | 
5 | `ndctl` is a utility for managing the Linux LIBNVDIMM Kernel subsystem. It is designed to work with various non-volatile memory devices \(NVDIMMs\) from different vendors. The LIBNVDIMM subsystem defines a kernel device model and control message interface for platform NVDIMM resources like those defined by the [ACPI v6.0](http://www.uefi.org/sites/default/files/resources/ACPI_6_0_Errata_A.PDF) NFIT \(**N**VDIMM **F**irmware **I**nterface **T**able\). The latest [ACPI](http://www.uefi.org/specifications) and [UEFI](http://www.uefi.org/specifications) specifications can be found at [uefi.org](http://www.uefi.org). Operations supported by `ndctl` include:
6 | 
7 | * Provisioning capacity \(namespaces\)
8 | * Enumerating Devices
9 | * Enabling and Disabling NVDIMMs, Regions, and Namespaces
10 | * Managing NVDIMM Labels
11 | 
12 | ## Installing NDCTL
13 | 
14 | See [Installing NDCTL](../getting-started-guide/installing-ndctl.md) in the [Getting Started](../getting-started-guide/) Guide.
15 | 
16 | ## Basic Usage
17 | 
18 | The `ndctl` command is designed to be user-friendly. Once installed, a list of commands can be shown using any of the following:
19 | 
20 | 1\) With no arguments or options, `ndctl` shows a simple usage message:
21 | 
22 | ```text
23 | # ndctl
24 | 
25 | usage: ndctl [--version] [--help] COMMAND [ARGS]
26 | 
27 | 
28 | See 'ndctl help COMMAND' for more information on a specific command.
29 | ndctl --list-cmds to see all available commands
30 | 
31 | ```
32 | 
33 | 2\) Using `ndctl help` displays basic help and syntax:
34 | 
35 | ```text
36 | # ndctl help
37 | 
38 | usage: ndctl [--version] [--help] COMMAND [ARGS]
39 | 
40 | See 'ndctl help COMMAND' for more information on a specific command.
41 | ndctl --list-cmds to see all available commands
42 | ```
43 | 
44 | Below is an example of using the `ndctl help` command to launch the `create-namespace` man page:
45 | 
46 | ```text
47 | # ndctl help create-namespace
48 | ```
49 | 
50 | 3\) Using `ndctl --list-cmds` lists all commands as a single list.
51 | 
52 | ```text
53 | # ndctl --list-cmds
54 | version
55 | enable-namespace
56 | disable-namespace
57 | create-namespace
58 | destroy-namespace
59 | check-namespace
60 | clear-errors
61 | enable-region
62 | disable-region
63 | enable-dimm
64 | disable-dimm
65 | zero-labels
66 | read-labels
67 | write-labels
68 | init-labels
69 | check-labels
70 | inject-error
71 | update-firmware
72 | inject-smart
73 | wait-scrub
74 | start-scrub
75 | setup-passphrase
76 | update-passphrase
77 | remove-passphrase
78 | freeze-security
79 | sanitize-dimm
80 | load-keys
81 | wait-overwrite
82 | list
83 | monitor
84 | help
85 | ```
86 | 
87 | An alternative method for listing commands uses the [TAB key completion](./#tab-command-and-argument-completion) feature of ndctl. By typing `ndctl ` and pressing the TAB key, we can list the commands, e.g.:
88 | 
89 | ```text
90 | # ndctl 
91 | check-labels disable-namespace help monitor update-firmware zero-labels
92 | check-namespace disable-region init-labels read-labels update-passphrase
93 | clear-errors enable-dimm inject-error remove-passphrase version
94 | create-namespace enable-namespace inject-smart sanitize-dimm wait-overwrite
95 | destroy-namespace enable-region list setup-passphrase wait-scrub
96 | disable-dimm freeze-security load-keys start-scrub write-labels
97 | ```
98 | 
99 | ### TAB Command and Argument Completion
100 | 
101 | `ndctl` supports command completion using the TAB key. For example, typing `ndctl enable-` and pressing TAB lists all commands beginning with 'enable', e.g.:
102 | 
103 | ```text
104 | # ndctl enable-
105 | enable-dimm enable-namespace enable-region
106 | ```
107 | 
108 | TAB completion also works with command arguments. For example, typing `ndctl enable-dimm ` and pressing TAB will show all available command arguments. The 'enable-dimm' command can enable one, more than one, or all NVDIMMs, so it lists all available NVDIMM \(nmem\) devices, e.g.:
109 | 
110 | ```text
111 | # ndctl enable-dimm 
112 | all nmem0
113 | ```
114 | 
115 | ### Getting Help
116 | 
117 | NDCTL ships with a man page for each command. Each man page describes the required arguments and features in detail. Man pages can be found and accessed using the `man` or `ndctl` utilities.
### Getting Help

NDCTL ships with a man page for each command. Each man page describes the command's required arguments and features in detail. Man pages can be found and accessed using either the `man` or the `ndctl` utility. The following `man -k ndctl` command searches for any man page containing the "ndctl" keyword:

```text
# man -k ndctl
ndctl (1) - Manage "libnvdimm" subsystem devices (Non-volatile Memory)
ndctl-check-labels (1) - determine if the given dimms have a valid namespace index block
ndctl-check-namespace (1) - check namespace metadata consistency
ndctl-clear-errors (1) - clear all errors (badblocks) on the given namespace
ndctl-create-namespace (1) - provision or reconfigure a namespace
ndctl-destroy-namespace (1) - destroy the given namespace(s)
ndctl-disable-dimm (1) - disable one or more idle dimms
ndctl-disable-namespace (1) - disable the given namespace(s)
ndctl-disable-region (1) - disable the given region(s) and all descendant namespaces
ndctl-enable-dimm (1) - enable one more dimms
ndctl-enable-namespace (1) - enable the given namespace(s)
ndctl-enable-region (1) - enable the given region(s) and all descendant namespaces
ndctl-freeze-security (1) - Set the given DIMM(s) to reject future security operations
ndctl-init-labels (1) - initialize the label data area on a dimm or set of dimms
ndctl-inject-error (1) - inject media errors at a namespace offset
ndctl-inject-smart (1) - perform smart threshold/injection operations on a DIMM
ndctl-list (1) - dump the platform nvdimm device topology and attributes in json
ndctl-load-keys (1) - load the kek and encrypted passphrases into the keyring
ndctl-monitor (1) - Monitor the smart events of nvdimm objects
ndctl-read-labels (1) - read out the label area on a dimm or set of dimms
ndctl-remove-passphrase (1) - Stop a DIMM from locking at power-loss and requiring a passphrase to access media
ndctl-sanitize-dimm (1) - Perform a cryptographic destruction or overwrite of the contents of the given NVDIMM(s)
ndctl-setup-passphrase (1) - setup and enable the security passphrase for an NVDIMM
ndctl-start-scrub (1) - start an Address Range Scrub (ARS) operation
ndctl-update-firmware (1) - provides for updating the firmware on an NVDIMM
ndctl-update-passphrase (1) - update the security passphrase for an NVDIMM
ndctl-wait-overwrite (1) - wait for an overwrite operation to complete
ndctl-wait-scrub (1) - wait for an Address Range Scrub (ARS) operation to complete
ndctl-write-labels (1) - write data to the label area on a dimm
ndctl-zero-labels (1) - zero out the label area on a dimm or set of dimms
```

{% hint style="info" %}
Note: If `man -k ndctl` returns "ndctl: nothing appropriate." or similar, see the [Troubleshooting](troubleshooting.md#man-k-ndctl-returns-ndctl-nothing-appropriate) section for instructions to manually rebuild the man page indexes.
{% endhint %}

Additionally, executing `ndctl help <command>` displays the man page for the given command, e.g.:

```text
# ndctl help enable-dimm
```

A list of `ndctl` man pages is also available online. See '[NDCTL Man Pages](man-pages.md)' for a complete list.

## Displaying Bus, NVDIMM, Region, and Namespace Information

The `ndctl list` command is a powerful, feature-rich command. A list of its options is shown below:

```text
# ndctl list -?

Error: unknown switch `?'

 usage: ndctl list [<options>]

    -b, --bus <bus-id>    filter by bus
    -r, --region <region-id>
                          filter by region
    -d, --dimm <dimm-id>  filter by dimm
    -n, --namespace <namespace-id>
                          filter by namespace id
    -m, --mode <namespace-mode>
                          filter by namespace mode
    -t, --type <region-type>
                          filter by region-type
    -U, --numa-node <numa node>
                          filter by numa node
    -B, --buses           include bus info
    -D, --dimms           include dimm info
    -F, --firmware        include firmware info
    -H, --health          include dimm health
    -R, --regions         include region info
    -N, --namespaces      include namespace info (default)
    -X, --device-dax      include device-dax info
    -i, --idle            include idle devices
    -M, --media-errors    include media errors
    -u, --human           use human friendly number formats
```

Using the filter options is a powerful way to limit the output to the devices of interest.

### Examples

To list all active/enabled namespaces:

```text
# ndctl list -N
```

To list all active/enabled regions:

```text
# ndctl list -R
```

To list all active/enabled NVDIMMs:

```text
# ndctl list -D
```

To list all active/enabled NVDIMMs, Regions, and Namespaces:

```text
# ndctl list -DRN
```

To list all active/enabled and disabled/inactive \(idle\) NVDIMMs, Regions, and Namespaces:

```text
# ndctl list -DRNi
```

To list all active/enabled and disabled/inactive \(idle\) NVDIMMs, Regions, and Namespaces with human-readable values:

```text
# ndctl list -iNuRD
```
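The filter options can also be combined with the selectors above in a single invocation. As a brief sketch, the first command below limits the output to namespaces in a single region and the second lists only namespaces configured in fsdax mode; `region0` is an illustrative region name, so substitute the names reported by `ndctl list -R` on your system:

```text
# ndctl list -N -r region0
# ndctl list -N -m fsdax
```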

--------------------------------------------------------------------------------