├── .gitignore ├── Doc ├── readme.api.docx └── readme.relnotes ├── LICENSE.txt ├── Makefile ├── README ├── build.sh ├── configure ├── definitions.h ├── dnvme.spec ├── dnvme_cmds.c ├── dnvme_cmds.h ├── dnvme_ds.c ├── dnvme_ds.h ├── dnvme_interface.h ├── dnvme_ioctls.c ├── dnvme_ioctls.h ├── dnvme_irq.c ├── dnvme_irq.h ├── dnvme_queue.c ├── dnvme_queue.h ├── dnvme_reg.c ├── dnvme_reg.h ├── dnvme_sts_chk.c ├── dnvme_sts_chk.h ├── doxygen.conf ├── etc └── 55-dnvme.rules ├── funcHierarchy.sh ├── scripts ├── checkSrc.sh └── checkpatch.pl ├── sysdnvme.c ├── sysdnvme.h ├── sysfuncproto.h ├── unittest ├── .gitignore ├── Makefile ├── test.c ├── test_alloc.c ├── test_irq.c ├── test_irq.h ├── test_metrics.c ├── test_metrics.h ├── test_rng_sqxdbl.c ├── test_send_cmd.c └── test_send_cmd.h └── version.h /.gitignore: -------------------------------------------------------------------------------- 1 | rpm/ 2 | rpmbuild/ 3 | dnvme-*/ 4 | Debug/ 5 | Doc/HTML 6 | 7 | *.o 8 | *.ko 9 | *.cmd 10 | *.symvers 11 | *.mod.c 12 | *.order 13 | doxygen.log 14 | .tmp_versions/ 15 | dnvme*.tar.gz 16 | .project 17 | .cproject 18 | *.swp 19 | -------------------------------------------------------------------------------- /Doc/readme.api.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nvmecompliance/dnvme/74c7ed7332a7fc3a3ae437845386c489585d1df6/Doc/readme.api.docx -------------------------------------------------------------------------------- /Doc/readme.relnotes: -------------------------------------------------------------------------------- 1 | nvmecompliance_release=1.1.0 Added interrupt support; MSI, MSI-X, no pin based 2 | 1) MSI-single and MSI-multi are coded along side of MSI-X, however no 3 | hardware is available, and QEMU does not support MSI based interrupts, 4 | and thus this support has not be verified bug free. 5 | 2) MSI-X is verified bug free to the ability of tnvme and unit tests 6 | which reside within this repo. 7 | 3) Pin-based interrupts need to be supported in dnvme on an as needed basis. 8 | nvmecompliance_release=1.0.1 Enhancements and bugfixes; added read/write cmds 9 | 1) Planning on supporting interrupts in the future. 10 | nvmecompliance_release=1.0.0 First official release of the NVME compliance suite 11 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. 
Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. 
You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. 
You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. 
If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | 294 | Copyright (C) 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 
311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | , 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # 2 | # NVM Express Compliance Suite 3 | # Copyright (c) 2011, Intel Corporation. 4 | # 5 | # This program is free software; you can redistribute it and/or modify it 6 | # under the terms and conditions of the GNU General Public License, 7 | # version 2, as published by the Free Software Foundation. 8 | # 9 | # This program is distributed in the hope it will be useful, but WITHOUT 10 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | # more details. 13 | # 14 | # You should have received a copy of the GNU General Public License along with 15 | # this program; if not, write to the Free Software Foundation, Inc., 16 | # 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | # 18 | 19 | # Modify the Makefile to point to Linux build tree. 
20 | DIST ?= $(shell uname -r) 21 | KDIR:=/lib/modules/$(DIST)/build/ 22 | CDIR:=/usr/src/linux-source-3.5.0-generic/debian/scripts/ 23 | SOURCE:=$(shell pwd) 24 | DRV_NAME:=dnvme 25 | 26 | # QEMU_ON should be used when running the driver within QEMU, which forces 27 | # dnvme to convert 8B writes to 2 4B writes, patching a QEMU deficiency 28 | #QEMU_ON:=-DQEMU 29 | 30 | # Introduces more logging in /var/log/messages; enhances debug but slows down 31 | #DBG_ON:=-g -DDEBUG 32 | 33 | EXTRA_CFLAGS+=-Wall $(QEMU_ON) $(DBG_ON) -I$(PWD)/ 34 | 35 | SOURCES := \ 36 | dnvme_reg.c \ 37 | sysdnvme.c \ 38 | dnvme_ioctls.c \ 39 | dnvme_sts_chk.c \ 40 | dnvme_queue.c \ 41 | dnvme_cmds.c \ 42 | dnvme_ds.c \ 43 | dnvme_irq.c 44 | 45 | # 46 | # RPM build parameters 47 | # 48 | RPMBASE=$(DRV_NAME) 49 | MAJOR=$(shell awk 'FNR==29' $(PWD)/version.h) 50 | MINOR=$(shell awk 'FNR==32' $(PWD)/version.h) 51 | SOFTREV=$(MAJOR).$(MINOR) 52 | RPMFILE=$(RPMBASE)-$(SOFTREV) 53 | RPMCOMPILEDIR=$(PWD)/rpmbuild 54 | RPMSRCFILE=$(PWD)/$(RPMFILE) 55 | RPMSPECFILE=$(RPMBASE).spec 56 | SRCDIR?=./src 57 | 58 | obj-m := dnvme.o 59 | dnvme-objs += sysdnvme.o dnvme_ioctls.o dnvme_reg.o dnvme_sts_chk.o dnvme_queue.o dnvme_cmds.o dnvme_ds.o dnvme_irq.o 60 | 61 | all: 62 | make -C $(KDIR) M=$(PWD) modules 63 | 64 | rpm: rpmzipsrc rpmbuild 65 | 66 | clean: 67 | make -C $(KDIR) M=$(PWD) clean 68 | rm -f doxygen.log 69 | rm -rf $(SRCDIR) 70 | rm -rf $(RPMFILE) 71 | rm -rf $(RPMCOMPILEDIR) 72 | rm -rf $(RPMSRCFILE) 73 | rm -f $(RPMSRCFILE).tar* 74 | 75 | clobber: clean 76 | rm -rf Doc/HTML 77 | rm -f $(DRV_NAME) 78 | 79 | doc: all 80 | doxygen doxygen.conf > doxygen.log 81 | 82 | # Specify a custom source compile dir: "make src SRCDIR=../compile/dir" 83 | # If the specified dir could cause recursive copies, then specify w/o './' 84 | # "make src SRCDIR=src" will copy all except "src" dir. 85 | src: 86 | rm -rf $(SRCDIR) 87 | mkdir -p $(SRCDIR) 88 | (git archive HEAD) | tar xf - -C $(SRCDIR) 89 | 90 | install: 91 | # typically one invokes this as "sudo make install" 92 | mkdir -p $(DESTDIR)/lib/modules/$(DIST) 93 | install -p $(DRV_NAME).ko $(DESTDIR)/lib/modules/$(DIST) 94 | install -p etc/55-$(RPMBASE).rules $(DESTDIR)/etc/udev/rules.d 95 | ifeq '$(DESTDIR)' '' 96 | # DESTDIR only defined when installing to generate an RPM, i.e. pseudo 97 | # install, thus don't update /lib/modules/xxx/modules.dep file 98 | /sbin/depmod -a 99 | endif 100 | 101 | rpmzipsrc: SRCDIR:=$(RPMFILE) 102 | rpmzipsrc: clobber src 103 | rm -f $(RPMSRCFILE).tar* 104 | tar cvf $(RPMSRCFILE).tar $(RPMFILE) 105 | gzip $(RPMSRCFILE).tar 106 | 107 | rpmbuild: rpmzipsrc 108 | # Build the RPM and then copy the results locally 109 | ./build.sh $(RPMCOMPILEDIR) $(RPMSPECFILE) $(RPMSRCFILE) 110 | rm -rf ./rpm 111 | mkdir ./rpm 112 | cp -p $(RPMCOMPILEDIR)/RPMS/x86_64/*.rpm ./rpm 113 | cp -p $(RPMCOMPILEDIR)/SRPMS/*.rpm ./rpm 114 | 115 | .PHONY: all clean clobber doc src install rpmzipsrc rpmbuild 116 | -------------------------------------------------------------------------------- /README: -------------------------------------------------------------------------------- 1 | This is the dnvme repository in the NVMe Compliance project. 2 | 3 | This repo contains the Linux driver for the NVMe Compliance Suite. 4 | 5 | Contact compliance@nvmexpress.org for more information. 
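A minimal build-and-load sketch, assuming a standard out-of-tree module workflow; it is illustrative only, not a sequence documented by this repo. The dnvme.ko name follows from "obj-m := dnvme.o" in the Makefile, "sudo make install" and "/sbin/depmod -a" come from the install target, and modprobe/insmod/lsmod are generic Linux commands shown here as assumptions rather than project-provided tooling.

    make                    # build dnvme.ko against /lib/modules/$(uname -r)/build
    sudo make install       # install dnvme.ko plus etc/55-dnvme.rules, then run depmod -a
    sudo modprobe dnvme     # load the installed module (or: sudo insmod ./dnvme.ko from the build dir)
    lsmod | grep dnvme      # confirm the module is resident

When testing under QEMU, uncomment the QEMU_ON line in the Makefile before building so that -DQEMU lands in EXTRA_CFLAGS and the driver splits 8 byte register writes into two 4 byte writes, as described in the Makefile comments.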
6 | -------------------------------------------------------------------------------- /build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # NVM Express Compliance Suite 4 | # Copyright (c) 2011, Intel Corporation. 5 | # 6 | # This program is free software; you can redistribute it and/or modify it 7 | # under the terms and conditions of the GNU General Public License, 8 | # version 2, as published by the Free Software Foundation. 9 | # 10 | # This program is distributed in the hope it will be useful, but WITHOUT 11 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 | # more details. 14 | # 15 | # You should have received a copy of the GNU General Public License along with 16 | # this program; if not, write to the Free Software Foundation, Inc., 17 | # 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 18 | # 19 | 20 | RPMCOMPILEDIR=$1 21 | RPMSPECFILE=$2 22 | RPMSRCFILE=$3 23 | 24 | Usage() { 25 | echo "usage...." 26 | echo " $0 " 27 | echo " Specify full path to the base RPM compilation dir" 28 | echo " Specify filename only of the RPM spec file (*.spec)" 29 | echo " Specify full path to the RPM source file (*.tar.gz)" 30 | echo "" 31 | } 32 | 33 | # RPM build errors under ROOT a potentially catastrophic 34 | if [[ $EUID -eq 0 ]]; then 35 | echo Running as 'root' is not supported 36 | exit 37 | fi 38 | 39 | if [ -z $RPMCOMPILEDIR ] || [ -z $RPMSPECFILE ] || [ -z $RPMSRCFILE ]; then 40 | Usage 41 | exit 42 | fi 43 | 44 | # Setup a fresh RPM build environment, and then build 45 | MAJOR=`awk 'FNR == 29' version.h` 46 | MINOR=`awk 'FNR == 32' version.h` 47 | rm -rf $RPMCOMPILEDIR 48 | mkdir -p $RPMCOMPILEDIR/{BUILDROOT,BUILD,RPMS,S{OURCE,PEC,RPM}S} 49 | cp -p $RPMSRCFILE.tar.gz $RPMCOMPILEDIR 50 | cp -p $RPMSRCFILE.tar.gz $RPMCOMPILEDIR/SOURCES 51 | cp -p ${RPMSPECFILE} $RPMCOMPILEDIR/SPECS 52 | cd $RPMCOMPILEDIR/SPECS 53 | rpmbuild --define "_topdir ${RPMCOMPILEDIR}" --define "_major $MAJOR" --define "_minor $MINOR" -ba ${RPMSPECFILE} 54 | 55 | exit 56 | -------------------------------------------------------------------------------- /configure: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nvmecompliance/dnvme/74c7ed7332a7fc3a3ae437845386c489585d1df6/configure -------------------------------------------------------------------------------- /definitions.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 
17 | */ 18 | 19 | #ifndef _DEFINITIONS_H_ 20 | #define _DEFINITIONS_H_ 21 | 22 | #define SUCCESS 0 23 | #define FAIL -1 24 | #define DEVICE_LIST_SIZE 20 25 | #define CONFIG_PCI 1 26 | #define NVME_DEV_INIT 0x3 27 | 28 | #ifndef USHRT_MAX 29 | #define USHRT_MAX ((u16)(~0U)) 30 | #endif 31 | /** 32 | * @def PCI_CLASS_STORAGE_EXPRESS 33 | * Set to value matching with NVME HW 34 | */ 35 | #define PCI_CLASS_STORAGE_EXPRESS 0x010802 36 | 37 | #define NVME_MINORS 16 38 | 39 | /** 40 | * @def MAX_PCI_EXPRESS_CFG 41 | * Maximum pcie config space. 42 | */ 43 | #define MAX_PCI_EXPRESS_CFG 0xFFF 44 | 45 | /** 46 | * @def MAX_PCI_CFG 47 | * Maximum pci config space available 48 | */ 49 | #define MAX_PCI_CFG 0xFF 50 | 51 | /** 52 | * @def MAX_PCI_HDR 53 | * Maximum pci header offset. 54 | */ 55 | #define MAX_PCI_HDR 0x3F 56 | 57 | #define LOWER_16BITS 0xFFFF 58 | 59 | /** 60 | * @def CAP_REG 61 | * Set to offset defined in NVME Spec 1.0b. 62 | */ 63 | #define CAP_REG 0x34 64 | 65 | /** 66 | * @def PMCS 67 | * Set to offset defined in NVME Spec 1.0b. 68 | */ 69 | #define PMCS 0x4 70 | 71 | /** 72 | * @def AER_ID_MASK 73 | * Mask bits will extract last bit and that will help 74 | * in determining what this bit corresponds to in terms of 75 | * AER capability. 76 | */ 77 | #define AER_ID_MASK 0x1 78 | 79 | /** 80 | * @def AER_CAP_ID 81 | * Indicate that this capability structure is an Advanced Error 82 | * reporting capability. 83 | */ 84 | #define AER_CAP_ID 0x1 85 | 86 | /** 87 | * @def MAX_METABUFF_SIZE 88 | * Indicates the max meta buff size allowed per cmd is 2GB 89 | */ 90 | #define MAX_METABUFF_SIZE (1 << 31) 91 | 92 | #endif 93 | -------------------------------------------------------------------------------- /dnvme.spec: -------------------------------------------------------------------------------- 1 | %define _distro %(uname -r) 2 | 3 | Name: dnvme 4 | Version: %{_major}.%{_minor} 5 | Release: 1%{?dist} 6 | Summary: NVM Express hardware compliance test suite kernel driver 7 | Group: System Environment/Kernel 8 | License: Commercial 9 | URL: http://www.intel.com 10 | Source0: %{name}-%{version}.tar.gz 11 | BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) 12 | 13 | %description 14 | NVM Express hardware compliance test suite kernel driver. 15 | 16 | %prep 17 | %setup -q 18 | 19 | %build 20 | %configure 21 | make %{?_smp_mflags} 22 | 23 | %install 24 | rm -rf $RPM_BUILD_ROOT 25 | mkdir -p $RPM_BUILD_ROOT/etc/udev/rules.d 26 | make install DESTDIR=$RPM_BUILD_ROOT 27 | 28 | %clean 29 | rm -rf $RPM_BUILD_ROOT 30 | 31 | %files 32 | %defattr(644,root,root,755) 33 | /etc/udev/rules.d/* 34 | /lib/modules/%{_distro}/* 35 | 36 | %post 37 | /sbin/depmod -a 38 | 39 | %preun 40 | 41 | %postun 42 | /sbin/depmod -a 43 | 44 | %changelog 45 | -------------------------------------------------------------------------------- /dnvme_cmds.c: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 
13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #include 20 | #include 21 | #include 22 | #include 23 | 24 | #include "sysdnvme.h" 25 | #include "definitions.h" 26 | #include "dnvme_reg.h" 27 | #include "dnvme_ds.h" 28 | #include "dnvme_cmds.h" 29 | #include 30 | 31 | 32 | 33 | /* Declaration of static functions belonging to Submitting 64Bytes Command */ 34 | static int data_buf_to_prp(struct nvme_device *nvme_dev, 35 | struct metrics_sq *pmetrics_sq, struct nvme_64b_send *nvme_64b_send, 36 | struct nvme_prps *prps, u8 opcode, u16 persist_q_id, 37 | enum data_buf_type data_buf_type, u16 cmd_id); 38 | static int map_user_pg_to_dma(struct nvme_device *nvme_dev, 39 | enum dma_data_direction kernel_dir, unsigned long buf_addr, 40 | unsigned total_buf_len, struct scatterlist **sg_list, 41 | struct nvme_prps *prps, enum data_buf_type data_buf_type); 42 | static int pages_to_sg(struct page **pages, int num_pages, int buf_offset, 43 | unsigned len, struct scatterlist **sg_list); 44 | static int setup_prps(struct nvme_device *nvme_dev, struct scatterlist *sg, 45 | s32 buf_len, struct nvme_prps *prps, u8 cr_io_q, 46 | enum send_64b_bitmask prp_mask); 47 | static void unmap_user_pg_to_dma(struct nvme_device *nvme_dev, 48 | struct nvme_prps *prps); 49 | static void free_prp_pool(struct nvme_device *nvme_dev, 50 | struct nvme_prps *prps, u32 npages); 51 | 52 | 53 | /* prep_send64b_cmd: 54 | * Prepares the 64 byte command to be sent 55 | * with PRP generation and addition of nodes 56 | * inside cmd track list 57 | */ 58 | int prep_send64b_cmd(struct nvme_device *nvme_dev, struct metrics_sq 59 | *pmetrics_sq, struct nvme_64b_send *nvme_64b_send, struct nvme_prps *prps, 60 | struct nvme_gen_cmd *nvme_gen_cmd, u16 persist_q_id, 61 | enum data_buf_type data_buf_type, u8 gen_prp) 62 | { 63 | int ret_code; 64 | 65 | if (gen_prp) { 66 | /* Create PRP and add the node inside the command track list */ 67 | ret_code = data_buf_to_prp(nvme_dev, pmetrics_sq, nvme_64b_send, prps, 68 | nvme_gen_cmd->opcode, persist_q_id, data_buf_type, 69 | nvme_gen_cmd->command_id); 70 | if (ret_code < 0) { 71 | LOG_ERR("Data buffer to PRP generation failed"); 72 | return ret_code; 73 | } 74 | 75 | /* Update the PRP's in the command based on type */ 76 | if ((prps->type == (PRP1 | PRP2)) || 77 | (prps->type == (PRP2 | PRP_List))) { 78 | 79 | nvme_gen_cmd->prp1 = cpu_to_le64(prps->prp1); 80 | nvme_gen_cmd->prp2 = cpu_to_le64(prps->prp2); 81 | } else { 82 | nvme_gen_cmd->prp1 = cpu_to_le64(prps->prp1); 83 | } 84 | } else { 85 | /* Adding node inside cmd_track list for pmetrics_sq */ 86 | ret_code = add_cmd_track_node(pmetrics_sq, persist_q_id, prps, 87 | nvme_gen_cmd->opcode, nvme_gen_cmd->command_id); 88 | if (ret_code < 0) { 89 | LOG_ERR("Failure to add command track node for\ 90 | Create Contig Queue Command"); 91 | return ret_code; 92 | } 93 | } 94 | return 0; 95 | } 96 | 97 | /* 98 | * add_cmd_track_node: 99 | * Create and add the node inside command track list 100 | */ 101 | int add_cmd_track_node(struct metrics_sq *pmetrics_sq, 102 | u16 persist_q_id, struct nvme_prps *prps, u8 opcode, u16 cmd_id) 103 | { 104 | /* pointer to cmd track linked list node */ 105 | struct cmd_track *pcmd_track_list; 106 | 107 | /* Fill the cmd_track structure */ 108 | pcmd_track_list = kmalloc(sizeof(struct cmd_track), 109 | GFP_ATOMIC | __GFP_ZERO); 110 
| if (pcmd_track_list == NULL) { 111 | LOG_ERR("Failed to alloc memory for the command track list"); 112 | return -ENOMEM; 113 | } 114 | 115 | /* Fill the node */ 116 | pcmd_track_list->unique_id = cmd_id; 117 | pcmd_track_list->persist_q_id = persist_q_id; 118 | pcmd_track_list->opcode = opcode; 119 | /* non_persist PRP's not filled for create/delete contig/discontig IOQ */ 120 | if (!persist_q_id) { 121 | memcpy(&pcmd_track_list->prp_nonpersist, prps, 122 | sizeof(struct nvme_prps)); 123 | } 124 | 125 | /* Add an element to the end of the list */ 126 | list_add_tail(&pcmd_track_list->cmd_list_hd, 127 | &pmetrics_sq->private_sq.cmd_track_list); 128 | LOG_DBG("Node created and added inside command track list"); 129 | return 0; 130 | } 131 | 132 | /* 133 | * empty_cmd_track_list: 134 | * Delete command track list completely per SQ 135 | */ 136 | void empty_cmd_track_list(struct nvme_device *nvme_device, 137 | struct metrics_sq *pmetrics_sq) 138 | { 139 | struct cmd_track *pcmd_track_element; /* ptr to 1 element within list */ 140 | struct list_head *pos, *temp; /* required for list_for_each_safe */ 141 | 142 | list_for_each_safe(pos, temp, &pmetrics_sq->private_sq.cmd_track_list) { 143 | 144 | pcmd_track_element = list_entry(pos, struct cmd_track, cmd_list_hd); 145 | del_prps(nvme_device, &pcmd_track_element->prp_nonpersist); 146 | list_del(pos); 147 | kfree(pcmd_track_element); 148 | } 149 | } 150 | 151 | /* 152 | * del_prps: 153 | * Deletes the PRP structures of SQ/CQ or command track node 154 | */ 155 | void del_prps(struct nvme_device *nvme_device, struct nvme_prps *prps) 156 | { 157 | /* First unmap the dma */ 158 | unmap_user_pg_to_dma(nvme_device, prps); 159 | /* free prp list pointed by this non contig cq */ 160 | free_prp_pool(nvme_device, prps, prps->npages); 161 | } 162 | 163 | /* 164 | * destroy_dma_pool: 165 | * Destroy's the dma pool 166 | * Returns void 167 | */ 168 | void destroy_dma_pool(struct nvme_device *nvme_dev) 169 | { 170 | /* Destroy the DMA pool */ 171 | dma_pool_destroy(nvme_dev->private_dev.prp_page_pool); 172 | } 173 | 174 | /* 175 | * data_buf_to_prp: 176 | * Creates persist or non persist PRP's from data_buf_ptr memory 177 | * and addes a node inside cmd track list pointed by pmetrics_sq 178 | */ 179 | static int data_buf_to_prp(struct nvme_device *nvme_dev, 180 | struct metrics_sq *pmetrics_sq, struct nvme_64b_send *nvme_64b_send, 181 | struct nvme_prps *prps, u8 opcode, u16 persist_q_id, 182 | enum data_buf_type data_buf_type, u16 cmd_id) 183 | { 184 | int err; 185 | unsigned long addr; 186 | struct scatterlist *sg_list = NULL; 187 | enum dma_data_direction kernel_dir; 188 | #ifdef TEST_PRP_DEBUG 189 | int last_prp, i, j; 190 | __le64 *prp_vlist; 191 | s32 num_prps; 192 | #endif 193 | 194 | 195 | /* Catch common mistakes */ 196 | addr = (unsigned long)nvme_64b_send->data_buf_ptr; 197 | if ((addr & 3) || (addr == 0) || 198 | (nvme_64b_send->data_buf_size == 0) || (nvme_dev == NULL)) { 199 | 200 | LOG_ERR("Invalid Arguments"); 201 | return -EINVAL; 202 | } 203 | 204 | /* Typecase is only possible because the kernel vs. 
user space contract 205 | * states the following which agrees with 'enum dma_data_direction' 206 | * 0=none; 1=to_device, 2=from_device, 3=bidirectional, others illegal */ 207 | kernel_dir = (enum dma_data_direction)nvme_64b_send->data_dir; 208 | 209 | /* Mapping user pages to dma memory */ 210 | err = map_user_pg_to_dma(nvme_dev, kernel_dir, addr, 211 | nvme_64b_send->data_buf_size, &sg_list, prps, data_buf_type); 212 | if (err < 0) { 213 | return err; 214 | } 215 | 216 | err = setup_prps(nvme_dev, sg_list, nvme_64b_send->data_buf_size, prps, 217 | data_buf_type, nvme_64b_send->bit_mask); 218 | if (err < 0) { 219 | unmap_user_pg_to_dma(nvme_dev, prps); 220 | return err; 221 | } 222 | 223 | #ifdef TEST_PRP_DEBUG 224 | last_prp = PAGE_SIZE / PRP_Size - 1; 225 | if (prps->type == (PRP1 | PRP_List)) { 226 | num_prps = DIV_ROUND_UP(nvme_64b_send->data_buf_size + 227 | offset_in_page(addr), PAGE_SIZE); 228 | } else { 229 | num_prps = DIV_ROUND_UP(nvme_64b_send->data_buf_size, PAGE_SIZE); 230 | } 231 | 232 | if (prps->type == (PRP1 | PRP_List) || prps->type == (PRP2 | PRP_List)) { 233 | if (!(prps->vir_prp_list)) { 234 | LOG_ERR("Creation of PRP failed"); 235 | err = -ENOMEM; 236 | goto err_unmap_prp_pool; 237 | } 238 | prp_vlist = prps->vir_prp_list[0]; 239 | if (prps->type == (PRP2 | PRP_List)) { 240 | LOG_DBG("P1 Entry: %llx", (unsigned long long) prps->prp1); 241 | } 242 | for (i = 0, j = 0; i < num_prps; i++) { 243 | 244 | if (j < (prps->npages - 1) && i == last_prp) { 245 | j++; 246 | num_prps -= i; 247 | i = 0 ; 248 | prp_vlist = prps->vir_prp_list[j]; 249 | LOG_DBG("Physical address of next PRP Page: %llx", 250 | (__le64) prp_vlist); 251 | } 252 | 253 | LOG_DBG("PRP List: %llx", (unsigned long long) prp_vlist[i]); 254 | } 255 | 256 | } else if (prps->type == PRP1) { 257 | LOG_DBG("P1 Entry: %llx", (unsigned long long) prps->prp1); 258 | } else { 259 | LOG_DBG("P1 Entry: %llx", (unsigned long long) prps->prp1); 260 | LOG_DBG("P2 Entry: %llx", (unsigned long long) prps->prp2); 261 | } 262 | #endif 263 | 264 | /* Adding node inside cmd_track list for pmetrics_sq */ 265 | err = add_cmd_track_node(pmetrics_sq, persist_q_id, prps, opcode, cmd_id); 266 | if (err < 0) { 267 | LOG_ERR("Failure to add command track node"); 268 | goto err_unmap_prp_pool; 269 | } 270 | 271 | LOG_DBG("PRP Built and added to command track node successfully"); 272 | return 0; 273 | 274 | err_unmap_prp_pool: 275 | unmap_user_pg_to_dma(nvme_dev, prps); 276 | free_prp_pool(nvme_dev, prps, prps->npages); 277 | return err; 278 | } 279 | 280 | static int map_user_pg_to_dma(struct nvme_device *nvme_dev, 281 | enum dma_data_direction kernel_dir, unsigned long buf_addr, 282 | unsigned total_buf_len, struct scatterlist **sg_list, 283 | struct nvme_prps *prps, enum data_buf_type data_buf_type) 284 | { 285 | int i, err, buf_pg_offset, buf_pg_count, num_sg_entries; 286 | struct page **pages; 287 | void *vir_kern_addr = NULL; 288 | 289 | 290 | buf_pg_offset = offset_in_page(buf_addr); 291 | buf_pg_count = DIV_ROUND_UP(buf_pg_offset + total_buf_len, PAGE_SIZE); 292 | LOG_DBG("User buf addr = 0x%016lx", buf_addr); 293 | LOG_DBG("User buf pg offset = 0x%08x", buf_pg_offset); 294 | LOG_DBG("User buf pg count = 0x%08x", buf_pg_count); 295 | LOG_DBG("User buf total length = 0x%08x", total_buf_len); 296 | 297 | pages = kcalloc(buf_pg_count, sizeof(*pages), GFP_KERNEL); 298 | if (pages == NULL) { 299 | LOG_ERR("Memory alloc for describing user pages failed"); 300 | return -ENOMEM; 301 | } 302 | 303 | /* Pinning user pages in memory, always 
assuming writing in case user space 304 | * specifies an incorrect direction of data xfer */ 305 | err = get_user_pages_fast(buf_addr, buf_pg_count, WRITE_PG, pages); 306 | if (err < buf_pg_count) { 307 | buf_pg_count = err; 308 | err = -EFAULT; 309 | LOG_ERR("Pinning down user pages failed"); 310 | goto error; 311 | } 312 | 313 | /* Kernel needs direct access to all Q memory, so discontiguously backed */ 314 | /* IOQ's must be mapped to allow the access to the memory */ 315 | if (data_buf_type == DISCONTG_IO_Q) { 316 | /* Note: Not suitable for pages with offsets, but since discontig back'd 317 | * Q's are required to be page aligned this isn't an issue */ 318 | vir_kern_addr = vmap(pages, buf_pg_count, VM_MAP, PAGE_KERNEL); 319 | LOG_DBG("Map'd user space buf to vir Kernel Addr: %p", vir_kern_addr); 320 | if (vir_kern_addr == NULL) { 321 | err = -EFAULT; 322 | LOG_ERR("Unable to map user space buffer to kernel space"); 323 | goto error; 324 | } 325 | } 326 | 327 | /* Generate SG List from pinned down pages */ 328 | err = pages_to_sg(pages, buf_pg_count, buf_pg_offset, 329 | total_buf_len, sg_list); 330 | if (err < 0) { 331 | LOG_ERR("Generation of sg lists failed"); 332 | goto error_unmap; 333 | } 334 | 335 | /* Mapping SG List to DMA; NOTE: The sg list could be coalesced by either 336 | * an IOMMU or the kernel, so checking whether or not the number of 337 | * mapped entries equate to the number given to the func is not warranted */ 338 | num_sg_entries = dma_map_sg(&nvme_dev->private_dev.pdev->dev, *sg_list, 339 | buf_pg_count, kernel_dir); 340 | LOG_DBG("%d elements mapped out of %d sglist elements", 341 | num_sg_entries, num_sg_entries); 342 | if (num_sg_entries == 0) { 343 | LOG_ERR("Unable to map the sg list into dma addr space"); 344 | err = -ENOMEM; 345 | goto error_unmap; 346 | } 347 | kfree(pages); 348 | 349 | #ifdef TEST_PRP_DEBUG 350 | { 351 | struct scatterlist *work; 352 | LOG_DBG("Dump sg list after DMA mapping"); 353 | for (i = 0, work = *sg_list; i < num_sg_entries; i++, 354 | work = sg_next(work)) { 355 | 356 | LOG_DBG(" sg list: page_link=0x%016lx", work->page_link); 357 | LOG_DBG(" sg list: offset=0x%08x, length=0x%08x", 358 | work->offset, work->length); 359 | LOG_DBG(" sg list: dmaAddr=0x%016llx, dmaLen=0x%08x", 360 | (u64)work->dma_address, work->dma_length); 361 | } 362 | } 363 | #endif 364 | 365 | /* Fill in nvme_prps */ 366 | prps->sg = *sg_list; 367 | prps->num_map_pgs = num_sg_entries; 368 | prps->vir_kern_addr = vir_kern_addr; 369 | prps->data_dir = kernel_dir; 370 | prps->data_buf_addr = buf_addr; 371 | prps->data_buf_size = total_buf_len; 372 | return 0; 373 | 374 | error_unmap: 375 | vunmap(vir_kern_addr); 376 | error: 377 | for (i = 0; i < buf_pg_count; i++) { 378 | put_page(pages[i]); 379 | } 380 | kfree(pages); 381 | return err; 382 | } 383 | 384 | static int pages_to_sg(struct page **pages, int num_pages, int buf_offset, 385 | unsigned len, struct scatterlist **sg_list) 386 | { 387 | int i; 388 | struct scatterlist *sg; 389 | 390 | *sg_list = NULL; 391 | sg = kmalloc((num_pages * sizeof(struct scatterlist)), GFP_KERNEL); 392 | if (sg == NULL) { 393 | LOG_ERR("Memory alloc for sg list failed"); 394 | return -ENOMEM; 395 | } 396 | 397 | /* Building the SG List */ 398 | sg_init_table(sg, num_pages); 399 | for (i = 0; i < num_pages; i++) { 400 | if (pages[i] == NULL) { 401 | kfree(sg); 402 | return -EFAULT; 403 | } 404 | sg_set_page(&sg[i], pages[i], 405 | min_t(int, len, (PAGE_SIZE - buf_offset)), buf_offset); 406 | len -= (PAGE_SIZE - buf_offset); 407 | 
buf_offset = 0; 408 | } 409 | sg_mark_end(&sg[i - 1]); 410 | *sg_list = sg; 411 | return 0; 412 | } 413 | 414 | /* 415 | * setup_prps: 416 | * Sets up PRP'sfrom DMA'ed memory 417 | * Returns Error codes 418 | */ 419 | static int setup_prps(struct nvme_device *nvme_dev, struct scatterlist *sg, 420 | s32 buf_len, struct nvme_prps *prps, u8 cr_io_q, 421 | enum send_64b_bitmask prp_mask) 422 | { 423 | dma_addr_t prp_dma, dma_addr; 424 | s32 dma_len; /* Length of DMA'ed SG */ 425 | __le64 *prp_list; /* Pointer to PRP List */ 426 | u32 offset; 427 | u32 num_prps, num_pg, prp_page = 0; 428 | int index, err; 429 | struct dma_pool *prp_page_pool; 430 | 431 | dma_addr = sg_dma_address(sg); 432 | dma_len = sg_dma_len(sg); 433 | offset = offset_in_page(dma_addr); 434 | 435 | /* Create IO CQ/SQ's */ 436 | if (cr_io_q) { 437 | /* Checking for PRP1 mask */ 438 | if (!(prp_mask & MASK_PRP1_LIST)) { 439 | LOG_ERR("bit_mask does not support PRP1 list"); 440 | return -EINVAL; 441 | } 442 | /* Specifies PRP1 entry is a PRP_List */ 443 | prps->type = (PRP1 | PRP_List); 444 | goto prp_list; 445 | } 446 | 447 | LOG_DBG("PRP1 Entry: Buf_len %d", buf_len); 448 | LOG_DBG("PRP1 Entry: dma_len %u", dma_len); 449 | LOG_DBG("PRP1 Entry: PRP entry %llx", (unsigned long long) dma_addr); 450 | 451 | /* Checking for PRP1 mask */ 452 | if (!(prp_mask & MASK_PRP1_PAGE)) { 453 | LOG_ERR("bit_mask does not support PRP1 page"); 454 | return -EINVAL; 455 | } 456 | prps->prp1 = cpu_to_le64(dma_addr); 457 | buf_len -= (PAGE_SIZE - offset); 458 | dma_len -= (PAGE_SIZE - offset); 459 | 460 | if (buf_len <= 0) { 461 | prps->type = PRP1; 462 | return 0; 463 | } 464 | 465 | /* If pages were contiguous in memory use same SG Entry */ 466 | if (dma_len) { 467 | dma_addr += (PAGE_SIZE - offset); 468 | } else { 469 | sg = sg_next(sg); 470 | dma_addr = sg_dma_address(sg); 471 | dma_len = sg_dma_len(sg); 472 | } 473 | 474 | offset = 0; 475 | 476 | if (buf_len <= PAGE_SIZE) { 477 | /* Checking for PRP2 mask */ 478 | if (!(prp_mask & MASK_PRP2_PAGE)) { 479 | LOG_ERR("bit_mask does not support PRP2 page"); 480 | return -EINVAL; 481 | } 482 | prps->prp2 = cpu_to_le64(dma_addr); 483 | prps->type = (PRP1 | PRP2); 484 | LOG_DBG("PRP2 Entry: Type %u", prps->type); 485 | LOG_DBG("PRP2 Entry: Buf_len %d", buf_len); 486 | LOG_DBG("PRP2 Entry: dma_len %u", dma_len); 487 | LOG_DBG("PRP2 Entry: PRP entry %llx", (unsigned long long) dma_addr); 488 | return 0; 489 | } 490 | 491 | /* Specifies PRP2 entry is a PRP_List */ 492 | prps->type = (PRP2 | PRP_List); 493 | /* Checking for PRP2 mask */ 494 | if (!(prp_mask & MASK_PRP2_LIST)) { 495 | LOG_ERR("bit_mask does not support PRP2 list"); 496 | return -EINVAL; 497 | } 498 | 499 | prp_list: 500 | /* Generate PRP List */ 501 | num_prps = DIV_ROUND_UP(offset + buf_len, PAGE_SIZE); 502 | /* Taking into account the last entry of PRP Page */ 503 | num_pg = DIV_ROUND_UP(PRP_Size * num_prps, PAGE_SIZE - PRP_Size); 504 | 505 | prps->vir_prp_list = kmalloc(sizeof(__le64 *) * num_pg, GFP_ATOMIC); 506 | if (NULL == prps->vir_prp_list) { 507 | LOG_ERR("Memory allocation for virtual list failed"); 508 | return -ENOMEM; 509 | } 510 | 511 | LOG_DBG("No. 
of PRP Entries inside PRPList: %u", num_prps); 512 | 513 | prp_page = 0; 514 | prp_page_pool = nvme_dev->private_dev.prp_page_pool; 515 | 516 | prp_list = dma_pool_alloc(prp_page_pool, GFP_ATOMIC, &prp_dma); 517 | if (NULL == prp_list) { 518 | kfree(prps->vir_prp_list); 519 | LOG_ERR("Memory allocation for prp page failed"); 520 | return -ENOMEM; 521 | } 522 | prps->vir_prp_list[prp_page++] = prp_list; 523 | prps->npages = prp_page; 524 | prps->first_dma = prp_dma; 525 | if (prps->type == (PRP2 | PRP_List)) { 526 | prps->prp2 = cpu_to_le64(prp_dma); 527 | LOG_DBG("PRP2 Entry: %llx", (unsigned long long) prps->prp2); 528 | } else if (prps->type == (PRP1 | PRP_List)) { 529 | prps->prp1 = cpu_to_le64(prp_dma); 530 | prps->prp2 = 0; 531 | LOG_DBG("PRP1 Entry: %llx", (unsigned long long) prps->prp1); 532 | } else { 533 | LOG_ERR("PRP cmd options don't allow proper description of buffer"); 534 | err = -EFAULT; 535 | goto error; 536 | } 537 | 538 | index = 0; 539 | for (;;) { 540 | if ((index == PAGE_SIZE / PRP_Size - 1) && (buf_len > PAGE_SIZE)) { 541 | __le64 *old_prp_list = prp_list; 542 | prp_list = dma_pool_alloc(prp_page_pool, GFP_ATOMIC, &prp_dma); 543 | if (NULL == prp_list) { 544 | LOG_ERR("Memory allocation for prp page failed"); 545 | err = -ENOMEM; 546 | goto error; 547 | } 548 | prps->vir_prp_list[prp_page++] = prp_list; 549 | prps->npages = prp_page; 550 | old_prp_list[index] = cpu_to_le64(prp_dma); 551 | index = 0; 552 | } 553 | 554 | LOG_DBG("PRP List: dma_len %d", dma_len); 555 | LOG_DBG("PRP List: Buf_len %d", buf_len); 556 | LOG_DBG("PRP List: offset %d", offset); 557 | LOG_DBG("PRP List: PRP entry %llx", (unsigned long long)dma_addr); 558 | 559 | prp_list[index++] = cpu_to_le64(dma_addr); 560 | dma_len -= (PAGE_SIZE - offset); 561 | dma_addr += (PAGE_SIZE - offset); 562 | buf_len -= (PAGE_SIZE - offset); 563 | offset = 0; 564 | 565 | if (buf_len <= 0) { 566 | break; 567 | } else if (dma_len > 0) { 568 | continue; 569 | } else if (dma_len < 0) { 570 | LOG_ERR("DMA data length is illegal"); 571 | err = -EFAULT; 572 | goto error; 573 | } else { 574 | sg = sg_next(sg); 575 | dma_addr = sg_dma_address(sg); 576 | dma_len = sg_dma_len(sg); 577 | } 578 | } 579 | return 0; 580 | 581 | error: 582 | LOG_ERR("Error in setup_prps function: %d", err); 583 | free_prp_pool(nvme_dev, prps, prp_page); 584 | return err; 585 | } 586 | 587 | 588 | /* 589 | * unmap_user_pg_to_dma: 590 | * Unmaps mapped DMA pages and frees the pinned down pages 591 | */ 592 | static void unmap_user_pg_to_dma(struct nvme_device *nvme_dev, 593 | struct nvme_prps *prps) 594 | { 595 | int i; 596 | struct page *pg; 597 | 598 | if (!prps) { 599 | return; 600 | } 601 | 602 | /* Unammping Kernel Virtual Address */ 603 | if (prps->vir_kern_addr && prps->type != NO_PRP) { 604 | vunmap(prps->vir_kern_addr); 605 | } 606 | 607 | if (prps->type != NO_PRP) { 608 | dma_unmap_sg(&nvme_dev->private_dev.pdev->dev, prps->sg, 609 | prps->num_map_pgs, prps->data_dir); 610 | 611 | for (i = 0; i < prps->num_map_pgs; i++) { 612 | pg = sg_page(&prps->sg[i]); 613 | if ((prps->data_dir == DMA_FROM_DEVICE) || 614 | (prps->data_dir == DMA_BIDIRECTIONAL)) { 615 | 616 | set_page_dirty_lock(pg); 617 | } 618 | put_page(pg); 619 | } 620 | kfree(prps->sg); 621 | } 622 | } 623 | 624 | 625 | /* 626 | * free_prp_pool: 627 | * Free's PRP List and virtual List 628 | */ 629 | static void free_prp_pool(struct nvme_device *nvme_dev, 630 | struct nvme_prps *prps, u32 npages) 631 | { 632 | int i; 633 | __le64 *prp_vlist; 634 | const int last_prp = ((PAGE_SIZE / 
PRP_Size) - 1); 635 | dma_addr_t prp_dma, next_prp_dma = 0; 636 | 637 | 638 | if (prps == NULL) { 639 | return; 640 | } 641 | 642 | if (prps->type == (PRP1 | PRP_List) || prps->type == (PRP2 | PRP_List)) { 643 | 644 | prp_dma = prps->first_dma; 645 | for (i = 0; i < npages; i++) { 646 | 647 | prp_vlist = prps->vir_prp_list[i]; 648 | if (i < (npages - 1)) { 649 | next_prp_dma = le64_to_cpu(prp_vlist[last_prp]); 650 | } 651 | dma_pool_free( 652 | nvme_dev->private_dev.prp_page_pool, prp_vlist, prp_dma); 653 | prp_dma = next_prp_dma; 654 | } 655 | kfree(prps->vir_prp_list); 656 | } 657 | } 658 | -------------------------------------------------------------------------------- /dnvme_cmds.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _DNVME_CMDS_H_ 20 | #define _DNVME_CMDS_H_ 21 | 22 | /* define's for unique QID creation */ 23 | #define UNIQUE_QID_FLAG 0x01 24 | 25 | 26 | enum { 27 | PRP_PRESENT = 1, /* Specifies to generate PRP's for a particular command */ 28 | PRP_ABSENT = 0 /* Specifies not to generate PRP's per command */ 29 | }; 30 | 31 | enum { 32 | WRITE_PG = 1, 33 | READ_PG = 0 34 | }; 35 | 36 | /* Enum specifying Writes/Reads to mapped pages and other general enums */ 37 | enum { 38 | PRP_Size = 8, /* Size of PRP entry in bytes */ 39 | PERSIST_QID_0 = 0, /* Default value of Persist queue ID */ 40 | CDW11_PC = 1, /* Mask for checking CDW11.PC of create IO Q cmds */ 41 | CDW11_IEN = 2, /* Mask to check if CDW11.IEN is set */ 42 | }; 43 | 44 | /* Enum specifying PRP1,PRP2 or List */ 45 | enum prp_type { 46 | NO_PRP = 0, 47 | PRP1 = 1, 48 | PRP2 = 2, 49 | PRP_List = 4, 50 | }; 51 | 52 | /* Enum specifying type of data buffer */ 53 | enum data_buf_type { 54 | DATA_BUF, 55 | CONTG_IO_Q, 56 | DISCONTG_IO_Q 57 | }; 58 | 59 | /** 60 | * prep_send64b_cmd: 61 | * Prepares the 64 byte command to be sent 62 | * with PRP generation and addition of nodes 63 | * inside cmd track list 64 | * @param nvme_dev 65 | * @param pmetrics_sq 66 | * @param nvme_64b_send 67 | * @param prps 68 | * @param nvme_gen_cmd 69 | * @param persist_q_id 70 | * @param data_buf_type 71 | * @param gen_prp 72 | * @return Error Codes 73 | */ 74 | int prep_send64b_cmd(struct nvme_device *nvme_dev, struct metrics_sq 75 | *pmetrics_sq, struct nvme_64b_send *nvme_64b_send, struct nvme_prps *prps, 76 | struct nvme_gen_cmd *nvme_gen_cmd, u16 persist_q_id, 77 | enum data_buf_type data_buf_type, u8 gen_prp); 78 | 79 | /** 80 | * add_cmd_track_node: 81 | * Add node inside the cmd track list 82 | * @param pmetrics_sq 83 | * @param persist_q_id 84 | * @param prps 85 | * @param cmd_type 86 | * @param opcode 87 | * @param cmd_id 88 | * @return Error codes 89 | */ 90 | int add_cmd_track_node(struct 
metrics_sq *pmetrics_sq, 91 | u16 persist_q_id, struct nvme_prps *prps, u8 opcode, u16 cmd_id); 92 | 93 | /** 94 | * empty_cmd_track_list: 95 | * Delete command track list completley per SQ 96 | * @param nvme_device 97 | * @param pmetrics_sq 98 | * @return void 99 | */ 100 | void empty_cmd_track_list(struct nvme_device *nvme_device, 101 | struct metrics_sq *pmetrics_sq); 102 | 103 | /** 104 | * destroy_dma_pool: 105 | * Destroy's the dma pool 106 | * @param nvme_dev 107 | * @return void 108 | */ 109 | void destroy_dma_pool(struct nvme_device *nvme_dev); 110 | 111 | /** 112 | * del_prps: 113 | * Deletes the PRP structures of SQ/CQ or command track node 114 | * @param nvme_device 115 | * @param prps 116 | * @return void 117 | */ 118 | void del_prps(struct nvme_device *nvme_device, struct nvme_prps *prps); 119 | 120 | 121 | #endif 122 | -------------------------------------------------------------------------------- /dnvme_ds.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _DNVME_DS_H_ 20 | #define _DNVME_DS_H_ 21 | 22 | #include 23 | #include 24 | #include 25 | #include 26 | #include 27 | 28 | #include "dnvme_interface.h" 29 | 30 | /* 0.0.01 */ 31 | #define DRIVER_VERSION 0x00000001 32 | #define DRIVER_VERSION_STR(VER) #VER 33 | 34 | /* To store the max vector locations */ 35 | #define MAX_VEC_SLT 2048 36 | /* 37 | * Strucutre used to define all the essential parameters 38 | * related to PRP1, PRP2 and PRP List 39 | */ 40 | struct nvme_prps { 41 | u32 npages; /* No. of pages inside the PRP List */ 42 | u32 type; /* refers to types of PRP Possible */ 43 | /* List of virtual pointers to PRP List pages */ 44 | __le64 **vir_prp_list; 45 | u8 *vir_kern_addr; /* K.V.A for pinned down pages */ 46 | __le64 prp1; /* Physical address in PRP1 of command */ 47 | __le64 prp2; /* Physical address in PRP2 of command */ 48 | dma_addr_t first_dma; /* First entry in PRP List */ 49 | /* Size of data buffer for the specific command */ 50 | u32 data_buf_size; 51 | /* Pointer to SG list generated */ 52 | struct scatterlist *sg; 53 | /* Number of pages mapped to DMA area */ 54 | u32 num_map_pgs; 55 | /* Address of data buffer for the specific command */ 56 | u64 data_buf_addr; 57 | enum dma_data_direction data_dir; 58 | }; 59 | 60 | /* 61 | * structure for the CQ tracking params with virtual address and size. 
62 | */ 63 | struct nvme_trk_cq { 64 | u8 *vir_kern_addr; /* phy addr ptr to the q's alloc to kern mem */ 65 | dma_addr_t cq_dma_addr; /* dma mapped address using dma_alloc */ 66 | u32 size; /* length in bytes of the alloc Q in kernel */ 67 | u32 __iomem *dbs; /* Door Bell stride */ 68 | u8 contig; /* Indicates if prp list is contig or not */ 69 | u8 bit_mask; /* bitmask added for unique ID creation */ 70 | struct nvme_prps prp_persist; /* PRP element in CQ */ 71 | }; 72 | 73 | /* 74 | * Structure definition for tracking the commands. 75 | */ 76 | struct cmd_track { 77 | u16 unique_id; /* driver assigned unique id for a particular cmd */ 78 | u16 persist_q_id; /* target Q ID used for Create/Delete Q's, never == 0 */ 79 | u8 opcode; /* command opcode as per spec */ 80 | struct list_head cmd_list_hd; /* link-list using the kernel list */ 81 | struct nvme_prps prp_nonpersist; /* Non persistent PRP entries */ 82 | }; 83 | 84 | /* 85 | * structure definition for SQ tracking parameters. 86 | */ 87 | struct nvme_trk_sq { 88 | void *vir_kern_addr; /* virtual kernal address using kmalloc */ 89 | dma_addr_t sq_dma_addr; /* dma mapped address using dma_alloc */ 90 | u32 size; /* len in bytes of allocated Q in kernel */ 91 | u32 __iomem *dbs; /* Door Bell stride */ 92 | u16 unique_cmd_id; /* unique counter for each comand in SQ */ 93 | u8 contig; /* Indicates if prp list is contig or not */ 94 | u8 bit_mask; /* bitmask added for unique ID creation */ 95 | struct nvme_prps prp_persist; /* PRP element in CQ */ 96 | struct list_head cmd_track_list;/* link-list head for cmd_track list */ 97 | }; 98 | 99 | /* 100 | * Structure with Metrics of CQ. Has a node which makes it work with 101 | * kernel linked lists. 102 | */ 103 | struct metrics_cq { 104 | struct list_head cq_list_hd; /* link-list using the kernel list */ 105 | struct nvme_gen_cq public_cq; /* parameters in nvme_gen_cq */ 106 | struct nvme_trk_cq private_cq; /* parameters in nvme_trk_cq */ 107 | }; 108 | 109 | /* 110 | * Structure with Metrics of SQ. Has a node which makes it work with 111 | * kernel linked lists. 112 | */ 113 | struct metrics_sq { 114 | struct list_head sq_list_hd; /* link-list using the kernel list */ 115 | struct nvme_gen_sq public_sq; /* parameters in nvme_gen_sq */ 116 | struct nvme_trk_sq private_sq; /* parameters in nvme_trk_sq */ 117 | }; 118 | 119 | /* 120 | * Structure with cq track parameters for interrupt related functionality. 121 | * Note:- Struct used for u16 for future additions 122 | */ 123 | struct irq_cq_track { 124 | struct list_head irq_cq_head; /* linked list head for irq CQ trk */ 125 | u16 cq_id; /* Completion Q id */ 126 | }; 127 | 128 | /* 129 | * Structure with parameters of IRQ vector, CQ track linked list and irq_no 130 | */ 131 | struct irq_track { 132 | struct list_head irq_list_hd; /* list head for irq track list */ 133 | struct list_head irq_cq_track; /* linked list of IRQ CQ nodes */ 134 | u16 irq_no; /* idx in list; always 0 based */ 135 | u32 int_vec; /* vec number; assigned by OS */ 136 | u8 isr_fired; /* flag to indicate if irq has fired */ 137 | u32 isr_count; /* total no. of times irq fired */ 138 | }; 139 | 140 | /* 141 | * structure for meta data per device parameters. 142 | */ 143 | struct metrics_meta_data { 144 | struct list_head meta_trk_list; 145 | struct dma_pool *meta_dmapool_ptr; 146 | u32 meta_buf_size; 147 | }; 148 | 149 | /* 150 | * Structure for meta data buffer allocations. 
151 | */ 152 | struct metrics_meta { 153 | struct list_head meta_list_hd; 154 | u32 meta_id; 155 | void * vir_kern_addr; 156 | dma_addr_t meta_dma_addr; 157 | }; 158 | 159 | /* 160 | * Structure for Nvme device private parameters. These parameters are 161 | * device specific and populated while the nvme device is being opened 162 | * or during probe. 163 | */ 164 | struct private_metrics_dev { 165 | struct pci_dev *pdev; /* Pointer to the PCIe device */ 166 | struct device *spcl_dev; /* Special device file */ 167 | struct nvme_ctrl_reg __iomem *ctrlr_regs; /* Pointer to reg space */ 168 | u8 __iomem *bar0; /* 64 bit BAR0 memory mapped ctrlr regs */ 169 | u8 __iomem *bar1; /* 64 bit BAR1 I/O mapped registers */ 170 | u8 __iomem *bar2; /* 64 bit BAR2 memory mapped MSIX table */ 171 | struct dma_pool *prp_page_pool; /* Mem for PRP List */ 172 | struct device *dmadev; /* Pointer to the dma device from pdev */ 173 | int minor_no; /* Minor no. of the device being used */ 174 | u8 open_flag; /* Allows device opening only once */ 175 | }; 176 | 177 | /* 178 | * Structure with nvme device related public and private parameters. 179 | */ 180 | struct nvme_device { 181 | struct private_metrics_dev private_dev; 182 | struct public_metrics_dev public_dev; 183 | }; 184 | 185 | /* 186 | * Work container which holds vectors and scheduled work queue item 187 | */ 188 | struct work_container { 189 | struct list_head wrk_list_hd; 190 | struct work_struct sched_wq; /* Work Struct item used in bh */ 191 | u16 irq_no; /* 0 based irq_no */ 192 | u32 int_vec; /* Interrupt vectors assigned by the kernel */ 193 | /* Pointer to the IRQ_processing strucutre of the device */ 194 | struct irq_processing *pirq_process; 195 | }; 196 | 197 | /* 198 | * Irq Processing structure to hold all the irq parameters per device. 199 | */ 200 | struct irq_processing { 201 | /* irq_track_mtx is used only while traversing/editing/deleting the 202 | * irq_track_list 203 | */ 204 | struct list_head irq_track_list; /* IRQ list; sorted by irq_no */ 205 | struct mutex irq_track_mtx; /* Mutex for access to irq_track_list */ 206 | 207 | /* To resolve contention for ISR's getting scheduled on different cores */ 208 | spinlock_t isr_spin_lock; 209 | 210 | /* Mask pointer for ISR (read both in ISR and BH) */ 211 | /* Pointer to MSI-X table offset or INTMS register */ 212 | u8 __iomem *mask_ptr; 213 | /* Will only be read by ISR and set once per SET/DISABLE of IRQ scheme */ 214 | u8 irq_type; /* Type of IRQ set */ 215 | 216 | /* Used by ISR to enqueue work to BH */ 217 | struct workqueue_struct *wq; /* Wq per device */ 218 | 219 | /* Used by BH to dequeue work and process on it */ 220 | /* Head of work_container's list */ 221 | /* Remains static throughout the lifetime of the interrupts */ 222 | struct list_head wrk_item_list; 223 | }; 224 | 225 | /* 226 | * Structure which defines the device list for all the data structures 227 | * that are defined. 
228 | */ 229 | struct metrics_device_list { 230 | struct list_head metrics_device_hd; /* metrics linked list head */ 231 | struct list_head metrics_cq_list; /* CQ linked list */ 232 | struct list_head metrics_sq_list; /* SQ linked list */ 233 | struct nvme_device *metrics_device; /* Pointer to this nvme device */ 234 | struct mutex metrics_mtx; /* Mutex for locking per device */ 235 | struct metrics_meta_data metrics_meta; /* Pointer to meta data buff */ 236 | struct irq_processing irq_process; /* IRQ processing structure */ 237 | }; 238 | 239 | /* Global linked list for the entire data structure for all devices. */ 240 | extern struct list_head metrics_dev_ll; 241 | 242 | #endif 243 | -------------------------------------------------------------------------------- /dnvme_interface.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _DNVME_INTERFACE_H_ 20 | #define _DNVME_INTERFACE_H_ 21 | 22 | /** 23 | * API version coordinates the tnvme binary to the dnvme binary. If the dnvme 24 | * interface changes at all, then this file will be modified and thus this 25 | * revision will be bumped up. Only when this file changes does this version 26 | * change. The dnvme driver version is registered by the contents of file 27 | * version.h. Although version.h will change whenever API_VERSION changes, the 28 | * API_VERSION won't necessarily change each time version.h changes. Rather 29 | * version.h changes whenever a new release of the driver logic has changed. 30 | * 31 | * Thus when this API changes, then tnvme will have to be recompiled against 32 | * this file to adhere to the new modification and requirements of the API. 33 | * tnvme refuses to execute when it detects a API version mismatch to dnvme. 34 | */ 35 | #define API_VERSION 0x00010402 /* 1.4.2 */ 36 | 37 | 38 | /** 39 | * These are the enum types used for branching to 40 | * required offset as specified by either PCI space 41 | * or a NVME space enum value defined here. 42 | */ 43 | enum nvme_io_space { 44 | NVMEIO_PCI_HDR, 45 | NVMEIO_BAR01, 46 | NVMEIO_FENCE /* always must be the last element */ 47 | }; 48 | 49 | /** 50 | * These are the enum types used for specifying the 51 | * required access width of registers or memory space. 52 | */ 53 | enum nvme_acc_type { 54 | BYTE_LEN, 55 | WORD_LEN, 56 | DWORD_LEN, 57 | QUAD_LEN, 58 | ACC_FENCE 59 | }; 60 | 61 | /** 62 | * These enums define the type of interrupt scheme that the overall 63 | * system uses. 64 | */ 65 | enum nvme_irq_type { 66 | INT_MSI_SINGLE, 67 | INT_MSI_MULTI, 68 | INT_MSIX, 69 | INT_NONE, 70 | INT_FENCE /* Last item to guard from loop run-overs */ 71 | }; 72 | 73 | /** 74 | * enums to define the q types. 
75 | */ 76 | enum nvme_q_type { 77 | ADMIN_SQ, 78 | ADMIN_CQ, 79 | }; 80 | 81 | /** 82 | * This struct is the basic structure which has important 83 | * parameter for the generic read and write function to seek the correct 84 | * offset and length while reading or writing to nvme card. 85 | */ 86 | struct rw_generic { 87 | enum nvme_io_space type; 88 | uint32_t offset; 89 | uint32_t nBytes; 90 | enum nvme_acc_type acc_type; 91 | uint8_t *buffer; 92 | }; 93 | 94 | /** 95 | * These enums are used while enabling or disabling or completely disabling the 96 | * controller. 97 | */ 98 | enum nvme_state { 99 | ST_ENABLE, /* Set the NVME Controller to enable state */ 100 | ST_DISABLE, /* Controller reset without affecting Admin Q */ 101 | ST_DISABLE_COMPLETELY /* Completely destroy even Admin Q's */ 102 | }; 103 | 104 | /* Enum specifying bitmask passed on to IOCTL_SEND_64B */ 105 | enum send_64b_bitmask { 106 | MASK_PRP1_PAGE = 1, /* PRP1 can point to a physical page */ 107 | MASK_PRP1_LIST = 2, /* PRP1 can point to a PRP list */ 108 | MASK_PRP2_PAGE = 4, /* PRP2 can point to a physical page */ 109 | MASK_PRP2_LIST = 8, /* PRP2 can point to a PRP list */ 110 | MASK_MPTR = 16, /* MPTR may be modified */ 111 | }; 112 | 113 | /** 114 | * This struct is the basic structure which has important parameter for 115 | * sending 64 Bytes command to both admin and IO SQ's and CQ's 116 | */ 117 | struct nvme_64b_send { 118 | /* BIT MASK for PRP1,PRP2 and metadata pointer */ 119 | enum send_64b_bitmask bit_mask; 120 | /* Data buffer or discontiguous CQ/SQ's user space address */ 121 | uint8_t const *data_buf_ptr; 122 | /* 0=none; 1=to_device, 2=from_device, 3=bidirectional, others illegal */ 123 | uint8_t data_dir; 124 | 125 | uint8_t *cmd_buf_ptr; /* Virtual Address pointer to 64B command */ 126 | uint32_t meta_buf_id; /* Meta buffer ID when MASK_MPTR is set */ 127 | uint32_t data_buf_size; /* Size of Data Buffer */ 128 | uint16_t unique_id; /* Value returned back to user space */ 129 | uint16_t q_id; /* Queue ID where the cmd_buf command should go */ 130 | }; 131 | 132 | /** 133 | * This structure defines the overall interrupt scheme used and 134 | * defined parameters to specify the driver version and application 135 | * version. A verification is performed by driver and application to 136 | * check if these versions match. 137 | */ 138 | struct metrics_driver { 139 | uint32_t driver_version; /* dnvme driver version */ 140 | uint32_t api_version; /* tnvme test application version */ 141 | }; 142 | 143 | /** 144 | * This structure defines the parameters required for creating any CQ. 145 | * It supports both Admin CQ and IO CQ. 146 | */ 147 | struct nvme_gen_cq { 148 | uint16_t q_id; /* even admin q's are supported here q_id = 0 */ 149 | uint16_t tail_ptr; /* The value calculated for respective tail_ptr */ 150 | uint16_t head_ptr; /* Actual value in CQxTDBL for this q_id */ 151 | uint32_t elements; /* pass the actual elements in this q */ 152 | uint8_t irq_enabled; /* sets when the irq scheme is active */ 153 | uint16_t irq_no; /* idx in list; always 0 based */ 154 | uint8_t pbit_new_entry; /* Indicates if a new entry is in CQ */ 155 | }; 156 | 157 | /** 158 | * This structure defines the parameters required for creating any SQ. 159 | * It supports both Admin SQ and IO SQ. 
160 | */ 161 | struct nvme_gen_sq { 162 | uint16_t sq_id; /* Admin SQ are supported with q_id = 0 */ 163 | uint16_t cq_id; /* The CQ ID to which this SQ is associated */ 164 | uint16_t tail_ptr; /* Actual value in SQxTDBL for this SQ id */ 165 | uint16_t tail_ptr_virt; /* future SQxTDBL write value based on no. 166 | of new cmds copied to SQ */ 167 | uint16_t head_ptr; /* Calculate this value based on cmds reaped */ 168 | uint32_t elements; /* total number of elements in this Q */ 169 | }; 170 | 171 | /** 172 | * enum for metrics type. These enums are used when returning the device 173 | * metrics. 174 | */ 175 | enum metrics_type { 176 | METRICS_CQ, /* Completion Q Metrics */ 177 | METRICS_SQ, /* Submission Q Metrics */ 178 | MTERICS_FENCE, /* Always last item */ 179 | }; 180 | 181 | /** 182 | * Interface structure for returning the Q metrics. The buffer is where the 183 | * data is stored for the user to copy from. This assumes that the user will 184 | * provide correct buffer space to store the required metrics. 185 | */ 186 | struct nvme_get_q_metrics { 187 | uint16_t q_id; /* Pass the Q id for which metrics is desired */ 188 | enum metrics_type type; /* SQ or CQ metrics desired */ 189 | uint32_t nBytes; /* Number of bytes to copy into buffer */ 190 | uint8_t * buffer; /* to store the required data */ 191 | }; 192 | 193 | /** 194 | * Interface structure for creating Admin Q's. The elements is a 1 based value. 195 | */ 196 | struct nvme_create_admn_q { 197 | enum nvme_q_type type; /* Admin q type, ASQ or ACQ */ 198 | uint32_t elements; /* No. of elements of size 64 B */ 199 | }; 200 | 201 | /** 202 | * Interface structure for allocating SQ memory. The elements are 1 based 203 | * values and the CC.IOSQES is 2^n based. 204 | */ 205 | struct nvme_prep_sq { 206 | uint32_t elements; /* Total number of entries that need kernel mem */ 207 | uint16_t sq_id; /* The user specified unique SQ ID */ 208 | uint16_t cq_id; /* Existing or non-existing CQ ID */ 209 | uint8_t contig; /* Indicates if SQ is contig or not, 1 = contig */ 210 | }; 211 | 212 | /** 213 | * Interface structure for allocating CQ memory. The elements are 1 based 214 | * values and the CC.IOSQES is 2^n based. 215 | */ 216 | struct nvme_prep_cq { 217 | uint32_t elements; /* Total number of entries that need kernal mem */ 218 | uint16_t cq_id; /* Existing or non-existing CQ ID. */ 219 | uint8_t contig; /* Indicates if SQ is contig or not, 1 = contig */ 220 | }; 221 | 222 | /** 223 | * Interface structure for getting the metrics structure into a user file. 224 | * The filename and location are specified thought file_name parameter. 225 | */ 226 | struct nvme_file { 227 | uint16_t flen; /* Length of file name, it is not the total bytes */ 228 | const char *file_name; /* location and file name to copy metrics */ 229 | }; 230 | 231 | /** 232 | * Interface structure for reap inquiry ioctl. It works well for both admin 233 | * and IO Q's. 234 | */ 235 | struct nvme_reap_inquiry { 236 | uint16_t q_id; /* CQ ID to reap commands for */ 237 | uint32_t num_remaining; /* return no of cmds waiting to be reaped */ 238 | 239 | /* no of times isr was fired which is associated with cq reaped on */ 240 | uint32_t isr_count; 241 | }; 242 | 243 | /** 244 | * Interface structure for reap ioctl. Admin Q and all IO Q's are supported. 245 | */ 246 | struct nvme_reap { 247 | uint16_t q_id; /* CQ ID to reap commands for */ 248 | uint32_t elements; /* Get the no. of elements to be reaped */ 249 | uint32_t num_remaining; /* return no. 
of cmds waiting for this cq */ 250 | uint32_t num_reaped; /* Return no. of elements reaped */ 251 | uint8_t *buffer; /* Buffer to copy reaped data */ 252 | /* no of times isr was fired which is associated with cq reaped on */ 253 | uint32_t isr_count; 254 | uint32_t size; /* Size of buffer to fill data to */ 255 | }; 256 | 257 | /** 258 | * Format of general purpose nvme command DW0-DW9 259 | */ 260 | struct nvme_gen_cmd { 261 | uint8_t opcode; 262 | uint8_t flags; 263 | uint16_t command_id; 264 | uint32_t nsid; 265 | uint64_t rsvd2; 266 | uint64_t metadata; 267 | uint64_t prp1; 268 | uint64_t prp2; 269 | }; 270 | 271 | /** 272 | * Specific structure for Create CQ command 273 | */ 274 | struct nvme_create_cq { 275 | uint8_t opcode; 276 | uint8_t flags; 277 | uint16_t command_id; 278 | uint32_t rsvd1[5]; 279 | uint64_t prp1; 280 | uint64_t rsvd8; 281 | uint16_t cqid; 282 | uint16_t qsize; 283 | uint16_t cq_flags; 284 | uint16_t irq_no; 285 | uint32_t rsvd12[4]; 286 | }; 287 | 288 | /** 289 | * Specific structure for Create SQ command 290 | */ 291 | struct nvme_create_sq { 292 | uint8_t opcode; 293 | uint8_t flags; 294 | uint16_t command_id; 295 | uint32_t rsvd1[5]; 296 | uint64_t prp1; 297 | uint64_t rsvd8; 298 | uint16_t sqid; 299 | uint16_t qsize; 300 | uint16_t sq_flags; 301 | uint16_t cqid; 302 | uint32_t rsvd12[4]; 303 | }; 304 | 305 | /** 306 | * Specific structure for Delete Q command 307 | */ 308 | struct nvme_del_q { 309 | uint8_t opcode; 310 | uint8_t flags; 311 | uint16_t command_id; 312 | uint32_t rsvd1[9]; 313 | uint16_t qid; 314 | uint16_t rsvd10; 315 | uint32_t rsvd11[5]; 316 | }; 317 | 318 | /** 319 | * Interface structure for setting the desired IRQ type. 320 | * works for all type of interrupt scheme expect PIN based. 321 | */ 322 | struct interrupts { 323 | uint16_t num_irqs; /* total no. of irqs req by tnvme */ 324 | enum nvme_irq_type irq_type; /* Active IRQ scheme for this dev */ 325 | }; 326 | 327 | /** 328 | * Public interface for the nvme device parameters. These parameters are 329 | * copied to user on request through an IOCTL interface GET_DEVICE_METRICS. 330 | */ 331 | struct public_metrics_dev { 332 | struct interrupts irq_active; /* Active IRQ state of the nvme device */ 333 | }; 334 | 335 | /** 336 | * Describes bits/bytes within an existing SQ indicating a new value for any 337 | * cmd dword. This is only allowed for those cmds for which the doorbell hasn't 338 | * already rung. 339 | */ 340 | struct backdoor_inject { 341 | uint16_t q_id; /* SQ ID where the cmd is residing */ 342 | uint16_t cmd_ptr; /* [0 -> (CreateIOSQ.DW10.SIZE-1)] which cmd in SQ? */ 343 | uint8_t dword; /* [0 -> (CC.IOSQES-1)] which DWORD in the cmd */ 344 | uint32_t value_mask; /* Bitmask indicates which 'value' bits to use */ 345 | uint32_t value; /* Extract spec'd bits; overwrite those exact bits */ 346 | }; 347 | 348 | /** 349 | * Interface structure for marking a unique string to the system log. 350 | */ 351 | struct nvme_logstr { 352 | uint16_t slen; /* sizeof(log_str) */ 353 | const char *log_str; /* NULl terminated ASCII logging statement */ 354 | }; 355 | 356 | 357 | #endif 358 | -------------------------------------------------------------------------------- /dnvme_ioctls.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 
4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _DNVME_IOCTLS_H_ 20 | #define _DNVME_IOCTLS_H_ 21 | 22 | /** 23 | * Enumeration types which provide common interface between kernel driver and 24 | * user app layer ioctl functions. dnvme is using letter 'N' to designate the 25 | * first param to _IO macros because of "Nvme" designation. Since 26 | * drivers/usb/scanner.h is officially designated to use this letter, but 27 | * only for the 2nd param values ranging from 0x00-0x1f, dnmve will therefore 28 | * use higher values for the 2nd param as indicated by the value of the 29 | * following enum. 30 | */ 31 | enum { 32 | NVME_READ_GENERIC = 0xB0, /** 20 | #include 21 | #include 22 | #include 23 | 24 | #include "dnvme_reg.h" 25 | #include "definitions.h" 26 | #include "sysdnvme.h" 27 | 28 | #ifdef QEMU 29 | /* QEMU doesn't suppport writeq() or readq(), thus make our own */ 30 | inline u64 READQ(const volatile void __iomem *addr) 31 | { 32 | return (((u64)readl(addr + 4) << 32) | (u64)readl(addr)); 33 | } 34 | inline void WRITEQ(u64 val, volatile void __iomem *addr) 35 | { 36 | writel(val, addr); 37 | writel(val >> 32, addr + 4); 38 | } 39 | 40 | #else 41 | 42 | inline void WRITEQ(u64 val, volatile void __iomem *addr) 43 | { 44 | writeq(val, addr); 45 | } 46 | inline u64 READQ(const volatile void __iomem *addr) 47 | { 48 | return readq(addr); 49 | } 50 | #endif 51 | 52 | /* 53 | * read_nvme_reg_generic - Function to read the controller registers located in 54 | * the MLBAR/MUBAR (PCI BAR 0 and 1) that are mapped to memory area which 55 | * supports in-order access. 56 | */ 57 | int read_nvme_reg_generic(u8 __iomem *bar0, 58 | u8 *udata, u32 nbytes, u32 offset, enum nvme_acc_type acc_type) 59 | { 60 | u32 index = 0; 61 | u32 u32data; 62 | u64 u64data; 63 | u16 u16data; 64 | u8 u8data; 65 | 66 | bar0 += offset; 67 | 68 | /* loop until user requested nbytes are read. */ 69 | for (index = 0; index < nbytes; ) { 70 | 71 | /* Read data from NVME space */ 72 | if (acc_type == BYTE_LEN) { 73 | u8data = readb(bar0); 74 | 75 | /* Copy data to user buffer. */ 76 | memcpy((u8 *)&udata[index], &u8data, sizeof(u8)); 77 | LOG_DBG("NVME Read byte at 0x%llX:0x%0X", (u64)bar0, u8data); 78 | 79 | bar0 += 1; 80 | index += 1; 81 | 82 | } else if (acc_type == WORD_LEN) { 83 | u16data = readw(bar0); 84 | 85 | /* Copy data to user buffer. */ 86 | memcpy((u8 *)&udata[index], &u16data, sizeof(u16)); 87 | LOG_DBG("NVME Read WORD at 0x%llX:0x%X", (u64)bar0, u16data); 88 | 89 | bar0 += 2; 90 | index += 2; 91 | 92 | } else if (acc_type == DWORD_LEN) { 93 | u32data = readl(bar0); 94 | 95 | /* Copy data to user buffer. 
*/ 96 | memcpy((u8 *)&udata[index], &u32data, sizeof(u32)); 97 | LOG_DBG("NVME Read DWORD at 0x%llX:0x%X", (u64)bar0, u32data); 98 | 99 | bar0 += 4; 100 | index += 4; 101 | 102 | } else if (acc_type == QUAD_LEN) { 103 | u64data = READQ(bar0); 104 | 105 | /* Copy data to user buffer. */ 106 | memcpy((u8 *)&udata[index], &u64data, sizeof(u64)); 107 | LOG_DBG("NVME Read QUAD 0x%llX:0x%llX", 108 | (u64)bar0, u64data); 109 | 110 | bar0 += 8; 111 | index += 8; 112 | 113 | } else { 114 | LOG_ERR("Use only BYTE/WORD/DWORD/QUAD access type"); 115 | return -EINVAL; 116 | } 117 | } 118 | 119 | return 0; 120 | } 121 | 122 | /* 123 | * write_nvme_reg_generic - Function to write the controller registers 124 | * located in the MLBAR/MUBAR (PCI BAR 0 and 1) that are mapped to 125 | * memory area which supports in-order access. 126 | */ 127 | int write_nvme_reg_generic(u8 __iomem *bar0, 128 | u8 *udata, u32 nbytes, u32 offset, enum nvme_acc_type acc_type) 129 | { 130 | u32 index = 0; 131 | u32 u32data; 132 | u64 u64data; 133 | u16 u16data; 134 | u8 u8data; 135 | 136 | bar0 += offset; 137 | 138 | /* loop until user requested nbytes are written. */ 139 | for (index = 0; index < nbytes;) { 140 | 141 | /* Check the acc_type and do write as per the access type */ 142 | if (acc_type == BYTE_LEN) { 143 | memcpy((u8 *)&u8data, &udata[index], sizeof(u8)); 144 | writeb(u8data, bar0); 145 | LOG_DBG("NVME Writing BYTE at Addr:Val::0x%llX:0x%X", 146 | (u64)bar0, u8data); 147 | 148 | bar0 += 1; 149 | index += 1; 150 | 151 | } else if (acc_type == WORD_LEN) { 152 | memcpy((u8 *)&u16data, &udata[index], sizeof(u16)); 153 | writew(u16data, bar0); 154 | LOG_DBG("NVME Writing WORD at Addr:Val::0x%llX:0x%X", 155 | (u64)bar0, u16data); 156 | 157 | bar0 += 2; 158 | index += 2; 159 | 160 | } else if (acc_type == DWORD_LEN) { 161 | memcpy((u8 *)&u32data, &udata[index], sizeof(u32)); 162 | writel(u32data, bar0); 163 | LOG_DBG("NVME Writing DWORD at Addr:Val::0x%llX:0x%X", 164 | (u64)bar0, u32data); 165 | 166 | bar0 += 4; 167 | index += 4; 168 | 169 | } else if (acc_type == QUAD_LEN) { 170 | memcpy((u8 *)&u64data, &udata[index], sizeof(u64)); 171 | WRITEQ(u64data, bar0); 172 | LOG_DBG("NVME Writing QUAD at Addr:Val::0x%llX:0x%llX", 173 | (u64)bar0, u64data); 174 | 175 | bar0 += 8; 176 | index += 8; 177 | 178 | } else { 179 | LOG_ERR("use only BYTE/WORD/DWORD/QUAD"); 180 | return -EINVAL; 181 | } 182 | } 183 | 184 | return 0; 185 | } 186 | -------------------------------------------------------------------------------- /dnvme_reg.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 
17 | */ 18 | 19 | #ifndef _DNVME_REG_H_ 20 | #define _DNVME_REG_H_ 21 | 22 | #include "dnvme_interface.h" 23 | 24 | 25 | /** 26 | * nvme_ctrl_reg defines the register space for the 27 | * nvme controller registers as defined in NVME Spec 1.0b. 28 | */ 29 | struct nvme_ctrl_reg { 30 | u64 cap; /* Controller Capabilities */ 31 | u32 vs; /* Version */ 32 | u32 intms; /* Interrupt Mask Set */ 33 | u32 intmc; /* Interrupt Mask Clear */ 34 | u32 cc; /* Controller Configuration */ 35 | u32 rsvd1; /* Reserved */ 36 | u32 csts; /* Controller Status */ 37 | u32 rsvd2; /* Reserved */ 38 | u32 aqa; /* Admin Queue Attributes */ 39 | u64 asq; /* Admin SQ Base Address */ 40 | u64 acq; /* Admin CQ Base Address */ 41 | }; 42 | 43 | 44 | #define REGMASK_CAP_CQR (1 << 16) 45 | 46 | 47 | /** 48 | * read_nvme_reg_generic function is a generic function which 49 | * reads data from the controller registers of the nvme with 50 | * user specified offset and bytes. Copies data back to udata 51 | * pointer which points to user space buffer. 52 | * 53 | */ 54 | int read_nvme_reg_generic(u8 __iomem *bar0, 55 | u8 *udata, u32 nbytes, u32 offset, enum nvme_acc_type acc_type); 56 | 57 | /** 58 | * write_nvme_reg_generic function is a generic function which 59 | * writes data to the controller registers of the nvme with 60 | * user specified offset and bytes. 61 | */ 62 | int write_nvme_reg_generic(u8 __iomem *bar0, 63 | u8 *udata, u32 nbytes, u32 offset, enum nvme_acc_type acc_type); 64 | 65 | #endif 66 | -------------------------------------------------------------------------------- /dnvme_sts_chk.c: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #include 20 | #include 21 | #include 22 | #include 23 | 24 | #include "dnvme_sts_chk.h" 25 | #include "dnvme_queue.h" 26 | #include "definitions.h" 27 | #include "sysdnvme.h" 28 | #include "dnvme_reg.h" 29 | 30 | 31 | /* 32 | * device_status_pci - PCI device status check function 33 | * which checks error registers and set kernel 34 | * alert if a error is detected. 
35 | */ 36 | int device_status_pci(u16 device_data) 37 | { 38 | int status = SUCCESS; 39 | 40 | 41 | LOG_DBG("PCI Device Status (STS) Data = 0x%X", device_data); 42 | LOG_DBG("Checking all the PCI register error bits"); 43 | if (device_data & DEV_ERR_MASK) { 44 | status = FAIL; 45 | 46 | if (device_data & DPE) { 47 | LOG_ERR("Device Status - DPE Set"); 48 | LOG_ERR("Detected Data parity Error"); 49 | } 50 | if (device_data & DPD) { 51 | LOG_ERR("Device Status - DPD Set"); 52 | LOG_ERR("Detected Master Data Parity Error"); 53 | } 54 | if (device_data & RMA) { 55 | LOG_ERR("Device Status - RMA Set"); 56 | LOG_ERR("Received Master Abort..."); 57 | } 58 | if (device_data & RTA) { 59 | LOG_ERR("Device Status - RTA Set"); 60 | LOG_ERR("Received Target Abort..."); 61 | } 62 | } 63 | 64 | if ((device_data & CL_MASK) != CL_MASK) { 65 | LOG_ERR("In STS, the CL bit indicates empty Capabilities list."); 66 | LOG_ERR("The controller should support PCI Power Mnmt as a min."); 67 | status = FAIL; 68 | } 69 | 70 | return status; 71 | } 72 | 73 | 74 | /* 75 | * nvme_controller_status - This function checks the controller status 76 | */ 77 | int nvme_controller_status(struct nvme_ctrl_reg __iomem *ctrlr_regs) 78 | { 79 | int status; 80 | u32 u32data; 81 | u32 tmp; 82 | 83 | LOG_DBG("Checking the NVME Controller Status (CSTS)..."); 84 | u32data = readl(&ctrlr_regs->csts); 85 | tmp = u32data; 86 | 87 | LOG_DBG("NVME Controller Status CSTS = 0x%X", u32data); 88 | u32data &= NVME_CSTS_RSVD; 89 | 90 | status = SUCCESS; 91 | if ((u32data != NVME_CSTS_RDY) || (u32data == NVME_CSTS_CFS)) { 92 | if ((u32data & NVME_CSTS_RDY) == 0x0) { 93 | LOG_DBG("NVME Controller is not ready (RDY)..."); 94 | } 95 | if ((u32data & NVME_CSTS_CFS) == NVME_CSTS_CFS) { 96 | status = FAIL; 97 | LOG_ERR("NVME Controller Fatal Status (CFS) is set..."); 98 | } 99 | } else { 100 | LOG_DBG("NVME Controller Status (CSTS) Success"); 101 | } 102 | 103 | u32data = tmp; 104 | u32data &= NVME_CSTS_SHST_MASK; 105 | 106 | /* Right shift by 2 bits. */ 107 | u32data >>= 2; 108 | 109 | LOG_DBG("The Shutdown Status of the NVME Controller (SHST):"); 110 | switch (u32data) { 111 | case NVME_CSTS_NRML_OPER: 112 | LOG_DBG("No Shutdown requested"); 113 | break; 114 | case NVME_CSTS_SHT_OCC: 115 | LOG_DBG("Shutdown Processing occurring"); 116 | break; 117 | case NVME_CSTS_SHT_COMP: 118 | LOG_DBG("Shutdown Process Complete"); 119 | break; 120 | case NVME_CSTS_SHT_RSVD: 121 | LOG_DBG("Reserved Bits set"); 122 | break; 123 | } 124 | 125 | return status; 126 | } 127 | 128 | 129 | /* 130 | * device_status_next - This function will check if the NVME device supports 131 | * NEXT capability item in the linked list. If the device supports the NEXT 132 | * capability then it goes into each of the status registers and checks the 133 | * device current state. It reports back to the caller either SUCCESS or FAIL. 134 | * Details of the status are printed as kernel messages. 135 | */ 136 | int device_status_next(struct pci_dev *pdev) 137 | { 138 | int status = SUCCESS; 139 | int ret_code = 0; 140 | u16 pci_offset = 0; 141 | u32 cap_aer = 0; 142 | u16 next_item = 1; 143 | u16 capability = 0; 144 | u16 data = 0; 145 | u8 power_management_feature = 0; 146 | 147 | 148 | LOG_DBG("Checking NEXT Capabilities of the NVME Controller"); 149 | LOG_DBG("Checks if PMCS is supported as a minimum"); 150 | 151 | /* 152 | * Check if CAP pointer points to next available 153 | * linked list registers in the PCI Header. 
154 | */ 155 | ret_code = pci_read_config_byte(pdev, CAP_REG, (u8 *)&pci_offset); 156 | if (ret_code < 0) { 157 | LOG_ERR("pci_read_config failed in driver error check"); 158 | } 159 | LOG_DBG("CAP_REG Contents = 0x%X", pci_offset); 160 | 161 | /* 162 | * Read 16 bits of data from the Next pointer as PMS bit 163 | * which is must */ 164 | ret_code = pci_read_config_word(pdev, pci_offset, (u16 *)&cap_aer); 165 | if (ret_code < 0) { 166 | LOG_ERR("pci_read_config failed in driver error check"); 167 | } 168 | cap_aer = (u16)(cap_aer & LOWER_16BITS); 169 | 170 | /* 171 | * Enter into loop if cap_aer has non zero value. 172 | * next_item is set to 1 for entering this loop for first time. 173 | */ 174 | while (cap_aer != 0 || next_item != 0) { 175 | 176 | LOG_DBG("CAP Value 16/32 Bits = 0x%X", cap_aer); 177 | /* 178 | * AER error mask is used for checking if it is not AER type. 179 | * Right Shift by 16 bits. 180 | */ 181 | if (((cap_aer & AER_ERR_MASK) >> 16) != NVME_AER_CVER) { 182 | 183 | capability = (u16)(cap_aer & LOWER_16BITS); 184 | 185 | /* Mask out higher bits */ 186 | next_item = (capability & NEXT_MASK) >> 8; 187 | 188 | /* Get next item offset */ 189 | cap_aer = 0; /* Reset cap_aer */ 190 | LOG_DBG("Capability Value = 0x%X", capability); 191 | 192 | /* Switch based on which ID Capability indicates */ 193 | switch (capability & ~NEXT_MASK) { 194 | case PMCAP_ID: 195 | LOG_DBG("PCI Pwr Mgmt is Supported (PMCS Exists)"); 196 | LOG_DBG("Checking PCI Pwr Mgmt Capabilities Status"); 197 | 198 | /* Set power management is supported */ 199 | power_management_feature = 1; 200 | 201 | if ((0x0 != pci_offset) && (MAX_PCI_HDR < pci_offset)) { 202 | /* Compute the PMCS offset from CAP data */ 203 | pci_offset = pci_offset + PMCS; 204 | 205 | ret_code = pci_read_config_word(pdev, pci_offset, &data); 206 | 207 | if (ret_code < 0) { 208 | LOG_ERR("pci_read_config failed"); 209 | } 210 | 211 | status = (status == SUCCESS) ? 212 | device_status_pmcs(data) : FAIL; 213 | } else { 214 | LOG_DBG("Invalid offset = 0x%x", pci_offset); 215 | } 216 | break; 217 | case MSICAP_ID: 218 | LOG_DBG("Checking MSI Capabilities"); 219 | status = (status == SUCCESS) ? 220 | device_status_msicap(pdev, pci_offset) : FAIL; 221 | break; 222 | case MSIXCAP_ID: 223 | LOG_DBG("Checking MSI-X Capabilities"); 224 | status = (status == SUCCESS) ? 225 | device_status_msixcap(pdev, pci_offset) : FAIL; 226 | break; 227 | case PXCAP_ID: 228 | LOG_DBG("Checking PCI Express Capabilities"); 229 | status = (status == SUCCESS) ? 230 | device_status_pxcap(pdev, pci_offset) : FAIL; 231 | break; 232 | default: 233 | LOG_ERR("Next Device Status check in default case!!!"); 234 | break; 235 | } /* end of switch case */ 236 | 237 | } else { 238 | LOG_DBG("All Advanced Error Reporting.."); 239 | 240 | /* 241 | * Advanced Error capabilty function. 242 | * Right shift by 20 bits to get the next item. 
243 | */ 244 | next_item = (cap_aer >> 20) & MAX_PCI_EXPRESS_CFG; 245 | 246 | switch (cap_aer & AER_ID_MASK) { 247 | case AER_CAP_ID: 248 | LOG_DBG("Checking Advanced Error Reporting Capability"); 249 | status = device_status_aercap(pdev, pci_offset); 250 | break; 251 | } 252 | 253 | /* Check if more items are there*/ 254 | if (next_item == 0) { 255 | LOG_DBG("No NEXT item in the list Exiting..1"); 256 | break; 257 | } 258 | } /* end of else if cap_aer */ 259 | 260 | /* If item exists then read else break here */ 261 | if (next_item != 0 && pci_offset != 0) { 262 | /* Get the next item in the linked lint */ 263 | ret_code = pci_read_config_word(pdev, next_item, &pci_offset); 264 | if (ret_code < 0) { 265 | LOG_ERR("pci_read_config failed in driver error check"); 266 | } 267 | /* Read 32 bits as next item could be AER cap */ 268 | ret_code = pci_read_config_dword(pdev, pci_offset, &cap_aer); 269 | if (ret_code < 0) { 270 | LOG_ERR("pci_read_config failed in driver error check"); 271 | } 272 | } else { 273 | LOG_DBG("No NEXT item in the list exiting...2"); 274 | break; 275 | } 276 | } /* end of while loop */ 277 | 278 | /* Check if PCI Power Management cap is supported as a min */ 279 | if (power_management_feature == 0) { 280 | LOG_ERR("The controller should support PCI Pwr management as a min"); 281 | LOG_ERR("PCI Power Management Capability is not Supported."); 282 | status = FAIL; 283 | } 284 | 285 | return status; 286 | } 287 | 288 | 289 | /* 290 | * device_status_pmcs: This function checks the pci power management 291 | * control and status. 292 | * PMCAP + 4h --> PMCS. 293 | */ 294 | int device_status_pmcs(u16 device_data) 295 | { 296 | LOG_DBG("PCI Power Management Control and Status = %x", device_data); 297 | return SUCCESS; 298 | } 299 | 300 | 301 | /* 302 | * device_status_msicap: This function checks the Message Signaled Interrupt 303 | * control and status bits. 304 | */ 305 | int device_status_msicap(struct pci_dev *pdev, u16 device_data) 306 | { 307 | LOG_DBG("PCI MSI Cap= %x", device_data); 308 | return SUCCESS; 309 | } 310 | 311 | 312 | /* 313 | * device_status_msixcap: This func checks the Message Signaled Interrupt - X 314 | * control and status bits. 315 | */ 316 | int device_status_msixcap(struct pci_dev *pdev, u16 device_data) 317 | { 318 | LOG_DBG("PCI MSI-X Cap= %x", device_data); 319 | return SUCCESS; 320 | } 321 | 322 | 323 | /* 324 | * device_status_pxcap: This func checks the PCI Express 325 | * Capability status register 326 | */ 327 | int device_status_pxcap(struct pci_dev *pdev, u16 base_offset) 328 | { 329 | int status = SUCCESS; 330 | u16 pxcap_sts_reg; 331 | int ret_code; 332 | u16 offset; 333 | 334 | 335 | LOG_DBG("Offset Value of PXCAP= %x", base_offset); 336 | LOG_DBG("Checking the PCI Express Device Status (PXDS)..."); 337 | LOG_DBG("Offset PXCAP + Ah: PXDS"); 338 | 339 | /* Compute the PXDS offset from the PXCAP */ 340 | offset = base_offset + NVME_PXCAP_PXDS; 341 | LOG_DBG("PXDS Offset = 0x%X", offset); 342 | 343 | /* Read the device status register value into pxcap_sts_reg */ 344 | ret_code = pci_read_config_word(pdev, offset, &pxcap_sts_reg); 345 | if (ret_code < 0) { 346 | LOG_ERR("pci_read_config failed in driver error check"); 347 | } 348 | LOG_DBG("PXDS Device status register data = 0x%X\n", pxcap_sts_reg); 349 | 350 | /* Mask off the reserved bits. 
*/ 351 | pxcap_sts_reg &= ~(NVME_PXDS_RSVD); 352 | 353 | /* Check in device status register if any error bit is set */ 354 | /* 355 | * Each Bit position has different status indication as indicated 356 | * in NVME Spec 1.0b. 357 | */ 358 | if (pxcap_sts_reg) { 359 | /* Check if Correctable error is detected */ 360 | if (pxcap_sts_reg & NVME_PXDS_CED) { 361 | status = FAIL; 362 | LOG_ERR("Correctable Error Detected (CED) in PXDS"); 363 | } 364 | /* Check if Non fatal error is detected */ 365 | if ((pxcap_sts_reg & NVME_PXDS_NFED) >> 1) { 366 | status = FAIL; 367 | LOG_ERR("Non-Fatal Error Detected (NFED) in PXDS"); 368 | } 369 | /* Check if fatal error is detected */ 370 | if ((pxcap_sts_reg & NVME_PXDS_FED) >> 2) { 371 | status = FAIL; 372 | LOG_ERR("Fatal Error Detected (FED) in PXDS"); 373 | } 374 | /* Check if Unsupported Request detected */ 375 | if ((pxcap_sts_reg & NVME_PXDS_URD) >> 3) { 376 | status = FAIL; 377 | LOG_ERR("Unsupported Request Detected (URD) in PXDS"); 378 | } 379 | /* Check if AUX Power detected */ 380 | if ((pxcap_sts_reg & NVME_PXDS_APD) >> 4) { 381 | LOG_DBG("AUX POWER Detected (APD) in PXDS"); 382 | } 383 | /* Check if Transactions Pending */ 384 | if ((pxcap_sts_reg & NVME_PXDS_TP) >> 5) { 385 | LOG_DBG("Transactions Pending (TP) in PXDS"); 386 | } 387 | } 388 | return status; 389 | } 390 | 391 | /* 392 | * device_status_aerap: This func checks the status register of 393 | * Advanced error reporting capability of PCI express device. 394 | */ 395 | int device_status_aercap(struct pci_dev *pdev, u16 base_offset) 396 | { 397 | int status = SUCCESS; 398 | u16 offset; /* Offset 16 bit for PCIE space */ 399 | u32 u32aer_sts = 0; /* AER Cap Status data */ 400 | u32 u32aer_msk = 0; /* AER Mask bits data */ 401 | int ret_code = 0; /* Return code for pci reads */ 402 | 403 | LOG_DBG("Offset in AER CAP= 0x%X", base_offset); 404 | LOG_DBG("Checking Advanced Err Capability Status Regs (AERUCES and AERCS)"); 405 | 406 | /* Compute the offset of AER Uncorrectable error status */ 407 | offset = base_offset + NVME_AERUCES_OFFSET; 408 | 409 | /* Read the aer status bits */ 410 | ret_code = pci_read_config_dword(pdev, offset, &u32aer_sts); 411 | if (ret_code < 0) { 412 | LOG_ERR("pci_read_config failed in driver error check"); 413 | } 414 | /* Mask the reserved bits */ 415 | u32aer_sts &= ~NVME_AERUCES_RSVD; 416 | 417 | /* compute the mask offset */ 418 | offset = base_offset + NVME_AERUCEM_OFFSET; 419 | 420 | /* get the mask bits */ 421 | ret_code = pci_read_config_dword(pdev, offset, &u32aer_msk); 422 | if (ret_code < 0) { 423 | LOG_ERR("pci_read_config failed in driver error check"); 424 | } 425 | /* zero out the reserved bits */ 426 | u32aer_msk &= ~NVME_AERUCES_RSVD; 427 | 428 | /* 429 | * Complement the Mask Registers and check if unmasked 430 | * bits have a error set 431 | */ 432 | if (u32aer_sts & ~u32aer_msk) { 433 | /* Data Link Protocol Error check */ 434 | if ((u32aer_sts & NVME_AERUCES_DLPES) >> 4) { 435 | status = FAIL; 436 | LOG_ERR("Data Link Protocol Error Status is Set (DLPES)"); 437 | } 438 | /* Pointed TLP status, not an error. 
*/ 439 | if ((u32aer_sts & NVME_AERUCES_PTS) >> 12) { 440 | LOG_ERR("Poisoned TLP Status (PTS)"); 441 | } 442 | /* Check if Flow control Protocol error is set */ 443 | if ((u32aer_sts & NVME_AERUCES_FCPES) >> 13) { 444 | status = FAIL; 445 | LOG_ERR("Flow Control Protocol Error Status (FCPES)"); 446 | } 447 | /* check if completion time out status is set */ 448 | if ((u32aer_sts & NVME_AERUCES_CTS) >> 14) { 449 | status = FAIL; 450 | LOG_ERR("Completion Time Out Status (CTS)"); 451 | } 452 | /* check if completer Abort Status is set */ 453 | if ((u32aer_sts & NVME_AERUCES_CAS) >> 15) { 454 | status = FAIL; 455 | LOG_ERR("Completer Abort Status (CAS)"); 456 | } 457 | /* Check if Unexpected completion status is set */ 458 | if ((u32aer_sts & NVME_AERUCES_UCS) >> 16) { 459 | status = FAIL; 460 | LOG_ERR("Unexpected Completion Status (UCS)"); 461 | } 462 | /* Check if Receiver Over Flow status is set, status not error */ 463 | if ((u32aer_sts & NVME_AERUCES_ROS) >> 17) { 464 | LOG_ERR("Receiver Overflow Status (ROS)"); 465 | } 466 | /* Check if Malformed TLP Status is set, not an error */ 467 | if ((u32aer_sts & NVME_AERUCES_MTS) >> 18) { 468 | LOG_ERR("Malformed TLP Status (MTS)"); 469 | } 470 | /* ECRC error status check */ 471 | if ((u32aer_sts & NVME_AERUCES_ECRCES) >> 19) { 472 | status = FAIL; 473 | LOG_ERR("ECRC Error Status (ECRCES)"); 474 | } 475 | /* Unsupported Request Error Status*/ 476 | if ((u32aer_sts & NVME_AERUCES_URES) >> 20) { 477 | status = FAIL; 478 | LOG_ERR("Unsupported Request Error Status (URES)"); 479 | } 480 | /* Acs violation status check */ 481 | if ((u32aer_sts & NVME_AERUCES_ACSVS) >> 21) { 482 | status = FAIL; 483 | LOG_ERR("ACS Violation Status (ACSVS)"); 484 | } 485 | /* uncorrectable error status check */ 486 | if ((u32aer_sts & NVME_AERUCES_UIES) >> 22) { 487 | status = FAIL; 488 | LOG_ERR("Uncorrectable Internal Error Status (UIES)"); 489 | } 490 | /* MC blocked TLP status check, not an error*/ 491 | if ((u32aer_sts & NVME_AERUCES_MCBTS) >> 23) { 492 | LOG_ERR("MC Blocked TLP Status (MCBTS)"); 493 | } 494 | /* Atomic Op Egress blocked status, not an error */ 495 | if ((u32aer_sts & NVME_AERUCES_AOEBS) >> 24) { 496 | LOG_ERR("AtomicOp Egress Blocked Status (AOEBS)"); 497 | } 498 | /* TLP prefix blocked error status. */ 499 | if ((u32aer_sts & NVME_AERUCES_TPBES) >> 25) { 500 | status = FAIL; 501 | LOG_ERR("TLP Prefix Blocked Error Status (TPBES)"); 502 | } 503 | } 504 | 505 | /* Compute the offset for AER Correctable Error Status Register */ 506 | offset = base_offset + NVME_AERCS_OFFSET; 507 | 508 | /* Read data from pcie space into u32aer_sts */ 509 | ret_code = pci_read_config_dword(pdev, offset, &u32aer_sts); 510 | if (ret_code < 0) { 511 | LOG_ERR("pci_read_config failed in driver error check"); 512 | } 513 | /* zero out Reserved Bits*/ 514 | u32aer_sts &= ~NVME_AERCS_RSVD; 515 | 516 | /* Compute the offset for AER correctable Mask register */ 517 | offset = base_offset + NVME_AERCM_OFFSET; 518 | 519 | /* 520 | * Read the masked bits. When they are set to 1 it means that the s/w 521 | * should ignore those errors 522 | */ 523 | ret_code = pci_read_config_dword(pdev, offset, &u32aer_msk); 524 | if (ret_code < 0) { 525 | LOG_ERR("pci_read_config failed in driver error check"); 526 | } 527 | /* Zero out any reserved bits if they are set */ 528 | u32aer_msk &= ~NVME_AERCS_RSVD; 529 | 530 | /* 531 | * Complement the mask so that any value which is 1 is not be tested 532 | * so it becomes zero. 
Remaining unmasked bits are bitwise ANDed to 533 | * check if any error is set which is not masked. 534 | */ 535 | if (u32aer_sts & ~u32aer_msk) { 536 | /* Check if receiver error status is set */ 537 | if (u32aer_sts & NVME_AERCS_RES) { 538 | status = FAIL; 539 | LOG_ERR("Receiver Error Status (RES)"); 540 | } 541 | 542 | /* check if Bad TLP status is set */ 543 | if ((u32aer_sts & NVME_AERCS_BTS) >> 6) { 544 | status = FAIL; 545 | LOG_ERR("BAD TLP Status (BTS)"); 546 | } 547 | /* check if BAD DLLP is set */ 548 | if ((u32aer_sts & NVME_AERCS_BDS) >> 7) { 549 | status = FAIL; 550 | LOG_ERR("BAD DLLP Status (BDS)"); 551 | } 552 | /* Check if RRS is set, status not an error */ 553 | if ((u32aer_sts & NVME_AERCS_RRS) >> 8) { 554 | LOG_ERR("REPLAY_NUM Rollover Status (RRS)"); 555 | } 556 | /* Check if RTS is set */ 557 | if ((u32aer_sts & NVME_AERCS_RTS) >> 12) { 558 | status = FAIL; 559 | LOG_ERR("Replay Timer Timeout Status (RTS)"); 560 | } 561 | /* Check if non fatal error is set */ 562 | if ((u32aer_sts & NVME_AERCS_ANFES) >> 13) { 563 | status = FAIL; 564 | LOG_ERR("Advisory Non Fatal Error Status (ANFES)"); 565 | } 566 | /* Check if CIES is set */ 567 | if ((u32aer_sts & NVME_AERCS_CIES) >> 14) { 568 | status = FAIL; 569 | LOG_ERR("Corrected Internal Error Status (CIES)"); 570 | } 571 | /* check if HLOS is set, Status not an error */ 572 | if ((u32aer_sts & NVME_AERCS_HLOS) >> 15) { 573 | LOG_ERR("Header Log Overflow Status (HLOS)"); 574 | } 575 | } 576 | 577 | return status; 578 | } 579 | -------------------------------------------------------------------------------- /dnvme_sts_chk.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _DNVME_STS_CHK_H_ 20 | #define _DNVME_STS_CHK_H_ 21 | 22 | #include "dnvme_reg.h" 23 | 24 | 25 | /** 26 | * @def PCI_DEVICE_STATUS 27 | * define the offset for the STS register 28 | * from the start of PCI config space as specified in the 29 | * NVME Compliance 1.0b spec. offset 06h:STS - Device status. 30 | * This register has error status for the NVME PCI Express 31 | * card. After reading data from this register, the driver 32 | * will identify if any error is set during the operation and 33 | * report it as a kernel alert message. 34 | */ 35 | #define PCI_DEVICE_STATUS 0x6 36 | 37 | /** 38 | * @def DEV_ERR_MASK 39 | * The bit positions that are set in this 16 bit word 40 | * imply that the error is defined for those positions in 41 | * the STS register. The bits that are 0 are non error positions. 42 | */ 43 | #define DEV_ERR_MASK 0xB100 44 | 45 | /** 46 | * @def DPE 47 | * This bit position indicates data parity error. 48 | * Set to 1 by h/w when the controller detects a 49 | * parity error on its interface. 
50 | */ 51 | #define DPE 0x8000 52 | 53 | 54 | /** 55 | * @def DPD 56 | * This bit position indicates Master data parity error. 57 | * Set to 1 by h/w if parity error is set or parity 58 | * error line is asserted and parity error response bit 59 | * in CMD.PEE is set to 1. 60 | */ 61 | #define DPD 0x0100 62 | 63 | /** 64 | * @def RMA 65 | * This bit position indicates Received Master Abort. 66 | * Set to 1 by h/w when the controller receives a 67 | * master abort to a cycle it generated. 68 | */ 69 | #define RMA 0x2000 70 | 71 | /** 72 | * @def RTA 73 | * This bit position indicates Received Target Abort. 74 | * Set to 1 by h/w when the controller receives a 75 | * target abort to a cycle it generated. 76 | */ 77 | #define RTA 0x1000 78 | 79 | /** 80 | * @def CL_MASK 81 | * This bit position indicates the Capabilities List of the controller. 82 | * The controller should support the PCI Power Management cap as a 83 | * minimum. 84 | */ 85 | #define CL_MASK 0x0010 86 | 87 | /** 88 | * @def NEXT_MASK 89 | * This indicates the location of the next capability item 90 | * in the list. 91 | */ 92 | #define NEXT_MASK 0xFF00 93 | 94 | #define AER_ERR_MASK 0x20000 95 | /** 96 | * @def PMCAP_ID 97 | * This bit indicates if the pointer leading to this position 98 | * is a PCI power management capability. 99 | */ 100 | #define PMCAP_ID 0x1 101 | 102 | /** 103 | * @def MSICAP_ID 104 | * This bit indicates if the pointer leading to this position 105 | * is a capability. 106 | */ 107 | #define MSICAP_ID 0x5 108 | 109 | /** 110 | * @def MSIXCAP_ID 111 | * This bit indicates if the pointer leading to this position 112 | * is a capability. 113 | */ 114 | #define MSIXCAP_ID 0x11 115 | 116 | /** 117 | * @def PXCAP_ID 118 | * This bit indicates if the pointer leading to this position 119 | * is a capability. 120 | */ 121 | #define PXCAP_ID 0x10 122 | 123 | 124 | /** 125 | * PCI Express Device Status - PXDS 126 | * The below enums are for the PCI Express device status register 127 | * individual status bits and offset position. 128 | */ 129 | enum { 130 | NVME_PXDS_CED = 0x1 << 0, /* Correctable Error */ 131 | NVME_PXDS_NFED = 0x1 << 1, /* Non Fatal Error */ 132 | NVME_PXDS_FED = 0x1 << 2, /* Fatal Error */ 133 | NVME_PXDS_URD = 0x1 << 3, /* Unsupported Request*/ 134 | NVME_PXDS_APD = 0x1 << 4, /* AUX Power */ 135 | NVME_PXDS_TP = 0x1 << 5, /* Transactions Pending */ 136 | NVME_PXDS_RSVD = 0xFFE0, /* Reserved Bits in PXDS */ 137 | NVME_PXCAP_PXDS = 0xA, /* Device Status offset from PXCAP */ 138 | }; 139 | 140 | /** 141 | * @def AERCAP_ID 142 | * This bit indicates if the pointer leading to this position 143 | * is a capability. 144 | */ 145 | #define AERCAP_ID 0x0001 146 | 147 | /** 148 | * enums for bit positions specified in NVME Controller Status 149 | * offset 0x1Ch CSTS register. 
150 | */ 151 | enum { 152 | NVME_CSTS_SHST = 0x3, 153 | NVME_CSTS_SHST_MASK = 0xC, 154 | NVME_CSTS_RSVD = 0xF, 155 | }; 156 | 157 | /** 158 | * enums for bit positions specified in NVME controller status 159 | * offset 1C CSTS in bits 02:03 160 | */ 161 | enum { 162 | NVME_CSTS_NRML_OPER = 0x0, 163 | NVME_CSTS_SHT_OCC = 0x1, 164 | NVME_CSTS_SHT_COMP = 0x2, 165 | NVME_CSTS_SHT_RSVD = 0x3, 166 | }; 167 | 168 | /** 169 | * enums for capability version indicated in the AER capability ID 170 | * Offset AERCAP:AERID 171 | */ 172 | enum { 173 | NVME_AER_CVER = 0x2, 174 | }; 175 | 176 | /** 177 | * enums for Advanced Error reporting Status and Mask Registers offsets 178 | */ 179 | enum { 180 | NVME_AERUCES_OFFSET = 0x4, 181 | NVME_AERUCEM_OFFSET = 0x8, 182 | NVME_AERUCESEV_OFFSET = 0xC, 183 | NVME_AERCS_OFFSET = 0x10, 184 | NVME_AERCM_OFFSET = 0x14, 185 | NVME_AERCC_OFFSET = 0x14, 186 | }; 187 | 188 | /** 189 | * enums for AER Uncorrectable Error Status and Mask bits. 190 | * The bit positions for status and Mask are same in the NVME Spec 1.0b 191 | * so here we have only this bit positions defined for both AERUCES 192 | * and AERUCEM, mask register. 193 | */ 194 | enum { 195 | NVME_AERUCES_RSVD = 0xFC00002F, 196 | NVME_AERUCES_DLPES = 0x1 << 4, 197 | NVME_AERUCES_PTS = 0x1 << 12, 198 | NVME_AERUCES_FCPES = 0x1 << 13, 199 | NVME_AERUCES_CTS = 0x1 << 14, 200 | NVME_AERUCES_CAS = 0x1 << 15, 201 | NVME_AERUCES_UCS = 0x1 << 16, 202 | NVME_AERUCES_ROS = 0x1 << 17, 203 | NVME_AERUCES_MTS = 0x1 << 18, 204 | NVME_AERUCES_ECRCES = 0x1 << 19, 205 | NVME_AERUCES_URES = 0x1 << 20, 206 | NVME_AERUCES_ACSVS = 0x1 << 21, 207 | NVME_AERUCES_UIES = 0x1 << 22, 208 | NVME_AERUCES_MCBTS = 0x1 << 23, 209 | NVME_AERUCES_AOEBS = 0x1 << 24, 210 | NVME_AERUCES_TPBES = 0x1 << 25, 211 | }; 212 | 213 | /** 214 | * enums for AER Correctable Error Status and Mask bits. 215 | * The bit positions for status and Mask are same in the NVME Spec 1.0b 216 | * so here we have only this bit positions defined for both AERCS 217 | * and AERCEM, mask register. 218 | */ 219 | enum { 220 | NVME_AERCS_RSVD = 0xFFFF0E3E, 221 | NVME_AERCS_HLOS = 0x1 << 15, 222 | NVME_AERCS_CIES = 0x1 << 14, 223 | NVME_AERCS_ANFES = 0x1 << 13, 224 | NVME_AERCS_RTS = 0x1 << 12, 225 | NVME_AERCS_RRS = 0x1 << 8, 226 | NVME_AERCS_BDS = 0x1 << 7, 227 | NVME_AERCS_BTS = 0x1 << 6, 228 | NVME_AERCS_RES = 0x1 << 0, 229 | }; 230 | 231 | /** 232 | * device_status_pci function returns the device status of 233 | * the PCI Device status register set in STS register. The offset for this 234 | * register is 0x06h as specified in NVME Express 1.0b spec. 235 | * @param device_data 236 | * @return SUCCESS or FAIL 237 | */ 238 | int device_status_pci(u16 device_data); 239 | 240 | /** 241 | * device_status_next function checks if the next capability of the NVME 242 | * Express device exits and if it exists then gets its status. 243 | * @param pdev 244 | * @return SUCCESS or FAIL 245 | */ 246 | int device_status_next(struct pci_dev *pdev); 247 | 248 | /** 249 | * nvme_controller_status - This function checks the controller status 250 | * @param ctrlr_regs 251 | * @return SUCCESS or FAIL 252 | */ 253 | int nvme_controller_status(struct nvme_ctrl_reg __iomem *ctrlr_regs); 254 | 255 | /** 256 | * device_status_pci function returns the device status of 257 | * the PCI Power Management status register set in PMCS register. 
258 | * @param device_data 259 | * @return SUCCESS or FAIL 260 | */ 261 | int device_status_pmcs(u16 device_data); 262 | 263 | /** 264 | * device_status_msicap function returns the device status of 265 | * Message signaled Interrupt status register in MC and MPEND 266 | * @param pdev 267 | * @param device_data 268 | * @return SUCCESS or FAIL 269 | */ 270 | int device_status_msicap(struct pci_dev *pdev, u16 device_data); 271 | 272 | /** 273 | * device_status_msixcap function returns the device status of 274 | * Message signaled Interrupt-X status register in MXC 275 | * @param pdev 276 | * @param device_data 277 | * @return SUCCESS or FAIL 278 | */ 279 | int device_status_msixcap(struct pci_dev *pdev, u16 device_data); 280 | 281 | /** 282 | * device_status_pxcap function returns the device status of 283 | * PCI express capability device status register in PXDS. 284 | * @param pdev 285 | * @param base_offset 286 | * @return SUCCESS or FAIL 287 | */ 288 | int device_status_pxcap(struct pci_dev *pdev, u16 base_offset); 289 | 290 | /** 291 | * device_status_aercap function returns the device status of 292 | * Advanced Error Reporting AER capability device status registers 293 | * The register checked are AERUCES, AERCS and AERCC 294 | * @param pdev 295 | * @param base_offset 296 | * @return SUCCESS or FAIL 297 | */ 298 | int device_status_aercap(struct pci_dev *pdev, u16 base_offset); 299 | 300 | #endif 301 | -------------------------------------------------------------------------------- /etc/55-dnvme.rules: -------------------------------------------------------------------------------- 1 | # 2 | # tdi module rules 3 | # 4 | KERNEL=="tdi*", NAME="%k", MODE="0666" 5 | -------------------------------------------------------------------------------- /funcHierarchy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 4 | # NVM Express Compliance Suite 5 | # Copyright (c) 2011, Intel Corporation. 6 | # 7 | # This program is free software; you can redistribute it and/or modify it 8 | # under the terms and conditions of the GNU General Public License, 9 | # version 2, as published by the Free Software Foundation. 10 | # 11 | # This program is distributed in the hope it will be useful, but WITHOUT 12 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 | # more details. 15 | # 16 | # You should have received a copy of the GNU General Public License along with 17 | # this program; if not, write to the Free Software Foundation, Inc., 18 | # 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 19 | # 20 | 21 | firefox ./Doc/HTML/index.html 22 | 23 | exit 24 | -------------------------------------------------------------------------------- /scripts/checkSrc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # NVM Express Compliance Suite 4 | # Copyright (c) 2011, Intel Corporation. 5 | # 6 | # This program is free software; you can redistribute it and/or modify it 7 | # under the terms and conditions of the GNU General Public License, 8 | # version 2, as published by the Free Software Foundation. 9 | # 10 | # This program is distributed in the hope it will be useful, but WITHOUT 11 | # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 | # FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 | # more details. 
14 | # 15 | # You should have received a copy of the GNU General Public License along with 16 | # this program; if not, write to the Free Software Foundation, Inc., 17 | # 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 18 | # 19 | 20 | ./scripts/checkpatch.pl --file -terse --no-tree *.c 21 | ./scripts/checkpatch.pl --file -terse --no-tree *.h 22 | -------------------------------------------------------------------------------- /sysdnvme.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _SYSDNVME_H_ 20 | #define _SYSDNVME_H_ 21 | 22 | #define APPNAME "dnvme" 23 | #define LEVEL APPNAME 24 | 25 | /* LOG_NRM() macro should be used with caution. It was originally peppered 26 | * throughout the code and enough latency was introduced while running within 27 | * QEMU that tnmve would sometimes miss CE's arriving from the simulated hdw. 28 | * It was discovered that syslogd was running causing QEMU to hold off 29 | * important threads. Converting almost all LOG_NRM's to LOG_DBG's and realizing 30 | * that the dnvme should be as efficient as possible made this issue disappear. 31 | */ 32 | #define LOG_NRM(fmt, ...) \ 33 | printk("%s: " fmt "\n", LEVEL, ## __VA_ARGS__) 34 | #define LOG_ERR(fmt, ...) \ 35 | printk("%s-err:%s:%d: " fmt "\n", \ 36 | LEVEL, __FILE__, __LINE__, ## __VA_ARGS__) 37 | #ifdef DEBUG 38 | #define LOG_DBG(fmt, ...) \ 39 | printk("%s-dbg:%s:%d: " fmt "\n", \ 40 | LEVEL, __FILE__, __LINE__, ## __VA_ARGS__) 41 | #else 42 | #define LOG_DBG(fmt, ...) 43 | #endif 44 | 45 | /* Debug flag for IOCT_SEND_64B module */ 46 | #define TEST_PRP_DEBUG 47 | 48 | /** 49 | * Absract the differences in trying to make this driver run within QEMU and 50 | * also within real world 64 bit platforms agaisnt real hardware. 51 | */ 52 | inline u64 READQ(const volatile void __iomem *addr); 53 | inline void WRITEQ(u64 val, volatile void __iomem *addr); 54 | 55 | 56 | #endif /* sysdnvme.h */ 57 | -------------------------------------------------------------------------------- /sysfuncproto.h: -------------------------------------------------------------------------------- 1 | /* 2 | * NVM Express Compliance Suite 3 | * Copyright (c) 2011, Intel Corporation. 4 | * 5 | * This program is free software; you can redistribute it and/or modify it 6 | * under the terms and conditions of the GNU General Public License, 7 | * version 2, as published by the Free Software Foundation. 8 | * 9 | * This program is distributed in the hope it will be useful, but WITHOUT 10 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 | * more details. 
13 | * 14 | * You should have received a copy of the GNU General Public License along with 15 | * this program; if not, write to the Free Software Foundation, Inc., 16 | * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 17 | */ 18 | 19 | #ifndef _SYSFUNCPROTO_H_ 20 | #define _SYSFUNCPROTO_H_ 21 | 22 | #include 23 | #include 24 | #include 25 | #include 26 | #include 27 | #include 28 | #include 29 | #include 30 | 31 | #include "dnvme_ds.h" 32 | #include "dnvme_reg.h" 33 | #include "sysdnvme.h" 34 | 35 | /** 36 | * driver_generic_read is a function that is called from 37 | * driver IOCTL when user want to read data from the 38 | * NVME card. The read parameter like offset and length 39 | * etc are specified from the struct rw_generic 40 | * @param nvme_data pointer to the device opened. 41 | * @param pmetrics_device 42 | * @return read success or failure. 43 | */ 44 | int driver_generic_read(struct rw_generic *nvme_data, 45 | struct metrics_device_list *pmetrics_device); 46 | /** 47 | * driver_generic_write is a function that is called from 48 | * driver IOCTL when user want to write data to the 49 | * NVME card. The write parameters offset and length 50 | * etc are specified from the struct nvme_write_generic 51 | * @param nvme_data pointer to the device opened. 52 | * @param pmetrics_device 53 | * @return write success or failure. 54 | */ 55 | int driver_generic_write(struct rw_generic *nvme_data, 56 | struct metrics_device_list *pmetrics_device); 57 | 58 | /** 59 | * device_status_chk - Generic error checking function 60 | * which checks error registers and set kernel 61 | * alert if a error is detected. 62 | * @param pmetrics_device 63 | * @param status 64 | * @return device status fail or success. 65 | */ 66 | int device_status_chk(struct metrics_device_list *pmetrics_device, 67 | int *status); 68 | 69 | /** 70 | * driver_create_asq - Driver Admin Submission Queue creation routine 71 | * @param create_admn_q 72 | * @param pmetrics_device 73 | * @return ASQ creation SUCCESS or FAIL 74 | */ 75 | int driver_create_asq(struct nvme_create_admn_q *create_admn_q, 76 | struct metrics_device_list *pmetrics_device); 77 | 78 | /* 79 | * driver_iotcl_init - Driver Initialization routine before starting to 80 | * issue ioctls. 81 | * @param pdev 82 | * @param pmetrics_device_list 83 | * @return init SUCCESS or FAIL 84 | */ 85 | int driver_ioctl_init(struct pci_dev *pdev, void __iomem *bar0, 86 | void __iomem *bar1, void __iomem *bar2, 87 | struct metrics_device_list *pmetrics_device_list); 88 | 89 | /** 90 | * driver_create_acq - Driver Admin completion Queue creation routine 91 | * @param create_admn_q 92 | * @param pmetrics_device_list 93 | * @return ACQ creation SUCCESS or FAIL 94 | */ 95 | int driver_create_acq(struct nvme_create_admn_q *create_admn_q, 96 | struct metrics_device_list *pmetrics_device_list); 97 | 98 | /** 99 | * driver_nvme_prep_sq - Driver routine to set up user parameters into metrics 100 | * for prepating the IO SQ. 101 | * @param prep_sq 102 | * @param pmetrics_device 103 | * @return allocation of contig mem SUCCESS or FAIL. 104 | */ 105 | int driver_nvme_prep_sq(struct nvme_prep_sq *prep_sq, 106 | struct metrics_device_list *pmetrics_device); 107 | 108 | /** 109 | * driver_nvme_prep_cq - Driver routine to set up user parameters into metrics 110 | * for prepating the IO CQ. 111 | * @param prep_cq 112 | * @param pmetrics_device 113 | * @return allocation of contig mem SUCCESS or FAIL. 
114 | */ 115 | int driver_nvme_prep_cq(struct nvme_prep_cq *prep_cq, 116 | struct metrics_device_list *pmetrics_device); 117 | 118 | /** 119 | * driver_send_64b - Routine for sending 64 bytes command into 120 | * admin/IO SQ/CQ's 121 | * @param pmetrics_device 122 | * @param nvme_64b_send 123 | * @return Error Codes 124 | */ 125 | int driver_send_64b(struct metrics_device_list *pmetrics_device, 126 | struct nvme_64b_send *cmd_request); 127 | 128 | /** 129 | * driver_toxic_dword - Please refer to the header file comment for 130 | * NVME_IOCTL_TOXIC_64B_CMD. 131 | * @param pmetrics_device 132 | * @param err_inject Pass ptr to the user space buffer describing the error 133 | * to inject. 134 | * @return Error Codes 135 | */ 136 | int driver_toxic_dword(struct metrics_device_list *pmetrics_device, 137 | struct backdoor_inject *err_inject); 138 | 139 | /** 140 | * driver_log - Driver routine to log data into file from metrics 141 | * @param n_file 142 | * @return allocation of contig mem SUCCESS or FAIL. 143 | */ 144 | int driver_log(struct nvme_file *n_file); 145 | 146 | /** 147 | * driver_logstr - Driver routine to log a custom string to the system log 148 | * @param logStr 149 | * @return SUCCESS or FAIL. 150 | */ 151 | int driver_logstr(struct nvme_logstr *logStr); 152 | 153 | /** 154 | * deallocate_all_queues - This function will start freeing up the memory for 155 | * the queues (SQ and CQ) allocated during the prepare queues. This function 156 | * takes a parameter, ST_DISABLE or ST_DISABLE_COMPLETELY, which identifies if 157 | * you need to clear Admin or not. Also while driver exit call this function 158 | * with ST_DISABLE_COMPLETELY. 159 | * @param pmetrics_device 160 | * @param nstate 161 | */ 162 | void deallocate_all_queues(struct metrics_device_list *pmetrics_device, 163 | enum nvme_state nstate); 164 | 165 | /** 166 | * driver_reap_inquiry - This function will traverse the metrics device list 167 | * for the given cq_id and return the number of commands that are to be reaped. 168 | * This is only function apart from initializations, that will modify tail_ptr 169 | * for the corresponding CQ. 170 | * @param pmetrics_device 171 | * @param usr_reap_inq 172 | * @return success or failure based on reap_inquiry 173 | */ 174 | int driver_reap_inquiry(struct metrics_device_list *pmetrics_device, 175 | struct nvme_reap_inquiry *usr_reap_inq); 176 | 177 | /** 178 | * dnvme_device_open - This operation is always the first operation performed 179 | * on the device file. 180 | * @param inode 181 | * @param filp 182 | * @return success or failure based on device open 183 | */ 184 | int dnvme_device_open(struct inode *inode, struct file *filp); 185 | 186 | /** 187 | * dnvme_device_release - This operation is invoked when the file structure 188 | * is being released. 189 | * @param inode 190 | * @param filp 191 | * @return success or failure based on device clean up. 192 | */ 193 | int dnvme_device_release(struct inode *inode, struct file *filp); 194 | 195 | /** 196 | * dnvme_device_mmap - This mmap will do the linear mapping to device memory 197 | * into user space. 198 | * The parameter vma holds all the required mapping and return the caller with 199 | * virtual address. 200 | * @param filp 201 | * @param vma 202 | * @return success or failure depending on mapping. 203 | */ 204 | int dnvme_device_mmap(struct file *filp, struct vm_area_struct *vma); 205 | 206 | /** 207 | * driver_reap_cq - Reap the number of elements specified for the given CQ id. 
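 * The call reports how many entries were actually reaped and how many
 * remain (see struct nvme_reap).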
208 | * Return the CQ entry data in the buffer specified. 209 | * @param pmetrics_device 210 | * @param usr_reap_data 211 | * @return Success of Failure based on Reap Success or failure. 212 | */ 213 | int driver_reap_cq(struct metrics_device_list *pmetrics_device, 214 | struct nvme_reap *usr_reap_data); 215 | 216 | /** 217 | * Create a dma pool for the requested size. Initialize the DMA pool pointer 218 | * with DWORD alignment and associate it with the active device. 219 | * @param pmetrics_device 220 | * @param alloc_size 221 | * @return SUCCESS or FAIL based on dma pool creation. 222 | */ 223 | int metabuff_create(struct metrics_device_list *pmetrics_device, 224 | u32 alloc_size); 225 | 226 | /** 227 | * Create a meta buffer node when user request and allocate a consistent 228 | * dma memory from the meta dma pool. Add this node into the meta data 229 | * linked list. 230 | * @param pmetrics_device 231 | * @param meta_id 232 | * @return Success of Failure based on dma alloc Success or failure. 233 | */ 234 | int metabuff_alloc(struct metrics_device_list *pmetrics_device, 235 | u32 meta_id); 236 | 237 | /** 238 | * Delete a meta buffer node when user requests and deallocate a consistent 239 | * dma memory. Delete this node from the meta data linked list. 240 | * @param pmetrics_device 241 | * @param meta_id 242 | * @return Success of Failure based on metabuff delete 243 | */ 244 | int metabuff_del(struct metrics_device_list *pmetrics_device, 245 | u32 meta_id); 246 | 247 | /* 248 | * deallocate_mb will free up the memory and nodes for the meta buffers 249 | * that were allocated during the alloc and create meta. Finally 250 | * destroys the dma pool and free up the metrics meta node. 251 | * @param pmetrics_device 252 | */ 253 | void deallocate_mb(struct metrics_device_list *pmetrics_device); 254 | 255 | int check_cntlr_cap(struct pci_dev *pdev, enum nvme_irq_type cap_type, 256 | u16 *offset); 257 | 258 | #endif 259 | -------------------------------------------------------------------------------- /unittest/.gitignore: -------------------------------------------------------------------------------- 1 | test 2 | -------------------------------------------------------------------------------- /unittest/Makefile: -------------------------------------------------------------------------------- 1 | CC=gcc 2 | CFLAGS=-g -W -Wall 3 | APP_NAME := test 4 | 5 | SOURCES := test.c test_metrics.c test_alloc.c test_rng_sqxdbl.c test_send_cmd.c test_irq.c 6 | 7 | INCLUDE := 8 | 9 | all: $(APP_NAME) 10 | 11 | $(APP_NAME): $(SOURCES) 12 | $(CC) $(CFLAGS) $(SOURCES) -o $(APP_NAME) $(LDFLAGS) 13 | 14 | clean: 15 | rm -f *.o 16 | rm -f $(APP_NAME) 17 | 18 | .PHONY: all clean doc 19 | 20 | -------------------------------------------------------------------------------- /unittest/test_alloc.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | #include 14 | 15 | #include "../dnvme_interface.h" 16 | #include "../dnvme_ioctls.h" 17 | 18 | #include "test_metrics.h" 19 | 20 | struct cq_completion { 21 | uint32_t cmd_specifc; /* DW 0 all 32 bits */ 22 | uint32_t reserved; /* DW 1 all 32 bits */ 23 | uint16_t sq_head_ptr; /* DW 2 lower 16 bits */ 24 | uint16_t sq_identifier; /* DW 2 higher 16 bits */ 25 | uint16_t cmd_identifier; /* Cmd identifier */ 26 | uint8_t phase_bit:1; /* Phase bit */ 27 | uint16_t status_field:15; 
/* Status field */ 28 | }; 29 | 30 | int ioctl_prep_sq(int file_desc, uint16_t sq_id, uint16_t cq_id, uint16_t elem, uint8_t contig) 31 | { 32 | int ret_val = -1; 33 | struct nvme_prep_sq prep_sq; 34 | 35 | prep_sq.sq_id = sq_id; 36 | prep_sq.cq_id = cq_id; 37 | prep_sq.elements = elem; 38 | prep_sq.contig = contig; 39 | 40 | printf("\tCalling Prepare SQ Creation...\n"); 41 | printf("\tSQ ID = %d\n", prep_sq.sq_id); 42 | printf("\tAssoc CQ ID = %d\n", prep_sq.cq_id); 43 | printf("\tNo. of Elem = %d\n", prep_sq.elements); 44 | printf("\tContig(Y|N=(1|0)) = %d\n", prep_sq.contig); 45 | 46 | ret_val = ioctl(file_desc, NVME_IOCTL_PREPARE_SQ_CREATION, &prep_sq); 47 | 48 | if(ret_val < 0) { 49 | printf("\tSQ ID = %d Preparation failed!\n", prep_sq.sq_id); 50 | } else { 51 | printf("\tSQ ID = %d Preparation success\n", prep_sq.sq_id); 52 | } 53 | return ret_val; 54 | } 55 | 56 | int ioctl_prep_cq(int file_desc, uint16_t cq_id, uint16_t elem, uint8_t contig) 57 | { 58 | int ret_val = -1; 59 | struct nvme_prep_cq prep_cq; 60 | 61 | prep_cq.cq_id = cq_id; 62 | prep_cq.elements = elem; 63 | prep_cq.contig = contig; 64 | 65 | printf("\tCalling Prepare CQ Creation...\n"); 66 | printf("\tCQ ID = %d\n", prep_cq.cq_id); 67 | printf("\tNo. of Elem = %d\n", prep_cq.elements); 68 | printf("\tContig(Y|N=(1|0)) = %d\n", prep_cq.contig); 69 | 70 | ret_val = ioctl(file_desc, NVME_IOCTL_PREPARE_CQ_CREATION, &prep_cq); 71 | 72 | if(ret_val < 0) { 73 | printf("\tCQ ID = %d Preparation failed!\n", prep_cq.cq_id); 74 | } else { 75 | printf("\tCQ ID = %d Preparation success\n", prep_cq.cq_id); 76 | } 77 | return ret_val; 78 | } 79 | 80 | int ioctl_reap_inquiry(int file_desc, int cq_id) 81 | { 82 | int ret_val = -1; 83 | struct nvme_reap_inquiry rp_inq; 84 | 85 | rp_inq.q_id = cq_id; 86 | 87 | ret_val = ioctl(file_desc, NVME_IOCTL_REAP_INQUIRY, &rp_inq); 88 | if(ret_val < 0) { 89 | printf("\nreap inquiry failed!\n"); 90 | exit(-1); 91 | } 92 | else { 93 | printf("\tReap Inquiry on CQ ID = %d, Num_Remaining = %d," 94 | " ISR_count = %d\n", rp_inq.q_id, rp_inq.num_remaining, 95 | rp_inq.isr_count); 96 | } 97 | return rp_inq.num_remaining; 98 | } 99 | 100 | void display_cq_data(unsigned char *cq_buffer, int reap_ele) 101 | { 102 | struct cq_completion *cq_entry; 103 | while (reap_ele) { 104 | cq_entry = (struct cq_completion *)cq_buffer; 105 | printf("\n\t\tCmd Id = 0x%x", cq_entry->cmd_identifier); 106 | printf("\n\t\tCmd Spec = 0x%x", cq_entry->cmd_specifc); 107 | printf("\n\t\tPhase Bit = %d", cq_entry->phase_bit); 108 | printf("\n\t\tSQ Head Ptr = %d", cq_entry->sq_head_ptr); 109 | printf("\n\t\tSQ ID = 0x%x", cq_entry->sq_identifier); 110 | printf("\n\t\tStatus = %d\n", cq_entry->status_field); 111 | reap_ele--; 112 | cq_buffer += sizeof(struct cq_completion); 113 | } 114 | } 115 | 116 | void ioctl_reap_cq(int file_desc, int cq_id, int elements, int size, int display) 117 | { 118 | struct nvme_reap rp_cq; 119 | int ret_val; 120 | 121 | rp_cq.q_id = cq_id; 122 | rp_cq.elements = elements; 123 | rp_cq.size = (size * elements); 124 | rp_cq.buffer = malloc(sizeof(char) * rp_cq.size); 125 | if (rp_cq.buffer == NULL) { 126 | printf("Malloc Failed"); 127 | return; 128 | } 129 | ret_val = ioctl(file_desc, NVME_IOCTL_REAP, &rp_cq); 130 | if(ret_val < 0) { 131 | printf("\nreap inquiry failed!\n"); 132 | } 133 | else { 134 | 135 | printf("\tReaped on CQ ID = %d, No Request = %d, No Reaped = %d," 136 | " No Rem = %d, ISR_count = %d\n", rp_cq.q_id, rp_cq.elements, 137 | rp_cq.num_reaped, rp_cq.num_remaining, rp_cq.isr_count); 138 
| 139 | if (display) 140 | display_cq_data(rp_cq.buffer, rp_cq.num_reaped); 141 | } 142 | free(rp_cq.buffer); 143 | } 144 | 145 | void set_admn(int file_desc) 146 | { 147 | ioctl_disable_ctrl(file_desc, ST_DISABLE); 148 | ioctl_create_acq(file_desc); 149 | ioctl_create_asq(file_desc); 150 | ioctl_enable_ctrl(file_desc); 151 | } 152 | 153 | void test_meta(int file_desc, int log) 154 | { 155 | int size; 156 | int ret_val; 157 | int meta_id; 158 | uint64_t *kadr; 159 | char *tmpfile12 = "/tmp/file_name12.txt"; 160 | char *tmpfile13 = "/tmp/file_name13.txt"; 161 | 162 | size = 4096; 163 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_CREATE, size); 164 | if(ret_val < 0) { 165 | printf("\nMeta data creation failed!\n"); 166 | } 167 | else { 168 | printf("Meta Data creation success!!\n"); 169 | } 170 | getchar(); 171 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_CREATE, size); 172 | if(ret_val < 0) { 173 | printf("\nMeta data creation failed!\n"); 174 | } 175 | else { 176 | printf("Meta Data creation success!!\n"); 177 | } 178 | size = 1024 * 16 + 10; 179 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_CREATE, size); 180 | if(ret_val < 0) { 181 | printf("\nMeta data creation failed!\n"); 182 | } 183 | else { 184 | printf("Meta Data creation success!!\n"); 185 | } 186 | getchar(); 187 | 188 | for (meta_id = 0; meta_id < 20; meta_id++) { 189 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_ALLOC, meta_id); 190 | if(ret_val < 0) { 191 | printf("\nMeta Id = %d allocation failed!\n", meta_id); 192 | } 193 | else { 194 | printf("Meta Id = %d allocation success!!\n", meta_id); 195 | } 196 | } 197 | if (log != 0) 198 | ioctl_dump(file_desc, tmpfile12); 199 | 200 | meta_id = 5; 201 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_DELETE, meta_id); 202 | if(ret_val < 0) { 203 | printf("\nMeta Id = %d deletion failed!\n", meta_id); 204 | } 205 | else { 206 | printf("Meta Id = %d deletion success!!\n", meta_id); 207 | } 208 | meta_id = 6; 209 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_DELETE, meta_id); 210 | if(ret_val < 0) { 211 | printf("\nMeta Id = %d deletion failed!\n", meta_id); 212 | } 213 | else { 214 | printf("Meta Id = %d deletion success!!\n", meta_id); 215 | } 216 | meta_id = 6; 217 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_DELETE, meta_id); 218 | if(ret_val < 0) { 219 | printf("\nMeta Id = %d deletion failed!\n", meta_id); 220 | } 221 | else { 222 | printf("Meta Id = %d deletion success!!\n", meta_id); 223 | } 224 | meta_id = 5; 225 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_DELETE, meta_id); 226 | if(ret_val < 0) { 227 | printf("\nMeta Id = %d deletion failed!\n", meta_id); 228 | } 229 | else { 230 | printf("Meta Id = %d deletion success!!\n", meta_id); 231 | } 232 | 233 | meta_id = 6; 234 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_ALLOC, meta_id); 235 | if(ret_val < 0) { 236 | printf("\nMeta Id = %d allocation failed!\n", meta_id); 237 | } 238 | else { 239 | printf("Meta Id = %d allocation success!!\n", meta_id); 240 | } 241 | 242 | meta_id = 0x80004; 243 | printf("\nTEST 3.1: Call to Mmap encoded Meta Id = 0x%x\n", meta_id); 244 | kadr = mmap(0, 4096, PROT_READ, MAP_SHARED, file_desc, 4096 * meta_id); 245 | if (!kadr) { 246 | printf("mapping failed\n"); 247 | exit(-1); 248 | } 249 | 250 | munmap(kadr, 4096); 251 | if (log != 0) 252 | ioctl_dump(file_desc, tmpfile13); 253 | } 254 | 255 | uint32_t test_meta_buf(int file_desc, uint32_t meta_id) { 256 | int ret_val; 257 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_CREATE, 4096); 258 | if(ret_val < 0) { 259 | 
printf("\nMeta data creation failed!\n"); 260 | } else { 261 | printf("Meta Data creation success!!\n"); 262 | } 263 | ret_val = ioctl(file_desc, NVME_IOCTL_METABUF_ALLOC, meta_id); 264 | if(ret_val < 0) { 265 | printf("\nMeta Id = %d allocation failed!\n", meta_id); 266 | } else { 267 | printf("Meta Id = %d allocation success!!\n", meta_id); 268 | } 269 | return ret_val; 270 | } 271 | -------------------------------------------------------------------------------- /unittest/test_irq.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | 6 | #include "../dnvme_interface.h" 7 | #include "../dnvme_ioctls.h" 8 | 9 | #include "test_metrics.h" 10 | #include "test_irq.h" 11 | 12 | void test_interrupts(int fd) /* Call with 16 number */ 13 | { 14 | int ret_val; 15 | struct interrupts new_irq; 16 | 17 | printf("Setting MSI X IRQ Type...\n"); 18 | printf("Press any key to continue...\n"); 19 | getchar(); 20 | 21 | /* MSI SINGLE Testing */ 22 | new_irq.num_irqs = 32; 23 | new_irq.irq_type = INT_MSIX; 24 | 25 | ret_val = ioctl(fd, NVME_IOCTL_SET_IRQ, &new_irq); 26 | if(ret_val < 0) { 27 | printf("\nSet IRQ MSI-X failed!\n"); 28 | } 29 | else { 30 | printf("\nSet IRQ MSI-X success!!\n"); 31 | } 32 | printf("Press any key to continue...\n"); 33 | getchar(); 34 | 35 | printf("Setting NONE IRQ Type...\n"); 36 | /* Setting to INT_NONE Testing */ 37 | new_irq.num_irqs = 1; 38 | new_irq.irq_type = INT_NONE; 39 | 40 | ret_val = ioctl(fd, NVME_IOCTL_SET_IRQ, &new_irq); 41 | if(ret_val < 0) { 42 | printf("\nSet IRQ NONE failed!\n"); 43 | } 44 | else { 45 | printf("\nSet IRQ NONE success!!\n"); 46 | } 47 | printf("Press any key to continue...\n"); 48 | getchar(); 49 | } 50 | 51 | void set_irq_none(int fd) 52 | { 53 | int ret_val; 54 | struct interrupts new_irq; 55 | 56 | printf("Setting NONE IRQ Type...\n"); 57 | /* Setting to INT_NONE Testing */ 58 | new_irq.num_irqs = 0; 59 | new_irq.irq_type = INT_NONE; 60 | 61 | ret_val = ioctl(fd, NVME_IOCTL_SET_IRQ, &new_irq); 62 | if(ret_val < 0) { 63 | printf("\nSet IRQ NONE failed!\n"); 64 | } 65 | else { 66 | printf("\nSet IRQ NONE success!!\n"); 67 | } 68 | } 69 | 70 | void set_irq_msix(int fd) 71 | { 72 | int ret_val; 73 | struct interrupts new_irq; 74 | printf("Setting MSI X IRQ Type...\n"); 75 | 76 | printf("Enter desired num_irqs = "); 77 | fflush(stdout); 78 | scanf("%hu", &new_irq.num_irqs); 79 | 80 | new_irq.irq_type = INT_MSIX; 81 | 82 | ret_val = ioctl(fd, NVME_IOCTL_SET_IRQ, &new_irq); 83 | if(ret_val < 0) { 84 | printf("\nSet IRQ MSI-X failed!\n"); 85 | } 86 | else { 87 | printf("\nSet IRQ MSI-X success!!\n"); 88 | } 89 | } 90 | 91 | void test_irq_review568(int fd) 92 | { 93 | int i; 94 | 95 | i = 10000; 96 | 97 | while(i) { 98 | printf("\nIRQ Loop Test = %d\n", i + 1); 99 | set_irq_msix(fd); 100 | i--; 101 | } 102 | set_irq_none(fd); 103 | printf("\nCalling Dump Metrics to irq_loop_test\n"); 104 | ioctl_dump(fd, "/tmp/test_rev568.txt"); 105 | printf("\nPressAny key..\n"); 106 | getchar(); 107 | } 108 | 109 | void test_loop_irq(int fd) 110 | { 111 | int i, cmds, num; 112 | void *rd_buffer; 113 | if (posix_memalign(&rd_buffer, 4096, READ_BUFFER_SIZE)) { 114 | printf("Memalign Failed"); 115 | return; 116 | } 117 | num = ioctl_reap_inquiry(fd, 0); 118 | /* Submit 10 cmds */ 119 | printf("\nEnter no of commands in ACQ:"); 120 | fflush(stdout); 121 | scanf ("%d", &cmds); 122 | for (i = 0; i < cmds; i++) { 123 | ioctl_send_identify_cmd(fd, rd_buffer); 124 | } 125 | 126 | 
ioctl_tst_ring_dbl(fd, 0); /* Ring Admin Q Doorbell */ 127 | while(ioctl_reap_inquiry(fd, 0) != (num + cmds)); 128 | 129 | ioctl_dump(fd, "/tmp/irq_loop_test.txt"); 130 | free(rd_buffer); 131 | } 132 | 133 | int irq_for_io_discontig(int file_desc, int cq_id, int irq_no, int cq_flags, 134 | uint16_t elem, void *addr) 135 | { 136 | int ret_val = -1; 137 | struct nvme_64b_send user_cmd; 138 | struct nvme_create_cq create_cq_cmd; 139 | 140 | /* Fill the command for create discontig IOSQ*/ 141 | create_cq_cmd.opcode = 0x05; 142 | create_cq_cmd.cqid = cq_id; 143 | create_cq_cmd.qsize = elem; 144 | create_cq_cmd.cq_flags = cq_flags; 145 | create_cq_cmd.irq_no = irq_no; 146 | create_cq_cmd.rsvd1[0] = 0x00; 147 | 148 | /* Fill the user command */ 149 | user_cmd.q_id = 0; 150 | user_cmd.bit_mask = MASK_PRP1_LIST; 151 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_cq_cmd; 152 | user_cmd.data_buf_size = PAGE_SIZE_I * 16; 153 | user_cmd.data_buf_ptr = addr; 154 | user_cmd.data_dir = 0; 155 | 156 | printf("User Call to send command\n"); 157 | 158 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 159 | if (ret_val < 0) { 160 | printf("Sending of Command Failed!\n"); 161 | } else { 162 | printf("Command sent succesfully\n"); 163 | } 164 | return ret_val; 165 | } 166 | 167 | int irq_for_io_contig(int file_desc, int cq_id, int irq_no, 168 | int cq_flags, uint16_t elems) 169 | { 170 | int ret_val = -1; 171 | struct nvme_64b_send user_cmd; 172 | struct nvme_create_cq create_cq_cmd; 173 | 174 | /* Fill the command for create contig IOSQ*/ 175 | create_cq_cmd.opcode = 0x05; 176 | create_cq_cmd.cqid = cq_id; 177 | create_cq_cmd.qsize = elems; 178 | create_cq_cmd.cq_flags = cq_flags; 179 | create_cq_cmd.irq_no = irq_no; 180 | create_cq_cmd.rsvd1[0] = 0x00; 181 | 182 | /* Fill the user command */ 183 | user_cmd.q_id = 0; 184 | user_cmd.bit_mask = (MASK_PRP1_PAGE); 185 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_cq_cmd; 186 | user_cmd.data_buf_size = 0; 187 | user_cmd.data_buf_ptr = NULL; 188 | user_cmd.data_dir = 0; 189 | 190 | printf("User Call to send command\n"); 191 | 192 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 193 | if (ret_val < 0) { 194 | printf("Sending of Command Failed!\n"); 195 | } else { 196 | printf("Command sent succesfully\n"); 197 | } 198 | return ret_val; 199 | } 200 | 201 | void send_nvme_read(int file_desc, int sq_id, void* addr) 202 | { 203 | int ret_val = -1; 204 | struct nvme_64b_send user_cmd; 205 | struct nvme_user_io nvme_read; 206 | 207 | /* Fill the command for create discontig IOSQ*/ 208 | nvme_read.opcode = 0x02; 209 | nvme_read.flags = 0; 210 | nvme_read.control = 0; 211 | nvme_read.nsid = 1; 212 | nvme_read.metadata = 0; 213 | nvme_read.slba = 0; 214 | nvme_read.nlb = 15; 215 | nvme_read.cmd_flags = 0; 216 | nvme_read.dsm = 0; 217 | nvme_read.ilbrt = 0; 218 | nvme_read.lbat = 0; 219 | nvme_read.lbatm = 0; 220 | 221 | /* Fill the user command */ 222 | user_cmd.q_id = sq_id; 223 | user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST | 224 | MASK_PRP2_PAGE | MASK_PRP2_LIST); 225 | user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_read; 226 | user_cmd.data_buf_size = READ_BUFFER_SIZE; 227 | user_cmd.data_buf_ptr = addr; 228 | user_cmd.data_dir = 0; 229 | 230 | printf("User Call to send command\n"); 231 | 232 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 233 | if (ret_val < 0) { 234 | printf("Sending of Command Failed!\n"); 235 | } else { 236 | printf("Command sent succesfully\n"); 237 | } 238 | } 239 | 240 | void 
send_nvme_read_mb(int file_desc, int sq_id, void* addr, uint32_t meta_id) 241 | { 242 | int ret_val = -1; 243 | struct nvme_64b_send user_cmd; 244 | struct nvme_user_io nvme_read; 245 | 246 | /* Fill the command for create discontig IOSQ*/ 247 | nvme_read.opcode = 0x02; 248 | nvme_read.flags = 0; 249 | nvme_read.control = 0; 250 | nvme_read.nsid = 1; 251 | nvme_read.metadata = 0; 252 | nvme_read.slba = 0; 253 | nvme_read.nlb = 15; 254 | nvme_read.cmd_flags = 0; 255 | nvme_read.dsm = 0; 256 | nvme_read.ilbrt = 0; 257 | nvme_read.lbat = 0; 258 | nvme_read.lbatm = 0; 259 | 260 | /* Fill the user command */ 261 | user_cmd.q_id = sq_id; /* Contig SQ ID */ 262 | user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST | 263 | MASK_PRP2_PAGE | MASK_PRP2_LIST); 264 | user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_read; 265 | user_cmd.data_buf_size = READ_BUFFER_SIZE; 266 | user_cmd.data_buf_ptr = addr; 267 | user_cmd.meta_buf_id = meta_id; 268 | user_cmd.data_dir = 0; 269 | 270 | printf("User Call to send command\n"); 271 | 272 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 273 | if (ret_val < 0) { 274 | printf("Sending of Command Failed!\n"); 275 | } else { 276 | printf("Command sent succesfully\n"); 277 | } 278 | } 279 | 280 | int admin_create_iocq_irq(int fd, int cq_id, int irq_no, int cq_flags) 281 | { 282 | struct nvme_64b_send user_cmd; 283 | struct nvme_create_cq create_cq_cmd; 284 | int ret_val; 285 | 286 | ioctl_prep_cq(fd, cq_id, 20, 1); 287 | 288 | /* Fill the command for create contig IOSQ*/ 289 | create_cq_cmd.opcode = 0x05; 290 | create_cq_cmd.cqid = cq_id; 291 | create_cq_cmd.qsize = 20; 292 | create_cq_cmd.cq_flags = cq_flags; 293 | create_cq_cmd.irq_no = irq_no; 294 | 295 | /* Fill the user command */ 296 | user_cmd.q_id = 0; 297 | user_cmd.bit_mask = (MASK_PRP1_PAGE); 298 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_cq_cmd; 299 | user_cmd.data_buf_size = 0; 300 | user_cmd.data_buf_ptr = NULL; 301 | user_cmd.data_dir = 0; 302 | 303 | ret_val = ioctl(fd, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 304 | if (ret_val < 0) { 305 | printf("Sending Admin Command Create IO Q Failed!\n"); 306 | } else { 307 | printf("Admin Command Create IO Q SUCCESS\n\n"); 308 | } 309 | return ret_val; 310 | } 311 | 312 | 313 | void set_cq_irq(int fd, void *p_dcq_buf) 314 | { 315 | int ret_val; 316 | int cq_id; 317 | int irq_no; 318 | int cq_flags; 319 | int num; 320 | 321 | /* Discontig case */ 322 | cq_id = 20; 323 | irq_no = 1; 324 | cq_flags = 0x2; 325 | num = ioctl_reap_inquiry(fd, 0); 326 | 327 | ret_val = ioctl_prep_cq(fd, cq_id, PAGE_SIZE_I, 0); 328 | if (ret_val < 0) exit(-1); 329 | ret_val = irq_for_io_discontig(fd, cq_id, irq_no, cq_flags, 330 | PAGE_SIZE_I, p_dcq_buf); 331 | ioctl_tst_ring_dbl(fd, 0); 332 | /* contig case */ 333 | cq_id = 21; 334 | irq_no = 2; 335 | cq_flags = 0x3; 336 | 337 | ret_val = ioctl_prep_cq(fd, cq_id, PAGE_SIZE_I, 1); 338 | if (ret_val < 0) exit(-1); 339 | ret_val= irq_for_io_contig(fd, cq_id, irq_no, cq_flags, PAGE_SIZE_I); 340 | 341 | cq_id = 22; 342 | irq_no = 2; 343 | cq_flags = 0x3; 344 | 345 | ret_val = ioctl_prep_cq(fd, cq_id, PAGE_SIZE_I, 1); 346 | if (ret_val < 0) exit(-1); 347 | 348 | ret_val= irq_for_io_contig(fd, cq_id, irq_no, cq_flags, PAGE_SIZE_I); 349 | 350 | cq_id = 23; 351 | irq_no = 3; 352 | cq_flags = 0x3; 353 | 354 | ret_val = ioctl_prep_cq(fd, cq_id, PAGE_SIZE_I, 1); 355 | if (ret_val < 0) exit(-1); 356 | 357 | ret_val= irq_for_io_contig(fd, cq_id, irq_no, cq_flags, PAGE_SIZE_I); 358 | 359 | ioctl_tst_ring_dbl(fd, 0); 360 | } 361 | 362 | 
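/*
 * Admin Create I/O SQ (opcode 0x01) for a physically contiguous queue:
 * sq_flags bit 0 (PC) is set, no user data buffer is attached, and only
 * MASK_PRP1_PAGE is flagged; presumably dnvme fills PRP1 itself from the
 * queue memory reserved earlier via NVME_IOCTL_PREPARE_SQ_CREATION.
 */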
int irq_cr_contig_io_sq(int fd, int sq_id, int assoc_cq_id, uint16_t elems) 363 | { 364 | int ret_val = -1; 365 | struct nvme_64b_send user_cmd; 366 | struct nvme_create_sq create_sq_cmd; 367 | 368 | /* Fill the command for create discontig IOSQ*/ 369 | create_sq_cmd.opcode = 0x01; 370 | create_sq_cmd.sqid = sq_id; 371 | create_sq_cmd.qsize = elems; 372 | create_sq_cmd.cqid = assoc_cq_id; 373 | create_sq_cmd.sq_flags = 0x01; 374 | 375 | /* Fill the user command */ 376 | user_cmd.q_id = 0; 377 | user_cmd.bit_mask = (MASK_PRP1_PAGE); 378 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_sq_cmd; 379 | user_cmd.data_buf_size = 0; 380 | user_cmd.data_buf_ptr = NULL; 381 | user_cmd.data_dir = 2; 382 | 383 | printf("User Call to send command\n"); 384 | 385 | ret_val = ioctl(fd, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 386 | if (ret_val < 0) { 387 | printf("Sending of Command Failed!\n"); 388 | } else { 389 | printf("Command sent succesfully\n"); 390 | } 391 | return ret_val; 392 | } 393 | 394 | int irq_cr_disc_io_sq(int fd, void *addr,int sq_id, 395 | int assoc_cq_id, uint16_t elems) 396 | { 397 | int ret_val = -1; 398 | struct nvme_64b_send user_cmd; 399 | struct nvme_create_sq create_sq_cmd; 400 | 401 | /* Fill the command for create discontig IOSQ*/ 402 | create_sq_cmd.opcode = 0x01; 403 | create_sq_cmd.sqid = sq_id; 404 | create_sq_cmd.qsize = elems; 405 | create_sq_cmd.cqid = assoc_cq_id; 406 | create_sq_cmd.sq_flags = 0x00; 407 | 408 | /* Fill the user command */ 409 | user_cmd.q_id = 0; 410 | user_cmd.bit_mask = MASK_PRP1_LIST; 411 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_sq_cmd; 412 | user_cmd.data_buf_size = PAGE_SIZE_I * 64; 413 | user_cmd.data_buf_ptr = addr; 414 | user_cmd.data_dir = 2; 415 | 416 | ret_val = ioctl(fd, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 417 | if (ret_val < 0) { 418 | printf("Sending of Command Failed!\n"); 419 | } else { 420 | printf("Command sent succesfully\n"); 421 | } 422 | return ret_val; 423 | } 424 | 425 | void set_sq_irq(int fd, void *addr) 426 | { 427 | int sq_id; 428 | int assoc_cq_id; 429 | int num; 430 | int ret_val; 431 | 432 | num = ioctl_reap_inquiry(fd, 0); 433 | sq_id = 31; 434 | assoc_cq_id = 20; 435 | ret_val = ioctl_prep_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I, 0); 436 | if (ret_val < 0) return; 437 | ret_val= irq_cr_disc_io_sq(fd, addr, sq_id, assoc_cq_id, PAGE_SIZE_I); 438 | ioctl_tst_ring_dbl(fd, 0); 439 | /* Contig SQ */ 440 | sq_id = 32; 441 | assoc_cq_id = 21; 442 | ret_val = ioctl_prep_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I, 1); 443 | if (ret_val < 0) return; 444 | ret_val = irq_cr_contig_io_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I); 445 | 446 | sq_id = 33; 447 | assoc_cq_id = 22; 448 | ret_val = ioctl_prep_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I, 1); 449 | if (ret_val < 0) return; 450 | ret_val = irq_cr_contig_io_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I); 451 | 452 | sq_id = 34; 453 | assoc_cq_id = 23; 454 | ret_val = ioctl_prep_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I, 1); 455 | if (ret_val < 0) return; 456 | ret_val = irq_cr_contig_io_sq(fd, sq_id, assoc_cq_id, PAGE_SIZE_I); 457 | 458 | ioctl_tst_ring_dbl(fd, 0); 459 | } 460 | 461 | void test_contig_threeio_irq(int fd, void *addr0, void *addr1, void *addr2) 462 | { 463 | static uint32_t meta_index = 10; 464 | int num, sq_id, assoc_cq_id; 465 | /* Send read command through 3 IO SQ's */ 466 | 467 | /* SQ:CQ 32:21 */ 468 | sq_id = 32; 469 | assoc_cq_id = 21; 470 | test_meta_buf(fd, meta_index); 471 | send_nvme_read_mb(fd, sq_id, addr0, meta_index); 472 | ioctl_tst_ring_dbl(fd, sq_id); 473 | 474 | /* 
SQ:CQ 33:22 */ 475 | sq_id = 33; 476 | assoc_cq_id = 22; 477 | send_nvme_read(fd, sq_id, addr1); 478 | ioctl_tst_ring_dbl(fd, sq_id); 479 | 480 | /* SQ:CQ 34:23 */ 481 | 482 | sq_id = 34; 483 | assoc_cq_id = 23; 484 | num = ioctl_reap_inquiry(fd, assoc_cq_id); 485 | send_nvme_read(fd, sq_id, addr2); 486 | ioctl_tst_ring_dbl(fd, sq_id); 487 | while (ioctl_reap_inquiry(fd, assoc_cq_id) != num + 1); 488 | ioctl_reap_cq(fd, assoc_cq_id, 1, 16, 0); 489 | 490 | meta_index++; 491 | } 492 | 493 | void test_discontig_io_irq(int fd, void *addr) 494 | { 495 | int sq_id, num,assoc_cq_id; 496 | /* SQ:CQ 31:20 */ 497 | sq_id = 31; 498 | assoc_cq_id = 20; 499 | num = ioctl_reap_inquiry(fd, assoc_cq_id); 500 | send_nvme_read(fd, sq_id, addr); 501 | ioctl_tst_ring_dbl(fd, sq_id); 502 | while (ioctl_reap_inquiry(fd, assoc_cq_id) != num + 1); 503 | ioctl_reap_cq(fd, assoc_cq_id, 1, 16, 0); 504 | 505 | } 506 | 507 | void test_irq_delete(int fd) 508 | { 509 | int op_code; 510 | int q_id; 511 | int num; 512 | 513 | /* 514 | * Host software shall delete all associated Submission Queues 515 | * prior to deleting a Completion Queue. 516 | */ 517 | 518 | /* SQ Case */ 519 | op_code = 0x0; 520 | q_id = 32; 521 | num = ioctl_reap_inquiry(fd, 0); 522 | ioctl_delete_ioq(fd, op_code, q_id); 523 | ioctl_tst_ring_dbl(fd, 0); 524 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 525 | 526 | /* SQ Case */ 527 | op_code = 0x0; 528 | q_id = 31; 529 | num = ioctl_reap_inquiry(fd, 0); 530 | ioctl_delete_ioq(fd, op_code, q_id); 531 | ioctl_tst_ring_dbl(fd, 0); 532 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 533 | /* SQ Case */ 534 | op_code = 0x0; 535 | q_id = 33; 536 | num = ioctl_reap_inquiry(fd, 0); 537 | ioctl_delete_ioq(fd, op_code, q_id); 538 | ioctl_tst_ring_dbl(fd, 0); 539 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 540 | 541 | /* SQ Case */ 542 | op_code = 0x0; 543 | q_id = 34; 544 | num = ioctl_reap_inquiry(fd, 0); 545 | ioctl_delete_ioq(fd, op_code, q_id); 546 | ioctl_tst_ring_dbl(fd, 0); 547 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 548 | 549 | /* CQ case */ 550 | op_code = 0x4; 551 | q_id = 21; 552 | num = ioctl_reap_inquiry(fd, 0); 553 | ioctl_delete_ioq(fd, op_code, q_id); 554 | ioctl_tst_ring_dbl(fd, 0); 555 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 556 | 557 | /* CQ case */ 558 | op_code = 0x4; 559 | q_id = 22; 560 | num = ioctl_reap_inquiry(fd, 0); 561 | ioctl_delete_ioq(fd, op_code, q_id); 562 | ioctl_tst_ring_dbl(fd, 0); 563 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 564 | /* CQ case */ 565 | op_code = 0x4; 566 | q_id = 23; 567 | num = ioctl_reap_inquiry(fd, 0); 568 | ioctl_delete_ioq(fd, op_code, q_id); 569 | ioctl_tst_ring_dbl(fd, 0); 570 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 571 | /* CQ case */ 572 | op_code = 0x4; 573 | q_id = 20; 574 | num = ioctl_reap_inquiry(fd, 0); 575 | ioctl_delete_ioq(fd, op_code, q_id); 576 | ioctl_tst_ring_dbl(fd, 0); 577 | while (ioctl_reap_inquiry(fd, 0) != num + 1); 578 | 579 | } 580 | -------------------------------------------------------------------------------- /unittest/test_irq.h: -------------------------------------------------------------------------------- 1 | #include "test_send_cmd.h" 2 | 3 | #define PAGE_SIZE_I 4096 4 | 5 | void test_interrupts(int fd); 6 | void set_irq_msix(int fd); 7 | void set_irq_none(int fd); 8 | void test_loop_irq(int fd); 9 | int admin_create_iocq_irq(int fd, int cq_id, int irq_vec, int cq_flags); 10 | void set_cq_irq(int fd, void *p_dcq_buf); 11 | void set_sq_irq(int fd, void *addr); 12 | int 
irq_for_io_discontig(int file_desc, int cq_id, int irq_no, int cq_flags, 13 | uint16_t elem, void *addr); 14 | int irq_for_io_contig(int file_desc, int cq_id, int irq_no, 15 | int cq_flags, uint16_t elems); 16 | void test_contig_threeio_irq(int fd, void *addr0, void *addr1, void * addr2); 17 | void test_discontig_io_irq(int fd, void *addr); 18 | void test_irq_delete(int fd); 19 | int irq_cr_contig_io_sq(int fd, int sq_id, int assoc_cq_id, uint16_t elems); 20 | int irq_cr_disc_io_sq(int fd, void *addr,int sq_id, 21 | int assoc_cq_id, uint16_t elems); 22 | -------------------------------------------------------------------------------- /unittest/test_metrics.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | 14 | #include "../dnvme_interface.h" 15 | #include "../dnvme_ioctls.h" 16 | 17 | #include "test_metrics.h" 18 | /* 19 | * Functions for the ioctl calls 20 | */ 21 | void ioctl_get_q_metrics(int file_desc, int q_id, int q_type, int size) 22 | { 23 | int ret_val = -1; 24 | uint16_t tmp; 25 | struct nvme_get_q_metrics get_q_metrics; 26 | 27 | printf("User App Calling Get Q Metrics...\n"); 28 | 29 | get_q_metrics.q_id = q_id; 30 | get_q_metrics.type = q_type; 31 | get_q_metrics.nBytes = size; 32 | 33 | if (q_type == 1) { 34 | get_q_metrics.buffer = malloc(sizeof(uint8_t) * 35 | sizeof(struct nvme_gen_sq)); 36 | } else { 37 | get_q_metrics.buffer = malloc(sizeof(uint8_t) * 38 | sizeof(struct nvme_gen_cq)); 39 | } 40 | if (get_q_metrics.buffer == NULL) { 41 | printf("Malloc Failed"); 42 | return; 43 | } 44 | ret_val = ioctl(file_desc, NVME_IOCTL_GET_Q_METRICS, &get_q_metrics); 45 | 46 | if(ret_val < 0) 47 | printf("\tQ metrics could not be checked!\n"); 48 | else { 49 | if (q_type == 1) { 50 | memcpy(&tmp, &get_q_metrics.buffer[0], sizeof(uint16_t)); 51 | printf("\nMetrics for SQ Id = %d\n", tmp); 52 | memcpy(&tmp, &get_q_metrics.buffer[2], sizeof(uint16_t)); 53 | printf("\tCQ Id = %d\n", tmp); 54 | memcpy(&tmp, &get_q_metrics.buffer[4], sizeof(uint16_t)); 55 | printf("\tTail Ptr = %d\n", tmp); 56 | memcpy(&tmp, &get_q_metrics.buffer[6], sizeof(uint16_t)); 57 | printf("\tTail_Ptr_Virt = %d\n", tmp); 58 | memcpy(&tmp, &get_q_metrics.buffer[8], sizeof(uint16_t)); 59 | printf("\tHead Ptr = %d\n", tmp); 60 | memcpy(&tmp, &get_q_metrics.buffer[10], sizeof(uint16_t)); 61 | printf("\tElements = %d\n", tmp); 62 | } else { 63 | memcpy(&tmp, &get_q_metrics.buffer[0], sizeof(uint16_t)); 64 | printf("\nMetrics for CQ Id = %d\n", tmp); 65 | memcpy(&tmp, &get_q_metrics.buffer[2], sizeof(uint16_t)); 66 | printf("\tTail_Ptr = %d\n", tmp); 67 | memcpy(&tmp, &get_q_metrics.buffer[4], sizeof(uint16_t)); 68 | printf("\tHead Ptr = %d\n", tmp); 69 | memcpy(&tmp, &get_q_metrics.buffer[6], sizeof(uint16_t)); 70 | printf("\tElements = %d\n", tmp); 71 | memcpy(&tmp, &get_q_metrics.buffer[8], sizeof(uint16_t)); 72 | printf("\tIrq Enabled = %d\n", tmp); 73 | } 74 | } 75 | free(get_q_metrics.buffer); 76 | } 77 | 78 | void test_drv_metrics(int file_desc) 79 | { 80 | struct metrics_driver get_drv_metrics; 81 | int ret_val = -1; 82 | 83 | ret_val = ioctl(file_desc, NVME_IOCTL_GET_DRIVER_METRICS, &get_drv_metrics); 84 | if (ret_val < 0) { 85 | printf("\tDrv metrics failed!\n"); 86 | } 87 | printf("\nDrv Version = 0x%X\n", get_drv_metrics.driver_version); 88 | printf("Api Version = 0x%X\n", get_drv_metrics.api_version); 89 | } 90 | 
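/*
 * Minimal usage sketch (not part of the original suite): dump metrics for the
 * admin queues using the conventions noted in test_metrics.h (q_id 0 = admin,
 * q_type 1 = SQ, q_type 0 = CQ). Buffer sizes are assumed to follow the
 * nvme_gen_sq/nvme_gen_cq structs from dnvme_interface.h.
 */
void dump_admin_q_metrics(int file_desc)
{
    ioctl_get_q_metrics(file_desc, 0, 1, sizeof(struct nvme_gen_sq)); /* admin SQ */
    ioctl_get_q_metrics(file_desc, 0, 0, sizeof(struct nvme_gen_cq)); /* admin CQ */
}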
91 | void test_dev_metrics(int file_desc) 92 | { 93 | struct public_metrics_dev get_dev_metrics; 94 | int ret_val = -1; 95 | 96 | ret_val = ioctl(file_desc, NVME_IOCTL_GET_DEVICE_METRICS, &get_dev_metrics); 97 | if (ret_val < 0) { 98 | printf("\tDev metrics failed!\n"); 99 | } 100 | printf("\nIRQ Type = %d (0=S/1=M/2=X/3=N)", get_dev_metrics.irq_active.irq_type); 101 | printf("\nIRQ No's = %d\n", get_dev_metrics.irq_active.num_irqs); 102 | } 103 | -------------------------------------------------------------------------------- /unittest/test_metrics.h: -------------------------------------------------------------------------------- 1 | 2 | /* 3 | * q_type = 0 => CQ 4 | * q_type = 1 => SQ 5 | * g_id = 0 => Admin 6 | */ 7 | 8 | #define READ_BUFFER_SIZE (2 * 4096) 9 | #define DISCONTIG_IO_SQ_SIZE (1023 * 4096) 10 | #define DISCONTIG_IO_CQ_SIZE (255 * 4096) 11 | 12 | void ioctl_get_q_metrics(int file_desc, int q_id, int q_type, int size); 13 | void test_drv_metrics(int file_desc); 14 | int ioctl_prep_sq(int file_desc, uint16_t sq_id, uint16_t cq_id, uint16_t elem, uint8_t contig); 15 | int ioctl_prep_cq(int file_desc, uint16_t cq_id, uint16_t elem, uint8_t contig); 16 | void ioctl_tst_ring_dbl(int file_desc, int sq_id); 17 | void ioctl_create_prp_one_page(int file_desc); 18 | void ioctl_create_prp_less_than_one_page(int file_desc); 19 | void ioctl_create_prp_more_than_two_page(int file_desc); 20 | void ioctl_create_list_of_prp(int file_desc); 21 | void ioctl_create_fill_list_of_prp(int file_desc); 22 | 23 | void ioctl_create_contig_iocq(int file_desc); 24 | void ioctl_create_discontig_iocq(int file_desc, void *addr); 25 | void ioctl_create_contig_iosq(int file_desc); 26 | void ioctl_create_discontig_iosq(int file_desc, void *addr); 27 | void ioctl_delete_ioq(int file_desc, uint8_t opcode, uint16_t qid); 28 | void ioctl_send_identify_cmd(int file_desc, void *addr); 29 | void ioctl_send_nvme_write(int file_desc, void *addr); 30 | void ioctl_send_nvme_write_using_metabuff(int file_desc, uint32_t meta_id, void* addr); 31 | void ioctl_send_nvme_read(int file_desc, void *addr); 32 | void ioctl_send_nvme_read_using_metabuff(int file_desc, void* addr, uint32_t meta_id); 33 | 34 | int ioctl_reap_inquiry(int file_desc, int cq_id); 35 | void ioctl_reap_cq(int file_desc, int cq_id, int elements, int size, int display); 36 | void ioctl_disable_ctrl(int file_desc, enum nvme_state new_state); 37 | void ioctl_enable_ctrl(int file_desc); 38 | void set_admn(int file_desc); 39 | void ioctl_create_acq(int file_desc); 40 | void ioctl_create_asq(int file_desc); 41 | void test_meta(int file_desc, int log); 42 | uint32_t test_meta_buf(int file_desc, uint32_t); 43 | void ioctl_dump(int file_desc, char *tmpfile); 44 | void display_cq_data(unsigned char *cq_buffer, int reap_ele); 45 | void test_admin(int file_desc); 46 | void ioctl_write_data(int file_desc); 47 | void test_irq_review568(int fd); 48 | void test_dev_metrics(int file_desc); 49 | -------------------------------------------------------------------------------- /unittest/test_rng_sqxdbl.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | 14 | #include "../dnvme_interface.h" 15 | #include "../dnvme_ioctls.h" 16 | 17 | #include "test_metrics.h" 18 | 19 | void ioctl_tst_ring_dbl(int file_desc, int sq_id) 20 | { 21 | int ret_val = -1; 22 | 23 | printf("\n\tRequested to 
Ring Doorbell of SQ ID = %d\n", sq_id); 24 | 25 | ret_val = ioctl(file_desc, NVME_IOCTL_RING_SQ_DOORBELL, sq_id); 26 | if(ret_val < 0) { 27 | printf("\n\t\tRing Doorbell Failed!\n"); 28 | exit (-1); 29 | } 30 | else { 31 | printf("\n\t\tRing Doorbell SUCCESS\n"); 32 | } 33 | 34 | } 35 | -------------------------------------------------------------------------------- /unittest/test_send_cmd.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | #include 14 | 15 | #define _TNVME_H_ 16 | 17 | #include "../dnvme_interface.h" 18 | #include "../dnvme_ioctls.h" 19 | #include "test_send_cmd.h" 20 | #include "test_metrics.h" 21 | 22 | void ioctl_create_prp_more_than_two_page(int file_desc) 23 | { 24 | int ret_val = -1; 25 | struct nvme_64b_send user_cmd; 26 | void *addr = (void *) malloc(8200); 27 | if (addr == NULL) { 28 | printf("Malloc Failed"); 29 | return; 30 | } 31 | user_cmd.q_id = 0; 32 | user_cmd.bit_mask = 0; 33 | user_cmd.cmd_buf_ptr = NULL; 34 | user_cmd.data_buf_size = 8200; /* more than 2 page */ 35 | user_cmd.data_buf_ptr = addr; 36 | 37 | 38 | printf("User Call to Create PRP more than one page:\n"); 39 | 40 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 41 | if (ret_val < 0) { 42 | printf("Creation of PRP Failed!\n"); 43 | } else { 44 | printf("PRP Creation SUCCESS\n"); 45 | } 46 | free(addr); 47 | } 48 | 49 | void ioctl_create_prp_less_than_one_page(int file_desc) 50 | { 51 | int ret_val = -1; 52 | struct nvme_64b_send user_cmd; 53 | void *addr = (void *) malloc(95); 54 | if (addr == NULL) { 55 | printf("Malloc Failed"); 56 | return; 57 | } 58 | user_cmd.q_id = 0; 59 | user_cmd.bit_mask = 0; 60 | user_cmd.cmd_buf_ptr = NULL; 61 | user_cmd.data_buf_size = 95; 62 | user_cmd.data_buf_ptr = addr; 63 | 64 | 65 | printf("User Call to Create PRP less than one page:\n"); 66 | 67 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 68 | if (ret_val < 0) { 69 | printf("Creation of PRP Failed!\n"); 70 | } else { 71 | printf("PRP Creation SUCCESS\n"); 72 | } 73 | free(addr); 74 | } 75 | 76 | void ioctl_create_prp_one_page(int file_desc) 77 | { 78 | int ret_val = -1; 79 | struct nvme_64b_send user_cmd; 80 | void *addr = (void *) malloc(4096); 81 | if (addr == NULL) { 82 | printf("Malloc Failed"); 83 | return; 84 | } 85 | user_cmd.q_id = 0; 86 | user_cmd.bit_mask = 0; 87 | user_cmd.cmd_buf_ptr = NULL; 88 | user_cmd.data_buf_size = 4096; 89 | user_cmd.data_buf_ptr = addr; 90 | 91 | printf("User Call to Create PRP single page:\n"); 92 | 93 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 94 | if (ret_val < 0) { 95 | printf("Creation of PRP Failed!\n"); 96 | } else { 97 | printf("PRP Creation SUCCESS\n"); 98 | } 99 | free(addr); 100 | } 101 | 102 | void ioctl_create_list_of_prp(int file_desc) 103 | { 104 | int ret_val = -1; 105 | struct nvme_64b_send user_cmd; 106 | /* 1 page of PRP List filled completley and 1 more page 107 | * of PRP List containg only one entry */ 108 | void *addr = (void *) malloc((512 * 4096) + 4096); 109 | if (addr == NULL) { 110 | printf("Malloc Failed"); 111 | return; 112 | } 113 | user_cmd.q_id = 0; 114 | user_cmd.bit_mask = 0; 115 | user_cmd.cmd_buf_ptr = NULL; 116 | user_cmd.data_buf_size = (512 * 4096) + 4096; 117 | user_cmd.data_buf_ptr = addr; 118 | printf("User Call to Create Lists of PRP's\n"); 119 | 120 | ret_val = 
ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 121 | if (ret_val < 0) { 122 | printf("Creation of PRP Failed!\n"); 123 | } else { 124 | printf("PRP Creation SUCCESS\n"); 125 | } 126 | free(addr); 127 | } 128 | 129 | void ioctl_create_fill_list_of_prp(int file_desc) 130 | { 131 | int ret_val = -1; 132 | struct nvme_64b_send user_cmd; 133 | /* 2 pages of PRP Lists filled completley */ 134 | void *addr = (void *) malloc(1023 * 4096); 135 | if (addr == NULL) { 136 | printf("Malloc Failed"); 137 | return; 138 | } 139 | user_cmd.q_id = 0; 140 | user_cmd.bit_mask = 0; 141 | user_cmd.cmd_buf_ptr = NULL; 142 | user_cmd.data_buf_size = 1023 * 4096; 143 | user_cmd.data_buf_ptr = addr; 144 | 145 | printf("User Call to Create Lists of PRP's\n"); 146 | 147 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 148 | if (ret_val < 0) { 149 | printf("Creation of PRP Failed!\n"); 150 | } else { 151 | printf("PRP Creation SUCCESS\n"); 152 | } 153 | free(addr); 154 | } 155 | 156 | /* CMD to create discontig IOSQueue spanning multiple pages of PRP lists*/ 157 | void ioctl_create_discontig_iosq(int file_desc, void *addr) 158 | { 159 | int ret_val = -1; 160 | struct nvme_64b_send user_cmd; 161 | struct nvme_create_sq create_sq_cmd; 162 | 163 | /* Fill the command for create discontig IOSQ*/ 164 | create_sq_cmd.opcode = 0x01; 165 | create_sq_cmd.sqid = 0x01; 166 | create_sq_cmd.qsize = 65472; 167 | create_sq_cmd.cqid = 0x02; 168 | create_sq_cmd.sq_flags = 0x00; 169 | create_sq_cmd.rsvd1[0] = 0x00; 170 | 171 | /* Fill the user command */ 172 | user_cmd.q_id = 0; 173 | user_cmd.bit_mask = MASK_PRP1_LIST; 174 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_sq_cmd; 175 | user_cmd.data_buf_size = DISCONTIG_IO_SQ_SIZE; 176 | user_cmd.data_buf_ptr = addr; 177 | user_cmd.data_dir = 2; 178 | 179 | printf("User Call to send command\n"); 180 | 181 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 182 | if (ret_val < 0) { 183 | printf("Sending of Command Failed!\n"); 184 | } else { 185 | printf("Command sent succesfully\n"); 186 | } 187 | } 188 | 189 | /* CMD to delete IO Queue */ 190 | void ioctl_delete_ioq(int file_desc, uint8_t opcode, uint16_t qid) 191 | { 192 | int ret_val = -1; 193 | struct nvme_64b_send user_cmd; 194 | struct nvme_del_q del_q_cmd; 195 | /* Fill the command for create discontig IOSQ*/ 196 | del_q_cmd.opcode = opcode; 197 | del_q_cmd.qid = qid; 198 | del_q_cmd.rsvd1[0] = 0x00; 199 | 200 | /* Fill the user command */ 201 | user_cmd.q_id = 0; 202 | user_cmd.bit_mask = 0; 203 | user_cmd.cmd_buf_ptr = (u_int8_t *) &del_q_cmd; 204 | user_cmd.data_buf_size = 0; 205 | user_cmd.data_buf_ptr = NULL; 206 | user_cmd.data_dir = 2; 207 | 208 | printf("User Call to send command\n"); 209 | 210 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 211 | if (ret_val < 0) { 212 | printf("Sending of Command Failed!\n"); 213 | } else { 214 | printf("Command sent succesfully\n"); 215 | } 216 | } 217 | 218 | /* CMD to create contig IOSQueue */ 219 | void ioctl_create_contig_iosq(int file_desc) 220 | { 221 | int ret_val = -1; 222 | struct nvme_64b_send user_cmd; 223 | struct nvme_create_sq create_sq_cmd; 224 | 225 | /* Fill the command for create discontig IOSQ*/ 226 | create_sq_cmd.opcode = 0x01; 227 | create_sq_cmd.sqid = 0x02; 228 | create_sq_cmd.qsize = 256; 229 | create_sq_cmd.cqid = 0x01; 230 | create_sq_cmd.sq_flags = 0x01; 231 | create_sq_cmd.rsvd1[0] = 0x00; 232 | 233 | /* Fill the user command */ 234 | user_cmd.q_id = 0; 235 | user_cmd.bit_mask = (MASK_PRP1_PAGE); 236 | 
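    /* Contiguous SQ: no user data buffer is attached; only PRP1 (a single page
     * pointer, presumably supplied by the driver from the prepared queue memory)
     * is flagged in the bit mask. */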
user_cmd.cmd_buf_ptr = (u_int8_t *) &create_sq_cmd; 237 | user_cmd.data_buf_size = 0; 238 | user_cmd.data_buf_ptr = NULL; 239 | user_cmd.data_dir = 2; 240 | 241 | printf("User Call to send command\n"); 242 | 243 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 244 | if (ret_val < 0) { 245 | printf("Sending of Command Failed!\n"); 246 | } else { 247 | printf("Command sent succesfully\n"); 248 | } 249 | } 250 | 251 | /* CMD to create contig IOCQueue spanning multiple pages of PRP lists*/ 252 | void ioctl_create_contig_iocq(int file_desc) 253 | { 254 | int ret_val = -1; 255 | struct nvme_64b_send user_cmd; 256 | struct nvme_create_cq create_cq_cmd; 257 | 258 | /* Fill the command for create contig IOSQ*/ 259 | create_cq_cmd.opcode = 0x05; 260 | create_cq_cmd.cqid = 0x01; 261 | create_cq_cmd.qsize = 20; 262 | create_cq_cmd.cq_flags = 0x01; 263 | create_cq_cmd.rsvd1[0] = 0x00; 264 | /* Fill the user command */ 265 | user_cmd.q_id = 0; 266 | user_cmd.bit_mask = (MASK_PRP1_PAGE); 267 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_cq_cmd; 268 | user_cmd.data_buf_size = 0; 269 | user_cmd.data_buf_ptr = NULL; 270 | user_cmd.data_dir = 0; 271 | 272 | printf("User Call to send command\n"); 273 | 274 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 275 | if (ret_val < 0) { 276 | printf("Sending of Command Failed!\n"); 277 | } else { 278 | printf("Command sent succesfully\n"); 279 | } 280 | } 281 | 282 | /* CMD to create discontig IOCQ:2 spanning multiple pages of PRP lists*/ 283 | void ioctl_create_discontig_iocq(int file_desc, void *addr) 284 | { 285 | int ret_val = -1; 286 | struct nvme_64b_send user_cmd; 287 | struct nvme_create_cq create_cq_cmd; 288 | 289 | /* Fill the command for create discontig IOSQ*/ 290 | create_cq_cmd.opcode = 0x05; 291 | create_cq_cmd.cqid = 0x02; 292 | create_cq_cmd.qsize = 65280; 293 | create_cq_cmd.cq_flags = 0x00; 294 | create_cq_cmd.rsvd1[0] = 0x00; 295 | 296 | /* Fill the user command */ 297 | user_cmd.q_id = 0; 298 | user_cmd.bit_mask = MASK_PRP1_LIST; 299 | user_cmd.cmd_buf_ptr = (u_int8_t *) &create_cq_cmd; 300 | user_cmd.data_buf_size = DISCONTIG_IO_CQ_SIZE; 301 | user_cmd.data_buf_ptr = addr; 302 | user_cmd.data_dir = 0; 303 | 304 | printf("User Call to send command\n"); 305 | 306 | ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd); 307 | if (ret_val < 0) { 308 | printf("Sending of Command Failed!\n"); 309 | } else { 310 | printf("Command sent succesfully\n"); 311 | } 312 | } 313 | 314 | 315 | /* CMD to send Identify command*/ 316 | void ioctl_send_identify_cmd(int file_desc, void* addr) 317 | { 318 | int ret_val = -1; 319 | struct nvme_64b_send user_cmd; 320 | struct nvme_identify nvme_identify; 321 | uint32_t nsid, cns; 322 | 323 | /* Writing 0's to first page */ 324 | memset(addr, 0, READ_BUFFER_SIZE/2); 325 | 326 | printf("\nEnter NSID:\n"); 327 | fflush(stdout); 328 | scanf ("%u", &nsid); 329 | printf("\nController:Namespace (1:0)\n"); 330 | fflush(stdout); 331 | scanf ("%u", &cns); 332 | 333 | /* Fill the command for create discontig IOSQ*/ 334 | nvme_identify.opcode = 0x06; 335 | nvme_identify.nsid = nsid; 336 | nvme_identify.cns = cns; 337 | 338 | /* Fill the user command */ 339 | user_cmd.q_id = 0; 340 | user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP2_PAGE); 341 | user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_identify; 342 | user_cmd.data_buf_size = 4096; 343 | user_cmd.data_buf_ptr = addr; 344 | user_cmd.data_dir = 0; 345 | 346 | printf("User Call to send command\n"); 347 | 348 | ret_val = ioctl(file_desc, 
349 |     if (ret_val < 0) {
350 |         printf("Sending of Command Failed!\n");
351 |     } else {
352 |         printf("Command sent successfully\n");
353 |     }
354 | }
355 | 
356 | /* CMD to send NVME IO write command */
357 | void ioctl_send_nvme_write(int file_desc, void *addr)
358 | {
359 |     int ret_val = -1;
360 |     struct nvme_64b_send user_cmd;
361 |     struct nvme_user_io nvme_write;
362 |     uint32_t nsid;
363 |     uint64_t slba;
364 | 
365 |     printf("\nEnter NSID:\n");
366 |     fflush(stdout);
367 |     scanf ("%u", &nsid);
368 |     printf("\nEnter SLBA:\n");
369 |     fflush(stdout);
370 |     scanf ("%lu", &slba);
371 | 
372 |     /* Fill the NVME write command */
373 |     nvme_write.opcode = 0x01;
374 |     nvme_write.flags = 0;
375 |     nvme_write.control = 0;
376 |     nvme_write.nsid = nsid;
377 |     nvme_write.rsvd2[0] = 0;
378 |     nvme_write.metadata = 0;
379 |     nvme_write.prp1 = 0;
380 |     nvme_write.prp2 = 0;
381 |     nvme_write.slba = slba;
382 |     nvme_write.nlb = 15;
383 |     nvme_write.cmd_flags = 0;
384 |     nvme_write.dsm = 0;
385 |     nvme_write.ilbrt = 0;
386 |     nvme_write.lbat = 0;
387 |     nvme_write.lbatm = 0;
388 | 
389 |     /* Fill the user command */
390 |     user_cmd.q_id = 1;
391 |     user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST |
392 |         MASK_PRP2_PAGE | MASK_PRP2_LIST);
393 |     user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_write;
394 |     user_cmd.data_buf_size = READ_BUFFER_SIZE;
395 |     user_cmd.data_buf_ptr = addr;
396 |     user_cmd.data_dir = 2;
397 | 
398 |     printf("User Call to send command\n");
399 | 
400 |     ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd);
401 |     if (ret_val < 0) {
402 |         printf("Sending of Command Failed!\n");
403 |     } else {
404 |         printf("Command sent successfully\n");
405 |     }
406 | 
407 | }
408 | 
409 | /* CMD to send NVME IO read command */
410 | void ioctl_send_nvme_read(int file_desc, void* addr)
411 | {
412 |     int ret_val = -1;
413 |     struct nvme_64b_send user_cmd;
414 |     struct nvme_user_io nvme_read;
415 |     uint32_t nsid;
416 |     uint64_t slba;
417 | 
418 |     printf("\nEnter NSID:\n");
419 |     fflush(stdout);
420 |     scanf ("%u", &nsid);
421 |     printf("\nEnter SLBA:\n");
422 |     fflush(stdout);
423 |     scanf ("%lu", &slba);
424 | 
425 |     /* Fill the NVME read command */
426 |     nvme_read.opcode = 0x02;
427 |     nvme_read.flags = 0;
428 |     nvme_read.control = 0;
429 |     nvme_read.nsid = nsid;
430 |     nvme_read.metadata = 0;
431 |     nvme_read.slba = slba;
432 |     nvme_read.nlb = 15;
433 |     nvme_read.cmd_flags = 0;
434 |     nvme_read.dsm = 0;
435 |     nvme_read.ilbrt = 0;
436 |     nvme_read.lbat = 0;
437 |     nvme_read.lbatm = 0;
438 | 
439 |     /* Fill the user command */
440 |     user_cmd.q_id = 1;
441 |     user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST |
442 |         MASK_PRP2_PAGE | MASK_PRP2_LIST);
443 |     user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_read;
444 |     user_cmd.data_buf_size = READ_BUFFER_SIZE;
445 |     user_cmd.data_buf_ptr = addr;
446 |     user_cmd.data_dir = 0;
447 | 
448 |     printf("User Call to send command\n");
449 | 
450 |     ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd);
451 |     if (ret_val < 0) {
452 |         printf("Sending of Command Failed!\n");
453 |     } else {
454 |         printf("Command sent successfully\n");
455 |     }
456 | 
457 | }
458 | 
459 | /* CMD to send NVME IO write command using metabuff */
460 | void ioctl_send_nvme_write_using_metabuff(int file_desc, uint32_t meta_id, void* addr)
461 | {
462 |     int ret_val = -1;
463 |     struct nvme_64b_send user_cmd;
464 |     struct nvme_user_io nvme_write;
465 |     uint32_t nsid;
466 |     uint64_t slba;
467 | 
468 |     printf("\nEnter NSID:\n");
469 |     fflush(stdout);
470 |     scanf ("%u", &nsid);
471 |     printf("\nEnter SLBA:\n");
472 |     fflush(stdout);
473 |     scanf ("%lu", &slba);
474 |     /* Fill the NVME write command */
475 |     nvme_write.opcode = 0x01;
476 |     nvme_write.flags = 0;
477 |     nvme_write.control = 0;
478 |     nvme_write.nsid = nsid;
479 |     nvme_write.rsvd2[0] = 0;
480 |     nvme_write.metadata = 0;
481 |     nvme_write.prp1 = 0;
482 |     nvme_write.prp2 = 0;
483 |     nvme_write.slba = slba;
484 |     nvme_write.nlb = 15;
485 |     nvme_write.cmd_flags = 0;
486 |     nvme_write.dsm = 0;
487 |     nvme_write.ilbrt = 0;
488 |     nvme_write.lbat = 0;
489 |     nvme_write.lbatm = 0;
490 | 
491 |     /* Fill the user command */
492 |     user_cmd.q_id = 2; /* Contig SQ ID */
493 |     user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST |
494 |         MASK_PRP2_PAGE | MASK_PRP2_LIST | MASK_MPTR);
495 |     user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_write;
496 |     user_cmd.data_buf_size = READ_BUFFER_SIZE;
497 |     user_cmd.data_buf_ptr = addr;
498 |     user_cmd.meta_buf_id = meta_id;
499 |     user_cmd.data_dir = 2;
500 | 
501 |     printf("User Call to send command\n");
502 | 
503 |     ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd);
504 |     if (ret_val < 0) {
505 |         printf("Sending of Command Failed!\n");
506 |     } else {
507 |         printf("Command sent successfully\n");
508 |     }
509 | 
510 | }
511 | 
512 | /* CMD to send NVME IO read command using metabuff through contig Queue */
513 | void ioctl_send_nvme_read_using_metabuff(int file_desc, void* addr, uint32_t meta_id)
514 | {
515 |     int ret_val = -1;
516 |     struct nvme_64b_send user_cmd;
517 |     struct nvme_user_io nvme_read;
518 |     uint32_t nsid;
519 |     uint64_t slba;
520 | 
521 |     printf("\nEnter NSID:\n");
522 |     fflush(stdout);
523 |     scanf ("%u", &nsid);
524 |     printf("\nEnter SLBA:\n");
525 |     fflush(stdout);
526 |     scanf ("%lu", &slba);
527 |     /* Fill the NVME read command */
528 |     nvme_read.opcode = 0x02;
529 |     nvme_read.flags = 0;
530 |     nvme_read.control = 0;
531 |     nvme_read.nsid = nsid;
532 |     nvme_read.metadata = 0;
533 |     nvme_read.slba = slba;
534 |     nvme_read.nlb = 15;
535 |     nvme_read.cmd_flags = 0;
536 |     nvme_read.dsm = 0;
537 |     nvme_read.ilbrt = 0;
538 |     nvme_read.lbat = 0;
539 |     nvme_read.lbatm = 0;
540 | 
541 |     /* Fill the user command */
542 |     user_cmd.q_id = 2; /* Contig SQ ID */
543 |     user_cmd.bit_mask = (MASK_PRP1_PAGE | MASK_PRP1_LIST |
544 |         MASK_PRP2_PAGE | MASK_PRP2_LIST);
545 |     user_cmd.cmd_buf_ptr = (u_int8_t *) &nvme_read;
546 |     user_cmd.data_buf_size = READ_BUFFER_SIZE;
547 |     user_cmd.data_buf_ptr = addr;
548 |     user_cmd.meta_buf_id = meta_id;
549 |     user_cmd.data_dir = 0;
550 | 
551 |     printf("User Call to send command\n");
552 | 
553 |     ret_val = ioctl(file_desc, NVME_IOCTL_SEND_64B_CMD, &user_cmd);
554 |     if (ret_val < 0) {
555 |         printf("Sending of Command Failed!\n");
556 |     } else {
557 |         printf("Command sent successfully\n");
558 |     }
559 | 
560 | }
561 | 
--------------------------------------------------------------------------------
/unittest/test_send_cmd.h:
--------------------------------------------------------------------------------
1 | /**
2 |  * Specific structure for NVME Identify command
3 |  */
4 | struct nvme_identify {
5 |     uint8_t opcode;
6 |     uint8_t flags;
7 |     uint16_t command_id;
8 |     uint32_t nsid;
9 |     uint64_t rsvd2[2];
10 |     uint64_t prp1;
11 |     uint64_t prp2;
12 |     uint32_t cns;
13 |     uint32_t rsvd11[5];
14 | };
15 | 
16 | /**
17 |  * Specific structure for NVME IO Read/Write command
18 |  */
19 | struct nvme_user_io {
20 |     uint8_t opcode;
21 |     uint8_t flags;
22 |     uint16_t control;
23 |     uint32_t nsid;
24 |     uint64_t rsvd2[1];
25 |     uint64_t metadata;
26 |     uint64_t prp1;
27 |     uint64_t prp2;
28 |     uint64_t slba;
29 |     uint16_t nlb;
30 |     uint16_t cmd_flags;
31 |     uint32_t dsm;
32 |     uint32_t ilbrt;
33 |     uint16_t lbat;
34 |     uint16_t lbatm;
35 | };
36 | 
37 | 
--------------------------------------------------------------------------------
/version.h:
--------------------------------------------------------------------------------
1 | /*
2 |  * NVM Express Compliance Suite
3 |  * Copyright (c) 2011, Intel Corporation.
4 |  *
5 |  * This program is free software; you can redistribute it and/or modify it
6 |  * under the terms and conditions of the GNU General Public License,
7 |  * version 2, as published by the Free Software Foundation.
8 |  *
9 |  * This program is distributed in the hope it will be useful, but WITHOUT
10 |  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
11 |  * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
12 |  * more details.
13 |  *
14 |  * You should have received a copy of the GNU General Public License along with
15 |  * this program; if not, write to the Free Software Foundation, Inc.,
16 |  * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
17 |  */
18 | 
19 | /*
20 |  * Specify the software release version numbers on their own line for use with
21 |  * awk and the creation of RPMs while also being compatible with building
22 |  * the binaries via the Makefile with *.cpp source code.
23 |  * If the line numbers within this file change as a result of editing, then
24 |  * you must modify both the Makefile and build.sh for awk parsing. Additionally,
25 |  * test this modification by running the Makefile rpm target.
26 |  */
27 | 
28 | #define VER_MAJOR \
29 | 2
30 | 
31 | #define VER_MINOR \
32 | 14
33 | 
--------------------------------------------------------------------------------
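
Usage note (not a file from the repository): the helpers in unittest/test_send_cmd.c above are driven from the unit-test front end against the dnvme character device. Below is a minimal, illustrative sketch of such a caller. The device-node path /dev/nvme0, the extern prototype, and the buffer size are assumptions made for illustration only; the real prototypes and READ_BUFFER_SIZE are defined in unittest headers not shown here.

    /* Illustrative usage sketch only -- not part of the repository. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Assumed prototype; the repo declares this in one of its unittest headers. */
    extern void ioctl_send_identify_cmd(int file_desc, void *addr);

    int main(void)
    {
        /* Device node name is an assumption; the actual node is created per
         * etc/55-dnvme.rules once the dnvme module is loaded. */
        int file_desc = open("/dev/nvme0", O_RDWR);
        if (file_desc < 0) {
            perror("open");
            return 1;
        }

        /* The buffer must cover the memset of READ_BUFFER_SIZE/2 inside
         * ioctl_send_identify_cmd; 16 pages is a conservative guess here. */
        void *buf = malloc(16 * 4096);
        if (buf == NULL) {
            close(file_desc);
            return 1;
        }

        ioctl_send_identify_cmd(file_desc, buf);   /* prompts for NSID and CNS */

        free(buf);
        close(file_desc);
        return 0;
    }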