├── LICENSE
├── README.md
├── pic_for_readme
│   ├── .gitkeep
│   ├── add_hardware_function.png
│   ├── add_opencv_lib.png
│   ├── block_automation_result.png
│   ├── build_vitis_platform.png
│   ├── clk_rst_connection.png
│   ├── clock_settings.png
│   ├── concat_connection.png
│   ├── enable_s_axi_hp0_fpd.png
│   ├── hp_removed.png
│   ├── import_sources.png
│   ├── intc_settings.png
│   ├── netname.png
│   ├── petalinux_rootfs.png
│   ├── run_xsa_tcl.png
│   ├── set_s_axi_hp0_fpd_options.png
│   ├── set_s_axi_hpc0_fpd_options.png
│   ├── ssh_settings.png
│   ├── test_result.PNG
│   ├── vitis_acceleration_flow.PNG
│   ├── vitis_acceleration_flow.image
│   ├── vitis_include_settings.png
│   ├── vitis_launch.png
│   ├── vitis_lib_settings.png
│   ├── vitis_linux_config.png
│   ├── vivado_platform_connection.png
│   └── vivado_project_summary.png
├── ref_files
│   ├── dynamic_postlink.tcl
│   ├── opencv
│   │   └── opencv_3.4.3.bbappend
│   ├── petalinuxbsp.conf
│   ├── src
│   │   ├── dputils.cpp
│   │   ├── dputils.h
│   │   ├── main.cpp
│   │   └── prj_config
│   ├── system-user.dtsi
│   └── xsa.tcl
└── version.md
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Vitis AI Custom Platform Development
2 | 1. Introduction to Vitis Acceleration Platform
3 | 2. Create the Vivado Hardware Component
4 | 3. Configure Platform Interface Properties and Generate XSA
5 | 4. Create the PetaLinux Software Component
6 | 5. Create the Vitis Platform
7 | 6. Prepare for the DPU Kernel
8 | 7. Create and Build a Vitis Application
9 | 8. Prepare the Network Deployment File
10 | 9. Run Application on Board
11 |
12 | ## Introduction to Vitis Acceleration Platform
13 | The Vivado Design Suite is used to generate XSA containing a few additional IP blocks and metadata to support kernel connectivity. The following figure shows the acceleration kernel application development flow:
14 | 
15 | For a Vitis AI platform, the DPU is integrated as an RTL kernel. To create a Vitis AI platform on MPSoC and run a ConvNet on it, you need to create a Vivado hardware platform, a PetaLinux software platform, and a Vitis platform that contains both. You then create a Vitis application based on this Vitis platform, import the DPU kernel and the ARM deployment code, and build the Vitis application into a combined HW/SW design. Vitis generates an SD card folder as output, which contains all the files needed to boot the target board. To cross-compile the application and run it on the board you also need the Vitis AI library and DNNDK; install them on both the host and the target board.
16 |
17 | ## Create the Vivado Hardware Component and Generate XSA
18 | 1. Source /settings64.sh, then launch Vivado by typing ```vivado``` in the console.
19 | 2. Create a Vivado project named zcu102_custom_platform.
20 | a) Select ***File->Project->New***.
21 | b) Click ***Next***.
22 | c) In Project Name dialog set Project name to ```zcu102_custom_platform```.
23 | d) Click ***Next***.
24 | e) Leave all settings at their defaults until you get to the Default Part dialog.
25 | f) Select ***Boards tab*** and then select ***Zynq UltraScale+ ZCU102 Evaluation Board***
26 | g) Click ***Next***, and your project summary should look like below:
27 | 
28 | h) Then click ***Finish***
29 | 3. Create a block design named system.
30 | a) Select Create Block Design.
31 | b) Change the design name to ```system```.
32 | c) Click ***OK***.
33 | 4. Add MPSoC IP and run block automation to configure it.
34 | a) Right click Diagram view and select ***Add IP***.
35 | b) Search for ```zynq``` and then double-click the ***Zynq UltraScale+ MPSoC*** from the IP search results.
36 | c) Click the ***Run Block Automation*** link to apply the board presets.
In the Run Block Automation dialog, ensure the following are checked:
38 | - All Automation
39 | - Zynq_ultra_ps_e_0
40 | - Apply Board Presets
41 |
42 | d) Click ***OK***. You should get MPSoC block configured like below:
43 | 
44 |
45 | ***Note: At this stage, the Vivado block automation has added a Zynq UltraScale+ MPSoC block and applied all board presets for the ZCU102. Add the IP blocks and metadata to create a base hardware design that supports acceleration kernels.***
46 |
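If you prefer scripting steps 2-4 instead of clicking through the GUI, the Vivado Tcl console accepts an equivalent sequence. The snippet below is only a hedged sketch: the part number, board_part revision and IP version strings are assumptions and may differ in your install, so check them with ```get_parts``` / ```get_board_parts``` first.
```tcl
# Sketch only: the part, board_part and IP version strings are assumptions -- verify before use
create_project zcu102_custom_platform ./zcu102_custom_platform -part xczu9eg-ffvb1156-2-e
set_property board_part xilinx.com:zcu102:part0:3.3 [current_project]

create_bd_design system

# Add the MPSoC PS and apply the ZCU102 board presets (same effect as Run Block Automation)
create_bd_cell -type ip -vlnv xilinx.com:ip:zynq_ultra_ps_e:3.3 zynq_ultra_ps_e_0
apply_bd_automation -rule xilinx.com:bd_rule:zynq_ultra_ps_e \
    -config {apply_board_preset "1"} [get_bd_cells zynq_ultra_ps_e_0]
```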
47 | 5. Re-Customizing the Processor IP Block
48 | a) Double-click the Zynq UltraScale+ MPSoC block in the IP integrator diagram.
49 | b) Select ***Page Navigator > PS-PL Configuration***.
50 | c) Expand ***PS-PL Configuration > PS-PL Interfaces*** by clicking the > symbol.
51 | d) Expand Master Interface.
52 | e) Uncheck the AXI HPM0 FPD and AXI HPM1 FPD interfaces.
53 | f) Click OK.
54 | g) Confirm that the IP block interfaces were removed from the Zynq UltraScale+ MPSoC symbol in your block design.
55 | 
56 |
57 | ***Note: This is a little different from the traditional Vivado design flow. To make AXI interfaces available to the Vitis design, you disable them in the Vivado IPI design and enable them in the Platform Interface Properties. We will show how to do that later.***
58 |
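For reference, the same PS re-customization can be done from the Tcl console. This is a hedged sketch: the CONFIG names below (PSU__USE__M_AXI_GP0/GP1 for the HPM0/HPM1 FPD masters) are assumptions based on how the PS IP usually exposes these interfaces, so confirm them in the IP customization GUI before relying on the script.
```tcl
# Sketch only: CONFIG names are assumptions -- verify in the Zynq UltraScale+ MPSoC IP dialog
set_property -dict [list \
    CONFIG.PSU__USE__M_AXI_GP0 {0} \
    CONFIG.PSU__USE__M_AXI_GP1 {0} \
] [get_bd_cells zynq_ultra_ps_e_0]
```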
59 | 6. Add clock block:
60 | a) Right click Diagram view and select ***Add IP***.
61 | b) Search for and add a Clocking Wizard from the IP Search dialog.
62 | c) Double-click the clk_wiz_0 IP block to open the Re-Customize IP dialog box.
63 | d) Click the Output Clocks tab.
64 | e) Enable clk_out1 through clk_out3 in the Output Clock column, rename them as ```clk_100m```, ```clk_200m```, ```clk_400m``` and set the Requested Output Freq as follows:
65 | - clk_100m to ```100``` MHz.
66 | - clk_200m to ```200``` MHz.
67 | - clk_400m to ```400``` MHz.
68 |
69 | f) At the bottom of the dialog box set the ***Reset Type*** to ***Active Low***.
70 | g) Click ***OK*** to close the dialog.
The settings should look like below:
72 | 
***Note: Now we have set up the clock system for our design. The Clocking Wizard uses pl_clk0 as its input clock and generates the clocks needed for the whole logic design. In this simple design I use the 100MHz clock as the AXI-Lite control bus clock, the 200MHz clock as the DPU AXI interface clock, and the 400MHz clock as the DPU core clock. You can modify these clocks as you like, but remember that we need to "tell" Vitis which clocks it can use; we will do that later. (After creating this example I learned that the Vitis AI DPU can only have 2 clock domains, and the AXI-Lite control bus shares the same clock as the DPU AXI interface. So the 100MHz clock can no longer be used for the AXI-Lite control bus. The design still works, but between the 100MHz and 200MHz clocks Vitis adds a clock converter inside the axi_interconnect.)***
74 |
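The same Clocking Wizard configuration can also be applied from the Tcl console. Treat the snippet below as a hedged sketch: the clk_wiz IP version and the exact CONFIG parameter names are assumptions taken from typical Vivado-generated scripts, so double-check them against what the Re-Customize IP dialog writes to your journal file.
```tcl
# Sketch only: IP version and CONFIG names are assumptions
create_bd_cell -type ip -vlnv xilinx.com:ip:clk_wiz:6.0 clk_wiz_0
set_property -dict [list \
    CONFIG.CLKOUT1_USED {true} CONFIG.CLK_OUT1_PORT {clk_100m} CONFIG.CLKOUT1_REQUESTED_OUT_FREQ {100} \
    CONFIG.CLKOUT2_USED {true} CONFIG.CLK_OUT2_PORT {clk_200m} CONFIG.CLKOUT2_REQUESTED_OUT_FREQ {200} \
    CONFIG.CLKOUT3_USED {true} CONFIG.CLK_OUT3_PORT {clk_400m} CONFIG.CLKOUT3_REQUESTED_OUT_FREQ {400} \
    CONFIG.RESET_TYPE {ACTIVE_LOW} CONFIG.RESET_PORT {resetn} \
] [get_bd_cells clk_wiz_0]
```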
75 | 7. Add the Processor System Reset blocks:
76 | a) Right click Diagram view and select ***Add IP***.
77 | b) Search for and add a Processor System Reset from the IP Search dialog.
78 | c) Add 2 more Processor System Reset blocks, repeating the previous step; or select the proc_sys_reset_0 block and Copy (Ctrl-C) and Paste (Ctrl-V) it two more times in the block diagram.
79 | d) Rename them as ```proc_sys_reset_100m```, ```proc_sys_reset_200m```, ```proc_sys_reset_400m```.
80 |
81 | 8. Connect Clocks and Resets:
82 | a) Click ***Run Connection Automation***, which will open a dialog that will help connect the proc_sys_reset blocks to the clocking wizard clock outputs.
83 | b) Enable All Automation on the left side of the Run Connection Automation dialog box.
84 | c) Select clk_in1 on clk_wiz_0, and set the Clock Source to ***/zynq_ultra_ps_e_0/pl_clk0***.
85 | d) For each proc_sys_reset instance, select the slowest_sync_clk, and set the Clock Source as follows:
86 | - proc_sys_reset_100m with /clk_wiz_0/clk_100m
87 | - proc_sys_reset_200m with /clk_wiz_0/clk_200m
88 | - proc_sys_reset_400m with /clk_wiz_0/clk_400m
89 |
90 | e) On each proc_sys_reset instance, select the ***ext_reset_in***, set ***Board Part Interface*** to ***Custom*** and set the ***Select Manual Source*** to ***/zynq_ultra_ps_e_0/pl_resetn0***.
91 | f) Make sure all checkboxes are enabled, and click ***OK*** to close the dialog and create the connections.
92 | g) Connect all the ***dcm_locked*** signals on each proc_sys_reset instance to the locked signal on ***clk_wiz_0***.
Then the connections should look like below:
94 | 
***Now we have added, configured, and connected the clock and reset IPs. Some will be used when creating the hardware platform and some will be referenced in the Vitis high-level design.***
96 |
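A hedged Tcl sketch of steps 7-8 is shown below for readers who prefer scripting; the proc_sys_reset IP version is an assumption, and the pin names assume the clock outputs were renamed as described in step 6.
```tcl
# Sketch only: IP version is an assumption; pin names follow the renaming done in step 6
connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/pl_clk0] [get_bd_pins clk_wiz_0/clk_in1]

foreach rate {100m 200m 400m} {
    create_bd_cell -type ip -vlnv xilinx.com:ip:proc_sys_reset:5.0 proc_sys_reset_${rate}
    connect_bd_net [get_bd_pins clk_wiz_0/clk_${rate}]        [get_bd_pins proc_sys_reset_${rate}/slowest_sync_clk]
    connect_bd_net [get_bd_pins zynq_ultra_ps_e_0/pl_resetn0] [get_bd_pins proc_sys_reset_${rate}/ext_reset_in]
    connect_bd_net [get_bd_pins clk_wiz_0/locked]             [get_bd_pins proc_sys_reset_${rate}/dcm_locked]
}
```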
97 | 9. Add Kernel Interrupt Support
98 | You can provide kernel interrupt support by adding an AXI interrupt controller to the base hardware design and connecting the output of the interrupt controller to the input of the processor block interrupt. The interrupt inputs of the AXI interrupt controller are initialized to a de-asserted state by wiring them to ground. When the v++ linker adds acceleration kernels to the base hardware design, the dynamic_postlink.tcl script is used to wire the interrupt output of the kernel to the AXI interrupt controller.
99 | a) In the block diagram, double-click the Zynq UltraScale+ MPSoC block.
100 | b) Select ***PS-PL Configuration > PS-PL interfaces > Master interface***.
101 | c) Select the ***AXI HPM0 LPD*** check box, keep the ***AXI HPM0 LPD Data width*** settings as default ***32***.
102 | d) Click ***OK*** to finish the configuration.
103 | e) Connect ***maxihpm0_lpd_aclk*** to ***/clk_wiz_0/clk_100m***.
104 | f) Right click Diagram view and select ***Add IP***, search and add ***Concat*** IP.
105 | g) Double-click the Concat block to open the Re-Customize IP dialog box.
106 | h) Set the number of ports to ```32```.
107 | i) Right click Diagram view and select ***Add IP***, search and add ***Constant*** IP.
108 | j) Double-click the Constant IP, set Const Width = 1 & Const Val = 0, click ***OK***.
109 | k) Connect the xlconstant_0 dout[0:0] output to all 32 inputs of xlconcat_0 like below:
110 | 
111 | l) Select the ***xlconstant_0*** IP block; in the Block Properties, General dialog box, change the name to ```xlconstant_gnd```.
112 | m) Select the ***xlconcat_0*** IP block; in the Block Properties, General dialog box, change the name to ```xlconcat_interrupt_0```.
113 | These names should match the ones in the dynamic_postlink.tcl script.
114 | n) Right click Diagram view and select ***Add IP***, search and add ***AXI Interrupt Controller*** IP.
115 | o) Double-click the AXI Interrupt Controller block. Set the Interrupts type to Level by changing the button to ***Manual*** and entering ```0x0``` in the text field, set the Level type to High by changing the button to ***Manual*** and entering ```0xFFFFFFFF```, set the Interrupt Output Connection to ***Single***, and click ***OK***.
116 | The configuration of axi_intc should look like below:
117 | 
118 | p) Click ***Run Connection Automation***
119 | q) Leave the default values for Master interface and Bridge IP.
120 | - Master interface default is /zynq_ultra_ps_e_0/M_AXI_HPM0_LPD.
121 | - Bridge IP default is New AXI interconnect.
122 |
123 | r) For the clock source for driving Bridge IP/Slave interface/Master interface, select /clk_wiz_0/clk_100m.
124 | You can select other clock sources if you want, but for the AXI-Lite bus used for register control I would recommend a lower frequency.
125 | s) Connect the xlconcat_interrupt_0 dout[31:0] output to the axi_intc_0 intr[0:0] input.
126 | t) Connect the axi_intc_0/irq output to the Zynq UltraScale+ MPSoC pl_ps_irq0[0:0] input.
127 | 
128 | u) Press and hold the ***Shift*** key on the keyboard, then left click ***xlconstant_gnd*** and ***xlconcat_interrupt_0*** to select these 2 IPs. Release the ***Shift*** key, right click one of these 2 IPs, select ***Create Hierarchy ...***, and use ```interrupt_concat``` as the ***Cell Name***.
129 | v) Click ***(+)*** to expand the ***interrupt_concat*** block, click the connection network between ***xlconstant_gnd*** and ***xlconcat_interrupt_0***, modify the ***System Network Properties->Name*** to ```xlconstant_gnd_dout```.
130 | 
131 | ***Note: Now we have finished the IPI design entry. Let's set some platform parameters and generate the XSA.***
132 |
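For completeness, a hedged Tcl sketch of the interrupt infrastructure from step 9 is given below. The IP version numbers and the PSU__USE__* CONFIG names are assumptions, and the AXI-Lite interconnect between M_AXI_HPM0_LPD and axi_intc_0 (plus the axi_intc Level/Edge fields) is left to the GUI/connection automation exactly as described above.
```tcl
# Sketch only: IP versions and the PSU__USE__* CONFIG names are assumptions
set_property CONFIG.PSU__USE__M_AXI_GP2 {1} [get_bd_cells zynq_ultra_ps_e_0]   ;# enable AXI HPM0 LPD
# (you may also need CONFIG.PSU__USE__IRQ0 {1} to expose pl_ps_irq0 on the PS)

create_bd_cell -type ip -vlnv xilinx.com:ip:xlconstant:1.1 xlconstant_gnd
set_property -dict [list CONFIG.CONST_WIDTH {1} CONFIG.CONST_VAL {0}] [get_bd_cells xlconstant_gnd]

create_bd_cell -type ip -vlnv xilinx.com:ip:xlconcat:2.1 xlconcat_interrupt_0
set_property CONFIG.NUM_PORTS {32} [get_bd_cells xlconcat_interrupt_0]
for {set i 0} {$i < 32} {incr i} {
    connect_bd_net [get_bd_pins xlconstant_gnd/dout] [get_bd_pins xlconcat_interrupt_0/In$i]
}

create_bd_cell -type ip -vlnv xilinx.com:ip:axi_intc:4.1 axi_intc_0
set_property CONFIG.C_IRQ_CONNECTION {1} [get_bd_cells axi_intc_0]             ;# single irq output
connect_bd_net [get_bd_pins xlconcat_interrupt_0/dout] [get_bd_pins axi_intc_0/intr]
connect_bd_net [get_bd_pins axi_intc_0/irq] [get_bd_pins zynq_ultra_ps_e_0/pl_ps_irq0]

# Group the constant and concat into the interrupt_concat hierarchy (step 9u)
group_bd_cells interrupt_concat [get_bd_cells {xlconstant_gnd xlconcat_interrupt_0}]
```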
133 | ## Configuring Platform Interface Properties
134 | 1. Click ***Window->Platform interfaces*** to open the ***Platform Interfaces*** Window.
135 | 2. Select ***Platform-system->zynq_ultra_ps_e_0->S_AXI_HP0_FPD***, in ***Platform interface Properties*** tab enable the ***Enabled*** option like below:
136 | 
137 | 3. Select ***Options*** tab, set ***memport*** to ```S_AXI_HP``` and set ***sptag*** to ```HP0``` like below:
138 | 
139 | 4. Do the same operations for ***S_AXI_HP1_FPD, S_AXI_HP2_FPD, S_AXI_HP3_FPD, S_AXI_HPC0_FPD, S_AXI_HPC1_FPD*** and set ***sptag*** to ```HP1```, ```HP2```, ```HP3```, ```HPC0```, ```HPC1```. Note that for the HPC0/HPC1 ports the ***memport*** is set to ```S_AXI_HPC``` by default, but we will use these ports with the data coherency function disabled to get higher performance, so please change it to ```S_AXI_HP``` manually.
140 | 
141 | 5. Enable the M01_AXI ~ M08_AXI ports of the ps8_0_axi_periph IP (the axi_interconnect between M_AXI_HPM0_LPD and axi_intc_0), set the ***sptag*** name of all of them to ```HPM0_LPD``` and the ***memport*** type to ```M_AXI_GP```.
142 | 6. Enable the ***M_AXI_HPM0_FPD*** and ***M_AXI_HPM1_FPD*** ports, set ***sptag*** name to ```HPM0_FPD```, ```HPM1_FPD``` and ***memport*** to ```M_AXI_GP```.
143 | ***Now we have enabled the AXI master/slave interfaces that the Vitis tools can use on this platform.***
144 | 7. Enable ***clk_100m***, ***clk_200m***, ***clk_400m*** of clk_wiz_0; set the ***id*** of ***clk_200m*** to ```0```, the ***id*** of ***clk_400m*** to ```1```, and the ***id*** of ***clk_100m*** to ```2```; enable ***is default*** for ***clk_200m***. (A Tcl sketch of these interface and clock properties follows below.)
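As mentioned in step 7, here is a hedged Tcl sketch of the platform interface and clock properties set in steps 2-7. The attribute syntax follows the official Vitis platform scripts, but treat the exact strings as assumptions and cross-check them against one of the official zcu102 platform Tcl files.
```tcl
# Sketch only: property syntax is an assumption based on the official platform scripts
set_property PFM.AXI_PORT { \
    S_AXI_HP0_FPD  {memport "S_AXI_HP" sptag "HP0"}  S_AXI_HP1_FPD  {memport "S_AXI_HP" sptag "HP1"} \
    S_AXI_HP2_FPD  {memport "S_AXI_HP" sptag "HP2"}  S_AXI_HP3_FPD  {memport "S_AXI_HP" sptag "HP3"} \
    S_AXI_HPC0_FPD {memport "S_AXI_HP" sptag "HPC0"} S_AXI_HPC1_FPD {memport "S_AXI_HP" sptag "HPC1"} \
    M_AXI_HPM0_FPD {memport "M_AXI_GP" sptag "HPM0_FPD"} M_AXI_HPM1_FPD {memport "M_AXI_GP" sptag "HPM1_FPD"} \
} [get_bd_cells /zynq_ultra_ps_e_0]

set_property PFM.AXI_PORT { \
    M01_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} M02_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} \
    M03_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} M04_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} \
    M05_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} M06_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} \
    M07_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} M08_AXI {memport "M_AXI_GP" sptag "HPM0_LPD"} \
} [get_bd_cells /ps8_0_axi_periph]

set_property PFM.CLOCK { \
    clk_200m {id "0" is_default "true"  proc_sys_reset "proc_sys_reset_200m"} \
    clk_400m {id "1" is_default "false" proc_sys_reset "proc_sys_reset_400m"} \
    clk_100m {id "2" is_default "false" proc_sys_reset "proc_sys_reset_100m"} \
} [get_bd_cells /clk_wiz_0]
```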
145 |
146 | 8. Create a ```xsa_gen``` folder inside your Vivado project.
147 | 9. Copy the [dynamic_postlink.tcl](https://github.com/Xilinx/Vitis_Embedded_Platform_Source/blob/2019.2/Xilinx_Official_Platforms/zcu102_dpu/vivado/dynamic_postlink.tcl) file into that ***xsa_gen*** folder.
148 | Alternatively, you can find this file in any of the official MPSoC platform examples.
149 | 10. Create a file named ```xsa.tcl``` inside the ***xsa_gen*** folder.
150 | 11. Copy the following commands into the xsa.tcl file and save the file.
151 | ```
152 | # Set the platform design intent properties
153 | set_property platform.design_intent.embedded true [current_project]
154 | set_property platform.design_intent.server_managed false [current_project]
155 | set_property platform.design_intent.external_host false [current_project]
156 | set_property platform.design_intent.datacenter false [current_project]
157 |
158 | get_property platform.design_intent.embedded [current_project]
159 | get_property platform.design_intent.server_managed [current_project]
160 | get_property platform.design_intent.external_host [current_project]
161 | get_property platform.design_intent.datacenter [current_project]
162 |
163 | # Set the platform default output type property
164 | set_property platform.default_output_type "sd_card" [current_project]
165 |
166 | get_property platform.default_output_type [current_project]
167 |
168 | # Add the platform property to use dynamic_postlink.tcl during the v++ link
169 | set_property platform.post_sys_link_tcl_hook ./dynamic_postlink.tcl [current_project]
170 | ```
171 | 12. In your Vivado project, use the ***Tcl console*** to ***navigate to the xsa_gen folder***, and run the ```source ./xsa.tcl``` command.
172 | 
173 | 13. Right-click and select ***Validate Design*** on ***IP integrator diagram***
174 | 14. Select the Zynq UltraScale+ MPSoC IP block and set ***SELECTED_SIM_MODEL*** to ```tlm``` in the Block Properties view.
175 | 15. Create the HDL wrapper:
176 | a. Right-click ***system.bd*** in the Block Design, Sources view and select Create HDL Wrapper.
177 | b. Select Let Vivado manage wrapper and ***auto-update***.
178 | c. Click ***OK***.
179 |
180 | 16. Right-click ***system.bd*** in the Block Design, Sources view and select ***Generate Output Products***.
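Steps 13-16 also have Tcl console equivalents; the sketch below assumes the block design is named system and sits in the default project location, so adjust the file lookups as needed.
```tcl
# Sketch only: file names/paths are assumptions
validate_bd_design
set_property SELECTED_SIM_MODEL tlm [get_bd_cells /zynq_ultra_ps_e_0]
add_files -norecurse [make_wrapper -files [get_files system.bd] -top]
generate_target all [get_files system.bd]
```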
181 | 17. Type the tcl command in tcl console like:
182 | ```write_hw_platform -unified -force -file /xsa_gen/zcu102_custom_platform.xsa```
183 | If you use the ***Export Hardware*** function in the Vivado GUI, it adds the ***-fixed*** option, which generates an XSA for a traditional embedded platform that cannot accept the DPU acceleration kernel.
184 | 18. Check the ***/xsa_gen*** folder, you should find the ***zcu102_custom_platform.xsa*** generated there.
185 |
186 | ***Now we have finished the hardware platform creation flow; next we move on to the software platform creation.***
187 |
188 | ## Create the PetaLinux Software Component
189 |
190 | A Vitis platform requires software components. For Linux, the PetaLinux tools are invoked outside of the Vitis tools by the developer to create the necessary Linux image, Executable and Linkable Format (ELF) files, and sysroot with XRT support. Yocto or third-party Linux development tools can also be used as long as they produce the same Linux output products as PetaLinux.
191 | 1. source /settings.sh
192 | 2. Create a PetaLinux project named ***zcu102_custom_plnx*** and configure the hw with the XSA file we created before:
193 | ```petalinux-create --type project --template zynqMP --name zcu102_custom_plnx```
194 | ```cd zcu102_custom_plnx```
195 | ```petalinux-config --get-hw-description=/xsa_gen/```
196 | 3. A petalinux-config menu will be launched; select ***DTG Settings->MACHINE_NAME*** and modify it to ```zcu102-rev1.0```.
197 | ***Note: If you are using a Xilinx development board, it is recommended to modify the machine name so that the board configurations are included in the DTS auto-generation. Otherwise you would need to configure the associated settings (e.g. the PHY information DTS node) manually.***
198 | 4. Add user packages by appending the CONFIG_x lines below to the /project-spec/meta-user/conf/user-rootfsconfig file.
199 | Packages for base XRT support:
200 | ```
201 | CONFIG_xrt
202 | CONFIG_xrt-dev
203 | CONFIG_zocl
204 | CONFIG_opencl-clhpp-dev
205 | CONFIG_opencl-headers-dev
206 | CONFIG_packagegroup-petalinux-opencv
207 | CONFIG_packagegroup-petalinux-opencv-dev
208 | ```
209 | Packages for DPU support:
210 | ```
211 | CONFIG_glog
212 | CONFIG_gtest
213 | CONFIG_json-c
214 | CONFIG_protobuf
215 | CONFIG_python3-pip
216 | CONFIG_apt
217 | CONFIG_dpkg
218 | ```
219 | Packages for building Vitis AI applications:
220 | ```
221 | CONFIG_gtest-staticdev
222 | CONFIG_json-c-dev
223 | CONFIG_protobuf-dev
224 | CONFIG_protobuf-c
225 | CONFIG_libeigen-dev
226 | ```
227 | Packages for native compiling on target board:
228 | ```
229 | CONFIG_packagegroup-petalinux-self-hosted
230 | CONFIG_cmake
231 | ```
232 | Packages mentioned at DPU integration lab for Vivado flow:
233 | ```
234 | CONFIG_packagegroup-petalinux-x11
235 | CONFIG_packagegroup-petalinux-v4lutils
236 | CONFIG_packagegroup-petalinux-matchbox
237 | ```
238 | 5. Run ```petalinux-config -c rootfs``` and select ***user packages***, select all the libraries listed above, then save and exit.
239 | 
240 |
241 | 6. Copy the ***ref_files/opencv*** folder from this Git repository to ***/project-spec/meta-user/recipes-ai*** in your platform source (if this directory does not exist, create it).
242 | 7. Add custom opencv recipe. Edit ***/project-spec/meta-user/conf/user-rootfsconfig*** and add the opencv recipe at the end:
243 | ```
244 | CONFIG_opencv
245 | ```
246 | 8. Run ```petalinux-config -c rootfs``` and select ***user packages***, enable ***opencv***, save and exit.
247 |
248 | 9. Enable OpenSSH and disable dropbear
249 | Dropbear is the default SSH tool in the Vitis Base Embedded Platform. Replacing Dropbear with OpenSSH can achieve about 4x faster data transmission (tested in a 1Gbps Ethernet environment). Since Vitis-AI applications may use the remote display feature to show machine learning results, using OpenSSH can improve the display experience.
250 | a) Run ```petalinux-config -c rootfs```.
251 | b) Go to ***Image Features***.
252 | c) Disable ***ssh-server-dropbear*** and enable ***ssh-server-openssh***.
253 | 
254 | d) Go to ***Filesystem Packages-> misc->packagegroup-core-ssh-dropbear*** and disable ***packagegroup-core-ssh-dropbear***.
255 | e) Go to ***Filesystem Packages -> console -> network -> openssh*** and enable ***openssh***, ***openssh-sftp-server***, ***openssh-sshd***, ***openssh-scp***.
256 | 10. In the rootfs config go to ***Image Features*** and enable the ***package-management*** and ***debug_tweaks*** options, save the changes and exit.
257 | 11. Increase the size allocation for CMA memory to 512 MB (optional) and disable CPU IDLE in the kernel configuration as follows:
258 | The default CMA size in a PetaLinux project and in the Vitis Base Platform is 256MB. For some models, 256MB is not enough to allocate the DPU instructions/parameters/data area. Unless you are sure 256MB is sufficient for your model, it is recommended to set cma=512M, which covers all Vitis-AI models.
259 | CPU idle can cause the CPU to enter an idle state while JTAG is connected, so it is recommended to disable it.
260 | a) Type ```petalinux-config -c kernel```
261 | b) Select ***Device Drivers > Generic Driver Options > DMA Contiguous Memory Allocator > Size in Mega Bytes***.
262 | c) Press the ```Enter``` key and change 256 to 512.
263 | Ensure the following are ***TURNED OFF*** by entering 'n' in the [ ] menu selection for:
264 | - ***CPU Power Mangement > CPU Idle > CPU idle PM support***
265 | - ***CPU Power Management > CPU Frequency scaling > CPU Frequency scaling***
266 |
267 | 12. Update the Device tree to include the zocl driver by appending the text below to the ***project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi*** file.
268 | ```
269 | &amba {
270 | zyxclmm_drm {
271 | compatible = "xlnx,zocl";
272 | status = "okay";
273 | interrupt-parent = <&axi_intc_0>;
274 | interrupts = <0 4>, <1 4>, <2 4>, <3 4>,
275 | <4 4>, <5 4>, <6 4>, <7 4>,
276 | <8 4>, <9 4>, <10 4>, <11 4>,
277 | <12 4>, <13 4>, <14 4>, <15 4>,
278 | <16 4>, <17 4>, <18 4>, <19 4>,
279 | <20 4>, <21 4>, <22 4>, <23 4>,
280 | <24 4>, <25 4>, <26 4>, <27 4>,
281 | <28 4>, <29 4>, <30 4>, <31 4>;
282 | };
283 | };
284 |
285 | ```
286 | 13. Modify the bsp config file:
287 | Open ***project-spec/meta-user/conf/petalinuxbsp.conf*** and add a line like below:
288 | ```
289 | PACKAGE_CLASSES = "package_deb"
290 | ```
291 | 14. Modify the u-boot settings:
292 | Because we are not using the SD card to store the rootfs files, U-Boot needs to load a large image, so we modify U-Boot to allow a larger image size.
293 | Open ***project-spec/meta-user/recipes-bsp/u-boot/files/platform-top.h*** and modify:
294 |
295 | ```
296 | #define CONFIG_SYS_BOOTM_LEN 0xF000000
297 | ```
298 | to
299 | ```
300 | #define CONFIG_SYS_BOOTM_LEN 0x80000000
301 | #undef CONFIG_SYS_BOOTMAPSZ
302 | ```
303 | 15. From within the PetaLinux project (petalinux), type ```petalinux-build``` to build the project.
304 | 16. Create a sysroot self-installer for the target Linux system:
305 | ```
306 | cd images/linux
307 | petalinux-build --sdk
308 | ```
309 | ***Note: We will create a directory to store all the necessary files for the Vitis platform creation flow. Here we name it ```zcu102_dpu_pkg```, and we create a pfm folder inside it.***
310 | 17. Type ```./sdk.sh``` to install the PetaLinux SDK, provide a full pathname to the output directory ***/pfm*** (in this example I use ***/home/wuxian/wu_project/vitis2019.2/vitis_custom_platform_flow/zcu102_dpu_pkg/pfm***) and confirm.
311 | 18. We will install the Vitis AI library and DNNDK into this rootfs later.
312 | 19. After the PetaLinux build succeeds, the generated Linux software components are in the ***/images/linux*** directory. For our example, the ***images/linux*** directory contains the generated image and ELF files listed below. Copy these files to the ***/pfm/boot*** directory in preparation for running the Vitis platform creation flow:
313 | ```
314 | - image.ub
315 | - zynqmp_fsbl.elf
316 | - pmufw.elf
317 | - bl31.elf
318 | - u-boot.elf
319 | ```
320 | 20. Add a BIF file (linux.bif) to the ***/pfm/boot*** directory with the contents shown below. The file names should match the contents of the boot directory. The Vitis tool expands these pathnames relative to the sw directory of the platform at v++ link time or when generating an SD card. However, if the bootgen command is used directly to create a BOOT.BIN file from a BIF file, full pathnames in the BIF are necessary. Bootgen does not expand the names between the <> symbols.
321 | ```
322 | /* linux */
323 | the_ROM_image:
324 | {
325 | [fsbl_config] a53_x64
326 | [bootloader]
327 | [pmufw_image]
328 | [destination_device=pl]
329 | [destination_cpu=a53-0, exception_level=el-3, trustzone]
330 | [destination_cpu=a53-0, exception_level=el-2]
331 | }
332 | ```
333 | ***Note: Now we prepare the HW platform and SW platform, next we would create a Vitis Platform.***
334 |
335 | ## Create the Vitis Platform
336 |
337 | 1. Source Vitis and XRT settings
338 | ```
339 | source /settings64.sh
340 | source /opt/xilinx/xrt/setup.sh
341 | ```
342 | 2. Go to the ***zcu102_dpu_pkg*** folder you created: ```cd ```.
343 | 3. Launch Vitis by typing ```vitis``` in the console.
344 | 4. Select ***zcu102_dpu_pkg*** folder as workspace directory.
345 | 
346 | 5. In the Vitis IDE, select ***File > New > Platform Project*** to create a platform project.
347 | 6. In the Create New Platform Project dialog box, do the following:
348 | a) Enter the project name. For this example, type ```zcu102_vai_custom```.
349 | b) Leave the checkbox for the default location selected.
350 | c) Click ***Next***.
351 | 7. In the Platform Project dialog box, do the following:
352 | a) Select ***Create from hardware specification (XSA)***.
353 | b) Click ***Next***.
354 | 8. In the Platform Project Specification dialog box, do the following:
355 | a) Browse to the XSA file generated by Vivado. In this case, it is located at ```vitis_custom_platform_flow/zcu102_custom_platform/xsa_gen/zcu102_custom_platform.xsa```.
356 | b) Set the operating system to ***linux***.
357 | c) Set the processor to ***psu_cortexa53***.
358 | d) Leave the checkmark selected to generate boot components.
359 | e) Click ***Finish***.
360 | 9. In the Platform Settings view, observe the following:
361 | - The name of the Platform Settings view matches the platform project name of ***zcu102_vai_custom***.
362 | - A psu_cortexa53 device icon is shown, containing a Linux on psu_cortexa53 domain.
363 | - A psu_cortexa53 device icon is shown, containing a zynqmp_fsbl BSP.
364 | - A psu_pmu_0 device icon is shown, containing a zynqmp_pmufw BSP.
365 | 10. Click the linux on psu_cortexa53 domain, browse to the locations and select the directory or file needed to complete the dialog box for the following:
366 | ```
367 | Linux Build Output:
368 | Browse to zcu102_dpu_pkg/pfm/boot and click OK.
369 |
370 | Bif file:
371 | Browse to zcu102_dpu_pkg/pfm/boot/linux.bif file and click OK.
372 |
373 | Image:
374 | Browse to zcu102_dpu_pkg/pfm/boot and click OK.
375 | ```
376 | 
377 | 11. Click ***zcu102_vai_custom*** project in the Vitis Explorer view, click the ***Build*** button to generate the platform.
378 | 
379 | ***Note: The generated platform is placed in the export directory. BSP and source files are also provided for re-building the FSBL and PMU if desired and are associated with the platform. The platform is ready to be used for application development.***
380 |
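If you would rather script the platform project than click through the dialogs, XSCT (the Tcl-based command-line front end shipped with Vitis) exposes the same flow. The snippet below is a hedged sketch: the exact option spelling and the relative paths are assumptions, so check ```platform create -help``` inside xsct before relying on it.
```tcl
# Run inside xsct. Sketch only: option names and paths are assumptions.
setws ./zcu102_dpu_pkg
platform create -name zcu102_vai_custom \
    -hw   ./zcu102_custom_platform/xsa_gen/zcu102_custom_platform.xsa \
    -proc psu_cortexa53 -os linux
domain config -image ./pfm/boot          ;# Linux build output directory
domain config -bif   ./pfm/boot/linux.bif
domain config -boot  ./pfm/boot
platform generate
```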
381 | ## Prepare for the DPU Kernel
382 |
383 | 1. Download Vitis AI by calling command ```git clone https://github.com/Xilinx/Vitis-AI.git```.
384 | 2. Navigate to the repository:```cd Vitis-AI```, set the tag to proper tag(here we use ***v1.1***) by typing: ```git checkout v1.1```.
385 | 3. If you don't want to modify the original TRD reference design, copy the ***DPU-TRD*** folder into another directory. For example, I copy it into my ***zcu102_dpu_pkg*** folder: ```cp -r DPU-TRD ~/wu_project/vitis2019.2/vitis_custom_platform_flow/zcu102_dpu_pkg/```
386 | 4. Source the Vitis tools settings file: ```source /Vitis/2019.2/settings64.sh```.
387 | 5. Source the XRT setup file: ```source /opt/xilinx/xrt/setup.sh```.
388 | 6. Export SDX_PLATFORM with the directory of the custom platform xpfm file which you created before. Here in my project it would be: ```export SDX_PLATFORM=/home/wuxian/wu_project/vitis2019.2/vitis_custom_platform_flow/zcu102_dpu_pkg/zcu102_vai_custom/export/zcu102_vai_custom/zcu102_vai_custom.xpfm```. Remember now this custom platform name is ***zcu102_vai_custom***.
389 | 7. Navigate to the copy of the ***DPU-TRD*** folder, then go to the ***./prj/Vitis*** folder.
390 | There are 2 files that can be used to modify the DPU settings: the ***config_file/prj_config*** file configures the DPU connections in the Vitis project, and dpu_conf.vh holds the other DPU configurations. Here we modify prj_config so that 2 DPU cores are enabled, and we keep dpu_conf.vh at its defaults.
391 | 8. Modify the ***config_file/prj_config*** like below:
392 | ```
393 |
394 | [clock]
395 |
396 | id=0:dpu_xrt_top_1.aclk
397 | id=1:dpu_xrt_top_1.ap_clk_2
398 | id=0:dpu_xrt_top_2.aclk
399 | id=1:dpu_xrt_top_2.ap_clk_2
400 |
401 | [connectivity]
402 |
403 | sp=dpu_xrt_top_1.M_AXI_GP0:HPC0
404 | sp=dpu_xrt_top_1.M_AXI_HP0:HP0
405 | sp=dpu_xrt_top_1.M_AXI_HP2:HP1
406 | sp=dpu_xrt_top_2.M_AXI_GP0:HPC1
407 | sp=dpu_xrt_top_2.M_AXI_HP0:HP2
408 | sp=dpu_xrt_top_2.M_AXI_HP2:HP3
409 |
410 | [advanced]
411 | misc=:solution_name=link
412 | param=compiler.addOutputTypes=sd_card
413 |
414 | #param=compiler.skipTimingCheckAndFrequencyScaling=1
415 |
416 | [vivado]
417 | prop=run.impl_1.strategy=Performance_Explore
418 | #param=place.runPartPlacer=0
419 |
420 | ```
421 |
422 | 9. Generate the XO file by typing: ```make binary_container_1/dpu.xo DEVICE=zcu102_vai_custom```.
423 | 10. Verify if the XO file is generated here: ***/DPU-TRD/prj/Vitis/binary_container_1/dpu.xo***.
424 |
425 | ## Create and Build a Vitis application
426 | 1. Open Vitis workspace you were using before.
427 | 2. Select ***File -> New -> Application Project***.
428 | 3. Name the project ```hello_dpu```, use ***new system project*** with the default name, and click ***next***.
429 | 4. Select ***zcu102_vai_custom*** as platform, click ***next***.
430 | 5. Set Domain to ***linux on psu_cortexa53***, set ***Sys_root path*** to ```/pfm/sysroots/aarch64-xilinx-linux```(as you created by running ***sdk.sh***) and click ***next***.
431 | 6. Select ***Empty Application*** and click ***finish*** to generate the application.
432 | 7. Right click on the ***src*** folder under your ***hello_dpu*** application in the Explorer window, and select "Import Sources".
433 | 
434 | 8. Choose from directory ***/DPU-TRD/prj/Vitis/binary_container_1/*** as the target location, and import the ***dpu.xo*** file that we just created.
435 | 9. Import sources again, and add the cpp, header and prj_config files from ***ref_files/src*** folder provided by this Git repository.
436 | 10. In the Explorer window double click the hello_dpu.prj file to open it, change the ***Active Build configuration*** from ***Emulation-SW*** to ***Hardware***.
437 | 11. Under Hardware Functions, click the lightning bolt logo to ***Add Hardware Function***.
438 | 
439 | 12. Select the "dpu_xrt_top" included as part of the dpu.xo file that we included earlier.
440 | 13. Click on binary_container_1 to change the name to dpu.
441 | 14. Click on ***dpu_xrt_top*** and change the ***Compute Units*** from ```1``` to ```2``` because we have 2 dpu cores involved.
442 | 15. Right click on "dpu", select ***Edit V++ Options***, add ```--config ../src/prj_config -s``` as ***V++ Options***, then click ***OK***.
443 | 16. Go back to the ***Explorer*** window, right click on the ***hello_dpu*** project folder and select ***C/C++ Build Settings***.
444 | 17. In the ***Properties for hello_dpu*** dialog box, select ***C/C++ Build->Settings->Tool Settings->GCC Host Linker->Libraries***
445 | , click the green "+" to add the following libraries:
446 | ```
447 | opencv_core
448 | opencv_imgcodecs
449 | opencv_highgui
450 | opencv_imgproc
451 | opencv_videoio
452 | n2cube
453 | hineon
454 | ```
455 | 18. On the same page, modify the ***Library search path*** to add ```${SYSROOT}/usr/lib/```, then click ***Apply***
456 | 
457 | 19. Then go to ***C/C++ Build->Settings->Tool Settings->GCC Host Compiler->Includes***, remove the HLS include directory and add ```${SYSROOT}/usr/include/``` like below, then click ***Apply and Close*** to save the changes.
458 | 
459 | ***These steps make sure your application can directly call the libraries in the rootfs when the Vitis application is built.***
460 | 20. The Vitis AI library and DNNDK are not included in PetaLinux SDK rootfs, now let's install them into the rootfs directory:
461 | ***Note:*** We should follow the section ***Setting Up the Host For Edge*** of the [Vitis AI library readme file](https://github.com/Xilinx/Vitis-AI/blob/master/Vitis-AI-Library/README.md) to install the Vitis AI library, and the section ***Setup cross-compiler for Vitis AI DNNDK and make samples*** of the [DNNDK readme file](https://github.com/Xilinx/Vitis-AI/blob/master/mpsoc/README.md) to install DNNDK. Most of the time I would suggest using a release tag when visiting GitHub resources, but there are some critical modifications after the v1.1 release, so this time I suggest referring to the ***master*** branch. If you find it difficult to follow the official guides, you can refer to the steps below. ***Please skip these steps if you have already installed the libraries by following the readme files***:
462 | a) Set the PetaLinux SDK environment by running command: ```. /pfm/environment-setup-aarch64-xilinx-linux```
463 | b) Download the [vitis_ai_2019.2-r1.1.0.tar.gz](https://www.xilinx.com/bin/public/openDownload?filename=vitis_ai_2019.2-r1.1.0.tar.gz) to a particular directory (here we take ***~/Downloads*** as an example) and install it to the rootfs folder:
464 | ```
465 | cd ~/Downloads # Or some place else you download the vitis_ai_2019.2-r1.1.0.tar.gz file
466 | tar -xzvf vitis_ai_2019.2-r1.1.0.tar.gz -C /pfm/sysroots/aarch64-xilinx-linux
467 | ```
468 | c) Download the glog package to ***~/Downloads*** folder and untar it:
469 | ```
470 | cd ~/Downloads # Or some place else you download the file
471 | curl -Lo glog-v0.4.0.tar.gz https://github.com/google/glog/archive/v0.4.0.tar.gz
472 | tar -zxvf glog-v0.4.0.tar.gz
473 | cd glog-0.4.0
474 | ```
475 | d) Build it and install it to the rootfs folder:
476 | ```
477 | mkdir build_for_petalinux
478 | cd build_for_petalinux
479 | unset LD_LIBRARY_PATH; source /pfm/environment-setup-aarch64-xilinx-linux
480 | cmake -DCPACK_GENERATOR=TGZ -DBUILD_SHARED_LIBS=on -DCMAKE_INSTALL_PREFIX=/pfm/sysroots/aarch64-xilinx-linux/usr ..
481 | make && make install
482 | make package
483 | ```
484 | e) Download DNNDK runtime package [vitis-ai_v1.1_dnndk.tar.gz](https://www.xilinx.com/bin/public/openDownload?filename=vitis-ai_v1.1_dnndk.tar.gz) to ***~/Downloads*** and install it into rootfs
485 | ```
486 | cd ~/Downloads # Or some place else you download the file
487 | tar -xzvf vitis-ai_v1.1_dnndk.tar.gz
488 | cd vitis-ai_v1.1_dnndk
489 | ./install.sh /pfm/sysroots/aarch64-xilinx-linux
490 | ```
491 | ***Now we install both the VAI lib and DNNDK packages into the rootfs set as Vitis sysroot, then we can build application on Vitis.***
492 |
493 | 21. Right click the ***hello_dpu*** project folder and select ***Build Project***
494 |
495 | ## Prepare the Network Deployment File
496 |
497 | 1. Find the HWH file in your Vitis application folder: ***hello_dpu/Hardware/dpu.build/link/vivado/vpl/prj/prj.srcs/sources_1/bd/system/hw_handoff/system.hwh***
498 | Or go to your Vitis application folder and use the command ```find -name *.hwh``` to search for the file.
499 | 2. Copy this HWH file into ***/Tool-Example*** folder.
500 | 3. Go to ****** folder and launch the docker.
501 | 4. Use following command to activate TensorFlow tool conda environment:
502 | ```
503 | conda activate vitis-ai-tensorflow
504 | ```
505 | 5. Go to ***/workspace/Tool-Example*** folder and run ```dlet -f ./system.hwh```.
506 | You should get the running log like below:
507 | ```
508 | (vitis-ai-tensorflow) wuxian@wuxian-Ubuntu1804:/workspace/Tool-Example$ dlet -f ./system.hwh
509 | [DLet]Generate DPU DCF file dpu-03-26-2020-13-30.dcf successfully.
510 | ```
511 | The DCF file name is associated with the date and time you generated the file.
512 | 6. Edit the ***6_tf_compile_for_v2.sh*** file and modify the ***--options*** parameter to add dcf file like below:
513 | ```--options "{'save_kernel':'', 'dcf':'./'}"```
514 | Take my project as example it is:
515 | ```--options "{'save_kernel':'', 'dcf':'./dpu-03-26-2020-13-30.dcf'}"```
516 | 7. Follow the TensorFlow steps at https://github.com/Xilinx/Vitis-AI/blob/v1.1/Tool-Example/README.md to generate the ELF from the ResNet model.
517 | 8. Check the generated ELF file from ***tf_resnetv1_50_imagenet_224_224_6.97G/vai_c_output_ZCU102/dpu_resnet50_0.elf***.
518 | 9. Copy that file to the ***src*** folder of Vitis application ***hello_dpu***
519 | 10. Right click on the ***hello_dpu*** project folder in Vitis and select ***C/C++ Build Settings***.
520 | 11. In the ***Properties for hello_dpu*** dialog box, select ***C/C++ Build->Settings->Tool Settings->GCC Host Linker->Miscellaneous->Other objects***, add a new object: ```"${workspace_loc:/${ProjName}/src/dpu_resnet50_0.elf}"```, and click ***Apply and Close***.
521 | 12. Right click the ***hello_dpu*** project folder and select ***Build Project***
522 | ***Now you should get an updated hello_dpu.exe with a size of about 20MB (the ConvNet model is built in).***
523 |
524 | ## Run Application on Board
525 | 1. Copy all the files from the ***sd_card folder*** inside your Vitis application (e.g. ***/Hardware/sd_card/***) to the SD card, set the ZCU102 to SD boot mode, boot up the board, and connect to the board through the serial port.
526 | 2. Connect SSH:
527 | a) Run ```ifconfig``` to get the IP address, here we take ```172.16.75.189``` as example.
528 | b) Using SSH terminal to connect ZCU102 with SSH: ```ssh -x root@172.16.75.189```, or use MobaXterm in Windows.
529 | 3. Mount SD card to mnt folder by running command: ```mount /dev/mmcblk0p1 /mnt```.
530 | 4. Go to the /mnt folder and create a new folder named "package":
531 | ```
532 | cd /mnt
533 | mkdir package
534 | ```
535 | 5. Since this is a custom design the Vitis AI library, DNNDK and test images are not installed. We need to install them on board.
536 | I would suggest referring to the section "Setting Up the Target" of the [Vitis AI library readme file](https://github.com/Xilinx/Vitis-AI/blob/master/Vitis-AI-Library/README.md) to install the Vitis AI library, and to the section "Setup Evaluation Board and run Vitis AI DNNDK samples" of the [DNNDK example readme file](https://github.com/Xilinx/Vitis-AI/blob/master/mpsoc/README.md) to install DNNDK and the test images. (For the same reason as before, I suggest the master branch rather than the v1.1 tag.) If you find that difficult, please follow the steps below:
537 | a) Download the Vitis AI Runtime 1.1 package [vitis-ai-runtime-1.1.2.tar.gz](https://www.xilinx.com/bin/public/openDownload?filename=vitis-ai-runtime-1.1.2.tar.gz)
538 | b) Untar the packet and copy the following files to the board using scp by running the command on host:
539 | ```
540 | scp /unilog/aarch64/libunilog-1.1.0-Linux-build.deb root@172.16.75.189:~/package
541 | scp /XIR/aarch64/libxir-1.1.0-Linux-build.deb root@172.16.75.189:~/package
542 | scp /VART/aarch64/libvart-1.1.0-Linux-build.deb root@172.16.75.189:~/package
543 | scp /Vitis-AI-Library/aarch64/libvitis_ai_library-1.1.0-Linux-build.deb root@172.16.75.189:~/package
544 | ```
545 | c) Copy the glog-0.4.0-Linux.tar.gz from host to board with the following command:
546 | ***glog-0.4.0-Linux.tar.gz was built earlier when configuring the rootfs for the Vitis application***
547 | ```
548 | cd /build_for_petalinux
549 | scp glog-0.4.0-Linux.tar.gz root@172.16.75.189:/mnt/package
550 | ```
551 | d) Download the package [vitis-ai_v1.1_dnndk.tar.gz](https://www.xilinx.com/bin/public/openDownload?filename=vitis-ai_v1.1_dnndk.tar.gz) and package [vitis-ai_v1.1_dnndk_sample_img.tar.gz](https://www.xilinx.com/bin/public/openDownload?filename=vitis-ai_v1.1_dnndk_sample_img.tar.gz), copy them to board:
552 | ```
553 | scp vitis-ai_v1.1_dnndk.tar.gz root@172.16.75.189:/mnt/package
554 | scp vitis-ai_v1.1_dnndk_sample_img.tar.gz root@172.16.75.189:/mnt/package
555 | ```
556 | e) In SSH console go to the ***/mnt/package*** folder and install the packages you have uploaded:
557 | ```
558 | cd /mnt/package
559 | tar -xzvf glog-0.4.0-Linux.tar.gz --strip-components=1 -C /usr
560 | dpkg -i --force-all libunilog-1.1.0-Linux-build46.deb
561 | dpkg -i libxir-1.1.0-Linux-build46.deb
562 | dpkg -i libvart-1.1.0-Linux-build48.deb
563 | dpkg -i libvitis_ai_library-1.1.0-Linux-build46.deb
564 | ```
565 | ***Notice that for the first dpkg command we use the --force-all option to force-install the package and ignore the warning messages. The build version numbers may differ slightly depending on when you download the packages.***
566 | f) Install DNNDK package like below:
567 | ```
568 | cp vitis-ai_v1.1_dnndk.tar.gz ~/
569 | cd ~/
570 | tar -zxvf vitis-ai_v1.1_dnndk.tar.gz
571 | cd vitis-ai_v1.1_dnndk
572 | ./install.sh
573 | ```
574 | g) Go back to ***/mnt/package*** folder and untar the dnndk example file:
575 | ```
576 | cd /mnt/package
577 | tar -zxvf vitis-ai_v1.1_dnndk_sample_img.tar.gz
578 | ```
579 | 6. Go to the vitis_ai_dnndk_samples and run the hello_dpu.exe application:
580 | ```
581 | cd /mnt/package/vitis_ai_dnndk_samples
582 | mkdir test
583 | cd test
584 | cp /mnt/hello_dpu.exe ./
585 | ./hello_dpu.exe
586 | ```
587 | ***We store hello_dpu.exe in the /mnt/package/vitis_ai_dnndk_samples/test folder to match the relative paths in my code; adjust this according to your own code. The hello_dpu.exe was generated by the Vitis application build and copied to the SD card in a previous step.***
588 | 7. You should see the result like below:
589 | 
590 |
591 | ## Reference
592 |
593 | https://www.xilinx.com/html_docs/xilinx2019_2/vitis_doc/index.html
594 | https://github.com/Xilinx/Vitis-AI
595 | https://github.com/Xilinx/Vitis_Embedded_Platform_Source
596 | https://github.com/Xilinx/Vitis-AI-Tutorials/tree/Vitis-AI-Custom-Platform
597 | https://github.com/Xilinx/Edge-AI-Platform-Tutorials/tree/3.1/docs/DPU-Integration
598 | ***Note: If you would like to try a one-click VAI platform creation flow, it is recommended to try the official platform source code for*** [zcu102_dpu](https://github.com/Xilinx/Vitis_Embedded_Platform_Source/tree/master/Xilinx_Official_Platforms/zcu102_dpu) ***and*** [zcu104_dpu](https://github.com/Xilinx/Vitis_Embedded_Platform_Source/tree/master/Xilinx_Official_Platforms/zcu104_dpu)***.***
599 |
600 | ## More Information about Install and Set Vitis and XRT Environment
601 | https://www.xilinx.com/html_docs/xilinx2019_2/vitis_doc/Chunk2027126153.html#zks1565446519267
602 | https://www.xilinx.com/html_docs/xilinx2019_2/vitis_doc/pjr1542153622642.html
603 | https://www.xilinx.com/html_docs/xilinx2019_2/vitis_doc/rbk1547656041291.html
604 |
605 |
606 |
607 |
608 |
609 |
610 |
611 |
612 |
613 |
614 |
615 |
--------------------------------------------------------------------------------
/pic_for_readme/.gitkeep:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/pic_for_readme/add_hardware_function.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/add_hardware_function.png
--------------------------------------------------------------------------------
/pic_for_readme/add_opencv_lib.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/add_opencv_lib.png
--------------------------------------------------------------------------------
/pic_for_readme/block_automation_result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/block_automation_result.png
--------------------------------------------------------------------------------
/pic_for_readme/build_vitis_platform.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/build_vitis_platform.png
--------------------------------------------------------------------------------
/pic_for_readme/clk_rst_connection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/clk_rst_connection.png
--------------------------------------------------------------------------------
/pic_for_readme/clock_settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/clock_settings.png
--------------------------------------------------------------------------------
/pic_for_readme/concat_connection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/concat_connection.png
--------------------------------------------------------------------------------
/pic_for_readme/enable_s_axi_hp0_fpd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/enable_s_axi_hp0_fpd.png
--------------------------------------------------------------------------------
/pic_for_readme/hp_removed.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/hp_removed.png
--------------------------------------------------------------------------------
/pic_for_readme/import_sources.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/import_sources.png
--------------------------------------------------------------------------------
/pic_for_readme/intc_settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/intc_settings.png
--------------------------------------------------------------------------------
/pic_for_readme/netname.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/netname.png
--------------------------------------------------------------------------------
/pic_for_readme/petalinux_rootfs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/petalinux_rootfs.png
--------------------------------------------------------------------------------
/pic_for_readme/run_xsa_tcl.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/run_xsa_tcl.png
--------------------------------------------------------------------------------
/pic_for_readme/set_s_axi_hp0_fpd_options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/set_s_axi_hp0_fpd_options.png
--------------------------------------------------------------------------------
/pic_for_readme/set_s_axi_hpc0_fpd_options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/set_s_axi_hpc0_fpd_options.png
--------------------------------------------------------------------------------
/pic_for_readme/ssh_settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/ssh_settings.png
--------------------------------------------------------------------------------
/pic_for_readme/test_result.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/test_result.PNG
--------------------------------------------------------------------------------
/pic_for_readme/vitis_acceleration_flow.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_acceleration_flow.PNG
--------------------------------------------------------------------------------
/pic_for_readme/vitis_acceleration_flow.image:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_acceleration_flow.image
--------------------------------------------------------------------------------
/pic_for_readme/vitis_include_settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_include_settings.png
--------------------------------------------------------------------------------
/pic_for_readme/vitis_launch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_launch.png
--------------------------------------------------------------------------------
/pic_for_readme/vitis_lib_settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_lib_settings.png
--------------------------------------------------------------------------------
/pic_for_readme/vitis_linux_config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vitis_linux_config.png
--------------------------------------------------------------------------------
/pic_for_readme/vivado_platform_connection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vivado_platform_connection.png
--------------------------------------------------------------------------------
/pic_for_readme/vivado_project_summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gewuek/vitis_ai_custom_platform_flow/767782e6dbeb09d981f7d32a99ae7be008d59034/pic_for_readme/vivado_project_summary.png
--------------------------------------------------------------------------------
/ref_files/dynamic_postlink.tcl:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2019 Xilinx Inc.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | #
16 |
17 | # Generate an empty _post_sys_link_gen_constrs.xdc file
18 | # -------------------------------------------------------------------------
19 | set fd [open "./_post_sys_link_gen_constrs.xdc" w]
20 | puts $fd "# No content"
21 | close $fd
22 |
23 | # Set SLR_ASSIGNMENTS to SLR1 for the host path to MemSS S00_AXI
24 | # -------------------------------------------------------------------------
25 | ####set_property CONFIG.SLR_ASSIGNMENTS SLR1 [get_bd_cells axi_vip_data] //no SLR
26 |
27 | # Connect available interrupt pins on compute units to the interrupt vector
28 | # -------------------------------------------------------------------------
29 |
30 | # The wiring proc takes in the CU's interrupt BD pin and the overall interrupt index
31 | proc wire_cu_to_xlconcat_intr {__cu_inst_intr_pin __intr_pin_num} {
32 | # Set number of xlconcat blocks and number of interrupts per block
33 | set __num_xlconcat 4
34 | set __num_pin_per_xlconcat 32
35 |
36 | # Get the xlconcat instance and pin number to work on now
37 | set __xlconcat_inst_num [expr {$__intr_pin_num / $__num_pin_per_xlconcat}]
38 | set __xlconcat_pin_num [expr {$__intr_pin_num - ($__xlconcat_inst_num * $__num_pin_per_xlconcat)}]
39 |
40 | # Ensure that the xlconcat instance and its pin exist, then get those objects
41 | if {($__xlconcat_pin_num < $__num_pin_per_xlconcat) && ($__xlconcat_inst_num < $__num_xlconcat)} {
42 | set __xlconcat_inst [get_bd_cells -hierarchical -quiet -filter NAME=~xlconcat_interrupt_${__xlconcat_inst_num}]
43 | set __xlconcat_pin [get_bd_pins -of_objects $__xlconcat_inst -quiet -filter NAME=~In${__xlconcat_pin_num}]
44 |
45 | # If the xlconcat pin object exists, disconnect it from ground and connect the CU's interrupt BD pin to it
46 | if {[llength $__xlconcat_pin] == 1} {
47 | disconnect_bd_net /interrupt_concat/xlconstant_gnd_dout $__xlconcat_pin -quiet
48 | connect_bd_net $__cu_inst_intr_pin $__xlconcat_pin -quiet
49 | } else {
50 | puts "(Post-linking XSA Tcl hook) No available xlconcat pins found"
51 | }
52 | } else {
53 | puts "(Post-linking XSA Tcl hook) No remaining xlconcat pins to connect to"
54 | }
55 | }
56 |
57 | # Make sure the kernel key in the config_info dict exists
58 | if {[dict exists $config_info kernels]} {
59 | # Make sure that list of CUs is populated
60 | set __cu_list [dict get $config_info kernels]
61 | if {[llength $__cu_list] > 0} {
62 | # Translate the list of CUs to a list of BD cells
63 | set __cu_inst_list {}
64 | foreach __cu_inst $__cu_list {
65 | set str [get_bd_cells -quiet -filter "VLNV=~*:*:${__cu_inst}:*"]
66 | foreach name [split $str " "] {
67 | lappend __cu_inst_list $name
68 | }
69 | }
70 |
71 | set __cu_inst_addr_list {}
72 | # Sort the list of CUs by offset address
73 | foreach __cu_bd_cell $__cu_inst_list {
74 | set __cu_bd_cell_sub [string range $__cu_bd_cell 1 [string length $__cu_bd_cell]]
75 | set __cu_bd_cell_segs [get_bd_addr_segs -of_objects [get_bd_addr_spaces ps_*] -filter "NAME =~ *${__cu_bd_cell_sub}_*"]
76 | if {[llength ${__cu_bd_cell_segs}] > 0} {
77 | set __cu_offset [get_property OFFSET [get_bd_addr_segs -of_objects [get_bd_addr_spaces ps_*] -filter "NAME =~ *${__cu_bd_cell_sub}_*"]]
78 | lappend __cu_inst_addr_list "$__cu_bd_cell $__cu_offset"
79 | }
80 | }
81 |
82 | if {[llength $__cu_inst_addr_list] > 0} {
83 | # Order the list by increasing AXI-Lite address offsets, then extract just ordered BD cells
84 | set __cu_inst_list {}
85 | unset __cu_inst_list
86 | set __cu_inst_addr_list_ordered [lsort -index 1 $__cu_inst_addr_list]
87 | foreach __cu_pair $__cu_inst_addr_list_ordered {
88 | lappend __cu_inst_list [lindex $__cu_pair 0]
89 | }
90 | }
91 |
92 | # Make sure the list of BD cells is populated
93 | if {[llength $__cu_inst_list] > 0} {
94 | # Of the BD cells, iterate through those with an interrupt BD pin
95 | set __intr_pin_num 0
96 | foreach __cu_inst_intr $__cu_inst_list {
97 | set __cu_inst_intr_pin [get_bd_pins -of_objects [get_bd_cells $__cu_inst_intr] -quiet -filter "TYPE=~intr"]
98 | if {[llength $__cu_inst_intr_pin] == 1} {
99 | # When a BD cell has an interrupt BD pin, wire it to the next available xlconcat pin
100 | wire_cu_to_xlconcat_intr $__cu_inst_intr_pin $__intr_pin_num
101 | incr __intr_pin_num
102 | }
103 | }
104 | } else {
105 | puts "(Post-linking XSA Tcl hook) No BD cells found for interrupt wiring"
106 | }
107 | } else {
108 | puts "(Post-linking XSA Tcl hook) No CUs found for interrupt wiring"
109 | }
110 | } else {
111 | puts "(Post-linking XSA Tcl hook) No kernels key in config_info dict for interrupt wiring"
112 | }
113 |
--------------------------------------------------------------------------------
/ref_files/opencv/opencv_3.4.3.bbappend:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2019 Xilinx Inc.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | #
16 |
17 | PACKAGECONFIG_remove = " gstreamer"
18 | PACKAGECONFIG_append = " libav ffmpeg"
19 | PACKAGECONFIG[ffmpeg] = "-DWITH_FFMPEG=ON,-DWITH_FFMPEG=OFF,ffmpeg,"
20 |
21 | do_install_append() {
22 | rm ${D}/usr/share/OpenCV/haarcascades -rf
23 | rm ${D}/usr/share/OpenCV/lbpcascades -rf
24 | }
25 |
26 |
--------------------------------------------------------------------------------
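
A minimal sketch of how a bbappend like the one above is typically dropped into the PetaLinux project so it overrides the stock OpenCV 3.4.3 recipe; the destination subdirectory and `<plnx-proj-root>` placeholder are assumptions, not part of this repository.

```
# Copy the bbappend into the project's meta-user layer (paths are assumptions),
# then rebuild the OpenCV recipe so the new PACKAGECONFIG takes effect.
mkdir -p <plnx-proj-root>/project-spec/meta-user/recipes-support/opencv
cp ref_files/opencv/opencv_3.4.3.bbappend \
   <plnx-proj-root>/project-spec/meta-user/recipes-support/opencv/
petalinux-build -c opencv
```
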
/ref_files/petalinuxbsp.conf:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2019 Xilinx Inc.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | #
16 | #User Configuration
17 |
18 | #OE_TERMINAL = "tmux"
19 |
20 | # Add EXTRA_IMAGEDEPENDS default components
21 | EXTRA_IMAGEDEPENDS_append = " virtual/fsbl virtual/pmu-firmware arm-trusted-firmware qemu-devicetrees"
22 |
23 | # prevent U-Boot from deploying the boot.bin
24 | SPL_BINARY = ""
25 |
26 | #Remove all qemu contents
27 | IMAGE_CLASSES_remove = "image-types-xilinx-qemu qemuboot-xilinx"
28 | IMAGE_FSTYPES_remove = "wic.qemu-sd"
29 |
30 | EXTRA_IMAGEDEPENDS_remove = "qemu-helper-native virtual/boot-bin"
31 |
32 | PACKAGE_CLASSES = " package_deb"
33 |
--------------------------------------------------------------------------------
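
As a sketch (assuming the standard PetaLinux project layout), the lines above are appended to the project's existing user configuration file rather than used standalone; `<plnx-proj-root>` is a placeholder.

```
# Append the settings to the project's user configuration file.
cat ref_files/petalinuxbsp.conf >> <plnx-proj-root>/project-spec/meta-user/conf/petalinuxbsp.conf
```
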
/ref_files/src/dputils.cpp:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2019 Xilinx Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | #include <dnndk/dnndk.h>
18 | #include "dputils.h"
19 |
20 | using namespace std;
21 | using namespace cv;
22 | #define N2CUBE_SUCCESS 0
23 | #define USE_NEON_OPT
24 | /**
25 | * @brief Set image into DPU Task's input tensor, multiple IO supported.
26 | *
27 | * @note source data must be in Caffe order: channel, height, width;
28 | * source data type must be int8_t;
29 | * source data will be converted from Caffe order to DPU order
30 | *
31 | * @param task - pointer to DPU task
32 | * @param nodeName - pointer to DPU Node name
33 | * @param image - input image in OpenCV Mat format. Single-channel and 3-channel input images are supported
34 | * Note: Only support CV_8U, please modify the code for other types.
35 | * @param mean - pointer to mean value array which contains 1 member for single channel input image or 3 members for 3-channel input image
36 | * Note: You can get the mean values from the input Caffe prototxt. At present, the format of mean value file is not yet supported.
37 | * @param scale - scale value of input image
38 | * @param idx - the index of a single input tensor for the Node, with default value as 0
39 | *
40 | * @return 0 on success, or negative error ID in case of failure.
41 | */
42 | int dpuSetInputImageWithScale(DPUTask *task, const char* nodeName, const cv::Mat &image, float *mean, float scale, int idx)
43 | {
44 | int value;
45 | int8_t *inputAddr;
46 | unsigned char *resized_data;
47 | cv::Mat newImage;
48 | float scaleFix;
49 | int height, width, channel;
50 |
51 | height = dpuGetInputTensorHeight(task, nodeName, idx);
52 | width = dpuGetInputTensorWidth(task, nodeName, idx);
53 | channel = dpuGetInputTensorChannel(task, nodeName, idx);
54 |
55 | if (height == image.rows && width == image.cols) {
56 | newImage = image;
57 | } else {
58 | newImage = cv::Mat (height, width, CV_8SC3,
59 | (void*)dpuGetInputTensorAddress(task, nodeName, idx));
60 | cv::resize(image, newImage, newImage.size(), 0, 0, cv::INTER_LINEAR);
61 | }
62 | resized_data = newImage.data;
63 |
64 | inputAddr = dpuGetInputTensorAddress(task, nodeName, idx);
65 | scaleFix = dpuGetInputTensorScale(task, nodeName, idx);
66 |
67 | scaleFix = scaleFix*scale;
68 |
69 | if (newImage.channels() == 1) {
70 | for (int idx_h=0; idx_h<newImage.rows; idx_h++) {
71 | for (int idx_w=0; idx_w<newImage.cols; idx_w++) {
72 | for (int idx_c=0; idx_c<newImage.channels(); idx_c++) {
73 | value = *(resized_data+idx_h*newImage.cols*newImage.channels()+idx_w*newImage.channels()+idx_c);
74 | value = (int)((value - *(mean+idx_c)) * scaleFix);
75 | inputAddr[idx_h*newImage.cols+idx_w] = (char)value;
76 | }
77 | }
78 | }
79 | } else {
80 | #ifdef USE_NEON_OPT
81 | dpuProcessNormalizion(inputAddr, newImage.data, newImage.rows, newImage.cols, mean, scaleFix, newImage.step1());
82 | #else
83 | for (int idx_h=0; idx_h<newImage.rows; idx_h++) {
84 | for (int idx_w=0; idx_w<newImage.cols; idx_w++) {
85 | for (int idx_c=0; idx_c<3; idx_c++) {
86 | value = (int)((newImage.at<Vec3b>(idx_h, idx_w)[idx_c] - mean[idx_c]) * scaleFix);
87 | inputAddr[idx_h*newImage.cols*3+idx_w*3+idx_c] = (char)value;
88 | }
89 | }
90 | }
91 | #endif
92 | }
93 |
94 | return N2CUBE_SUCCESS;
95 | }
96 |
97 |
98 | /**
99 | * @brief Set image into DPU Task's input tensor with mean values, multiple IO supported.
100 | *
101 | * @note source data must be in Caffe order: channel, height, width;
102 | * source data type must be int8_t;
103 | * source data will be converted from Caffe order to DPU order
104 | *
105 | * @param task - pointer to DPU task
106 | * @param nodeName - pointer to DPU Node name
107 | * @param image - input image in OpenCV Mat format. Single-channel and 3-channel input images are supported
108 | * Note: Only support CV_8U, please modify the code for other types.
109 | * @param mean - pointer to mean value array which contains 1 member for single channel input image or 3 members for 3-channel input image
110 | * Note: You can get the mean values from the input Caffe prototxt. At present, the format of mean value file is not yet supported
111 | * @param idx - the index of a single input tensor for the Node, with default value as 0
112 | *
113 | * @return 0 on success, or negative error ID in case of failure.
114 | */
115 | int dpuSetInputImage(DPUTask *task, const char* nodeName, const cv::Mat &image, float *mean, int idx)
116 | {
117 |
118 | return dpuSetInputImageWithScale(task, nodeName, image, mean, 1.0f, idx);
119 | }
120 |
121 |
122 | /**
123 | * @brief Set image into DPU Task's input tensor without mean values, multiple IO supported.
124 | *
125 | * @note source data must be in Caffe order: channel, height, width;
126 | * source data type must be int8_t;
127 | * source data will be converted from Caffe order to DPU order
128 | *
129 | * @param task - pointer to DPU task
130 | * @param nodeName - pointer to DPU Node name
131 | * @param image - input image in OpenCV Mat format. Single-channel and 3-channel input images are supported
132 | * Note: Only support CV_8U, please modify the code for other types.
133 | * @param idx - the index of a single input tensor for the Node, with default value as 0
134 | *
135 | * @return 0 on success, or negative error ID in case of failure.
136 | */
137 | int dpuSetInputImage2(DPUTask *task, const char* nodeName, const cv::Mat &image, int idx)
138 | {
139 | float mean[3];
140 |
141 | dpuGetKernelMean(task,mean,image.channels()); //This API is only available for Caffe model
142 | return dpuSetInputImageWithScale(task, nodeName, image, mean, 1.0f, idx);
143 | }
144 |
145 |
--------------------------------------------------------------------------------
/ref_files/src/dputils.h:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2019 Xilinx Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | #ifndef _DPUTILS_H_
18 | #define _DPUTILS_H_
19 |
20 | #include <opencv2/opencv.hpp>
21 |
22 | struct dpu_task;
23 | typedef struct dpu_task DPUTask;
24 |
25 |
26 | /* Set image into DPU Task's input Tensor */
27 | int dpuSetInputImage(DPUTask *task, const char *nodeName,
28 | const cv::Mat &image, float *mean, int idx = 0);
29 |
30 | /* Set image into DPU Task's input Tensor with a specified scale parameter */
31 | int dpuSetInputImageWithScale(DPUTask *task, const char *nodeName,
32 | const cv::Mat &image, float *mean, float scale, int idx = 0);
33 |
34 | /* Set image into DPU Task's input Tensor (mean values automatically processed by N2Cube) */
35 | int dpuSetInputImage2(DPUTask *task, const char *nodeName, const cv::Mat &image, int idx = 0);
36 |
37 | #endif
38 |
--------------------------------------------------------------------------------
/ref_files/src/main.cpp:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2019 Xilinx Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | #include <assert.h>
18 | #include <dirent.h>
19 | #include <stdio.h>
20 | #include <stdlib.h>
21 | #include <string.h>
22 | #include <sys/stat.h>
23 | #include <unistd.h>
24 | #include <cmath>
25 | #include <fstream>
26 | #include <iomanip>
27 | #include <iostream>
28 | #include <queue>
29 | #include <string>
30 | #include <vector>
31 |
32 | #include <opencv2/opencv.hpp>
33 | /* header file for Vitis AI advanced APIs */
34 | #include <dnndk/dnndk.h>
35 |
36 | using namespace std;
37 | using namespace cv;
38 |
39 | #define KRENEL_CONV "resnet50_0"
40 |
41 | #define TASK_CONV_INPUT "resnet_v1_50_conv1_Conv2D"
42 | #define TASK_CONV_OUTPUT "resnet_v1_50_logits_Conv2D"
43 |
44 | const string baseImagePath = "../dataset/image500_640_480/";
45 |
46 | /* List the names of all images in the given path. */
47 | void ListImages(std::string const &path, std::vector<std::string> &images) {
48 | images.clear();
49 | struct dirent *entry;
50 |
51 | /*Check if path is a valid directory path. */
52 | struct stat s;
53 | lstat(path.c_str(), &s);
54 | if (!S_ISDIR(s.st_mode)) {
55 | fprintf(stderr, "Error: %s is not a valid directory!\n", path.c_str());
56 | exit(1);
57 | }
58 |
59 | DIR *dir = opendir(path.c_str());
60 | if (dir == nullptr) {
61 | fprintf(stderr, "Error: Open %s path failed.\n", path.c_str());
62 | exit(1);
63 | }
64 |
65 | while ((entry = readdir(dir)) != nullptr) {
66 | if (entry->d_type == DT_REG || entry->d_type == DT_UNKNOWN) {
67 | std::string name = entry->d_name;
68 | std::string ext = name.substr(name.find_last_of(".") + 1);
69 | if ((ext == "JPEG") || (ext == "jpeg") || (ext == "JPG") || (ext == "jpg") ||
70 | (ext == "bmp") || (ext == "PNG") || (ext == "png")) {
71 | images.push_back(name);
72 | }
73 | }
74 | }
75 |
76 | closedir(dir);
77 | }
78 |
79 | /*Load all kinds*/
80 | void LoadWords(std::string const &path, std::vector<std::string> &kinds) {
81 | kinds.clear();
82 | std::fstream fkinds(path);
83 | if (fkinds.fail()) {
84 | fprintf(stderr, "Error : Open %s failed.\n", path.c_str());
85 | exit(1);
86 | }
87 | std::string kind;
88 | while (getline(fkinds, kind)) {
89 | kinds.push_back(kind);
90 | }
91 |
92 | fkinds.close();
93 | }
94 |
95 | /**
96 | * @brief Get top k results according to its probability
97 | *
98 | * @param d - pointer to input data
99 | * @param size - size of input data
100 | * @param k - number of top results to report
101 | * @param vkinds - vector of kinds
102 | *
103 | * @return none
104 | */
105 | void TopK(const float *d, int size, int k, std::vector<std::string> &vkind) {
106 | assert(d && size > 0 && k > 0);
107 | std::priority_queue<std::pair<float, int>> q;
108 |
109 | for (auto i = 0; i < size; ++i) {
110 | q.push(std::pair<float, int>(d[i], i));
111 | }
112 |
113 | for (auto i = 0; i < k; ++i) {
114 | std::pair<float, int> ki = q.top();
115 | /* Note: For current tensorflow Resnet model, there are 1001 kinds.*/
116 | int real_ki = ki.second;
117 | fprintf(stdout, "top[%d] prob = %-8f name = %s\n", i, d[ki.second],
118 | vkind[real_ki].c_str());
119 | q.pop();
120 | }
121 | }
122 |
123 | void central_crop(const Mat& image, int height, int width, Mat& img) {
124 | int offset_h = (image.rows - height)/2;
125 | int offset_w = (image.cols - width)/2;
126 | Rect box(offset_w, offset_h, width, height);
127 | img = image(box);
128 | }
129 |
130 |
131 | void change_bgr(const Mat& image, int8_t* data, float scale, float* mean) {
132 | for(int i = 0; i < 3; ++i)
133 | for(int j = 0; j < image.rows; ++j)
134 | for(int k = 0; k < image.cols; ++k) {
135 | data[j*image.cols*3+k*3+2-i] = (image.at<Vec3b>(j,k)[i] - (int8_t)mean[i]) * scale;
136 | }
137 |
138 | }
139 |
140 | /**
141 | * @brief set input image
142 | *
143 | * @param task - pointer to Resnet50 CONV Task
144 | * @param input_node - input node of Resnet50
145 | * @param image - the input image
146 | * @param mean - mean of Resnet50
147 | *
148 | * @return none
149 | */
150 | inline void set_input_image(DPUTask *task, const string& input_node, const cv::Mat& image, float* mean){
151 | Mat cropped_img;
152 | DPUTensor* dpu_in = dpuGetInputTensor(task, input_node.c_str());
153 | float scale = dpuGetTensorScale(dpu_in);
154 | int width = dpuGetTensorWidth(dpu_in);
155 | int height = dpuGetTensorHeight(dpu_in);
156 | int size = dpuGetTensorSize(dpu_in);
157 | vector<int8_t> abc(size);
158 | central_crop(image, height, width, cropped_img);
159 |
160 | int8_t* data = dpuGetTensorAddress(dpu_in);
161 | change_bgr(cropped_img, data, scale, mean);
162 | }
163 |
164 | /**
165 | * @brief Run DPU CONV Task for Resnet50 and compute softmax and TOP-5 on the CPU
166 | *
167 | * @param taskConv - pointer to Resnet50 CONV Task
168 | *
169 | *
170 | * @return none
171 | */
172 | void runResnet(DPUTask *taskConv) {
173 | assert(taskConv);
174 |
175 | vector<string> kinds, images;
176 |
177 | /*Load all image names */
178 | ListImages(baseImagePath, images);
179 | if (images.size() == 0) {
180 | fprintf(stdout, "[debug] %s is the directory\n", baseImagePath.c_str());
181 | cerr << "\nError: No images exist in " << baseImagePath << endl;
182 | return;
183 | }
184 |
185 | /*Load all kinds words.*/
186 | LoadWords(baseImagePath + "words.txt", kinds);
187 |
188 | /* Get the output Tensor for Resnet50 Task */
189 | int8_t *outAddr = (int8_t *)dpuGetOutputTensorAddress(taskConv, TASK_CONV_OUTPUT);
190 | /* Get size of the output Tensor for Resnet50 Task */
191 | int size = dpuGetOutputTensorSize(taskConv, TASK_CONV_OUTPUT);
192 | /* Get channel count of the output Tensor for FC Task */
193 | int channel = dpuGetOutputTensorChannel(taskConv, TASK_CONV_OUTPUT);
194 | /* Get scale of the output Tensor for Resnet50 Task */
195 | float out_scale = dpuGetOutputTensorScale(taskConv, TASK_CONV_OUTPUT);
196 |
197 | float *softmax = new float[size];
198 |
199 | for (auto &image_name : images) {
200 | cout << "\nLoad image : " << image_name << endl;
201 | Mat image = imread(baseImagePath + image_name);
202 |
203 | /* Set image into Conv Task with mean value */
204 | float mean[3] = {0, 0, 0};
205 |
206 | set_input_image(taskConv, TASK_CONV_INPUT, image, mean);
207 |
208 | /* Run Resnet50 CONV part */
209 | cout << "\nRun Resnet50 CONV ..." << endl;
210 | dpuRunTask(taskConv);
211 |
212 | /* Calculate softmax on CPU and show TOP5 classification result */
213 | dpuRunSoftmax(outAddr, softmax, channel, size/channel, out_scale);
214 | TopK(softmax, channel, 5, kinds);
215 |
216 | /* Show the image */
217 | cv::imshow("Image", image);
218 | cv::waitKey(1);
219 | }
220 | delete[] softmax;
221 | }
222 |
223 | /**
224 | * @brief Entry point for running the ResNet50 neural network
225 | *
226 | * @note Vitis AI advanced APIs prefixed with "dpu" are used to easily program &
227 | * deploy ResNet50 on DPU platform.
228 | *
229 | */
230 | int main(int argc, char *argv[]) {
231 | /* DPU Kernels/Tasks for running Resnet50 */
232 | DPUKernel *kernelConv;
233 | DPUTask *taskConv;
234 | printf("Start the test\n\r");
235 |
236 | /* Attach to DPU driver and prepare for running */
237 | dpuOpen();
238 |
239 | /* Create DPU Kernels for CONV & FC Nodes in Resnet50 */
240 | kernelConv = dpuLoadKernel(KRENEL_CONV);
241 |
242 | /* Create DPU Tasks for CONV & FC Nodes in Resnet50 */
243 | taskConv = dpuCreateTask(kernelConv, 0);
244 |
245 | /* Run CONV & FC Kernels for Resnet50 */
246 | runResnet(taskConv);
247 |
248 | /* Destroy DPU Tasks & free resources */
249 | dpuDestroyTask(taskConv);
250 |
251 | /* Destroy DPU Kernels & free resources */
252 | dpuDestroyKernel(kernelConv);
253 |
254 | /* Detach from DPU driver & release resources */
255 | dpuClose();
256 |
257 | return 0;
258 | }
259 |
--------------------------------------------------------------------------------
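
For orientation, a rough command-line equivalent of the Vitis application build that compiles main.cpp and dputils.cpp against the cross sysroot; the sysroot path and the exact library list are assumptions (the Vitis GUI include/library settings described earlier in this tutorial remain the reference).

```
# Cross-compile the DPU test application. The DNNDK headers and libn2cube are
# expected inside the sysroot; <sysroot> is a placeholder for your Linux sysroot.
aarch64-linux-gnu-g++ -std=c++11 main.cpp dputils.cpp -o hello_dpu.exe \
    --sysroot=<sysroot> \
    -ln2cube \
    -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_highgui \
    -lpthread
```
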
/ref_files/src/prj_config:
--------------------------------------------------------------------------------
1 | # /*
2 | # * Copyright 2019 Xilinx Inc.
3 | # *
4 | # * Licensed under the Apache License, Version 2.0 (the "License");
5 | # * you may not use this file except in compliance with the License.
6 | # * You may obtain a copy of the License at
7 | # *
8 | # * http://www.apache.org/licenses/LICENSE-2.0
9 | # *
10 | # * Unless required by applicable law or agreed to in writing, software
11 | # * distributed under the License is distributed on an "AS IS" BASIS,
12 | # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # * See the License for the specific language governing permissions and
14 | # * limitations under the License.
15 | # */
16 |
17 |
18 | [clock]
19 |
20 | id=0:dpu_xrt_top_1.aclk
21 | id=1:dpu_xrt_top_1.ap_clk_2
22 | id=0:dpu_xrt_top_2.aclk
23 | id=1:dpu_xrt_top_2.ap_clk_2
24 |
25 | [connectivity]
26 |
27 | sp=dpu_xrt_top_1.M_AXI_GP0:HPC0
28 | sp=dpu_xrt_top_1.M_AXI_HP0:HP0
29 | sp=dpu_xrt_top_1.M_AXI_HP2:HP1
30 | sp=dpu_xrt_top_2.M_AXI_GP0:HPC1
31 | sp=dpu_xrt_top_2.M_AXI_HP0:HP2
32 | sp=dpu_xrt_top_2.M_AXI_HP2:HP3
33 |
34 |
35 |
36 | [advanced]
37 | misc=:solution_name=link
38 | param=compiler.addOutputTypes=sd_card
39 |
40 | #param=compiler.skipTimingCheckAndFrequencyScaling=1
41 |
42 | [vivado]
43 | prop=run.impl_1.strategy=Performance_Explore
44 | #param=place.runPartPlacer=0
45 |
46 |
--------------------------------------------------------------------------------
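
A hedged sketch of how a configuration file like prj_config is consumed at link time: the Vitis GUI runs the equivalent v++ link step when prj_config is set as the V++ configuration file, and the platform/kernel file names below are placeholders rather than this tutorial's exact artifacts.

```
# Link the DPU kernel instances into an xclbin, applying the [clock] and
# [connectivity] settings from prj_config (file names are placeholders).
v++ --link --target hw \
    --platform <custom_platform>.xpfm \
    --config prj_config \
    -o dpu.xclbin \
    <dpu_kernel>.xo
```
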
/ref_files/system-user.dtsi:
--------------------------------------------------------------------------------
1 | /*
2 | *
3 | * Copyright 2019 Xilinx Inc.
4 | *
5 | * Licensed under the Apache License, Version 2.0 (the "License");
6 | * you may not use this file except in compliance with the License.
7 | * You may obtain a copy of the License at
8 | *
9 | * http://www.apache.org/licenses/LICENSE-2.0
10 | *
11 | * Unless required by applicable law or agreed to in writing, software
12 | * distributed under the License is distributed on an "AS IS" BASIS,
13 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 | * See the License for the specific language governing permissions and
15 | * limitations under the License.
16 | */
17 |
18 | /include/ "system-conf.dtsi"
19 | / {
20 | };
21 | &amba {
22 | zyxclmm_drm {
23 | compatible = "xlnx,zocl";
24 | status = "okay";
25 | interrupt-parent = <&axi_intc_0>;
26 | interrupts = <0 4>, <1 4>, <2 4>, <3 4>,
27 | <4 4>, <5 4>, <6 4>, <7 4>,
28 | <8 4>, <9 4>, <10 4>, <11 4>,
29 | <12 4>, <13 4>, <14 4>, <15 4>,
30 | <16 4>, <17 4>, <18 4>, <19 4>,
31 | <20 4>, <21 4>, <22 4>, <23 4>,
32 | <24 4>, <25 4>, <26 4>, <27 4>,
33 | <28 4>, <29 4>, <30 4>, <31 4>;
34 | };
35 | };
36 |
37 |
--------------------------------------------------------------------------------
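
A brief sketch, assuming the standard PetaLinux project layout, of where this device-tree fragment lives and how to rebuild the device tree after replacing it; the fragment registers the zocl driver node and its interrupts against the platform's AXI interrupt controller.

```
# Replace the project's user device-tree fragment and rebuild it
# (<plnx-proj-root> is a placeholder for your PetaLinux project root).
cp ref_files/system-user.dtsi \
   <plnx-proj-root>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
petalinux-build -c device-tree
```
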
/ref_files/xsa.tcl:
--------------------------------------------------------------------------------
1 | #
2 | # Copyright 2019 Xilinx Inc.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | #
11 | # Unless required by applicable law or agreed to in writing, software
12 | # distributed under the License is distributed on an "AS IS" BASIS,
13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 | # See the License for the specific language governing permissions and
15 | # limitations under the License.
16 | #
17 |
18 | # Set the platform design intent properties
19 | set_property platform.design_intent.embedded true [current_project]
20 | set_property platform.design_intent.server_managed false [current_project]
21 | set_property platform.design_intent.external_host false [current_project]
22 | set_property platform.design_intent.datacenter false [current_project]
23 |
24 | get_property platform.design_intent.embedded [current_project]
25 | get_property platform.design_intent.server_managed [current_project]
26 | get_property platform.design_intent.external_host [current_project]
27 | get_property platform.design_intent.datacenter [current_project]
28 |
29 | # Set the platform default output type property
30 | set_property platform.default_output_type "sd_card" [current_project]
31 |
32 | get_property platform.default_output_type [current_project]
33 |
34 | # Add the platform property to use dynamic_postlink.tcl during the v++ link
35 | set_property platform.post_sys_link_tcl_hook ./dynamic_postlink.tcl [current_project]
36 |
--------------------------------------------------------------------------------
/version.md:
--------------------------------------------------------------------------------
1 | This design uses:
2 | Vivado 2019.2
3 | PetaLinux 2019.2
4 | Vitis 2019.2
5 | Vitis AI 1.1
6 |
--------------------------------------------------------------------------------