├── .gitignore
├── LICENSE
├── README.md
├── file_root
│   ├── _modules
│   │   ├── glance.py
│   │   ├── ini_manage.py
│   │   ├── keystone.py
│   │   ├── linux_lvm.py
│   │   ├── neutron.py
│   │   ├── parted.py
│   │   └── parted_free_disks.py
│   ├── _states
│   │   ├── glance.py
│   │   ├── ini_manage.py
│   │   ├── keystone.py
│   │   ├── lvm.py
│   │   └── neutron.py
│   ├── cinder
│   │   ├── init.sls
│   │   └── volume.sls
│   ├── cluster
│   │   ├── physical_networks.jinja
│   │   ├── resources.jinja
│   │   └── volumes.jinja
│   ├── generics
│   │   ├── files.sls
│   │   ├── headers.sls
│   │   ├── hosts.sls
│   │   ├── proxy.sls
│   │   ├── repo.sls
│   │   ├── repo.sls~
│   │   └── system_update.sls
│   ├── glance
│   │   ├── images.sls
│   │   └── init.sls
│   ├── horizon
│   │   ├── init.sls
│   │   ├── secure.sls
│   │   └── secure.sls~
│   ├── keystone
│   │   ├── init.sls
│   │   ├── openstack_services.sls
│   │   ├── openstack_tenants.sls
│   │   └── openstack_users.sls
│   ├── mysql
│   │   ├── client.sls
│   │   ├── init.sls
│   │   └── openstack_dbschema.sls
│   ├── neutron
│   │   ├── guest_mtu.sls
│   │   ├── guest_mtu.sls~
│   │   ├── init.sls
│   │   ├── ml2.sls
│   │   ├── ml2.sls~
│   │   ├── networks.sls
│   │   ├── openvswitch.sls
│   │   ├── routers.sls
│   │   ├── security_groups.sls
│   │   ├── service.sls
│   │   └── service.sls~
│   ├── nova
│   │   ├── compute_kvm.sls
│   │   └── init.sls
│   ├── postinstall
│   │   ├── misc_options.sls
│   │   └── misc_options.sls~
│   ├── queue
│   │   ├── rabbit.sls
│   │   └── rabbit.sls~
│   └── top.sls
└── pillar_root
    ├── Arch.sls
    ├── Debian.sls
    ├── Debian_repo.sls
    ├── Ubuntu.sls
    ├── Ubuntu_repo.sls
    ├── access_resources.sls
    ├── cluster_resources.sls
    ├── db_resources.sls
    ├── deploy_files.sls
    ├── machine_images.sls
    ├── misc_openstack_options.sls
    ├── network_resources.sls
    ├── openstack_cluster.sls
    ├── to_do.sls
    └── top.sls
/.gitignore:
--------------------------------------------------------------------------------
1 | # Compiled source #
2 | ###################
3 | *.com
4 | *.class
5 | *.dll
6 | *.exe
7 | *.o
8 | *.so
9 | *.pyc
10 |
11 | # Packages #
12 | ############
13 | # it's better to unpack these files and commit the raw source
14 | # git has its own built in compression methods
15 | *.7z
16 | *.dmg
17 | *.gz
18 | *.iso
19 | *.jar
20 | *.rar
21 | *.tar
22 | *.zip
23 |
24 | # Logs and databases #
25 | ######################
26 | *.log
27 | *.sql
28 | *.sqlite
29 |
30 | # OS generated files #
31 | ######################
32 | .DS_Store
33 | .DS_Store?
34 | ._*
35 | .Spotlight-V100
36 | .Trashes
37 | ehthumbs.db
38 | Thumbs.db
39 | *~
40 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 2, June 1991
3 |
4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
6 | Everyone is permitted to copy and distribute verbatim copies
7 | of this license document, but changing it is not allowed.
8 |
9 | Preamble
10 |
11 | The licenses for most software are designed to take away your
12 | freedom to share and change it. By contrast, the GNU General Public
13 | License is intended to guarantee your freedom to share and change free
14 | software--to make sure the software is free for all its users. This
15 | General Public License applies to most of the Free Software
16 | Foundation's software and to any other program whose authors commit to
17 | using it. (Some other Free Software Foundation software is covered by
18 | the GNU Lesser General Public License instead.) You can apply it to
19 | your programs, too.
20 |
21 | When we speak of free software, we are referring to freedom, not
22 | price. Our General Public Licenses are designed to make sure that you
23 | have the freedom to distribute copies of free software (and charge for
24 | this service if you wish), that you receive source code or can get it
25 | if you want it, that you can change the software or use pieces of it
26 | in new free programs; and that you know you can do these things.
27 |
28 | To protect your rights, we need to make restrictions that forbid
29 | anyone to deny you these rights or to ask you to surrender the rights.
30 | These restrictions translate to certain responsibilities for you if you
31 | distribute copies of the software, or if you modify it.
32 |
33 | For example, if you distribute copies of such a program, whether
34 | gratis or for a fee, you must give the recipients all the rights that
35 | you have. You must make sure that they, too, receive or can get the
36 | source code. And you must show them these terms so they know their
37 | rights.
38 |
39 | We protect your rights with two steps: (1) copyright the software, and
40 | (2) offer you this license which gives you legal permission to copy,
41 | distribute and/or modify the software.
42 |
43 | Also, for each author's protection and ours, we want to make certain
44 | that everyone understands that there is no warranty for this free
45 | software. If the software is modified by someone else and passed on, we
46 | want its recipients to know that what they have is not the original, so
47 | that any problems introduced by others will not reflect on the original
48 | authors' reputations.
49 |
50 | Finally, any free program is threatened constantly by software
51 | patents. We wish to avoid the danger that redistributors of a free
52 | program will individually obtain patent licenses, in effect making the
53 | program proprietary. To prevent this, we have made it clear that any
54 | patent must be licensed for everyone's free use or not licensed at all.
55 |
56 | The precise terms and conditions for copying, distribution and
57 | modification follow.
58 |
59 | GNU GENERAL PUBLIC LICENSE
60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
61 |
62 | 0. This License applies to any program or other work which contains
63 | a notice placed by the copyright holder saying it may be distributed
64 | under the terms of this General Public License. The "Program", below,
65 | refers to any such program or work, and a "work based on the Program"
66 | means either the Program or any derivative work under copyright law:
67 | that is to say, a work containing the Program or a portion of it,
68 | either verbatim or with modifications and/or translated into another
69 | language. (Hereinafter, translation is included without limitation in
70 | the term "modification".) Each licensee is addressed as "you".
71 |
72 | Activities other than copying, distribution and modification are not
73 | covered by this License; they are outside its scope. The act of
74 | running the Program is not restricted, and the output from the Program
75 | is covered only if its contents constitute a work based on the
76 | Program (independent of having been made by running the Program).
77 | Whether that is true depends on what the Program does.
78 |
79 | 1. You may copy and distribute verbatim copies of the Program's
80 | source code as you receive it, in any medium, provided that you
81 | conspicuously and appropriately publish on each copy an appropriate
82 | copyright notice and disclaimer of warranty; keep intact all the
83 | notices that refer to this License and to the absence of any warranty;
84 | and give any other recipients of the Program a copy of this License
85 | along with the Program.
86 |
87 | You may charge a fee for the physical act of transferring a copy, and
88 | you may at your option offer warranty protection in exchange for a fee.
89 |
90 | 2. You may modify your copy or copies of the Program or any portion
91 | of it, thus forming a work based on the Program, and copy and
92 | distribute such modifications or work under the terms of Section 1
93 | above, provided that you also meet all of these conditions:
94 |
95 | a) You must cause the modified files to carry prominent notices
96 | stating that you changed the files and the date of any change.
97 |
98 | b) You must cause any work that you distribute or publish, that in
99 | whole or in part contains or is derived from the Program or any
100 | part thereof, to be licensed as a whole at no charge to all third
101 | parties under the terms of this License.
102 |
103 | c) If the modified program normally reads commands interactively
104 | when run, you must cause it, when started running for such
105 | interactive use in the most ordinary way, to print or display an
106 | announcement including an appropriate copyright notice and a
107 | notice that there is no warranty (or else, saying that you provide
108 | a warranty) and that users may redistribute the program under
109 | these conditions, and telling the user how to view a copy of this
110 | License. (Exception: if the Program itself is interactive but
111 | does not normally print such an announcement, your work based on
112 | the Program is not required to print an announcement.)
113 |
114 | These requirements apply to the modified work as a whole. If
115 | identifiable sections of that work are not derived from the Program,
116 | and can be reasonably considered independent and separate works in
117 | themselves, then this License, and its terms, do not apply to those
118 | sections when you distribute them as separate works. But when you
119 | distribute the same sections as part of a whole which is a work based
120 | on the Program, the distribution of the whole must be on the terms of
121 | this License, whose permissions for other licensees extend to the
122 | entire whole, and thus to each and every part regardless of who wrote it.
123 |
124 | Thus, it is not the intent of this section to claim rights or contest
125 | your rights to work written entirely by you; rather, the intent is to
126 | exercise the right to control the distribution of derivative or
127 | collective works based on the Program.
128 |
129 | In addition, mere aggregation of another work not based on the Program
130 | with the Program (or with a work based on the Program) on a volume of
131 | a storage or distribution medium does not bring the other work under
132 | the scope of this License.
133 |
134 | 3. You may copy and distribute the Program (or a work based on it,
135 | under Section 2) in object code or executable form under the terms of
136 | Sections 1 and 2 above provided that you also do one of the following:
137 |
138 | a) Accompany it with the complete corresponding machine-readable
139 | source code, which must be distributed under the terms of Sections
140 | 1 and 2 above on a medium customarily used for software interchange; or,
141 |
142 | b) Accompany it with a written offer, valid for at least three
143 | years, to give any third party, for a charge no more than your
144 | cost of physically performing source distribution, a complete
145 | machine-readable copy of the corresponding source code, to be
146 | distributed under the terms of Sections 1 and 2 above on a medium
147 | customarily used for software interchange; or,
148 |
149 | c) Accompany it with the information you received as to the offer
150 | to distribute corresponding source code. (This alternative is
151 | allowed only for noncommercial distribution and only if you
152 | received the program in object code or executable form with such
153 | an offer, in accord with Subsection b above.)
154 |
155 | The source code for a work means the preferred form of the work for
156 | making modifications to it. For an executable work, complete source
157 | code means all the source code for all modules it contains, plus any
158 | associated interface definition files, plus the scripts used to
159 | control compilation and installation of the executable. However, as a
160 | special exception, the source code distributed need not include
161 | anything that is normally distributed (in either source or binary
162 | form) with the major components (compiler, kernel, and so on) of the
163 | operating system on which the executable runs, unless that component
164 | itself accompanies the executable.
165 |
166 | If distribution of executable or object code is made by offering
167 | access to copy from a designated place, then offering equivalent
168 | access to copy the source code from the same place counts as
169 | distribution of the source code, even though third parties are not
170 | compelled to copy the source along with the object code.
171 |
172 | 4. You may not copy, modify, sublicense, or distribute the Program
173 | except as expressly provided under this License. Any attempt
174 | otherwise to copy, modify, sublicense or distribute the Program is
175 | void, and will automatically terminate your rights under this License.
176 | However, parties who have received copies, or rights, from you under
177 | this License will not have their licenses terminated so long as such
178 | parties remain in full compliance.
179 |
180 | 5. You are not required to accept this License, since you have not
181 | signed it. However, nothing else grants you permission to modify or
182 | distribute the Program or its derivative works. These actions are
183 | prohibited by law if you do not accept this License. Therefore, by
184 | modifying or distributing the Program (or any work based on the
185 | Program), you indicate your acceptance of this License to do so, and
186 | all its terms and conditions for copying, distributing or modifying
187 | the Program or works based on it.
188 |
189 | 6. Each time you redistribute the Program (or any work based on the
190 | Program), the recipient automatically receives a license from the
191 | original licensor to copy, distribute or modify the Program subject to
192 | these terms and conditions. You may not impose any further
193 | restrictions on the recipients' exercise of the rights granted herein.
194 | You are not responsible for enforcing compliance by third parties to
195 | this License.
196 |
197 | 7. If, as a consequence of a court judgment or allegation of patent
198 | infringement or for any other reason (not limited to patent issues),
199 | conditions are imposed on you (whether by court order, agreement or
200 | otherwise) that contradict the conditions of this License, they do not
201 | excuse you from the conditions of this License. If you cannot
202 | distribute so as to satisfy simultaneously your obligations under this
203 | License and any other pertinent obligations, then as a consequence you
204 | may not distribute the Program at all. For example, if a patent
205 | license would not permit royalty-free redistribution of the Program by
206 | all those who receive copies directly or indirectly through you, then
207 | the only way you could satisfy both it and this License would be to
208 | refrain entirely from distribution of the Program.
209 |
210 | If any portion of this section is held invalid or unenforceable under
211 | any particular circumstance, the balance of the section is intended to
212 | apply and the section as a whole is intended to apply in other
213 | circumstances.
214 |
215 | It is not the purpose of this section to induce you to infringe any
216 | patents or other property right claims or to contest validity of any
217 | such claims; this section has the sole purpose of protecting the
218 | integrity of the free software distribution system, which is
219 | implemented by public license practices. Many people have made
220 | generous contributions to the wide range of software distributed
221 | through that system in reliance on consistent application of that
222 | system; it is up to the author/donor to decide if he or she is willing
223 | to distribute software through any other system and a licensee cannot
224 | impose that choice.
225 |
226 | This section is intended to make thoroughly clear what is believed to
227 | be a consequence of the rest of this License.
228 |
229 | 8. If the distribution and/or use of the Program is restricted in
230 | certain countries either by patents or by copyrighted interfaces, the
231 | original copyright holder who places the Program under this License
232 | may add an explicit geographical distribution limitation excluding
233 | those countries, so that distribution is permitted only in or among
234 | countries not thus excluded. In such case, this License incorporates
235 | the limitation as if written in the body of this License.
236 |
237 | 9. The Free Software Foundation may publish revised and/or new versions
238 | of the General Public License from time to time. Such new versions will
239 | be similar in spirit to the present version, but may differ in detail to
240 | address new problems or concerns.
241 |
242 | Each version is given a distinguishing version number. If the Program
243 | specifies a version number of this License which applies to it and "any
244 | later version", you have the option of following the terms and conditions
245 | either of that version or of any later version published by the Free
246 | Software Foundation. If the Program does not specify a version number of
247 | this License, you may choose any version ever published by the Free Software
248 | Foundation.
249 |
250 | 10. If you wish to incorporate parts of the Program into other free
251 | programs whose distribution conditions are different, write to the author
252 | to ask for permission. For software which is copyrighted by the Free
253 | Software Foundation, write to the Free Software Foundation; we sometimes
254 | make exceptions for this. Our decision will be guided by the two goals
255 | of preserving the free status of all derivatives of our free software and
256 | of promoting the sharing and reuse of software generally.
257 |
258 | NO WARRANTY
259 |
260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
268 | REPAIR OR CORRECTION.
269 |
270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
278 | POSSIBILITY OF SUCH DAMAGES.
279 |
280 | END OF TERMS AND CONDITIONS
281 |
282 | How to Apply These Terms to Your New Programs
283 |
284 | If you develop a new program, and you want it to be of the greatest
285 | possible use to the public, the best way to achieve this is to make it
286 | free software which everyone can redistribute and change under these terms.
287 |
288 | To do so, attach the following notices to the program. It is safest
289 | to attach them to the start of each source file to most effectively
290 | convey the exclusion of warranty; and each file should have at least
291 | the "copyright" line and a pointer to where the full notice is found.
292 |
293 | {description}
294 | Copyright (C) {year} {fullname}
295 |
296 | This program is free software; you can redistribute it and/or modify
297 | it under the terms of the GNU General Public License as published by
298 | the Free Software Foundation; either version 2 of the License, or
299 | (at your option) any later version.
300 |
301 | This program is distributed in the hope that it will be useful,
302 | but WITHOUT ANY WARRANTY; without even the implied warranty of
303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
304 | GNU General Public License for more details.
305 |
306 | You should have received a copy of the GNU General Public License along
307 | with this program; if not, write to the Free Software Foundation, Inc.,
308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
309 |
310 | Also add information on how to contact you by electronic and paper mail.
311 |
312 | If the program is interactive, make it output a short notice like this
313 | when it starts in an interactive mode:
314 |
315 | Gnomovision version 69, Copyright (C) year name of author
316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
317 | This is free software, and you are welcome to redistribute it
318 | under certain conditions; type `show c' for details.
319 |
320 | The hypothetical commands `show w' and `show c' should show the appropriate
321 | parts of the General Public License. Of course, the commands you use may
322 | be called something other than `show w' and `show c'; they could even be
323 | mouse-clicks or menu items--whatever suits your program.
324 |
325 | You should also get your employer (if you work as a programmer) or your
326 | school, if any, to sign a "copyright disclaimer" for the program, if
327 | necessary. Here is a sample; alter the names:
328 |
329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program
330 | `Gnomovision' (which makes passes at compilers) written by James Hacker.
331 |
332 | {signature of Ty Coon}, 1 April 1989
333 | Ty Coon, President of Vice
334 |
335 | This General Public License does not permit incorporating your program into
336 | proprietary programs. If your program is a subroutine library, you may
337 | consider it more useful to permit linking proprietary applications with the
338 | library. If this is what you want to do, use the GNU Lesser General
339 | Public License instead of this License.
340 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | About the project
2 | =================
3 | The [project](https://github.com/Akilesh1597/salt-openstack/ "Openstack-Automation") gives you a working OpenStack cluster in a matter of minutes. We do this using [saltstack](http://docs.saltstack.com/ "Saltstack"). There is almost no coding involved and the setup can be easily maintained. Above all, it is as easy as talking to your servers and asking them to configure themselves.
4 |
5 | Saltstack provides an infrastructure management framework that makes our job easier. It supports most of the tasks you would want to perform while installing OpenStack, and more.
6 |
7 | A few reasons why this is cool
8 |
9 | 1. Don't just install OpenStack but also keep your installation formulae versioned, so you can always go back to the last working set.
10 | 2. Salt formulae are not meant to just install. They can also serve to document the steps and the settings involved.
11 | 3. OpenStack has a release cycle of 6 months and every six months you just have to tinker with a few text files to stay in the game.
12 | 4. OpenStack has an ever increasing list of sub projects and we have an ever increasing number of formulae.
13 |
14 | And we are opensource.
15 |
16 | What is New
17 | ===========
18 | 1. Support for creating glance images using the glance state and module
19 | 2. Support for cinder added. Check the 'Cinder Volume' section of the README.
20 | 3. Support for all Linux distros added. Check the 'Packages, Services, Config files and Repositories' section of the README.
21 | 4. Partial support for 'single server single nic' installations. Check section 'Single Interface Scenario' for details.
22 | 5. Pillar data has been segregated into multiple files according to its purpose. Check the 'Customizations' section for details.
23 | 6. Only the icehouse and juno branches will exist and continue forward
24 | 7. The repo has been modified for use as a git fileserver backend. Users may point 'git_pillar' at the 'pillar_root' sub directory, or download the files to use with the 'roots' backend.
25 | 8. Pull requests to the project have some regulations, documented towards the end of the README.
26 | 9. 'yaml' will be the default format for sls files. Maintaining sls files across formats causes mismatches and errors. Further, 'json' does not go well with 'jinja' templating (formulas end up less readable).
27 | 10. The 'cluster_ops' salt module has been removed. Its functionality is now achieved using 'jinja macros', in an attempt to remove any dependencies that are not available in saltstack's list of modules.
28 |
29 | Yet to Arrive
30 | =============
31 | 1. Neutron state and execution module, pillar and formulas for creation of initial networks.
32 | 2. Pillar and formulas for creation of instances.
33 |
34 | Getting started
35 | ===============
36 | When you are ready to install OpenStack modify the [salt-master configuration](http://docs.saltstack.com/en/latest/ref/configuration/master.html "salt-master configuration") file at "/etc/salt/master" to hold the below contents.
37 |
38 |
39 | fileserver_backend:
40 | - roots
41 | - git
42 | gitfs_remotes:
43 | - https://github.com/Akilesh1597/salt-openstack.git:
44 | root: file_root
45 | ext_pillar:
46 | - git: icehouse https://github.com/Akilesh1597/salt-openstack.git root=pillar_root
47 | jinja_trim_blocks: True
48 | jinja_lstrip_blocks: True
49 |
50 |
51 | This will create a new [environment](http://docs.saltstack.com/ref/file_server/index.html#environments "Salt Environments") called "icehouse" in your state tree. The 'file_root' directory of the github repo holds [state definitions](http://docs.saltstack.com/topics/tutorials/starting_states.html "Salt States") in a bunch of '.sls' files and a few special directories, while the 'pillar_root' directory holds your cluster definition files.
52 |
53 | At this stage I assume that you have a machine with minion id 'openstack.icehouse' and ip address '192.168.1.1' [added to the salt master](http://docs.saltstack.com/topics/tutorials/walkthrough.html#setting-up-a-salt-minion "minion setup").
54 |
55 | Let's begin...
56 |
57 |
58 | salt-run fileserver.update
59 | salt '*' saltutil.sync_all
60 | salt -C 'I@cluster_type:icehouse' state.highstate
61 |
62 |
63 | This instructs the minion that has 'cluster_type=icehouse' defined in its pillar data to download all the formulae defined for it and execute them. If all goes well you can log in to your newly installed OpenStack setup at 'http://192.168.1.1/horizon'.
64 |
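For reference, the compound target 'I@cluster_type:icehouse' simply matches any minion whose pillar data carries that key and value. Conceptually, the pillar assigned to such a minion contains something like the snippet below; the actual assignment lives in the files under 'pillar_root', so treat this as an illustration only.

    cluster_type: icehouse

You can check what a minion actually sees with 'salt "*" pillar.get cluster_type'.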
65 | But that is not all. We can have a fully customized install with multiple hosts performing different roles, customized accounts and databases. All this can be done simply by manipulating the [pillar data](http://docs.saltstack.com/topics/pillar/ "Salt Pillar").
66 |
67 | Check [salt walkthrough](http://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html "walkthrough") to understand how salt works.
68 |
69 | Multi node setup
70 | ================
71 | Edit the 'cluster_resources.sls' file inside the 'pillar_root' sub directory. It looks like the example below.
72 |
73 | roles:
74 | - "compute"
75 | - "controller"
76 | - "network"
77 | - "storage"
78 | compute:
79 | - "openstack.icehouse"
80 | controller:
81 | - "openstack.icehouse"
82 | network:
83 | - "openstack.icehouse"
84 | storage:
85 | - "openstack.icehouse"
86 |
87 | It just means that in our cluster all roles are played by a single host, 'openstack.icehouse'. Now let's distribute the responsibilities.
88 |
89 | roles:
90 | - "compute"
91 | - "controller"
92 | - "network"
93 | - "storage"
94 | compute:
95 | - "compute1.icehouse"
96 | - "compute2.icehouse"
97 | controller:
98 | - "controller.icehouse"
99 | network:
100 | - "network.icehouse"
101 | storage:
102 | - "storage.icehouse"
103 |
104 | We just added five hosts to perform the different roles; "compute1.icehouse" and "compute2.icehouse" perform the "compute" role, for example. Make sure to set their ip addresses in the file 'openstack_cluster.sls'.
105 |
106 | hosts:
107 | controller.icehouse: 192.168.1.1
108 | network.icehouse: 192.168.1.2
109 | storage.icehouse: 192.168.1.3
110 | compute1.icehouse: 192.168.1.4
111 | compute2.icehouse: 192.168.1.5
112 |
113 | Let's sync up.
114 |
115 | salt-run fileserver.update
116 | salt '*' saltutil.sync_all
117 | salt -C 'I@cluster_type:icehouse' state.highstate
118 |
119 | Ah, and if you use git as your backend, you have to fork us before making any changes.
120 |
121 | Add new roles
122 | =============
123 | Let's say we want the 'queue server' to be a separate role. This is how we do it.
124 | 1. Add a new role "queue_server" under "roles"
125 | 2. Define what minions will perform the role of "queue_server"
126 | 3. Finally, define which formulae deploy a "queue_server", under the "queue_server" key of the "sls" section.
127 |
128 | roles:
129 | - "compute"
130 | - "controller"
131 | - "network"
132 | - "storage"
133 | - "queue_server"
134 | compute:
135 | - "compute1.icehouse"
136 | - "compute2.icehouse"
137 | controller:
138 | - "controller.icehouse"
139 | network:
140 | - "network.icehouse"
141 | storage:
142 | - "storage.icehouse"
143 | queue_server:
144 | - "queueserver.icehouse"
145 | sls:
146 | queue_server:
147 | - "queue.rabbit"
148 | controller:
149 | - "generics.host"
150 | - "mysql"
151 | - "mysql.client"
152 | - "mysql.OpenStack_dbschema"
153 | - "keystone"
154 | - "keystone.OpenStack_tenants"
155 | - "keystone.OpenStack_users"
156 | - "keystone.OpenStack_services"
157 | - "nova"
158 | - "horizon"
159 | - "glance"
160 | - "cinder"
161 | network:
162 | - "generics.host"
163 | - "mysql.client"
164 | - "neutron"
165 | - "neutron.service"
166 | - "neutron.openvswitch"
167 | - "neutron.ml2"
168 | compute:
169 | - "generics.host"
170 | - "mysql.client"
171 | - "nova.compute_kvm"
172 | - "neutron.openvswitch"
173 | - "neutron.ml2"
174 | storage:
175 | - "cinder.volume"
176 |
177 | You may want the same machine to perform many roles or you may add a new machine. Make sure to update the machine's ip address as mentioned earlier.
178 |
179 | Customizations
180 | ==============
181 | The pillar data has been structured as below, in order to have a single sls file to modify for each kind of customization.
182 |
183 | ---------------------------------------------------------------------------------
184 | |Pillar File | Purpose |
185 | |:----------------------|:------------------------------------------------------|
186 | |openstack_cluster | Generic cluster Data |
187 | |access_resources | Keystone tenants, users, roles, services and endpoints|
188 | |cluster_resources | Hosts, Roles and their corresponding formulas |
189 | |network_resources | OpenStack Neutron data, explained below |
190 | |db_resources | Databases, Users, Passwords and Grants |
191 | |deploy_files | Arbitrary files to be deployed on all minions |
192 | |misc_openstack_options | Arbitrary OpenStack options and affected services |
193 | |[DISTRO].sls | Distro specific package data |
194 | |[DISTRO]_repo.sls | Distro specific repository data |
195 | ---------------------------------------------------------------------------------
196 |
197 | Should you need more 'tenants', or need to change the credentials of an OpenStack user, look into 'access_resources.sls' under 'pillar_root'. You may tweak your OpenStack setup in any way you want.
198 |
199 | Neutron Networking
200 | ==================
201 | 'network_resources' under 'pillar_root' defines the OpenStack ["Data" and "External" networks](http://fosskb.wordpress.com/2014/06/10/managing-OpenStack-internaldataexternal-network-in-one-interface/ "Openstack networking"). The default configuration will install a "gre" mode "Data" network and a "flat" mode "External" network, and looks like the example below.
202 |
203 |
204 | neutron:
205 | intergration_bridge: br-int
206 | metadata_secret: "414c66b22b1e7a20cc35"
207 | type_drivers:
208 | flat:
209 | physnets:
210 | External:
211 | bridge: "br-ex"
212 | hosts:
213 | network.icehouse: "eth3"
214 | gre:
215 | tunnel_start: "1"
216 | tunnel_end: "1000"
217 |
218 |
219 | Choosing the vxlan mode is not difficult either.
220 |
221 | neutron:
222 | intergration_bridge: br-int
223 | metadata_secret: "414c66b22b1e7a20cc35"
224 | type_drivers:
225 | flat:
226 | physnets:
227 | External:
228 | bridge: "br-ex"
229 | hosts:
230 | network.icehouse: "eth3"
231 | vxlan:
232 | tunnel_start: "1"
233 | tunnel_end: "1000"
234 |
235 |
236 | For vlan mode tenant networks, you need to add a 'vlan' entry under 'type_drivers' and, for each compute host and network host, specify the list of [physical networks](http://fosskb.wordpress.com/2014/06/19/l2-connectivity-in-OpenStack-using-openvswitch-mechanism-driver/ "Openstack physical networks") with their corresponding bridge, interface and allowed vlan range.
237 |
238 |
239 | neutron:
240 | intergration_bridge: br-int
241 | metadata_secret: "414c66b22b1e7a20cc35"
242 | type_drivers:
243 | flat:
244 | physnets:
245 | External:
246 | bridge: "br-ex"
247 | hosts:
248 | network.icehouse: "eth3"
249 | vlan:
250 | physnets:
251 | Internal1:
252 | bridge: "br-eth1"
253 | vlan_range: "100:200"
254 | hosts:
255 | network.icehouse: "eth2"
256 | compute1.icehouse: "eth2"
257 | compute2.icehouse: "eth2"
258 | Internal2:
259 | bridge: "br-eth2"
260 | vlan_range: "200:300"
261 | hosts:
262 | network.icehouse: "eth4"
263 | compute3.icehouse: "eth2"
264 |
265 |
266 | Single Interface Scenario
267 | =========================
268 | Most users trying OpenStack for the first time will want it up and running on a machine with a single network interface. To do that, set the "single_nic" pillar in 'network_resources' to the primary interface id of your machine (a minimal sketch follows below).
269 |
270 | This will connect all bridges to a bridge named 'br-proxy'. Afterwards you have to manually add your primary nic to this bridge and configure the bridge with the ip address of your primary nic.
271 |
272 | We have not automated the last part because you may lose connectivity to your minion at this phase, and it is best you do it manually. Furthermore, setting up the bridges in your 'interfaces configuration' file varies per distro.
273 |
274 | Users will have to bear with us until we find a formula for this.
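As a minimal sketch of the idea: the value below is hypothetical (use the name of your own primary interface), and the exact position of the key inside 'network_resources.sls' should follow that file's existing layout rather than this snippet.

    single_nic: "eth0"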
275 |
276 | Cinder Volume
277 | =============
278 | The 'cinder.volume' formula finds any free disk space available on the minion, creates an LVM partition on it, and also creates a volume group named 'cinder-volume'. This volume group is used by OpenStack's volume service. It is therefore advised to deploy this particular formula on machines that have free disk space available. We are using a custom state module for this purpose, and we intend to push this additional state into the official salt state modules.
279 |
280 | Packages, Services, Config files and Repositories
281 | =================================================
282 | The 'pillar_root' sub directory contains a [DISTRO].sls file holding the package names, service names, and config file paths for each of OpenStack's components. This file is supposed to be replicated for every distro that you plan to use on your minions.
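Purely as an illustration of the kind of data this file carries, an entry for one component might look like the hypothetical sketch below. The key names here are made up for the example; mirror the structure already used in 'Ubuntu.sls' or 'Debian.sls' instead.

    keystone:
      packages:
        - "keystone"
        - "python-keystoneclient"
      services:
        - "keystone"
      config_files:
        - "/etc/keystone/keystone.conf"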
283 |
284 | The [DISTRO]_repo.sls file holds the details of the distro-specific repositories that house the OpenStack packages. The parameters defined in the file should be ones that are accepted by saltstack's pkgrepo module.
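For instance, a repository entry can be expressed with arguments that saltstack's pkgrepo states understand, such as 'humanname', 'name', 'keyid' and 'keyserver'. The pillar key and the values below are placeholders only; the layout actually expected is the one already present in 'Ubuntu_repo.sls'.

    cloud_archive:
      humanname: "Ubuntu Cloud Archive"
      name: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main"
      keyid: "EC4926EA"
      keyserver: "keyserver.ubuntu.com"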
285 |
286 |
287 | Contributing
288 | ============
289 | As with all open source projects, we need support too. However, since this repo is used for deploying OpenStack clusters, you may end up making lots of changes after forking it. These changes may include sensitive information in your pillar, or trivial changes that do not need to be merged back here. So please follow the steps below.
290 |
291 | After forking, please create a new branch, which you will use to deploy OpenStack and to make changes specific to yourself. If you feel any changes are worth maintaining and carrying forward, make them in the 'original' branches and merge them into your other branches. Always send pull requests from the 'original' branches.
292 |
293 | As you may see, the pillar data as of now only has 'Ubuntu.sls' and 'Debian.sls'. We need to update this repo with all the other distros on which OpenStack is available.
294 |
295 |
--------------------------------------------------------------------------------
/file_root/_modules/glance.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Module for handling openstack glance calls.
4 |
5 | :optdepends: - glanceclient Python adapter
6 | :configuration: This module is not usable until the following are specified
7 | either in a pillar or in the minion's config file::
8 |
9 | keystone.user: admin
10 | keystone.password: verybadpass
11 | keystone.tenant: admin
12 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
13 | keystone.insecure: False #(optional)
14 | keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
15 |
16 | If configuration for multiple openstack accounts is required, they can be
17 | set up as different configuration profiles:
18 | For example::
19 |
20 | openstack1:
21 | keystone.user: admin
22 | keystone.password: verybadpass
23 | keystone.tenant: admin
24 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
25 | keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
26 |
27 | openstack2:
28 | keystone.user: admin
29 | keystone.password: verybadpass
30 | keystone.tenant: admin
31 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
32 | keystone.auth_url: 'http://127.0.0.2:5000/v2.0/'
33 |
34 | With this configuration in place, any of the keystone functions can make
35 | use of a configuration profile by declaring it explicitly.
36 | For example::
37 |
38 | salt '*' glance.image_list profile=openstack1
39 | '''
40 |
41 | # Import third party libs
42 | HAS_GLANCE = False
43 | try:
44 | from glanceclient import client
45 | import glanceclient.v1.images
46 | HAS_GLANCE = True
47 | except ImportError:
48 | pass
49 |
50 |
51 | def __virtual__():
52 | '''
53 | Only load this module if glance
54 | is installed on this minion.
55 | '''
56 | if HAS_GLANCE:
57 | return 'glance'
58 | return False
59 |
60 | __opts__ = {}
61 |
62 |
63 | def _auth(profile=None, **connection_args):
64 | '''
65 | Set up keystone credentials
66 | '''
67 | kstone = __salt__['keystone.auth'](profile, **connection_args)
68 | token = kstone.auth_token
69 | endpoint = kstone.service_catalog.url_for(
70 | service_type='image',
71 | endpoint_type='publicURL',
72 | )
73 |
74 | return client.Client('1', endpoint, token=token)
75 |
76 |
77 | def image_create(profile=None, **connection_args):
78 | '''
79 | Create an image (glance image-create)
80 |
81 | CLI Example:
82 |
83 | .. code-block:: bash
84 |
85 | salt '*' glance.image_create name=f16-jeos is_public=true \\
86 | disk_format=qcow2 container_format=ovf \\
87 | copy_from=http://berrange.fedorapeople.org/images/ \\
88 | 2012-02-29/f16-x86_64-openstack-sda.qcow2
89 |
90 | For all possible values, run ``glance help image-create`` on the minion.
91 | '''
92 | nt_ks = _auth(profile, **connection_args)
93 | fields = dict(
94 | filter(
95 | lambda x: x[0] in glanceclient.v1.images.CREATE_PARAMS,
96 | connection_args.items()
97 | )
98 | )
99 |
100 | image = nt_ks.images.create(**fields)
101 | return image_show(id=str(image.id), profile=profile, **connection_args)
102 |
103 |
104 | def image_delete(id=None, name=None, profile=None, **connection_args): # pylint: disable=C0103
105 | '''
106 | Delete an image (glance image-delete)
107 |
108 | CLI Examples:
109 |
110 | .. code-block:: bash
111 |
112 | salt '*' glance.image_delete c2eb2eb0-53e1-4a80-b990-8ec887eae7df
113 | salt '*' glance.image_delete id=c2eb2eb0-53e1-4a80-b990-8ec887eae7df
114 | salt '*' glance.image_delete name=f16-jeos
115 | '''
116 | nt_ks = _auth(profile, **connection_args)
117 | if name:
118 | for image in nt_ks.images.list():
119 | if image.name == name:
120 | id = image.id # pylint: disable=C0103
121 | continue
122 | if not id:
123 | return {'Error': 'Unable to resolve image id'}
124 | nt_ks.images.delete(id)
125 | ret = 'Deleted image with ID {0}'.format(id)
126 | if name:
127 | ret += ' ({0})'.format(name)
128 | return ret
129 |
130 |
131 | def image_show(id=None, name=None, profile=None, **connection_args): # pylint: disable=C0103
132 | '''
133 | Return details about a specific image (glance image-show)
134 |
135 | CLI Example:
136 |
137 | .. code-block:: bash
138 |
139 | salt '*' glance.image_get
140 | '''
141 | nt_ks = _auth(profile, **connection_args)
142 | ret = {}
143 | if name:
144 | for image in nt_ks.images.list():
145 | if image.name == name:
146 | id = image.id # pylint: disable=C0103
147 | break
148 | if not id:
149 | return {'Error': 'Unable to resolve image id'}
150 | image = nt_ks.images.get(id)
151 | ret[image.name] = {'id': image.id,
152 | 'name': image.name,
153 | 'checksum': getattr(image, 'checksum', 'Creating'),
154 | 'container_format': image.container_format,
155 | 'created_at': image.created_at,
156 | 'deleted': image.deleted,
157 | 'disk_format': image.disk_format,
158 | 'is_public': image.is_public,
159 | 'min_disk': image.min_disk,
160 | 'min_ram': image.min_ram,
161 | 'owner': image.owner,
162 | 'protected': image.protected,
163 | 'size': image.size,
164 | 'status': image.status,
165 | 'updated_at': image.updated_at}
166 | return ret
167 |
168 |
169 | def image_list(profile=None, **connection_args): # pylint: disable=C0103
170 | '''
171 | Return a list of available images (glance image-list)
172 |
173 | CLI Example:
174 |
175 | .. code-block:: bash
176 |
177 | salt '*' glance.image_list
178 | '''
179 | nt_ks = _auth(profile, **connection_args)
180 | ret = {}
181 | for image in nt_ks.images.list():
182 | ret[image.name] = {'id': image.id,
183 | 'name': image.name,
184 | 'checksum': getattr(image, 'checksum', 'Creating'),
185 | 'container_format': image.container_format,
186 | 'created_at': image.created_at,
187 | 'deleted': image.deleted,
188 | 'disk_format': image.disk_format,
189 | 'is_public': image.is_public,
190 | 'min_disk': image.min_disk,
191 | 'min_ram': image.min_ram,
192 | 'owner': image.owner,
193 | 'protected': image.protected,
194 | 'size': image.size,
195 | 'status': image.status,
196 | 'updated_at': image.updated_at}
197 | return ret
198 |
199 |
200 | def _item_list(profile=None, **connection_args):
201 | '''
202 | Template for writing list functions
203 | Return a list of available items (glance items-list)
204 |
205 | CLI Example:
206 |
207 | .. code-block:: bash
208 |
209 | salt '*' glance.item_list
210 | '''
211 | nt_ks = _auth(profile, **connection_args)
212 | ret = []
213 | for item in nt_ks.items.list():
214 | ret.append(item.__dict__)
215 | # ret[item.name] = {
216 | # 'name': item.name,
217 | # }
218 | return ret
219 |
220 | # The following is a list of functions that need to be incorporated in the
221 | # glance module. This list should be updated as functions are added.
222 |
223 | # image-download Download a specific image.
224 | # image-update Update a specific image.
225 | # member-create Share a specific image with a tenant.
226 | # member-delete Remove a shared image from a tenant.
227 | # member-list Describe sharing permissions by image or tenant.
228 |
--------------------------------------------------------------------------------
/file_root/_modules/ini_manage.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Edit ini files
4 |
5 | :maintainer:
6 | :maturity: new
7 | :depends: re
8 | :platform: all
9 |
10 | Use section as DEFAULT_IMPLICIT if your ini file does not have any section
11 | (for example /etc/sysctl.conf)
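
For example, to manage such a section-less file (the target file and values here
are only for illustration)::

    salt '*' ini.set_option /etc/sysctl.conf '{DEFAULT_IMPLICIT: {vm.swappiness: 10}}'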
12 | '''
13 |
14 | # Import Python libs
15 | import re
16 |
17 | __virtualname__ = 'ini'
18 |
19 |
20 | def __virtual__():
21 | '''
22 | Rename to ini
23 | '''
24 | return __virtualname__
25 |
26 | comment_regexp = re.compile(r'^\s*#\s*(.*)')
27 | section_regexp = re.compile(r'\s*\[(.+)\]\s*')
28 | option_regexp1 = re.compile(r'\s*(.+?)\s*(=)(.*)')
29 | option_regexp2 = re.compile(r'\s*(.+?)\s*(:)(.*)')
30 |
31 |
32 | def set_option(file_name, sections=None, summary=True):
33 | '''
34 | Edit an ini file, replacing one or more sections. Returns a dictionary
35 | containing the changes made.
36 |
37 | file_name
38 | path of ini_file
39 |
40 | sections : None
41 | A dictionary representing the sections to be edited ini file
42 |
43 | Set ``summary=False`` if return data need not have previous option value
44 |
45 | API Example:
46 |
47 | .. code-block:: python
48 |
49 | import salt
50 | sc = salt.client.get_local_client()
51 | sc.cmd('target', 'ini.set_option',
52 | ['path_to_ini_file', '{"section_to_change": {"key": "value"}}'])
53 |
54 | CLI Example:
55 |
56 | .. code-block:: bash
57 |
58 | salt '*' ini.set_option /path/to/ini '{section_foo: {key: value}}'
59 | '''
60 | if sections is None:
61 | sections = {}
62 | ret = {'file_name': file_name}
63 | inifile = _Ini.get_ini_file(file_name)
64 | if not inifile:
65 | ret.update({'error': 'ini file not found'})
66 | return ret
67 | changes = {}
68 | err_flag = False
69 | for section in sections:
70 | changes.update({section: {}})
71 | for option in sections[section]:
72 | try:
73 | current_value = get_option(file_name, section, option)
74 | if not current_value == sections[section][option]:
75 | inifile.update_section(section,
76 | option,
77 | sections[section][option])
78 | changes[section].update(
79 | {
80 | option: {
81 | 'before': current_value,
82 | 'after': sections[section][option]
83 | }
84 | })
85 | if not summary:
86 | changes[section].update({option:
87 | sections[section][option]})
88 | except Exception:
89 | ret.update({'error':
90 | 'while setting option {0} in section {1}'.
91 | format(option, section)})
92 | err_flag = True
93 | break
94 | if not err_flag:
95 | inifile.flush()
96 | ret.update({'changes': changes})
97 | return ret
98 |
99 |
100 | def get_option(file_name, section, option):
101 | '''
102 | Get value of a key from a section in an ini file. Returns ``None`` if
103 | no matching key was found.
104 |
105 | API Example:
106 |
107 | .. code-block:: python
108 |
109 | import salt
110 | sc = salt.client.get_local_client()
111 | sc.cmd('target', 'ini.get_option',
112 | [path_to_ini_file, section_name, option])
113 |
114 | CLI Example:
115 |
116 | .. code-block:: bash
117 |
118 | salt '*' ini.get_option /path/to/ini section_name option_name
119 | '''
120 | inifile = _Ini.get_ini_file(file_name)
121 | if inifile:
122 | opt = inifile.get_option(section, option)
123 | if opt:
124 | return opt.value
125 |
126 |
127 | def remove_option(file_name, section, option):
128 | '''
129 | Remove a key/value pair from a section in an ini file. Returns the value of
130 | the removed key, or ``None`` if nothing was removed.
131 |
132 | API Example:
133 |
134 | .. code-block:: python
135 |
136 | import salt
137 | sc = salt.client.get_local_client()
138 | sc.cmd('target', 'ini.remove_option',
139 | [path_to_ini_file, section_name, option])
140 |
141 | CLI Example:
142 |
143 | .. code-block:: bash
144 |
145 | salt '*' ini.remove_option /path/to/ini section_name option_name
146 | '''
147 | inifile = _Ini.get_ini_file(file_name)
148 | if inifile:
149 | opt = inifile.remove_option(section, option)
150 | if opt:
151 | inifile.flush()
152 | return opt.value
153 |
154 |
155 | def get_section(file_name, section):
156 | '''
157 | Retrieve a section from an ini file. Returns the section as dictionary. If
158 | the section is not found, an empty dictionary is returned.
159 |
160 | API Example:
161 |
162 | .. code-block:: python
163 |
164 | import salt
165 | sc = salt.client.get_local_client()
166 | sc.cmd('target', 'ini.get_section',
167 | [path_to_ini_file, section_name])
168 |
169 | CLI Example:
170 |
171 | .. code-block:: bash
172 |
173 | salt '*' ini.get_section /path/to/ini section_name
174 | '''
175 | inifile = _Ini.get_ini_file(file_name)
176 | if inifile:
177 | sect = inifile.get_section(section)
178 | if sect:
179 | return sect.contents()
180 | return {}
181 |
182 |
183 | def remove_section(file_name, section):
184 | '''
185 | Remove a section in an ini file. Returns the removed section as dictionary,
186 | or ``None`` if nothing was removed.
187 |
188 | API Example:
189 |
190 | .. code-block:: python
191 |
192 | import salt
193 | sc = salt.client.get_local_client()
194 | sc.cmd('target', 'ini.remove_section',
195 | [path_to_ini_file, section_name])
196 |
197 | CLI Example:
198 |
199 | .. code-block:: bash
200 |
201 | salt '*' ini.remove_section /path/to/ini section_name
202 | '''
203 | inifile = _Ini.get_ini_file(file_name)
204 | if inifile:
205 | sect = inifile.remove_section(section)
206 | if sect:
207 | inifile.flush()
208 | return sect.contents()
209 |
210 |
211 | class _Section(list):
212 | def __init__(self, name):
213 | super(_Section, self).__init__()
214 | self.section_name = name
215 |
216 | def get_option(self, option_name):
217 | for item in self:
218 | if isinstance(item, _Option) and (item.name == option_name):
219 | return item
220 |
221 | def update_option(self, option_name, option_value=None, separator="="):
222 | option_to_update = self.get_option(option_name)
223 | if not option_to_update:
224 | option_to_update = _Option(option_name)
225 | self.append(option_to_update)
226 | option_to_update.value = option_value
227 | option_to_update.separator = separator
228 |
229 | def remove_option(self, option_name):
230 | option_to_remove = self.get_option(option_name)
231 | if option_to_remove:
232 | return self.pop(self.index(option_to_remove))
233 |
234 | def contents(self):
235 | contents = {}
236 | for item in self:
237 | try:
238 | contents.update({item.name: item.value})
239 | except Exception:
240 | pass # item was a comment
241 | return contents
242 |
243 | def __nonzero__(self):
244 | return True
245 |
246 | def __eq__(self, item):
247 | return (isinstance(item, self.__class__) and
248 | self.section_name == item.section_name)
249 |
250 | def __ne__(self, item):
251 | return not (isinstance(item, self.__class__) and
252 | self.section_name == item.section_name)
253 |
254 |
255 | class _Option(object):
256 | def __init__(self, name, value=None, separator="="):
257 | super(_Option, self).__init__()
258 | self.name = name
259 | self.value = value
260 | self.separator = separator
261 |
262 | def __eq__(self, item):
263 | return (isinstance(item, self.__class__) and
264 | self.name == item.name)
265 |
266 | def __ne__(self, item):
267 | return not (isinstance(item, self.__class__) and
268 | self.name == item.name)
269 |
270 |
271 | class _Ini(object):
272 | def __init__(self, file_name):
273 | super(_Ini, self).__init__()
274 | self.file_name = file_name
275 |
276 | def refresh(self):
277 | self.sections = []
278 | current_section = _Section('DEFAULT_IMPLICIT')
279 | self.sections.append(current_section)
280 | with open(self.file_name, 'r') as inifile:
281 | previous_line = None
282 | for line in inifile.readlines():
283 | # Make sure the empty lines between options are preserved
284 | if _Ini.isempty(previous_line) and not _Ini.isnewsection(line):
285 | current_section.append('\n')
286 | if _Ini.iscomment(line):
287 | current_section.append(_Ini.decrypt_comment(line))
288 | elif _Ini.isnewsection(line):
289 | self.sections.append(_Ini.decrypt_section(line))
290 | current_section = self.sections[-1]
291 | elif _Ini.isoption(line):
292 | current_section.append(_Ini.decrypt_option(line))
293 | previous_line = line
294 |
295 | def flush(self):
296 | with open(self.file_name, 'w') as outfile:
297 | outfile.write(self.current_contents())
298 |
299 | def dump(self):
300 | print(self.current_contents())
301 |
302 | def current_contents(self):
303 | file_contents = ''
304 | for section in self.sections:
305 | if not section.section_name == 'DEFAULT_IMPLICIT':
306 | file_contents += '[{0}]\n'.format(section.section_name)
307 | for item in section:
308 | if isinstance(item, _Option):
309 | file_contents += '{0}{1}{2}\n'.format(
310 | item.name, item.separator, item.value
311 | )
312 | elif item == '\n':
313 | file_contents += '\n'
314 | else:
315 | file_contents += '# {0}\n'.format(item)
316 | file_contents += '\n'
317 | return file_contents
318 |
319 | def get_section(self, section_name):
320 | for section in self.sections:
321 | if section.section_name == section_name:
322 | return section
323 |
324 | def get_option(self, section_name, option):
325 | section_to_get = self.get_section(section_name)
326 | if section_to_get:
327 | return section_to_get.get_option(option)
328 |
329 | def update_section(self, section_name, option_name=None,
330 | option_value=None, separator="="):
331 | section_to_update = self.get_section(section_name)
332 | if not section_to_update:
333 | section_to_update = _Section(section_name)
334 | self.sections.append(section_to_update)
335 | if option_name:
336 | section_to_update.update_option(option_name, option_value,
337 | separator)
338 |
339 | def remove_section(self, section_name):
340 | section_to_remove = self.get_section(section_name)
341 | if section_to_remove:
342 | return self.sections.pop(self.sections.index(section_to_remove))
343 |
344 | def remove_option(self, section_name, option_name):
345 | section_to_update = self.get_section(section_name)
346 | if section_to_update:
347 | return section_to_update.remove_option(option_name)
348 |
349 | @staticmethod
350 | def decrypt_comment(line):
351 | ma = re.match(comment_regexp, line)
352 | return ma.group(1).strip()
353 |
354 | @staticmethod
355 | def decrypt_section(line):
356 | ma = re.match(section_regexp, line)
357 | return _Section(ma.group(1).strip())
358 |
359 | @staticmethod
360 | def decrypt_option(line):
361 | ma = re.match(option_regexp1, line)
362 | if not ma:
363 | ma = re.match(option_regexp2, line)
364 | return _Option(ma.group(1).strip(), ma.group(3).strip(),
365 | ma.group(2).strip())
366 |
367 | @staticmethod
368 | def iscomment(line):
369 | return re.match(comment_regexp, line)
370 |
371 | @staticmethod
372 | def isempty(line):
373 | return line == '\n'
374 |
375 | @staticmethod
376 | def isnewsection(line):
377 | return re.match(section_regexp, line)
378 |
379 | @staticmethod
380 | def isoption(line):
381 | return re.match(option_regexp1, line) or re.match(option_regexp2, line)
382 |
383 | @staticmethod
384 | def get_ini_file(file_name):
385 | try:
386 | inifile = _Ini(file_name)
387 | inifile.refresh()
388 | return inifile
389 | except IOError:
390 | return None  # unreadable file; callers treat this as 'ini file not found'
391 | except Exception:
392 | return
393 |
--------------------------------------------------------------------------------
/file_root/_modules/linux_lvm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Support for Linux LVM2
4 | '''
5 | from __future__ import absolute_import
6 |
7 | # Import python libs
8 | import os.path
9 |
10 | # Import salt libs
11 | import salt.utils
12 | import six
13 |
14 | # Define the module's virtual name
15 | __virtualname__ = 'lvm'
16 |
17 |
18 | def __virtual__():
19 | '''
20 | Only load the module if lvm is installed
21 | '''
22 | if salt.utils.which('lvm'):
23 | return __virtualname__
24 | return False
25 |
26 |
27 | def version():
28 | '''
29 | Return LVM version from lvm version
30 |
31 | CLI Example:
32 |
33 | .. code-block:: bash
34 |
35 | salt '*' lvm.version
36 | '''
37 | cmd = 'lvm version'
38 | out = __salt__['cmd.run'](cmd).splitlines()
39 | ret = out[0].split(': ')
40 | return ret[1].strip()
41 |
42 |
43 | def fullversion():
44 | '''
45 | Return all version info from lvm version
46 |
47 | CLI Example:
48 |
49 | .. code-block:: bash
50 |
51 | salt '*' lvm.fullversion
52 | '''
53 | ret = {}
54 | cmd = 'lvm version'
55 | out = __salt__['cmd.run'](cmd).splitlines()
56 | for line in out:
57 | comps = line.split(':')
58 | ret[comps[0].strip()] = comps[1].strip()
59 | return ret
60 |
61 |
62 | def pvdisplay(pvname=''):
63 | '''
64 | Return information about the physical volume(s)
65 |
66 | CLI Examples:
67 |
68 | .. code-block:: bash
69 |
70 | salt '*' lvm.pvdisplay
71 | salt '*' lvm.pvdisplay /dev/md0
72 | '''
73 | ret = {}
74 | cmd = ['pvdisplay', '-c', pvname]
75 | cmd_ret = __salt__['cmd.run_all'](cmd, python_shell=False)
76 |
77 | if cmd_ret['retcode'] != 0:
78 | return {}
79 |
80 | out = cmd_ret['stdout'].splitlines()
81 | for line in out:
82 | if 'is a new physical volume' not in line:
83 | comps = line.strip().split(':')
84 | ret[comps[0]] = {
85 | 'Physical Volume Device': comps[0],
86 | 'Volume Group Name': comps[1],
87 | 'Physical Volume Size (kB)': comps[2],
88 | 'Internal Physical Volume Number': comps[3],
89 | 'Physical Volume Status': comps[4],
90 | 'Physical Volume (not) Allocatable': comps[5],
91 | 'Current Logical Volumes Here': comps[6],
92 | 'Physical Extent Size (kB)': comps[7],
93 | 'Total Physical Extents': comps[8],
94 | 'Free Physical Extents': comps[9],
95 | 'Allocated Physical Extents': comps[10],
96 | }
97 | return ret
98 |
99 |
100 | def vgdisplay(vgname=''):
101 | '''
102 | Return information about the volume group(s)
103 |
104 | CLI Examples:
105 |
106 | .. code-block:: bash
107 |
108 | salt '*' lvm.vgdisplay
109 | salt '*' lvm.vgdisplay nova-volumes
110 | '''
111 | ret = {}
112 | cmd = ['vgdisplay', '-c', vgname]
113 | cmd_ret = __salt__['cmd.run_all'](cmd, python_shell=False)
114 |
115 | if cmd_ret['retcode'] != 0:
116 | return {}
117 |
118 | out = cmd_ret['stdout'].splitlines()
119 | for line in out:
120 | comps = line.strip().split(':')
121 | ret[comps[0]] = {
122 | 'Volume Group Name': comps[0],
123 | 'Volume Group Access': comps[1],
124 | 'Volume Group Status': comps[2],
125 | 'Internal Volume Group Number': comps[3],
126 | 'Maximum Logical Volumes': comps[4],
127 | 'Current Logical Volumes': comps[5],
128 | 'Open Logical Volumes': comps[6],
129 | 'Maximum Logical Volume Size': comps[7],
130 | 'Maximum Physical Volumes': comps[8],
131 | 'Current Physical Volumes': comps[9],
132 | 'Actual Physical Volumes': comps[10],
133 | 'Volume Group Size (kB)': comps[11],
134 | 'Physical Extent Size (kB)': comps[12],
135 | 'Total Physical Extents': comps[13],
136 | 'Allocated Physical Extents': comps[14],
137 | 'Free Physical Extents': comps[15],
138 | 'UUID': comps[16],
139 | }
140 | return ret
141 |
142 |
143 | def lvdisplay(lvname=''):
144 | '''
145 | Return information about the logical volume(s)
146 |
147 | CLI Examples:
148 |
149 | .. code-block:: bash
150 |
151 | salt '*' lvm.lvdisplay
152 | salt '*' lvm.lvdisplay /dev/vg_myserver/root
153 | '''
154 | ret = {}
155 | cmd = ['lvdisplay', '-c', lvname]
156 | cmd_ret = __salt__['cmd.run_all'](cmd, python_shell=False)
157 |
158 | if cmd_ret['retcode'] != 0:
159 | return {}
160 |
161 | out = cmd_ret['stdout'].splitlines()
162 | for line in out:
163 | comps = line.strip().split(':')
164 | ret[comps[0]] = {
165 | 'Logical Volume Name': comps[0],
166 | 'Volume Group Name': comps[1],
167 | 'Logical Volume Access': comps[2],
168 | 'Logical Volume Status': comps[3],
169 | 'Internal Logical Volume Number': comps[4],
170 | 'Open Logical Volumes': comps[5],
171 | 'Logical Volume Size': comps[6],
172 | 'Current Logical Extents Associated': comps[7],
173 | 'Allocated Logical Extents': comps[8],
174 | 'Allocation Policy': comps[9],
175 | 'Read Ahead Sectors': comps[10],
176 | 'Major Device Number': comps[11],
177 | 'Minor Device Number': comps[12],
178 | }
179 | return ret
180 |
181 |
182 | def pvcreate(devices, **kwargs):
183 | '''
184 | Set a physical device to be used as an LVM physical volume
185 |
186 | CLI Examples:
187 |
188 | .. code-block:: bash
189 |
190 | salt mymachine lvm.pvcreate /dev/sdb1,/dev/sdb2
191 | salt mymachine lvm.pvcreate /dev/sdb1 dataalignmentoffset=7s
192 | '''
193 | if not devices:
194 | return 'Error: at least one device is required'
195 |
196 | cmd = ['pvcreate']
197 | for device in devices.split(','):
198 | if not os.path.exists(device):
199 | return '{0} does not exist'.format(device)
200 | cmd.append(device)
201 | valid = ('metadatasize', 'dataalignment', 'dataalignmentoffset',
202 | 'pvmetadatacopies', 'metadatacopies', 'metadataignore',
203 | 'restorefile', 'norestorefile', 'labelsector',
204 | 'setphysicalvolumesize')
205 | for var in kwargs:
206 | if kwargs[var] and var in valid:
207 | cmd.append('--{0}'.format(var))
208 | cmd.append(kwargs[var])
209 | out = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
210 | return out[0]
211 |
212 |
213 | def pvremove(devices):
214 | '''
215 | Remove a physical device being used as an LVM physical volume
216 |
217 | CLI Examples:
218 |
219 | .. code-block:: bash
220 |
221 | salt mymachine lvm.pvremove /dev/sdb1,/dev/sdb2
222 | '''
223 | cmd = ['pvremove', '-y']
224 | for device in devices.split(','):
225 | if not __salt__['lvm.pvdisplay'](device):
226 | return '{0} is not a physical volume'.format(device)
227 | cmd.append(device)
228 | out = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
229 | return out[0]
230 |
231 |
232 | def vgcreate(vgname, devices, **kwargs):
233 | '''
234 | Create an LVM volume group
235 |
236 | CLI Examples:
237 |
238 | .. code-block:: bash
239 |
240 | salt mymachine lvm.vgcreate my_vg /dev/sdb1,/dev/sdb2
241 | salt mymachine lvm.vgcreate my_vg /dev/sdb1 clustered=y
242 | '''
243 | if not vgname or not devices:
244 | return 'Error: vgname and device(s) are both required'
245 |
246 | cmd = ['vgcreate', vgname]
247 | for device in devices.split(','):
248 | cmd.append(device)
249 | valid = ('clustered', 'maxlogicalvolumes', 'maxphysicalvolumes',
250 | 'vgmetadatacopies', 'metadatacopies', 'physicalextentsize')
251 | for var in kwargs:
252 | if kwargs[var] and var in valid:
253 | cmd.append('--{0}'.format(var))
254 | cmd.append(kwargs[var])
255 | out = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
256 | vgdata = vgdisplay(vgname)
257 | vgdata['Output from vgcreate'] = out[0].strip()
258 | return vgdata
259 |
260 |
261 | def vgextend(vgname, devices):
262 | '''
263 | Add physical volumes to an LVM volume group
264 |
265 | CLI Examples:
266 |
267 | .. code-block:: bash
268 |
269 | salt mymachine lvm.vgextend my_vg /dev/sdb1,/dev/sdb2
270 | salt mymachine lvm.vgextend my_vg /dev/sdb1
271 | '''
272 | if not vgname or not devices:
273 | return 'Error: vgname and device(s) are both required'
274 |
275 | cmd = ['vgextend', vgname]
276 | for device in devices.split(','):
277 | cmd.append(device)
278 | out = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
279 | vgdata = {'Output from vgextend': out[0].strip()}
280 | return vgdata
281 |
282 |
283 | def lvcreate(lvname, vgname, size=None, extents=None, snapshot=None, pv='', **kwargs):
284 | '''
285 | Create a new logical volume, with option for which physical volume to be used
286 |
287 | CLI Examples:
288 |
289 | .. code-block:: bash
290 |
291 | salt '*' lvm.lvcreate new_volume_name vg_name size=10G
292 | salt '*' lvm.lvcreate new_volume_name vg_name extents=100 /dev/sdb
293 | salt '*' lvm.lvcreate new_snapshot vg_name snapshot=volume_name size=3G
294 | '''
295 | if size and extents:
296 | return 'Error: Please specify only size or extents'
297 |
298 | valid = ('activate', 'chunksize', 'contiguous', 'discards', 'stripes',
299 | 'stripesize', 'minor', 'persistent', 'mirrors', 'noudevsync',
300 | 'monitor', 'ignoremonitoring', 'permission', 'poolmetadatasize',
301 | 'readahead', 'regionsize', 'thin', 'thinpool', 'type',
302 | 'virtualsize', 'zero')
303 | no_parameter = ('noudevsync', 'ignoremonitoring')
304 | extra_arguments = [
305 | '--{0}'.format(k) if k in no_parameter else '--{0} {1}'.format(k, v)
306 | for k, v in six.iteritems(kwargs) if k in valid
307 | ]
308 |
309 | cmd = ['lvcreate', '-n', lvname]
310 |
311 | if snapshot:
312 | cmd += ['-s', '{0}/{1}'.format(vgname, snapshot)]
313 | else:
314 | cmd.append(vgname)
315 |
316 | if size:
317 | cmd += ['-L', size]
318 | elif extents:
319 | cmd += ['-l', extents]
320 | else:
321 | return 'Error: Either size or extents must be specified'
322 |
323 | cmd.append(pv)
324 | cmd += extra_arguments
325 |
326 | out = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
327 | lvdev = '/dev/{0}/{1}'.format(vgname, lvname)
328 | lvdata = lvdisplay(lvdev)
329 | lvdata['Output from lvcreate'] = out[0].strip()
330 | return lvdata
331 |
332 |
333 | def vgremove(vgname):
334 | '''
335 | Remove an LVM volume group
336 |
337 | CLI Examples:
338 |
339 | .. code-block:: bash
340 |
341 | salt mymachine lvm.vgremove vgname
342 |     '''
343 |     # vgremove is always run with -f, so removal does not prompt for confirmation
344 |     cmd = ['vgremove', '-f', vgname]
345 | out = __salt__['cmd.run'](cmd, python_shell=False)
346 | return out.strip()
347 |
348 |
349 | def lvremove(lvname, vgname):
350 | '''
351 | Remove a given existing logical volume from a named existing volume group
352 |
353 | CLI Example:
354 |
355 | .. code-block:: bash
356 |
357 |         salt '*' lvm.lvremove lvname vgname
358 | '''
359 | cmd = ['lvremove', '-f', '{0}/{1}'.format(vgname, lvname)]
360 | out = __salt__['cmd.run'](cmd, python_shell=False)
361 | return out.strip()
362 |
363 |
364 | def lvresize(size, lvpath):
365 |     '''
366 |     Resize a logical volume, either to an absolute size or by a relative
367 |     amount such as ``+12M``
368 | 
369 |     CLI Example:
370 | 
371 |     .. code-block:: bash
372 | 
373 |         salt '*' lvm.lvresize +12M /dev/mapper/vg1-test
374 |     '''
375 |     cmd = ['lvresize', '-L', str(size), lvpath]
376 |     cmd_ret = __salt__['cmd.run_all'](cmd, python_shell=False)
377 |     if cmd_ret['retcode'] != 0:
378 |         return {}
379 |     # return the command output so success can be told apart from failure
380 |     return {'Output from lvresize': cmd_ret['stdout'].strip()}
381 |
--------------------------------------------------------------------------------
/file_root/_modules/neutron.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Module for handling openstack neutron calls.
4 |
5 | :maintainer:
6 | :maturity: new
7 | :platform: all
8 |
9 | :optdepends: - neutronclient Python adapter
10 | :configuration: This module is not usable until the following are specified
11 | either in a pillar or in the minion's config file::
12 |
13 | keystone.user: admin
14 | keystone.password: verybadpass
15 | keystone.tenant: admin
16 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
17 | keystone.insecure: False #(optional)
18 | keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
19 |
20 | If configuration for multiple openstack accounts is required, they can be
21 | set up as different configuration profiles:
22 | For example::
23 |
24 | openstack1:
25 | keystone.user: admin
26 | keystone.password: verybadpass
27 | keystone.tenant: admin
28 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
29 | keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
30 |
31 | openstack2:
32 | keystone.user: admin
33 | keystone.password: verybadpass
34 | keystone.tenant: admin
35 | keystone.tenant_id: f80919baedab48ec8931f200c65a50df
36 | keystone.auth_url: 'http://127.0.0.2:5000/v2.0/'
37 |
38 | With this configuration in place, any of the neutron functions can make
39 | use of a configuration profile by declaring it explicitly.
40 | For example::
41 |
42 | salt '*' neutron.list_subnets profile=openstack1
43 |
44 | Please check 'https://wiki.openstack.org/wiki/Neutron/APIv2-specification'
45 | for the correct arguments to the api
46 | '''
47 |
48 |
49 | import logging
50 | from functools import wraps
51 | LOG = logging.getLogger(__name__)
52 | # Import third party libs
53 | HAS_NEUTRON = False
54 | try:
55 | from neutronclient.v2_0 import client
56 | HAS_NEUTRON = True
57 | except ImportError:
58 | pass
59 |
60 |
61 | __opts__ = {}
62 |
63 |
64 | def __virtual__():
65 | '''
66 | Only load this module if neutron
67 | is installed on this minion.
68 | '''
69 | if HAS_NEUTRON:
70 | return 'neutron'
71 | return False
72 |
73 |
74 | def _autheticate(func_name):
75 | '''
76 | Authenticate requests with the salt keystone module and format return data
77 | '''
78 | @wraps(func_name)
79 | def decorator_method(*args, **kwargs):
80 | '''
81 | Authenticate request and format return data
82 | '''
83 | connection_args = {'profile': kwargs.get('profile', None)}
84 | nkwargs = {}
85 | for kwarg in kwargs:
86 |             if 'connection_' in kwarg:  # forwarded to keystone.auth below
87 |                 connection_args.update({kwarg: kwargs[kwarg]})
88 |             elif '__' not in kwarg:  # drop the __pub_* kwargs salt injects
89 | nkwargs.update({kwarg: kwargs[kwarg]})
90 | kstone = __salt__['keystone.auth'](**connection_args)
91 | token = kstone.auth_token
92 | endpoint = kstone.service_catalog.url_for(
93 | service_type='network',
94 | endpoint_type='publicURL')
95 | neutron_interface = client.Client(
96 | endpoint_url=endpoint, token=token)
97 |         LOG.debug('calling with args ' + str(args))
98 |         LOG.debug('calling with kwargs ' + str(nkwargs))
99 |         return_data = func_name(neutron_interface, *args, **nkwargs)
100 |         LOG.debug('got return data ' + str(return_data))
101 | if isinstance(return_data, list):
102 | # format list as a dict for rendering
103 | return {data.get('name', None) or data['id']: data
104 | for data in return_data}
105 | return return_data
106 | return decorator_method
107 |
108 |
109 | @_autheticate
110 | def list_floatingips(neutron_interface, **kwargs):
111 | '''
112 | list all floatingips
113 |
114 | CLI Example:
115 |
116 | .. code-block:: bash
117 |
118 | salt '*' neutron.list_floatingips
119 | '''
120 | return neutron_interface.list_floatingips(**kwargs)['floatingips']
121 |
122 |
123 | @_autheticate
124 | def list_security_groups(neutron_interface, **kwargs):
125 | '''
126 | list all security_groups
127 |
128 | CLI Example:
129 |
130 | .. code-block:: bash
131 |
132 | salt '*' neutron.list_security_groups
133 | '''
134 | return neutron_interface.list_security_groups(**kwargs)['security_groups']
135 |
136 |
137 | @_autheticate
138 | def list_subnets(neutron_interface, **kwargs):
139 | '''
140 | list all subnets
141 |
142 | CLI Example:
143 |
144 | .. code-block:: bash
145 |
146 | salt '*' neutron.list_subnets
147 | '''
148 | return neutron_interface.list_subnets(**kwargs)['subnets']
149 |
150 |
151 | @_autheticate
152 | def list_networks(neutron_interface, **kwargs):
153 | '''
154 | list all networks
155 |
156 | CLI Example:
157 |
158 | .. code-block:: bash
159 |
160 | salt '*' neutron.list_networks
161 | '''
162 | return neutron_interface.list_networks(**kwargs)['networks']
163 |
164 |
165 | @_autheticate
166 | def list_ports(neutron_interface, **kwargs):
167 | '''
168 | list all ports
169 |
170 | CLI Example:
171 |
172 | .. code-block:: bash
173 |
174 | salt '*' neutron.list_ports
175 | '''
176 | return neutron_interface.list_ports(**kwargs)['ports']
177 |
178 |
179 | @_autheticate
180 | def list_routers(neutron_interface, **kwargs):
181 | '''
182 | list all routers
183 |
184 | CLI Example:
185 |
186 | .. code-block:: bash
187 |
188 | salt '*' neutron.list_routers
189 | '''
190 | return neutron_interface.list_routers(**kwargs)['routers']
191 |
192 |
193 | @_autheticate
194 | def update_floatingip(neutron_interface, fip, port_id=None):
195 | '''
196 | update floating IP. Should be used to associate and disassociate
197 | floating IP with instance
198 |
199 | CLI Example:
200 |
201 | .. code-block:: bash
202 |
203 | to associate with an instance's port
204 | salt '*' neutron.update_floatingip openstack-floatingip-id port-id
205 |
206 | to disassociate from an instance's port
207 | salt '*' neutron.update_floatingip openstack-floatingip-id
208 | '''
209 | neutron_interface.update_floatingip(fip, {"floatingip":
210 | {"port_id": port_id}})
211 |
212 |
213 | @_autheticate
214 | def update_subnet(neutron_interface, subnet_id, **subnet_params):
215 | '''
216 | update given subnet
217 |
218 | CLI Example:
219 |
220 | .. code-block:: bash
221 |
222 | salt '*' neutron.update_subnet openstack-subnet-id name='new_name'
223 | '''
224 | neutron_interface.update_subnet(subnet_id, {'subnet': subnet_params})
225 |
226 |
227 | @_autheticate
228 | def update_router(neutron_interface, router_id, **router_params):
229 | '''
230 | update given router
231 |
232 | CLI Example:
233 |
234 | .. code-block:: bash
235 |
236 | salt '*' neutron.update_router openstack-router-id name='new_name'
237 | external_gateway='openstack-network-id' administrative_state=true
238 | '''
239 | neutron_interface.update_router(router_id, {'router': router_params})
240 |
241 |
242 | @_autheticate
243 | def router_gateway_set(neutron_interface, router_id, external_gateway):
244 | '''
245 | Set external gateway for a router
246 |
247 | CLI Example:
248 |
249 | .. code-block:: bash
250 |
251 |         salt '*' neutron.router_gateway_set openstack-router-id openstack-network-id
252 | '''
253 | neutron_interface.update_router(
254 | router_id, {'router': {'external_gateway_info':
255 | {'network_id': external_gateway}}})
256 |
257 |
258 | @_autheticate
259 | def router_gateway_clear(neutron_interface, router_id):
260 | '''
261 | Clear external gateway for a router
262 |
263 | CLI Example:
264 |
265 | .. code-block:: bash
266 |
267 |         salt '*' neutron.router_gateway_clear openstack-router-id
268 | '''
269 | neutron_interface.update_router(
270 | router_id, {'router': {'external_gateway_info': None}})
271 |
272 |
273 | @_autheticate
274 | def create_router(neutron_interface, **router_params):
275 | '''
276 | Create OpenStack Neutron router
277 |
278 | CLI Example:
279 |
280 | .. code-block:: bash
281 |
282 | salt '*' neutron.create_router name=R1
283 | '''
284 | response = neutron_interface.create_router({'router': router_params})
285 | if 'router' in response and 'id' in response['router']:
286 | return response['router']['id']
287 |
288 |
289 | @_autheticate
290 | def router_add_interface(neutron_interface, router_id, subnet_id):
291 | '''
292 | Attach router to a subnet
293 |
294 | CLI Example:
295 |
296 | .. code-block:: bash
297 |
298 | salt '*' neutron.router_add_interface openstack-router-id subnet-id
299 | '''
300 | neutron_interface.add_interface_router(router_id, {'subnet_id': subnet_id})
301 |
302 |
303 | @_autheticate
304 | def router_rem_interface(neutron_interface, router_id, subnet_id):
305 | '''
306 |     Detach router from a subnet
307 |
308 | CLI Example:
309 |
310 | .. code-block:: bash
311 |
312 | salt '*' neutron.router_rem_interface openstack-router-id subnet-id
313 | '''
314 | neutron_interface.remove_interface_router(
315 | router_id, {'subnet_id': subnet_id})
316 |
317 |
318 | @_autheticate
319 | def create_security_group(neutron_interface, **sg_params):
320 | '''
321 | Create a new security group
322 |
323 | CLI Example:
324 |
325 | .. code-block:: bash
326 |
327 | salt '*' neutron.create_security_group name='new_rule'
328 | description='test rule'
329 | '''
330 | response = neutron_interface.create_security_group(
331 | {'security_group': sg_params})
332 | if 'security_group' in response and 'id' in response['security_group']:
333 | return response['security_group']['id']
334 |
335 |
336 | @_autheticate
337 | def create_security_group_rule(neutron_interface, **rule_params):
338 | '''
339 | Create a rule entry for a security group
340 |
341 | CLI Example:
342 |
343 | .. code-block:: bash
344 |
345 | salt '*' neutron.create_security_group_rule
346 | '''
347 | neutron_interface.create_security_group_rule(
348 | {'security_group_rule': rule_params})
349 |
350 |
351 | @_autheticate
352 | def create_floatingip(neutron_interface, **floatingip_params):
353 | '''
354 | Create a new floating IP
355 |
356 | CLI Example:
357 |
358 | .. code-block:: bash
359 |
360 | salt '*' neutron.create_floatingip floating_network_id=ext-net-id
361 | '''
362 | response = neutron_interface.create_floatingip(
363 | {'floatingip': floatingip_params})
364 | if 'floatingip' in response and 'id' in response['floatingip']:
365 | return response['floatingip']['id']
366 |
367 |
368 | @_autheticate
369 | def create_subnet(neutron_interface, **subnet_params):
370 | '''
371 | Create a new subnet in OpenStack
372 |
373 | CLI Example:
374 |
375 | .. code-block:: bash
376 |
377 |         salt '*' neutron.create_subnet name='subnet name' \\
378 | network_id='openstack-network-id' cidr='192.168.10.0/24' \\
379 | gateway_ip='192.168.10.1' ip_version='4' enable_dhcp=false \\
380 | start_ip='192.168.10.10' end_ip='192.168.10.20'
381 | '''
382 | if 'start_ip' in subnet_params:
383 | subnet_params.update(
384 | {'allocation_pools': [{'start': subnet_params.pop('start_ip'),
385 | 'end': subnet_params.pop('end_ip', None)}]})
386 | response = neutron_interface.create_subnet({'subnet': subnet_params})
387 | if 'subnet' in response and 'id' in response['subnet']:
388 | return response['subnet']['id']
389 |
390 |
391 | @_autheticate
392 | def create_network(neutron_interface, **network_params):
393 | '''
394 | Create a new network segment in OpenStack
395 |
396 | CLI Example:
397 |
398 | .. code-block:: bash
399 |
400 | salt '*' neutron.create_network name=External
401 | provider_network_type=flat provider_physical_network=ext
402 | '''
403 | network_params = {param.replace('_', ':', 1):
404 | network_params[param] for param in network_params}
405 | response = neutron_interface.create_network({'network': network_params})
406 | if 'network' in response and 'id' in response['network']:
407 | return response['network']['id']
408 |
409 |
410 | @_autheticate
411 | def create_port(neutron_interface, **port_params):
412 | '''
413 | Create a new port in OpenStack
414 |
415 | CLI Example:
416 |
417 | .. code-block:: bash
418 |
419 | salt '*' neutron.create_port network_id='openstack-network-id'
420 | '''
421 | response = neutron_interface.create_port({'port': port_params})
422 | if 'port' in response and 'id' in response['port']:
423 | return response['port']['id']
424 |
425 |
426 | @_autheticate
427 | def update_port(neutron_interface, port_id, **port_params):
428 | '''
429 |     Update parameters of an existing OpenStack port
430 |
431 | CLI Example:
432 |
433 | .. code-block:: bash
434 |
435 | salt '*' neutron.update_port name='new_port_name'
436 | '''
437 | neutron_interface.update_port(port_id, {'port': port_params})
438 |
439 |
440 | @_autheticate
441 | def delete_floatingip(neutron_interface, floating_ip_id):
442 | '''
443 | delete a floating IP
444 |
445 | CLI Example:
446 |
447 | .. code-block:: bash
448 |
449 | salt '*' neutron.delete_floatingip openstack-floating-ip-id
450 | '''
451 | neutron_interface.delete_floatingip(floating_ip_id)
452 |
453 |
454 | @_autheticate
455 | def delete_security_group(neutron_interface, sg_id):
456 | '''
457 | delete a security group
458 |
459 | CLI Example:
460 |
461 | .. code-block:: bash
462 |
463 | salt '*' neutron.delete_security_group openstack-security-group-id
464 | '''
465 | neutron_interface.delete_security_group(sg_id)
466 |
467 |
468 | @_autheticate
469 | def delete_security_group_rule(neutron_interface, rule):
470 | '''
471 | delete a security group rule. pass all rule params that match the rule
472 | to be deleted
473 |
474 | CLI Example:
475 |
476 | .. code-block:: bash
477 |
478 | salt '*' neutron.delete_security_group_rule direction='ingress'
479 | ethertype='ipv4' security_group_id='openstack-security-group-id'
480 | port_range_min=100 port_range_max=4096 protocol='tcp'
481 | remote_group_id='default'
482 | '''
483 | sg_rules = neutron_interface.list_security_group_rules(
484 | security_group_id=rule['security_group_id'])
485 | for sg_rule in sg_rules['security_group_rules']:
486 |         sgr_id = sg_rule.pop('id')  # drop 'id' so only the rule fields are compared
487 |         if sg_rule == rule:  # delete only when every remaining field matches
488 | neutron_interface.delete_security_group_rule(sgr_id)
489 |
490 |
491 | @_autheticate
492 | def delete_subnet(neutron_interface, subnet_id):
493 | '''
494 | delete given subnet
495 |
496 | CLI Example:
497 |
498 | .. code-block:: bash
499 |
500 | salt '*' neutron.delete_subnet openstack-subnet-id
501 | '''
502 | neutron_interface.delete_subnet(subnet_id)
503 |
504 |
505 | @_autheticate
506 | def delete_network(neutron_interface, network_id):
507 | '''
508 | delete given network
509 |
510 | CLI Example:
511 |
512 | .. code-block:: bash
513 |
514 | salt '*' neutron.delete_network openstack-network-id
515 | '''
516 | neutron_interface.delete_network(network_id)
517 |
518 |
519 | @_autheticate
520 | def delete_router(neutron_interface, router_id):
521 | '''
522 | delete given router
523 |
524 | CLI Example:
525 |
526 | .. code-block:: bash
527 |
528 | salt '*' neutron.delete_router openstack-router-id
529 | '''
530 | neutron_interface.delete_router(router_id)
531 |
--------------------------------------------------------------------------------
/file_root/_modules/parted_free_disks.py:
--------------------------------------------------------------------------------
1 |
2 | import logging
3 | LOG = logging.getLogger(__name__)
4 | __virtualname__ = 'partition_free_disks'
5 |
6 | def __virtual__():
7 | return __virtualname__
8 |
9 | def free_disks(free_partitions=True, free_space=True, min_disk_size='10',
10 | max_disk_size=None):
11 | """
12 | Retrieve a list of disk devices that are not active
13 | do not include unmounted partitions if free_partitions=False
14 | do not include free spaces if free_space=False
15 |
16 | CLI Example:
17 |
18 | .. code-block:: bash
19 |
20 | salt '*' partition.free_disks
21 | """
22 | available_disks = []
23 | if free_partitions:
24 | available_disks.extend(unmounted_partitions())
25 | if free_space:
26 | free_disk_space = find_free_spaces(min_disk_size, max_disk_size)
27 | while free_disk_space:
28 | __salt__['partition.mkpart'](free_disk_space['device'], 'primary',
29 | start=free_disk_space['start'],
30 | end=free_disk_space['end'])
31 | __salt__['cmd.run']('partprobe')
32 | available_disks.append(free_disk_space['device']+free_disk_space['id'])
33 | free_disk_space = find_free_spaces(min_disk_size, max_disk_size)
34 | return available_disks
35 |
36 |
37 | def get_block_device():
38 | '''
39 | Retrieve a list of disk devices
40 |
41 | .. versionadded:: 2014.7.0
42 |
43 | CLI Example:
44 |
45 | .. code-block:: bash
46 |
47 | salt '*' partition.get_block_device
48 | '''
49 | cmd = 'lsblk -n -io KNAME -d -e 1,7,11 -l'
50 | devs = __salt__['cmd.run'](cmd).splitlines()
51 | return devs
52 |
53 |
54 | def unmounted_partitions():
55 | '''
56 | Retrieve a list of unmounted partitions
57 |
58 | CLI Example:
59 |
60 | .. code-block:: bash
61 |
62 | salt '*' partition.unmounted_partitions
63 | '''
64 | unused_partitions = []
65 | active_mounts = __salt__['disk.usage']()
66 | mounted_devices = [active_mounts[mount_point]['filesystem'] for mount_point in active_mounts]
67 | mounted_devices.extend(__salt__['mount.swaps']())
68 | for block_device in get_block_device():
69 | device_name = '/dev/%s' % block_device
70 | part_data = __salt__['partition.list'](device_name)
71 | for partition_id in part_data['partitions']:
72 | partition_name = device_name + partition_id
73 | if partition_name not in mounted_devices:
74 | unused_partitions.append(partition_name)
75 | return unused_partitions
76 |
77 |
78 |
79 | def find_free_spaces(min_disk_size=10, max_disk_size=None):
80 | '''
81 | Retrieve a list of free space where partitions can be created
82 | returns device name, partition id when created, start and end sector
83 |     returns only spaces greater than min_disk_size, which
84 |     defaults to 10; units are in gigabytes
85 |
86 | CLI Example:
87 |
88 | .. code-block:: bash
89 |
90 | salt '*' partition.find_free_spaces
91 | '''
92 |     min_disk_size = int(min_disk_size)
93 | for block_device in get_block_device():
94 | device_name = '/dev/%s' % block_device
95 | part_data = __salt__['partition.list'](device_name, unit='s')
96 | sector_size = _sector_to_int(part_data['info']['logical sector'])
97 | disk_final_sector_int = _sector_to_int(part_data['info']['size'])
98 | LOG.debug('got part data {0}'.format(str(part_data)))
99 | last_device_id, last_allocated_sector_int = _last_allocated_sector(part_data['partitions'])
100 | disk_size_G = _sector_to_G(disk_final_sector_int - last_allocated_sector_int, sector_size)
101 | if disk_size_G > min_disk_size:
102 | start_sector_int = last_allocated_sector_int + 1
103 | if max_disk_size and disk_size_G > max_disk_size:
104 | end_sector_int = start_sector_int + _G_to_sector(int(max_disk_size), sector_size) -1
105 | else:
106 | end_sector_int = disk_final_sector_int - 1
107 | device_id = last_device_id+1
108 | if device_id > 4:
109 | LOG.error('maximum disks created, no more to allocate')
110 | return
111 | return {'device': device_name,
112 | 'id': str(device_id),
113 | 'start': _int_to_sector(start_sector_int),
114 | 'end': _int_to_sector(end_sector_int)}
115 | else:
116 |             LOG.error('space {0} less than minimum {1}'.format(
117 |                 disk_size_G, min_disk_size))
118 | LOG.error('No free space found')
119 |
120 | def _last_allocated_sector(part_data):
121 |     last_allocated_sector, last_partition = 2048, 0  # defaults for an empty disk
122 |     for partition_id, partition_data in part_data.items():
123 |         LOG.debug('checking partition {0} {1}'.format(
124 |             partition_id, str(partition_data)))
125 |         sector_end_in_int = _sector_to_int(partition_data['end'])
126 |         if sector_end_in_int > last_allocated_sector:
127 |             last_allocated_sector = sector_end_in_int
128 |             last_partition = partition_id
129 |     return int(last_partition), last_allocated_sector
130 |
131 | def _sector_to_int(sector):
132 | if sector[-1] == 's':
133 | return int(sector[:-1])
134 | return int(sector)
135 |
136 | def _int_to_sector(int_sector):
137 | return str(int_sector) + 's'
138 |
139 | def _G_to_sector(giga_bytes, sector_size):
140 | return (giga_bytes*1073741824)/sector_size
141 |
142 | def _sector_to_G(sector_length, sector_size):
143 | return (sector_length*sector_size)/1073741824
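144 | 
145 | # Worked example for the helpers above (a sketch, assuming 512-byte logical
146 | # sectors as reported by parted): _G_to_sector(10, 512) == 20971520 sectors,
147 | # and _sector_to_G(20971520, 512) gives back 10, so free_disks() only carves a
148 | # partition out of free space that spans at least min_disk_size gibibytes.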
--------------------------------------------------------------------------------
/file_root/_states/glance.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Management of Glance images
4 | ===========================
5 |
6 | :depends: - glanceclient Python module
7 | :configuration: See :py:mod:`salt.modules.glance` for setup instructions.
8 |
9 | .. code-block:: yaml
10 |
11 | glance image present:
12 | glance.image_present:
13 | - name: Ubuntu
14 | - copy_from: 'https://cloud-images.ubuntu.com/trusty/current/
15 | trusty-server-cloudimg-amd64-disk1.img'
16 | - container_format: bare
17 | - disk_format: qcow2
18 | - connection_user: admin
19 | - connection_password: admin_pass
20 | - connection_tenant: admin
21 | - connection_auth_url: 'http://127.0.0.1:5000/v2.0'
22 |
23 | glance image absent:
24 | glance.image_absent:
25 | - name: Ubuntu
26 | - disk_format: qcow2
27 | - connection_user: admin
28 | - connection_password: admin_pass
29 | - connection_tenant: admin
30 | - connection_auth_url: 'http://127.0.0.1:5000/v2.0'
31 | '''
32 | import logging
33 | LOG = logging.getLogger(__name__)
34 |
35 |
36 | def __virtual__():
37 | '''
38 | Only load if glance module is present in __salt__
39 | '''
40 | return 'glance' if 'glance.image_list' in __salt__ else False
41 |
42 |
43 | def image_present(name,
44 | disk_format='qcow2',
45 | container_format='bare',
46 | min_disk=None,
47 | min_ram=None,
48 | is_public=True,
49 | protected=False,
50 | checksum=None,
51 | copy_from=None,
52 | store=None,
53 | profile=None,
54 | **connection_args):
55 | '''
56 | Ensure that the glance image is present with the specified properties.
57 |
58 | name
59 | The name of the image to manage
60 | '''
61 | ret = {'name': name,
62 | 'changes': {'name': name},
63 | 'result': True,
64 | 'comment': 'Image "{0}" will be updated'.format(name)}
65 | if __opts__.get('test', None):
66 | return ret
67 | existing_image = __salt__['glance.image_show'](
68 | name=name, profile=profile, **connection_args)
69 | non_null_arguments = _get_non_null_args(name=name,
70 | disk_format=disk_format,
71 | container_format=container_format,
72 | min_disk=min_disk,
73 | min_ram=min_ram,
74 | is_public=is_public,
75 | protected=protected,
76 | checksum=checksum,
77 | copy_from=copy_from,
78 | store=store)
79 | LOG.debug('running state glance.image_present with arguments {0}'.format(
80 | str(non_null_arguments)))
81 | if 'Error' in existing_image:
82 | non_null_arguments.update({'profile': profile})
83 | non_null_arguments.update(connection_args)
84 | ret['changes'] = __salt__['glance.image_create'](**non_null_arguments)
85 | if 'Error' in ret['changes']:
86 | ret['result'] = False
87 | ret['comment'] = 'Image "{0}" failed to create'.format(name)
88 | else:
89 | ret['comment'] = 'Image "{0}" created'.format(name)
90 | return ret
91 | # iterate over all given arguments
92 | # if anything is different delete and recreate
93 | for key in non_null_arguments:
94 | if key == 'copy_from':
95 | continue
96 | if existing_image[name].get(key, None) != non_null_arguments[key]:
97 | LOG.debug('{0} has changed to {1}'.format(
98 | key, non_null_arguments[key]))
99 | __salt__['glance.image_delete'](
100 | name=name, profile=profile, **connection_args)
101 | non_null_arguments.update({'profile': profile})
102 | non_null_arguments.update(connection_args)
103 | return image_present(**non_null_arguments)
104 | ret['changes'] = {}
105 | ret['comment'] = 'Image "{0}" present in correct state'.format(name)
106 | return ret
107 |
108 |
109 | def image_absent(name, profile=None, **connection_args):
110 | '''
111 | Ensure that the glance image is absent.
112 |
113 | name
114 | The name of the image to manage
115 | '''
116 | ret = {'name': name,
117 | 'changes': {'name': name},
118 | 'result': True,
119 | 'comment': 'Image "{0}" will be removed'.format(name)}
120 | if __opts__.get('test', None):
121 | return ret
122 | existing_image = __salt__['glance.image_show'](
123 | name=name, profile=profile, **connection_args)
124 | if 'Error' not in existing_image:
125 | __salt__['glance.image_delete'](
126 | name=name, profile=profile, **connection_args)
127 | existing_image = __salt__['glance.image_show'](
128 | name=name, profile=profile, **connection_args)
129 | if 'Error' not in existing_image:
130 | ret['result'] = False
131 |             ret['comment'] = 'Image "{0}" could not be removed'.format(name)
132 | return ret
133 | ret['changes'] = {name: 'deleted'}
134 | ret['comment'] = 'Image "{0}" removed'.format(name)
135 | return ret
136 | ret['changes'] = {}
137 | ret['comment'] = 'Image "{0}" absent'.format(name)
138 | return ret
139 |
140 |
141 | def _get_non_null_args(**kwargs):
142 | '''
143 | Return those kwargs which are not null
144 | '''
145 | return {key: kwargs[key] for key in kwargs if kwargs[key]}
146 |
--------------------------------------------------------------------------------
/file_root/_states/ini_manage.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Manage ini files
4 | ================
5 |
6 | :maintainer:
7 | :maturity: new
8 | :depends: re
9 | :platform: all
10 |
11 | use section as DEFAULT_IMPLICIT if your ini file does not have any section
12 | for example /etc/sysctl.conf
13 | '''
14 |
15 |
16 | __virtualname__ = 'ini'
17 |
18 |
19 | def __virtual__():
20 | '''
21 |     Only load if the ini execution module is available
22 | '''
23 | return __virtualname__ if 'ini.set_option' in __salt__ else False
24 |
25 |
26 | def options_present(name, sections=None):
27 | '''
28 | .. code-block:: yaml
29 |
30 | /home/saltminion/api-paste.ini:
31 | ini.options_present:
32 | - sections:
33 | test:
34 | testkey: 'testval'
35 | secondoption: 'secondvalue'
36 | test1:
37 | testkey1: 'testval121'
38 |
39 | options present in file and not specified in sections
40 | dict will be untouched
41 |
42 | changes dict will contain the list of changes made
43 | '''
44 | ret = {'name': name,
45 | 'changes': {},
46 | 'result': True,
47 | 'comment': 'No anomaly detected'
48 | }
49 | if __opts__['test']:
50 | ret['result'] = None
51 | ret['comment'] = ('ini file {0} shall be validated for presence of '
52 | 'given options under their respective '
53 | 'sections').format(name)
54 | return ret
55 | for section in sections or {}:
56 | for key in sections[section]:
57 | current_value = __salt__['ini.get_option'](name,
58 | section,
59 | key)
60 | if current_value == sections[section][key]:
61 | continue
62 | ret['changes'] = __salt__['ini.set_option'](name,
63 | sections)
64 | if 'error' in ret['changes']:
65 | ret['result'] = False
66 | ret['comment'] = 'Errors encountered. {0}'.\
67 | format(ret['changes'])
68 | ret['changes'] = {}
69 | else:
70 | ret['comment'] = 'Changes take effect'
71 | return ret
72 |
73 |
74 | def options_absent(name, sections=None):
75 | '''
76 | .. code-block:: yaml
77 |
78 | /home/saltminion/api-paste.ini:
79 |         ini.options_absent:
80 | - sections:
81 | test:
82 | - testkey
83 | - secondoption
84 | test1:
85 | - testkey1
86 |
87 | options present in file and not specified in sections
88 | dict will be untouched
89 |
90 | changes dict will contain the list of changes made
91 | '''
92 | ret = {'name': name,
93 | 'changes': {},
94 | 'result': True,
95 | 'comment': 'No anomaly detected'
96 | }
97 | if __opts__['test']:
98 | ret['result'] = None
99 | ret['comment'] = ('ini file {0} shall be validated for absence of '
100 | 'given options under their respective '
101 | 'sections').format(name)
102 | return ret
103 | for section in sections or {}:
104 | for key in sections[section]:
105 | current_value = __salt__['ini.remove_option'](name,
106 | section,
107 | key)
108 | if not current_value:
109 | continue
110 | if section not in ret['changes']:
111 | ret['changes'].update({section: {}})
112 | ret['changes'][section].update({key: {'before': current_value,
113 | 'after': None}})
114 | ret['comment'] = 'Changes take effect'
115 | return ret
116 |
117 |
118 | def sections_present(name, sections=None):
119 | '''
120 | .. code-block:: yaml
121 |
122 | /home/saltminion/api-paste.ini:
123 | ini.sections_present:
124 | - sections:
125 | test:
126 | testkey: testval
127 | secondoption: secondvalue
128 | test1:
129 | testkey1: 'testval121'
130 |
131 | options present in file and not specified in sections will be deleted
132 | changes dict will contain the sections that changed
133 | '''
134 | ret = {'name': name,
135 | 'changes': {},
136 | 'result': True,
137 | 'comment': 'No anomaly detected'
138 | }
139 | if __opts__['test']:
140 | ret['result'] = None
141 | ret['comment'] = ('ini file {0} shall be validated for '
142 | 'presence of given sections with the '
143 | 'exact contents').format(name)
144 | return ret
145 | for section in sections or {}:
146 | cur_section = __salt__['ini.get_section'](name, section)
147 | if _same(cur_section, sections[section]):
148 | continue
149 | __salt__['ini.remove_section'](name, section)
150 | changes = __salt__['ini.set_option'](name, {section:
151 | sections[section]},
152 | summary=False)
153 | if 'error' in changes:
154 | ret['result'] = False
155 |             ret['comment'] = 'Errors encountered: {0}'.format(changes)
156 | return ret
157 | ret['changes'][section] = {'before': {section: cur_section},
158 | 'after': changes['changes']}
159 | ret['comment'] = 'Changes take effect'
160 | return ret
161 |
162 |
163 | def sections_absent(name, sections=None):
164 | '''
165 | .. code-block:: yaml
166 |
167 | /home/saltminion/api-paste.ini:
168 | ini.sections_absent:
169 | - sections:
170 | - test
171 | - test1
172 |
173 | options present in file and not specified in sections will be deleted
174 | changes dict will contain the sections that changed
175 | '''
176 | ret = {'name': name,
177 | 'changes': {},
178 | 'result': True,
179 | 'comment': 'No anomaly detected'
180 | }
181 | if __opts__['test']:
182 | ret['result'] = None
183 | ret['comment'] = ('ini file {0} shall be validated for absence of '
184 | 'given sections').format(name)
185 | return ret
186 | for section in sections or []:
187 | cur_section = __salt__['ini.remove_section'](name, section)
188 | if not cur_section:
189 | continue
190 | ret['changes'][section] = {'before': cur_section,
191 | 'after': None}
192 | ret['comment'] = 'Changes take effect'
193 | return ret
194 |
195 |
196 | def _same(dict1, dict2):
197 | diff = _DictDiffer(dict1, dict2)
198 | return not (diff.added() or diff.removed() or diff.changed())
199 |
200 |
201 | class _DictDiffer(object):
202 | def __init__(self, current_dict, past_dict):
203 | self.current_dict = current_dict
204 | self.past_dict = past_dict
205 | self.set_current = set(current_dict)
206 | self.set_past = set(past_dict)
207 | self.intersect = self.set_current.intersection(self.set_past)
208 |
209 | def added(self):
210 | return self.set_current - self.intersect
211 |
212 | def removed(self):
213 | return self.set_past - self.intersect
214 |
215 | def changed(self):
216 | return set(o for o in self.intersect if
217 | self.past_dict[o] != self.current_dict[o])
218 |
--------------------------------------------------------------------------------
/file_root/_states/lvm.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Management of Linux logical volumes
4 | ===================================
5 |
6 | A state module to manage LVMs
7 |
8 | .. code-block:: yaml
9 |
10 | /dev/sda:
11 | lvm.pv_present
12 |
13 | my_vg:
14 | lvm.vg_present:
15 | - devices: /dev/sda
16 |
17 | lvroot:
18 | lvm.lv_present:
19 | - vgname: my_vg
20 | - size: 10G
21 | - stripes: 5
22 | - stripesize: 8K
23 | '''
24 |
25 | # Import salt libs
26 | import salt.utils
27 |
28 |
29 | def __virtual__():
30 | '''
31 | Only load the module if lvm is installed
32 | '''
33 | if salt.utils.which('lvm'):
34 | return 'lvm'
35 | return False
36 |
37 |
38 | def pv_present(name, **kwargs):
39 | '''
40 | Set a physical device to be used as an LVM physical volume
41 |
42 | name
43 | The device name to initialize.
44 |
45 | kwargs
46 | Any supported options to pvcreate. See
47 |         :mod:`linux_lvm` for more details.
48 | '''
49 | ret = {'changes': {},
50 | 'comment': '',
51 | 'name': name,
52 | 'result': True}
53 |
54 | if __salt__['lvm.pvdisplay'](name):
55 | ret['comment'] = 'Physical Volume {0} already present'.format(name)
56 | elif __opts__['test']:
57 | ret['comment'] = 'Physical Volume {0} is set to be created'.format(name)
58 | ret['result'] = None
59 | return ret
60 | else:
61 | changes = __salt__['lvm.pvcreate'](name, **kwargs)
62 |
63 | if __salt__['lvm.pvdisplay'](name):
64 | ret['comment'] = 'Created Physical Volume {0}'.format(name)
65 | ret['changes'] = changes
66 | else:
67 | ret['comment'] = 'Failed to create Physical Volume {0}'.format(name)
68 | ret['result'] = False
69 | return ret
70 |
71 |
72 | def pv_absent(name):
73 | '''
74 | Ensure that a Physical Device is not being used by lvm
75 |
76 | name
77 | The device name to initialize.
78 | '''
79 | ret = {'changes': {},
80 | 'comment': '',
81 | 'name': name,
82 | 'result': True}
83 |
84 | if not __salt__['lvm.pvdisplay'](name):
85 | ret['comment'] = 'Physical Volume {0} does not exist'.format(name)
86 | elif __opts__['test']:
87 | ret['comment'] = 'Physical Volume {0} is set to be removed'.format(name)
88 | ret['result'] = None
89 | return ret
90 | else:
91 | changes = __salt__['lvm.pvremove'](name)
92 |
93 | if __salt__['lvm.pvdisplay'](name):
94 | ret['comment'] = 'Failed to remove Physical Volume {0}'.format(name)
95 | ret['result'] = False
96 | else:
97 | ret['comment'] = 'Removed Physical Volume {0}'.format(name)
98 | ret['changes'] = changes
99 | return ret
100 |
101 |
102 | def vg_present(name, devices=None, **kwargs):
103 | '''
104 | Create an LVM volume group
105 |
106 | name
107 | The volume group name to create
108 |
109 | devices
110 | A list of devices that will be added to the volume group
111 |
112 | kwargs
113 | Any supported options to vgcreate. See
114 |         :mod:`linux_lvm` for more details.
115 | '''
116 | ret = {'changes': {},
117 | 'comment': '',
118 | 'name': name,
119 | 'result': True}
120 |
121 | if __salt__['lvm.vgdisplay'](name):
122 | ret['comment'] = 'Volume Group {0} already present'.format(name)
123 | for device in devices.split(','):
124 | pvs = __salt__['lvm.pvdisplay'](device)
125 | if pvs and pvs.get(device, None):
126 | if pvs[device]['Volume Group Name'] == name:
127 | ret['comment'] = '{0}\n{1}'.format(
128 | ret['comment'],
129 | '{0} is part of Volume Group'.format(device))
130 | elif pvs[device]['Volume Group Name'] == '#orphans_lvm2':
131 | __salt__['lvm.vgextend'](name, device)
132 | pvs = __salt__['lvm.pvdisplay'](device)
133 | if pvs[device]['Volume Group Name'] == name:
134 | ret['changes'].update(
135 | {device: 'added to {0}'.format(name)})
136 | else:
137 | ret['comment'] = '{0}\n{1}'.format(
138 | ret['comment'],
139 | '{0} could not be added'.format(device))
140 | ret['result'] = False
141 | else:
142 | ret['comment'] = '{0}\n{1}'.format(
143 | ret['comment'],
144 |                         '{0} is part of {1}'.format(
145 | device, pvs[device]['Volume Group Name']))
146 | ret['result'] = False
147 | else:
148 | ret['comment'] = '{0}\n{1}'.format(
149 | ret['comment'],
150 | 'pv {0} is not present'.format(device))
151 | ret['result'] = False
152 | elif __opts__['test']:
153 | ret['comment'] = 'Volume Group {0} is set to be created'.format(name)
154 | ret['result'] = None
155 | return ret
156 | else:
157 | changes = __salt__['lvm.vgcreate'](name, devices, **kwargs)
158 |
159 | if __salt__['lvm.vgdisplay'](name):
160 | ret['comment'] = 'Created Volume Group {0}'.format(name)
161 | ret['changes'] = changes
162 | else:
163 | ret['comment'] = 'Failed to create Volume Group {0}'.format(name)
164 | ret['result'] = False
165 | return ret
166 |
167 |
168 | def vg_absent(name):
169 | '''
170 | Remove an LVM volume group
171 |
172 | name
173 | The volume group to remove
174 | '''
175 | ret = {'changes': {},
176 | 'comment': '',
177 | 'name': name,
178 | 'result': True}
179 |
180 | if not __salt__['lvm.vgdisplay'](name):
181 | ret['comment'] = 'Volume Group {0} already absent'.format(name)
182 | elif __opts__['test']:
183 | ret['comment'] = 'Volume Group {0} is set to be removed'.format(name)
184 | ret['result'] = None
185 | return ret
186 | else:
187 | changes = __salt__['lvm.vgremove'](name)
188 |
189 | if not __salt__['lvm.vgdisplay'](name):
190 | ret['comment'] = 'Removed Volume Group {0}'.format(name)
191 | ret['changes'] = changes
192 | else:
193 | ret['comment'] = 'Failed to remove Volume Group {0}'.format(name)
194 | ret['result'] = False
195 | return ret
196 |
197 |
198 | def lv_present(name,
199 | vgname=None,
200 | size=None,
201 | extents=None,
202 | snapshot=None,
203 | pv='',
204 | **kwargs):
205 | '''
206 | Create a new logical volume
207 |
208 | name
209 | The name of the logical volume
210 |
211 | vgname
212 | The volume group name for this logical volume
213 |
214 | size
215 | The initial size of the logical volume
216 |
217 | extents
218 | The number of logical extents to allocate
219 |
220 | snapshot
221 | The name of the snapshot
222 |
223 | pv
224 | The physical volume to use
225 |
226 | kwargs
227 | Any supported options to lvcreate. See
228 |         :mod:`linux_lvm` for more details.
229 | '''
230 | ret = {'changes': {},
231 | 'comment': '',
232 | 'name': name,
233 | 'result': True}
234 |
235 | _snapshot = None
236 |
237 | if snapshot:
238 | _snapshot = name
239 | name = snapshot
240 |
241 | lvpath = '/dev/{0}/{1}'.format(vgname, name)
242 |
243 | if __salt__['lvm.lvdisplay'](lvpath):
244 | ret['comment'] = 'Logical Volume {0} already present'.format(name)
245 | elif __opts__['test']:
246 | ret['comment'] = 'Logical Volume {0} is set to be created'.format(name)
247 | ret['result'] = None
248 | return ret
249 | else:
250 | changes = __salt__['lvm.lvcreate'](name,
251 | vgname,
252 | size=size,
253 | extents=extents,
254 | snapshot=_snapshot,
255 | pv=pv,
256 | **kwargs)
257 |
258 | if __salt__['lvm.lvdisplay'](lvpath):
259 | ret['comment'] = 'Created Logical Volume {0}'.format(name)
260 | ret['changes'] = changes
261 | else:
262 | ret['comment'] = 'Failed to create Logical Volume {0}'.format(name)
263 | ret['result'] = False
264 | return ret
265 |
266 |
267 | def lv_absent(name, vgname=None):
268 | '''
269 | Remove a given existing logical volume from a named existing volume group
270 |
271 | name
272 | The logical volume to remove
273 |
274 | vgname
275 | The volume group name
276 | '''
277 | ret = {'changes': {},
278 | 'comment': '',
279 | 'name': name,
280 | 'result': True}
281 |
282 | lvpath = '/dev/{0}/{1}'.format(vgname, name)
283 | if not __salt__['lvm.lvdisplay'](lvpath):
284 | ret['comment'] = 'Logical Volume {0} already absent'.format(name)
285 | elif __opts__['test']:
286 | ret['comment'] = 'Logical Volume {0} is set to be removed'.format(name)
287 | ret['result'] = None
288 | return ret
289 | else:
290 | changes = __salt__['lvm.lvremove'](name, vgname)
291 |
292 | if not __salt__['lvm.lvdisplay'](lvpath):
293 | ret['comment'] = 'Removed Logical Volume {0}'.format(name)
294 | ret['changes'] = changes
295 | else:
296 | ret['comment'] = 'Failed to remove Logical Volume {0}'.format(name)
297 | ret['result'] = False
298 | return ret
299 |
--------------------------------------------------------------------------------
/file_root/_states/neutron.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | '''
3 | Management of Neutron resources
4 | ===============================
5 |
6 | :depends: - neutronclient Python module
7 | :configuration: See :py:mod:`salt.modules.neutron` for setup instructions.
8 |
9 | .. code-block:: yaml
10 |
11 | neutron network present:
12 | neutron.network_present:
13 | - name: Netone
14 |     - provider_physical_network: physnet1
15 | - provider_network_type: vlan
16 | '''
17 |
--------------------------------------------------------------------------------
/file_root/cinder/init.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | cinder_api_pkg:
4 | pkg:
5 | - installed
6 | - name: "{{ salt['pillar.get']('packages:cinder_api', default='cinder-api') }}"
7 |
8 |
9 | cinder_scheduler_pkg:
10 | pkg:
11 | - installed
12 | - name: "{{ salt['pillar.get']('packages:cinder_scheduler', default='cinder-scheduler') }}"
13 |
14 | cinder_config_file:
15 | file:
16 | - managed
17 | - name: "{{ salt['pillar.get']('conf_files:cinder', default='/etc/cinder/cinder.conf') }}"
18 | - user: cinder
19 | - group: cinder
20 | - mode: 644
21 | - require:
22 | - ini: cinder_config_file
23 | ini:
24 | - options_present
25 | - name: "{{ salt['pillar.get']('conf_files:cinder', default='/etc/cinder/cinder.conf') }}"
26 | - sections:
27 | DEFAULT:
28 | rpc_backend: "{{ salt['pillar.get']('queue_engine', default='rabbit') }}"
29 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
30 | rabbit_port: 5672
31 |               glance_host: "{{ get_candidate('glance') }}"
32 | database:
33 | connection: "mysql://{{ salt['pillar.get']('databases:cinder:username', default='cinder') }}:{{ salt['pillar.get']('databases:cinder:password', default='cinder_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:cinder:db_name', default='cinder') }}"
34 | keystone_authtoken:
35 | auth_uri: "{{ get_candidate('keystone') }}:5000"
36 | auth_port: 35357
37 | auth_protocol: http
38 | admin_tenant_name: service
39 | admin_user: cinder
40 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['cinder']['password'] }}"
41 | auth_host: "{{ get_candidate('keystone') }}"
42 | - require:
43 | - pkg: cinder_api_pkg
44 | - pkg: cinder_scheduler_pkg
45 |
46 |
47 | {% if 'db_sync' in salt['pillar.get']('databases:cinder', default=()) %}
48 | cinder_sync:
49 | cmd:
50 | - run
51 | - name: "{{ salt['pillar.get']('databases:cinder:db_sync') }}"
52 | - require:
53 | - service: "cinder-api-service"
54 | - service: "cinder-scheduler-service"
55 | {% endif %}
56 |
57 |
58 | cinder-api-service:
59 | service:
60 | - running
61 | - name: "{{ salt['pillar.get']('services:cinder_api', default='cinder-api') }}"
62 | - watch:
63 | - ini: cinder_config_file
64 | - file: cinder_config_file
65 |
66 | cinder-scheduler-service:
67 | service:
68 | - running
69 | - name: "{{ salt['pillar.get']('services:cinder_scheduler', default='cinder-scheduler') }}"
70 | - watch:
71 | - ini: cinder_config_file
72 | - file: cinder_config_file
73 |
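74 | # Pillar keys consumed above (a sketch, listing just the pillar.get lookups):
75 | #   packages:cinder_api, packages:cinder_scheduler, conf_files:cinder
76 | #   queue_engine, databases:cinder:{username, password, db_name, db_sync}
77 | #   keystone:tenants:service:users:cinder:password
78 | #   services:cinder_api, services:cinder_scheduler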
--------------------------------------------------------------------------------
/file_root/cinder/volume.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | {% from "cluster/volumes.jinja" import volumes with context %}
3 |
4 | lvm_pkg_install:
5 | pkg:
6 | - installed
7 | - name: {{ salt['pillar.get']('packages:lvm', default='lvm2') }}
8 |
9 | {% for disk_id in volumes %}
10 | pv_create_{{ disk_id }}:
11 | lvm:
12 | - pv_present
13 | - name: "{{ disk_id }}"
14 | - require:
15 | - pkg: lvm_pkg_install
16 | {% endfor %}
17 |
18 |
19 | {% if volumes %}
20 | lvm_logical_volume:
21 | lvm:
22 | - vg_present
23 | - name: "cinder-volumes"
24 | - devices: "{{ ','.join(volumes) }}"
25 | - require:
26 | {% for disk_id in volumes %}
27 | - lvm: pv_create_{{ disk_id }}
28 | {% endfor %}
29 | {% endif %}
30 |
31 |
32 | cinder_volume_package:
33 | pkg:
34 | - installed
35 | - name: "{{ salt['pillar.get']('packages:cinder_volume', default='cinder-volume') }}"
36 | - require:
37 | - pkg: lvm_pkg_install
38 |
39 | cinder_config_file_volume:
40 | file:
41 | - managed
42 | - name: "{{ salt['pillar.get']('conf_files:cinder', default='/etc/cinder/cinder.conf') }}"
43 | - user: cinder
44 | - group: cinder
45 | - mode: 644
46 | - require:
47 | - ini: cinder_config_file_volume
48 | ini:
49 | - options_present
50 | - name: "{{ salt['pillar.get']('conf_files:cinder', default='/etc/cinder/cinder.conf') }}"
51 | - sections:
52 | DEFAULT:
53 | rpc_backend: "{{ salt['pillar.get']('queue_engine', default='rabbit') }}"
54 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
55 | rabbit_port: 5672
56 | glance_host: "{{ get_candidate('glance') }}"
57 | database:
58 | connection: "mysql://{{ salt['pillar.get']('databases:cinder:username', default='cinder') }}:{{ salt['pillar.get']('databases:cinder:password', default='cinder_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:cinder:db_name', default='cinder') }}"
59 | keystone_authtoken:
60 | auth_uri: "{{ get_candidate('keystone') }}:5000"
61 | auth_port: 35357
62 | auth_protocol: http
63 | admin_tenant_name: service
64 | admin_user: cinder
65 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['cinder']['password'] }}"
66 | auth_host: "{{ get_candidate('keystone') }}"
67 | - require:
68 | - pkg: cinder_volume_package
69 |
70 | cinder_volume_service:
71 | service:
72 | - running
73 | - name: "{{ salt['pillar.get']('services:cinder_volume', default='cinder-volume') }}"
74 | - watch:
75 | - ini: cinder_config_file_volume
76 | - file: cinder_config_file_volume
77 |
78 | cinder_iscsi_target_service:
79 | service:
80 | - running
81 | - name: "{{ salt['pillar.get']('services:iscsi_target', default='tgt') }}"
82 | - watch:
83 | - ini: cinder_config_file_volume
84 | - file: cinder_config_file_volume
85 |
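86 | # Note: the volume list comes from cluster/volumes.jinja, i.e. whatever
87 | # partition_free_disks.free_disks() reports on this minion; each device becomes
88 | # an LVM physical volume and is added to the "cinder-volumes" volume group above.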
--------------------------------------------------------------------------------
/file_root/cluster/physical_networks.jinja:
--------------------------------------------------------------------------------
1 |
2 |
3 | {% set mappings = [] %}
4 | {% for network_type in ('flat', 'vlan') %}
5 | {% for physnet in salt['pillar.get']('neutron:type_drivers:%s:physnets' % network_type, default=()) %}
6 | {% if grains['id'] in pillar['neutron']['type_drivers'][network_type]['physnets'][physnet]['hosts'] %}
7 | {% do mappings.append(':'.join((physnet, pillar['neutron']['type_drivers'][network_type]['physnets'][physnet]['bridge']))) %}
8 | {% endif %}
9 | {% endfor %}
10 | {% endfor %}
11 |
12 |
13 | {% set vlan_networks = [] %}
14 | {% for physnet in salt['pillar.get']('neutron:type_drivers:vlan:physnets', default=()) %}
15 | {% if grains['id'] in pillar['neutron']['type_drivers']['vlan']['physnets'][physnet]['hosts'] %}
16 | {% do vlan_networks.append(':'.join((physnet, pillar['neutron']['type_drivers']['vlan']['physnets'][physnet]['vlan_range']))) %}
17 | {% endif %}
18 | {% endfor %}
19 |
20 |
21 | {% set flat_networks = [] %}
22 | {% for physnet in salt['pillar.get']('neutron:type_drivers:flat:physnets', default=()) %}
23 | {% if grains['id'] in pillar['neutron']['type_drivers']['flat']['physnets'][physnet]['hosts'] %}
24 | {% do flat_networks.append(physnet) %}
25 | {% endif %}
26 | {% endfor %}
27 |
28 |
29 | {% set bridges = {salt['pillar.get']('neutron:intergration_bridge', default='br-int'): None} %}
30 | {% for network_type in ('flat', 'vlan') %}
31 | {% for physnet in salt['pillar.get']('neutron:type_drivers:%s:physnets' % network_type, default=()) %}
32 | {% if grains['id'] in pillar['neutron']['type_drivers'][network_type]['physnets'][physnet]['hosts'] %}
33 | {% do bridges.update({pillar['neutron']['type_drivers'][network_type]['physnets'][physnet]['bridge']:
34 | pillar['neutron']['type_drivers'][network_type]['physnets'][physnet]['hosts'][grains['id']]}) %}
35 | {% endif %}
36 | {% endfor %}
37 | {% endfor %}
38 |
39 |
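40 | {# Rough summary of the variables built above for the current minion:
41 |    mappings      - "physnet:bridge" pairs, e.g. "physnet1:br-ex"
42 |    vlan_networks - "physnet:vlan_range" pairs
43 |    flat_networks - names of the flat physnets available on this host
44 |    bridges       - bridge name mapped to this host's pillar value (typically the
45 |                    interface to attach), plus the integration bridge mapped to None #}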
--------------------------------------------------------------------------------
/file_root/cluster/resources.jinja:
--------------------------------------------------------------------------------
1 |
2 |
3 | {% set roles = [] %}
4 | {% for cluster_entity in salt['pillar.get']('roles', default=()) %}
5 | {% if grains['id'] in salt['pillar.get'](cluster_entity, default=()) %}
6 | {% do roles.append(cluster_entity) %}
7 | {% endif %}
8 | {% endfor %}
9 |
10 |
11 | {% set formulas = [] %}
12 | {% for role in roles %}
13 | {% for formula in salt['pillar.get']('sls:%s' % role, default=()) %}
14 | {% if formula not in formulas %}
15 | {% do formulas.append(formula) %}
16 | {% endif %}
17 | {% endfor %}
18 | {% endfor %}
19 |
20 |
21 |
22 | {% set hosts = [] %}
23 | {% for cluster_entity in salt['pillar.get']('roles', default=()) %}
24 | {% for host in salt['pillar.get'](cluster_entity, default=()) %}
25 | {% if host not in hosts %}
26 | {% do hosts.append(host) %}
27 | {% endif %}
28 | {% endfor %}
29 | {% endfor %}
30 |
31 |
32 | {% macro get_candidate(sls) -%}
33 | {% for host in hosts %}
34 | {% for cluster_entity in salt['pillar.get']('roles', default=()) %}
35 | {% if host in pillar[cluster_entity] %}
36 | {% if sls in pillar['sls'][cluster_entity] %}
37 | {{ host }}{% endif %}
38 | {% endif %}
39 | {% endfor %}
40 | {% endfor %}
41 | {%- endmacro %}
42 |
--------------------------------------------------------------------------------
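A note on cluster/resources.jinja above: roles, formulas and hosts come from a role-to-host layout in pillar, and the get_candidate(sls) macro emits the host whose role is mapped to the given formula (for example, get_candidate('mysql') yields the database host used in connection strings throughout these states). A hypothetical sketch of that layout, with made-up minion ids:

roles:
  - controller
  - compute
controller:            # one top-level key per role, listing minion ids
  - controller1
compute:
  - compute1
sls:                   # which formulas each role runs
  controller:
    - keystone
    - mysql
    - glance
  compute:
    - nova.compute_kvm
    - neutron.openvswitch

--------------------------------------------------------------------------------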
/file_root/cluster/volumes.jinja:
--------------------------------------------------------------------------------
1 | {% set volumes = salt['partition_free_disks.free_disks']() %}
2 |
--------------------------------------------------------------------------------
/file_root/generics/files.sls:
--------------------------------------------------------------------------------
1 | {% for file_required in salt['pillar.get']('files', default=()) %}
2 | {{ file_required }}:
3 | file:
4 | - managed
5 | {% for file_option in pillar['files'][file_required] %}
6 | - {{ file_option }}: {{ pillar['files'][file_required][file_option] }}
7 | {% endfor %}
8 | {% endfor %}
9 |
--------------------------------------------------------------------------------
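A note on generics/files.sls above: it renders one file.managed state per entry of the files pillar and passes every option through verbatim, so any file.managed argument can be supplied. A hypothetical sketch:

files:
  /etc/motd:
    contents: "managed by salt"   # any file.managed option is accepted as-is
    user: root
    group: root
    mode: 644

--------------------------------------------------------------------------------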
/file_root/generics/headers.sls:
--------------------------------------------------------------------------------
1 | linux-headers-install:
2 | pkg:
3 | - installed
4 | - name: {{ salt['pillar.get']('packages:linux-headers') }}
5 |
6 |
--------------------------------------------------------------------------------
/file_root/generics/hosts.sls:
--------------------------------------------------------------------------------
1 | {% for server in salt['pillar.get']('hosts', default={}) %}
2 | {{ server }}:
3 | host:
4 | - present
5 | - ip: {{ pillar['hosts'][server] }}
6 | {% endfor %}
7 |
--------------------------------------------------------------------------------
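A note on generics/hosts.sls above: it expects a flat hosts pillar mapping each name to an IP address for /etc/hosts entries; the same mapping is reused elsewhere (for example as local_ip in neutron/ml2.sls). A hypothetical sketch:

hosts:
  controller1: 192.168.10.11
  compute1: 192.168.10.21

--------------------------------------------------------------------------------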
/file_root/generics/proxy.sls:
--------------------------------------------------------------------------------
1 | {% if 'pkg_proxy_url' in pillar %}
2 | {% if grains['os'] in ('Ubuntu', 'Debian') %}
3 | apt-cache-proxy:
4 | file:
5 | - managed
6 | - name: /etc/apt/apt.conf.d/01proxy
7 | - contents: "Acquire::http::Proxy \"{{ pillar['pkg_proxy_url'] }}\";"
8 | {% endif %}
9 | {% if grains['os'] in ('RedHat', 'Fedora', 'CentOS') %}
10 | yum-cache-proxy:
11 | file:
12 |     - managed
13 | - name: /etc/yum/yum.repos.d/01proxy
14 | - contents: "proxy = {{ pillar['pkg_proxy_url'] }}"
15 | {% endif %}
16 | {% endif %}
17 |
--------------------------------------------------------------------------------
/file_root/generics/repo.sls:
--------------------------------------------------------------------------------
1 | {% for package in salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
2 | {{ package }}-install:
3 | pkg:
4 | - installed
5 | - name: {{ package }}
6 | {% endfor %}
7 |
8 |
9 | {% for repo in salt['pillar.get']('pkgrepo:repos', default=()) %}
10 | {{ repo }}-repo:
11 | pkgrepo:
12 | - managed
13 | {% for repo_option in pillar['pkgrepo']['repos'][repo] %}
14 | - {{ repo_option }}: {{ pillar['pkgrepo']['repos'][repo][repo_option] }}
15 | {% endfor %}
16 | {% if salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
17 | - require:
18 | {% for package in salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
19 | - pkg: {{ package }}
20 | {% endfor %}
21 | {% endif %}
22 | {% endfor %}
23 |
24 |
25 | {% for package in salt['pillar.get']('pkgrepo:post_repo_additions', default=()) %}
26 | {{ package }}-install:
27 | pkg:
28 | - latest
29 | - name: {{ package }}
30 | {% if salt['pillar.get']('pkgrepo:repos', default=()) %}
31 | - require:
32 | {% for repo in salt['pillar.get']('pkgrepo:repos', default=()) %}
33 | - pkgrepo: {{ repo }}-repo
34 | {% endfor %}
35 | {% endif %}
36 | {% endfor %}
37 |
--------------------------------------------------------------------------------
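A note on generics/repo.sls above: it installs any packages needed before repositories can be configured (pre_repo_additions), manages each repo under pkgrepo:repos with whatever pkgrepo.managed options the pillar provides, then pulls post_repo_additions up to latest once the repos exist. A hypothetical Ubuntu sketch, with made-up repository details:

pkgrepo:
  pre_repo_additions:
    - ubuntu-cloud-keyring
  repos:
    cloudarchive:
      humanname: Ubuntu Cloud Archive     # options are passed straight to pkgrepo.managed
      name: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main"
      file: /etc/apt/sources.list.d/cloudarchive.list
  post_repo_additions:
    - python-mysqldb

--------------------------------------------------------------------------------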
/file_root/generics/repo.sls~:
--------------------------------------------------------------------------------
1 | {% for package in salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
2 | {{ package }}-install:
3 | pkg:
4 | - installed
5 | - name: {{ package }}
6 | {% endfor %}
7 |
8 |
9 | {% for repo in salt['pillar.get']('pkgrepo:repos', default=()) %}
10 | {{ repo }}-repo:
11 | pkgrepo:
12 | - managed
13 | {% for repo_option in pillar['pkgrepo']['repos'][repo] %}
14 | - {{ repo_option }}: {{ pillar['pkgrepo']['repos'][repo][repo_option] }}
15 | {% endfor %}
16 | {% if salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
17 | - require:
18 | {% for package in salt['pillar.get']('pkgrepo:pre_repo_additions', default=()) %}
19 | - pkg: {{ package }}
20 | {% endfor %}
21 | {% endif %}
22 | {% endfor %}
23 |
24 |
25 | {% for package in salt['pillar.get']('pkgrepo:post_repo_additions', default=()) %}
26 | {{ package }}-install:
27 | pkg:
28 | - installed
29 | - name: {{ package }}
30 | {% if salt['pillar.get']('pkgrepo:repos', default=()) %}
31 | - require:
32 | {% for repo in salt['pillar.get']('pkgrepo:repos', default=()) %}
33 | - pkgrepo: {{ repo }}-repo
34 | {% endfor %}
35 | {% endif %}
36 | {% endfor %}
37 |
--------------------------------------------------------------------------------
/file_root/generics/system_update.sls:
--------------------------------------------------------------------------------
1 | system_uptodate:
2 | pkg:
3 | - uptodate
4 | - refresh: True
5 |
--------------------------------------------------------------------------------
/file_root/glance/images.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | {% for image_name in pillar.get('images', ()) %}
4 | image-{{ image_name }}:
5 | glance:
6 | - image_present
7 | - name: {{ image_name }}
8 | - connection_user: {{ pillar['images'][image_name].get('user', 'admin') }}
9 | - connection_tenant: {{ pillar['images'][image_name].get('user', 'admin') }}
10 | - connection_password: {{ salt['pillar.get']('keystone:tenants:%s:users:%s:password' % (pillar['images'][image_name].get('user', 'admin'), pillar['images'][image_name].get('user', 'admin')), 'ADMIN') }}
11 | - connection_auth_url: {{ salt['pillar.get']('keystone:services:keystone:endpoint:internalurl', 'http://{0}:5000/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
12 | {% for image_attr in pillar['images'][image_name] %}
13 | - {{ image_attr }}: {{ pillar['images'][image_name][image_attr] }}
14 | {% endfor %}
15 | {% endfor %}
16 |
--------------------------------------------------------------------------------
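A note on glance/images.sls above: each entry of the images pillar becomes a glance.image_present state; the optional user key picks the keystone user (and matching tenant) whose password authenticates the upload, and every remaining key is forwarded as-is, so the accepted attributes are whatever the bundled _states/glance.py supports. A hypothetical sketch, with illustrative attribute names:

images:
  cirros:
    user: admin                                 # keystone user looked up for the connection password
    source: http://example.com/cirros-disk.img  # illustrative attribute, forwarded verbatim
    disk_format: qcow2
    container_format: bare

--------------------------------------------------------------------------------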
/file_root/glance/init.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | glance-pkg-install:
3 | pkg:
4 | - installed
5 | - name: "{{ salt['pillar.get']('packages:glance', default='glance') }}"
6 |
7 |
8 | glance_registry_running:
9 | service:
10 | - running
11 | - name: "{{ salt['pillar.get']('services:glance_registry') }}"
12 | - watch:
13 | - pkg: glance-pkg-install
14 | - ini: glance-api-conf
15 | - ini: glance-registry-conf
16 |
17 | glance_api_running:
18 | service:
19 | - running
20 | - name: "{{ salt['pillar.get']('services:glance_api') }}"
21 | - watch:
22 | - pkg: glance-pkg-install
23 | - ini: glance-api-conf
24 | - ini: glance-registry-conf
25 |
26 | glance-api-conf:
27 | file:
28 | - managed
29 | - name: "{{ salt['pillar.get']('conf_files:glance_api', default="/etc/glance/glance-api.conf") }}"
30 | - mode: 644
31 | - user: glance
32 | - group: glance
33 | - require:
34 | - pkg: glance-pkg-install
35 | ini:
36 | - options_present
37 | - name: "{{ salt['pillar.get']('conf_files:glance_api', default="/etc/glance/glance-api.conf") }}"
38 | - sections:
39 | database:
40 | connection: "mysql://{{ salt['pillar.get']('databases:glance:username', default='glance') }}:{{ salt['pillar.get']('databases:glance:password', default='glance_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:glance:db_name', default='glance') }}"
41 | DEFAULT:
42 | rpc_backend: "{{ salt['pillar.get']('queue_engine', default='rabbit') }}"
43 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
44 | keystone_authtoken:
45 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
46 | auth_port: 35357
47 | auth_protocol: http
48 | admin_tenant_name: service
49 | admin_user: glance
50 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['glance']['password'] }}"
51 | auth_host: "{{ get_candidate('keystone') }}"
52 | paste_deploy:
53 | flavor: keystone
54 | - require:
55 | - file: glance-api-conf
56 |
57 | glance-registry-conf:
58 | file:
59 | - managed
60 | - name: "{{ salt['pillar.get']('conf_files:glance_registry', default="/etc/glance/glance-registry.conf") }}"
61 | - user: glance
62 | - group: glance
63 | - mode: 644
64 | - require:
65 | - pkg: glance-pkg-install
66 | ini:
67 | - options_present
68 | - name: "{{ salt['pillar.get']('conf_files:glance_registry', default="/etc/glance/glance-registry.conf") }}"
69 | - sections:
70 | database:
71 | connection: "mysql://{{ salt['pillar.get']('databases:glance:username', default='glance') }}:{{ salt['pillar.get']('databases:glance:password', default='glance_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:glance:db_name', default='glance') }}"
72 | DEFAULT:
73 | rpc_backend: "{{ salt['pillar.get']('queue_engine', default='rabbit') }}"
74 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
75 | keystone_authtoken:
76 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
77 | auth_port: 35357
78 | auth_protocol: http
79 | admin_tenant_name: service
80 | admin_user: glance
81 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['glance']['password'] }}"
82 | auth_host: "{{ get_candidate('keystone') }}"
83 | paste_deploy:
84 | flavor: keystone
85 | - require:
86 | - file: glance-registry-conf
87 |
88 | {% if 'db_sync' in salt['pillar.get']('databases:glance', default=()) %}
89 | glance_sync:
90 | cmd:
91 | - run
92 | - name: "{{ salt['pillar.get']('databases:glance:db_sync') }}"
93 | - require:
94 | - service: glance_registry_running
95 | - service: glance_api_running
96 | {% endif %}
97 |
98 | glance_sqlite_delete:
99 | file:
100 | - absent
101 | - name: /var/lib/glance/glance.sqlite
102 | - require:
103 |       - pkg: glance-pkg-install
104 |
--------------------------------------------------------------------------------
/file_root/horizon/init.sls:
--------------------------------------------------------------------------------
1 | apache-install:
2 | pkg:
3 | - installed
4 | - name: {{ salt['pillar.get']('packages:apache', default='apache2') }}
5 | - require:
6 | - pkg: memcached-install
7 |
8 | apache-service:
9 | service:
10 | - running
11 | - name: {{ salt['pillar.get']('services:apache', default='apache2') }}
12 | - watch:
13 | - file: enable-dashboard
14 | - pkg: apache_wsgi_module
15 |
16 | enable-dashboard:
17 | file:
18 | - symlink
19 | - force: true
20 | - name: "{{ salt['pillar.get']('conf_files:apache_dashboard_enabled_conf', default='/etc/apache2/conf-enabled/openstack-dashboard.conf') }}"
21 | - target: "{{ salt['pillar.get']('conf_files:apache_dashboard_conf', default='/etc/apache2/conf-available/openstack-dashboard.conf') }}"
22 | - require:
23 | - pkg: openstack-dashboard-install
24 |
25 | apache_wsgi_module:
26 | pkg:
27 | - installed
28 | - name: {{ salt['pillar.get']('packages:apache_wsgi_module', default='libapache2-mod-wsgi') }}
29 | - require:
30 | - pkg: apache-install
31 |
32 | memcached-install:
33 | pkg:
34 | - installed
35 | - name: {{ salt['pillar.get']('packages:memcached', default='memcached') }}
36 |
37 | memcached-service:
38 | service:
39 | - running
40 | - name: {{ salt['pillar.get']('services:memcached', default='memcached') }}
41 | - watch:
42 | - pkg: memcached-install
43 |
44 | openstack-dashboard-install:
45 | pkg:
46 | - installed
47 | - name: {{ salt['pillar.get']('packages:dashboard', default='openstack-dashboard') }}
48 | - require:
49 | - pkg: apache-install
50 |
--------------------------------------------------------------------------------
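A note that applies to most files in this tree (horizon/init.sls above included): package, service and config-file names are always resolved through the packages:, services: and conf_files: pillar namespaces with Debian/Ubuntu-flavoured defaults, so porting to another distribution is mostly a matter of overriding those keys. A hypothetical override sketch (RedHat-family names shown for illustration):

packages:
  apache: httpd
  apache_wsgi_module: mod_wsgi
services:
  apache: httpd
conf_files:
  keystone: /etc/keystone/keystone.conf

--------------------------------------------------------------------------------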
/file_root/horizon/secure.sls:
--------------------------------------------------------------------------------
1 | apache_rewrite_enable:
2 | apache_module:
3 | - enable
4 | - name: rewrite
5 | apache_ssl_enable:
6 | apache_module:
7 | - enable
8 | - name: ssl
9 | apache-service-module-enable:
10 | service:
11 | - running
12 | - name: "{{ salt['pillar.get']('services:apache', default='apache2') }}"
13 | - watch:
14 | - apache_module: apache_rewrite_enable
15 | - apache_module: apache_ssl_enable
16 |
--------------------------------------------------------------------------------
/file_root/horizon/secure.sls~:
--------------------------------------------------------------------------------
1 | apache_rewrite_enable:
2 | apache_module:
3 | - enable
4 | - name: rewrite
5 | apache_ssl_enable:
6 | apache_module:
7 | - enable
8 | - name: ssl
9 | apache-service-module-enable:
10 | service:
11 | - running
12 | - name: "{{ salt['pillar.get']('services:apache', default='apache2') }}"
13 | - watch:
14 | - apache_module: apache_rewrite_enable
15 | - apache_module: apache_ssl_enable
16 |
--------------------------------------------------------------------------------
/file_root/keystone/init.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | keystone-pkg-install:
3 | pkg:
4 | - installed
5 | - name: {{ salt['pillar.get']('packages:keystone', default='keystone') }}
6 |
7 | keystone-service-running:
8 | service:
9 | - running
10 | - name: {{ salt['pillar.get']('services:keystone', default='keystone') }}
11 | - watch:
12 | - pkg: keystone-pkg-install
13 | - ini: keystone-conf-file
14 |
15 | keystone-conf-file:
16 | file:
17 | - managed
18 | - name: {{ salt['pillar.get']('conf_files:keystone', default='/etc/keystone/keystone.conf') }}
19 | - user: root
20 | - group: root
21 | - mode: 644
22 | - require:
23 | - pkg: keystone-pkg-install
24 | ini:
25 | - options_present
26 | - name: {{ salt['pillar.get']('conf_files:keystone', default='/etc/keystone/keystone.conf') }}
27 | - sections:
28 | DEFAULT:
29 | admin_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
30 | database:
31 | connection: mysql://{{ salt['pillar.get']('databases:keystone:username', default='keystone') }}:{{ salt['pillar.get']('databases:keystone:password', default='keystone_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:keystone:db_name', default='keystone') }}
32 | - require:
33 | - file: keystone-conf-file
34 |
35 | {% if 'db_sync' in salt['pillar.get']('databases:keystone', default=()) %}
36 | keystone_sync:
37 | cmd:
38 | - run
39 | - name: {{ salt['pillar.get']('databases:keystone:db_sync') }}
40 | - require:
41 |       - service: keystone-service-running
42 | {% endif %}
43 |
44 | keystone_sqlite_delete:
45 | file:
46 | - absent
47 | - name: /var/lib/keystone/keystone.sqlite
48 | - require:
49 | - pkg: keystone-pkg-install
50 |
--------------------------------------------------------------------------------
/file_root/keystone/openstack_services.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | {% for service_name in pillar['keystone']['services'] %}
4 | {{ service_name }}_service:
5 | keystone:
6 | - service_present
7 | - name: {{ service_name }}
8 | - service_type: {{ pillar['keystone']['services'][service_name]['service_type'] }}
9 | - description: {{ pillar['keystone']['services'][service_name]['description'] }}
10 | - connection_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
11 | - connection_endpoint: {{ salt['pillar.get']('keystone:services:keystone:endpoint:adminurl', default='http://{0}:35357/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
12 | {{ service_name }}_endpoint:
13 | keystone:
14 | - endpoint_present
15 | - name: {{ service_name }}
16 | - publicurl: {{ pillar['keystone']['services'][service_name]['endpoint']['publicurl'].format(get_candidate(pillar['keystone']['services'][service_name]['endpoint']['endpoint_host_sls'])) }}
17 | - adminurl: {{ pillar['keystone']['services'][service_name]['endpoint']['adminurl'].format(get_candidate(pillar['keystone']['services'][service_name]['endpoint']['endpoint_host_sls'])) }}
18 | - internalurl: {{ pillar['keystone']['services'][service_name]['endpoint']['internalurl'].format(get_candidate(pillar['keystone']['services'][service_name]['endpoint']['endpoint_host_sls'])) }}
19 | - connection_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
20 | - connection_endpoint: {{ salt['pillar.get']('keystone:services:keystone:endpoint:adminurl', default='http://{0}:35357/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
21 | - require:
22 | - keystone: {{ service_name }}_service
23 | {% endfor %}
24 |
--------------------------------------------------------------------------------
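A note on keystone/openstack_services.sls above: every entry under keystone:services yields a service and an endpoint; the three endpoint URLs are format strings whose {0} placeholder is filled with the host that get_candidate() returns for the formula named in endpoint_host_sls, and the whole run authenticates with keystone:admin_token. A hypothetical sketch for the identity service itself:

keystone:
  admin_token: ADMIN
  services:
    keystone:
      service_type: identity
      description: OpenStack Identity
      endpoint:
        publicurl: "http://{0}:5000/v2.0"
        adminurl: "http://{0}:35357/v2.0"
        internalurl: "http://{0}:5000/v2.0"
        endpoint_host_sls: keystone    # formula whose candidate host fills {0}

--------------------------------------------------------------------------------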
/file_root/keystone/openstack_tenants.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | {% for tenant_name in pillar['keystone']['tenants'] %}
4 | {{ tenant_name }}_tenant:
5 | keystone:
6 | - tenant_present
7 | - name: {{ tenant_name }}
8 | - connection_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
9 | - connection_endpoint: {{ salt['pillar.get']('keystone:services:keystone:endpoint:adminurl', default='http://{0}:35357/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
10 | {% endfor %}
11 | {% for role_name in pillar['keystone']['roles'] %}
12 | {{ role_name }}_role:
13 | keystone:
14 | - role_present
15 | - name: {{ role_name }}
16 | - connection_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
17 | - connection_endpoint: {{ salt['pillar.get']('keystone:services:keystone:endpoint:adminurl', default='http://{0}:35357/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
18 | {% endfor %}
19 |
--------------------------------------------------------------------------------
/file_root/keystone/openstack_users.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | {% for tenant_name in pillar['keystone']['tenants'] %}
4 | {% for user_name in pillar['keystone']['tenants'][tenant_name]['users'] %}
5 | {{ user_name }}_user:
6 | keystone:
7 | - user_present
8 | - name: {{ user_name }}
9 | - password: {{ pillar['keystone']['tenants'][tenant_name]['users'][user_name]['password'] }}
10 | - email: {{ pillar['keystone']['tenants'][tenant_name]['users'][user_name]['email'] }}
11 | - tenant: {{ tenant_name }}
12 | - roles:
13 | - {{ tenant_name }}: {{ pillar['keystone']['tenants'][tenant_name]['users'][user_name]['roles'] }}
14 | - connection_token: {{ salt['pillar.get']('keystone:admin_token', default='ADMIN') }}
15 | - connection_endpoint: {{ salt['pillar.get']('keystone:services:keystone:endpoint:adminurl', default='http://{0}:35357/v2.0').format(get_candidate(salt['pillar.get']('keystone:services:keystone:endpoint:endpoint_host_sls', default='keystone'))) }}
16 | {% endfor %}
17 | {% endfor %}
18 |
19 |
--------------------------------------------------------------------------------
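A note on keystone/openstack_tenants.sls and openstack_users.sls above: tenants and roles are declared in the keystone pillar, and each tenant carries its users with password, email and role assignments; these are the same passwords the other formulas read as pillar['keystone']['tenants'][...]['users'][...]['password']. A hypothetical sketch:

keystone:
  roles:
    - admin
    - Member
  tenants:
    service:
      users:
        glance:
          password: glance_pass
          email: glance@example.com
          roles:
            - admin

--------------------------------------------------------------------------------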
/file_root/mysql/client.sls:
--------------------------------------------------------------------------------
1 | mysql-client-install:
2 | pkg:
3 | - installed
4 | - name: {{ salt['pillar.get']('packages:mysql-client', default='mysql-client') }}
5 | python-mysql-library-install:
6 | pkg:
7 | - installed
8 | - name: {{ salt['pillar.get']('packages:python-mysql-library', default='python-mysqldb') }}
9 | - require:
10 | - pkg: mysql-client-install
11 |
--------------------------------------------------------------------------------
/file_root/mysql/init.sls:
--------------------------------------------------------------------------------
1 | mysql-conf-file:
2 | file:
3 | - managed
4 | - group: root
5 |     - mode: 644
6 |     - name: {{ salt['pillar.get']('conf_files:mysql', default='/etc/mysql/my.cnf') }}
7 |     - user: root
8 |     - require:
9 |       - pkg: mysql-server-install
10 | ini:
11 | - options_present
12 | - name: {{ salt['pillar.get']('conf_files:mysql', default='/etc/mysql/my.cnf') }}
13 | - sections:
14 | mysqld:
15 | bind-address: 0.0.0.0
16 | character-set-server: utf8
17 | collation-server: utf8_general_ci
18 | init-connect: 'SET NAMES utf8'
19 | - require:
20 | - file: mysql-conf-file
21 |
22 | mysql-server-install:
23 | pkg:
24 | - installed
25 | - name: {{ salt['pillar.get']('packages:mysql-server', default='mysql-server') }}
26 |
27 | mysql-service-running:
28 | service:
29 | - running
30 | - name: {{ salt['pillar.get']('services:mysql', default='mysql') }}
31 | - watch:
32 | - pkg: mysql-server-install
33 | - ini: mysql-conf-file
34 |
35 |
--------------------------------------------------------------------------------
/file_root/mysql/openstack_dbschema.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import hosts with context %}
2 | {% for openstack_service in pillar['databases'] %}
3 | {{ openstack_service }}-db:
4 | mysql_database:
5 | - present
6 | - name: {{ pillar['databases'][openstack_service]['db_name'] }}
7 | - character_set: 'utf8'
8 | {% for server in hosts %}
9 | {{ server }}-{{ openstack_service }}-accounts:
10 | mysql_user:
11 | - present
12 | - name: {{ pillar['databases'][openstack_service]['username'] }}
13 | - password: {{ pillar['databases'][openstack_service]['password'] }}
14 | - host: {{ server }}
15 | - require:
16 | - mysql_database: {{ openstack_service }}-db
17 | mysql_grants:
18 | - present
19 | - grant: all
20 | - database: "{{ pillar['databases'][openstack_service]['db_name'] }}.*"
21 | - user: {{ pillar['databases'][openstack_service]['username'] }}
22 | - password: {{ pillar['databases'][openstack_service]['password'] }}
23 | - host: {{ server }}
24 | - require:
25 | - mysql_user: {{ server }}-{{ openstack_service }}-accounts
26 | {% endfor %}
27 | {% endfor %}
28 |
--------------------------------------------------------------------------------
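A note on mysql/openstack_dbschema.sls above: one database is created per entry of the databases pillar, and the service account is granted access from every host in the cluster; when a db_sync command is present, the owning formula (keystone, glance, ...) runs it after its service is up. A hypothetical sketch for one service:

databases:
  keystone:
    db_name: keystone
    username: keystone
    password: keystone_pass
    db_sync: "keystone-manage db_sync"    # illustrative command, run by keystone/init.sls

--------------------------------------------------------------------------------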
/file_root/neutron/guest_mtu.sls:
--------------------------------------------------------------------------------
1 | neutron-dhcp-agent-running-mtu:
2 | service:
3 | - running
4 | - name: "{{ salt['pillar.get']('services:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
5 | - watch:
6 | - ini: neutron-dhcp-agent-config-mtu
7 | - file: neutron-dhcp-agent-config-mtu
8 | - ini: dnsmasq-conf
9 | - file: dnsmasq-conf
10 |
11 | neutron-dhcp-agent-config-mtu:
12 | file:
13 | - managed
14 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
15 | - user: neutron
16 | - group: neutron
17 | - mode: 644
18 | - require:
19 | - ini: neutron-dhcp-agent-config-mtu
20 | ini:
21 | - options_present
22 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
23 | - sections:
24 | DEFAULT:
25 | dnsmasq_config_file: "/etc/neutron/dnsmasq-neutron.conf"
26 |
27 | dnsmasq-conf:
28 | file:
29 | - managed
30 | - name: "/etc/neutron/dnsmasq-neutron.conf"
31 | - user: neutron
32 | - group: neutron
33 | - mode: 644
34 | - require:
35 | - ini: dnsmasq-conf
36 | ini:
37 | - options_present
38 | - name: "/etc/neutron/dnsmasq-neutron.conf"
39 | - sections:
40 | DEFAULT:
41 |         dhcp-option-force: "26,1454"
42 | - require:
43 | - ini: neutron-dhcp-agent-config-mtu
44 |
--------------------------------------------------------------------------------
/file_root/neutron/guest_mtu.sls~:
--------------------------------------------------------------------------------
1 | neutron-dhcp-agent-running-mtu:
2 | service:
3 | - running
4 | - name: "{{ salt['pillar.get']('services:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
5 | - watch:
6 | - ini: neutron-dhcp-agent-config-mtu
7 | - file: neutron-dhcp-agent-config-mtu
8 | - ini: dnsmasq-conf
9 | - file: dnsmasq-conf
10 |
11 | neutron-dhcp-agent-config-mtu:
12 | file:
13 | - managed
14 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
15 | - user: neutron
16 | - group: neutron
17 | - mode: 644
18 | - require:
19 | - ini: neutron-dhcp-agent-config-mtu
20 | ini:
21 | - options_present
22 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
23 | - sections:
24 | DEFAULT:
25 | <<<<<<< HEAD
26 | dnsmasq_config_file = "/etc/neutron/dnsmasq-neutron.conf"
27 | =======
28 | dnsmasq_config_file: "/etc/neutron/dnsmasq-neutron.conf"
29 | >>>>>>> 3c91147d9e82f59689a5132839073894b94ddfb7
30 |
31 | dnsmasq-conf:
32 | file:
33 | - managed
34 | - name: "/etc/neutron/dnsmasq-neutron.conf"
35 | - user: neutron
36 | - group: neutron
37 | - mode: 644
38 | - require:
39 | - ini: dnsmasq-conf
40 | ini:
41 | - options_present
42 | - name: "/etc/neutron/dnsmasq-neutron.conf"
43 | - sections:
44 | DEFAULT:
45 | dhcp-options-force: "26,1454"
46 | - require:
47 | - ini: neutron-dhcp-agent-config-mtu
48 |
--------------------------------------------------------------------------------
/file_root/neutron/init.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | neutron-server-install:
4 | pkg:
5 | - installed
6 | - name: "{{ salt['pillar.get']('packages:neutron_server', default='neutron-server') }}"
7 |
8 | neutron-server-service:
9 | service:
10 | - running
11 | - name: "{{ salt['pillar.get']('services:neutron_server', default='neutron-server') }}"
12 | - watch:
13 | - pkg: neutron-server-install
14 | - ini: neutron-conf-file
15 | - file: neutron-conf-file
16 |
17 | neutron-conf-file:
18 | file:
19 | - managed
20 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
21 | - user: neutron
22 | - group: neutron
23 | - mode: 644
24 | - require:
25 | - pkg: neutron-server-install
26 | ini:
27 | - options_present
28 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
29 | - sections:
30 | DEFAULT:
31 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
32 | auth_strategy: keystone
33 | rpc_backend: neutron.openstack.common.rpc.impl_kombu
34 | core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
35 | service_plugins: neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
36 | allow_overlapping_ips: True
37 | verbose: True
38 | notify_nova_on_port_status_changes: True
39 | notify_nova_on_port_data_changes: True
40 | nova_url: "http://{{ get_candidate('keystone') }}:8774/v2"
41 | nova_admin_username: nova
42 | nova_admin_tenant_id: service
43 | nova_admin_password: "{{ pillar['keystone']['tenants']['service']['users']['nova']['password'] }}"
44 | nova_admin_auth_url: "http://{{ get_candidate('keystone') }}:35357/v2.0"
45 | keystone_authtoken:
46 | auth_protocol: http
47 | admin_user: neutron
48 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
49 | auth_host: "{{ get_candidate('keystone') }}"
50 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
51 | admin_tenant_name: service
52 | auth_port: 35357
53 | database:
54 | connection: "mysql://{{ salt['pillar.get']('databases:neutron:username', default='neutron') }}:{{ salt['pillar.get']('databases:neutron:password', default='neutron_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:neutron:db_name', default='neutron') }}"
55 | - require:
56 | - file: neutron-conf-file
57 |
--------------------------------------------------------------------------------
/file_root/neutron/ml2.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/physical_networks.jinja" import mappings with context %}
2 | {% from "cluster/physical_networks.jinja" import vlan_networks with context %}
3 | {% from "cluster/physical_networks.jinja" import flat_networks with context %}
4 |
5 | neutron_ml2:
6 | pkg:
7 | - installed
8 | - name: "{{ salt['pillar.get']('packages:neutron_ml2', default='neutron-plugin-ml2') }}"
9 |
10 | ml2_config_file:
11 | file:
12 | - managed
13 | - name: "{{ salt['pillar.get']('conf_files:neutron_ml2', default='/etc/neutron/plugins/ml2/ml2_conf.ini') }}"
14 | - user: neutron
15 | - group: neutron
16 | - mode: 644
17 | - require:
18 | - pkg: neutron_ml2
19 | ini:
20 | - options_present
21 | - name: "{{ salt['pillar.get']('conf_files:neutron_ml2', default='/etc/neutron/plugins/ml2/ml2_conf.ini') }}"
22 | - sections:
23 | ml2:
24 | type_drivers: "{{ ','.join(pillar['neutron']['type_drivers']) }}"
25 | tenant_network_types: "{{ ','.join(pillar['neutron']['type_drivers']) }}"
26 | mechanism_drivers: openvswitch
27 | {% if 'flat' in pillar['neutron']['type_drivers'] %}
28 | ml2_type_flat:
29 | flat_networks: "{{ ','.join(flat_networks) }}"
30 | {% endif %}
31 | {% if 'vlan' in pillar['neutron']['type_drivers'] %}
32 | ml2_type_vlan:
33 | network_vlan_ranges: "{{ ','.join(vlan_networks) }}"
34 | {% endif %}
35 | {% if 'gre' in pillar['neutron']['type_drivers'] %}
36 | ml2_type_gre:
37 | tunnel_id_ranges: "{{ pillar['neutron']['type_drivers']['gre']['tunnel_start'] }}:{{ pillar['neutron']['type_drivers']['gre']['tunnel_end'] }}"
38 | {% endif %}
39 | {% if 'vxlan' in pillar['neutron']['type_drivers'] %}
40 | ml2_type_vxlan:
41 |           vni_ranges: "{{ pillar['neutron']['type_drivers']['vxlan']['tunnel_start'] }}:{{ pillar['neutron']['type_drivers']['vxlan']['tunnel_end'] }}"
42 | {% endif %}
43 | ovs:
44 | {% if 'flat' in pillar['neutron']['type_drivers'] or 'vlan' in pillar['neutron']['type_drivers'] %}
45 | bridge_mappings: "{{ ','.join(mappings) }}"
46 | {% endif %}
47 | {% if 'gre' in pillar['neutron']['type_drivers'] %}
48 | tunnel_type: gre
49 | enable_tunneling: True
50 | local_ip: "{{ pillar['hosts'][grains['id']] }}"
51 | {% endif %}
52 | {% if 'vxlan' in pillar['neutron']['type_drivers'] %}
53 | tunnel_type: vxlan
54 | enable_tunneling: True
55 | local_ip: "{{ pillar['hosts'][grains['id']] }}"
56 | {% endif %}
57 | securitygroup:
58 | firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
59 | enable_security_group: True
60 | - require:
61 | - file: ml2_config_file
62 |
--------------------------------------------------------------------------------
/file_root/neutron/ml2.sls~:
--------------------------------------------------------------------------------
1 | {% from "cluster/physical_networks.jinja" import mappings with context %}
2 | {% from "cluster/physical_networks.jinja" import vlan_networks with context %}
3 | {% from "cluster/physical_networks.jinja" import flat_networks with context %}
4 |
5 | neutron_ml2:
6 | pkg:
7 | - installed
8 | - name: "{{ salt['pillar.get']('packages:neutron_ml2', default='neutron-plugin-ml2') }}"
9 |
10 | ml2_config_file:
11 | file:
12 | - managed
13 | - name: "{{ salt['pillar.get']('conf_files:neutron_ml2', default='/etc/neutron/plugins/ml2/ml2_conf.ini') }}"
14 | - user: neutron
15 | - group: neutron
16 | - mode: 644
17 | - require:
18 | - pkg: neutron_ml2
19 | ini:
20 | - options_present
21 | - name: "{{ salt['pillar.get']('conf_files:neutron_ml2', default='/etc/neutron/plugins/ml2/ml2_conf.ini') }}"
22 | - sections:
23 | ml2:
24 | type_drivers: "{{ ','.join(pillar['neutron']['type_drivers']) }}"
25 | tenant_network_types: "{{ ','.join(pillar['neutron']['type_drivers']) }}"
26 | mechanism_drivers: openvswitch
27 | {% if 'flat' in pillar['neutron']['type_drivers'] %}
28 | ml2_type_flat:
29 | flat_networks: "{{ ','.join(flat_networks) }}"
30 | {% endif %}
31 | {% if 'vlan' in pillar['neutron']['type_drivers'] %}
32 | ml2_type_vlan:
33 | network_vlan_ranges: "{{ ','.join(vlan_networks) }}"
34 | {% endif %}
35 | {% if 'gre' in pillar['neutron']['type_drivers'] %}
36 | ml2_type_gre:
37 | tunnel_id_ranges: "{{ pillar['neutron']['type_drivers']['gre']['tunnel_start'] }}:{{ pillar['neutron']['type_drivers']['gre']['tunnel_end'] }}"
38 | {% endif %}
39 | {% if 'vxlan' in pillar['neutron']['type_drivers'] %}
40 | ml2_type_vxlan:
41 | vni_ranges: "{{ pillar['neutron']['type_drivers']['gre']['tunnel_start'] }}:{{ pillar['neutron']['type_drivers']['gre']['tunnel_end'] }}"
42 | {% endif %}
43 | ovs:
44 | {% if 'flat' in pillar['neutron']['type_drivers'] or 'vlan' in pillar['neutron']['type_drivers'] %}
45 | bridge_mappings: "{{ ','.join(mappings) }}"
46 | {% endif %}
47 | {% if 'gre' in pillar['neutron']['type_drivers'] %}
48 | tunnel_type: gre
49 | enable_tunneling: True
50 | local_ip: "{{ pillar['hosts'][grains['id']] }}"
51 | {% endif %}
52 | {% if 'vxlan' in pillar['neutron']['type_drivers'] %}
53 | tunnel_type: vxlan
54 | enable_tunneling: True
55 | local_ip: "{{ pillar['hosts'][grains['id']] }}"
56 | {% endif %}
57 | securitygroup:
58 | firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
59 | enable_security_group: True
60 | - require:
61 | - file: ml2_config_file
62 |
--------------------------------------------------------------------------------
/file_root/neutron/networks.sls:
--------------------------------------------------------------------------------
1 | {% for network in salt['pillar.get']('neutron:networks', ()) %}
2 | openstack-network-{{ network }}:
3 | neutron:
4 | - network_present
5 | - name: {{ network }}
6 | {% for network_param in salt['pillar.get']('neutron:networks:%s' % network, ()) %}
7 | {% if not network_param == 'subnets' %}
8 | - {{ network_param }}: {{ pillar['neutron']['networks'][network][network_param] }}
9 | {% endif %}
10 | {% endfor %}
11 | {% for subnet in salt['pillar.get']('neutron:networks:%s:subnets' % network, ()) %}
12 | openstack-subnet-{{ subnet }}:
13 | neutron:
14 | - subnet_present
15 | - name: {{ subnet }}
16 | - network: {{ network }}
17 | {% for subnet_param in salt['pillar.get']('neutron:networks:%s:subnets:%s' % (network, subnet), ()) %}
18 | - {{ subnet_param }}: {{ pillar['neutron']['networks'][network]['subnets'][subnet][subnet_param] }}
19 | {% endfor %}
20 | - require:
21 | - neutron: openstack-network-{{ network }}
22 | {% endfor %}
23 | {% endfor %}
24 |
--------------------------------------------------------------------------------
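A note on neutron/networks.sls above: networks and their subnets are declared directly in the neutron:networks pillar; every key except subnets is forwarded to neutron.network_present and every subnet key to neutron.subnet_present, so the accepted options are whatever the bundled _states/neutron.py implements. A hypothetical sketch, with illustrative option names:

neutron:
  networks:
    ext-net:
      router_external: True        # illustrative option, forwarded verbatim
      subnets:
        ext-subnet:
          cidr: 203.0.113.0/24     # illustrative option, forwarded verbatim
          enable_dhcp: False

--------------------------------------------------------------------------------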
/file_root/neutron/openvswitch.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | {% from "cluster/physical_networks.jinja" import bridges with context %}
3 | neutron-l2-agent-install:
4 | pkg:
5 | - installed
6 | - name: "{{ salt['pillar.get']('packages:neutron_l2_agent', default='neutron-plugin-openvswitch-agent') }}"
7 | {% if bridges %}
8 | - require:
9 | {% for bridge in bridges %}
10 | - cmd: bridge-{{ bridge }}-create
11 | {% endfor %}
12 | {% endif %}
13 |
14 | neutron-l2-agent-running:
15 | service:
16 | - running
17 | - name: "{{ salt['pillar.get']('services:neutron_l2_agent', default='neutron-plugin-openvswitch-agent') }}"
18 | - watch:
19 | - pkg: neutron-l2-agent-install
20 | - file: l2-agent-neutron-config-file
21 |
22 | l2-agent-neutron-config-file:
23 | file:
24 | - managed
25 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
26 | - group: neutron
27 | - user: neutron
28 | - mode: 644
29 | - require:
30 | - ini: l2-agent-neutron-config-file
31 | ini:
32 | - options_present
33 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
34 | - sections:
35 | DEFAULT:
36 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
37 | neutron_metadata_proxy_shared_secret: "{{ pillar['neutron']['metadata_secret'] }}"
38 | service_neutron_metadata_proxy: true
39 | auth_strategy: keystone
40 | rpc_backend: neutron.openstack.common.rpc.impl_kombu
41 | core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
42 | service_plugins: neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
43 | allow_overlapping_ips: True
44 | verbose: True
45 | keystone_authtoken:
46 | auth_protocol: http
47 | admin_user: neutron
48 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
49 | auth_host: "{{ get_candidate('keystone') }}"
50 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
51 | admin_tenant_name: service
52 | auth_port: 35357
53 | - require:
54 | - pkg: neutron-l2-agent-install
55 |
56 |
57 | openvswitch-switch-install:
58 | pkg:
59 | - installed
60 | - name: "{{ salt['pillar.get']('packages:openvswitch', default='openvswitch-switch') }}"
61 |
62 | openvswitch-switch-running:
63 | service:
64 | - running
65 | - name: "{{ salt['pillar.get']('services:openvswitch', default='openvswitch-switch') }}"
66 | - require:
67 | - pkg: openvswitch-switch-install
68 |
69 | {% for bridge in bridges %}
70 | bridge-{{ bridge }}-create:
71 | cmd:
72 | - run
73 | - name: "ovs-vsctl add-br {{ bridge }}"
74 | - unless: "ovs-vsctl br-exists {{ bridge }}"
75 | - require:
76 | - service: openvswitch-switch-running
77 | {% if bridges[bridge] %}
78 | {% if salt['pillar.get']('neutron:single_nic') %}
79 | proxy-bridge-create-{{ bridge }}:
80 | cmd:
81 | - run
82 | - name: "ovs-vsctl add-br br-proxy"
83 | - unless: "ovs-vsctl br-exists br-proxy"
84 | primary-nic-bring-up-{{ bridge }}:
85 | cmd:
86 | - run
87 | - name: "ip link set {{ salt['pillar.get']('neutron:single_nic') }} up promisc on"
88 | - require:
89 | - cmd: proxy-bridge-create-{{ bridge }}
90 | {% if bridges[bridge] not in salt['network.interfaces']() %}
91 | remove-fake-{{ bridges[bridge] }}-interfaces:
92 | cmd:
93 | - run
94 | - name: "ovs-vsctl del-port {{ bridges[bridge] }}"
95 | - onlyif: "ovs-vsctl list-ports {{ bridge }} | grep {{ bridges[bridge] }}"
96 | - require:
97 | - cmd: bridge-{{ bridge }}-create
98 | remove-fake-{{ bridges[bridge] }}-br-proxy-interfaces:
99 | cmd:
100 | - run
101 | - name: "ovs-vsctl del-port {{ bridges[bridge] }}-br-proxy"
102 | - onlyif: "ovs-vsctl list-ports {{ bridge }} | grep {{ bridges[bridge] }}-br-proxy"
103 | - require:
104 | - cmd: bridge-{{ bridge }}-create
105 | veth-add-{{ bridges[bridge] }}:
106 | cmd:
107 | - run
108 | - name: "ip link add {{ bridges[bridge] }} type veth peer name {{ bridges[bridge] }}-br-proxy"
109 | - require:
110 | - cmd: remove-fake-{{ bridges[bridge] }}-interfaces
111 | - cmd: remove-fake-{{ bridges[bridge] }}-br-proxy-interfaces
112 | veth-bring-up-{{ bridges[bridge] }}-br-proxy:
113 | cmd:
114 | - run
115 | - name: "ip link set {{ bridges[bridge] }}-br-proxy up promisc on"
116 | - require:
117 | - cmd: veth-add-{{ bridges[bridge] }}
118 | veth-add-{{ bridges[bridge] }}-br-proxy:
119 | cmd:
120 | - run
121 | - name: "ovs-vsctl add-port br-proxy {{ bridges[bridge] }}-br-proxy"
122 | - require:
123 | - cmd: veth-add-{{ bridges[bridge] }}
124 | {% endif %}
125 | {% endif %}
126 | {{ bridge }}-interface-add:
127 | cmd:
128 | - run
129 | - name: "ovs-vsctl add-port {{ bridge }} {{ bridges[bridge] }}"
130 | - unless: "ovs-vsctl list-ports {{ bridge }} | grep {{ bridges[bridge] }}"
131 | - require:
132 | - cmd: "bridge-{{ bridge }}-create"
133 | {{ bridges[bridge] }}-interface-bring-up:
134 | cmd:
135 | - run
136 | - name: "ip link set {{ bridges[bridge] }} up promisc on"
137 | - require:
138 | - cmd: {{ bridge }}-interface-add
139 | {% endif %}
140 | {% endfor %}
141 |
--------------------------------------------------------------------------------
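A note on neutron/openvswitch.sls above: it creates every OVS bridge computed in physical_networks.jinja and plugs in the per-host interface. When neutron:single_nic is set, the host's lone physical NIC is put into a separate br-proxy bridge and each provider bridge is patched to br-proxy through a veth pair instead of claiming the NIC directly, so several bridges can share one interface; in that case the per-host value in the physnet's hosts mapping names the veth end rather than a real NIC. A hypothetical sketch of the extra key:

neutron:
  single_nic: eth0     # the single physical interface shared via br-proxy

--------------------------------------------------------------------------------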
/file_root/neutron/routers.sls:
--------------------------------------------------------------------------------
1 | {% for router in salt['pillar.get']('neutron:routers', ()) %}
2 | openstack-router-{{ router }}:
3 | neutron:
4 | - router_present
5 | - name: {{ router }}
6 | {% for router_param in salt['pillar.get']('neutron:routers:%s' % router, ()) %}
7 | - {{ router_param }}: {{ pillar['neutron']['routers'][router][router_param] }}
8 | {% endfor %}
9 | {% endfor %}
10 |
--------------------------------------------------------------------------------
/file_root/neutron/security_groups.sls:
--------------------------------------------------------------------------------
1 | {% for security_group in salt['pillar.get']('neutron:security_groups', ()) %}
2 | openstack-security-group-{{ security_group }}:
3 | neutron:
4 | - security_group_present
5 | - name: {{ security_group }}
6 | - description: {{ salt['pillar.get']('neutron:security_groups:%s:description' % security_group, None) }}
7 | - rules: {{ salt['pillar.get']('neutron:security_groups:%s:rules' % security_group, []) }}
8 | {% endfor %}
9 |
--------------------------------------------------------------------------------
/file_root/neutron/service.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | neutron-dhcp-agent-install:
3 | pkg:
4 | - installed
5 | - name: "{{ salt['pillar.get']('packages:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
6 |
7 | neutron-dhcp-agent-running:
8 | service:
9 | - running
10 | - name: "{{ salt['pillar.get']('services:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
11 | - watch:
12 | - pkg: neutron-dhcp-agent-install
13 | - ini: neutron-dhcp-agent-config
14 | - file: neutron-dhcp-agent-config
15 | - ini: neutron-service-neutron-conf
16 | - file: neutron-service-neutron-conf
17 |
18 | neutron-dhcp-agent-config:
19 | file:
20 | - managed
21 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
22 | - user: neutron
23 | - group: neutron
24 | - mode: 644
25 | - require:
26 | - ini: neutron-dhcp-agent-config
27 | ini:
28 | - options_present
29 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
30 | - sections:
31 | DEFAULT:
32 | dhcp_driver: neutron.agent.linux.dhcp.Dnsmasq
33 | interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver
34 | use_namespaces: True
35 | - require:
36 | - pkg: neutron-dhcp-agent-install
37 |
38 |
39 | neutron-metadata-agent-install:
40 | pkg:
41 | - installed
42 | - name: "{{ salt['pillar.get']('packages:neutron_metadata_agent', default='neutron-metadata-agent') }}"
43 |
44 | neutron-metadata-agent-running:
45 | service:
46 | - running
47 | - name: "{{ salt['pillar.get']('services:neutron_metadata_agent', default='neutron-metadata-agent') }}"
48 | - watch:
49 | - pkg: neutron-metadata-agent-install
50 | - file: neutron-metadata-agent-conf
51 | - ini: neutron-metadata-agent-conf
52 | - ini: neutron-service-neutron-conf
53 | - file: neutron-service-neutron-conf
54 |
55 | neutron-metadata-agent-conf:
56 | file:
57 | - managed
58 | - name: "{{ salt['pillar.get']('conf_files:neutron_metadata_agent', default='/etc/neutron/metadata_agent.ini') }}"
59 | - user: neutron
60 | - group: neutron
61 | - mode: 644
62 | - require:
63 | - ini: neutron-metadata-agent-conf
64 | ini:
65 | - options_present
66 | - name: "{{ salt['pillar.get']('conf_files:neutron_metadata_agent', default='/etc/neutron/metadata_agent.ini') }}"
67 | - sections:
68 | DEFAULT:
69 | admin_tenant_name: service
70 | auth_region: RegionOne
71 | admin_user: neutron
72 | auth_url: "http://{{ get_candidate('keystone') }}:5000/v2.0"
73 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
74 | metadata_proxy_shared_secret: "{{ pillar['neutron']['metadata_secret'] }}"
75 | nova_metadata_ip: "{{ get_candidate('nova') }}"
76 | - require:
77 | - pkg: neutron-metadata-agent-install
78 |
79 | neutron-l3-agent-install:
80 | pkg:
81 | - installed
82 | - name: "{{ salt['pillar.get']('packages:neutron_l3_agent', default='neutron-l3-agent') }}"
83 |
84 | neutron-l3-agent-running:
85 | service:
86 | - running
87 | - name: "{{ salt['pillar.get']('services:neutron_l3_agent', default='neutron-l3-agent') }}"
88 | - watch:
89 |       - pkg: neutron-l3-agent-install
90 | - ini: neutron-l3-agent-config
91 | - file: neutron-l3-agent-config
92 | - ini: neutron-service-neutron-conf
93 | - file: neutron-service-neutron-conf
94 |
95 | neutron-l3-agent-config:
96 | file:
97 | - managed
98 | - name: "{{ salt['pillar.get']('conf_files:neutron_l3_agent', default='/etc/neutron/l3_agent.ini') }}"
99 | - user: neutron
100 | - group: neutron
101 | - mode: 644
102 | - require:
103 | - ini: neutron-l3-agent-config
104 | ini:
105 | - options_present
106 | - name: "{{ salt['pillar.get']('conf_files:neutron_l3_agent', default='/etc/neutron/l3_agent.ini') }}"
107 | - sections:
108 | DEFAULT:
109 | interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver
110 | use_namespaces: True
111 | - require:
112 | - pkg: neutron-l3-agent-install
113 |
114 | enable_forwarding:
115 | file:
116 | - managed
117 | - name: "{{ salt['pillar.get']('conf_files:syslinux', default='/etc/sysctl.conf' ) }}"
118 | ini:
119 | - options_present
120 | - name: "{{ salt['pillar.get']('conf_files:syslinux', default='/etc/sysctl.conf' ) }}"
121 | - sections:
122 | DEFAULT_IMPLICIT:
123 | net.ipv4.conf.all.rp_filter: 0
124 | net.ipv4.ip_forward: 1
125 | net.ipv4.conf.default.rp_filter: 0
126 |
127 |
128 | networking-service:
129 | service:
130 | - running
131 | - name: "{{ salt['pillar.get']('conf_files:networking', default='networking') }}"
132 | - watch:
133 | - file: enable_forwarding
134 | - ini: enable_forwarding
135 |
136 | neutron-service-neutron-conf:
137 | file:
138 | - managed
139 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
140 | - user: neutron
141 | - group: neutron
142 | - mode: 644
143 | - require:
144 | - ini: neutron-service-neutron-conf
145 | ini:
146 | - options_present
147 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
148 | - sections:
149 | DEFAULT:
150 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
151 | auth_strategy: keystone
152 | rpc_backend: neutron.openstack.common.rpc.impl_kombu
153 | core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
154 | service_plugins: neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
155 | allow_overlapping_ips: True
156 | verbose: True
157 | keystone_authtoken:
158 | auth_protocol: http
159 | admin_user: neutron
160 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
161 | auth_host: "{{ get_candidate('keystone') }}"
162 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
163 | admin_tenant_name: service
164 | auth_port: 35357
165 | - require:
166 | - pkg: neutron-metadata-agent-install
167 | - pkg: neutron-dhcp-agent-install
168 | - pkg: neutron-l3-agent-install
169 |
--------------------------------------------------------------------------------
/file_root/neutron/service.sls~:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 | neutron-dhcp-agent-install:
3 | pkg:
4 | - installed
5 | - name: "{{ salt['pillar.get']('packages:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
6 |
7 | neutron-dhcp-agent-running:
8 | service:
9 | - running
10 | - name: "{{ salt['pillar.get']('services:neutron_dhcp_agent', default='neutron-dhcp-agent') }}"
11 | - watch:
12 | - pkg: neutron-dhcp-agent-install
13 | - ini: neutron-dhcp-agent-config
14 | - file: neutron-dhcp-agent-config
15 | - ini: neutron-service-neutron-conf
16 | - file: neutron-service-neutron-conf
17 |
18 | neutron-dhcp-agent-config:
19 | file:
20 | - managed
21 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
22 | - user: neutron
23 | - group: neutron
24 | - mode: 644
25 | - require:
26 | - ini: neutron-dhcp-agent-config
27 | ini:
28 | - options_present
29 | - name: "{{ salt['pillar.get']('conf_files:neutron_dhcp_agent', default='/etc/neutron/dhcp_agent.ini') }}"
30 | - sections:
31 | DEFAULT:
32 | dhcp_driver: neutron.agent.linux.dhcp.Dnsmasq
33 | interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver
34 | use_namespaces: True
35 | - require:
36 | - pkg: neutron-dhcp-agent-install
37 |
38 |
39 | neutron-metadata-agent-install:
40 | pkg:
41 | - installed
42 | - name: "{{ salt['pillar.get']('packages:neutron_metadata_agent', default='neutron-metadata-agent') }}"
43 |
44 | neutron-metadata-agent-running:
45 | service:
46 | - running
47 | - name: "{{ salt['pillar.get']('services:neutron_metadata_agent', default='neutron-metadata-agent') }}"
48 | - watch:
49 | - pkg: neutron-metadata-agent-install
50 | - file: neutron-metadata-agent-conf
51 | - ini: neutron-metadata-agent-conf
52 | - ini: neutron-service-neutron-conf
53 | - file: neutron-service-neutron-conf
54 |
55 | neutron-metadata-agent-conf:
56 | file:
57 | - managed
58 | - name: "{{ salt['pillar.get']('conf_files:neutron_metadata_agent', default='/etc/neutron/metadata_agent.ini') }}"
59 | - user: neutron
60 | - group: neutron
61 | - mode: 644
62 | - require:
63 | - ini: neutron-metadata-agent-conf
64 | ini:
65 | - options_present
66 | - name: "{{ salt['pillar.get']('conf_files:neutron_metadata_agent', default='/etc/neutron/metadata_agent.ini') }}"
67 | - sections:
68 | DEFAULT:
69 | admin_tenant_name: service
70 | auth_region: RegionOne
71 | admin_user: neutron
72 | auth_url: "http://{{ get_candidate('keystone') }}:5000/v2.0"
73 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
74 | metadata_proxy_shared_secret: "{{ pillar['neutron']['metadata_secret'] }}"
75 | nova_metadata_ip: "{{ get_candidate('nova') }}"
76 | - require:
77 | - pkg: neutron-metadata-agent-install
78 |
79 | neutron-l3-agent-install:
80 | pkg:
81 | - installed
82 | - name: "{{ salt['pillar.get']('packages:neutron_l3_agent', default='neutron-l3-agent') }}"
83 |
84 | neutron-l3-agent-running:
85 | service:
86 | - running
87 | - name: "{{ salt['pillar.get']('services:neutron_l3_agent', default='neutron-l3-agent') }}"
88 | - watch:
89 | - pkg: neutron-l3-agent
90 | - ini: neutron-l3-agent-config
91 | - file: neutron-l3-agent-config
92 | - ini: neutron-service-neutron-conf
93 | - file: neutron-service-neutron-conf
94 |
95 | neutron-l3-agent-config:
96 | file:
97 | - managed
98 | - name: "{{ salt['pillar.get']('conf_files:neutron_l3_agent', default='/etc/neutron/l3_agent.ini') }}"
99 | - user: neutron
100 | - group: neutron
101 | - mode: 644
102 | - require:
103 | - ini: neutron-l3-agent-config
104 | ini:
105 | - options_present
106 | - name: "{{ salt['pillar.get']('conf_files:neutron_l3_agent', default='/etc/neutron/l3_agent.ini') }}"
107 | - sections:
108 | DEFAULT:
109 | interface_driver: neutron.agent.linux.interface.OVSInterfaceDriver
110 | use_namespaces: True
111 | - require:
112 | - pkg: neutron-l3-agent-install
113 |
114 | enable_forwarding:
115 | file:
116 | - managed
117 | - name: "{{ salt['pillar.get']('conf_files:syslinux', default='/etc/sysctl.conf' ) }}"
118 | ini:
119 | - options_present
120 | - name: "{{ salt['pillar.get']('conf_files:syslinux', default='/etc/sysctl.conf' ) }}"
121 | - sections:
122 | DEFAULT_IMPLICIT:
123 | net.ipv4.conf.all.rp_filter: 0
124 | net.ipv4.ip_forward: 1
125 | net.ipv4.conf.default.rp_filter: 0
126 |
127 |
128 | networking-service:
129 | service:
130 | - running
131 | - name: "{{ salt['pillar.get']('conf_files:networking', default='networking') }}"
132 | - watch:
133 | - file: enable_forwarding
134 | - ini: enable_forwarding
135 |
136 | neutron-service-neutron-conf:
137 | file:
138 | - managed
139 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
140 | - user: neutron
141 | - group: neutron
142 | - mode: 644
143 | - require:
144 | - ini: neutron-service-neutron-conf
145 | ini:
146 | - options_present
147 | - name: "{{ salt['pillar.get']('conf_files:neutron', default='/etc/neutron/neutron.conf') }}"
148 | - sections:
149 | DEFAULT:
150 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
151 | auth_strategy: keystone
152 | rpc_backend: neutron.openstack.common.rpc.impl_kombu
153 | core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
154 | service_plugins: neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
155 | allow_overlapping_ips: True
156 | verbose: True
157 | keystone_authtoken:
158 | auth_protocol: http
159 | admin_user: neutron
160 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
161 | auth_host: "{{ get_candidate('keystone') }}"
162 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
163 | admin_tenant_name: service
164 | auth_port: 35357
165 | - require:
166 | - pkg: neutron-metadata-agent-install
167 | - pkg: neutron-dhcp-agent-install
168 | - pkg: neutron-l3-agent-install
169 |
--------------------------------------------------------------------------------
/file_root/nova/compute_kvm.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | nova-compute-kvm-install:
4 | pkg:
5 | - installed
6 | - name: {{ salt['pillar.get']('packages:nova_compute_kvm', default='nova-compute-kvm') }}
7 |
8 | nova-compute-install:
9 | pkg:
10 | - installed
11 | - name: {{ salt['pillar.get']('packages:nova_compute', default='nova-compute') }}
12 |
13 | nova-compute-running:
14 | service:
15 | - running
16 | - name: {{ salt['pillar.get']('services:nova_compute', default='nova-compute') }}
17 | - watch:
18 | - pkg: nova-compute-install
19 | - file: nova-conf-compute
20 | - ini: nova-conf-compute
21 | - file: nova-compute-conf
22 | - ini: nova-compute-conf
23 |
24 | nova-compute-conf:
25 | file:
26 | - managed
27 | - name: {{ salt['pillar.get']('conf_files:nova_compute', default='/etc/nova/nova-compute.conf') }}
28 | - user: nova
29 | - group: nova
30 | - mode: 644
31 | - require:
32 | - ini: nova-compute-conf
33 | ini:
34 | - options_present
35 | - name: {{ salt['pillar.get']('conf_files:nova_compute', default='/etc/nova/nova-compute.conf') }}
36 | - sections:
37 | DEFAULT:
38 | {% if 'virt.is_hyper' in salt and salt['virt.is_hyper']() %}
39 | libvirt_type: kvm
40 | {% else %}
41 | libvirt_type: qemu
42 | {% endif %}
43 | - require:
44 | - pkg: nova-compute-install
45 | - pkg: nova-compute-kvm-install
46 |
47 | python-guestfs-install:
48 | pkg:
49 | - installed
50 | - name: {{ salt['pillar.get']('packages:python_guestfs', default='python-guestfs') }}
51 |
52 | nova-conf-compute:
53 | file:
54 | - managed
55 | - name: {{ salt['pillar.get']('conf_files:nova', default='/etc/nova/nova.conf') }}
56 | - user: nova
57 |     - group: nova
58 | - mode: 644
59 | - require:
60 | - ini: nova-conf-compute
61 | ini:
62 | - options_present
63 | - name: {{ salt['pillar.get']('conf_files:nova', default='/etc/nova/nova.conf') }}
64 | - sections:
65 | DEFAULT:
66 | vnc_enabled: True
67 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
68 |         my_ip: {{ pillar['hosts'][grains['id']] }}
69 | vncserver_listen: 0.0.0.0
70 | glance_host: {{ get_candidate('glance') }}
71 | vncserver_proxyclient_address: {{ grains['id'] }}
72 | rpc_backend: nova.rpc.impl_kombu
73 | novncproxy_base_url: http://{{ get_candidate('nova') }}:6080/vnc_auto.html
74 | auth_strategy: keystone
75 | network_api_class: nova.network.neutronv2.api.API
76 | neutron_url: http://{{ get_candidate('neutron') }}:9696
77 | neutron_auth_strategy: keystone
78 | neutron_admin_tenant_name: service
79 | neutron_admin_username: neutron
80 | neutron_admin_password: {{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}
81 | neutron_admin_auth_url: http://{{ get_candidate('keystone') }}:35357/v2.0
82 | linuxnet_interface_driver: nova.network.linux_net.LinuxOVSInterfaceDriver
83 | firewall_driver: nova.virt.firewall.NoopFirewallDriver
84 | security_group_api: neutron
85 | vif_plugging_is_fatal: False
86 | vif_plugging_timeout: 0
87 | keystone_authtoken:
88 |         auth_uri: http://{{ get_candidate('keystone') }}:5000
89 | auth_port: 35357
90 | auth_protocol: http
91 | admin_tenant_name: service
92 | admin_user: nova
93 | admin_password: {{ pillar['keystone']['tenants']['service']['users']['nova']['password'] }}
94 | auth_host: {{ get_candidate('keystone') }}
95 | database:
96 | connection: "mysql://{{ salt['pillar.get']('databases:nova:username', default='nova') }}:{{ salt['pillar.get']('databases:nova:password', default='nova_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:nova:db_name', default='nova') }}"
97 | - require:
98 | - pkg: nova-compute-install
99 | - pkg: nova-compute-kvm-install
100 |
101 | nova-instance-directory:
102 | file:
103 | - directory
104 | - name: /var/lib/nova/instances/
105 | - user: nova
106 | - group: nova
107 | - mode: 755
108 | - recurse:
109 | - user
110 | - group
111 | - mode
112 | - require:
113 | - pkg: nova-compute-install
114 | - pkg: nova-compute-kvm-install
115 |
--------------------------------------------------------------------------------
/file_root/nova/init.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import get_candidate with context %}
2 |
3 | nova-api-install:
4 | pkg:
5 | - installed
6 | - name: "{{ salt['pillar.get']('packages:nova_api', default='nova-api') }}"
7 |
8 | nova-api-running:
9 | service:
10 | - running
11 | - name: "{{ salt['pillar.get']('services:nova_api', default='nova-api') }}"
12 | - watch:
13 | - pkg: nova-api-install
14 | - ini: nova-conf
15 | - file: nova-conf
16 |
17 | nova-conductor-install:
18 | pkg:
19 | - installed
20 | - name: "{{ salt['pillar.get']('packages:nova_conductor', default='nova-conductor') }}"
21 |
22 | nova-conductor-running:
23 | service:
24 | - running
25 | - name: "{{ salt['pillar.get']('services:nova_conductor', default='nova-conductor') }}"
26 | - watch:
27 | - pkg: nova-conductor-install
28 | - ini: nova-conf
29 | - file: nova-conf
30 |
31 | nova-scheduler-install:
32 | pkg:
33 | - installed
34 | - name: "{{ salt['pillar.get']('packages:nova_scheduler', default='nova-scheduler') }}"
35 |
36 | nova-scheduler-running:
37 | service:
38 | - running
39 | - name: "{{ salt['pillar.get']('services:nova_scheduler', default='nova-scheduler') }}"
40 | - watch:
41 | - pkg: nova-scheduler-install
42 | - ini: nova-conf
43 | - file: nova-conf
44 |
45 | nova-cert-install:
46 | pkg:
47 | - installed
48 | - name: "{{ salt['pillar.get']('packages:nova_cert', default='nova-cert') }}"
49 |
50 | nova-cert-running:
51 | service:
52 | - running
53 | - name: "{{ salt['pillar.get']('services:nova_cert', default='nova-cert') }}"
54 | - watch:
55 | - pkg: nova-cert-install
56 | - ini: nova-conf
57 | - file: nova-conf
58 |
59 | nova-consoleauth-install:
60 | pkg:
61 | - installed
62 | - name: "{{ salt['pillar.get']('packages:nova_consoleauth', default='nova-consoleauth') }}"
63 |
64 | nova-consoleauth-running:
65 | service:
66 | - running
67 | - name: "{{ salt['pillar.get']('services:nova_consoleauth', default='nova-consoleauth') }}"
68 | - watch:
69 | - pkg: nova-consoleauth-install
70 | - ini: nova-conf
71 | - file: nova-conf
72 |
73 | python-novaclient:
74 | pkg:
75 | - installed
76 | - name: "{{ salt['pillar.get']('packages:nova_pythonclient', default='python-novaclient') }}"
77 |
78 | nova-ajax-console-proxy:
79 | pkg:
80 | - installed
81 | - name: "{{ salt['pillar.get']('packages:nova_ajax_console_proxy', default='nova-ajax-console-proxy') }}"
82 |
83 | novnc:
84 | pkg:
85 | - installed
86 | - name: "{{ salt['pillar.get']('packages:novnc', default='novnc') }}"
87 |
88 | nova-novncproxy-install:
89 | pkg:
90 | - installed
91 | - name: "{{ salt['pillar.get']('packages:nova_novncproxy', default='nova-novncproxy') }}"
92 |
93 | nova-novncproxy-running:
94 | service:
95 | - running
96 | - name: "{{ salt['pillar.get']('services:nova_novncproxy', default='nova-novncproxy') }}"
97 | - watch:
98 | - pkg: nova-novncproxy-install
99 | - ini: nova-conf
100 | - file: nova-conf
101 |
102 | {% if 'db_sync' in salt['pillar.get']('databases:nova', default=()) %}
103 | nova_sync:
104 | cmd:
105 | - run
106 | - name: "{{ salt['pillar.get']('databases:nova:db_sync') }}"
107 | - require:
108 | - service: nova-api-running
109 | {% endif %}
110 |
111 | nova_sqlite_delete:
112 | file:
113 | - absent
114 | - name: /var/lib/nova/nova.sqlite
115 | - require:
116 |       - pkg: nova-api-install
117 |
118 | nova-conf:
119 | file:
120 | - managed
121 | - name: "{{ salt['pillar.get']('conf_files:nova', default='/etc/nova/nova.conf') }}"
122 | - user: nova
123 | - group: nova
124 | - mode: 644
125 | - require:
126 |       - pkg: nova-api-install
127 | - ini: nova-conf
128 | ini:
129 | - options_present
130 | - name: "{{ salt['pillar.get']('conf_files:nova', default='/etc/nova/nova.conf') }}"
131 | - sections:
132 | DEFAULT:
133 | auth_strategy: "keystone"
134 | rabbit_host: "{{ get_candidate('queue.%s' % salt['pillar.get']('queue_engine', default='rabbit')) }}"
135 | my_ip: "{{ grains['id'] }}"
136 | vncserver_listen: "{{ get_candidate('nova') }}"
137 | vncserver_proxyclient_address: "{{ get_candidate('nova') }}"
138 | rpc_backend: "{{ pillar['queue_engine'] }}"
139 | network_api_class: "nova.network.neutronv2.api.API"
140 | neutron_url: "http://{{ get_candidate('neutron') }}:9696"
141 | neutron_auth_strategy: "keystone"
142 | neutron_admin_tenant_name: "service"
143 | neutron_admin_username: "neutron"
144 | neutron_admin_password: "{{ pillar['keystone']['tenants']['service']['users']['neutron']['password'] }}"
145 | neutron_admin_auth_url: "http://{{ get_candidate('keystone') }}:35357/v2.0"
146 | linuxnet_interface_driver: "nova.network.linux_net.LinuxOVSInterfaceDriver"
147 | firewall_driver: "nova.virt.firewall.NoopFirewallDriver"
148 | security_group_api: "neutron"
149 | service_neutron_metadata_proxy: "True"
150 | neutron_metadata_proxy_shared_secret: "{{ pillar['neutron']['metadata_secret'] }}"
151 | vif_plugging_is_fatal: "False"
152 | vif_plugging_timeout: "0"
153 | keystone_authtoken:
154 | auth_protocol: "http"
155 | admin_user: "nova"
156 | admin_password: "{{ pillar['keystone']['tenants']['service']['users']['nova']['password'] }}"
157 | auth_host: "{{ get_candidate('keystone') }}"
158 | auth_uri: "http://{{ get_candidate('keystone') }}:5000"
159 | admin_tenant_name: "service"
160 | auth_port: "35357"
161 | database:
162 | connection: "mysql://{{ salt['pillar.get']('databases:nova:username', default='nova') }}:{{ salt['pillar.get']('databases:nova:password', default='nova_pass') }}@{{ get_candidate('mysql') }}/{{ salt['pillar.get']('databases:nova:db_name', default='nova') }}"
163 | - require:
164 | - pkg: nova-api-install
165 |
--------------------------------------------------------------------------------
/file_root/postinstall/misc_options.sls:
--------------------------------------------------------------------------------
1 | {% for conf_file in salt['pillar.get']('misc_options', default=()) %}
2 | misc-{{ conf_file }}:
3 | file:
4 | - managed
5 | - name: "{{ salt['pillar.get']('conf_files:%s' % conf_file, default=conf_file) }}"
6 |     - user: "{{ salt['pillar.get']('misc_options:%s:user' % conf_file, default='root') }}"
7 |     - group: "{{ salt['pillar.get']('misc_options:%s:group' % conf_file, default='root') }}"
8 |     - mode: "{{ salt['pillar.get']('misc_options:%s:mode' % conf_file, default='644') }}"
9 | {% if salt['pillar.get']('misc_options:%s:sections' % conf_file, default=None) %}
10 | ini:
11 | - options_present
12 | - name: "{{ salt['pillar.get']('conf_files:%s' % conf_file, default=conf_file) }}"
13 | - sections:
14 | {% for section_name in salt['pillar.get']('misc_options:%s:sections' % conf_file) %}
15 | {{ section_name }}:
16 | {% for option_name in pillar['misc_options'][conf_file]['sections'][section_name] %}
17 | {{ option_name }}: {{ pillar['misc_options'][conf_file]['sections'][section_name][option_name] }}
18 | {% endfor %}
19 | {% endfor %}
20 | {% endif %}
21 | {% if salt['pillar.get']('misc_options:%s:service' % conf_file, default=None) %}
22 | service:
23 | - running
24 | - name: "{{ salt['pillar.get']('services:%s' % pillar['misc_options'][conf_file]['service'], default=pillar['misc_options'][conf_file]['service']) }}"
25 | - watch:
26 | - file: "{{ salt['pillar.get']('conf_files:%s' % conf_file, default=conf_file) }}"
27 | - ini: "{{ salt['pillar.get']('conf_files:%s' % conf_file, default=conf_file) }}"
28 | {% endif %}
29 | {% endfor %}
30 |
--------------------------------------------------------------------------------
/file_root/postinstall/misc_options.sls~:
--------------------------------------------------------------------------------
1 | {% for conf_file in salt['pillar.get']('misc_options', default=()) %}
2 | misc-{{ conf_file }}:
3 | file:
4 | - managed
5 | - name: "{{ salt['pillar.get']('conf_files:%s' % pillar['misc_options'][conf_file], default=pillar['misc_options'][conf_file]) }}"
6 | - user: "{{ salt['pillar.get']('conf_files:%s:user' % pillar['misc_options'][conf_file], default='root') }}"
7 | - group: "{{ salt['pillar.get']('conf_files:%s:group' % pillar['misc_options'][conf_file], default='root') }}"
8 | - mode: "{{ salt['pillar.get']('conf_files:%s:mode' % pillar['misc_options'][conf_file], default='644') }}"
9 | {% if salt['pillar.get']('misc_options:%s:sections' % conf_file, default=None) %}
10 | ini:
11 | - options_present
12 | - name: "{{ salt['pillar.get']('conf_files:%s' % pillar['misc_options'][conf_file], default=pillar['misc_options'][conf_file]) }}"
13 | - sections:
14 | {% for section_name in salt['pillar.get']('misc_options:%s:sections' % conf_file) %}
15 | {{ section_name }}:
16 | {% for option_name in pillar['misc_options'][conf_file]['sections'][section_name] %}
17 | {{ option_name }}: {{ pillar['misc_options'][conf_file]['sections'][section_name][option_name] }}
18 | {% endfor %}
19 | {% endfor %}
20 | {% endif %}
21 | {% if salt['pillar.get']('misc_options:%s:service' % conf_file, default=None) %}
22 | service:
23 | - running
24 | - name: "{{ salt['pillar.get']('conf_files:%s:service' % pillar['misc_options'][conf_file]) }}"
25 | - watch:
26 | - file: "{{ salt['pillar.get']('conf_files:%s' % pillar['misc_options'][conf_file], default=pillar['misc_options'][conf_file]) }}"
27 | - ini: "{{ salt['pillar.get']('conf_files:%s' % pillar['misc_options'][conf_file], default=pillar['misc_options'][conf_file]) }}"
28 | {% endif %}
29 | {% endfor %}
30 |
--------------------------------------------------------------------------------
/file_root/queue/rabbit.sls:
--------------------------------------------------------------------------------
1 | rabbitmq-server-install:
2 | pkg:
3 | - installed
4 | - name: {{ salt['pillar.get']('packages:rabbitmq', default='rabbitmq-server') }}
5 | rabbitmq-service-running:
6 | service:
7 | - running
8 | - name: {{ salt['pillar.get']('services:rabbitmq', default='rabbitmq') }}
9 | - watch:
10 | - pkg: rabbitmq-server-install
11 |
--------------------------------------------------------------------------------
/file_root/queue/rabbit.sls~:
--------------------------------------------------------------------------------
1 | rabbitmq-server-install:
2 | pkg:
3 | - installed
4 | - name: {{ salt['pillar.get']('packages:rabbitmq', default='rabbitmq-server') }}
5 | rabbit-port-open:
6 |   file:
7 |     - replace
8 |     - name: {{ salt['pillar.get']('conf_files:rabbitmq', default='/etc/rabbitmq/rabbitmq-env.conf') }}
9 |     - pattern: 127.0.0.1
10 |     - repl: 0.0.0.0
11 |     - require:
12 |       - pkg: rabbitmq-server-install
13 | rabbitmq-service-running:
14 |   service:
15 |     - running
16 |     - name: {{ salt['pillar.get']('services:rabbitmq', default='rabbitmq') }}
17 |     - watch:
18 |       - pkg: rabbitmq-server-install
19 |       - file: rabbit-port-open
20 |
--------------------------------------------------------------------------------
/file_root/top.sls:
--------------------------------------------------------------------------------
1 | {% from "cluster/resources.jinja" import formulas with context %}
2 | icehouse:
3 | "*.icehouse":
4 | - generics.*
5 | {% for formula in formulas %}
6 | - {{ formula }}
7 | {% endfor %}
8 | - postinstall.misc_options
9 |
--------------------------------------------------------------------------------
/pillar_root/Arch.sls:
--------------------------------------------------------------------------------
1 | packages:
2 | linux-headers: linux-headers-{{ grains['kernelrelease'][:-5] }}
3 | mysql-client: mysql-client
4 | python-mysql-library: python-mysqldb
5 | mysql-server: mysql-server
6 | rabbitmq: rabbitmq-server
7 | keystone: keystone
8 | glance: glance
9 | cinder_api: cinder-api
10 | cinder_scheduler: cinder-scheduler
11 | cinder_volume: cinder-volume
12 | lvm: lvm2
13 | apache: apache2
14 | apache_wsgi_module: libapache2-mod-wsgi
15 | memcached: memcached
16 | dashboard: openstack-dashboard
17 | neutron_server: neutron-server
18 | neutron_ml2: neutron-plugin-ml2
19 | neutron_l2_agent: neutron-plugin-openvswitch-agent
20 | openvswitch: openvswitch-switch
21 | neutron_dhcp_agent: neutron-dhcp-agent
22 | neutron_l3_agent: neutron-l3-agent
23 | neutron_metadata_agent: neutron-metadata-agent
24 | nova_api: nova-api
25 | nova_conductor: nova-conductor
26 | nova_scheduler: nova-scheduler
27 | nova_cert: nova-cert
28 | nova_consoleauth: nova-consoleauth
29 | nova_novncproxy: nova-novncproxy
30 | nova_compute: nova-compute
31 | nova_compute_kvm: nova-compute-kvm
32 | python_guestfs: python-guestfs
33 | nova_pythonclient: python-novaclient
34 | nova_ajax_console_proxy: nova-ajax-console-proxy
35 | novnc: novnc
36 |
37 | services:
38 | mysql: mysql
39 | rabbitmq: rabbitmq-server
40 | keystone: keystone
41 | glance_api: glance-api
42 | glance_registry: glance-registry
43 | cinder_api: cinder-api
44 | cinder_scheduler: cinder-scheduler
45 | cinder_volume: cinder-volume
46 | iscsi_target: tgt
47 | apache: apache2
48 | memcached: memcached
49 | neutron_server: neutron-server
50 | neutron_l2_agent: neutron-plugin-openvswitch-agent
51 | openvswitch: openvswitch-switch
52 | neutron_dhcp_agent: neutron-dhcp-agent
53 | neutron_l3_agent: neutron-l3-agent
54 | neutron_metadata_agent: neutron-metadata-agent
55 | nova_api: nova-api
56 | nova_conductor: nova-conductor
57 | nova_scheduler: nova-scheduler
58 | nova_cert: nova-cert
59 | nova_consoleauth: nova-consoleauth
60 | nova_novncproxy: nova-novncproxy
61 | nova_compute: nova-compute
62 |
63 | conf_files:
64 | mysql: "/etc/mysql/my.cnf"
65 | keystone: "/etc/keystone/keystone.conf"
66 | glance_api: "/etc/glance/glance-api.conf"
67 | glance_registry: "/etc/glance/glance-registry.conf"
68 | cinder: "/etc/cinder/cinder.conf"
69 | apache_dashboard_enabled_conf: "/etc/apache2/conf-enabled/openstack-dashboard.conf"
70 | apache_dashboard_conf: "/etc/apache2/conf-available/openstack-dashboard.conf"
71 | neutron: "/etc/neutron/neutron.conf"
72 | neutron_ml2: "/etc/neutron/plugins/ml2/ml2_conf.ini"
73 | neutron_l2_agent: "/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
74 | neutron_dhcp_agent: "/etc/neutron/dhcp_agent.ini"
75 | neutron_l3_agent: "/etc/neutron/l3_agent.ini"
76 | neutron_metadata_agent: "/etc/neutron/metadata_agent.ini"
77 | syslinux: "/etc/sysctl.conf"
78 | nova: "/etc/nova/nova.conf"
79 | nova_compute: "/etc/nova/nova-compute.conf"
80 |
--------------------------------------------------------------------------------
/pillar_root/Debian.sls:
--------------------------------------------------------------------------------
1 | packages:
2 | linux-headers: linux-headers-{{ grains['kernelrelease'] }}
3 | mysql-client: mysql-client
4 | python-mysql-library: python-mysqldb
5 | mysql-server: mysql-server
6 | rabbitmq: rabbitmq-server
7 | keystone: keystone
8 | glance: glance
9 | cinder_api: cinder-api
10 | cinder_scheduler: cinder-scheduler
11 | cinder_volume: cinder-volume
12 | lvm: lvm2
13 | apache: apache2
14 | apache_wsgi_module: libapache2-mod-wsgi
15 | memcached: memcached
16 | dashboard: openstack-dashboard
17 | neutron_server: neutron-server
18 | neutron_ml2: neutron-common
19 | neutron_l2_agent: neutron-plugin-openvswitch-agent
20 | openvswitch: openvswitch-switch
21 | neutron_dhcp_agent: neutron-dhcp-agent
22 | neutron_l3_agent: neutron-l3-agent
23 | neutron_metadata_agent: neutron-metadata-agent
24 | nova_api: nova-api
25 | nova_conductor: nova-conductor
26 | nova_scheduler: nova-scheduler
27 | nova_cert: nova-cert
28 | nova_consoleauth: nova-consoleauth
29 | nova_novncproxy: nova-novncproxy
30 | nova_compute: nova-compute
31 | nova_compute_kvm: nova-compute-kvm
32 | python_guestfs: python-guestfs
33 | nova_pythonclient: python-novaclient
34 | nova_ajax_console_proxy: novnc
35 | novnc: novnc
36 |
37 | services:
38 | mysql: mysql
39 | rabbitmq: rabbitmq-server
40 | keystone: keystone
41 | glance_api: glance-api
42 | glance_registry: glance-registry
43 | cinder_api: cinder-api
44 | cinder_scheduler: cinder-scheduler
45 | cinder_volume: cinder-volume
46 | iscsi_target: tgt
47 | apache: apache2
48 | memcached: memcached
49 | neutron_server: neutron-server
50 | neutron_l2_agent: neutron-plugin-openvswitch-agent
51 | openvswitch: openvswitch-switch
52 | neutron_dhcp_agent: neutron-dhcp-agent
53 | neutron_l3_agent: neutron-l3-agent
54 | neutron_metadata_agent: neutron-metadata-agent
55 | nova_api: nova-api
56 | nova_conductor: nova-conductor
57 | nova_scheduler: nova-scheduler
58 | nova_cert: nova-cert
59 | nova_consoleauth: nova-consoleauth
60 | nova_novncproxy: nova-novncproxy
61 | nova_compute: nova-compute
62 |
63 | conf_files:
64 | mysql: "/etc/mysql/my.cnf"
65 | keystone: "/etc/keystone/keystone.conf"
66 | glance_api: "/etc/glance/glance-api.conf"
67 | glance_registry: "/etc/glance/glance-registry.conf"
68 | cinder: "/etc/cinder/cinder.conf"
69 | apache_dashboard_enabled_conf: "/etc/apache2/sites-enabled/openstack-dashboard.conf"
70 | apache_dashboard_conf: "/etc/apache2/sites-available/openstack-dashboard.conf"
71 | neutron: "/etc/neutron/neutron.conf"
72 | neutron_ml2: "/etc/neutron/plugins/ml2/ml2_conf.ini"
73 | neutron_l2_agent: "/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
74 | neutron_dhcp_agent: "/etc/neutron/dhcp_agent.ini"
75 | neutron_l3_agent: "/etc/neutron/l3_agent.ini"
76 | neutron_metadata_agent: "/etc/neutron/metadata_agent.ini"
77 | syslinux: "/etc/sysctl.conf"
78 | nova: "/etc/nova/nova.conf"
79 | nova_compute: "/etc/nova/nova-compute.conf"
80 |
--------------------------------------------------------------------------------
/pillar_root/Debian_repo.sls:
--------------------------------------------------------------------------------
1 | pkgrepo:
2 | pre_repo_additions:
3 | - "gplhost-archive-keyring"
4 | repos:
5 | Juno-Cloud:
6 | name: "deb http://archive.gplhost.com/debian icehouse main"
7 | file: "/etc/apt/sources.list.d/gplhost-icehouse.list"
8 | human_name: "GPLHost Juno packages"
9 | Juno-Cloud-backports:
10 | name: "deb http://archive.gplhost.com/debian icehouse-backports main"
11 | file: "/etc/apt/sources.list.d/gplhost-icehouse.list"
12 | human_name: "GPLHost Juno Backports packages"
13 | post_repo_additions:
14 | - "python-argparse"
15 | - "iproute"
16 | - "python-crypto"
17 | - "python-psutil"
18 | - "libusb-1.0-0"
19 | - "libyaml-0-2"
20 |
--------------------------------------------------------------------------------
/pillar_root/Ubuntu.sls:
--------------------------------------------------------------------------------
1 | packages:
2 | linux-headers: linux-headers-{{ grains['kernelrelease'] }}
3 | mysql-client: mysql-client
4 | python-mysql-library: python-mysqldb
5 | mysql-server: mysql-server
6 | rabbitmq: rabbitmq-server
7 | keystone: keystone
8 | glance: glance
9 | cinder_api: cinder-api
10 | cinder_scheduler: cinder-scheduler
11 | cinder_volume: cinder-volume
12 | lvm: lvm2
13 | apache: apache2
14 | apache_wsgi_module: libapache2-mod-wsgi
15 | memcached: memcached
16 | dashboard: openstack-dashboard
17 | neutron_server: neutron-server
18 | neutron_ml2: neutron-plugin-ml2
19 | neutron_l2_agent: neutron-plugin-openvswitch-agent
20 | openvswitch: openvswitch-switch
21 | neutron_dhcp_agent: neutron-dhcp-agent
22 | neutron_l3_agent: neutron-l3-agent
23 | neutron_metadata_agent: neutron-metadata-agent
24 | nova_api: nova-api
25 | nova_conductor: nova-conductor
26 | nova_scheduler: nova-scheduler
27 | nova_cert: nova-cert
28 | nova_consoleauth: nova-consoleauth
29 | nova_novncproxy: nova-novncproxy
30 | nova_compute: nova-compute
31 | nova_compute_kvm: nova-compute-kvm
32 | python_guestfs: python-guestfs
33 | nova_pythonclient: python-novaclient
34 | nova_ajax_console_proxy: nova-ajax-console-proxy
35 | novnc: novnc
36 |
37 | services:
38 | mysql: mysql
39 | rabbitmq: rabbitmq-server
40 | keystone: keystone
41 | glance_api: glance-api
42 | glance_registry: glance-registry
43 | cinder_api: cinder-api
44 | cinder_scheduler: cinder-scheduler
45 | cinder_volume: cinder-volume
46 | iscsi_target: tgt
47 | apache: apache2
48 | memcached: memcached
49 | neutron_server: neutron-server
50 | neutron_l2_agent: neutron-plugin-openvswitch-agent
51 | openvswitch: openvswitch-switch
52 | neutron_dhcp_agent: neutron-dhcp-agent
53 | neutron_l3_agent: neutron-l3-agent
54 | neutron_metadata_agent: neutron-metadata-agent
55 | nova_api: nova-api
56 | nova_conductor: nova-conductor
57 | nova_scheduler: nova-scheduler
58 | nova_cert: nova-cert
59 | nova_consoleauth: nova-consoleauth
60 | nova_novncproxy: nova-novncproxy
61 | nova_compute: nova-compute
62 |
63 | conf_files:
64 | mysql: "/etc/mysql/my.cnf"
65 | rabbitmq: "/etc/rabbitmq/rabbitmq-env.conf"
66 | keystone: "/etc/keystone/keystone.conf"
67 | glance_api: "/etc/glance/glance-api.conf"
68 | glance_registry: "/etc/glance/glance-registry.conf"
69 | cinder: "/etc/cinder/cinder.conf"
70 | apache_dashboard_enabled_conf: "/etc/apache2/conf-enabled/openstack-dashboard.conf"
71 | apache_dashboard_conf: "/etc/apache2/conf-available/openstack-dashboard.conf"
72 | neutron: "/etc/neutron/neutron.conf"
73 | neutron_ml2: "/etc/neutron/plugins/ml2/ml2_conf.ini"
74 | neutron_l2_agent: "/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
75 | neutron_dhcp_agent: "/etc/neutron/dhcp_agent.ini"
76 | neutron_l3_agent: "/etc/neutron/l3_agent.ini"
77 | neutron_metadata_agent: "/etc/neutron/metadata_agent.ini"
78 | syslinux: "/etc/sysctl.conf"
79 | nova: "/etc/nova/nova.conf"
80 | nova_compute: "/etc/nova/nova-compute.conf"
81 |
--------------------------------------------------------------------------------
/pillar_root/Ubuntu_repo.sls:
--------------------------------------------------------------------------------
1 | pkgrepo:
2 | pre_repo_additions:
3 | - "software-properties-common"
4 | - "ubuntu-cloud-keyring"
5 | repos:
6 | Juno-Cloud:
7 | name: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/icehouse main"
8 | file: "/etc/apt/sources.list.d/cloudarchive-icehouse.list"
9 |
--------------------------------------------------------------------------------
/pillar_root/access_resources.sls:
--------------------------------------------------------------------------------
1 | keystone:
2 | admin_token: "24811ee3d9a09915bef0"
3 | roles:
4 | - "admin"
5 | - "Member"
6 | services:
7 | glance:
8 | service_type: "image"
9 | endpoint:
10 | adminurl: "http://{0}:9292"
11 | internalurl: "http://{0}:9292"
12 | publicurl: "http://{0}:9292"
13 | endpoint_host_sls: glance
14 | description: "glance image service"
15 | keystone:
16 | service_type: "identity"
17 | endpoint:
18 | adminurl: "http://{0}:35357/v2.0"
19 | internalurl: "http://{0}:5000/v2.0"
20 | publicurl: "http://{0}:5000/v2.0"
21 | endpoint_host_sls: keystone
22 | description: "Openstack Identity"
23 | neutron:
24 | service_type: "network"
25 | endpoint:
26 | adminurl: "http://{0}:9696"
27 | internalurl: "http://{0}:9696"
28 | publicurl: "http://{0}:9696"
29 | endpoint_host_sls: neutron
30 | description: "Openstack network service"
31 | nova:
32 | service_type: "compute"
33 | endpoint:
34 | adminurl: "http://{0}:8774/v2/%(tenant_id)s"
35 | internalurl: "http://{0}:8774/v2/%(tenant_id)s"
36 | publicurl: "http://{0}:8774/v2/%(tenant_id)s"
37 | endpoint_host_sls: nova
38 | description: "nova compute service"
39 | cinder:
40 | service_type: "volume"
41 | endpoint:
42 | adminurl: "http://{0}:8776/v1/%(tenant_id)s"
43 | internalurl: "http://{0}:8776/v1/%(tenant_id)s"
44 | publicurl: "http://{0}:8776/v1/%(tenant_id)s"
45 | endpoint_host_sls: cinder
46 | description: "OpenStack Block Storage"
47 | cinderv2:
48 | service_type: "volumev2"
49 | endpoint:
50 | adminurl: "http://{0}:8776/v2/%(tenant_id)s"
51 | internalurl: "http://{0}:8776/v2/%(tenant_id)s"
52 | publicurl: "http://{0}:8776/v2/%(tenant_id)s"
53 | endpoint_host_sls: cinder
54 | description: "OpenStack Block Storage V2"
55 | tenants:
56 | admin:
57 | users:
58 | admin:
59 | password: "admin_pass"
60 | roles: "[\"admin\", \"_member_\"]"
61 | email: "salt@openstack.com"
62 | service:
63 | users:
64 | cinder:
65 | password: "cinder_pass"
66 | roles: "[\"admin\"]"
67 | email: "salt@openstack.com"
68 | glance:
69 | password: "glance_pass"
70 | roles: "[\"admin\"]"
71 | email: "salt@openstack.com"
72 | neutron:
73 | password: "neutron_pass"
74 | roles: "[\"admin\"]"
75 | email: "salt@openstack.com"
76 | nova:
77 | password: "nova_pass"
78 | roles: "[\"admin\"]"
79 | email: "salt@openstack.com"
80 |
--------------------------------------------------------------------------------
/pillar_root/cluster_resources.sls:
--------------------------------------------------------------------------------
1 | roles:
2 | - "controller"
3 | - "network"
4 | - "storage"
5 | - "compute"
6 | compute:
7 | - "openstack.icehouse"
8 | controller:
9 | - "openstack.icehouse"
10 | network:
11 | - "openstack.icehouse"
12 | storage:
13 | - "openstack.icehouse"
14 | sls:
15 | controller:
16 | - "mysql"
17 | - "mysql.client"
18 | - "mysql.openstack_dbschema"
19 | - "queue.rabbit"
20 | - "keystone"
21 | - "keystone.openstack_tenants"
22 | - "keystone.openstack_users"
23 | - "keystone.openstack_services"
24 | - "nova"
25 | - "horizon"
26 | - "glance"
27 | - "glance.images"
28 | - "cinder"
29 | network:
30 | - "mysql.client"
31 | - "neutron"
32 | - "neutron.service"
33 | - "neutron.openvswitch"
34 | - "neutron.ml2"
35 | - "neutron.guest_mtu"
36 | - "neutron.networks"
37 | - "neutron.routers"
38 | - "neutron.security_groups"
39 | compute:
40 | - "mysql.client"
41 | - "nova.compute_kvm"
42 | - "neutron.openvswitch"
43 | - "neutron.ml2"
44 | storage:
45 | - "cinder.volume"
46 |
--------------------------------------------------------------------------------
/pillar_root/db_resources.sls:
--------------------------------------------------------------------------------
1 | databases:
2 | nova:
3 | db_name: "nova"
4 | username: "nova"
5 | password: "nova_pass"
6 | service: "nova-api"
7 | db_sync: "nova-manage db sync"
8 | keystone:
9 | db_name: "keystone"
10 | username: "keystone"
11 | password: "keystone_pass"
12 | service: "keystone"
13 | db_sync: "keystone-manage db_sync"
14 | cinder:
15 | db_name: "cinder"
16 | username: "cinder"
17 | password: "cinder_pass"
18 | service: "cinder"
19 | db_sync: "cinder-manage db sync"
20 | glance:
21 | db_name: "glance"
22 | username: "glance"
23 | password: "glance_pass"
24 | service: "glance"
25 | db_sync: "glance-manage db_sync"
26 | neutron:
27 | db_name: "neutron"
28 | username: "neutron"
29 | password: "neutron_pass"
30 |
--------------------------------------------------------------------------------
/pillar_root/deploy_files.sls:
--------------------------------------------------------------------------------
1 | # miscellaneous files to be deployed on all hosts; an example is shown below.
2 | # note that users could also specify a source such as salt://path instead of the contents
3 | #files:
4 | # gpl_host_pinning:
5 | # name: /etc/apt/preferences.d/gplhost
6 | # contents: |
7 | # "Package: *\nPin: origin archive.gplhost.com\nPin-Priority: 999"
8 |
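9 | # a minimal sketch of the source-based variant mentioned above; the file key,
10 | # the target name and the salt:// path are all hypothetical placeholders:
11 | #files:
12 | #  motd_banner:
13 | #    name: /etc/motd
14 | #    source: salt://generics/files/motd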
--------------------------------------------------------------------------------
/pillar_root/machine_images.sls:
--------------------------------------------------------------------------------
1 | images:
2 | CirrosCustom:
3 | min_disk: 1
4 | min_ram: 0
5 | copy_from: 'https://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img'
6 | user: admin
7 | tenant: admin
8 |
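9 | # a hedged sketch of a second entry; the image name and URL below are placeholders
10 | # only, not a tested download location:
11 | #  UbuntuTrusty:
12 | #    min_disk: 5
13 | #    min_ram: 512
14 | #    copy_from: 'http://example.com/images/trusty-server-cloudimg-amd64-disk1.img'
15 | #    user: admin
16 | #    tenant: admin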
--------------------------------------------------------------------------------
/pillar_root/misc_openstack_options.sls:
--------------------------------------------------------------------------------
1 | misc_options:
2 | neutron_dhcp_agent:
3 | sections:
4 | DEFAULT:
5 | debug: "True"
6 | service: neutron_dhcp_agent
7 | sls: neutron.service
8 | user: neutron
9 | group: neutron
10 | mode: '644'
11 | nova:
12 | sections:
13 | DEFAULT:
14 | libvirt_cpu_mode: host-passthrough
15 | service: nova_compute
16 | sls: nova_compute
17 | user: nova
18 | group: nova
19 | mode: '644'
20 |
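21 | # a hypothetical further entry, sketched for illustration; glance_api must match
22 | # a key in the conf_files and services pillars, and the option shown is only an example:
23 | #  glance_api:
24 | #    sections:
25 | #      DEFAULT:
26 | #        debug: "True"
27 | #    service: glance_api
28 | #    user: glance
29 | #    group: glance
30 | #    mode: '644'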
--------------------------------------------------------------------------------
/pillar_root/network_resources.sls:
--------------------------------------------------------------------------------
1 | neutron:
2 | intergration_bridge: br-int
3 | metadata_secret: "414c66b22b1e7a20cc35"
4 | # uncomment to bridge all interfaces to primary interface
5 | # single_nic : primary_nic_name
6 | single_nic: "eth0"
7 | # make sure you add eth0 to br-proxy
8 | # and configure br-proxy with eth0's address
9 | # after highstate run
10 | type_drivers:
11 | flat:
12 | physnets:
13 | External:
14 | bridge: "br-ex"
15 | hosts:
16 | openstack.icehouse: "eth3"
17 | vlan:
18 | physnets:
19 | Internal1:
20 | bridge: "br-eth1"
21 | vlan_range: "100:200"
22 | hosts:
23 | openstack.icehouse: "eth2"
24 | gre:
25 | tunnel_start: "1"
26 | tunnel_end: "1000"
27 | networks:
28 | Internal:
29 | subnets:
30 | InternalSubnet:
31 | cidr: '192.168.10.0/24'
32 | ExternalNetwork:
33 | provider_physical_network: External
34 | provider_network_type: flat
35 | shared: true
36 | router_external: true
37 | subnets:
38 | ExternalSubnet:
39 | cidr: '10.8.127.0/24'
40 | start_ip: '10.8.127.10'
41 | end_ip: '10.8.127.30'
42 | enable_dhcp: false
43 | routers:
44 | ExternalRouter:
45 | interfaces:
46 | - InternalSubnet
47 | external_gateway: ExternalNetwork
48 | security_groups:
49 | Default:
50 | description: 'Default security group'
51 | rules:
52 | - direction: ingress
53 | ethertype: ipv4
54 | remote_ip_prefix: '10.8.27.0/24'
55 | - direction: ingress
56 | remote_ip_prefix: '10.8.127.0/24'
57 |
58 |
--------------------------------------------------------------------------------
/pillar_root/openstack_cluster.sls:
--------------------------------------------------------------------------------
1 |
2 | #queue backend
3 | queue_engine: rabbit
4 |
5 | #db_backend
6 | db_engine: mysql
7 |
8 |
9 | #Uncomment the line below if there is a valid package proxy
10 | #pkg_proxy_url: "http://mars:3142"
11 |
12 | #Data to identify cluster
13 | cluster_type: icehouse
14 |
15 | #Hosts and their ip addresses
16 | hosts:
17 | openstack.icehouse: 192.168.1.11
18 |
19 |
20 |
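21 | # a hypothetical multi-node layout would list one entry per minion id; the
22 | # names and addresses below are illustrative only:
23 | #hosts:
24 | #  controller.icehouse: 192.168.1.11
25 | #  network.icehouse: 192.168.1.12
26 | #  compute1.icehouse: 192.168.1.13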
--------------------------------------------------------------------------------
/pillar_root/to_do.sls:
--------------------------------------------------------------------------------
1 |
2 |
3 | keystone.user: "admin"
4 | keystone.password: "admin_pass"
5 | keystone.tenant: "admin"
6 | keystone.auth_url: "http://brown:5000/v2.0/"
7 |
8 | Check whether these are needed by the keystone state and module, and add them if so
9 |
10 |
11 |
12 | add documentation
13 | if a package is not valid for a distro, replace that package name in the distro's .sls file with something already being installed
14 |
15 | example: nova-ajax-console-proxy is replaced with novnc in Debian.sls. This does not affect installation, except that the state will be run twice
16 |
17 |
18 | add documentation
19 | To deploy miscellaneous files
20 |
21 | add documentation
22 |
23 | repo.sls: repo addition, pre repo addition and post repo addition
24 |
25 |
26 | ssl version of dashboard
27 |
28 |
29 | check why keystone module not using endpoint and token
30 |
31 | check why mysql tables created with charset latin1
32 |
--------------------------------------------------------------------------------
/pillar_root/top.sls:
--------------------------------------------------------------------------------
1 | icehouse:
2 | "*.icehouse":
3 | - cluster_resources
4 | - db_resources
5 | - access_resources
6 | - network_resources
7 | - machine_images
8 | - openstack_cluster
9 | - deploy_files
10 | - {{ grains['os'] }}
11 | - {{ grains['os'] }}_repo
12 | - misc_openstack_options
13 |
--------------------------------------------------------------------------------