├── LICENSE
├── README.md
├── linux
│   ├── include
│   │   └── linux
│   │       └── sched.h
│   ├── kernel
│   │   ├── fork.c
│   │   └── sched.c
│   └── mm
│       └── page_alloc.c
├── posts
│   ├── ch1.md
│   ├── ch2.md
│   ├── ch3.md
│   └── ch4.md
└── src
    └── page.h
/LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., <http://fsf.org/> 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0.
This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. 
But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 
176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. 
If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 
287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | {description} 294 | Copyright (C) 2014 黄亿华 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | {signature of Ty Coon}, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Linux Kernel Study Notes 2 | ===== 3 | 4 | This series contains my notes from studying the Linux kernel. For the concrete details of how the kernel is implemented, Linux Kernel Development (LKD) is already quite thorough, and there is plenty of material available online. These posts instead take a beginner's point of view: before getting into the How, they first work out the What and the Why, so as to build a solid big-picture view of the kernel. Along the way I also dig down to the corresponding kernel code, to satisfy my own fondness for tracing things back to the source. 5 | 6 | # Table of Contents: 7 | 8 | ## 1. [Task It Easy!](posts/ch1.md) 9 | ## 2. [Processes and Threads](posts/ch2.md) 10 | ## 3. [Process Scheduling](posts/ch3.md) 11 | ## 4.
[Memory Management](posts/ch4.md) 12 | 13 | The code referenced in the posts is kept under the linux directory. -------------------------------------------------------------------------------- /linux/include/linux/sched.h: -------------------------------------------------------------------------------- 1 | #ifndef _LINUX_SCHED_H 2 | #define _LINUX_SCHED_H 3 | 4 | /* 5 | * cloning flags: 6 | */ 7 | #define CSIGNAL 0x000000ff /* signal mask to be sent at exit */ 8 | #define CLONE_VM 0x00000100 /* set if VM shared between processes */ 9 | #define CLONE_FS 0x00000200 /* set if fs info shared between processes */ 10 | #define CLONE_FILES 0x00000400 /* set if open files shared between processes */ 11 | #define CLONE_SIGHAND 0x00000800 /* set if signal handlers and blocked signals shared */ 12 | #define CLONE_PTRACE 0x00002000 /* set if we want to let tracing continue on the child too */ 13 | #define CLONE_VFORK 0x00004000 /* set if the parent wants the child to wake it up on mm_release */ 14 | #define CLONE_PARENT 0x00008000 /* set if we want to have the same parent as the cloner */ 15 | #define CLONE_THREAD 0x00010000 /* Same thread group? */ 16 | #define CLONE_NEWNS 0x00020000 /* New namespace group? */ 17 | #define CLONE_SYSVSEM 0x00040000 /* share system V SEM_UNDO semantics */ 18 | #define CLONE_SETTLS 0x00080000 /* create a new TLS for the child */ 19 | #define CLONE_PARENT_SETTID 0x00100000 /* set the TID in the parent */ 20 | #define CLONE_CHILD_CLEARTID 0x00200000 /* clear the TID in the child */ 21 | #define CLONE_DETACHED 0x00400000 /* Unused, ignored */ 22 | #define CLONE_UNTRACED 0x00800000 /* set if the tracing process can't force CLONE_PTRACE on this clone */ 23 | #define CLONE_CHILD_SETTID 0x01000000 /* set the TID in the child */ 24 | /* 0x02000000 was previously the unused CLONE_STOPPED (Start in stopped state) 25 | and is now available for re-use. */ 26 | #define CLONE_NEWUTS 0x04000000 /* New utsname group?
*/ 27 | #define CLONE_NEWIPC 0x08000000 /* New ipcs */ 28 | #define CLONE_NEWUSER 0x10000000 /* New user namespace */ 29 | #define CLONE_NEWPID 0x20000000 /* New pid namespace */ 30 | #define CLONE_NEWNET 0x40000000 /* New network namespace */ 31 | #define CLONE_IO 0x80000000 /* Clone io context */ 32 | 33 | /* 34 | * Scheduling policies 35 | */ 36 | #define SCHED_NORMAL 0 37 | #define SCHED_FIFO 1 38 | #define SCHED_RR 2 39 | #define SCHED_BATCH 3 40 | /* SCHED_ISO: reserved but not implemented yet */ 41 | #define SCHED_IDLE 5 42 | /* Can be ORed in to make sure the process is reverted back to SCHED_NORMAL on fork */ 43 | #define SCHED_RESET_ON_FORK 0x40000000 44 | 45 | #ifdef __KERNEL__ 46 | 47 | struct sched_param { 48 | int sched_priority; 49 | }; 50 | 51 | #include /* for HZ */ 52 | 53 | #include 54 | #include 55 | #include 56 | #include 57 | #include 58 | #include 59 | #include 60 | #include 61 | #include 62 | #include 63 | #include 64 | #include 65 | 66 | #include 67 | #include 68 | #include 69 | #include 70 | 71 | #include 72 | #include 73 | #include 74 | #include 75 | #include 76 | #include 77 | #include 78 | #include 79 | #include 80 | #include 81 | #include 82 | #include 83 | #include 84 | 85 | #include 86 | #include 87 | #include 88 | #include 89 | #include 90 | #include 91 | #include 92 | #include 93 | 94 | #include 95 | 96 | struct exec_domain; 97 | struct futex_pi_state; 98 | struct robust_list_head; 99 | struct bio_list; 100 | struct fs_struct; 101 | struct perf_event_context; 102 | struct blk_plug; 103 | 104 | /* 105 | * List of flags we want to share for kernel threads, 106 | * if only because they are not used by them anyway. 107 | */ 108 | #define CLONE_KERNEL (CLONE_FS | CLONE_FILES | CLONE_SIGHAND) 109 | 110 | /* 111 | * These are the constant used to fake the fixed-point load-average 112 | * counting. Some notes: 113 | * - 11 bit fractions expand to 22 bits by the multiplies: this gives 114 | * a load-average precision of 10 bits integer + 11 bits fractional 115 | * - if you want to count load-averages more often, you need more 116 | * precision, or rounding will get you. With 2-second counting freq, 117 | * the EXP_n values would be 1981, 2034 and 2043 if still using only 118 | * 11 bit fractions. 
119 | */ 120 | extern unsigned long avenrun[]; /* Load averages */ 121 | extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift); 122 | 123 | #define FSHIFT 11 /* nr of bits of precision */ 124 | #define FIXED_1 (1<<FSHIFT) /* 1.0 as fixed-point */ 125 | #define LOAD_FREQ (5*HZ+1) /* 5 sec intervals */ 126 | #define EXP_1 1884 /* 1/exp(5sec/1min) as fixed-point */ 127 | #define EXP_5 2014 /* 1/exp(5sec/5min) */ 128 | #define EXP_15 2037 /* 1/exp(5sec/15min) */ 129 | 130 | #define CALC_LOAD(load,exp,n) \ 131 | load *= exp; \ 132 | load += n*(FIXED_1-exp); \ 133 | load >>= FSHIFT; 134 | 135 | extern unsigned long total_forks; 136 | extern int nr_threads; 137 | DECLARE_PER_CPU(unsigned long, process_counts); 138 | extern int nr_processes(void); 139 | extern unsigned long nr_running(void); 140 | extern unsigned long nr_uninterruptible(void); 141 | extern unsigned long nr_iowait(void); 142 | extern unsigned long nr_iowait_cpu(int cpu); 143 | extern unsigned long this_cpu_load(void); 144 | 145 | 146 | extern void calc_global_load(unsigned long ticks); 147 | 148 | extern unsigned long get_parent_ip(unsigned long addr); 149 | 150 | struct seq_file; 151 | struct cfs_rq; 152 | struct task_group; 153 | #ifdef CONFIG_SCHED_DEBUG 154 | extern void proc_sched_show_task(struct task_struct *p, struct seq_file *m); 155 | extern void proc_sched_set_task(struct task_struct *p); 156 | extern void 157 | print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq); 158 | #else 159 | static inline void 160 | proc_sched_show_task(struct task_struct *p, struct seq_file *m) 161 | { 162 | } 163 | static inline void proc_sched_set_task(struct task_struct *p) 164 | { 165 | } 166 | static inline void 167 | print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) 168 | { 169 | } 170 | #endif 171 | 172 | /* 173 | * Task state bitmask. NOTE! These bits are also 174 | * encoded in fs/proc/array.c: get_task_state(). 175 | * 176 | * We have two separate sets of flags: task->state 177 | * is about runnability, while task->exit_state are 178 | * about the task exiting. Confusing, but this way 179 | * modifying one set can't modify the other one by 180 | * mistake.
181 | */ 182 | #define TASK_RUNNING 0 183 | #define TASK_INTERRUPTIBLE 1 184 | #define TASK_UNINTERRUPTIBLE 2 185 | #define __TASK_STOPPED 4 186 | #define __TASK_TRACED 8 187 | /* in tsk->exit_state */ 188 | #define EXIT_ZOMBIE 16 189 | #define EXIT_DEAD 32 190 | /* in tsk->state again */ 191 | #define TASK_DEAD 64 192 | #define TASK_WAKEKILL 128 193 | #define TASK_WAKING 256 194 | #define TASK_STATE_MAX 512 195 | 196 | #define TASK_STATE_TO_CHAR_STR "RSDTtZXxKW" 197 | 198 | extern char ___assert_task_state[1 - 2*!!( 199 | sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1)]; 200 | 201 | /* Convenience macros for the sake of set_task_state */ 202 | #define TASK_KILLABLE (TASK_WAKEKILL | TASK_UNINTERRUPTIBLE) 203 | #define TASK_STOPPED (TASK_WAKEKILL | __TASK_STOPPED) 204 | #define TASK_TRACED (TASK_WAKEKILL | __TASK_TRACED) 205 | 206 | /* Convenience macros for the sake of wake_up */ 207 | #define TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE) 208 | #define TASK_ALL (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED) 209 | 210 | /* get_task_state() */ 211 | #define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \ 212 | TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \ 213 | __TASK_TRACED) 214 | 215 | #define task_is_traced(task) ((task->state & __TASK_TRACED) != 0) 216 | #define task_is_stopped(task) ((task->state & __TASK_STOPPED) != 0) 217 | #define task_is_dead(task) ((task)->exit_state != 0) 218 | #define task_is_stopped_or_traced(task) \ 219 | ((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0) 220 | #define task_contributes_to_load(task) \ 221 | ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \ 222 | (task->flags & PF_FREEZING) == 0) 223 | 224 | #define __set_task_state(tsk, state_value) \ 225 | do { (tsk)->state = (state_value); } while (0) 226 | #define set_task_state(tsk, state_value) \ 227 | set_mb((tsk)->state, (state_value)) 228 | 229 | /* 230 | * set_current_state() includes a barrier so that the write of current->state 231 | * is correctly serialised wrt the caller's subsequent test of whether to 232 | * actually sleep: 233 | * 234 | * set_current_state(TASK_UNINTERRUPTIBLE); 235 | * if (do_i_need_to_sleep()) 236 | * schedule(); 237 | * 238 | * If the caller does not need such serialisation then use __set_current_state() 239 | */ 240 | #define __set_current_state(state_value) \ 241 | do { current->state = (state_value); } while (0) 242 | #define set_current_state(state_value) \ 243 | set_mb(current->state, (state_value)) 244 | 245 | /* Task command name length */ 246 | #define TASK_COMM_LEN 16 247 | 248 | #include 249 | 250 | /* 251 | * This serializes "schedule()" and also protects 252 | * the run-queue from deletions/modifications (but 253 | * _adding_ to the beginning of the run-queue has 254 | * a separate lock). 
255 | */ 256 | extern rwlock_t tasklist_lock; 257 | extern spinlock_t mmlist_lock; 258 | 259 | struct task_struct; 260 | 261 | #ifdef CONFIG_PROVE_RCU 262 | extern int lockdep_tasklist_lock_is_held(void); 263 | #endif /* #ifdef CONFIG_PROVE_RCU */ 264 | 265 | extern void sched_init(void); 266 | extern void sched_init_smp(void); 267 | extern asmlinkage void schedule_tail(struct task_struct *prev); 268 | extern void init_idle(struct task_struct *idle, int cpu); 269 | extern void init_idle_bootup_task(struct task_struct *idle); 270 | 271 | extern int runqueue_is_locked(int cpu); 272 | 273 | extern cpumask_var_t nohz_cpu_mask; 274 | #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ) 275 | extern void select_nohz_load_balancer(int stop_tick); 276 | extern int get_nohz_timer_target(void); 277 | #else 278 | static inline void select_nohz_load_balancer(int stop_tick) { } 279 | #endif 280 | 281 | /* 282 | * Only dump TASK_* tasks. (0 for all tasks) 283 | */ 284 | extern void show_state_filter(unsigned long state_filter); 285 | 286 | static inline void show_state(void) 287 | { 288 | show_state_filter(0); 289 | } 290 | 291 | extern void show_regs(struct pt_regs *); 292 | 293 | /* 294 | * TASK is a pointer to the task whose backtrace we want to see (or NULL for current 295 | * task), SP is the stack pointer of the first frame that should be shown in the back 296 | * trace (or NULL if the entire call-chain of the task should be shown). 297 | */ 298 | extern void show_stack(struct task_struct *task, unsigned long *sp); 299 | 300 | void io_schedule(void); 301 | long io_schedule_timeout(long timeout); 302 | 303 | extern void cpu_init (void); 304 | extern void trap_init(void); 305 | extern void update_process_times(int user); 306 | extern void scheduler_tick(void); 307 | 308 | extern void sched_show_task(struct task_struct *p); 309 | 310 | #ifdef CONFIG_LOCKUP_DETECTOR 311 | extern void touch_softlockup_watchdog(void); 312 | extern void touch_softlockup_watchdog_sync(void); 313 | extern void touch_all_softlockup_watchdogs(void); 314 | extern int proc_dowatchdog_thresh(struct ctl_table *table, int write, 315 | void __user *buffer, 316 | size_t *lenp, loff_t *ppos); 317 | extern unsigned int softlockup_panic; 318 | extern int softlockup_thresh; 319 | void lockup_detector_init(void); 320 | #else 321 | static inline void touch_softlockup_watchdog(void) 322 | { 323 | } 324 | static inline void touch_softlockup_watchdog_sync(void) 325 | { 326 | } 327 | static inline void touch_all_softlockup_watchdogs(void) 328 | { 329 | } 330 | static inline void lockup_detector_init(void) 331 | { 332 | } 333 | #endif 334 | 335 | #ifdef CONFIG_DETECT_HUNG_TASK 336 | extern unsigned int sysctl_hung_task_panic; 337 | extern unsigned long sysctl_hung_task_check_count; 338 | extern unsigned long sysctl_hung_task_timeout_secs; 339 | extern unsigned long sysctl_hung_task_warnings; 340 | extern int proc_dohung_task_timeout_secs(struct ctl_table *table, int write, 341 | void __user *buffer, 342 | size_t *lenp, loff_t *ppos); 343 | #else 344 | /* Avoid need for ifdefs elsewhere in the code */ 345 | enum { sysctl_hung_task_timeout_secs = 0 }; 346 | #endif 347 | 348 | /* Attach to any functions which should be ignored in wchan output. */ 349 | #define __sched __attribute__((__section__(".sched.text"))) 350 | 351 | /* Linker adds these: start and end of __sched functions */ 352 | extern char __sched_text_start[], __sched_text_end[]; 353 | 354 | /* Is this address in the __sched functions? 
*/ 355 | extern int in_sched_functions(unsigned long addr); 356 | 357 | #define MAX_SCHEDULE_TIMEOUT LONG_MAX 358 | extern signed long schedule_timeout(signed long timeout); 359 | extern signed long schedule_timeout_interruptible(signed long timeout); 360 | extern signed long schedule_timeout_killable(signed long timeout); 361 | extern signed long schedule_timeout_uninterruptible(signed long timeout); 362 | asmlinkage void schedule(void); 363 | extern int mutex_spin_on_owner(struct mutex *lock, struct thread_info *owner); 364 | 365 | struct nsproxy; 366 | struct user_namespace; 367 | 368 | /* 369 | * Default maximum number of active map areas, this limits the number of vmas 370 | * per mm struct. Users can overwrite this number by sysctl but there is a 371 | * problem. 372 | * 373 | * When a program's coredump is generated as ELF format, a section is created 374 | * per a vma. In ELF, the number of sections is represented in unsigned short. 375 | * This means the number of sections should be smaller than 65535 at coredump. 376 | * Because the kernel adds some informative sections to a image of program at 377 | * generating coredump, we need some margin. The number of extra sections is 378 | * 1-3 now and depends on arch. We use "5" as safe margin, here. 379 | */ 380 | #define MAPCOUNT_ELF_CORE_MARGIN (5) 381 | #define DEFAULT_MAX_MAP_COUNT (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN) 382 | 383 | extern int sysctl_max_map_count; 384 | 385 | #include 386 | 387 | #ifdef CONFIG_MMU 388 | extern void arch_pick_mmap_layout(struct mm_struct *mm); 389 | extern unsigned long 390 | arch_get_unmapped_area(struct file *, unsigned long, unsigned long, 391 | unsigned long, unsigned long); 392 | extern unsigned long 393 | arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 394 | unsigned long len, unsigned long pgoff, 395 | unsigned long flags); 396 | extern void arch_unmap_area(struct mm_struct *, unsigned long); 397 | extern void arch_unmap_area_topdown(struct mm_struct *, unsigned long); 398 | #else 399 | static inline void arch_pick_mmap_layout(struct mm_struct *mm) {} 400 | #endif 401 | 402 | 403 | extern void set_dumpable(struct mm_struct *mm, int value); 404 | extern int get_dumpable(struct mm_struct *mm); 405 | 406 | /* mm flags */ 407 | /* dumpable bits */ 408 | #define MMF_DUMPABLE 0 /* core dump is permitted */ 409 | #define MMF_DUMP_SECURELY 1 /* core file is readable only by root */ 410 | 411 | #define MMF_DUMPABLE_BITS 2 412 | #define MMF_DUMPABLE_MASK ((1 << MMF_DUMPABLE_BITS) - 1) 413 | 414 | /* coredump filter bits */ 415 | #define MMF_DUMP_ANON_PRIVATE 2 416 | #define MMF_DUMP_ANON_SHARED 3 417 | #define MMF_DUMP_MAPPED_PRIVATE 4 418 | #define MMF_DUMP_MAPPED_SHARED 5 419 | #define MMF_DUMP_ELF_HEADERS 6 420 | #define MMF_DUMP_HUGETLB_PRIVATE 7 421 | #define MMF_DUMP_HUGETLB_SHARED 8 422 | 423 | #define MMF_DUMP_FILTER_SHIFT MMF_DUMPABLE_BITS 424 | #define MMF_DUMP_FILTER_BITS 7 425 | #define MMF_DUMP_FILTER_MASK \ 426 | (((1 << MMF_DUMP_FILTER_BITS) - 1) << MMF_DUMP_FILTER_SHIFT) 427 | #define MMF_DUMP_FILTER_DEFAULT \ 428 | ((1 << MMF_DUMP_ANON_PRIVATE) | (1 << MMF_DUMP_ANON_SHARED) |\ 429 | (1 << MMF_DUMP_HUGETLB_PRIVATE) | MMF_DUMP_MASK_DEFAULT_ELF) 430 | 431 | #ifdef CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS 432 | # define MMF_DUMP_MASK_DEFAULT_ELF (1 << MMF_DUMP_ELF_HEADERS) 433 | #else 434 | # define MMF_DUMP_MASK_DEFAULT_ELF 0 435 | #endif 436 | /* leave room for more dump flags */ 437 | #define MMF_VM_MERGEABLE 16 /* KSM may merge identical pages */ 438 | #define 
MMF_VM_HUGEPAGE 17 /* set when VM_HUGEPAGE is set on vma */ 439 | 440 | #define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK) 441 | 442 | struct sighand_struct { 443 | atomic_t count; 444 | struct k_sigaction action[_NSIG]; 445 | spinlock_t siglock; 446 | wait_queue_head_t signalfd_wqh; 447 | }; 448 | 449 | struct pacct_struct { 450 | int ac_flag; 451 | long ac_exitcode; 452 | unsigned long ac_mem; 453 | cputime_t ac_utime, ac_stime; 454 | unsigned long ac_minflt, ac_majflt; 455 | }; 456 | 457 | struct cpu_itimer { 458 | cputime_t expires; 459 | cputime_t incr; 460 | u32 error; 461 | u32 incr_error; 462 | }; 463 | 464 | /** 465 | * struct task_cputime - collected CPU time counts 466 | * @utime: time spent in user mode, in &cputime_t units 467 | * @stime: time spent in kernel mode, in &cputime_t units 468 | * @sum_exec_runtime: total time spent on the CPU, in nanoseconds 469 | * 470 | * This structure groups together three kinds of CPU time that are 471 | * tracked for threads and thread groups. Most things considering 472 | * CPU time want to group these counts together and treat all three 473 | * of them in parallel. 474 | */ 475 | struct task_cputime { 476 | cputime_t utime; 477 | cputime_t stime; 478 | unsigned long long sum_exec_runtime; 479 | }; 480 | /* Alternate field names when used to cache expirations. */ 481 | #define prof_exp stime 482 | #define virt_exp utime 483 | #define sched_exp sum_exec_runtime 484 | 485 | #define INIT_CPUTIME \ 486 | (struct task_cputime) { \ 487 | .utime = cputime_zero, \ 488 | .stime = cputime_zero, \ 489 | .sum_exec_runtime = 0, \ 490 | } 491 | 492 | /* 493 | * Disable preemption until the scheduler is running. 494 | * Reset by start_kernel()->sched_init()->init_idle(). 495 | * 496 | * We include PREEMPT_ACTIVE to avoid cond_resched() from working 497 | * before the scheduler is active -- see should_resched(). 498 | */ 499 | #define INIT_PREEMPT_COUNT (1 + PREEMPT_ACTIVE) 500 | 501 | /** 502 | * struct thread_group_cputimer - thread group interval timer counts 503 | * @cputime: thread group interval timers. 504 | * @running: non-zero when there are timers running and 505 | * @cputime receives updates. 506 | * @lock: lock for fields in this struct. 507 | * 508 | * This structure contains the version of task_cputime, above, that is 509 | * used for thread group CPU timer calculations. 510 | */ 511 | struct thread_group_cputimer { 512 | struct task_cputime cputime; 513 | int running; 514 | spinlock_t lock; 515 | }; 516 | 517 | struct autogroup; 518 | 519 | /* 520 | * NOTE! "signal_struct" does not have its own 521 | * locking, because a shared signal_struct always 522 | * implies a shared sighand_struct, so locking 523 | * sighand_struct is always a proper superset of 524 | * the locking of signal_struct. 525 | */ 526 | struct signal_struct { 527 | atomic_t sigcnt; 528 | atomic_t live; 529 | int nr_threads; 530 | 531 | wait_queue_head_t wait_chldexit; /* for wait4() */ 532 | 533 | /* current thread group signal load-balancing target: */ 534 | struct task_struct *curr_target; 535 | 536 | /* shared signal handling: */ 537 | struct sigpending shared_pending; 538 | 539 | /* thread group exit support */ 540 | int group_exit_code; 541 | /* overloaded: 542 | * - notify group_exit_task when ->count is equal to notify_count 543 | * - everyone except group_exit_task is stopped during signal delivery 544 | * of fatal signals, group_exit_task processes the signal. 
545 | */ 546 | int notify_count; 547 | struct task_struct *group_exit_task; 548 | 549 | /* thread group stop support, overloads group_exit_code too */ 550 | int group_stop_count; 551 | unsigned int flags; /* see SIGNAL_* flags below */ 552 | 553 | /* POSIX.1b Interval Timers */ 554 | struct list_head posix_timers; 555 | 556 | /* ITIMER_REAL timer for the process */ 557 | struct hrtimer real_timer; 558 | struct pid *leader_pid; 559 | ktime_t it_real_incr; 560 | 561 | /* 562 | * ITIMER_PROF and ITIMER_VIRTUAL timers for the process, we use 563 | * CPUCLOCK_PROF and CPUCLOCK_VIRT for indexing array as these 564 | * values are defined to 0 and 1 respectively 565 | */ 566 | struct cpu_itimer it[2]; 567 | 568 | /* 569 | * Thread group totals for process CPU timers. 570 | * See thread_group_cputimer(), et al, for details. 571 | */ 572 | struct thread_group_cputimer cputimer; 573 | 574 | /* Earliest-expiration cache. */ 575 | struct task_cputime cputime_expires; 576 | 577 | struct list_head cpu_timers[3]; 578 | 579 | struct pid *tty_old_pgrp; 580 | 581 | /* boolean value for session group leader */ 582 | int leader; 583 | 584 | struct tty_struct *tty; /* NULL if no tty */ 585 | 586 | #ifdef CONFIG_SCHED_AUTOGROUP 587 | struct autogroup *autogroup; 588 | #endif 589 | /* 590 | * Cumulative resource counters for dead threads in the group, 591 | * and for reaped dead child processes forked by this group. 592 | * Live threads maintain their own counters and add to these 593 | * in __exit_signal, except for the group leader. 594 | */ 595 | cputime_t utime, stime, cutime, cstime; 596 | cputime_t gtime; 597 | cputime_t cgtime; 598 | #ifndef CONFIG_VIRT_CPU_ACCOUNTING 599 | cputime_t prev_utime, prev_stime; 600 | #endif 601 | unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw; 602 | unsigned long min_flt, maj_flt, cmin_flt, cmaj_flt; 603 | unsigned long inblock, oublock, cinblock, coublock; 604 | unsigned long maxrss, cmaxrss; 605 | struct task_io_accounting ioac; 606 | 607 | /* 608 | * Cumulative ns of schedule CPU time fo dead threads in the 609 | * group, not including a zombie group leader, (This only differs 610 | * from jiffies_to_ns(utime + stime) if sched_clock uses something 611 | * other than jiffies.) 612 | */ 613 | unsigned long long sum_sched_runtime; 614 | 615 | /* 616 | * We don't bother to synchronize most readers of this at all, 617 | * because there is no reader checking a limit that actually needs 618 | * to get both rlim_cur and rlim_max atomically, and either one 619 | * alone is a single word that can safely be read normally. 620 | * getrlimit/setrlimit use task_lock(current->group_leader) to 621 | * protect this instead of the siglock, because they really 622 | * have no need to disable irqs. 623 | */ 624 | struct rlimit rlim[RLIM_NLIMITS]; 625 | 626 | #ifdef CONFIG_BSD_PROCESS_ACCT 627 | struct pacct_struct pacct; /* per-process accounting information */ 628 | #endif 629 | #ifdef CONFIG_TASKSTATS 630 | struct taskstats *stats; 631 | #endif 632 | #ifdef CONFIG_AUDIT 633 | unsigned audit_tty; 634 | struct tty_audit_buf *tty_audit_buf; 635 | #endif 636 | 637 | int oom_adj; /* OOM kill score adjustment (bit shift) */ 638 | int oom_score_adj; /* OOM kill score adjustment */ 639 | int oom_score_adj_min; /* OOM kill score adjustment minimum value. 640 | * Only settable by CAP_SYS_RESOURCE. */ 641 | 642 | struct mutex cred_guard_mutex; /* guard against foreign influences on 643 | * credential calculations 644 | * (notably. 
ptrace) */ 645 | }; 646 | 647 | /* Context switch must be unlocked if interrupts are to be enabled */ 648 | #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW 649 | # define __ARCH_WANT_UNLOCKED_CTXSW 650 | #endif 651 | 652 | /* 653 | * Bits in flags field of signal_struct. 654 | */ 655 | #define SIGNAL_STOP_STOPPED 0x00000001 /* job control stop in effect */ 656 | #define SIGNAL_STOP_DEQUEUED 0x00000002 /* stop signal dequeued */ 657 | #define SIGNAL_STOP_CONTINUED 0x00000004 /* SIGCONT since WCONTINUED reap */ 658 | #define SIGNAL_GROUP_EXIT 0x00000008 /* group exit in progress */ 659 | /* 660 | * Pending notifications to parent. 661 | */ 662 | #define SIGNAL_CLD_STOPPED 0x00000010 663 | #define SIGNAL_CLD_CONTINUED 0x00000020 664 | #define SIGNAL_CLD_MASK (SIGNAL_CLD_STOPPED|SIGNAL_CLD_CONTINUED) 665 | 666 | #define SIGNAL_UNKILLABLE 0x00000040 /* for init: ignore fatal signals */ 667 | 668 | /* If true, all threads except ->group_exit_task have pending SIGKILL */ 669 | static inline int signal_group_exit(const struct signal_struct *sig) 670 | { 671 | return (sig->flags & SIGNAL_GROUP_EXIT) || 672 | (sig->group_exit_task != NULL); 673 | } 674 | 675 | /* 676 | * Some day this will be a full-fledged user tracking system.. 677 | */ 678 | struct user_struct { 679 | atomic_t __count; /* reference count */ 680 | atomic_t processes; /* How many processes does this user have? */ 681 | atomic_t files; /* How many open files does this user have? */ 682 | atomic_t sigpending; /* How many pending signals does this user have? */ 683 | #ifdef CONFIG_INOTIFY_USER 684 | atomic_t inotify_watches; /* How many inotify watches does this user have? */ 685 | atomic_t inotify_devs; /* How many inotify devs does this user have opened? */ 686 | #endif 687 | #ifdef CONFIG_FANOTIFY 688 | atomic_t fanotify_listeners; 689 | #endif 690 | #ifdef CONFIG_EPOLL 691 | atomic_long_t epoll_watches; /* The number of file descriptors currently watched */ 692 | #endif 693 | #ifdef CONFIG_POSIX_MQUEUE 694 | /* protected by mq_lock */ 695 | unsigned long mq_bytes; /* How many bytes can be allocated to mqueue? */ 696 | #endif 697 | unsigned long locked_shm; /* How many pages of mlocked shm ? 
*/ 698 | 699 | #ifdef CONFIG_KEYS 700 | struct key *uid_keyring; /* UID specific keyring */ 701 | struct key *session_keyring; /* UID's default session keyring */ 702 | #endif 703 | 704 | /* Hash table maintenance information */ 705 | struct hlist_node uidhash_node; 706 | uid_t uid; 707 | struct user_namespace *user_ns; 708 | 709 | #ifdef CONFIG_PERF_EVENTS 710 | atomic_long_t locked_vm; 711 | #endif 712 | }; 713 | 714 | extern int uids_sysfs_init(void); 715 | 716 | extern struct user_struct *find_user(uid_t); 717 | 718 | extern struct user_struct root_user; 719 | #define INIT_USER (&root_user) 720 | 721 | 722 | struct backing_dev_info; 723 | struct reclaim_state; 724 | 725 | #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT) 726 | struct sched_info { 727 | /* cumulative counters */ 728 | unsigned long pcount; /* # of times run on this cpu */ 729 | unsigned long long run_delay; /* time spent waiting on a runqueue */ 730 | 731 | /* timestamps */ 732 | unsigned long long last_arrival,/* when we last ran on a cpu */ 733 | last_queued; /* when we were last queued to run */ 734 | #ifdef CONFIG_SCHEDSTATS 735 | /* BKL stats */ 736 | unsigned int bkl_count; 737 | #endif 738 | }; 739 | #endif /* defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT) */ 740 | 741 | #ifdef CONFIG_TASK_DELAY_ACCT 742 | struct task_delay_info { 743 | spinlock_t lock; 744 | unsigned int flags; /* Private per-task flags */ 745 | 746 | /* For each stat XXX, add following, aligned appropriately 747 | * 748 | * struct timespec XXX_start, XXX_end; 749 | * u64 XXX_delay; 750 | * u32 XXX_count; 751 | * 752 | * Atomicity of updates to XXX_delay, XXX_count protected by 753 | * single lock above (split into XXX_lock if contention is an issue). 754 | */ 755 | 756 | /* 757 | * XXX_count is incremented on every XXX operation, the delay 758 | * associated with the operation is added to XXX_delay. 759 | * XXX_delay contains the accumulated delay time in nanoseconds. 760 | */ 761 | struct timespec blkio_start, blkio_end; /* Shared by blkio, swapin */ 762 | u64 blkio_delay; /* wait for sync block io completion */ 763 | u64 swapin_delay; /* wait for swapin block io completion */ 764 | u32 blkio_count; /* total count of the number of sync block */ 765 | /* io operations performed */ 766 | u32 swapin_count; /* total count of the number of swapin block */ 767 | /* io operations performed */ 768 | 769 | struct timespec freepages_start, freepages_end; 770 | u64 freepages_delay; /* wait for memory reclaim */ 771 | u32 freepages_count; /* total count of memory reclaim */ 772 | }; 773 | #endif /* CONFIG_TASK_DELAY_ACCT */ 774 | 775 | static inline int sched_info_on(void) 776 | { 777 | #ifdef CONFIG_SCHEDSTATS 778 | return 1; 779 | #elif defined(CONFIG_TASK_DELAY_ACCT) 780 | extern int delayacct_on; 781 | return delayacct_on; 782 | #else 783 | return 0; 784 | #endif 785 | } 786 | 787 | enum cpu_idle_type { 788 | CPU_IDLE, 789 | CPU_NOT_IDLE, 790 | CPU_NEWLY_IDLE, 791 | CPU_MAX_IDLE_TYPES 792 | }; 793 | 794 | /* 795 | * sched-domains (multiprocessor balancing) declarations: 796 | */ 797 | 798 | /* 799 | * Increase resolution of nice-level calculations: 800 | */ 801 | #define SCHED_LOAD_SHIFT 10 802 | #define SCHED_LOAD_SCALE (1L << SCHED_LOAD_SHIFT) 803 | 804 | #define SCHED_LOAD_SCALE_FUZZ SCHED_LOAD_SCALE 805 | 806 | #ifdef CONFIG_SMP 807 | #define SD_LOAD_BALANCE 0x0001 /* Do load balancing on this domain. 
*/ 808 | #define SD_BALANCE_NEWIDLE 0x0002 /* Balance when about to become idle */ 809 | #define SD_BALANCE_EXEC 0x0004 /* Balance on exec */ 810 | #define SD_BALANCE_FORK 0x0008 /* Balance on fork, clone */ 811 | #define SD_BALANCE_WAKE 0x0010 /* Balance on wakeup */ 812 | #define SD_WAKE_AFFINE 0x0020 /* Wake task to waking CPU */ 813 | #define SD_PREFER_LOCAL 0x0040 /* Prefer to keep tasks local to this domain */ 814 | #define SD_SHARE_CPUPOWER 0x0080 /* Domain members share cpu power */ 815 | #define SD_POWERSAVINGS_BALANCE 0x0100 /* Balance for power savings */ 816 | #define SD_SHARE_PKG_RESOURCES 0x0200 /* Domain members share cpu pkg resources */ 817 | #define SD_SERIALIZE 0x0400 /* Only a single load balancing instance */ 818 | #define SD_ASYM_PACKING 0x0800 /* Place busy groups earlier in the domain */ 819 | #define SD_PREFER_SIBLING 0x1000 /* Prefer to place tasks in a sibling domain */ 820 | 821 | enum powersavings_balance_level { 822 | POWERSAVINGS_BALANCE_NONE = 0, /* No power saving load balance */ 823 | POWERSAVINGS_BALANCE_BASIC, /* Fill one thread/core/package 824 | * first for long running threads 825 | */ 826 | POWERSAVINGS_BALANCE_WAKEUP, /* Also bias task wakeups to semi-idle 827 | * cpu package for power savings 828 | */ 829 | MAX_POWERSAVINGS_BALANCE_LEVELS 830 | }; 831 | 832 | extern int sched_mc_power_savings, sched_smt_power_savings; 833 | 834 | static inline int sd_balance_for_mc_power(void) 835 | { 836 | if (sched_smt_power_savings) 837 | return SD_POWERSAVINGS_BALANCE; 838 | 839 | if (!sched_mc_power_savings) 840 | return SD_PREFER_SIBLING; 841 | 842 | return 0; 843 | } 844 | 845 | static inline int sd_balance_for_package_power(void) 846 | { 847 | if (sched_mc_power_savings | sched_smt_power_savings) 848 | return SD_POWERSAVINGS_BALANCE; 849 | 850 | return SD_PREFER_SIBLING; 851 | } 852 | 853 | extern int __weak arch_sd_sibiling_asym_packing(void); 854 | 855 | /* 856 | * Optimise SD flags for power savings: 857 | * SD_BALANCE_NEWIDLE helps aggressive task consolidation and power savings. 858 | * Keep default SD flags if sched_{smt,mc}_power_saving=0 859 | */ 860 | 861 | static inline int sd_power_saving_flags(void) 862 | { 863 | if (sched_mc_power_savings | sched_smt_power_savings) 864 | return SD_BALANCE_NEWIDLE; 865 | 866 | return 0; 867 | } 868 | 869 | struct sched_group { 870 | struct sched_group *next; /* Must be a circular list */ 871 | 872 | /* 873 | * CPU power of this group, SCHED_LOAD_SCALE being max power for a 874 | * single CPU. 875 | */ 876 | unsigned int cpu_power, cpu_power_orig; 877 | unsigned int group_weight; 878 | 879 | /* 880 | * The CPUs this group covers. 881 | * 882 | * NOTE: this field is variable length. (Allocated dynamically 883 | * by attaching extra space to the end of the structure, 884 | * depending on how many CPUs the kernel has booted up with) 885 | * 886 | * It is also be embedded into static data structures at build 887 | * time. 
(See 'struct static_sched_group' in kernel/sched.c) 888 | */ 889 | unsigned long cpumask[0]; 890 | }; 891 | 892 | static inline struct cpumask *sched_group_cpus(struct sched_group *sg) 893 | { 894 | return to_cpumask(sg->cpumask); 895 | } 896 | 897 | enum sched_domain_level { 898 | SD_LV_NONE = 0, 899 | SD_LV_SIBLING, 900 | SD_LV_MC, 901 | SD_LV_BOOK, 902 | SD_LV_CPU, 903 | SD_LV_NODE, 904 | SD_LV_ALLNODES, 905 | SD_LV_MAX 906 | }; 907 | 908 | struct sched_domain_attr { 909 | int relax_domain_level; 910 | }; 911 | 912 | #define SD_ATTR_INIT (struct sched_domain_attr) { \ 913 | .relax_domain_level = -1, \ 914 | } 915 | 916 | struct sched_domain { 917 | /* These fields must be setup */ 918 | struct sched_domain *parent; /* top domain must be null terminated */ 919 | struct sched_domain *child; /* bottom domain must be null terminated */ 920 | struct sched_group *groups; /* the balancing groups of the domain */ 921 | unsigned long min_interval; /* Minimum balance interval ms */ 922 | unsigned long max_interval; /* Maximum balance interval ms */ 923 | unsigned int busy_factor; /* less balancing by factor if busy */ 924 | unsigned int imbalance_pct; /* No balance until over watermark */ 925 | unsigned int cache_nice_tries; /* Leave cache hot tasks for # tries */ 926 | unsigned int busy_idx; 927 | unsigned int idle_idx; 928 | unsigned int newidle_idx; 929 | unsigned int wake_idx; 930 | unsigned int forkexec_idx; 931 | unsigned int smt_gain; 932 | int flags; /* See SD_* */ 933 | enum sched_domain_level level; 934 | 935 | /* Runtime fields. */ 936 | unsigned long last_balance; /* init to jiffies. units in jiffies */ 937 | unsigned int balance_interval; /* initialise to 1. units in ms. */ 938 | unsigned int nr_balance_failed; /* initialise to 0 */ 939 | 940 | u64 last_update; 941 | 942 | #ifdef CONFIG_SCHEDSTATS 943 | /* load_balance() stats */ 944 | unsigned int lb_count[CPU_MAX_IDLE_TYPES]; 945 | unsigned int lb_failed[CPU_MAX_IDLE_TYPES]; 946 | unsigned int lb_balanced[CPU_MAX_IDLE_TYPES]; 947 | unsigned int lb_imbalance[CPU_MAX_IDLE_TYPES]; 948 | unsigned int lb_gained[CPU_MAX_IDLE_TYPES]; 949 | unsigned int lb_hot_gained[CPU_MAX_IDLE_TYPES]; 950 | unsigned int lb_nobusyg[CPU_MAX_IDLE_TYPES]; 951 | unsigned int lb_nobusyq[CPU_MAX_IDLE_TYPES]; 952 | 953 | /* Active load balancing */ 954 | unsigned int alb_count; 955 | unsigned int alb_failed; 956 | unsigned int alb_pushed; 957 | 958 | /* SD_BALANCE_EXEC stats */ 959 | unsigned int sbe_count; 960 | unsigned int sbe_balanced; 961 | unsigned int sbe_pushed; 962 | 963 | /* SD_BALANCE_FORK stats */ 964 | unsigned int sbf_count; 965 | unsigned int sbf_balanced; 966 | unsigned int sbf_pushed; 967 | 968 | /* try_to_wake_up() stats */ 969 | unsigned int ttwu_wake_remote; 970 | unsigned int ttwu_move_affine; 971 | unsigned int ttwu_move_balance; 972 | #endif 973 | #ifdef CONFIG_SCHED_DEBUG 974 | char *name; 975 | #endif 976 | 977 | unsigned int span_weight; 978 | /* 979 | * Span of all CPUs in this domain. 980 | * 981 | * NOTE: this field is variable length. (Allocated dynamically 982 | * by attaching extra space to the end of the structure, 983 | * depending on how many CPUs the kernel has booted up with) 984 | * 985 | * It is also be embedded into static data structures at build 986 | * time. 
(See 'struct static_sched_domain' in kernel/sched.c) 987 | */ 988 | unsigned long span[0]; 989 | }; 990 | 991 | static inline struct cpumask *sched_domain_span(struct sched_domain *sd) 992 | { 993 | return to_cpumask(sd->span); 994 | } 995 | 996 | extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[], 997 | struct sched_domain_attr *dattr_new); 998 | 999 | /* Allocate an array of sched domains, for partition_sched_domains(). */ 1000 | cpumask_var_t *alloc_sched_domains(unsigned int ndoms); 1001 | void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms); 1002 | 1003 | /* Test a flag in parent sched domain */ 1004 | static inline int test_sd_parent(struct sched_domain *sd, int flag) 1005 | { 1006 | if (sd->parent && (sd->parent->flags & flag)) 1007 | return 1; 1008 | 1009 | return 0; 1010 | } 1011 | 1012 | unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu); 1013 | unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu); 1014 | 1015 | #else /* CONFIG_SMP */ 1016 | 1017 | struct sched_domain_attr; 1018 | 1019 | static inline void 1020 | partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[], 1021 | struct sched_domain_attr *dattr_new) 1022 | { 1023 | } 1024 | #endif /* !CONFIG_SMP */ 1025 | 1026 | 1027 | struct io_context; /* See blkdev.h */ 1028 | 1029 | 1030 | #ifdef ARCH_HAS_PREFETCH_SWITCH_STACK 1031 | extern void prefetch_stack(struct task_struct *t); 1032 | #else 1033 | static inline void prefetch_stack(struct task_struct *t) { } 1034 | #endif 1035 | 1036 | struct audit_context; /* See audit.c */ 1037 | struct mempolicy; 1038 | struct pipe_inode_info; 1039 | struct uts_namespace; 1040 | 1041 | struct rq; 1042 | struct sched_domain; 1043 | 1044 | /* 1045 | * wake flags 1046 | */ 1047 | #define WF_SYNC 0x01 /* waker goes to sleep after wakup */ 1048 | #define WF_FORK 0x02 /* child wakeup after fork */ 1049 | 1050 | #define ENQUEUE_WAKEUP 1 1051 | #define ENQUEUE_WAKING 2 1052 | #define ENQUEUE_HEAD 4 1053 | 1054 | #define DEQUEUE_SLEEP 1 1055 | 1056 | struct sched_class { 1057 | const struct sched_class *next; 1058 | 1059 | void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags); 1060 | void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags); 1061 | void (*yield_task) (struct rq *rq); 1062 | bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt); 1063 | 1064 | void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags); 1065 | 1066 | struct task_struct * (*pick_next_task) (struct rq *rq); 1067 | void (*put_prev_task) (struct rq *rq, struct task_struct *p); 1068 | 1069 | #ifdef CONFIG_SMP 1070 | int (*select_task_rq)(struct rq *rq, struct task_struct *p, 1071 | int sd_flag, int flags); 1072 | 1073 | void (*pre_schedule) (struct rq *this_rq, struct task_struct *task); 1074 | void (*post_schedule) (struct rq *this_rq); 1075 | void (*task_waking) (struct rq *this_rq, struct task_struct *task); 1076 | void (*task_woken) (struct rq *this_rq, struct task_struct *task); 1077 | 1078 | void (*set_cpus_allowed)(struct task_struct *p, 1079 | const struct cpumask *newmask); 1080 | 1081 | void (*rq_online)(struct rq *rq); 1082 | void (*rq_offline)(struct rq *rq); 1083 | #endif 1084 | 1085 | void (*set_curr_task) (struct rq *rq); 1086 | void (*task_tick) (struct rq *rq, struct task_struct *p, int queued); 1087 | void (*task_fork) (struct task_struct *p); 1088 | 1089 | void (*switched_from) (struct rq *this_rq, struct task_struct *task); 1090 | void 
(*switched_to) (struct rq *this_rq, struct task_struct *task); 1091 | void (*prio_changed) (struct rq *this_rq, struct task_struct *task, 1092 | int oldprio); 1093 | 1094 | unsigned int (*get_rr_interval) (struct rq *rq, 1095 | struct task_struct *task); 1096 | 1097 | #ifdef CONFIG_FAIR_GROUP_SCHED 1098 | void (*task_move_group) (struct task_struct *p, int on_rq); 1099 | #endif 1100 | }; 1101 | 1102 | struct load_weight { 1103 | unsigned long weight, inv_weight; 1104 | }; 1105 | 1106 | #ifdef CONFIG_SCHEDSTATS 1107 | struct sched_statistics { 1108 | u64 wait_start; 1109 | u64 wait_max; 1110 | u64 wait_count; 1111 | u64 wait_sum; 1112 | u64 iowait_count; 1113 | u64 iowait_sum; 1114 | 1115 | u64 sleep_start; 1116 | u64 sleep_max; 1117 | s64 sum_sleep_runtime; 1118 | 1119 | u64 block_start; 1120 | u64 block_max; 1121 | u64 exec_max; 1122 | u64 slice_max; 1123 | 1124 | u64 nr_migrations_cold; 1125 | u64 nr_failed_migrations_affine; 1126 | u64 nr_failed_migrations_running; 1127 | u64 nr_failed_migrations_hot; 1128 | u64 nr_forced_migrations; 1129 | 1130 | u64 nr_wakeups; 1131 | u64 nr_wakeups_sync; 1132 | u64 nr_wakeups_migrate; 1133 | u64 nr_wakeups_local; 1134 | u64 nr_wakeups_remote; 1135 | u64 nr_wakeups_affine; 1136 | u64 nr_wakeups_affine_attempts; 1137 | u64 nr_wakeups_passive; 1138 | u64 nr_wakeups_idle; 1139 | }; 1140 | #endif 1141 | 1142 | struct sched_entity { 1143 | struct load_weight load; /* for load-balancing */ 1144 | struct rb_node run_node; 1145 | struct list_head group_node; 1146 | unsigned int on_rq; 1147 | 1148 | u64 exec_start; 1149 | u64 sum_exec_runtime; 1150 | u64 vruntime; 1151 | u64 prev_sum_exec_runtime; 1152 | 1153 | u64 nr_migrations; 1154 | 1155 | #ifdef CONFIG_SCHEDSTATS 1156 | struct sched_statistics statistics; 1157 | #endif 1158 | 1159 | #ifdef CONFIG_FAIR_GROUP_SCHED 1160 | struct sched_entity *parent; 1161 | /* rq on which this entity is (to be) queued: */ 1162 | struct cfs_rq *cfs_rq; 1163 | /* rq "owned" by this entity/group: */ 1164 | struct cfs_rq *my_q; 1165 | #endif 1166 | }; 1167 | 1168 | struct sched_rt_entity { 1169 | struct list_head run_list; 1170 | unsigned long timeout; 1171 | unsigned int time_slice; 1172 | int nr_cpus_allowed; 1173 | 1174 | struct sched_rt_entity *back; 1175 | #ifdef CONFIG_RT_GROUP_SCHED 1176 | struct sched_rt_entity *parent; 1177 | /* rq on which this entity is (to be) queued: */ 1178 | struct rt_rq *rt_rq; 1179 | /* rq "owned" by this entity/group: */ 1180 | struct rt_rq *my_q; 1181 | #endif 1182 | }; 1183 | 1184 | struct rcu_node; 1185 | 1186 | enum perf_event_task_context { 1187 | perf_invalid_context = -1, 1188 | perf_hw_context = 0, 1189 | perf_sw_context, 1190 | perf_nr_task_contexts, 1191 | }; 1192 | 1193 | struct task_struct { 1194 | volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */ 1195 | void *stack; 1196 | atomic_t usage; 1197 | unsigned int flags; /* per process flags, defined below */ 1198 | unsigned int ptrace; 1199 | 1200 | int lock_depth; /* BKL lock depth */ 1201 | 1202 | #ifdef CONFIG_SMP 1203 | #ifdef __ARCH_WANT_UNLOCKED_CTXSW 1204 | int oncpu; 1205 | #endif 1206 | #endif 1207 | 1208 | int prio, static_prio, normal_prio; 1209 | unsigned int rt_priority; 1210 | const struct sched_class *sched_class; 1211 | struct sched_entity se; 1212 | struct sched_rt_entity rt; 1213 | 1214 | #ifdef CONFIG_PREEMPT_NOTIFIERS 1215 | /* list of struct preempt_notifier: */ 1216 | struct hlist_head preempt_notifiers; 1217 | #endif 1218 | 1219 | /* 1220 | * fpu_counter contains the number of consecutive context 
switches 1221 | * that the FPU is used. If this is over a threshold, the lazy fpu 1222 | * saving becomes unlazy to save the trap. This is an unsigned char 1223 | * so that after 256 times the counter wraps and the behavior turns 1224 | * lazy again; this to deal with bursty apps that only use FPU for 1225 | * a short time 1226 | */ 1227 | unsigned char fpu_counter; 1228 | #ifdef CONFIG_BLK_DEV_IO_TRACE 1229 | unsigned int btrace_seq; 1230 | #endif 1231 | 1232 | unsigned int policy; 1233 | cpumask_t cpus_allowed; 1234 | 1235 | #ifdef CONFIG_PREEMPT_RCU 1236 | int rcu_read_lock_nesting; 1237 | char rcu_read_unlock_special; 1238 | struct list_head rcu_node_entry; 1239 | #endif /* #ifdef CONFIG_PREEMPT_RCU */ 1240 | #ifdef CONFIG_TREE_PREEMPT_RCU 1241 | struct rcu_node *rcu_blocked_node; 1242 | #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */ 1243 | #ifdef CONFIG_RCU_BOOST 1244 | struct rt_mutex *rcu_boost_mutex; 1245 | #endif /* #ifdef CONFIG_RCU_BOOST */ 1246 | 1247 | #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT) 1248 | struct sched_info sched_info; 1249 | #endif 1250 | 1251 | struct list_head tasks; 1252 | #ifdef CONFIG_SMP 1253 | struct plist_node pushable_tasks; 1254 | #endif 1255 | 1256 | struct mm_struct *mm, *active_mm; 1257 | #ifdef CONFIG_COMPAT_BRK 1258 | unsigned brk_randomized:1; 1259 | #endif 1260 | #if defined(SPLIT_RSS_COUNTING) 1261 | struct task_rss_stat rss_stat; 1262 | #endif 1263 | /* task state */ 1264 | int exit_state; 1265 | int exit_code, exit_signal; 1266 | int pdeath_signal; /* The signal sent when the parent dies */ 1267 | /* ??? */ 1268 | unsigned int personality; 1269 | unsigned did_exec:1; 1270 | unsigned in_execve:1; /* Tell the LSMs that the process is doing an 1271 | * execve */ 1272 | unsigned in_iowait:1; 1273 | 1274 | 1275 | /* Revert to default priority/policy when forking */ 1276 | unsigned sched_reset_on_fork:1; 1277 | 1278 | pid_t pid; 1279 | pid_t tgid; 1280 | 1281 | #ifdef CONFIG_CC_STACKPROTECTOR 1282 | /* Canary value for the -fstack-protector gcc feature */ 1283 | unsigned long stack_canary; 1284 | #endif 1285 | 1286 | /* 1287 | * pointers to (original) parent process, youngest child, younger sibling, 1288 | * older sibling, respectively. (p->father can be replaced with 1289 | * p->real_parent->pid) 1290 | */ 1291 | struct task_struct *real_parent; /* real parent process */ 1292 | struct task_struct *parent; /* recipient of SIGCHLD, wait4() reports */ 1293 | /* 1294 | * children/sibling forms the list of my natural children 1295 | */ 1296 | struct list_head children; /* list of my children */ 1297 | struct list_head sibling; /* linkage in my parent's children list */ 1298 | struct task_struct *group_leader; /* threadgroup leader */ 1299 | 1300 | /* 1301 | * ptraced is the list of tasks this task is using ptrace on. 1302 | * This includes both natural children and PTRACE_ATTACH targets. 1303 | * p->ptrace_entry is p's link on the p->parent->ptraced list. 1304 | */ 1305 | struct list_head ptraced; 1306 | struct list_head ptrace_entry; 1307 | 1308 | /* PID/PID hash table linkage. 
*/ 1309 | struct pid_link pids[PIDTYPE_MAX]; 1310 | struct list_head thread_group; 1311 | 1312 | struct completion *vfork_done; /* for vfork() */ 1313 | int __user *set_child_tid; /* CLONE_CHILD_SETTID */ 1314 | int __user *clear_child_tid; /* CLONE_CHILD_CLEARTID */ 1315 | 1316 | cputime_t utime, stime, utimescaled, stimescaled; 1317 | cputime_t gtime; 1318 | #ifndef CONFIG_VIRT_CPU_ACCOUNTING 1319 | cputime_t prev_utime, prev_stime; 1320 | #endif 1321 | unsigned long nvcsw, nivcsw; /* context switch counts */ 1322 | struct timespec start_time; /* monotonic time */ 1323 | struct timespec real_start_time; /* boot based time */ 1324 | /* mm fault and swap info: this can arguably be seen as either mm-specific or thread-specific */ 1325 | unsigned long min_flt, maj_flt; 1326 | 1327 | struct task_cputime cputime_expires; 1328 | struct list_head cpu_timers[3]; 1329 | 1330 | /* process credentials */ 1331 | const struct cred __rcu *real_cred; /* objective and real subjective task 1332 | * credentials (COW) */ 1333 | const struct cred __rcu *cred; /* effective (overridable) subjective task 1334 | * credentials (COW) */ 1335 | struct cred *replacement_session_keyring; /* for KEYCTL_SESSION_TO_PARENT */ 1336 | 1337 | char comm[TASK_COMM_LEN]; /* executable name excluding path 1338 | - access with [gs]et_task_comm (which lock 1339 | it with task_lock()) 1340 | - initialized normally by setup_new_exec */ 1341 | /* file system info */ 1342 | int link_count, total_link_count; 1343 | #ifdef CONFIG_SYSVIPC 1344 | /* ipc stuff */ 1345 | struct sysv_sem sysvsem; 1346 | #endif 1347 | #ifdef CONFIG_DETECT_HUNG_TASK 1348 | /* hung task detection */ 1349 | unsigned long last_switch_count; 1350 | #endif 1351 | /* CPU-specific state of this task */ 1352 | struct thread_struct thread; 1353 | /* filesystem information */ 1354 | struct fs_struct *fs; 1355 | /* open file information */ 1356 | struct files_struct *files; 1357 | /* namespaces */ 1358 | struct nsproxy *nsproxy; 1359 | /* signal handlers */ 1360 | struct signal_struct *signal; 1361 | struct sighand_struct *sighand; 1362 | 1363 | sigset_t blocked, real_blocked; 1364 | sigset_t saved_sigmask; /* restored if set_restore_sigmask() was used */ 1365 | struct sigpending pending; 1366 | 1367 | unsigned long sas_ss_sp; 1368 | size_t sas_ss_size; 1369 | int (*notifier)(void *priv); 1370 | void *notifier_data; 1371 | sigset_t *notifier_mask; 1372 | struct audit_context *audit_context; 1373 | #ifdef CONFIG_AUDITSYSCALL 1374 | uid_t loginuid; 1375 | unsigned int sessionid; 1376 | #endif 1377 | seccomp_t seccomp; 1378 | 1379 | /* Thread group tracking */ 1380 | u32 parent_exec_id; 1381 | u32 self_exec_id; 1382 | /* Protection of (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, 1383 | * mempolicy */ 1384 | spinlock_t alloc_lock; 1385 | 1386 | #ifdef CONFIG_GENERIC_HARDIRQS 1387 | /* IRQ handler threads */ 1388 | struct irqaction *irqaction; 1389 | #endif 1390 | 1391 | /* Protection of the PI data structures: */ 1392 | raw_spinlock_t pi_lock; 1393 | 1394 | #ifdef CONFIG_RT_MUTEXES 1395 | /* PI waiters blocked on a rt_mutex held by this task */ 1396 | struct plist_head pi_waiters; 1397 | /* Deadlock detection and priority inheritance handling */ 1398 | struct rt_mutex_waiter *pi_blocked_on; 1399 | #endif 1400 | 1401 | #ifdef CONFIG_DEBUG_MUTEXES 1402 | /* mutex deadlock detection */ 1403 | struct mutex_waiter *blocked_on; 1404 | #endif 1405 | #ifdef CONFIG_TRACE_IRQFLAGS 1406 | unsigned int irq_events; 1407 | unsigned long hardirq_enable_ip; 1408 | unsigned 
long hardirq_disable_ip; 1409 | unsigned int hardirq_enable_event; 1410 | unsigned int hardirq_disable_event; 1411 | int hardirqs_enabled; 1412 | int hardirq_context; 1413 | unsigned long softirq_disable_ip; 1414 | unsigned long softirq_enable_ip; 1415 | unsigned int softirq_disable_event; 1416 | unsigned int softirq_enable_event; 1417 | int softirqs_enabled; 1418 | int softirq_context; 1419 | #endif 1420 | #ifdef CONFIG_LOCKDEP 1421 | # define MAX_LOCK_DEPTH 48UL 1422 | u64 curr_chain_key; 1423 | int lockdep_depth; 1424 | unsigned int lockdep_recursion; 1425 | struct held_lock held_locks[MAX_LOCK_DEPTH]; 1426 | gfp_t lockdep_reclaim_gfp; 1427 | #endif 1428 | 1429 | /* journalling filesystem info */ 1430 | void *journal_info; 1431 | 1432 | /* stacked block device info */ 1433 | struct bio_list *bio_list; 1434 | 1435 | #ifdef CONFIG_BLOCK 1436 | /* stack plugging */ 1437 | struct blk_plug *plug; 1438 | #endif 1439 | 1440 | /* VM state */ 1441 | struct reclaim_state *reclaim_state; 1442 | 1443 | struct backing_dev_info *backing_dev_info; 1444 | 1445 | struct io_context *io_context; 1446 | 1447 | unsigned long ptrace_message; 1448 | siginfo_t *last_siginfo; /* For ptrace use. */ 1449 | struct task_io_accounting ioac; 1450 | #if defined(CONFIG_TASK_XACCT) 1451 | u64 acct_rss_mem1; /* accumulated rss usage */ 1452 | u64 acct_vm_mem1; /* accumulated virtual memory usage */ 1453 | cputime_t acct_timexpd; /* stime + utime since last update */ 1454 | #endif 1455 | #ifdef CONFIG_CPUSETS 1456 | nodemask_t mems_allowed; /* Protected by alloc_lock */ 1457 | int mems_allowed_change_disable; 1458 | int cpuset_mem_spread_rotor; 1459 | int cpuset_slab_spread_rotor; 1460 | #endif 1461 | #ifdef CONFIG_CGROUPS 1462 | /* Control Group info protected by css_set_lock */ 1463 | struct css_set __rcu *cgroups; 1464 | /* cg_list protected by css_set_lock and tsk->alloc_lock */ 1465 | struct list_head cg_list; 1466 | #endif 1467 | #ifdef CONFIG_FUTEX 1468 | struct robust_list_head __user *robust_list; 1469 | #ifdef CONFIG_COMPAT 1470 | struct compat_robust_list_head __user *compat_robust_list; 1471 | #endif 1472 | struct list_head pi_state_list; 1473 | struct futex_pi_state *pi_state_cache; 1474 | #endif 1475 | #ifdef CONFIG_PERF_EVENTS 1476 | struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts]; 1477 | struct mutex perf_event_mutex; 1478 | struct list_head perf_event_list; 1479 | #endif 1480 | #ifdef CONFIG_NUMA 1481 | struct mempolicy *mempolicy; /* Protected by alloc_lock */ 1482 | short il_next; 1483 | short pref_node_fork; 1484 | #endif 1485 | atomic_t fs_excl; /* holding fs exclusive resources */ 1486 | struct rcu_head rcu; 1487 | 1488 | /* 1489 | * cache last used pipe for splice 1490 | */ 1491 | struct pipe_inode_info *splice_pipe; 1492 | #ifdef CONFIG_TASK_DELAY_ACCT 1493 | struct task_delay_info *delays; 1494 | #endif 1495 | #ifdef CONFIG_FAULT_INJECTION 1496 | int make_it_fail; 1497 | #endif 1498 | struct prop_local_single dirties; 1499 | #ifdef CONFIG_LATENCYTOP 1500 | int latency_record_count; 1501 | struct latency_record latency_record[LT_SAVECOUNT]; 1502 | #endif 1503 | /* 1504 | * time slack values; these are used to round up poll() and 1505 | * select() etc timeout values. These are in nanoseconds. 
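timer_slack_ns above is the amount, in nanoseconds, by which poll()/select() style timeouts may be rounded up, so that nearby timer expirations can be coalesced into fewer wakeups. A hypothetical helper (not a kernel function) showing the effect; ktime_add_ns() is from <linux/ktime.h>:

```c
#include <linux/sched.h>
#include <linux/ktime.h>

/* Illustrative only: widen an absolute expiry by the task's timer slack. */
static ktime_t example_apply_slack(ktime_t expires)
{
        return ktime_add_ns(expires, current->timer_slack_ns);
}
```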
1506 | */ 1507 | unsigned long timer_slack_ns; 1508 | unsigned long default_timer_slack_ns; 1509 | 1510 | struct list_head *scm_work_list; 1511 | #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1512 | /* Index of current stored address in ret_stack */ 1513 | int curr_ret_stack; 1514 | /* Stack of return addresses for return function tracing */ 1515 | struct ftrace_ret_stack *ret_stack; 1516 | /* time stamp for last schedule */ 1517 | unsigned long long ftrace_timestamp; 1518 | /* 1519 | * Number of functions that haven't been traced 1520 | * because of depth overrun. 1521 | */ 1522 | atomic_t trace_overrun; 1523 | /* Pause for the tracing */ 1524 | atomic_t tracing_graph_pause; 1525 | #endif 1526 | #ifdef CONFIG_TRACING 1527 | /* state flags for use by tracers */ 1528 | unsigned long trace; 1529 | /* bitmask of trace recursion */ 1530 | unsigned long trace_recursion; 1531 | #endif /* CONFIG_TRACING */ 1532 | #ifdef CONFIG_CGROUP_MEM_RES_CTLR /* memcg uses this to do batch job */ 1533 | struct memcg_batch_info { 1534 | int do_batch; /* incremented when batch uncharge started */ 1535 | struct mem_cgroup *memcg; /* target memcg of uncharge */ 1536 | unsigned long nr_pages; /* uncharged usage */ 1537 | unsigned long memsw_nr_pages; /* uncharged mem+swap usage */ 1538 | } memcg_batch; 1539 | #endif 1540 | #ifdef CONFIG_HAVE_HW_BREAKPOINT 1541 | atomic_t ptrace_bp_refcnt; 1542 | #endif 1543 | }; 1544 | 1545 | /* Future-safe accessor for struct task_struct's cpus_allowed. */ 1546 | #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed) 1547 | 1548 | /* 1549 | * Priority of a process goes from 0..MAX_PRIO-1, valid RT 1550 | * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH 1551 | * tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority 1552 | * values are inverted: lower p->prio value means higher priority. 1553 | * 1554 | * The MAX_USER_RT_PRIO value allows the actual maximum 1555 | * RT priority to be separate from the value exported to 1556 | * user-space. This allows kernel threads to set their 1557 | * priority to a value higher than any user task. Note: 1558 | * MAX_RT_PRIO must not be smaller than MAX_USER_RT_PRIO. 1559 | */ 1560 | 1561 | #define MAX_USER_RT_PRIO 100 1562 | #define MAX_RT_PRIO MAX_USER_RT_PRIO 1563 | 1564 | #define MAX_PRIO (MAX_RT_PRIO + 40) 1565 | #define DEFAULT_PRIO (MAX_RT_PRIO + 20) 1566 | 1567 | static inline int rt_prio(int prio) 1568 | { 1569 | if (unlikely(prio < MAX_RT_PRIO)) 1570 | return 1; 1571 | return 0; 1572 | } 1573 | 1574 | static inline int rt_task(struct task_struct *p) 1575 | { 1576 | return rt_prio(p->prio); 1577 | } 1578 | 1579 | static inline struct pid *task_pid(struct task_struct *task) 1580 | { 1581 | return task->pids[PIDTYPE_PID].pid; 1582 | } 1583 | 1584 | static inline struct pid *task_tgid(struct task_struct *task) 1585 | { 1586 | return task->group_leader->pids[PIDTYPE_PID].pid; 1587 | } 1588 | 1589 | /* 1590 | * Without tasklist or rcu lock it is not safe to dereference 1591 | * the result of task_pgrp/task_session even if task == current, 1592 | * we can race with another thread doing sys_setsid/sys_setpgid. 
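A minimal sketch of the rule stated above: hold rcu_read_lock() across task_pgrp()/task_session() so the struct pid cannot go away under a concurrent setsid()/setpgid(). The helper name is made up; pid_vnr() is declared in include/linux/pid.h:

```c
#include <linux/sched.h>
#include <linux/pid.h>
#include <linux/rcupdate.h>

static pid_t example_read_pgrp(struct task_struct *task)
{
        pid_t nr;

        rcu_read_lock();
        nr = pid_vnr(task_pgrp(task));  /* safe: the pid is pinned by RCU */
        rcu_read_unlock();

        return nr;
}
```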
1593 | */ 1594 | static inline struct pid *task_pgrp(struct task_struct *task) 1595 | { 1596 | return task->group_leader->pids[PIDTYPE_PGID].pid; 1597 | } 1598 | 1599 | static inline struct pid *task_session(struct task_struct *task) 1600 | { 1601 | return task->group_leader->pids[PIDTYPE_SID].pid; 1602 | } 1603 | 1604 | struct pid_namespace; 1605 | 1606 | /* 1607 | * the helpers to get the task's different pids as they are seen 1608 | * from various namespaces 1609 | * 1610 | * task_xid_nr() : global id, i.e. the id seen from the init namespace; 1611 | * task_xid_vnr() : virtual id, i.e. the id seen from the pid namespace of 1612 | * current. 1613 | * task_xid_nr_ns() : id seen from the ns specified; 1614 | * 1615 | * set_task_vxid() : assigns a virtual id to a task; 1616 | * 1617 | * see also pid_nr() etc in include/linux/pid.h 1618 | */ 1619 | pid_t __task_pid_nr_ns(struct task_struct *task, enum pid_type type, 1620 | struct pid_namespace *ns); 1621 | 1622 | static inline pid_t task_pid_nr(struct task_struct *tsk) 1623 | { 1624 | return tsk->pid; 1625 | } 1626 | 1627 | static inline pid_t task_pid_nr_ns(struct task_struct *tsk, 1628 | struct pid_namespace *ns) 1629 | { 1630 | return __task_pid_nr_ns(tsk, PIDTYPE_PID, ns); 1631 | } 1632 | 1633 | static inline pid_t task_pid_vnr(struct task_struct *tsk) 1634 | { 1635 | return __task_pid_nr_ns(tsk, PIDTYPE_PID, NULL); 1636 | } 1637 | 1638 | 1639 | static inline pid_t task_tgid_nr(struct task_struct *tsk) 1640 | { 1641 | return tsk->tgid; 1642 | } 1643 | 1644 | pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns); 1645 | 1646 | static inline pid_t task_tgid_vnr(struct task_struct *tsk) 1647 | { 1648 | return pid_vnr(task_tgid(tsk)); 1649 | } 1650 | 1651 | 1652 | static inline pid_t task_pgrp_nr_ns(struct task_struct *tsk, 1653 | struct pid_namespace *ns) 1654 | { 1655 | return __task_pid_nr_ns(tsk, PIDTYPE_PGID, ns); 1656 | } 1657 | 1658 | static inline pid_t task_pgrp_vnr(struct task_struct *tsk) 1659 | { 1660 | return __task_pid_nr_ns(tsk, PIDTYPE_PGID, NULL); 1661 | } 1662 | 1663 | 1664 | static inline pid_t task_session_nr_ns(struct task_struct *tsk, 1665 | struct pid_namespace *ns) 1666 | { 1667 | return __task_pid_nr_ns(tsk, PIDTYPE_SID, ns); 1668 | } 1669 | 1670 | static inline pid_t task_session_vnr(struct task_struct *tsk) 1671 | { 1672 | return __task_pid_nr_ns(tsk, PIDTYPE_SID, NULL); 1673 | } 1674 | 1675 | /* obsolete, do not use */ 1676 | static inline pid_t task_pgrp_nr(struct task_struct *tsk) 1677 | { 1678 | return task_pgrp_nr_ns(tsk, &init_pid_ns); 1679 | } 1680 | 1681 | /** 1682 | * pid_alive - check that a task structure is not stale 1683 | * @p: Task structure to be checked. 1684 | * 1685 | * Test if a process is not yet dead (at most zombie state) 1686 | * If pid_alive fails, then pointers within the task structure 1687 | * can be stale and must not be dereferenced. 1688 | */ 1689 | static inline int pid_alive(struct task_struct *p) 1690 | { 1691 | return p->pids[PIDTYPE_PID].pid != NULL; 1692 | } 1693 | 1694 | /** 1695 | * is_global_init - check if a task structure is init 1696 | * @tsk: Task structure to be checked. 1697 | * 1698 | * Check if a task structure is the first user space task the kernel created. 1699 | */ 1700 | static inline int is_global_init(struct task_struct *tsk) 1701 | { 1702 | return tsk->pid == 1; 1703 | } 1704 | 1705 | /* 1706 | * is_container_init: 1707 | * check whether in the task is init in its own pid namespace. 
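To make the _nr / _vnr naming scheme documented above concrete, here is a made-up helper contrasting the two forms using functions declared in this header (printk() needs <linux/kernel.h>):

```c
#include <linux/sched.h>
#include <linux/kernel.h>

static void example_report_pids(struct task_struct *tsk)
{
        pid_t global = task_pid_nr(tsk);   /* id as seen from init_pid_ns */
        pid_t local  = task_pid_vnr(tsk);  /* id inside current's pid namespace */

        printk(KERN_DEBUG "%s: global pid %d, virtual pid %d\n",
               tsk->comm, global, local);
}
```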
1708 | */ 1709 | extern int is_container_init(struct task_struct *tsk); 1710 | 1711 | extern struct pid *cad_pid; 1712 | 1713 | extern void free_task(struct task_struct *tsk); 1714 | #define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0) 1715 | 1716 | extern void __put_task_struct(struct task_struct *t); 1717 | 1718 | static inline void put_task_struct(struct task_struct *t) 1719 | { 1720 | if (atomic_dec_and_test(&t->usage)) 1721 | __put_task_struct(t); 1722 | } 1723 | 1724 | extern void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st); 1725 | extern void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st); 1726 | 1727 | /* 1728 | * Per process flags 1729 | */ 1730 | #define PF_STARTING 0x00000002 /* being created */ 1731 | #define PF_EXITING 0x00000004 /* getting shut down */ 1732 | #define PF_EXITPIDONE 0x00000008 /* pi exit done on shut down */ 1733 | #define PF_VCPU 0x00000010 /* I'm a virtual CPU */ 1734 | #define PF_WQ_WORKER 0x00000020 /* I'm a workqueue worker */ 1735 | #define PF_FORKNOEXEC 0x00000040 /* forked but didn't exec */ 1736 | #define PF_MCE_PROCESS 0x00000080 /* process policy on mce errors */ 1737 | #define PF_SUPERPRIV 0x00000100 /* used super-user privileges */ 1738 | #define PF_DUMPCORE 0x00000200 /* dumped core */ 1739 | #define PF_SIGNALED 0x00000400 /* killed by a signal */ 1740 | #define PF_MEMALLOC 0x00000800 /* Allocating memory */ 1741 | #define PF_USED_MATH 0x00002000 /* if unset the fpu must be initialized before use */ 1742 | #define PF_FREEZING 0x00004000 /* freeze in progress. do not account to load */ 1743 | #define PF_NOFREEZE 0x00008000 /* this thread should not be frozen */ 1744 | #define PF_FROZEN 0x00010000 /* frozen for system suspend */ 1745 | #define PF_FSTRANS 0x00020000 /* inside a filesystem transaction */ 1746 | #define PF_KSWAPD 0x00040000 /* I am kswapd */ 1747 | #define PF_OOM_ORIGIN 0x00080000 /* Allocating much memory to others */ 1748 | #define PF_LESS_THROTTLE 0x00100000 /* Throttle me less: I clean memory */ 1749 | #define PF_KTHREAD 0x00200000 /* I am a kernel thread */ 1750 | #define PF_RANDOMIZE 0x00400000 /* randomize virtual address space */ 1751 | #define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */ 1752 | #define PF_SPREAD_PAGE 0x01000000 /* Spread page cache over cpuset */ 1753 | #define PF_SPREAD_SLAB 0x02000000 /* Spread some slab caches over cpuset */ 1754 | #define PF_THREAD_BOUND 0x04000000 /* Thread bound to specific cpu */ 1755 | #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ 1756 | #define PF_MEMPOLICY 0x10000000 /* Non-default NUMA mempolicy */ 1757 | #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */ 1758 | #define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it as freezable */ 1759 | #define PF_FREEZER_NOSIG 0x80000000 /* Freezer won't send signals to it */ 1760 | 1761 | /* 1762 | * Only the _current_ task can read/write to tsk->flags, but other 1763 | * tasks can access tsk->flags in readonly mode for example 1764 | * with tsk_used_math (like during threaded core dumping). 1765 | * There is however an exception to this rule during ptrace 1766 | * or during fork: the ptracer task is allowed to write to the 1767 | * child->flags of its traced child (same goes for fork, the parent 1768 | * can write to the child->flags), because we're guaranteed the 1769 | * child is not running and in turn not changing child->flags 1770 | * at the same time the parent does it. 
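A sketch of the rule above as it shows up in practice: a task only toggles its own flags, here PF_MEMALLOC around a section that must not recurse into memory reclaim. The function is illustrative, not a kernel API:

```c
#include <linux/sched.h>

static void example_memalloc_section(void)
{
        unsigned int pflags = current->flags;

        current->flags |= PF_MEMALLOC;  /* only current writes current->flags */

        /* ... work that may allocate but must not re-enter reclaim ... */

        if (!(pflags & PF_MEMALLOC))
                current->flags &= ~PF_MEMALLOC; /* restore the previous state */
}
```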
1771 | */ 1772 | #define clear_stopped_child_used_math(child) do { (child)->flags &= ~PF_USED_MATH; } while (0) 1773 | #define set_stopped_child_used_math(child) do { (child)->flags |= PF_USED_MATH; } while (0) 1774 | #define clear_used_math() clear_stopped_child_used_math(current) 1775 | #define set_used_math() set_stopped_child_used_math(current) 1776 | #define conditional_stopped_child_used_math(condition, child) \ 1777 | do { (child)->flags &= ~PF_USED_MATH, (child)->flags |= (condition) ? PF_USED_MATH : 0; } while (0) 1778 | #define conditional_used_math(condition) \ 1779 | conditional_stopped_child_used_math(condition, current) 1780 | #define copy_to_stopped_child_used_math(child) \ 1781 | do { (child)->flags &= ~PF_USED_MATH, (child)->flags |= current->flags & PF_USED_MATH; } while (0) 1782 | /* NOTE: this will return 0 or PF_USED_MATH, it will never return 1 */ 1783 | #define tsk_used_math(p) ((p)->flags & PF_USED_MATH) 1784 | #define used_math() tsk_used_math(current) 1785 | 1786 | #ifdef CONFIG_PREEMPT_RCU 1787 | 1788 | #define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */ 1789 | #define RCU_READ_UNLOCK_BOOSTED (1 << 1) /* boosted while in RCU read-side. */ 1790 | #define RCU_READ_UNLOCK_NEED_QS (1 << 2) /* RCU core needs CPU response. */ 1791 | 1792 | static inline void rcu_copy_process(struct task_struct *p) 1793 | { 1794 | p->rcu_read_lock_nesting = 0; 1795 | p->rcu_read_unlock_special = 0; 1796 | #ifdef CONFIG_TREE_PREEMPT_RCU 1797 | p->rcu_blocked_node = NULL; 1798 | #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */ 1799 | #ifdef CONFIG_RCU_BOOST 1800 | p->rcu_boost_mutex = NULL; 1801 | #endif /* #ifdef CONFIG_RCU_BOOST */ 1802 | INIT_LIST_HEAD(&p->rcu_node_entry); 1803 | } 1804 | 1805 | #else 1806 | 1807 | static inline void rcu_copy_process(struct task_struct *p) 1808 | { 1809 | } 1810 | 1811 | #endif 1812 | 1813 | #ifdef CONFIG_SMP 1814 | extern int set_cpus_allowed_ptr(struct task_struct *p, 1815 | const struct cpumask *new_mask); 1816 | #else 1817 | static inline int set_cpus_allowed_ptr(struct task_struct *p, 1818 | const struct cpumask *new_mask) 1819 | { 1820 | if (!cpumask_test_cpu(0, new_mask)) 1821 | return -EINVAL; 1822 | return 0; 1823 | } 1824 | #endif 1825 | 1826 | #ifndef CONFIG_CPUMASK_OFFSTACK 1827 | static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask) 1828 | { 1829 | return set_cpus_allowed_ptr(p, &new_mask); 1830 | } 1831 | #endif 1832 | 1833 | /* 1834 | * Do not use outside of architecture code which knows its limitations. 1835 | * 1836 | * sched_clock() has no promise of monotonicity or bounded drift between 1837 | * CPUs, use (which you should not) requires disabling IRQs. 1838 | * 1839 | * Please use one of the three interfaces below. 
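A usage sketch for the clock interfaces declared just below; local_clock() returns nanoseconds that are only guaranteed to be consistent on the local CPU, which is enough for timing a short section. The helper name is made up; pr_debug() is from <linux/kernel.h>:

```c
#include <linux/sched.h>
#include <linux/kernel.h>

static void example_time_section(void)
{
        u64 start = local_clock();

        /* ... the code being timed ... */

        pr_debug("section took %llu ns\n",
                 (unsigned long long)(local_clock() - start));
}
```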
1840 | */ 1841 | extern unsigned long long notrace sched_clock(void); 1842 | /* 1843 | * See the comment in kernel/sched_clock.c 1844 | */ 1845 | extern u64 cpu_clock(int cpu); 1846 | extern u64 local_clock(void); 1847 | extern u64 sched_clock_cpu(int cpu); 1848 | 1849 | 1850 | extern void sched_clock_init(void); 1851 | 1852 | #ifndef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK 1853 | static inline void sched_clock_tick(void) 1854 | { 1855 | } 1856 | 1857 | static inline void sched_clock_idle_sleep_event(void) 1858 | { 1859 | } 1860 | 1861 | static inline void sched_clock_idle_wakeup_event(u64 delta_ns) 1862 | { 1863 | } 1864 | #else 1865 | /* 1866 | * Architectures can set this to 1 if they have specified 1867 | * CONFIG_HAVE_UNSTABLE_SCHED_CLOCK in their arch Kconfig, 1868 | * but then during bootup it turns out that sched_clock() 1869 | * is reliable after all: 1870 | */ 1871 | extern int sched_clock_stable; 1872 | 1873 | extern void sched_clock_tick(void); 1874 | extern void sched_clock_idle_sleep_event(void); 1875 | extern void sched_clock_idle_wakeup_event(u64 delta_ns); 1876 | #endif 1877 | 1878 | #ifdef CONFIG_IRQ_TIME_ACCOUNTING 1879 | /* 1880 | * An i/f to runtime opt-in for irq time accounting based off of sched_clock. 1881 | * The reason for this explicit opt-in is not to have perf penalty with 1882 | * slow sched_clocks. 1883 | */ 1884 | extern void enable_sched_clock_irqtime(void); 1885 | extern void disable_sched_clock_irqtime(void); 1886 | #else 1887 | static inline void enable_sched_clock_irqtime(void) {} 1888 | static inline void disable_sched_clock_irqtime(void) {} 1889 | #endif 1890 | 1891 | extern unsigned long long 1892 | task_sched_runtime(struct task_struct *task); 1893 | extern unsigned long long thread_group_sched_runtime(struct task_struct *task); 1894 | 1895 | /* sched_exec is called by processes performing an exec */ 1896 | #ifdef CONFIG_SMP 1897 | extern void sched_exec(void); 1898 | #else 1899 | #define sched_exec() {} 1900 | #endif 1901 | 1902 | extern void sched_clock_idle_sleep_event(void); 1903 | extern void sched_clock_idle_wakeup_event(u64 delta_ns); 1904 | 1905 | #ifdef CONFIG_HOTPLUG_CPU 1906 | extern void idle_task_exit(void); 1907 | #else 1908 | static inline void idle_task_exit(void) {} 1909 | #endif 1910 | 1911 | #if defined(CONFIG_NO_HZ) && defined(CONFIG_SMP) 1912 | extern void wake_up_idle_cpu(int cpu); 1913 | #else 1914 | static inline void wake_up_idle_cpu(int cpu) { } 1915 | #endif 1916 | 1917 | extern unsigned int sysctl_sched_latency; 1918 | extern unsigned int sysctl_sched_min_granularity; 1919 | extern unsigned int sysctl_sched_wakeup_granularity; 1920 | extern unsigned int sysctl_sched_child_runs_first; 1921 | 1922 | enum sched_tunable_scaling { 1923 | SCHED_TUNABLESCALING_NONE, 1924 | SCHED_TUNABLESCALING_LOG, 1925 | SCHED_TUNABLESCALING_LINEAR, 1926 | SCHED_TUNABLESCALING_END, 1927 | }; 1928 | extern enum sched_tunable_scaling sysctl_sched_tunable_scaling; 1929 | 1930 | #ifdef CONFIG_SCHED_DEBUG 1931 | extern unsigned int sysctl_sched_migration_cost; 1932 | extern unsigned int sysctl_sched_nr_migrate; 1933 | extern unsigned int sysctl_sched_time_avg; 1934 | extern unsigned int sysctl_timer_migration; 1935 | extern unsigned int sysctl_sched_shares_window; 1936 | 1937 | int sched_proc_update_handler(struct ctl_table *table, int write, 1938 | void __user *buffer, size_t *length, 1939 | loff_t *ppos); 1940 | #endif 1941 | #ifdef CONFIG_SCHED_DEBUG 1942 | static inline unsigned int get_sysctl_timer_migration(void) 1943 | { 1944 | return 
sysctl_timer_migration; 1945 | } 1946 | #else 1947 | static inline unsigned int get_sysctl_timer_migration(void) 1948 | { 1949 | return 1; 1950 | } 1951 | #endif 1952 | extern unsigned int sysctl_sched_rt_period; 1953 | extern int sysctl_sched_rt_runtime; 1954 | 1955 | int sched_rt_handler(struct ctl_table *table, int write, 1956 | void __user *buffer, size_t *lenp, 1957 | loff_t *ppos); 1958 | 1959 | #ifdef CONFIG_SCHED_AUTOGROUP 1960 | extern unsigned int sysctl_sched_autogroup_enabled; 1961 | 1962 | extern void sched_autogroup_create_attach(struct task_struct *p); 1963 | extern void sched_autogroup_detach(struct task_struct *p); 1964 | extern void sched_autogroup_fork(struct signal_struct *sig); 1965 | extern void sched_autogroup_exit(struct signal_struct *sig); 1966 | #ifdef CONFIG_PROC_FS 1967 | extern void proc_sched_autogroup_show_task(struct task_struct *p, struct seq_file *m); 1968 | extern int proc_sched_autogroup_set_nice(struct task_struct *p, int *nice); 1969 | #endif 1970 | #else 1971 | static inline void sched_autogroup_create_attach(struct task_struct *p) { } 1972 | static inline void sched_autogroup_detach(struct task_struct *p) { } 1973 | static inline void sched_autogroup_fork(struct signal_struct *sig) { } 1974 | static inline void sched_autogroup_exit(struct signal_struct *sig) { } 1975 | #endif 1976 | 1977 | #ifdef CONFIG_RT_MUTEXES 1978 | extern int rt_mutex_getprio(struct task_struct *p); 1979 | extern void rt_mutex_setprio(struct task_struct *p, int prio); 1980 | extern void rt_mutex_adjust_pi(struct task_struct *p); 1981 | #else 1982 | static inline int rt_mutex_getprio(struct task_struct *p) 1983 | { 1984 | return p->normal_prio; 1985 | } 1986 | # define rt_mutex_adjust_pi(p) do { } while (0) 1987 | #endif 1988 | 1989 | extern bool yield_to(struct task_struct *p, bool preempt); 1990 | extern void set_user_nice(struct task_struct *p, long nice); 1991 | extern int task_prio(const struct task_struct *p); 1992 | extern int task_nice(const struct task_struct *p); 1993 | extern int can_nice(const struct task_struct *p, const int nice); 1994 | extern int task_curr(const struct task_struct *p); 1995 | extern int idle_cpu(int cpu); 1996 | extern int sched_setscheduler(struct task_struct *, int, 1997 | const struct sched_param *); 1998 | extern int sched_setscheduler_nocheck(struct task_struct *, int, 1999 | const struct sched_param *); 2000 | extern struct task_struct *idle_task(int cpu); 2001 | extern struct task_struct *curr_task(int cpu); 2002 | extern void set_curr_task(int cpu, struct task_struct *p); 2003 | 2004 | void yield(void); 2005 | 2006 | /* 2007 | * The default (Linux) execution domain. 
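A usage sketch for the scheduler-policy API declared a little above; sched_setscheduler_nocheck() is the variant in-kernel callers use, since it skips the permission check. The helper name and the chosen priority are illustrative:

```c
#include <linux/sched.h>

static int example_make_fifo(struct task_struct *p)
{
        struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO / 2 };

        /* switch p to the FIFO real-time class; returns 0 on success */
        return sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
}
```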
2008 | */ 2009 | extern struct exec_domain default_exec_domain; 2010 | 2011 | union thread_union { 2012 | struct thread_info thread_info; 2013 | unsigned long stack[THREAD_SIZE/sizeof(long)]; 2014 | }; 2015 | 2016 | #ifndef __HAVE_ARCH_KSTACK_END 2017 | static inline int kstack_end(void *addr) 2018 | { 2019 | /* Reliable end of stack detection: 2020 | * Some APM bios versions misalign the stack 2021 | */ 2022 | return !(((unsigned long)addr+sizeof(void*)-1) & (THREAD_SIZE-sizeof(void*))); 2023 | } 2024 | #endif 2025 | 2026 | extern union thread_union init_thread_union; 2027 | extern struct task_struct init_task; 2028 | 2029 | extern struct mm_struct init_mm; 2030 | 2031 | extern struct pid_namespace init_pid_ns; 2032 | 2033 | /* 2034 | * find a task by one of its numerical ids 2035 | * 2036 | * find_task_by_pid_ns(): 2037 | * finds a task by its pid in the specified namespace 2038 | * find_task_by_vpid(): 2039 | * finds a task by its virtual pid 2040 | * 2041 | * see also find_vpid() etc in include/linux/pid.h 2042 | */ 2043 | 2044 | extern struct task_struct *find_task_by_vpid(pid_t nr); 2045 | extern struct task_struct *find_task_by_pid_ns(pid_t nr, 2046 | struct pid_namespace *ns); 2047 | 2048 | extern void __set_special_pids(struct pid *pid); 2049 | 2050 | /* per-UID process charging. */ 2051 | extern struct user_struct * alloc_uid(struct user_namespace *, uid_t); 2052 | static inline struct user_struct *get_uid(struct user_struct *u) 2053 | { 2054 | atomic_inc(&u->__count); 2055 | return u; 2056 | } 2057 | extern void free_uid(struct user_struct *); 2058 | extern void release_uids(struct user_namespace *ns); 2059 | 2060 | #include 2061 | 2062 | extern void xtime_update(unsigned long ticks); 2063 | 2064 | extern int wake_up_state(struct task_struct *tsk, unsigned int state); 2065 | extern int wake_up_process(struct task_struct *tsk); 2066 | extern void wake_up_new_task(struct task_struct *tsk, 2067 | unsigned long clone_flags); 2068 | #ifdef CONFIG_SMP 2069 | extern void kick_process(struct task_struct *tsk); 2070 | #else 2071 | static inline void kick_process(struct task_struct *tsk) { } 2072 | #endif 2073 | extern void sched_fork(struct task_struct *p, int clone_flags); 2074 | extern void sched_dead(struct task_struct *p); 2075 | 2076 | extern void proc_caches_init(void); 2077 | extern void flush_signals(struct task_struct *); 2078 | extern void __flush_signals(struct task_struct *); 2079 | extern void ignore_signals(struct task_struct *); 2080 | extern void flush_signal_handlers(struct task_struct *, int force_default); 2081 | extern int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info); 2082 | 2083 | static inline int dequeue_signal_lock(struct task_struct *tsk, sigset_t *mask, siginfo_t *info) 2084 | { 2085 | unsigned long flags; 2086 | int ret; 2087 | 2088 | spin_lock_irqsave(&tsk->sighand->siglock, flags); 2089 | ret = dequeue_signal(tsk, mask, info); 2090 | spin_unlock_irqrestore(&tsk->sighand->siglock, flags); 2091 | 2092 | return ret; 2093 | } 2094 | 2095 | extern void block_all_signals(int (*notifier)(void *priv), void *priv, 2096 | sigset_t *mask); 2097 | extern void unblock_all_signals(void); 2098 | extern void release_task(struct task_struct * p); 2099 | extern int send_sig_info(int, struct siginfo *, struct task_struct *); 2100 | extern int force_sigsegv(int, struct task_struct *); 2101 | extern int force_sig_info(int, struct siginfo *, struct task_struct *); 2102 | extern int __kill_pgrp_info(int sig, struct siginfo *info, struct pid *pgrp); 2103 
| extern int kill_pid_info(int sig, struct siginfo *info, struct pid *pid); 2104 | extern int kill_pid_info_as_uid(int, struct siginfo *, struct pid *, uid_t, uid_t, u32); 2105 | extern int kill_pgrp(struct pid *pid, int sig, int priv); 2106 | extern int kill_pid(struct pid *pid, int sig, int priv); 2107 | extern int kill_proc_info(int, struct siginfo *, pid_t); 2108 | extern int do_notify_parent(struct task_struct *, int); 2109 | extern void __wake_up_parent(struct task_struct *p, struct task_struct *parent); 2110 | extern void force_sig(int, struct task_struct *); 2111 | extern int send_sig(int, struct task_struct *, int); 2112 | extern int zap_other_threads(struct task_struct *p); 2113 | extern struct sigqueue *sigqueue_alloc(void); 2114 | extern void sigqueue_free(struct sigqueue *); 2115 | extern int send_sigqueue(struct sigqueue *, struct task_struct *, int group); 2116 | extern int do_sigaction(int, struct k_sigaction *, struct k_sigaction *); 2117 | extern int do_sigaltstack(const stack_t __user *, stack_t __user *, unsigned long); 2118 | 2119 | static inline int kill_cad_pid(int sig, int priv) 2120 | { 2121 | return kill_pid(cad_pid, sig, priv); 2122 | } 2123 | 2124 | /* These can be the second arg to send_sig_info/send_group_sig_info. */ 2125 | #define SEND_SIG_NOINFO ((struct siginfo *) 0) 2126 | #define SEND_SIG_PRIV ((struct siginfo *) 1) 2127 | #define SEND_SIG_FORCED ((struct siginfo *) 2) 2128 | 2129 | /* 2130 | * True if we are on the alternate signal stack. 2131 | */ 2132 | static inline int on_sig_stack(unsigned long sp) 2133 | { 2134 | #ifdef CONFIG_STACK_GROWSUP 2135 | return sp >= current->sas_ss_sp && 2136 | sp - current->sas_ss_sp < current->sas_ss_size; 2137 | #else 2138 | return sp > current->sas_ss_sp && 2139 | sp - current->sas_ss_sp <= current->sas_ss_size; 2140 | #endif 2141 | } 2142 | 2143 | static inline int sas_ss_flags(unsigned long sp) 2144 | { 2145 | return (current->sas_ss_size == 0 ? SS_DISABLE 2146 | : on_sig_stack(sp) ? 
SS_ONSTACK : 0); 2147 | } 2148 | 2149 | /* 2150 | * Routines for handling mm_structs 2151 | */ 2152 | extern struct mm_struct * mm_alloc(void); 2153 | 2154 | /* mmdrop drops the mm and the page tables */ 2155 | extern void __mmdrop(struct mm_struct *); 2156 | static inline void mmdrop(struct mm_struct * mm) 2157 | { 2158 | if (unlikely(atomic_dec_and_test(&mm->mm_count))) 2159 | __mmdrop(mm); 2160 | } 2161 | 2162 | /* mmput gets rid of the mappings and all user-space */ 2163 | extern void mmput(struct mm_struct *); 2164 | /* Grab a reference to a task's mm, if it is not already going away */ 2165 | extern struct mm_struct *get_task_mm(struct task_struct *task); 2166 | /* Remove the current tasks stale references to the old mm_struct */ 2167 | extern void mm_release(struct task_struct *, struct mm_struct *); 2168 | /* Allocate a new mm structure and copy contents from tsk->mm */ 2169 | extern struct mm_struct *dup_mm(struct task_struct *tsk); 2170 | 2171 | extern int copy_thread(unsigned long, unsigned long, unsigned long, 2172 | struct task_struct *, struct pt_regs *); 2173 | extern void flush_thread(void); 2174 | extern void exit_thread(void); 2175 | 2176 | extern void exit_files(struct task_struct *); 2177 | extern void __cleanup_sighand(struct sighand_struct *); 2178 | 2179 | extern void exit_itimers(struct signal_struct *); 2180 | extern void flush_itimer_signals(void); 2181 | 2182 | extern NORET_TYPE void do_group_exit(int); 2183 | 2184 | extern void daemonize(const char *, ...); 2185 | extern int allow_signal(int); 2186 | extern int disallow_signal(int); 2187 | 2188 | extern int do_execve(const char *, 2189 | const char __user * const __user *, 2190 | const char __user * const __user *, struct pt_regs *); 2191 | extern long do_fork(unsigned long, unsigned long, struct pt_regs *, unsigned long, int __user *, int __user *); 2192 | struct task_struct *fork_idle(int); 2193 | 2194 | extern void set_task_comm(struct task_struct *tsk, char *from); 2195 | extern char *get_task_comm(char *to, struct task_struct *tsk); 2196 | 2197 | #ifdef CONFIG_SMP 2198 | extern unsigned long wait_task_inactive(struct task_struct *, long match_state); 2199 | #else 2200 | static inline unsigned long wait_task_inactive(struct task_struct *p, 2201 | long match_state) 2202 | { 2203 | return 1; 2204 | } 2205 | #endif 2206 | 2207 | #define next_task(p) \ 2208 | list_entry_rcu((p)->tasks.next, struct task_struct, tasks) 2209 | 2210 | #define for_each_process(p) \ 2211 | for (p = &init_task ; (p = next_task(p)) != &init_task ; ) 2212 | 2213 | extern bool current_is_single_threaded(void); 2214 | 2215 | /* 2216 | * Careful: do_each_thread/while_each_thread is a double loop so 2217 | * 'break' will not work as expected - use goto instead. 2218 | */ 2219 | #define do_each_thread(g, t) \ 2220 | for (g = t = &init_task ; (g = t = next_task(g)) != &init_task ; ) do 2221 | 2222 | #define while_each_thread(g, t) \ 2223 | while ((t = next_thread(t)) != g) 2224 | 2225 | static inline int get_nr_threads(struct task_struct *tsk) 2226 | { 2227 | return tsk->signal->nr_threads; 2228 | } 2229 | 2230 | /* de_thread depends on thread_group_leader not being a pid based check */ 2231 | #define thread_group_leader(p) (p == p->group_leader) 2232 | 2233 | /* Do to the insanities of de_thread it is possible for a process 2234 | * to have the pid of the thread group leader without actually being 2235 | * the thread group leader. 
For iteration through the pids in proc 2236 | * all we care about is that we have a task with the appropriate 2237 | * pid, we don't actually care if we have the right task. 2238 | */ 2239 | static inline int has_group_leader_pid(struct task_struct *p) 2240 | { 2241 | return p->pid == p->tgid; 2242 | } 2243 | 2244 | static inline 2245 | int same_thread_group(struct task_struct *p1, struct task_struct *p2) 2246 | { 2247 | return p1->tgid == p2->tgid; 2248 | } 2249 | 2250 | static inline struct task_struct *next_thread(const struct task_struct *p) 2251 | { 2252 | return list_entry_rcu(p->thread_group.next, 2253 | struct task_struct, thread_group); 2254 | } 2255 | 2256 | static inline int thread_group_empty(struct task_struct *p) 2257 | { 2258 | return list_empty(&p->thread_group); 2259 | } 2260 | 2261 | #define delay_group_leader(p) \ 2262 | (thread_group_leader(p) && !thread_group_empty(p)) 2263 | 2264 | static inline int task_detached(struct task_struct *p) 2265 | { 2266 | return p->exit_signal == -1; 2267 | } 2268 | 2269 | /* 2270 | * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring 2271 | * subscriptions and synchronises with wait4(). Also used in procfs. Also 2272 | * pins the final release of task.io_context. Also protects ->cpuset and 2273 | * ->cgroup.subsys[]. 2274 | * 2275 | * Nests both inside and outside of read_lock(&tasklist_lock). 2276 | * It must not be nested with write_lock_irq(&tasklist_lock), 2277 | * neither inside nor outside. 2278 | */ 2279 | static inline void task_lock(struct task_struct *p) 2280 | { 2281 | spin_lock(&p->alloc_lock); 2282 | } 2283 | 2284 | static inline void task_unlock(struct task_struct *p) 2285 | { 2286 | spin_unlock(&p->alloc_lock); 2287 | } 2288 | 2289 | extern struct sighand_struct *__lock_task_sighand(struct task_struct *tsk, 2290 | unsigned long *flags); 2291 | 2292 | #define lock_task_sighand(tsk, flags) \ 2293 | ({ struct sighand_struct *__ss; \ 2294 | __cond_lock(&(tsk)->sighand->siglock, \ 2295 | (__ss = __lock_task_sighand(tsk, flags))); \ 2296 | __ss; \ 2297 | }) \ 2298 | 2299 | static inline void unlock_task_sighand(struct task_struct *tsk, 2300 | unsigned long *flags) 2301 | { 2302 | spin_unlock_irqrestore(&tsk->sighand->siglock, *flags); 2303 | } 2304 | 2305 | #ifndef __HAVE_THREAD_FUNCTIONS 2306 | 2307 | #define task_thread_info(task) ((struct thread_info *)(task)->stack) 2308 | #define task_stack_page(task) ((task)->stack) 2309 | 2310 | static inline void setup_thread_stack(struct task_struct *p, struct task_struct *org) 2311 | { 2312 | *task_thread_info(p) = *task_thread_info(org); 2313 | task_thread_info(p)->task = p; 2314 | } 2315 | 2316 | static inline unsigned long *end_of_stack(struct task_struct *p) 2317 | { 2318 | return (unsigned long *)(task_thread_info(p) + 1); 2319 | } 2320 | 2321 | #endif 2322 | 2323 | static inline int object_is_on_stack(void *obj) 2324 | { 2325 | void *stack = task_stack_page(current); 2326 | 2327 | return (obj >= stack) && (obj < (stack + THREAD_SIZE)); 2328 | } 2329 | 2330 | extern void thread_info_cache_init(void); 2331 | 2332 | #ifdef CONFIG_DEBUG_STACK_USAGE 2333 | static inline unsigned long stack_not_used(struct task_struct *p) 2334 | { 2335 | unsigned long *n = end_of_stack(p); 2336 | 2337 | do { /* Skip over canary */ 2338 | n++; 2339 | } while (!*n); 2340 | 2341 | return (unsigned long)n - (unsigned long)end_of_stack(p); 2342 | } 2343 | #endif 2344 | 2345 | /* set thread flags in other task's structures 2346 | * - see asm/thread_info.h for TIF_xxxx flags available 2347 | */ 2348 
| static inline void set_tsk_thread_flag(struct task_struct *tsk, int flag) 2349 | { 2350 | set_ti_thread_flag(task_thread_info(tsk), flag); 2351 | } 2352 | 2353 | static inline void clear_tsk_thread_flag(struct task_struct *tsk, int flag) 2354 | { 2355 | clear_ti_thread_flag(task_thread_info(tsk), flag); 2356 | } 2357 | 2358 | static inline int test_and_set_tsk_thread_flag(struct task_struct *tsk, int flag) 2359 | { 2360 | return test_and_set_ti_thread_flag(task_thread_info(tsk), flag); 2361 | } 2362 | 2363 | static inline int test_and_clear_tsk_thread_flag(struct task_struct *tsk, int flag) 2364 | { 2365 | return test_and_clear_ti_thread_flag(task_thread_info(tsk), flag); 2366 | } 2367 | 2368 | static inline int test_tsk_thread_flag(struct task_struct *tsk, int flag) 2369 | { 2370 | return test_ti_thread_flag(task_thread_info(tsk), flag); 2371 | } 2372 | 2373 | static inline void set_tsk_need_resched(struct task_struct *tsk) 2374 | { 2375 | set_tsk_thread_flag(tsk,TIF_NEED_RESCHED); 2376 | } 2377 | 2378 | static inline void clear_tsk_need_resched(struct task_struct *tsk) 2379 | { 2380 | clear_tsk_thread_flag(tsk,TIF_NEED_RESCHED); 2381 | } 2382 | 2383 | static inline int test_tsk_need_resched(struct task_struct *tsk) 2384 | { 2385 | return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED)); 2386 | } 2387 | 2388 | static inline int restart_syscall(void) 2389 | { 2390 | set_tsk_thread_flag(current, TIF_SIGPENDING); 2391 | return -ERESTARTNOINTR; 2392 | } 2393 | 2394 | static inline int signal_pending(struct task_struct *p) 2395 | { 2396 | return unlikely(test_tsk_thread_flag(p,TIF_SIGPENDING)); 2397 | } 2398 | 2399 | static inline int __fatal_signal_pending(struct task_struct *p) 2400 | { 2401 | return unlikely(sigismember(&p->pending.signal, SIGKILL)); 2402 | } 2403 | 2404 | static inline int fatal_signal_pending(struct task_struct *p) 2405 | { 2406 | return signal_pending(p) && __fatal_signal_pending(p); 2407 | } 2408 | 2409 | static inline int signal_pending_state(long state, struct task_struct *p) 2410 | { 2411 | if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL))) 2412 | return 0; 2413 | if (!signal_pending(p)) 2414 | return 0; 2415 | 2416 | return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p); 2417 | } 2418 | 2419 | static inline int need_resched(void) 2420 | { 2421 | return unlikely(test_thread_flag(TIF_NEED_RESCHED)); 2422 | } 2423 | 2424 | /* 2425 | * cond_resched() and cond_resched_lock(): latency reduction via 2426 | * explicit rescheduling in places that are safe. The return 2427 | * value indicates whether a reschedule was done in fact. 2428 | * cond_resched_lock() will drop the spinlock before scheduling, 2429 | * cond_resched_softirq() will enable bhs before scheduling. 
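A minimal sketch of the voluntary-preemption helper described above: a long, non-sleeping kernel loop offers to reschedule at a safe point on every iteration. The function is illustrative:

```c
#include <linux/sched.h>

static void example_long_loop(void)
{
        int i;

        for (i = 0; i < 1000000; i++) {
                /* ... one bounded, non-sleeping unit of work ... */
                cond_resched(); /* reschedule here if a reschedule is pending */
        }
}
```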
2430 | */ 2431 | extern int _cond_resched(void); 2432 | 2433 | #define cond_resched() ({ \ 2434 | __might_sleep(__FILE__, __LINE__, 0); \ 2435 | _cond_resched(); \ 2436 | }) 2437 | 2438 | extern int __cond_resched_lock(spinlock_t *lock); 2439 | 2440 | #ifdef CONFIG_PREEMPT 2441 | #define PREEMPT_LOCK_OFFSET PREEMPT_OFFSET 2442 | #else 2443 | #define PREEMPT_LOCK_OFFSET 0 2444 | #endif 2445 | 2446 | #define cond_resched_lock(lock) ({ \ 2447 | __might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET); \ 2448 | __cond_resched_lock(lock); \ 2449 | }) 2450 | 2451 | extern int __cond_resched_softirq(void); 2452 | 2453 | #define cond_resched_softirq() ({ \ 2454 | __might_sleep(__FILE__, __LINE__, SOFTIRQ_DISABLE_OFFSET); \ 2455 | __cond_resched_softirq(); \ 2456 | }) 2457 | 2458 | /* 2459 | * Does a critical section need to be broken due to another 2460 | * task waiting?: (technically does not depend on CONFIG_PREEMPT, 2461 | * but a general need for low latency) 2462 | */ 2463 | static inline int spin_needbreak(spinlock_t *lock) 2464 | { 2465 | #ifdef CONFIG_PREEMPT 2466 | return spin_is_contended(lock); 2467 | #else 2468 | return 0; 2469 | #endif 2470 | } 2471 | 2472 | /* 2473 | * Thread group CPU time accounting. 2474 | */ 2475 | void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times); 2476 | void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times); 2477 | 2478 | static inline void thread_group_cputime_init(struct signal_struct *sig) 2479 | { 2480 | spin_lock_init(&sig->cputimer.lock); 2481 | } 2482 | 2483 | /* 2484 | * Reevaluate whether the task has signals pending delivery. 2485 | * Wake the task if so. 2486 | * This is required every time the blocked sigset_t changes. 2487 | * callers must hold sighand->siglock. 2488 | */ 2489 | extern void recalc_sigpending_and_wake(struct task_struct *t); 2490 | extern void recalc_sigpending(void); 2491 | 2492 | extern void signal_wake_up(struct task_struct *t, int resume_stopped); 2493 | 2494 | /* 2495 | * Wrappers for p->thread_info->cpu access. No-op on UP. 
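A small sketch using the task_cpu() wrapper defined below to report the CPU a task is running on (or last ran on); on UP kernels it is always 0. The helper name is made up; pr_debug() is from <linux/kernel.h>:

```c
#include <linux/sched.h>
#include <linux/kernel.h>

static void example_report_cpu(struct task_struct *p)
{
        pr_debug("%s is on CPU %u\n", p->comm, task_cpu(p));
}
```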
2496 | */ 2497 | #ifdef CONFIG_SMP 2498 | 2499 | static inline unsigned int task_cpu(const struct task_struct *p) 2500 | { 2501 | return task_thread_info(p)->cpu; 2502 | } 2503 | 2504 | extern void set_task_cpu(struct task_struct *p, unsigned int cpu); 2505 | 2506 | #else 2507 | 2508 | static inline unsigned int task_cpu(const struct task_struct *p) 2509 | { 2510 | return 0; 2511 | } 2512 | 2513 | static inline void set_task_cpu(struct task_struct *p, unsigned int cpu) 2514 | { 2515 | } 2516 | 2517 | #endif /* CONFIG_SMP */ 2518 | 2519 | extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask); 2520 | extern long sched_getaffinity(pid_t pid, struct cpumask *mask); 2521 | 2522 | extern void normalize_rt_tasks(void); 2523 | 2524 | #ifdef CONFIG_CGROUP_SCHED 2525 | 2526 | extern struct task_group root_task_group; 2527 | 2528 | extern struct task_group *sched_create_group(struct task_group *parent); 2529 | extern void sched_destroy_group(struct task_group *tg); 2530 | extern void sched_move_task(struct task_struct *tsk); 2531 | #ifdef CONFIG_FAIR_GROUP_SCHED 2532 | extern int sched_group_set_shares(struct task_group *tg, unsigned long shares); 2533 | extern unsigned long sched_group_shares(struct task_group *tg); 2534 | #endif 2535 | #ifdef CONFIG_RT_GROUP_SCHED 2536 | extern int sched_group_set_rt_runtime(struct task_group *tg, 2537 | long rt_runtime_us); 2538 | extern long sched_group_rt_runtime(struct task_group *tg); 2539 | extern int sched_group_set_rt_period(struct task_group *tg, 2540 | long rt_period_us); 2541 | extern long sched_group_rt_period(struct task_group *tg); 2542 | extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk); 2543 | #endif 2544 | #endif 2545 | 2546 | extern int task_can_switch_user(struct user_struct *up, 2547 | struct task_struct *tsk); 2548 | 2549 | #ifdef CONFIG_TASK_XACCT 2550 | static inline void add_rchar(struct task_struct *tsk, ssize_t amt) 2551 | { 2552 | tsk->ioac.rchar += amt; 2553 | } 2554 | 2555 | static inline void add_wchar(struct task_struct *tsk, ssize_t amt) 2556 | { 2557 | tsk->ioac.wchar += amt; 2558 | } 2559 | 2560 | static inline void inc_syscr(struct task_struct *tsk) 2561 | { 2562 | tsk->ioac.syscr++; 2563 | } 2564 | 2565 | static inline void inc_syscw(struct task_struct *tsk) 2566 | { 2567 | tsk->ioac.syscw++; 2568 | } 2569 | #else 2570 | static inline void add_rchar(struct task_struct *tsk, ssize_t amt) 2571 | { 2572 | } 2573 | 2574 | static inline void add_wchar(struct task_struct *tsk, ssize_t amt) 2575 | { 2576 | } 2577 | 2578 | static inline void inc_syscr(struct task_struct *tsk) 2579 | { 2580 | } 2581 | 2582 | static inline void inc_syscw(struct task_struct *tsk) 2583 | { 2584 | } 2585 | #endif 2586 | 2587 | #ifndef TASK_SIZE_OF 2588 | #define TASK_SIZE_OF(tsk) TASK_SIZE 2589 | #endif 2590 | 2591 | #ifdef CONFIG_MM_OWNER 2592 | extern void mm_update_next_owner(struct mm_struct *mm); 2593 | extern void mm_init_owner(struct mm_struct *mm, struct task_struct *p); 2594 | #else 2595 | static inline void mm_update_next_owner(struct mm_struct *mm) 2596 | { 2597 | } 2598 | 2599 | static inline void mm_init_owner(struct mm_struct *mm, struct task_struct *p) 2600 | { 2601 | } 2602 | #endif /* CONFIG_MM_OWNER */ 2603 | 2604 | static inline unsigned long task_rlimit(const struct task_struct *tsk, 2605 | unsigned int limit) 2606 | { 2607 | return ACCESS_ONCE(tsk->signal->rlim[limit].rlim_cur); 2608 | } 2609 | 2610 | static inline unsigned long task_rlimit_max(const struct task_struct *tsk, 2611 | unsigned 
int limit) 2612 | { 2613 | return ACCESS_ONCE(tsk->signal->rlim[limit].rlim_max); 2614 | } 2615 | 2616 | static inline unsigned long rlimit(unsigned int limit) 2617 | { 2618 | return task_rlimit(current, limit); 2619 | } 2620 | 2621 | static inline unsigned long rlimit_max(unsigned int limit) 2622 | { 2623 | return task_rlimit_max(current, limit); 2624 | } 2625 | 2626 | #endif /* __KERNEL__ */ 2627 | 2628 | #endif 2629 | -------------------------------------------------------------------------------- /linux/kernel/fork.c: -------------------------------------------------------------------------------- 1 | /* 2 | * linux/kernel/fork.c 3 | * 4 | * Copyright (C) 1991, 1992 Linus Torvalds 5 | */ 6 | 7 | /* 8 | * 'fork.c' contains the help-routines for the 'fork' system call 9 | * (see also entry.S and others). 10 | * Fork is rather simple, once you get the hang of it, but the memory 11 | * management can be a bitch. See 'mm/memory.c': 'copy_page_range()' 12 | */ 13 | 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | #include 20 | #include 21 | #include 22 | #include 23 | #include 24 | #include 25 | #include 26 | #include 27 | #include 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include 33 | #include 34 | #include 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | #include 41 | #include 42 | #include 43 | #include 44 | #include 45 | #include 46 | #include 47 | #include 48 | #include 49 | #include 50 | #include 51 | #include 52 | #include 53 | #include 54 | #include 55 | #include 56 | #include 57 | #include 58 | #include 59 | #include 60 | #include 61 | #include 62 | #include 63 | #include 64 | #include 65 | #include 66 | #include 67 | #include 68 | #include 69 | #include 70 | #include 71 | 72 | #include 73 | #include 74 | #include 75 | #include 76 | #include 77 | #include 78 | 79 | #include 80 | 81 | /* 82 | * Protected counters by write_lock_irq(&tasklist_lock) 83 | */ 84 | unsigned long total_forks; /* Handle normal Linux uptimes. */ 85 | int nr_threads; /* The idle threads do not count.. */ 86 | 87 | int max_threads; /* tunable limit on nr_threads */ 88 | 89 | DEFINE_PER_CPU(unsigned long, process_counts) = 0; 90 | 91 | __cacheline_aligned DEFINE_RWLOCK(tasklist_lock); /* outer */ 92 | 93 | #ifdef CONFIG_PROVE_RCU 94 | int lockdep_tasklist_lock_is_held(void) 95 | { 96 | return lockdep_is_held(&tasklist_lock); 97 | } 98 | EXPORT_SYMBOL_GPL(lockdep_tasklist_lock_is_held); 99 | #endif /* #ifdef CONFIG_PROVE_RCU */ 100 | 101 | int nr_processes(void) 102 | { 103 | int cpu; 104 | int total = 0; 105 | 106 | for_each_possible_cpu(cpu) 107 | total += per_cpu(process_counts, cpu); 108 | 109 | return total; 110 | } 111 | 112 | #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR 113 | # define alloc_task_struct_node(node) \ 114 | kmem_cache_alloc_node(task_struct_cachep, GFP_KERNEL, node) 115 | # define free_task_struct(tsk) \ 116 | kmem_cache_free(task_struct_cachep, (tsk)) 117 | static struct kmem_cache *task_struct_cachep; 118 | #endif 119 | 120 | #ifndef __HAVE_ARCH_THREAD_INFO_ALLOCATOR 121 | static struct thread_info *alloc_thread_info_node(struct task_struct *tsk, 122 | int node) 123 | { 124 | #ifdef CONFIG_DEBUG_STACK_USAGE 125 | gfp_t mask = GFP_KERNEL | __GFP_ZERO; 126 | #else 127 | gfp_t mask = GFP_KERNEL; 128 | #endif 129 | struct page *page = alloc_pages_node(node, mask, THREAD_SIZE_ORDER); 130 | 131 | return page ? 
page_address(page) : NULL; 132 | } 133 | 134 | static inline void free_thread_info(struct thread_info *ti) 135 | { 136 | free_pages((unsigned long)ti, THREAD_SIZE_ORDER); 137 | } 138 | #endif 139 | 140 | /* SLAB cache for signal_struct structures (tsk->signal) */ 141 | static struct kmem_cache *signal_cachep; 142 | 143 | /* SLAB cache for sighand_struct structures (tsk->sighand) */ 144 | struct kmem_cache *sighand_cachep; 145 | 146 | /* SLAB cache for files_struct structures (tsk->files) */ 147 | struct kmem_cache *files_cachep; 148 | 149 | /* SLAB cache for fs_struct structures (tsk->fs) */ 150 | struct kmem_cache *fs_cachep; 151 | 152 | /* SLAB cache for vm_area_struct structures */ 153 | struct kmem_cache *vm_area_cachep; 154 | 155 | /* SLAB cache for mm_struct structures (tsk->mm) */ 156 | static struct kmem_cache *mm_cachep; 157 | 158 | static void account_kernel_stack(struct thread_info *ti, int account) 159 | { 160 | struct zone *zone = page_zone(virt_to_page(ti)); 161 | 162 | mod_zone_page_state(zone, NR_KERNEL_STACK, account); 163 | } 164 | 165 | void free_task(struct task_struct *tsk) 166 | { 167 | prop_local_destroy_single(&tsk->dirties); 168 | account_kernel_stack(tsk->stack, -1); 169 | free_thread_info(tsk->stack); 170 | rt_mutex_debug_task_free(tsk); 171 | ftrace_graph_exit_task(tsk); 172 | free_task_struct(tsk); 173 | } 174 | EXPORT_SYMBOL(free_task); 175 | 176 | static inline void free_signal_struct(struct signal_struct *sig) 177 | { 178 | taskstats_tgid_free(sig); 179 | sched_autogroup_exit(sig); 180 | kmem_cache_free(signal_cachep, sig); 181 | } 182 | 183 | static inline void put_signal_struct(struct signal_struct *sig) 184 | { 185 | if (atomic_dec_and_test(&sig->sigcnt)) 186 | free_signal_struct(sig); 187 | } 188 | 189 | void __put_task_struct(struct task_struct *tsk) 190 | { 191 | WARN_ON(!tsk->exit_state); 192 | WARN_ON(atomic_read(&tsk->usage)); 193 | WARN_ON(tsk == current); 194 | 195 | exit_creds(tsk); 196 | delayacct_tsk_free(tsk); 197 | put_signal_struct(tsk->signal); 198 | 199 | if (!profile_handoff_task(tsk)) 200 | free_task(tsk); 201 | } 202 | EXPORT_SYMBOL_GPL(__put_task_struct); 203 | 204 | /* 205 | * macro override instead of weak attribute alias, to workaround 206 | * gcc 4.1.0 and 4.1.1 bugs with weak attribute and empty functions. 207 | */ 208 | #ifndef arch_task_cache_init 209 | #define arch_task_cache_init() 210 | #endif 211 | 212 | void __init fork_init(unsigned long mempages) 213 | { 214 | #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR 215 | #ifndef ARCH_MIN_TASKALIGN 216 | #define ARCH_MIN_TASKALIGN L1_CACHE_BYTES 217 | #endif 218 | /* create a slab on which task_structs can be allocated */ 219 | task_struct_cachep = 220 | kmem_cache_create("task_struct", sizeof(struct task_struct), 221 | ARCH_MIN_TASKALIGN, SLAB_PANIC | SLAB_NOTRACK, NULL); 222 | #endif 223 | 224 | /* do the arch specific task caches init */ 225 | arch_task_cache_init(); 226 | 227 | /* 228 | * The default maximum number of threads is set to a safe 229 | * value: the thread structures can take up at most half 230 | * of memory. 
231 | */ 232 | max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE); 233 | 234 | /* 235 | * we need to allow at least 20 threads to boot a system 236 | */ 237 | if(max_threads < 20) 238 | max_threads = 20; 239 | 240 | init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2; 241 | init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2; 242 | init_task.signal->rlim[RLIMIT_SIGPENDING] = 243 | init_task.signal->rlim[RLIMIT_NPROC]; 244 | } 245 | 246 | int __attribute__((weak)) arch_dup_task_struct(struct task_struct *dst, 247 | struct task_struct *src) 248 | { 249 | *dst = *src; 250 | return 0; 251 | } 252 | 253 | static struct task_struct *dup_task_struct(struct task_struct *orig) 254 | { 255 | struct task_struct *tsk; 256 | struct thread_info *ti; 257 | unsigned long *stackend; 258 | int node = tsk_fork_get_node(orig); 259 | int err; 260 | 261 | prepare_to_copy(orig); 262 | 263 | tsk = alloc_task_struct_node(node); 264 | if (!tsk) 265 | return NULL; 266 | 267 | ti = alloc_thread_info_node(tsk, node); 268 | if (!ti) { 269 | free_task_struct(tsk); 270 | return NULL; 271 | } 272 | 273 | err = arch_dup_task_struct(tsk, orig); 274 | if (err) 275 | goto out; 276 | 277 | tsk->stack = ti; 278 | 279 | err = prop_local_init_single(&tsk->dirties); 280 | if (err) 281 | goto out; 282 | 283 | setup_thread_stack(tsk, orig); 284 | clear_user_return_notifier(tsk); 285 | clear_tsk_need_resched(tsk); 286 | stackend = end_of_stack(tsk); 287 | *stackend = STACK_END_MAGIC; /* for overflow detection */ 288 | 289 | #ifdef CONFIG_CC_STACKPROTECTOR 290 | tsk->stack_canary = get_random_int(); 291 | #endif 292 | 293 | /* One for us, one for whoever does the "release_task()" (usually parent) */ 294 | atomic_set(&tsk->usage,2); 295 | atomic_set(&tsk->fs_excl, 0); 296 | #ifdef CONFIG_BLK_DEV_IO_TRACE 297 | tsk->btrace_seq = 0; 298 | #endif 299 | tsk->splice_pipe = NULL; 300 | 301 | account_kernel_stack(ti, 1); 302 | 303 | return tsk; 304 | 305 | out: 306 | free_thread_info(ti); 307 | free_task_struct(tsk); 308 | return NULL; 309 | } 310 | 311 | #ifdef CONFIG_MMU 312 | static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm) 313 | { 314 | struct vm_area_struct *mpnt, *tmp, *prev, **pprev; 315 | struct rb_node **rb_link, *rb_parent; 316 | int retval; 317 | unsigned long charge; 318 | struct mempolicy *pol; 319 | 320 | down_write(&oldmm->mmap_sem); 321 | flush_cache_dup_mm(oldmm); 322 | /* 323 | * Not linked in yet - no deadlock potential: 324 | */ 325 | down_write_nested(&mm->mmap_sem, SINGLE_DEPTH_NESTING); 326 | 327 | mm->locked_vm = 0; 328 | mm->mmap = NULL; 329 | mm->mmap_cache = NULL; 330 | mm->free_area_cache = oldmm->mmap_base; 331 | mm->cached_hole_size = ~0UL; 332 | mm->map_count = 0; 333 | cpumask_clear(mm_cpumask(mm)); 334 | mm->mm_rb = RB_ROOT; 335 | rb_link = &mm->mm_rb.rb_node; 336 | rb_parent = NULL; 337 | pprev = &mm->mmap; 338 | retval = ksm_fork(mm, oldmm); 339 | if (retval) 340 | goto out; 341 | retval = khugepaged_fork(mm, oldmm); 342 | if (retval) 343 | goto out; 344 | 345 | prev = NULL; 346 | for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) { 347 | struct file *file; 348 | 349 | if (mpnt->vm_flags & VM_DONTCOPY) { 350 | long pages = vma_pages(mpnt); 351 | mm->total_vm -= pages; 352 | vm_stat_account(mm, mpnt->vm_flags, mpnt->vm_file, 353 | -pages); 354 | continue; 355 | } 356 | charge = 0; 357 | if (mpnt->vm_flags & VM_ACCOUNT) { 358 | unsigned int len = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT; 359 | if (security_vm_enough_memory(len)) 360 | goto fail_nomem; 361 | 
charge = len; 362 | } 363 | tmp = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL); 364 | if (!tmp) 365 | goto fail_nomem; 366 | *tmp = *mpnt; 367 | INIT_LIST_HEAD(&tmp->anon_vma_chain); 368 | pol = mpol_dup(vma_policy(mpnt)); 369 | retval = PTR_ERR(pol); 370 | if (IS_ERR(pol)) 371 | goto fail_nomem_policy; 372 | vma_set_policy(tmp, pol); 373 | tmp->vm_mm = mm; 374 | if (anon_vma_fork(tmp, mpnt)) 375 | goto fail_nomem_anon_vma_fork; 376 | tmp->vm_flags &= ~VM_LOCKED; 377 | tmp->vm_next = tmp->vm_prev = NULL; 378 | file = tmp->vm_file; 379 | if (file) { 380 | struct inode *inode = file->f_path.dentry->d_inode; 381 | struct address_space *mapping = file->f_mapping; 382 | 383 | get_file(file); 384 | if (tmp->vm_flags & VM_DENYWRITE) 385 | atomic_dec(&inode->i_writecount); 386 | spin_lock(&mapping->i_mmap_lock); 387 | if (tmp->vm_flags & VM_SHARED) 388 | mapping->i_mmap_writable++; 389 | tmp->vm_truncate_count = mpnt->vm_truncate_count; 390 | flush_dcache_mmap_lock(mapping); 391 | /* insert tmp into the share list, just after mpnt */ 392 | vma_prio_tree_add(tmp, mpnt); 393 | flush_dcache_mmap_unlock(mapping); 394 | spin_unlock(&mapping->i_mmap_lock); 395 | } 396 | 397 | /* 398 | * Clear hugetlb-related page reserves for children. This only 399 | * affects MAP_PRIVATE mappings. Faults generated by the child 400 | * are not guaranteed to succeed, even if read-only 401 | */ 402 | if (is_vm_hugetlb_page(tmp)) 403 | reset_vma_resv_huge_pages(tmp); 404 | 405 | /* 406 | * Link in the new vma and copy the page table entries. 407 | */ 408 | *pprev = tmp; 409 | pprev = &tmp->vm_next; 410 | tmp->vm_prev = prev; 411 | prev = tmp; 412 | 413 | __vma_link_rb(mm, tmp, rb_link, rb_parent); 414 | rb_link = &tmp->vm_rb.rb_right; 415 | rb_parent = &tmp->vm_rb; 416 | 417 | mm->map_count++; 418 | retval = copy_page_range(mm, oldmm, mpnt); 419 | 420 | if (tmp->vm_ops && tmp->vm_ops->open) 421 | tmp->vm_ops->open(tmp); 422 | 423 | if (retval) 424 | goto out; 425 | } 426 | /* a new mm has just been created */ 427 | arch_dup_mmap(oldmm, mm); 428 | retval = 0; 429 | out: 430 | up_write(&mm->mmap_sem); 431 | flush_tlb_mm(oldmm); 432 | up_write(&oldmm->mmap_sem); 433 | return retval; 434 | fail_nomem_anon_vma_fork: 435 | mpol_put(pol); 436 | fail_nomem_policy: 437 | kmem_cache_free(vm_area_cachep, tmp); 438 | fail_nomem: 439 | retval = -ENOMEM; 440 | vm_unacct_memory(charge); 441 | goto out; 442 | } 443 | 444 | static inline int mm_alloc_pgd(struct mm_struct * mm) 445 | { 446 | mm->pgd = pgd_alloc(mm); 447 | if (unlikely(!mm->pgd)) 448 | return -ENOMEM; 449 | return 0; 450 | } 451 | 452 | static inline void mm_free_pgd(struct mm_struct * mm) 453 | { 454 | pgd_free(mm, mm->pgd); 455 | } 456 | #else 457 | #define dup_mmap(mm, oldmm) (0) 458 | #define mm_alloc_pgd(mm) (0) 459 | #define mm_free_pgd(mm) 460 | #endif /* CONFIG_MMU */ 461 | 462 | __cacheline_aligned_in_smp DEFINE_SPINLOCK(mmlist_lock); 463 | 464 | #define allocate_mm() (kmem_cache_alloc(mm_cachep, GFP_KERNEL)) 465 | #define free_mm(mm) (kmem_cache_free(mm_cachep, (mm))) 466 | 467 | static unsigned long default_dump_filter = MMF_DUMP_FILTER_DEFAULT; 468 | 469 | static int __init coredump_filter_setup(char *s) 470 | { 471 | default_dump_filter = 472 | (simple_strtoul(s, NULL, 0) << MMF_DUMP_FILTER_SHIFT) & 473 | MMF_DUMP_FILTER_MASK; 474 | return 1; 475 | } 476 | 477 | __setup("coredump_filter=", coredump_filter_setup); 478 | 479 | #include 480 | 481 | static void mm_init_aio(struct mm_struct *mm) 482 | { 483 | #ifdef CONFIG_AIO 484 | spin_lock_init(&mm->ioctx_lock); 
485 | INIT_HLIST_HEAD(&mm->ioctx_list); 486 | #endif 487 | } 488 | 489 | static struct mm_struct * mm_init(struct mm_struct * mm, struct task_struct *p) 490 | { 491 | atomic_set(&mm->mm_users, 1); 492 | atomic_set(&mm->mm_count, 1); 493 | init_rwsem(&mm->mmap_sem); 494 | INIT_LIST_HEAD(&mm->mmlist); 495 | mm->flags = (current->mm) ? 496 | (current->mm->flags & MMF_INIT_MASK) : default_dump_filter; 497 | mm->core_state = NULL; 498 | mm->nr_ptes = 0; 499 | memset(&mm->rss_stat, 0, sizeof(mm->rss_stat)); 500 | spin_lock_init(&mm->page_table_lock); 501 | mm->free_area_cache = TASK_UNMAPPED_BASE; 502 | mm->cached_hole_size = ~0UL; 503 | mm_init_aio(mm); 504 | mm_init_owner(mm, p); 505 | atomic_set(&mm->oom_disable_count, 0); 506 | 507 | if (likely(!mm_alloc_pgd(mm))) { 508 | mm->def_flags = 0; 509 | mmu_notifier_mm_init(mm); 510 | return mm; 511 | } 512 | 513 | free_mm(mm); 514 | return NULL; 515 | } 516 | 517 | /* 518 | * Allocate and initialize an mm_struct. 519 | */ 520 | struct mm_struct * mm_alloc(void) 521 | { 522 | struct mm_struct * mm; 523 | 524 | mm = allocate_mm(); 525 | if (mm) { 526 | memset(mm, 0, sizeof(*mm)); 527 | mm = mm_init(mm, current); 528 | } 529 | return mm; 530 | } 531 | 532 | /* 533 | * Called when the last reference to the mm 534 | * is dropped: either by a lazy thread or by 535 | * mmput. Free the page directory and the mm. 536 | */ 537 | void __mmdrop(struct mm_struct *mm) 538 | { 539 | BUG_ON(mm == &init_mm); 540 | mm_free_pgd(mm); 541 | destroy_context(mm); 542 | mmu_notifier_mm_destroy(mm); 543 | #ifdef CONFIG_TRANSPARENT_HUGEPAGE 544 | VM_BUG_ON(mm->pmd_huge_pte); 545 | #endif 546 | free_mm(mm); 547 | } 548 | EXPORT_SYMBOL_GPL(__mmdrop); 549 | 550 | /* 551 | * Decrement the use count and release all resources for an mm. 552 | */ 553 | void mmput(struct mm_struct *mm) 554 | { 555 | might_sleep(); 556 | 557 | if (atomic_dec_and_test(&mm->mm_users)) { 558 | exit_aio(mm); 559 | ksm_exit(mm); 560 | khugepaged_exit(mm); /* must run before exit_mmap */ 561 | exit_mmap(mm); 562 | set_mm_exe_file(mm, NULL); 563 | if (!list_empty(&mm->mmlist)) { 564 | spin_lock(&mmlist_lock); 565 | list_del(&mm->mmlist); 566 | spin_unlock(&mmlist_lock); 567 | } 568 | put_swap_token(mm); 569 | if (mm->binfmt) 570 | module_put(mm->binfmt->module); 571 | mmdrop(mm); 572 | } 573 | } 574 | EXPORT_SYMBOL_GPL(mmput); 575 | 576 | /** 577 | * get_task_mm - acquire a reference to the task's mm 578 | * 579 | * Returns %NULL if the task has no mm. Checks PF_KTHREAD (meaning 580 | * this kernel workthread has transiently adopted a user mm with use_mm, 581 | * to do its AIO) is not set and if so returns a reference to it, after 582 | * bumping up the use count. User must release the mm via mmput() 583 | * after use. Typically used by /proc and ptrace. 584 | */ 585 | struct mm_struct *get_task_mm(struct task_struct *task) 586 | { 587 | struct mm_struct *mm; 588 | 589 | task_lock(task); 590 | mm = task->mm; 591 | if (mm) { 592 | if (task->flags & PF_KTHREAD) 593 | mm = NULL; 594 | else 595 | atomic_inc(&mm->mm_users); 596 | } 597 | task_unlock(task); 598 | return mm; 599 | } 600 | EXPORT_SYMBOL_GPL(get_task_mm); 601 | 602 | /* Please note the differences between mmput and mm_release. 603 | * mmput is called whenever we stop holding onto a mm_struct, 604 | * error success whatever. 605 | * 606 | * mm_release is called after a mm_struct has been removed 607 | * from the current process. 
608 | * 609 | * This difference is important for error handling, when we 610 | * only half set up a mm_struct for a new process and need to restore 611 | * the old one. Because we mmput the new mm_struct before 612 | * restoring the old one. . . 613 | * Eric Biederman 10 January 1998 614 | */ 615 | void mm_release(struct task_struct *tsk, struct mm_struct *mm) 616 | { 617 | struct completion *vfork_done = tsk->vfork_done; 618 | 619 | /* Get rid of any futexes when releasing the mm */ 620 | #ifdef CONFIG_FUTEX 621 | if (unlikely(tsk->robust_list)) { 622 | exit_robust_list(tsk); 623 | tsk->robust_list = NULL; 624 | } 625 | #ifdef CONFIG_COMPAT 626 | if (unlikely(tsk->compat_robust_list)) { 627 | compat_exit_robust_list(tsk); 628 | tsk->compat_robust_list = NULL; 629 | } 630 | #endif 631 | if (unlikely(!list_empty(&tsk->pi_state_list))) 632 | exit_pi_state_list(tsk); 633 | #endif 634 | 635 | /* Get rid of any cached register state */ 636 | deactivate_mm(tsk, mm); 637 | 638 | /* notify parent sleeping on vfork() */ 639 | if (vfork_done) { 640 | tsk->vfork_done = NULL; 641 | complete(vfork_done); 642 | } 643 | 644 | /* 645 | * If we're exiting normally, clear a user-space tid field if 646 | * requested. We leave this alone when dying by signal, to leave 647 | * the value intact in a core dump, and to save the unnecessary 648 | * trouble otherwise. Userland only wants this done for a sys_exit. 649 | */ 650 | if (tsk->clear_child_tid) { 651 | if (!(tsk->flags & PF_SIGNALED) && 652 | atomic_read(&mm->mm_users) > 1) { 653 | /* 654 | * We don't check the error code - if userspace has 655 | * not set up a proper pointer then tough luck. 656 | */ 657 | put_user(0, tsk->clear_child_tid); 658 | sys_futex(tsk->clear_child_tid, FUTEX_WAKE, 659 | 1, NULL, NULL, 0); 660 | } 661 | tsk->clear_child_tid = NULL; 662 | } 663 | } 664 | 665 | /* 666 | * Allocate a new mm structure and copy contents from the 667 | * mm structure of the passed in task structure. 
668 | */ 669 | struct mm_struct *dup_mm(struct task_struct *tsk) 670 | { 671 | struct mm_struct *mm, *oldmm = current->mm; 672 | int err; 673 | 674 | if (!oldmm) 675 | return NULL; 676 | 677 | mm = allocate_mm(); 678 | if (!mm) 679 | goto fail_nomem; 680 | 681 | memcpy(mm, oldmm, sizeof(*mm)); 682 | 683 | /* Initializing for Swap token stuff */ 684 | mm->token_priority = 0; 685 | mm->last_interval = 0; 686 | 687 | #ifdef CONFIG_TRANSPARENT_HUGEPAGE 688 | mm->pmd_huge_pte = NULL; 689 | #endif 690 | 691 | if (!mm_init(mm, tsk)) 692 | goto fail_nomem; 693 | 694 | if (init_new_context(tsk, mm)) 695 | goto fail_nocontext; 696 | 697 | dup_mm_exe_file(oldmm, mm); 698 | 699 | err = dup_mmap(mm, oldmm); 700 | if (err) 701 | goto free_pt; 702 | 703 | mm->hiwater_rss = get_mm_rss(mm); 704 | mm->hiwater_vm = mm->total_vm; 705 | 706 | if (mm->binfmt && !try_module_get(mm->binfmt->module)) 707 | goto free_pt; 708 | 709 | return mm; 710 | 711 | free_pt: 712 | /* don't put binfmt in mmput, we haven't got module yet */ 713 | mm->binfmt = NULL; 714 | mmput(mm); 715 | 716 | fail_nomem: 717 | return NULL; 718 | 719 | fail_nocontext: 720 | /* 721 | * If init_new_context() failed, we cannot use mmput() to free the mm 722 | * because it calls destroy_context() 723 | */ 724 | mm_free_pgd(mm); 725 | free_mm(mm); 726 | return NULL; 727 | } 728 | 729 | static int copy_mm(unsigned long clone_flags, struct task_struct * tsk) 730 | { 731 | struct mm_struct * mm, *oldmm; 732 | int retval; 733 | 734 | tsk->min_flt = tsk->maj_flt = 0; 735 | tsk->nvcsw = tsk->nivcsw = 0; 736 | #ifdef CONFIG_DETECT_HUNG_TASK 737 | tsk->last_switch_count = tsk->nvcsw + tsk->nivcsw; 738 | #endif 739 | 740 | tsk->mm = NULL; 741 | tsk->active_mm = NULL; 742 | 743 | /* 744 | * Are we cloning a kernel thread? 745 | * 746 | * We need to steal a active VM for that.. 747 | */ 748 | oldmm = current->mm; 749 | if (!oldmm) 750 | return 0; 751 | 752 | if (clone_flags & CLONE_VM) { 753 | atomic_inc(&oldmm->mm_users); 754 | mm = oldmm; 755 | goto good_mm; 756 | } 757 | 758 | retval = -ENOMEM; 759 | mm = dup_mm(tsk); 760 | if (!mm) 761 | goto fail_nomem; 762 | 763 | good_mm: 764 | /* Initializing for Swap token stuff */ 765 | mm->token_priority = 0; 766 | mm->last_interval = 0; 767 | if (tsk->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) 768 | atomic_inc(&mm->oom_disable_count); 769 | 770 | tsk->mm = mm; 771 | tsk->active_mm = mm; 772 | return 0; 773 | 774 | fail_nomem: 775 | return retval; 776 | } 777 | 778 | static int copy_fs(unsigned long clone_flags, struct task_struct *tsk) 779 | { 780 | struct fs_struct *fs = current->fs; 781 | if (clone_flags & CLONE_FS) { 782 | /* tsk->fs is already what we want */ 783 | spin_lock(&fs->lock); 784 | if (fs->in_exec) { 785 | spin_unlock(&fs->lock); 786 | return -EAGAIN; 787 | } 788 | fs->users++; 789 | spin_unlock(&fs->lock); 790 | return 0; 791 | } 792 | tsk->fs = copy_fs_struct(fs); 793 | if (!tsk->fs) 794 | return -ENOMEM; 795 | return 0; 796 | } 797 | 798 | static int copy_files(unsigned long clone_flags, struct task_struct * tsk) 799 | { 800 | struct files_struct *oldf, *newf; 801 | int error = 0; 802 | 803 | /* 804 | * A background process may not have any files ... 
805 | */ 806 | oldf = current->files; 807 | if (!oldf) 808 | goto out; 809 | 810 | if (clone_flags & CLONE_FILES) { 811 | atomic_inc(&oldf->count); 812 | goto out; 813 | } 814 | 815 | newf = dup_fd(oldf, &error); 816 | if (!newf) 817 | goto out; 818 | 819 | tsk->files = newf; 820 | error = 0; 821 | out: 822 | return error; 823 | } 824 | 825 | static int copy_io(unsigned long clone_flags, struct task_struct *tsk) 826 | { 827 | #ifdef CONFIG_BLOCK 828 | struct io_context *ioc = current->io_context; 829 | 830 | if (!ioc) 831 | return 0; 832 | /* 833 | * Share io context with parent, if CLONE_IO is set 834 | */ 835 | if (clone_flags & CLONE_IO) { 836 | tsk->io_context = ioc_task_link(ioc); 837 | if (unlikely(!tsk->io_context)) 838 | return -ENOMEM; 839 | } else if (ioprio_valid(ioc->ioprio)) { 840 | tsk->io_context = alloc_io_context(GFP_KERNEL, -1); 841 | if (unlikely(!tsk->io_context)) 842 | return -ENOMEM; 843 | 844 | tsk->io_context->ioprio = ioc->ioprio; 845 | } 846 | #endif 847 | return 0; 848 | } 849 | 850 | static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk) 851 | { 852 | struct sighand_struct *sig; 853 | 854 | if (clone_flags & CLONE_SIGHAND) { 855 | atomic_inc(¤t->sighand->count); 856 | return 0; 857 | } 858 | sig = kmem_cache_alloc(sighand_cachep, GFP_KERNEL); 859 | rcu_assign_pointer(tsk->sighand, sig); 860 | if (!sig) 861 | return -ENOMEM; 862 | atomic_set(&sig->count, 1); 863 | memcpy(sig->action, current->sighand->action, sizeof(sig->action)); 864 | return 0; 865 | } 866 | 867 | void __cleanup_sighand(struct sighand_struct *sighand) 868 | { 869 | if (atomic_dec_and_test(&sighand->count)) 870 | kmem_cache_free(sighand_cachep, sighand); 871 | } 872 | 873 | 874 | /* 875 | * Initialize POSIX timer handling for a thread group. 876 | */ 877 | static void posix_cpu_timers_init_group(struct signal_struct *sig) 878 | { 879 | unsigned long cpu_limit; 880 | 881 | /* Thread group counters. */ 882 | thread_group_cputime_init(sig); 883 | 884 | cpu_limit = ACCESS_ONCE(sig->rlim[RLIMIT_CPU].rlim_cur); 885 | if (cpu_limit != RLIM_INFINITY) { 886 | sig->cputime_expires.prof_exp = secs_to_cputime(cpu_limit); 887 | sig->cputimer.running = 1; 888 | } 889 | 890 | /* The timer lists. 
*/ 891 | INIT_LIST_HEAD(&sig->cpu_timers[0]); 892 | INIT_LIST_HEAD(&sig->cpu_timers[1]); 893 | INIT_LIST_HEAD(&sig->cpu_timers[2]); 894 | } 895 | 896 | static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) 897 | { 898 | struct signal_struct *sig; 899 | 900 | if (clone_flags & CLONE_THREAD) 901 | return 0; 902 | 903 | sig = kmem_cache_zalloc(signal_cachep, GFP_KERNEL); 904 | tsk->signal = sig; 905 | if (!sig) 906 | return -ENOMEM; 907 | 908 | sig->nr_threads = 1; 909 | atomic_set(&sig->live, 1); 910 | atomic_set(&sig->sigcnt, 1); 911 | init_waitqueue_head(&sig->wait_chldexit); 912 | if (clone_flags & CLONE_NEWPID) 913 | sig->flags |= SIGNAL_UNKILLABLE; 914 | sig->curr_target = tsk; 915 | init_sigpending(&sig->shared_pending); 916 | INIT_LIST_HEAD(&sig->posix_timers); 917 | 918 | hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 919 | sig->real_timer.function = it_real_fn; 920 | 921 | task_lock(current->group_leader); 922 | memcpy(sig->rlim, current->signal->rlim, sizeof sig->rlim); 923 | task_unlock(current->group_leader); 924 | 925 | posix_cpu_timers_init_group(sig); 926 | 927 | tty_audit_fork(sig); 928 | sched_autogroup_fork(sig); 929 | 930 | sig->oom_adj = current->signal->oom_adj; 931 | sig->oom_score_adj = current->signal->oom_score_adj; 932 | sig->oom_score_adj_min = current->signal->oom_score_adj_min; 933 | 934 | mutex_init(&sig->cred_guard_mutex); 935 | 936 | return 0; 937 | } 938 | 939 | static void copy_flags(unsigned long clone_flags, struct task_struct *p) 940 | { 941 | unsigned long new_flags = p->flags; 942 | 943 | new_flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER); 944 | new_flags |= PF_FORKNOEXEC; 945 | new_flags |= PF_STARTING; 946 | p->flags = new_flags; 947 | clear_freeze_flag(p); 948 | } 949 | 950 | SYSCALL_DEFINE1(set_tid_address, int __user *, tidptr) 951 | { 952 | current->clear_child_tid = tidptr; 953 | 954 | return task_pid_vnr(current); 955 | } 956 | 957 | static void rt_mutex_init_task(struct task_struct *p) 958 | { 959 | raw_spin_lock_init(&p->pi_lock); 960 | #ifdef CONFIG_RT_MUTEXES 961 | plist_head_init_raw(&p->pi_waiters, &p->pi_lock); 962 | p->pi_blocked_on = NULL; 963 | #endif 964 | } 965 | 966 | #ifdef CONFIG_MM_OWNER 967 | void mm_init_owner(struct mm_struct *mm, struct task_struct *p) 968 | { 969 | mm->owner = p; 970 | } 971 | #endif /* CONFIG_MM_OWNER */ 972 | 973 | /* 974 | * Initialize POSIX timer handling for a single task. 975 | */ 976 | static void posix_cpu_timers_init(struct task_struct *tsk) 977 | { 978 | tsk->cputime_expires.prof_exp = cputime_zero; 979 | tsk->cputime_expires.virt_exp = cputime_zero; 980 | tsk->cputime_expires.sched_exp = 0; 981 | INIT_LIST_HEAD(&tsk->cpu_timers[0]); 982 | INIT_LIST_HEAD(&tsk->cpu_timers[1]); 983 | INIT_LIST_HEAD(&tsk->cpu_timers[2]); 984 | } 985 | 986 | /* 987 | * This creates a new process as a copy of the old one, 988 | * but does not actually start it yet. 989 | * 990 | * It copies the registers, and all the appropriate 991 | * parts of the process environment (as per the clone 992 | * flags). The actual kick-off is left to the caller. 
993 | */ 994 | static struct task_struct *copy_process(unsigned long clone_flags, 995 | unsigned long stack_start, 996 | struct pt_regs *regs, 997 | unsigned long stack_size, 998 | int __user *child_tidptr, 999 | struct pid *pid, 1000 | int trace) 1001 | { 1002 | int retval; 1003 | struct task_struct *p; 1004 | int cgroup_callbacks_done = 0; 1005 | 1006 | if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS)) 1007 | return ERR_PTR(-EINVAL); 1008 | 1009 | /* 1010 | * Thread groups must share signals as well, and detached threads 1011 | * can only be started up within the thread group. 1012 | */ 1013 | if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND)) 1014 | return ERR_PTR(-EINVAL); 1015 | 1016 | /* 1017 | * Shared signal handlers imply shared VM. By way of the above, 1018 | * thread groups also imply shared VM. Blocking this case allows 1019 | * for various simplifications in other code. 1020 | */ 1021 | if ((clone_flags & CLONE_SIGHAND) && !(clone_flags & CLONE_VM)) 1022 | return ERR_PTR(-EINVAL); 1023 | 1024 | /* 1025 | * Siblings of global init remain as zombies on exit since they are 1026 | * not reaped by their parent (swapper). To solve this and to avoid 1027 | * multi-rooted process trees, prevent global and container-inits 1028 | * from creating siblings. 1029 | */ 1030 | if ((clone_flags & CLONE_PARENT) && 1031 | current->signal->flags & SIGNAL_UNKILLABLE) 1032 | return ERR_PTR(-EINVAL); 1033 | 1034 | retval = security_task_create(clone_flags); 1035 | if (retval) 1036 | goto fork_out; 1037 | 1038 | retval = -ENOMEM; 1039 | p = dup_task_struct(current); 1040 | if (!p) 1041 | goto fork_out; 1042 | 1043 | ftrace_graph_init_task(p); 1044 | 1045 | rt_mutex_init_task(p); 1046 | 1047 | #ifdef CONFIG_PROVE_LOCKING 1048 | DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled); 1049 | DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled); 1050 | #endif 1051 | retval = -EAGAIN; 1052 | if (atomic_read(&p->real_cred->user->processes) >= 1053 | task_rlimit(p, RLIMIT_NPROC)) { 1054 | if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RESOURCE) && 1055 | p->real_cred->user != INIT_USER) 1056 | goto bad_fork_free; 1057 | } 1058 | 1059 | retval = copy_creds(p, clone_flags); 1060 | if (retval < 0) 1061 | goto bad_fork_free; 1062 | 1063 | /* 1064 | * If multiple threads are within copy_process(), then this check 1065 | * triggers too late. This doesn't hurt, the check is only there 1066 | * to stop root fork bombs. 
1067 | */ 1068 | retval = -EAGAIN; 1069 | if (nr_threads >= max_threads) 1070 | goto bad_fork_cleanup_count; 1071 | 1072 | if (!try_module_get(task_thread_info(p)->exec_domain->module)) 1073 | goto bad_fork_cleanup_count; 1074 | 1075 | p->did_exec = 0; 1076 | delayacct_tsk_init(p); /* Must remain after dup_task_struct() */ 1077 | copy_flags(clone_flags, p); 1078 | INIT_LIST_HEAD(&p->children); 1079 | INIT_LIST_HEAD(&p->sibling); 1080 | rcu_copy_process(p); 1081 | p->vfork_done = NULL; 1082 | spin_lock_init(&p->alloc_lock); 1083 | 1084 | init_sigpending(&p->pending); 1085 | 1086 | p->utime = cputime_zero; 1087 | p->stime = cputime_zero; 1088 | p->gtime = cputime_zero; 1089 | p->utimescaled = cputime_zero; 1090 | p->stimescaled = cputime_zero; 1091 | #ifndef CONFIG_VIRT_CPU_ACCOUNTING 1092 | p->prev_utime = cputime_zero; 1093 | p->prev_stime = cputime_zero; 1094 | #endif 1095 | #if defined(SPLIT_RSS_COUNTING) 1096 | memset(&p->rss_stat, 0, sizeof(p->rss_stat)); 1097 | #endif 1098 | 1099 | p->default_timer_slack_ns = current->timer_slack_ns; 1100 | 1101 | task_io_accounting_init(&p->ioac); 1102 | acct_clear_integrals(p); 1103 | 1104 | posix_cpu_timers_init(p); 1105 | 1106 | p->lock_depth = -1; /* -1 = no lock */ 1107 | do_posix_clock_monotonic_gettime(&p->start_time); 1108 | p->real_start_time = p->start_time; 1109 | monotonic_to_bootbased(&p->real_start_time); 1110 | p->io_context = NULL; 1111 | p->audit_context = NULL; 1112 | cgroup_fork(p); 1113 | #ifdef CONFIG_NUMA 1114 | p->mempolicy = mpol_dup(p->mempolicy); 1115 | if (IS_ERR(p->mempolicy)) { 1116 | retval = PTR_ERR(p->mempolicy); 1117 | p->mempolicy = NULL; 1118 | goto bad_fork_cleanup_cgroup; 1119 | } 1120 | mpol_fix_fork_child_flag(p); 1121 | #endif 1122 | #ifdef CONFIG_TRACE_IRQFLAGS 1123 | p->irq_events = 0; 1124 | #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW 1125 | p->hardirqs_enabled = 1; 1126 | #else 1127 | p->hardirqs_enabled = 0; 1128 | #endif 1129 | p->hardirq_enable_ip = 0; 1130 | p->hardirq_enable_event = 0; 1131 | p->hardirq_disable_ip = _THIS_IP_; 1132 | p->hardirq_disable_event = 0; 1133 | p->softirqs_enabled = 1; 1134 | p->softirq_enable_ip = _THIS_IP_; 1135 | p->softirq_enable_event = 0; 1136 | p->softirq_disable_ip = 0; 1137 | p->softirq_disable_event = 0; 1138 | p->hardirq_context = 0; 1139 | p->softirq_context = 0; 1140 | #endif 1141 | #ifdef CONFIG_LOCKDEP 1142 | p->lockdep_depth = 0; /* no locks held yet */ 1143 | p->curr_chain_key = 0; 1144 | p->lockdep_recursion = 0; 1145 | #endif 1146 | 1147 | #ifdef CONFIG_DEBUG_MUTEXES 1148 | p->blocked_on = NULL; /* not blocked yet */ 1149 | #endif 1150 | #ifdef CONFIG_CGROUP_MEM_RES_CTLR 1151 | p->memcg_batch.do_batch = 0; 1152 | p->memcg_batch.memcg = NULL; 1153 | #endif 1154 | 1155 | /* Perform scheduler related setup. Assign this task to a CPU. 
*/ 1156 | sched_fork(p, clone_flags); 1157 | 1158 | retval = perf_event_init_task(p); 1159 | if (retval) 1160 | goto bad_fork_cleanup_policy; 1161 | 1162 | if ((retval = audit_alloc(p))) 1163 | goto bad_fork_cleanup_policy; 1164 | /* copy all the process information */ 1165 | if ((retval = copy_semundo(clone_flags, p))) 1166 | goto bad_fork_cleanup_audit; 1167 | if ((retval = copy_files(clone_flags, p))) 1168 | goto bad_fork_cleanup_semundo; 1169 | if ((retval = copy_fs(clone_flags, p))) 1170 | goto bad_fork_cleanup_files; 1171 | if ((retval = copy_sighand(clone_flags, p))) 1172 | goto bad_fork_cleanup_fs; 1173 | if ((retval = copy_signal(clone_flags, p))) 1174 | goto bad_fork_cleanup_sighand; 1175 | if ((retval = copy_mm(clone_flags, p))) 1176 | goto bad_fork_cleanup_signal; 1177 | if ((retval = copy_namespaces(clone_flags, p))) 1178 | goto bad_fork_cleanup_mm; 1179 | if ((retval = copy_io(clone_flags, p))) 1180 | goto bad_fork_cleanup_namespaces; 1181 | retval = copy_thread(clone_flags, stack_start, stack_size, p, regs); 1182 | if (retval) 1183 | goto bad_fork_cleanup_io; 1184 | 1185 | if (pid != &init_struct_pid) { 1186 | retval = -ENOMEM; 1187 | pid = alloc_pid(p->nsproxy->pid_ns); 1188 | if (!pid) 1189 | goto bad_fork_cleanup_io; 1190 | } 1191 | 1192 | p->pid = pid_nr(pid); 1193 | p->tgid = p->pid; 1194 | if (clone_flags & CLONE_THREAD) 1195 | p->tgid = current->tgid; 1196 | 1197 | if (current->nsproxy != p->nsproxy) { 1198 | retval = ns_cgroup_clone(p, pid); 1199 | if (retval) 1200 | goto bad_fork_free_pid; 1201 | } 1202 | 1203 | p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL; 1204 | /* 1205 | * Clear TID on mm_release()? 1206 | */ 1207 | p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr: NULL; 1208 | #ifdef CONFIG_BLOCK 1209 | p->plug = NULL; 1210 | #endif 1211 | #ifdef CONFIG_FUTEX 1212 | p->robust_list = NULL; 1213 | #ifdef CONFIG_COMPAT 1214 | p->compat_robust_list = NULL; 1215 | #endif 1216 | INIT_LIST_HEAD(&p->pi_state_list); 1217 | p->pi_state_cache = NULL; 1218 | #endif 1219 | /* 1220 | * sigaltstack should be cleared when sharing the same VM 1221 | */ 1222 | if ((clone_flags & (CLONE_VM|CLONE_VFORK)) == CLONE_VM) 1223 | p->sas_ss_sp = p->sas_ss_size = 0; 1224 | 1225 | /* 1226 | * Syscall tracing and stepping should be turned off in the 1227 | * child regardless of CLONE_PTRACE. 1228 | */ 1229 | user_disable_single_step(p); 1230 | clear_tsk_thread_flag(p, TIF_SYSCALL_TRACE); 1231 | #ifdef TIF_SYSCALL_EMU 1232 | clear_tsk_thread_flag(p, TIF_SYSCALL_EMU); 1233 | #endif 1234 | clear_all_latency_tracing(p); 1235 | 1236 | /* ok, now we should be set up.. */ 1237 | p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL); 1238 | p->pdeath_signal = 0; 1239 | p->exit_state = 0; 1240 | 1241 | /* 1242 | * Ok, make it visible to the rest of the system. 1243 | * We dont wake it up yet. 1244 | */ 1245 | p->group_leader = p; 1246 | INIT_LIST_HEAD(&p->thread_group); 1247 | 1248 | /* Now that the task is set up, run cgroup callbacks if 1249 | * necessary. We need to run them before the task is visible 1250 | * on the tasklist. */ 1251 | cgroup_fork_callbacks(p); 1252 | cgroup_callbacks_done = 1; 1253 | 1254 | /* Need tasklist lock for parent etc handling! 
*/ 1255 | write_lock_irq(&tasklist_lock); 1256 | 1257 | /* CLONE_PARENT re-uses the old parent */ 1258 | if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) { 1259 | p->real_parent = current->real_parent; 1260 | p->parent_exec_id = current->parent_exec_id; 1261 | } else { 1262 | p->real_parent = current; 1263 | p->parent_exec_id = current->self_exec_id; 1264 | } 1265 | 1266 | spin_lock(¤t->sighand->siglock); 1267 | 1268 | /* 1269 | * Process group and session signals need to be delivered to just the 1270 | * parent before the fork or both the parent and the child after the 1271 | * fork. Restart if a signal comes in before we add the new process to 1272 | * it's process group. 1273 | * A fatal signal pending means that current will exit, so the new 1274 | * thread can't slip out of an OOM kill (or normal SIGKILL). 1275 | */ 1276 | recalc_sigpending(); 1277 | if (signal_pending(current)) { 1278 | spin_unlock(¤t->sighand->siglock); 1279 | write_unlock_irq(&tasklist_lock); 1280 | retval = -ERESTARTNOINTR; 1281 | goto bad_fork_free_pid; 1282 | } 1283 | 1284 | if (clone_flags & CLONE_THREAD) { 1285 | current->signal->nr_threads++; 1286 | atomic_inc(¤t->signal->live); 1287 | atomic_inc(¤t->signal->sigcnt); 1288 | p->group_leader = current->group_leader; 1289 | list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group); 1290 | } 1291 | 1292 | if (likely(p->pid)) { 1293 | tracehook_finish_clone(p, clone_flags, trace); 1294 | 1295 | if (thread_group_leader(p)) { 1296 | if (is_child_reaper(pid)) 1297 | p->nsproxy->pid_ns->child_reaper = p; 1298 | 1299 | p->signal->leader_pid = pid; 1300 | p->signal->tty = tty_kref_get(current->signal->tty); 1301 | attach_pid(p, PIDTYPE_PGID, task_pgrp(current)); 1302 | attach_pid(p, PIDTYPE_SID, task_session(current)); 1303 | list_add_tail(&p->sibling, &p->real_parent->children); 1304 | list_add_tail_rcu(&p->tasks, &init_task.tasks); 1305 | __this_cpu_inc(process_counts); 1306 | } 1307 | attach_pid(p, PIDTYPE_PID, pid); 1308 | nr_threads++; 1309 | } 1310 | 1311 | total_forks++; 1312 | spin_unlock(¤t->sighand->siglock); 1313 | write_unlock_irq(&tasklist_lock); 1314 | proc_fork_connector(p); 1315 | cgroup_post_fork(p); 1316 | perf_event_fork(p); 1317 | return p; 1318 | 1319 | bad_fork_free_pid: 1320 | if (pid != &init_struct_pid) 1321 | free_pid(pid); 1322 | bad_fork_cleanup_io: 1323 | if (p->io_context) 1324 | exit_io_context(p); 1325 | bad_fork_cleanup_namespaces: 1326 | exit_task_namespaces(p); 1327 | bad_fork_cleanup_mm: 1328 | if (p->mm) { 1329 | task_lock(p); 1330 | if (p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) 1331 | atomic_dec(&p->mm->oom_disable_count); 1332 | task_unlock(p); 1333 | mmput(p->mm); 1334 | } 1335 | bad_fork_cleanup_signal: 1336 | if (!(clone_flags & CLONE_THREAD)) 1337 | free_signal_struct(p->signal); 1338 | bad_fork_cleanup_sighand: 1339 | __cleanup_sighand(p->sighand); 1340 | bad_fork_cleanup_fs: 1341 | exit_fs(p); /* blocking */ 1342 | bad_fork_cleanup_files: 1343 | exit_files(p); /* blocking */ 1344 | bad_fork_cleanup_semundo: 1345 | exit_sem(p); 1346 | bad_fork_cleanup_audit: 1347 | audit_free(p); 1348 | bad_fork_cleanup_policy: 1349 | perf_event_free_task(p); 1350 | #ifdef CONFIG_NUMA 1351 | mpol_put(p->mempolicy); 1352 | bad_fork_cleanup_cgroup: 1353 | #endif 1354 | cgroup_exit(p, cgroup_callbacks_done); 1355 | delayacct_tsk_free(p); 1356 | module_put(task_thread_info(p)->exec_domain->module); 1357 | bad_fork_cleanup_count: 1358 | atomic_dec(&p->cred->user->processes); 1359 | exit_creds(p); 1360 | bad_fork_free: 1361 | 
free_task(p); 1362 | fork_out: 1363 | return ERR_PTR(retval); 1364 | } 1365 | 1366 | noinline struct pt_regs * __cpuinit __attribute__((weak)) idle_regs(struct pt_regs *regs) 1367 | { 1368 | memset(regs, 0, sizeof(struct pt_regs)); 1369 | return regs; 1370 | } 1371 | 1372 | static inline void init_idle_pids(struct pid_link *links) 1373 | { 1374 | enum pid_type type; 1375 | 1376 | for (type = PIDTYPE_PID; type < PIDTYPE_MAX; ++type) { 1377 | INIT_HLIST_NODE(&links[type].node); /* not really needed */ 1378 | links[type].pid = &init_struct_pid; 1379 | } 1380 | } 1381 | 1382 | struct task_struct * __cpuinit fork_idle(int cpu) 1383 | { 1384 | struct task_struct *task; 1385 | struct pt_regs regs; 1386 | 1387 | task = copy_process(CLONE_VM, 0, idle_regs(®s), 0, NULL, 1388 | &init_struct_pid, 0); 1389 | if (!IS_ERR(task)) { 1390 | init_idle_pids(task->pids); 1391 | init_idle(task, cpu); 1392 | } 1393 | 1394 | return task; 1395 | } 1396 | 1397 | /* 1398 | * Ok, this is the main fork-routine. 1399 | * 1400 | * It copies the process, and if successful kick-starts 1401 | * it and waits for it to finish using the VM if required. 1402 | */ 1403 | long do_fork(unsigned long clone_flags, 1404 | unsigned long stack_start, 1405 | struct pt_regs *regs, 1406 | unsigned long stack_size, 1407 | int __user *parent_tidptr, 1408 | int __user *child_tidptr) 1409 | { 1410 | struct task_struct *p; 1411 | int trace = 0; 1412 | long nr; 1413 | 1414 | /* 1415 | * Do some preliminary argument and permissions checking before we 1416 | * actually start allocating stuff 1417 | */ 1418 | if (clone_flags & CLONE_NEWUSER) { 1419 | if (clone_flags & CLONE_THREAD) 1420 | return -EINVAL; 1421 | /* hopefully this check will go away when userns support is 1422 | * complete 1423 | */ 1424 | if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SETUID) || 1425 | !capable(CAP_SETGID)) 1426 | return -EPERM; 1427 | } 1428 | 1429 | /* 1430 | * When called from kernel_thread, don't do user tracing stuff. 1431 | */ 1432 | if (likely(user_mode(regs))) 1433 | trace = tracehook_prepare_clone(clone_flags); 1434 | 1435 | p = copy_process(clone_flags, stack_start, regs, stack_size, 1436 | child_tidptr, NULL, trace); 1437 | /* 1438 | * Do this prior waking up the new thread - the thread pointer 1439 | * might get invalid after that point, if the thread exits quickly. 1440 | */ 1441 | if (!IS_ERR(p)) { 1442 | struct completion vfork; 1443 | 1444 | trace_sched_process_fork(current, p); 1445 | 1446 | nr = task_pid_vnr(p); 1447 | 1448 | if (clone_flags & CLONE_PARENT_SETTID) 1449 | put_user(nr, parent_tidptr); 1450 | 1451 | if (clone_flags & CLONE_VFORK) { 1452 | p->vfork_done = &vfork; 1453 | init_completion(&vfork); 1454 | } 1455 | 1456 | audit_finish_fork(p); 1457 | tracehook_report_clone(regs, clone_flags, nr, p); 1458 | 1459 | /* 1460 | * We set PF_STARTING at creation in case tracing wants to 1461 | * use this to distinguish a fully live task from one that 1462 | * hasn't gotten to tracehook_report_clone() yet. Now we 1463 | * clear it and set the child going. 
1464 | */ 1465 | p->flags &= ~PF_STARTING; 1466 | 1467 | wake_up_new_task(p, clone_flags); 1468 | 1469 | tracehook_report_clone_complete(trace, regs, 1470 | clone_flags, nr, p); 1471 | 1472 | if (clone_flags & CLONE_VFORK) { 1473 | freezer_do_not_count(); 1474 | wait_for_completion(&vfork); 1475 | freezer_count(); 1476 | tracehook_report_vfork_done(p, nr); 1477 | } 1478 | } else { 1479 | nr = PTR_ERR(p); 1480 | } 1481 | return nr; 1482 | } 1483 | 1484 | #ifndef ARCH_MIN_MMSTRUCT_ALIGN 1485 | #define ARCH_MIN_MMSTRUCT_ALIGN 0 1486 | #endif 1487 | 1488 | static void sighand_ctor(void *data) 1489 | { 1490 | struct sighand_struct *sighand = data; 1491 | 1492 | spin_lock_init(&sighand->siglock); 1493 | init_waitqueue_head(&sighand->signalfd_wqh); 1494 | } 1495 | 1496 | void __init proc_caches_init(void) 1497 | { 1498 | sighand_cachep = kmem_cache_create("sighand_cache", 1499 | sizeof(struct sighand_struct), 0, 1500 | SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU| 1501 | SLAB_NOTRACK, sighand_ctor); 1502 | signal_cachep = kmem_cache_create("signal_cache", 1503 | sizeof(struct signal_struct), 0, 1504 | SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL); 1505 | files_cachep = kmem_cache_create("files_cache", 1506 | sizeof(struct files_struct), 0, 1507 | SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL); 1508 | fs_cachep = kmem_cache_create("fs_cache", 1509 | sizeof(struct fs_struct), 0, 1510 | SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL); 1511 | mm_cachep = kmem_cache_create("mm_struct", 1512 | sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN, 1513 | SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL); 1514 | vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC); 1515 | mmap_init(); 1516 | } 1517 | 1518 | /* 1519 | * Check constraints on flags passed to the unshare system call. 1520 | */ 1521 | static int check_unshare_flags(unsigned long unshare_flags) 1522 | { 1523 | if (unshare_flags & ~(CLONE_THREAD|CLONE_FS|CLONE_NEWNS|CLONE_SIGHAND| 1524 | CLONE_VM|CLONE_FILES|CLONE_SYSVSEM| 1525 | CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWNET)) 1526 | return -EINVAL; 1527 | /* 1528 | * Not implemented, but pretend it works if there is nothing to 1529 | * unshare. Note that unsharing CLONE_THREAD or CLONE_SIGHAND 1530 | * needs to unshare vm. 
1531 | */ 1532 | if (unshare_flags & (CLONE_THREAD | CLONE_SIGHAND | CLONE_VM)) { 1533 | /* FIXME: get_task_mm() increments ->mm_users */ 1534 | if (atomic_read(¤t->mm->mm_users) > 1) 1535 | return -EINVAL; 1536 | } 1537 | 1538 | return 0; 1539 | } 1540 | 1541 | /* 1542 | * Unshare the filesystem structure if it is being shared 1543 | */ 1544 | static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp) 1545 | { 1546 | struct fs_struct *fs = current->fs; 1547 | 1548 | if (!(unshare_flags & CLONE_FS) || !fs) 1549 | return 0; 1550 | 1551 | /* don't need lock here; in the worst case we'll do useless copy */ 1552 | if (fs->users == 1) 1553 | return 0; 1554 | 1555 | *new_fsp = copy_fs_struct(fs); 1556 | if (!*new_fsp) 1557 | return -ENOMEM; 1558 | 1559 | return 0; 1560 | } 1561 | 1562 | /* 1563 | * Unshare file descriptor table if it is being shared 1564 | */ 1565 | static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp) 1566 | { 1567 | struct files_struct *fd = current->files; 1568 | int error = 0; 1569 | 1570 | if ((unshare_flags & CLONE_FILES) && 1571 | (fd && atomic_read(&fd->count) > 1)) { 1572 | *new_fdp = dup_fd(fd, &error); 1573 | if (!*new_fdp) 1574 | return error; 1575 | } 1576 | 1577 | return 0; 1578 | } 1579 | 1580 | /* 1581 | * unshare allows a process to 'unshare' part of the process 1582 | * context which was originally shared using clone. copy_* 1583 | * functions used by do_fork() cannot be used here directly 1584 | * because they modify an inactive task_struct that is being 1585 | * constructed. Here we are modifying the current, active, 1586 | * task_struct. 1587 | */ 1588 | SYSCALL_DEFINE1(unshare, unsigned long, unshare_flags) 1589 | { 1590 | struct fs_struct *fs, *new_fs = NULL; 1591 | struct files_struct *fd, *new_fd = NULL; 1592 | struct nsproxy *new_nsproxy = NULL; 1593 | int do_sysvsem = 0; 1594 | int err; 1595 | 1596 | err = check_unshare_flags(unshare_flags); 1597 | if (err) 1598 | goto bad_unshare_out; 1599 | 1600 | /* 1601 | * If unsharing namespace, must also unshare filesystem information. 1602 | */ 1603 | if (unshare_flags & CLONE_NEWNS) 1604 | unshare_flags |= CLONE_FS; 1605 | /* 1606 | * CLONE_NEWIPC must also detach from the undolist: after switching 1607 | * to a new ipc namespace, the semaphore arrays from the old 1608 | * namespace are unreachable. 1609 | */ 1610 | if (unshare_flags & (CLONE_NEWIPC|CLONE_SYSVSEM)) 1611 | do_sysvsem = 1; 1612 | if ((err = unshare_fs(unshare_flags, &new_fs))) 1613 | goto bad_unshare_out; 1614 | if ((err = unshare_fd(unshare_flags, &new_fd))) 1615 | goto bad_unshare_cleanup_fs; 1616 | if ((err = unshare_nsproxy_namespaces(unshare_flags, &new_nsproxy, 1617 | new_fs))) 1618 | goto bad_unshare_cleanup_fd; 1619 | 1620 | if (new_fs || new_fd || do_sysvsem || new_nsproxy) { 1621 | if (do_sysvsem) { 1622 | /* 1623 | * CLONE_SYSVSEM is equivalent to sys_exit(). 
1624 | */
1625 | exit_sem(current);
1626 | }
1627 | 
1628 | if (new_nsproxy) {
1629 | switch_task_namespaces(current, new_nsproxy);
1630 | new_nsproxy = NULL;
1631 | }
1632 | 
1633 | task_lock(current);
1634 | 
1635 | if (new_fs) {
1636 | fs = current->fs;
1637 | spin_lock(&fs->lock);
1638 | current->fs = new_fs;
1639 | if (--fs->users)
1640 | new_fs = NULL;
1641 | else
1642 | new_fs = fs;
1643 | spin_unlock(&fs->lock);
1644 | }
1645 | 
1646 | if (new_fd) {
1647 | fd = current->files;
1648 | current->files = new_fd;
1649 | new_fd = fd;
1650 | }
1651 | 
1652 | task_unlock(current);
1653 | }
1654 | 
1655 | if (new_nsproxy)
1656 | put_nsproxy(new_nsproxy);
1657 | 
1658 | bad_unshare_cleanup_fd:
1659 | if (new_fd)
1660 | put_files_struct(new_fd);
1661 | 
1662 | bad_unshare_cleanup_fs:
1663 | if (new_fs)
1664 | free_fs_struct(new_fs);
1665 | 
1666 | bad_unshare_out:
1667 | return err;
1668 | }
1669 | 
1670 | /*
1671 | * Helper to unshare the files of the current task.
1672 | * We don't want to expose copy_files internals to
1673 | * the exec layer of the kernel.
1674 | */
1675 | 
1676 | int unshare_files(struct files_struct **displaced)
1677 | {
1678 | struct task_struct *task = current;
1679 | struct files_struct *copy = NULL;
1680 | int error;
1681 | 
1682 | error = unshare_fd(CLONE_FILES, &copy);
1683 | if (error || !copy) {
1684 | *displaced = NULL;
1685 | return error;
1686 | }
1687 | *displaced = task->files;
1688 | task_lock(task);
1689 | task->files = copy;
1690 | task_unlock(task);
1691 | return 0;
1692 | }
1693 | 
--------------------------------------------------------------------------------
/posts/ch1.md:
--------------------------------------------------------------------------------
1 | Linux Kernel Study, Part 1 - Take It Easy!
2 | ======
3 | 
4 | ## Opening: Becoming a Coder Who Knows the Low-Level Stuff
5 | 
6 | The work days right before a holiday are always relaxed, and a solid block of time like this is perfect for some focused study. I have always done Java development, and not understanding the lower layers has never sat well with me. When people talk about "process switching", "memory management", or "kernel mode versus user mode", I only half understand them. So let's set an ambitious goal: learn the Linux kernel!
7 | 
8 | ## Development: Downloading and Building the Kernel
9 | 
10 | ### Downloading
11 | 
12 | If you are already on a Linux system, the kernel source sits right under /usr/src. Still, I recommend grabbing a fresh copy. From where? Linus Torvalds, the famous author of Linux, is also the author of Git, so you can guess where the newest source lives. Go pull the latest code from [https://github.com/torvalds/linux](https://github.com/torvalds/linux):
13 | 
14 | git clone https://github.com/torvalds/linux
15 | 
16 | The repository is about 1.4 GB, so be patient for a while…
17 | 
18 | ### Building the kernel
19 | 
20 | Building the kernel is grunt work. First, you must be on a Linux system, because the build depends on gcc. Second, the version you build should match the version you are running (I am not entirely sure this is required, but that is how it worked out in practice).
21 | 
22 | Then comes the build itself. Compiling the kernel is actually the simple part, since it has few external dependencies to build first. The usual three-step routine works (root privileges required):
23 | 
24 | make config
25 | make
26 | make install #install replaces the running kernel, think twice!
27 | 
28 | 1. `make config` is interactive and asks what to enable and what to leave out. There are far too many options; the first time around I must have answered a few hundred of them… Only later did I learn that `make defconfig` is the convenient shortcut. After all, we only want to see whether it compiles.
29 | 2. `make` takes a very long time.
30 | 3. `make install` would replace the current kernel, so we skip it here.
31 | 
32 | In any case, getting this far already gives a small sense of achievement!
33 | 
34 | ### Start reading the code?
35 | 
36 | There are plenty of articles on the structure of the Linux source tree, for example this one: [http://blog.csdn.net/liaoshengjiong/article/details/3957654](http://blog.csdn.net/liaoshengjiong/article/details/3957654), so I will not repeat them. Take a look at the code and, my goodness, it is more than five million lines; a year or two would probably not be enough to read it all! My goal is only to understand the basic principles of the lower layers, so there is no need to dive into every detail. Besides, I am not yet familiar with many concepts around drivers, files, and memory. So what now? Read a book first!
37 | 
38 | ## Turn: Read a Book Instead
39 | 
40 | My long-standing view is that before reading source code you should at least know the project's domain. For a Java coder, operating systems are not familiar territory, so starting straight from the source is unrealistic; one or two reference books are indispensable. I browsed several, and the one I like best is "Linux Kernel Development". It is mostly theory, but it always ends up pointing at roughly where the corresponding code lives, which makes it an excellent outline of the whole subject. Anyone with some coding experience will find it very approachable. Best of all, **it is only about 200 pages!**
41 | 
42 | Among the other books I browsed is the famous "Computer Systems: A Programmer's Perspective", which is thorough and comprehensive but better suited as a textbook; it is a bit weak on hands-on practice (even though it has plenty of examples). "Understanding the Linux Kernel" goes a little too deep for a first pass. In short, anything that advertises going "in depth" is not great for getting started! There is also "30天自制操作系统" (writing your own OS in 30 days); it is not a bad book, just too basic, and I gave up when I reached "writing code with a binary editor".
43 | 
44 | ## Closing: Take It Easy!
45 | 
46 | All right, time to actually start learning. After all this setup, there is one thing I want to say: yes, the kernel is hard, and plenty of people have become gurus just by studying it. But its difficulty lies in the fact that the lower you go, the higher the demands on quality, stability, and performance, and the more corner cases you have to consider. The underlying theory and ideas, though, are mostly things we already know well.
47 | 
48 | For example, in the "process management" part, the familiar "process descriptor" corresponds to a structure in `sched.h` called `task_struct`:
49 | 
50 | ```c
51 | struct task_struct {
52 | volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
53 | struct thread_info *thread_info;
54 | atomic_t usage;
55 | unsigned long flags; /* per process flags, defined below */
56 | unsigned long ptrace;
57 | …
58 | };
59 | ```
60 | 
61 | Processes are kept in a priority-aware doubly linked list, which works on the same principle as Java's PriorityQueue. See? "Process scheduling" suddenly feels a lot less mysterious.
62 | 
63 | Likewise, we constantly talk about "kernel mode" and "user mode", but the code on both sides is plain C. A separate "kernel mode" exists for safety and for certain performance reasons, and the gap between the two is smaller than you might imagine. Isn't that a bit like the split between "platform" and "business logic" that we are used to?
64 | 
65 | In short, picking up the basic kernel knowledge should not be that hard! The payoff is that your understanding of programming in general moves up a level.
66 | 
67 | PS: My grasp of C and Linux barely reaches beginner level, so if you spot mistakes, please point them out; I will be happy to accept corrections!
--------------------------------------------------------------------------------
/posts/ch2.md:
--------------------------------------------------------------------------------
1 | Linux Kernel Study, Part 2 - Processes and Threads
2 | =====
3 | 
4 | ## 1. What an Operating System Does
5 | 
6 | According to Wikipedia, an operating system covers roughly the following functions:
7 | 
8 | 1. Process management
9 | 2. Security
10 | 3. Memory management
11 | 4. User interface
12 | 5. File system
13 | 6. Device drivers
14 | 7. Networking
15 | 
16 | Notice that these functions have little to do with one another, so an operating system is really a low-cohesion bundle of features. When studying it, tackling them one at a time works much better than trying to swallow everything at once. Process management is the most central function of all, so that is where we start.
17 | 
18 | ## 2. What Is a Process?
19 | 
20 | What is a process? The question is hard to answer head-on, so let's approach it from a different angle.
21 | 
22 | The earliest operating systems had no notion of a "process" at all, for example [DOS](http://zh.wikipedia.org/wiki/DOS). As the name "Disk Operating System" suggests, DOS mainly managed the disk and provided a thin abstraction over the BIOS and the hardware. Programs written on top of it talked to the CPU directly: they were compiled or interpreted into CPU instructions and executed as such.
23 | 
24 | The biggest problem with such a system is that it can run only one program at a time, which is terribly unfriendly to users; I cannot even listen to music while writing code! What can be done? The CPU is like a mindless worker: it has no concept of a "program" and only processes "instructions", so unless our software does something extra, there seems to be no way to make "multiple programs run at the same time".
25 | 
26 | Hence the "time-sharing, multi-process" operating system. The OS keeps track of the programs that want to run and, following certain rules, lets each of them get CPU time. Because the CPU is so fast, to the user they appear to run simultaneously, and that is what a "process" is.
27 | 
28 | Here I want to stress one point: **the operating system is itself a program**, which makes it feel much less distant to us coders. As someone who likes to get hands-on, let's ask: how would we implement time-slice scheduling? I wrote the simplest possible scheduling loop, roughly like this (wait here really depends on timer-tick machinery, but let's just assume it exists):
29 | 
30 | ```java
31 | while (true){
32 | processing = nextProcessing();
33 | processing.run();
34 | wait(100); //a 100 ms time slice
35 | processing.interrupt();
36 | }
37 | ```
38 | 
39 | `nextProcessing()` could then be backed by a circular FIFO queue. Starting to look interesting, isn't it? (The real Linux scheduling algorithm is of course far more complex, though not beyond understanding; more on that in the next post.)
40 | 
41 | To sum up this part: a process is an instance of a program in execution. It is the mechanism the operating system invented to manage the execution of multiple programs.
42 | 
43 | ## 3. Processes in Linux
44 | 
45 | Enough talk, time for something concrete! The process-related code in the Linux kernel lives in `include/linux/sched.h` and `kernel/sched.c`. (This series targets kernel version 2.6.39.)
46 | 
47 | *A quick aside: the Linux source tree has roughly three parts:*
48 | 
49 | *1. `include` is the exported part; when code does an `#include` of a kernel header, that header lives under the "include" directory.*
50 | 
51 | *2. `kernel` holds the core implementation, and `arch` holds the adaptations for different platforms.*
52 | 
53 | *3. Most other directories are functional modules, such as `init` for initialization, `fs` for the file system, and `mm` for memory management.*
54 | 
55 | Remember the saying "program = algorithm + data structure"? When writing ordinary business logic I never felt it quite fit, but reading the Linux code I finally see the truth in it.
56 | 
57 | Of the process-related data structures, I think two matter most. The first is `task_struct` (in `sched.h`), the "process descriptor" we keep mentioning: it identifies a process and stores its context. Understanding it gets you halfway there!
58 | 
59 | task_struct is a huge structure; the definition alone runs several hundred lines, with many optional features wrapped in `#ifdef`. Some of them only make sense later, so here are just a few parts:
60 | 
61 | ```c
62 | struct task_struct {
63 | 
64 | /* run state; see the TASK_RUNNING family of constants */
65 | volatile long state;
66 | 
67 | /* runtime flags; see the PF_* constants */
68 | unsigned int flags;
69 | 
70 | /* priorities, used by the scheduler */
71 | int prio, static_prio, normal_prio;
72 | 
73 | /* memory used by the process */
74 | struct mm_struct *mm, *active_mm;
75 | };
76 | ```
77 | 
78 | Still fuzzy on the details? Honestly, so am I, but at least we have the general picture! (If you want to poke at these fields on a live machine, see the little module sketched below.)
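To make this a bit more hands-on, here is a tiny experiment you can try once you have the headers for your running kernel installed. It is only a rough sketch of a throwaway module for a 2.6-era kernel; the module name, the message format and the choice to take the RCU read lock around the walk are my own, not something lifted from the kernel tree. It walks the task list with `for_each_process()` and prints a few of the `task_struct` fields we just looked at.

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

/* Dump a few task_struct fields for every task, once, at load time. */
static int __init taskdump_init(void)
{
	struct task_struct *p;

	rcu_read_lock();                /* the task list is RCU-protected */
	for_each_process(p) {
		printk(KERN_INFO "pid=%d comm=%s state=%ld prio=%d\n",
		       p->pid, p->comm, p->state, p->prio);
	}
	rcu_read_unlock();
	return 0;
}

static void __exit taskdump_exit(void)
{
	printk(KERN_INFO "taskdump: bye\n");
}

module_init(taskdump_init);
module_exit(taskdump_exit);
MODULE_LICENSE("GPL");
```

Build it with an ordinary out-of-tree module Makefile, `insmod` it, and the output shows up in `dmesg`. Watching `state` and `prio` change on a live system is the cheapest way I know to make these fields feel real.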
79 | 
80 | ## 4. Processes and Threads
81 | 
82 | Having covered processes, let's sort out the concept of a thread. A thread can be understood as a special kind of process: it has no independent resources (right, the *mm pointer above), it belongs to a process, and the threads of one process can share resources among themselves. Beyond that there is little difference.
83 | 
84 | In a C program we create processes with fork. For example:
85 | 
86 | ```c
87 | #include <stdio.h>
88 | #include <unistd.h>
89 | int main(int argc, const char* argv[]) {
90 | pid_t pid;
91 | printf("Hello, World! I am %d\n", getpid());
92 | for (int i=0;i<2;i++){
93 | pid = fork();
94 | if (pid == 0){
95 | printf("I am child\n");
96 | } else {
97 | printf("I am parent, my child is %d\n",pid);
98 | }
99 | }
100 | return 0;
101 | }
102 | ```
103 | 
104 | Here fork creates a brand-new process that copies the current process's entire context: register contents, stack, and memory (memory normally uses a Copy-On-Write scheme these days, but from the user's point of view it is fine to think of it as a full copy). Since the program counter is copied too, the child resumes from exactly the same point in the code.
105 | 
106 | The implementation of fork is in `kernel/fork.c`:
107 | 
108 | ```c
109 | long do_fork(unsigned long clone_flags,
110 | unsigned long stack_start,
111 | struct pt_regs *regs,
112 | unsigned long stack_size,
113 | int __user *parent_tidptr,
114 | int __user *child_tidptr)
115 | ```
116 | 
117 | Whether a thread or a process gets created depends on the flags passed in `clone_flags`.
118 | 
119 | The next post covers process scheduling.
120 | 
121 | References:
122 | 
123 | * [http://blog.csdn.net/hongchangfirst/article/details/7075026](http://blog.csdn.net/hongchangfirst/article/details/7075026)
124 | * [http://zh.wikipedia.org/wiki/DOS](http://zh.wikipedia.org/wiki/DOS)
125 | * "Linux Kernel Development"
126 | * "Understanding the Linux Kernel"
127 | * O(1) scheduler [http://en.wikipedia.org/wiki/O(1)_scheduler](http://en.wikipedia.org/wiki/O(1)_scheduler)
--------------------------------------------------------------------------------
/posts/ch3.md:
--------------------------------------------------------------------------------
1 | Linux Kernel Study, Part 3 - Process Scheduling
2 | =====
3 | ## 1. The Overall Scheduling Flow
4 | 
5 | Scheduling is the heart of the process subsystem; obviously, without scheduling we would not need processes in the first place! In part two of the previous post we wrote the simplest possible time-slice scheduler, giving every process a flat 100 milliseconds:
6 | 
7 | ```java
8 | while (true){
9 | processing = nextProcessing();
10 | processing.run();
11 | wait(100); //a 100 ms time slice
12 | processing.interrupt();
13 | }
14 | ```
15 | 
16 | So how does Linux actually do it? Let's start with the flow. The scheduling code lives in `sched.c`. This is the core of the core of Linux: it runs on hundreds of millions of machines and gets invoked countless times a second on each of them. A little exciting, isn't it? At last we know what "high-performance low-level code" looks like!
17 | 
18 | The central function in this file is `asmlinkage void __sched schedule(void)`; this is the scheduler proper. Only after reading and annotating it did I discover that plenty of annotated versions already exist, for example this article: [http://blog.csdn.net/zhoudaxia/article/details/7375836](http://blog.csdn.net/zhoudaxia/article/details/7375836), so I will not paste the code here. My own annotated copy is in [sched.c](https://github.com/code4craft/os-learning/blob/master/linux/kernel/sched.c) (starting around line 4079). Then again, without reading the source you would not even know which keywords to search for, so the annotation was a worthwhile exercise in itself.
19 | 
20 | Two things in this function matter most: `pick_next_task(rq);`, which selects the next runnable process and embodies the scheduling algorithm, and `context_switch(rq, prev, next);`, which is the famous "context switch".
21 | 
22 | ## 2. The Scheduling Algorithm
23 | 
24 | Our "100 ms algorithm" is of course extremely crude. Before looking at the real Linux scheduler, it is worth listing what a scheduling system has to worry about (my own unofficial, non-authoritative summary):
25 | 
26 | 1. Use the CPU as fully as possible: whenever some process can run, never let the CPU sit idle. Maximize CPU utilization.
27 | 2. Keep the response time of processes, interactive ones in particular, as short as possible.
28 | 3. Let the system administrator assign priorities so that important tasks run first.
29 | 4. Scheduling happens extremely often, so its own performance matters.
30 | 5. Spread work evenly across multiple cores, i.e. support Symmetric Multi-Processing (SMP).
31 | 
32 | Our "100 ms algorithm" fails goals 1 and 3, and it is not great for 2 either (some processes may not even need that long). What if we shrink the slice to 1 millisecond? Well, a "process switch" has a cost of its own, and switching that often would cost more than it saves. (A slightly less naive, priority-based toy version is sketched a little further below.)
33 | 
34 | In reality, because of its central position, even a tiny performance gain in the Linux scheduler is a huge gain for the whole industry, which makes it a favorite playground for algorithm experts. As a result the scheduler has changed remarkably fast: from the [O(n) scheduler](http://en.wikipedia.org/wiki/O(n)_scheduler) to the [O(1) scheduler](http://en.wikipedia.org/wiki/O\(1\)_scheduler), and then to [CFS (the Completely Fair Scheduler)](http://zh.wikipedia.org/wiki/%E5%AE%8C%E5%85%A8%E5%85%AC%E5%B9%B3%E6%8E%92%E7%A8%8B%E5%99%A8) in 2.6.23; enough to make your head spin!
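Before turning to the real algorithms, here is the toy loop from the last post pushed one small step further, purely to illustrate the priority-plus-timeslice idea mentioned above. The struct, the numbers and the helper name are all invented for this sketch; the kernel's real code looks nothing like it.

```c
#include <stdio.h>

#define NTASKS 3

/* A fake "process": higher prio wins, slice_ms is how long it may run. */
struct toy_task {
	const char *name;
	int prio;
	int slice_ms;
};

static struct toy_task tasks[NTASKS] = {
	{ "editor", 5, 100 },   /* interactive: high priority */
	{ "mp3",    3,  80 },
	{ "backup", 1,  40 },   /* batch job: low priority    */
};

/* Pick the highest-priority task with a plain O(n) scan,
 * which is essentially what the old O(n) scheduler did. */
static struct toy_task *pick_next_task(void)
{
	struct toy_task *best = &tasks[0];
	for (int i = 1; i < NTASKS; i++)
		if (tasks[i].prio > best->prio)
			best = &tasks[i];
	return best;
}

int main(void)
{
	for (int round = 0; round < 6; round++) {
		struct toy_task *t = pick_next_task();
		printf("run %-6s for %3d ms (prio %d)\n",
		       t->name, t->slice_ms, t->prio);
		t->prio--;      /* crude feedback: running costs you priority */
	}
	return 0;
}
```

Even this toy already has the two knobs the real schedulers argue about: who runs next (priority) and for how long (the slice). O(n), O(1) and CFS differ mainly in how cheaply and how fairly they turn those knobs.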
35 | 
36 | With the goals in mind, the real algorithms become easier to follow. Both the O(n) and the O(1) scheduler are timeslice-based. The basic idea: give each process a priority (I/O-bound, interactive processes get a higher one, CPU hogs get a lower one) and always run the highest-priority process next; at the same time give each process a time slice, adjusted dynamically per process, which is how long it runs each time it is picked. The difference between O(n) and O(1) is merely the time complexity of pulling the next process out of the priority queues. I will spare you the details.
37 | 
38 | "CFS", on the other hand, tracks execution time in a per-task "vruntime" and keeps the processes sorted in a red-black tree; the process with the smallest vruntime runs first, so its complexity is O(log n). Its code is in `kernel/sched_fair.c`.
39 | 
40 | There is also the notion of "real-time scheduling". Real-time tasks are the queue-jumpers: they take precedence over everything managed by CFS. The corresponding policies are `SCHED_FIFO` and `SCHED_RR`, visible in `sched.h`.
41 | 
42 | That is it for scheduling. The remaining details, such as the concrete CFS implementation, are covered very well in the book, so I will not repeat them here lest I write myself dizzy!
43 | 
44 | ## References:
45 | 
46 | * "Linux Kernel Development" (LKD)
47 | * The new RCU locking mechanism in the Linux 2.6 kernel [http://www.ibm.com/developerworks/cn/linux/l-rcu/](http://www.ibm.com/developerworks/cn/linux/l-rcu/)
48 | * Linux process scheduling (3): an analysis of process switching [http://blog.csdn.net/zhoudaxia/article/details/7375836](http://blog.csdn.net/zhoudaxia/article/details/7375836)
49 | * [http://blog.csdn.net/yunsongice/article/details/8547107](http://blog.csdn.net/yunsongice/article/details/8547107)
--------------------------------------------------------------------------------
/posts/ch4.md:
--------------------------------------------------------------------------------
1 | Linux Kernel Study, Part 4 - Memory Management
2 | ====
3 | 
4 | ## 1. Virtual Memory
5 | 
6 | The most fundamental concept in memory management is probably Virtual Memory (VM). It is a mechanism that the computer system (note that I did not say the operating system, because part of it lives in hardware) layers on top of physical storage: memory is accessed through virtual addresses (VA) that get translated into physical addresses (PA). It is virtual memory that makes the later machinery, address translation, page tables and so on, possible.
7 | 
8 | Reading this, my immediate question was: a physical address is already enough to locate and use a block of memory, so why do we need virtual addresses at all?
9 | 
10 | In fact, with only one program in memory the lack of virtual memory is not a big deal, DOS being the example again (and reportedly DOS really had no virtual memory). But once there are multiple processes, trouble appears: several processes sharing one physical memory is neither safe nor convenient. If A uses `0xb7001008` and B, having no way to know that, uses it too, chaos follows.
11 | 
12 | Virtual memory solves this. Every process sees the same virtual address space, while the physical memory actually used stays separate, and the process cannot even tell whether it is using virtual or physical addresses; it just uses them. This both simplifies programming and improves safety. A remarkably elegant design!
13 | 
14 | To sum up, the greatest value of virtual memory is **isolation and abstraction**.
15 | 
16 | ## 2. How Address Translation Works
17 | 
18 | To understand address translation we first need the concept of a "page". A page is simply a contiguous block of memory, and it is the smallest unit in which the operating system manages memory. More concretely, in Linux it looks like this:
19 | 
20 | ```c
21 | struct page{
22 | //status flags
23 | unsigned long flags;
24 | //reference counts
25 | atomic_t _count;
26 | atomic_t _mapcount;
27 | unsigned long private;
28 | struct address_space *mapping;
29 | pgoff_t index;
30 | struct list_head lru;
31 | //points to the virtual address
32 | void *virtual;
33 | };
34 | ```
35 | 
36 | The `page` structure stores the page's reference counts, its virtual address, and so on. Since this structure itself consumes memory, making pages too small means a lot of memory is wasted just on storing `page` structures, a bad deal. Making pages too large means pages are often only partly used, which we do not want either. On a 32-bit CPU a page is 4 KB.
37 | 
38 | With pages in place, address translation can happen. Before reading on, think for a moment: if we had to implement address translation ourselves, how would we do it?
39 | 
40 | It does not seem that hard, does it? Two parts; look, I have even written the pseudocode:
41 | 
42 | 1. Store the mapping from virtual addresses to physical addresses
43 | 
44 | page_table={virtual_address:physical_address}
45 | 
46 | 2. Whenever a pointer in the program touches memory, translate the address
47 | 
48 | physical_address=page_table[virtual_address]
49 | Simple, isn't it?
50 | 
51 | This is in fact pretty much what today's computer systems do. The difference is that these two operations are so frequent that a pure software implementation might not be fast enough, so both parts got some hardware help.
52 | 
53 | Translating VA to PA is done by a dedicated unit inside the CPU called the Memory Management Unit (MMU).
54 | 
55 | The structure that stores the mapping is called the page table. It lives in memory and is maintained by the operating system. But paying an extra memory access just to look up every memory access feels wrong, so the MMU also keeps a cache of recently used page-table entries: the legendary TLB (Translation Lookaside Buffer; academician Sun Zhongxiu's textbook at my university translates it as "快表", the "fast table").
56 | 
57 | So the final flow is:
58 | 
59 | 1. When the operating system creates a process, it allocates memory for it and updates the page table;
60 | 2. Before that process's instructions reach the CPU, the virtual addresses in them trigger the MMU's translation;
61 | 3. The MMU looks for the mapping in the TLB first, falls back to the page table in physical memory if it misses, and finally hands the translated physical address to the CPU.
62 | 
63 | This is also a case of the operating system influencing CPU design in return, which shows that the boundary between hardware and the OS is not fixed; some CPU instruction sets even include fairly high-level operations. In theory, the lower the layer the faster it is, so where a mechanism ends up mostly depends on how valuable and how general it is.
64 | 
65 | And that wraps up address translation: the operating system and the MMU shake hands, pleasure doing business! (A toy C version of the pseudocode above is sketched right below.)
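To make the two-line pseudocode concrete, here is a small user-space simulation of the lookup. Every number and name in it (the table size, the four-entry TLB, `translate()`) is made up for illustration; real hardware does a multi-level page-table walk and a real TLB is not a little linear array.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u     /* 4 KB pages, as in the text        */
#define NUM_PAGES 16u       /* a toy 64 KB virtual address space */
#define TLB_SLOTS 4u

static uint32_t page_table[NUM_PAGES];  /* virtual page number -> physical frame number */
static struct { uint32_t vpn, pfn; int valid; } tlb[TLB_SLOTS];

/* physical_address = page_table[virtual_address], with a TLB in front of it */
static uint32_t translate(uint32_t vaddr)
{
	uint32_t vpn = vaddr / PAGE_SIZE;
	uint32_t off = vaddr % PAGE_SIZE;

	for (unsigned i = 0; i < TLB_SLOTS; i++)        /* 1. TLB hit?            */
		if (tlb[i].valid && tlb[i].vpn == vpn)
			return tlb[i].pfn * PAGE_SIZE + off;

	uint32_t pfn = page_table[vpn];                 /* 2. walk the page table */
	unsigned slot = vpn % TLB_SLOTS;                /* 3. refill the TLB      */
	tlb[slot].vpn = vpn;
	tlb[slot].pfn = pfn;
	tlb[slot].valid = 1;
	return pfn * PAGE_SIZE + off;
}

int main(void)
{
	for (uint32_t vpn = 0; vpn < NUM_PAGES; vpn++)
		page_table[vpn] = NUM_PAGES - 1 - vpn;  /* some arbitrary mapping */

	uint32_t va = 2 * PAGE_SIZE + 8;
	printf("va 0x%x -> pa 0x%x\n", (unsigned)va, (unsigned)translate(va));
	printf("va 0x%x -> pa 0x%x (served from the TLB this time)\n",
	       (unsigned)va, (unsigned)translate(va));
	return 0;
}
```

The kernel's part in this picture is step 0, filling in `page_table` (and its multi-level variants) for every process, which is exactly what the next section pokes at.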
66 | 
67 | ## 3. Memory Management in Linux
68 | 
69 | The memory-allocation code in Linux lives in [`mm/page_alloc.c`](https://github.com/code4craft/os-learning/blob/master/linux/mm/page_alloc.c), and its central function is `struct page * __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, struct zonelist *zonelist, nodemask_t *nodemask)`. I annotated the parts I could follow, which was not much, and will come back for a closer look once I have been through everything else…
70 | 
71 | Linux memory management has many more details; I will list a few as notes to myself.
72 | 
73 | 1. Zones
74 | 
75 | Linux divides memory into several zones: ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM. DMA (Direct Memory Access) is a technique that lets I/O devices touch memory directly; some hardware can only do DMA to memory at particular addresses, and such memory needs to be marked.
76 | 
77 | 2. NUMA
78 | 
79 | NUMA (Non-Uniform Memory Access Architecture) is defined in contrast to UMA (Uniform Memory Access Architecture). Under UMA all processors share one pool of memory; NUMA goes the other way and binds CPUs to their own memory to speed things up. It reportedly pays off noticeably with more than eight cores.
80 | 
81 | 3. The slab
82 | 
83 | LKD does not explain the slab particularly well; it goes around in circles (maybe it is the translation). Much of what I found online simply copies the same description, so either I am too slow to get it or the authors were only excerpting; for articles like that I can only shrug.
84 | 
85 | So what problem does the slab actually solve? Some data structures in the kernel are used all the time, inodes for example, and they are constantly initialized and destroyed. But initializing a structure has a cost, so a better approach is to keep the old ones around: when you need a new one, grab a ready-made object, tweak its contents, and off you go! "Slab" here is the same word as a slab of raw material; does that make it easier to picture?
86 | 
87 | In implementation, the slab allocator sets aside a region for each class of object and stores many objects of that class in it; creating and destroying then becomes little more than moving a pointer within that region. We could just as well call it an "object pool" or a "structure pool". The slab code is described in great detail in LKD, so I will not repeat it here.
88 | 
89 | References:
90 | 
91 | * LKD in these posts refers to "Linux Kernel Development" (《Linux内核设计与实现》)
92 | * "Computer Systems: A Programmer's Perspective"
93 | * [http://learn.akae.cn/media/ch17s04.html](http://learn.akae.cn/media/ch17s04.html)
94 | * [http://www.cnblogs.com/shanyou/archive/2009/12/26/1633052.html](http://www.cnblogs.com/shanyou/archive/2009/12/26/1633052.html)
95 | * The Slab Allocator: An Object-Caching Kernel Memory Allocator [http://www.usenix.org/publications/library/proceedings/bos94/full_papers/bonwick.ps](http://www.usenix.org/publications/library/proceedings/bos94/full_papers/bonwick.ps)
96 | 
--------------------------------------------------------------------------------
/src/page.h:
--------------------------------------------------------------------------------
1 | struct page{
2 | //status flags
3 | unsigned long flags;
4 | //reference counts
5 | atomic_t _count;
6 | atomic_t _mapcount;
7 | unsigned long private;
8 | struct address_space *mapping;
9 | pgoff_t index;
10 | struct list_head lru;
11 | //points to the virtual address
12 | void *virtual;
13 | };
--------------------------------------------------------------------------------