├── .gitignore
├── COPYING.MIT
├── NOTICE
├── README.md
├── build-templates
│   ├── Makefile.in
│   ├── bblayers-conf.in
│   ├── local-conf.in
│   ├── mcf-status.in
│   └── oe-init-build-env.in
├── mcf
├── scripts
│   ├── build.sh
│   └── prerequisites.sh
└── weboslayers.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Copyright (c) 2012-2014 LG Electronics, Inc.

# A leading / matches only entries in the top-level directory
# A trailing / matches only directories
# A trailing / is not added to directories which are often replaced with a symlink

# Common Eclipse project files
.project
.cproject
.pc

# Editor backup files
*~

# Build products worth keeping between clean builds
/sstate-cache
/buildhistory
/downloads
/cache

# Other build products
/BUILD/
/BUILD-ARTIFACTS/
/Makefile
TAGS
/bitbake.lock
/mcf.status
/oe-init-build-env
/conf/
__pycache__/
tmp/
patches/

# Artifacts from generating build_changes.log
build_changes.log
latest_project_baselines*txt

# Checkouts managed by mcf
/bitbake
/oe-core
/meta-*

# Local override file
/webos-local.conf

--------------------------------------------------------------------------------
/COPYING.MIT:
--------------------------------------------------------------------------------
Copyright (c) 2013 LG Electronics, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--------------------------------------------------------------------------------
/NOTICE:
--------------------------------------------------------------------------------
Copyright (c) 2013 LG Electronics, Inc.

This software contains code licensed as described in COPYING.MIT

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
build-webos
===========

Summary
-------
Build Open webOS images

Description
-----------
This repository contains the top level code that aggregates the various [OpenEmbedded](http://openembedded.org) layers into a whole from which Open webOS images can be built.

Cloning
=======
To access Git repositories, you may need to register your SSH key with GitHub.
For help on doing this, visit [Generating SSH Keys](https://help.github.com/articles/generating-ssh-keys).

Set up build-webos by cloning its Git repository:

    git clone https://github.com/openwebos/build-webos.git

Note: If you populate it by downloading an archive (zip or tar.gz file), then you will get the following error when you run mcf:

    fatal: Not a git repository (or any parent up to mount point).
    Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).


Prerequisites
=============
Before you can build, you will need some tools. If you try to build without them, bitbake will fail a sanity check and tell you what's missing, but not really how to get the missing pieces. On Ubuntu, you can force all of the missing pieces to be installed by entering:

    $ sudo scripts/prerequisites.sh

Also, the bitbake sanity check will issue a warning if you're not running under Ubuntu 11.04 or 12.04.1 LTS, either 32-bit or 64-bit.


Building
========
To configure the build for the qemux86 emulator and to fetch the sources:

    $ ./mcf -p 0 -b 0 qemux86

The `-p 0` and `-b 0` options set the make and bitbake parallelism values to the number of CPU cores found on your computer.

To kick off a full build of Open webOS, make sure you have at least 40GB of disk space available and enter the following:

    $ make webos-image

This may take in the neighborhood of two hours on a multi-core workstation with a fast disk subsystem and lots of memory, or many more hours on a laptop with less memory and slower disks, or in a VM.
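The "0 means autodetect" rule for `-p` and `-b` can be sketched as a small shell function (illustrative only, not mcf's actual implementation; it assumes the coreutils `nproc` tool is available):

```shell
# Sketch of the -p 0 / -b 0 rule described above: a value of 0 means
# "use the number of CPU cores found on this computer"; any other
# value is taken as-is.  (Illustration only -- not mcf's real code.)
resolve_parallelism() {
    if [ "$1" -ne 0 ]; then
        echo "$1"
    else
        nproc
    fi
}

make_jobs=$(resolve_parallelism 0)        # ./mcf -p 0  -> core count
bitbake_threads=$(resolve_parallelism 4)  # explicit value kept as-is
```

So `./mcf -p 0 -b 0 qemux86` resolves both parallelism values to the machine's core count, while a non-zero value is used unchanged.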


Running
=======
To run the resulting build in the qemux86 emulator, enter:

    $ cd BUILD-qemux86
    $ source bitbake.rc
    $ runqemu webos-image qemux86 qemuparams="-m 512" kvm serial

You will be prompted by sudo for a password:

    Assuming webos-image really means .../BUILD-qemux86/deploy/images/webos-image-qemux86.ext3
    Continuing with the following parameters:
    KERNEL: [.../BUILD-qemux86/deploy/images/bzImage-qemux86.bin]
    ROOTFS: [.../BUILD-qemux86/deploy/images/webos-image-qemux86.ext3]
    FSTYPE: [ext3]
    Setting up tap interface under sudo
    [sudo] password for :

A window entitled QEMU will appear with a login prompt. Don't do anything. A bit later, the Open webOS lock screen will appear. Use your mouse to drag up the yellow lock icon. Welcome to (emulated) Open webOS!

To go into Card View after launching an app, press your keyboard's `HOME` key.

To start up a console on the emulator, don't attempt to log in at the prompt that appears in the console from which you launched runqemu. Instead, ssh into it as root (no password):

    $ ssh root@192.168.7.2
    root@192.168.7.2's password:
    root@qemux86:~#

Each new image appears to ssh as a new machine with the same IP address as the previous one. ssh will therefore warn you of a potential "man-in-the-middle" attack and not allow you to connect. To resolve this, remove the stale ssh key by entering:

    $ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.7.2

then re-enter the ssh command.

To shut down the emulator, start up a console and enter:

    root@qemux86:~# halt

The connection will be dropped:

    Broadcast message from root@qemux86
        (/dev/pts/0) at 18:39 ...

    The system is going down for halt NOW!
    Connection to 192.168.7.2 closed by remote host.
    Connection to 192.168.7.2 closed.

and the QEMU window will close.
(If this doesn't happen, just close the QEMU window manually.) Depending on how long your emulator session lasted, you may be prompted again by sudo for a password:

    [sudo] password for :
    Set 'tap0' nonpersistent
    Releasing lockfile of preconfigured tap device 'tap0'


Images
======
The following images can be built:

- `webos-image`: The production Open webOS image.
- `webos-image-devel`: Adds various development tools to `webos-image`, including gdb and strace. See `packagegroup-core-tools-debug` and `packagegroup-core-tools-profile` in `oe-core` and `packagegroup-webos-test` in `meta-webos` for the complete list.


Cleaning
========
To blow away the build artifacts and prepare to do a clean build, you can remove the build directory and recreate it by typing:

    $ rm -rf BUILD-qemux86
    $ ./mcf.status

What this retains are the caches of downloaded source (under `./downloads`) and shared state (under `./sstate-cache`). These caches will save you a tremendous amount of time during development as they facilitate incremental builds, but can cause seemingly inexplicable behavior when corrupted. If you experience strangeness, use the command presented below to remove the shared state of suspicious components. In extreme cases, you may need to remove the entire shared state cache. See [here](http://www.yoctoproject.org/docs/latest/poky-ref-manual/poky-ref-manual.html#shared-state-cache) for more information on it.
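The clean-build flow above can be sketched as a shell session. This uses a scratch directory as a stand-in for a real build-webos tree, so it is runnable anywhere; in a real checkout you would run `./mcf.status` rather than `mkdir`:

```shell
# Scratch stand-in for the top of a build-webos tree (illustrative only).
top=$(mktemp -d)
mkdir -p "$top/BUILD-qemux86" "$top/downloads" "$top/sstate-cache"
touch "$top/BUILD-qemux86/stale-artifact" "$top/downloads/cached-tarball"

# Blow away the build artifacts and recreate the build directory
# (in a real tree: rm -rf BUILD-qemux86 && ./mcf.status).
rm -rf "$top/BUILD-qemux86"
mkdir "$top/BUILD-qemux86"

# The downloads and sstate-cache caches survive untouched,
# which is what makes the next build incremental.
ls "$top/downloads"
```

The point of the sketch is that only the build directory is removed; the two cache directories next to it are deliberately left alone.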


Building Individual Components
==============================
To build an individual component, enter:

    $ make <component>

To clean a component's build artifacts under BUILD-qemux86, enter:

    $ make clean-<component>

To remove the shared state for a component as well as its build artifacts to ensure it gets rebuilt afresh from its source, enter:

    $ make cleanall-<component>

Adding new layers
=================
The script automates the process of adding new OE layers to the build environment. The information required to integrate a new layer is: the layer name, OE priority, repository, and an identifier in the form of a branch, commit, or tag id. It is also possible to reference a layer from a local storage area. The details are documented in weboslayers.py.

Copyright and License Information
=================================
Unless otherwise specified, all content, including all source code files and
documentation files in this repository are:

Copyright (c) 2008-2013 LG Electronics, Inc.

Unless otherwise specified or set forth in the NOTICE file, all content,
including all source code files and documentation files in this repository are:
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this content except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

--------------------------------------------------------------------------------
/build-templates/Makefile.in:
--------------------------------------------------------------------------------
# DO NOT MODIFY! This script is generated by mcf. Changes made
# here will be lost. The source for this file is in build-templates/Makefile.in.
#
# Copyright (c) 2008-2014 LG Electronics, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

srcdir := @srcdir@
machines := @machines@

TIME := time

webos-image:
all: webos-image

force:;:

oe-init-build-env mcf.status Makefile conf/local.conf conf/bblayers.conf: \
	$(srcdir)/build-templates/oe-init-build-env.in \
	$(srcdir)/build-templates/mcf-status.in \
	$(srcdir)/build-templates/Makefile.in \
	$(srcdir)/build-templates/local-conf.in \
	$(srcdir)/build-templates/bblayers-conf.in
	./mcf.status

# everything else is already set by oe-init-build-env
BITBAKE := . $(srcdir)/oe-init-build-env && bitbake

### intended for command line use
BBFLAGS =

%:; for MACHINE in $(machines) ; do $(BITBAKE) $(BBFLAGS) $*; done

define convenience
$(1)-$(2)-%:; export MACHINE=$(1) && $(TIME) $(MAKE) $(2)-$$*
$(1)-%:; for MACHINE in $(machines) ; do $$(BITBAKE) $(BBFLAGS) -c $(1) $$*; done
endef

conveniences := \
	clean \
	cleanall \
	cleansstate \
	compile \
	configure \
	fetch \
	fetchall \
	install \
	listtasks \
	package \
	patch \
	patchall \
	unpack \
	unpackall \

$(foreach c, $(conveniences),$(eval $(call convenience,$(c))))

# In most cases, "install-foo" is a coded request
# for 'bitbake -c install foo'. However, at least one component has a
# name prefixed by "install-". Hence the need for the "just-" target
# which lets us name "just-install-first" in order to request
# "install-first". (Yes, I'm sorry it's complicated.)
just-%:; $(BITBAKE) $(BBFLAGS) $*

--------------------------------------------------------------------------------
/build-templates/bblayers-conf.in:
--------------------------------------------------------------------------------
# Copyright (c) 2008-2014 LG Electronics, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "5"

TMPDIR := "@abs_srcdir@"
BBPATH = "${TOPDIR}:${TMPDIR}"

# Pull in two optional configuration files to allow the user to override
# component source directories, shared state locations, etc.
#
# webos-global.conf (in the user's home directory) applies overrides to
# all clones of openwebos/build-webos in the user's account.
#
# webos-local.conf resides at the top of the build-webos repo and applies
# overrides on a per-repo basis.
#
# Including both here saves the user remembering to chain to the local
# file from the global one, avoids them forgetting to do so, and makes
# the existence of a global override file optional.
#
# The location of the shared-state cache can be moved by overriding
# DL_DIR and SSTATE_DIR.
#
# The meta-webos layer can be moved out-of-tree by overriding WEBOS_LAYER.
# Note that running mcf will still clone and checkout a meta-webos directory
# in the root of the repo, but "make" will ignore it and use the overridden
# location for recipes etc. The first time you move a meta-webos layer out of
# tree may invalidate your shared state information, as a result of recloning
# the meta-webos layer.

include ${HOME}/webos-global.conf
include ${TOPDIR}/webos-local.conf


--------------------------------------------------------------------------------
/build-templates/local-conf.in:
--------------------------------------------------------------------------------
# DO NOT MODIFY! This script is generated by configure. Changes made
# here will be lost. Source for this file is in local-conf.in.

# Copyright (c) 2008-2014 LG Electronics, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This file is your local configuration file and is where all local user settings
# are placed. The comments in this file give some guide to the options a new user
# to the system might want to change but pretty much any configuration option can
# be set in this file. More adventurous users can look at local.conf.extended
# which contains other examples of configuration which can be placed in this file
# but new users likely won't need any of them initially.
#
# Lines starting with the '#' character are commented out and in some cases the
# default values are provided as comments to show people example syntax. Enabling
# the option is a question of removing the # character and making any change to the
# variable as required.
#
# export DISTRO variable in your shell environment
#
# This sets the default distro to be whatever is selected by mcf if no other distro
# is selected:
DISTRO ??= "@distro@"

#
# Parallelism Options
#
# These two options control how much parallelism BitBake should use. The first
# option determines how many tasks bitbake should run in parallel. The default
# value of 2x the number of cores is that recommended by OE -- see:
# http://www.yoctoproject.org/docs/1.4/ref-manual/ref-manual.html#var-BB_NUMBER_THREADS
# This may appear counter-intuitive, but that this is a good choice was confirmed
# by testing whose results are reported here: http://wiki.lgsvl.com/x/QYW0BQ and
# in [ES-469].
#
BB_NUMBER_THREADS ?= "${@ @bb_number_threads@ if @bb_number_threads@ != 0 else 2*int(bb.utils.cpu_count())}"

#
# The second option controls how many processes make should run in parallel when
# running compile tasks. The default value of 2x the number of cores is that
# recommended by OE -- see:
# http://www.yoctoproject.org/docs/1.4/ref-manual/ref-manual.html#var-BB_NUMBER_THREADS
# This may appear counter-intuitive, but that this is a good choice was confirmed
# by testing whose results are reported here: http://wiki.lgsvl.com/x/QYW0BQ and
# in [ES-469].
#
PARALLEL_MAKE ?= "-j ${@ @parallel_make@ if @parallel_make@ != 0 else 2*int(bb.utils.cpu_count())}"

#
# Building with icecc is enabled by default. The following configures the webOS image
# to build with icecc.
#
# To disable ICECC, set ICECC_DISABLED to 1
ICECC_DISABLED ?= "@icecc_disable_enable@"
# Number of parallel make threads
ICECC_PARALLEL_MAKE ?= "-j ${@ @icecc_parallel_make@ if @icecc_parallel_make@ != 0 else 4*int(bb.utils.cpu_count())}"
# To use an alternative icecc installation, specify the location of the icecc executable below.
@alternative_icecc_installation@
# To exclude components/recipes from the icecc build, add their names to the space separated list below
@icecc_user_package_blacklist@
# To exclude a class of components/recipes from the icecc build, add its name to the space separated list below; i.e. native
@icecc_user_class_blacklist@
# To force components/recipes to build with icecc, add their names to the space separated list below
@icecc_user_package_whitelist@
# To specify an alternative script for icecc environment setup, specify the script below.
@icecc_environment_script@

#
# Machine Selection
#
# You need to select a specific machine to target the build with. There is a selection
# of emulated machines available which can boot and run in the QEMU emulator:
#
# export MACHINE variable in your shell environment
#
# This sets the default machine to be qemux86 if no other machine is selected:
MACHINE ??= "qemux86"

#
# Where to place downloads
#
# During a first build the system will download many different source code tarballs
# from various upstream projects. This can take a while, particularly if your network
# connection is slow. These are all stored in DL_DIR. When wiping and rebuilding you
# can preserve this directory to speed up this part of subsequent builds. This directory
# is safe to share between multiple builds on the same machine too.
#
# The default is a downloads directory under TOPDIR which is the build directory.
#
DL_DIR ?= "${TOPDIR}/downloads"

#
# Where to place shared-state files
#
# BitBake has the capability to accelerate builds based on previously built output.
# This is done using "shared state" files which can be thought of as cache objects
# and this option determines where those files are placed.
#
# You can wipe out TMPDIR leaving this directory intact and the build would regenerate
# from these files if no changes were made to the configuration. If changes were made
# to the configuration, only shared state files where the state was still valid would
# be used (done using checksums).
#
# The default is a sstate-cache directory under TOPDIR.
#
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"

#
# Where to place the build output
#
# This option specifies where the bulk of the building work should be done and
# where BitBake should place its temporary files and output. Keep in mind that
# this includes the extraction and compilation of many applications and the toolchain
# which can use gigabytes of hard disk space.
#
# The default is a tmp directory under TOPDIR.
#
TMPDIR = "${TOPDIR}/BUILD"


#
# Package Management configuration
#
# This variable lists which packaging formats to enable. Multiple package backends
# can be enabled at once and the first item listed in the variable will be used
# to generate the root filesystems.
# Options are:
#  - 'package_deb' for debian style deb files
#  - 'package_ipk' for ipk files are used by opkg (a debian style embedded package manager)
#  - 'package_rpm' for rpm style packages
# E.g.: PACKAGE_CLASSES ?= "package_rpm package_deb package_ipk"
# We default to ipk:
PACKAGE_CLASSES ?= "package_ipk"

#
# SDK/ADT target architecture
#
# This variable specifies the architecture to build SDK/ADT items for and means
# you can build the SDK packages for architectures other than the machine you are
# running the build on (i.e. building i686 packages on an x86_64 host).
# Supported values are i686 and x86_64
#
# Using ??= so that it can be overridden in webos-local.conf without having to
# resort to using _forcevariable.
SDKMACHINE ??= "i686"

#
# Extra image configuration defaults
#
# The EXTRA_IMAGE_FEATURES variable allows extra packages to be added to the generated
# images. Some of these options are added to certain image types automatically. The
# variable can contain the following options:
#  "dbg-pkgs" - add -dbg packages for all installed packages
#               (adds symbol information for debugging/profiling)
#  "dev-pkgs" - add -dev packages for all installed packages
#               (useful if you want to develop against libs in the image)
#  "tools-sdk" - add development tools (gcc, make, pkgconfig etc.)
#  "tools-debug" - add debugging tools (gdb, strace)
#  "tools-profile" - add profiling tools (oprofile, exmap, lttng, valgrind (x86 only))
#  "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
#  "debug-tweaks" - make an image suitable for development
#                   e.g. ssh root access has a blank password
# There are other application targets that can be used here too, see
# meta/classes/image.bbclass and meta/classes/core-image.bbclass for more details.
EXTRA_IMAGE_FEATURES ?= ""

#
# Additional image features
#
# The following is a list of additional classes to use when building images which
# enable extra features. Some available options which can be included in this variable
# are:
#   - 'buildhistory' collect statistics from build artifacts
#   - 'buildstats' collect build statistics
#   - 'image-mklibs' to reduce shared library files size for an image
#   - 'image-prelink' in order to prelink the filesystem image
#   - 'image-swab' to perform host system intrusion detection
# NOTE: if listing mklibs & prelink both, then make sure mklibs is before prelink
# NOTE: mklibs also needs to be explicitly enabled for a given image, see local.conf.extended
USER_CLASSES ?= "@buildhistory_class@ buildstats image-mklibs"

BUILDHISTORY_DIR ?= "${TOPDIR}/buildhistory"
BUILDHISTORY_COMMIT ?= "@buildhistory_enabled@"
@buildhistory_author_assignment@

#
# Runtime testing of images
#
# The build system can test booting virtual machine images under qemu (an emulator)
# after any root filesystems are created and run tests against those images. To
# enable this uncomment this line
#IMAGETEST = "qemu"
#
# This variable controls which tests are run against virtual images if enabled
# above. The following would enable bat, boot the test case under the sanity suite
# and perform toolchain tests
#TEST_SCEN = "sanity bat sanity:boot toolchain"
#
# Because of the QEMU booting slowness issue (see bug #646 and #618), the
# autobuilder may suffer a timeout issue when running sanity tests. We introduce
# the variable TEST_SERIALIZE here to reduce the time taken by the sanity tests.
# It is set to 1 by default, which will boot the image and run cases in the same
# image without rebooting or killing the machine instance. If it is set to 0, the
# image will be copied and tested for each case, which will take longer but be
# more precise.
#TEST_SERIALIZE = "1"

#
# Interactive shell configuration
#
# Under certain circumstances the system may need input from you and to do this it
# can launch an interactive shell. It needs to do this since the build is
# multithreaded and needs to be able to handle the case where more than one parallel
# process may require the user's attention. The default is to iterate over the available
# terminal types to find one that works.
#
# Examples of the occasions this may happen are when resolving patches which cannot
# be applied, to use the devshell or the kernel menuconfig
#
# Supported values are auto, gnome, xfce, rxvt, screen, konsole (KDE 3.x only), none
# Note: currently, Konsole support only works for KDE 3.x due to the way
# newer Konsole versions behave
#OE_TERMINAL = "auto"
# By default disable interactive patch resolution (tasks will just fail instead):
PATCHRESOLVE = "noop"

# PREMIRROR?
@premirror_assignment@
@premirror_inherit@

# network or no
BB_NO_NETWORK := "@no_network@"

# premirror only?
BB_FETCH_PREMIRRORONLY := "@fetchpremirroronly@"

# mirror tarballs
BB_GENERATE_MIRROR_TARBALLS := "@generatemirrortarballs@"

#
# Shared-state files from other locations
#
# As mentioned above, shared state files are prebuilt cache data objects which can
# be used to accelerate build time. This variable can be used to configure the system
# to search other mirror locations for these objects before it builds the data itself.
#
# This can be a filesystem directory, or a remote url such as http or ftp. These
# would contain the sstate-cache results from previous builds (possibly from other
# machines). This variable works like fetcher MIRRORS/PREMIRRORS and points to the
# cache locations to check for the shared objects.
#SSTATE_MIRRORS ?= "\
#file://.* http://someserver.tld/share/sstate/PATH \n \
#file://.* file:///some/local/dir/sstate/PATH"
@sstatemirror_assignment@

# CONF_VERSION is increased each time build/conf/ changes incompatibly and is used to
# track the version of this file when it was generated. This can safely be ignored if
# this doesn't mean anything to you.
CONF_VERSION = "1"

# local.conf.sample.extended starts here.
# BBMASK is a regular expression that can be used to tell BitBake to ignore
# certain recipes.
#BBMASK = ""

# eglibc configurability is used to reduce a minimal image's size.
# All supported eglibc options are listed in DISTRO_FEATURES_LIBC
# and disabled by default. Uncomment and copy the DISTRO_FEATURES_LIBC
# and DISTRO_FEATURES definitions to local.conf to enable the options.
#DISTRO_FEATURES_LIBC = "ipv6 libc-backtrace libc-big-macros libc-bsd libc-cxx-tests libc-catgets libc-charsets libc-crypt \
#                        libc-crypt-ufc libc-db-aliases libc-envz libc-fcvt libc-fmtmsg libc-fstab libc-ftraverse \
#                        libc-getlogin libc-idn libc-inet libc-inet-anl libc-libm libc-libm-big libc-locales libc-locale-code \
#                        libc-memusage libc-nis libc-nsswitch libc-rcmd libc-rtld-debug libc-spawn libc-streams libc-sunrpc \
#                        libc-utmp libc-utmpx libc-wordexp libc-posix-clang-wchar libc-posix-regexp libc-posix-regexp-glibc \
#                        libc-posix-wchar-io"

#DISTRO_FEATURES = "alsa bluetooth ext2 irda pcmcia usbgadget usbhost wifi nfs zeroconf pci ${DISTRO_FEATURES_LIBC}"

# If you want to get an image based on gtk+directfb without x11, please copy this variable to build/conf/local.conf
#DISTRO_FEATURES = "alsa argp bluetooth ext2 irda largefile pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g directfb ${DISTRO_FEATURES_LIBC}"

# ENABLE_BINARY_LOCALE_GENERATION controls the generation of binary locale
# packages at build time using qemu-native. Disabling it (by setting it to 0)
# will save some build time at the expense of breaking i18n on devices with
# less than 128MB RAM.
#ENABLE_BINARY_LOCALE_GENERATION = "1"

# Set GLIBC_GENERATE_LOCALES to the locales you wish to generate should you not
# wish to perform the time-consuming step of generating all LIBC locales.
# NOTE: If removing en_US.UTF-8 you will also need to uncomment, and set
# appropriate values for IMAGE_LINGUAS and LIMIT_BUILT_LOCALES
# WARNING: this may break localisation!
#GLIBC_GENERATE_LOCALES = "en_GB.UTF-8 en_US.UTF-8"
# See message above as to whether setting these is required
#IMAGE_LINGUAS ?= "en-gb"
#LIMIT_BUILT_LOCALES ?= "POSIX en_GB"

# The following are used to control options related to debugging.
311 | # 312 | # Uncomment this to change the optimization to make debugging easer, at the 313 | # possible cost of performance. 314 | # DEBUG_BUILD = "1" 315 | # 316 | # Uncomment this to disable the stripping of the installed binaries 317 | # INHIBIT_PACKAGE_STRIP = "1" 318 | # 319 | # Uncomment this to disable the split of the debug information into -dbg files 320 | # INHIBIT_PACKAGE_DEBUG_SPLIT = "1" 321 | # 322 | # When splitting debug information, the following controls the results of the 323 | # file splitting. 324 | # 325 | # .debug (default): 326 | # When splitting the debug information will be placed into 327 | # a .debug directory in the same dirname of the binary produced: 328 | # /bin/foo -> /bin/.debug/foo 329 | # 330 | # debug-file-directory: 331 | # When splitting the debug information will be placed into 332 | # a central debug-file-directory, /usr/lib/debug: 333 | # /bin/foo -> /usr/lib/debug/bin/foo.debug 334 | # 335 | # Any source code referenced in the debug symbols will be copied 336 | # and made available within the /usr/src/debug directory 337 | # 338 | #PACKAGE_DEBUG_SPLIT_STYLE = '.debug' 339 | # PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory' 340 | 341 | # Uncomment these to build a package such that you can use gprof to profile it. 342 | # NOTE: This will only work with 'linux' targets, not 343 | # 'linux-uclibc', as uClibc doesn't provide the necessary 344 | # object files. Also, don't build glibc itself with these 345 | # flags, or it'll fail to build. 346 | # 347 | # PROFILE_OPTIMIZATION = "-pg" 348 | # SELECTED_OPTIMIZATION = "${PROFILE_OPTIMIZATION}" 349 | # LDFLAGS =+ "-pg" 350 | 351 | # TCMODE controls the characteristics of the generated packages/images by 352 | # telling poky which toolchain 'profile' to use. 353 | # 354 | # The default is "default" 355 | # Use "external-MODE" to use the precompiled external toolchains where MODE 356 | # is the type of external toolchain to use e.g. eabi. 
You need to ensure 357 | # the toolchain you want to use is included in an appropriate layer 358 | # TCMODE = "external-eabi" 359 | 360 | # mklibs library size optimization is more useful for smaller images, 361 | # and less useful for bigger images. Also, mklibs library optimization 362 | # can break ABI compatibility, so it should not be applied to 363 | # images which are to be extended or upgraded later. 364 | # This enables mklibs library size optimization just for the specified images. 365 | #MKLIBS_OPTIMIZED_IMAGES ?= "core-image-minimal" 366 | # This enables mklibs library size optimization for all images. 367 | #MKLIBS_OPTIMIZED_IMAGES ?= "all" 368 | 369 | # Uncomment this if your host distribution provides the help2man tool. 370 | #ASSUME_PROVIDED += "help2man-native" 371 | 372 | # This value is currently used by pseudo to determine if the recipe should 373 | # build both the 32-bit and 64-bit wrapper libraries on a 64-bit build system. 374 | # 375 | # Pseudo will attempt to determine if a 32-bit wrapper is necessary, but 376 | # it doesn't always guess properly. If you have 32-bit executables on 377 | # your 64-bit build system, you likely want to set this to "0", 378 | # otherwise you could end up with incorrect file attributes on the 379 | # target filesystem. 380 | # 381 | # The default is to not build 32-bit libs on 64-bit systems; uncomment this 382 | # if you need the 32-bit libs. 383 | #NO32LIBS = "0" 384 | 385 | # Uncomment the following lines to enable multilib builds 386 | #require conf/multilib.conf 387 | #MULTILIBS = "multilib:lib32" 388 | #DEFAULTTUNE_virtclass-multilib-lib32 = "x86" 389 | 390 | # The network based PR service host and port. 391 | # Uncomment the following lines to enable the PR service. 392 | # Set PRSERV_HOST to 'localhost' and PRSERV_PORT to '0' to automatically 393 | # start a local PR service. 394 | # Set other values to use a remote PR service.
395 | #PRSERV_HOST = "localhost" 396 | #PRSERV_PORT = "0" 397 | 398 | # Additional image generation features 399 | # 400 | # The following is a list of classes to import for use in the generation of images; 401 | # one example class is image_types_uboot: 402 | # IMAGE_CLASSES = " image_types_uboot" 403 | 404 | # Incremental rpm image generation: by default the rootfs is completely 405 | # removed and re-created on the second generation, but with 406 | # INC_RPM_IMAGE_GEN = "1", the rpm-based rootfs is kept and updated in 407 | # place (some packages removed/added). NOTE: This is not recommended 408 | # when you want to create a production rootfs. 409 | #INC_RPM_IMAGE_GEN = "1" 410 | 411 | # This is a list of packages that require a commercial license to ship in a 412 | # product. If shipped as part of an image, these packages may have licensing 413 | # implications, so they are disabled by default. To enable them, 414 | # uncomment the lines below as appropriate. 415 | #LICENSE_FLAGS_WHITELIST = "commercial_gst-fluendo-mp3 \ 416 | # commercial_gst-openmax \ 417 | # commercial_gst-plugins-ugly \ 418 | # commercial_lame \ 419 | # commercial_libmad \ 420 | # commercial_libomxil \ 421 | # commercial_mpeg2dec \ 422 | # commercial_qmmp" 423 | 424 | 425 | # 426 | # Disk space monitor: takes action when the disk space or the number of 427 | # free inodes is running low. It is enabled when BB_DISKMON_DIRS is set. 428 | # 429 | # Set the directories to monitor for disk usage. If more than one 430 | # directory is mounted on the same device, then only one of them is 431 | # monitored, since the monitor works per device. 432 | # The format is: 433 | # "action,directory,minimum_space,minimum_free_inode" 434 | # 435 | # The "action" must be set and should be one of: 436 | # ABORT: Immediately abort the build 437 | # STOPTASKS: No new tasks will be executed; the build stops once 438 | # the running tasks have finished.
439 | # WARN: show warnings (see BB_DISKMON_WARNINTERVAL for more information) 440 | # 441 | # The "directory" must be set; any directory is OK. 442 | # 443 | # Either "minimum_space" or "minimum_free_inode" (or both of them) 444 | # should be set, otherwise the monitor is not enabled. The unit 445 | # can be G, M, K or none, but do NOT use GB, MB or KB 446 | # (B is not needed). 447 | #BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K" 448 | # 449 | # Set the disk space and inode warning intervals (only used when the 450 | # action is "WARN"). The unit can be G, M, or K, but do NOT use 451 | # GB, MB or KB (B is not needed). The format is: 452 | # "disk_space_interval,disk_inode_interval". The default value is 453 | # "50M,5K", which means a warning is issued when the free space drops 454 | # below the minimum space (or inodes), and repeated every time the free 455 | # disk space drops by a further 50M (or the free inodes drop by 5K). 456 | #BB_DISKMON_WARNINTERVAL = "50M,5K" 457 | 458 | # Archiving source code configuration 459 | # 460 | # The following variables control which files to archive and the type of archive to generate.
461 | # There are three basic class definitions of common operations that might be desired and these 462 | # can be enabled by uncommenting one of the following lines: 463 | # 464 | # INHERIT += "archive-original-source" 465 | # INHERIT += "archive-patched-source" 466 | #INHERIT =+ "archive-configured-source" 467 | # 468 | # Type of archive: 469 | # SOURCE_ARCHIVE_PACKAGE_TYPE = 'srpm' 470 | #SOURCE_ARCHIVE_PACKAGE_TYPE ?= 'tar' 471 | # 472 | # Whether to include WORKDIR/temp, .bb and .inc files: 473 | # 'logs_with_scripts' include WORKDIR/temp directory and .bb and .inc files 474 | # 'logs' only include WORKDIR/temp 475 | # ARCHIVER_MODE[log_type] = "logs logs_with_scripts" 476 | # Alternatively, the combined archiver behaviour can be configured 477 | # through the individual ARCHIVER_MODE flags, which can be 478 | # enabled by uncommenting the following lines: 479 | # ARCHIVER_MODE[filter] ?= "yes no" 480 | # Filter packages according to license 481 | #ARCHIVER_MODE ?= "original" 482 | #ARCHIVER_MODE[type] ?= "tar" 483 | #ARCHIVER_MODE[log_type] ?= "logs_with_scripts" 484 | #ARCHIVER_MODE[filter] ?= "no" 485 | #ARCHIVER_CLASS = "${@'archive-${ARCHIVER_MODE}-source' if ARCHIVER_MODE != 'none' else ''}" 486 | #INHERIT += "${ARCHIVER_CLASS}" 487 | 488 | # Other local settings 489 | TCLIBCAPPEND := "" 490 | -------------------------------------------------------------------------------- /build-templates/mcf-status.in: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # DO NOT MODIFY! This script is generated by @prog@. Changes made 4 | # here will be lost. Source for this file can be found in 5 | # mcf-status.in. 6 | 7 | ## 8 | # Copyright (c) 2008-2014 LG Electronics, Inc. 9 | # 10 | # Licensed under the Apache License, Version 2.0 (the "License"); 11 | # you may not use this file except in compliance with the License.
12 | # You may obtain a copy of the License at 13 | # 14 | # http://www.apache.org/licenses/LICENSE-2.0 15 | # 16 | # Unless required by applicable law or agreed to in writing, software 17 | # distributed under the License is distributed on an "AS IS" BASIS, 18 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 19 | # See the License for the specific language governing permissions and 20 | # limitations under the License. 21 | ## 22 | 23 | # Run this file to recreate the current configuration. 24 | 25 | set -e 26 | 27 | exec @prog@ \ 28 | --enable-bb-number-threads=@bb_number_threads@ \ 29 | --enable-parallel-make=@parallel_make@ \ 30 | @icecc_disable_enable_mcf@ \ 31 | --enable-icecc-parallel-make=@icecc_parallel_make@ \ 32 | --enable-icecc-location=@alternative_icecc_installation_mcf@ \ 33 | --enable-icecc-user-package-blacklist=@icecc_user_package_blacklist_mcf@ \ 34 | --enable-icecc-user-class-blacklist=@icecc_user_class_blacklist_mcf@ \ 35 | --enable-icecc-user-package-whitelist=@icecc_user_package_whitelist_mcf@ \ 36 | --enable-icecc-env-exec=@icecc_environment_script_mcf@ \ 37 | --premirror=@premirror@ \ 38 | --sstatemirror='@sstatemirror@' \ 39 | @buildhistory@ \ 40 | --enable-buildhistoryauthor='@buildhistoryauthor@' \ 41 | @network@ \ 42 | @fetchpremirroronlyoption@ \ 43 | @generatemirrortarballsoption@ \ 44 | @machines@ 45 | -------------------------------------------------------------------------------- /build-templates/oe-init-build-env.in: -------------------------------------------------------------------------------- 1 | # DO NOT MODIFY! This script is generated by mcf. Changes made 2 | # here will be lost. The source for this file is in build-templates/oe-init-build-env.in. 3 | 4 | # Copyright (c) 2008-2014 LG Electronics, Inc. 5 | # 6 | # Licensed under the Apache License, Version 2.0 (the "License"); 7 | # you may not use this file except in compliance with the License. 
8 | # You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, software 13 | # distributed under the License is distributed on an "AS IS" BASIS, 14 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | # See the License for the specific language governing permissions and 16 | # limitations under the License. 17 | 18 | ## 19 | # See NOV-120063 for an example of how a user's LC_ALL setting can cause problems. 20 | # FIXME: not clear if these are still needed with oe-core 21 | unset LC_ALL; export LC_ALL 22 | export LANG=en_US.UTF-8 23 | 24 | export TOPDIR="@abs_srcdir@" 25 | 26 | BITBAKEPATH=${TOPDIR}/bitbake/bin 27 | case "${PATH}" in 28 | *${BITBAKEPATH}*) ;; 29 | *) export PATH=${TOPDIR}/oe-core/scripts:${BITBAKEPATH}:$PATH ;; 30 | esac 31 | 32 | if [ -z "$ZSH_NAME" ] && [ "$0" = "./oe-init-build-env" ]; then 33 | echo "ERROR: This script needs to be sourced. Please run as '.
./oe-init-build-env'" 34 | exit 1 35 | fi 36 | 37 | # used in runqemu bitbake wrapper for pseudodone location 38 | export BUILDDIR="${TOPDIR}/BUILD" 39 | export BB_ENV_EXTRAWHITE="MACHINE DISTRO TCMODE TCLIBC http_proxy ftp_proxy https_proxy all_proxy ALL_PROXY no_proxy SSH_AGENT_PID SSH_AUTH_SOCK BB_SRCREV_POLICY SDKMACHINE BB_NUMBER_THREADS PARALLEL_MAKE GIT_PROXY_COMMAND GIT_PROXY_IGNORE SOCKS5_PASSWD SOCKS5_USER WEBOS_DISTRO_BUILD_ID PSEUDO_DISABLED PSEUDO_BUILD" 40 | 41 | [ -z "${MACHINES}" ] && MACHINES="@machines@" 42 | [ -z "${MACHINE}" ] && MACHINE="@machine@" 43 | [ -z "${DISTRO}" ] && DISTRO="@distro@" 44 | 45 | export MACHINES MACHINE DISTRO 46 | echo "Altered environment for ${MACHINE}@${DISTRO} development" 47 | -------------------------------------------------------------------------------- /mcf: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) 2008-2014 LG Electronics, Inc. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | 16 | 17 | import argparse 18 | import errno 19 | import logging 20 | import os 21 | import subprocess 22 | import sys 23 | import re 24 | from time import gmtime, strftime 25 | import shutil 26 | import glob 27 | 28 | __version__ = "5.1.0" 29 | 30 | logger = logging.getLogger(__name__) 31 | 32 | submodules = {} 33 | CLEAN = False 34 | TRACE = False 35 | REMOTE = "origin" 36 | SSTATE_MIRRORS = '' 37 | WEBOSLAYERS = [] 38 | LAYERSPRIORITY = {} 39 | SUBMISSIONS = {} 40 | LOCATIONS = {} 41 | URLS = {} 42 | PRIORITYORDER = [] 43 | COLLECTION_NAME = {} 44 | COLLECTION_PATH = {} 45 | SUMMARYINFO = {} 46 | BRANCHINFONEW = {} 47 | BRANCHINFOCURRENT = {} 48 | COMMITIDSNEW = {} 49 | COMMITIDSCURRENT = {} 50 | TAGSINFONEW = {} 51 | REPOPATCHDIR = {} 52 | DISTRO = None 53 | SUPPORTED_MACHINES = [] 54 | 55 | def echo_check_call(todo, verbosity=False): 56 | if verbosity or TRACE: 57 | cmd = 'set -x; ' + todo 58 | else: 59 | cmd = todo 60 | 61 | logger.debug(cmd) 62 | 63 | return str(subprocess.check_output(cmd, shell=True), encoding='utf-8', errors='strict') 64 | 65 | def enable_trace(): 66 | global TRACE 67 | TRACE = True 68 | 69 | def enable_clean(): 70 | logger.warn('Running in clean non-interactive mode, all possible local changes and untracked files will be removed') 71 | global CLEAN 72 | CLEAN = True 73 | 74 | def set_log_level(level): 75 | logger = logging.getLogger(__name__) 76 | logger.setLevel(logging.DEBUG) 77 | f = logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s', datefmt='%Y-%m-%dT%H:%M:%S') 78 | 79 | s = logging.StreamHandler() 80 | s.setLevel(level) 81 | 82 | s.setFormatter(f) 83 | logging.getLogger('').addHandler(s) 84 | 85 | # Essentially, mcf parses options, creates mcf.status, and runs mcf.status. 
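The comment above summarizes mcf's flow: it parses options, writes mcf.status (and oe-init-build-env) by substituting `@token@` markers in the `build-templates/*.in` files, and then runs mcf.status. A minimal self-contained sketch of that substitution step follows; the token names and values here are illustrative examples, not the full set mcf generates:

```python
# Sketch of the @token@ substitution mcf applies to build-templates/*.in
# files. Token names and values below are illustrative examples only.

def substitute(template_text, replacements):
    # Same approach as mcf's process_file(): plain str.replace over
    # (token, value) pairs, applied in order.
    for token, value in replacements:
        template_text = template_text.replace(token, value)
    return template_text

replacements = [
    ('@machine@', 'qemux86'),
    ('@distro@', 'webos'),
    ('@parallel_make@', '8'),
]

template = 'MACHINE="@machine@"\nDISTRO="@distro@"\n--enable-parallel-make=@parallel_make@'
print(substitute(template, replacements))
```

Any marker not present in the replacement list is left untouched, which is why a generated mcf.status reproduces the exact configuration it was created with.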
86 | 87 | def process_file(f, replacements): 88 | (ifile, ofile) = f 89 | with open(ifile, 'r') as f: 90 | status = f.read() 91 | 92 | for i, j in replacements: 93 | status = status.replace(i, j) 94 | 95 | odir = os.path.dirname(ofile) 96 | if odir and not os.path.isdir(odir): 97 | os.mkdir(odir) 98 | with open(ofile, 'w') as f: 99 | f.write(status) 100 | 101 | def getopts(): 102 | mcfcommand_option = '--command' 103 | mcfcommand_dest = 'mcfcommand' 104 | # be careful when changing this, jenkins-job.sh is doing 105 | # grep "mcfcommand_choices = \['configure', 'update', " 106 | # to detect if it needs to explicitly run --command update after default action 107 | mcfcommand_choices = ['configure', 'update', 'update+configure'] 108 | mcfcommand_default = 'update+configure' 109 | 110 | # Just parse the --command argument here, so that we can select a parser 111 | mcfcommand_parser = argparse.ArgumentParser(add_help=False) 112 | mcfcommand_parser.add_argument(mcfcommand_option, dest=mcfcommand_dest, choices=mcfcommand_choices, default=mcfcommand_default) 113 | mcfcommand_parser_result = mcfcommand_parser.parse_known_args() 114 | mcfcommand = mcfcommand_parser_result[0].mcfcommand 115 | 116 | # Put --command back in (as the first option) so that the main parser sees everything 117 | arglist = [mcfcommand_option, mcfcommand ] + mcfcommand_parser_result[1] 118 | 119 | parser = argparse.ArgumentParser() 120 | 121 | general = parser.add_argument_group('General Options') 122 | 123 | verbosity = general.add_mutually_exclusive_group() 124 | 125 | verbosity.add_argument('-s', '--silent', action='count', help='work silently; repeat the option twice to also hide warnings, three times to hide errors as well') 126 | verbosity.add_argument('-v', '--verbose', action='count', help='work verbosely, repeat the option twice for more debug output') 127 | 128 | general.add_argument('-c', '--clean', dest='clean', action='store_true', default=False, help='clean checkout - WARN:
removes all local changes') 129 | general.add_argument('-V', '--version', action='version', version='%(prog)s {0}'.format(__version__), help='print version and exit') 130 | 131 | general.add_argument(mcfcommand_option, dest=mcfcommand_dest, choices=mcfcommand_choices, default=mcfcommand_default, 132 | help='command to mcf; if update is given, none of the remaining options nor MACHINE can be specified (default: %(default)s)') 133 | 134 | if mcfcommand in ('configure','update+configure'): 135 | variations = parser.add_argument_group('Build Instructions') 136 | 137 | variations.add_argument('-p', '--enable-parallel-make', dest='parallel_make', type=int, default=0, 138 | help='maximum number of parallel tasks each submake of bitbake should spawn (default: 0 = 2x the number of processor cores)') 139 | 140 | variations.add_argument('-b', '--enable-bb-number-threads', dest='bb_number_threads', type=int, default=0, 141 | help='maximum number of bitbake tasks to spawn (default: 0 = 2x the number of processor cores))') 142 | 143 | icecc = parser.add_argument_group('ICECC Configuration') 144 | 145 | icecc.add_argument('--enable-icecc', dest='enable_icecc', action='store_true', default=True, 146 | help='enable build to use ICECC (default: True)') 147 | 148 | icecc.add_argument('--disable-icecc', dest='disable_icecc', action='store_true', default=False, 149 | help='disable build from using ICECC (default: False = use ICECC)') 150 | 151 | icecc.add_argument('--enable-icecc-parallel-make', dest='icecc_parallel_make', type=int, default=0, 152 | help='Number of parallel threads for ICECC build (default: 0 = 4x the number of processor cores))') 153 | 154 | icecc_advanced = parser.add_argument_group('ICECC Advanced Configuration') 155 | 156 | icecc_advanced.add_argument('--enable-icecc-user-package-blacklist', dest='icecc_user_package_blacklist', action='append', 157 | help='Space separated list of components/recipes to be excluded from using ICECC (default: None)') 158 | 159 | 
icecc_advanced.add_argument('--enable-icecc-user-class-blacklist', dest='icecc_user_class_blacklist', action='append', 160 | help='Space separated list of components/recipes class to be excluded from using ICECC (default: None)') 161 | 162 | icecc_advanced.add_argument('--enable-icecc-user-package-whitelist', dest='icecc_user_package_whitelist', action='append', 163 | help='Space separated list of components/recipes to be forced to use ICECC (default: None)') 164 | 165 | icecc_advanced.add_argument('--enable-icecc-location', dest='icecc_location', default='', 166 | help='location of ICECC tool (default: None)') 167 | 168 | icecc_advanced.add_argument('--enable-icecc-env-exec', dest='icecc_env_exec', default='', 169 | help='location of ICECC environment script (default: None)') 170 | 171 | 172 | partitions = parser.add_argument_group('Source Identification') 173 | 174 | mirrors = parser.add_argument_group('Networking and Mirrors') 175 | 176 | network = mirrors.add_mutually_exclusive_group() 177 | 178 | network.add_argument('--disable-network', dest='network', action='store_false', default=True, 179 | help='disable fetching through the network (default: False)') 180 | 181 | network.add_argument('--enable-network', dest='network', action='store_true', default=True, 182 | help='enable fetching through the network (default: True)') 183 | 184 | mirrors.add_argument('--sstatemirror', dest='sstatemirror', action='append', 185 | help='set sstatemirror to specified URL, repeat this option if you want multiple sstate mirrors (default: None)') 186 | 187 | premirrorurl = mirrors.add_mutually_exclusive_group() 188 | default_premirror = 'http://downloads.yoctoproject.org/mirror/sources' 189 | premirrorurl.add_argument('--enable-default-premirror', dest='premirror', action='store_const', const=default_premirror, default="", 190 | help='enable default premirror URL (default: False)') 191 | premirrorurl.add_argument('--premirror', '--enable-premirror', dest='premirror', default='', 
192 | help='set premirror to specified URL (default: None)') 193 | 194 | premirroronly = mirrors.add_mutually_exclusive_group() 195 | premirroronly.add_argument('--disable-fetch-premirror-only', dest='fetchpremirroronly', action='store_false', default=False, 196 | help='allow fetching from upstream repositories as well as the premirror (default)') 197 | 198 | premirroronly.add_argument('--enable-fetch-premirror-only', dest='fetchpremirroronly', action='store_true', default=False, 199 | help='fetch only from the premirror, never from upstream (default: False)') 200 | 201 | tarballs = mirrors.add_mutually_exclusive_group() 202 | tarballs.add_argument('--disable-generate-mirror-tarballs', dest='generatemirrortarballs', action='store_false', default=False, 203 | help='do not generate tarballs of fetched components (default)') 204 | 205 | tarballs.add_argument('--enable-generate-mirror-tarballs', dest='generatemirrortarballs', action='store_true', default=False, 206 | help='generate tarballs suitable for mirroring (default: False)') 207 | 208 | buildhistory = parser.add_argument_group('Buildhistory') 209 | 210 | buildhistory1 = buildhistory.add_mutually_exclusive_group() 211 | 212 | buildhistory1.add_argument('--disable-buildhistory', dest='buildhistory', action='store_false', default=True, 213 | help='disable buildhistory functionality (default: enabled)') 214 | 215 | buildhistory1.add_argument('--enable-buildhistory', dest='buildhistory', action='store_true', default=True, 216 | help='enable buildhistory functionality (default)') 217 | 218 | buildhistory.add_argument('--enable-buildhistoryauthor', dest='buildhistoryauthor', default='', help='specify name and email used in buildhistory git commits (default: none, will use author from git global config)') 219 | 220 | parser.add_argument('MACHINE', nargs='+') 221 | 222 | options = parser.parse_args(arglist) 223 | if mcfcommand in ('configure','update+configure') and options.sstatemirror: 224 | process_sstatemirror_option(options) 225 | return
options 226 | 227 | def process_sstatemirror_option(options): 228 | """ 229 | Sets the global variable SSTATE_MIRRORS based on the list of mirrors in options.sstatemirror 230 | 231 | The /PATH suffix is added automatically when generating the SSTATE_MIRRORS value; 232 | verify that the user didn't already include it, and show an error if they did. 233 | """ 234 | global SSTATE_MIRRORS 235 | SSTATE_MIRRORS = "SSTATE_MIRRORS ?= \" \\\n" 236 | for m in options.sstatemirror: 237 | if m.endswith("/PATH"): 238 | logger.error("sstatemirror entry '%s' already ends with '/PATH', remove that" % m) 239 | sys.exit(1) 240 | if m.endswith("/"): 241 | logger.error("sstatemirror entry '%s' ends with '/', remove that" % m) 242 | sys.exit(1) 243 | SSTATE_MIRRORS += "file://.* %s/PATH \\n \\\n" % m 244 | SSTATE_MIRRORS += "\"\n" 245 | 246 | def _icecc_installed(): 247 | try: 248 | # Note that if the package is not installed the following call will throw an exception 249 | iceinstallstatus,iceversion = subprocess.check_output("dpkg-query -W icecc" , 250 | shell=True, 251 | universal_newlines=True).split() 252 | # We are expecting icecc for the name 253 | if 'icecc' == iceinstallstatus: 254 | if '1.0.1-1' == iceversion: 255 | return True 256 | else: 257 | logger.warn("WARNING: Wrong icecc package version {} is installed, disabling build from using ICECC.\n".format(iceversion) + \ 258 | "Please check 'How To Install ICECC on Your Workstation (Client)'\n" + \ 259 | "http://wiki.lgsvl.com/pages/viewpage.action?pageId=96175316") 260 | return False 261 | else: 262 | logger.warn('WARNING: ICECC package installation check failed, disabling build from using ICECC.') 263 | return False 264 | 265 | except: 266 | logger.warn('WARNING: ICECC package installation check failed, disabling build from using ICECC.') 267 | return False 268 | 269 | def location_to_dirname(location): 270 | str1 = location.split('/') 271 | return os.path.splitext(str1[len(str1)-1])[0] 272 | 273 | def read_weboslayers(path): 274 | sys.path.insert(0,path)
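The SSTATE_MIRRORS value that process_sstatemirror_option() builds can be illustrated with a small self-contained sketch; the mirror URL below is a made-up example, and the rejection rules mirror the ones above (entries ending in "/PATH" or "/" are refused):

```python
# Sketch of the SSTATE_MIRRORS value built by process_sstatemirror_option():
# every mirror URL gets a "/PATH" suffix appended, and entries that already
# end in "/PATH" or "/" are rejected. The URL below is a made-up example.

def build_sstate_mirrors(mirrors):
    for m in mirrors:
        if m.endswith("/PATH") or m.endswith("/"):
            raise ValueError("mirror entry '%s' has a trailing '/PATH' or '/'" % m)
    value = "SSTATE_MIRRORS ?= \" \\\n"
    for m in mirrors:
        # BitBake matches any fetch URI and rewrites it to <mirror>/PATH
        value += "file://.* %s/PATH \\n \\\n" % m
    value += "\"\n"
    return value

print(build_sstate_mirrors(['http://mirror.example.com/sstate-cache']))
```

Passing several mirrors (mcf's repeated --sstatemirror option) simply appends one `file://.*` rewrite line per URL, so BitBake tries them in the order given.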
275 | if not os.path.isfile(os.path.join(path,'weboslayers.py')): 276 | raise Exception('Error: Configuration file %s does not exist!' % os.path.join(path,'weboslayers.py')) 277 | 278 | from weboslayers import webos_layers 279 | 280 | for p in webos_layers: 281 | WEBOSLAYERS.append(p[0]) 282 | PRIORITYORDER.append(p[1]) 283 | LAYERSPRIORITY[p[0]] = p[1] 284 | URLS[p[0]] = p[2] 285 | SUBMISSIONS[p[0]] = p[3] 286 | parsesubmissions(p[0]) 287 | LOCATIONS[p[0]] = p[4] 288 | if not URLS[p[0]] and not LOCATIONS[p[0]]: 289 | raise Exception('Error: Layer %s does not have either URL or alternative working-dir defined in weboslayers.py' % p[0]) 290 | if not LOCATIONS[p[0]]: 291 | LOCATIONS[p[0]] = location_to_dirname(URLS[p[0]]) 292 | 293 | PRIORITYORDER.sort() 294 | PRIORITYORDER.reverse() 295 | 296 | from weboslayers import Distribution 297 | global DISTRO 298 | DISTRO = Distribution 299 | 300 | from weboslayers import Machines 301 | global SUPPORTED_MACHINES 302 | SUPPORTED_MACHINES = Machines 303 | 304 | def parsesubmissions(layer): 305 | BRANCH = '' 306 | COMMIT = '' 307 | TAG = '' 308 | for vgit in SUBMISSIONS[layer].split(','): 309 | if not vgit: 310 | continue 311 | str1, str2 = vgit.split('=') 312 | if str1.lower() == 'commit': 313 | if not COMMIT: 314 | COMMIT = str2 315 | elif str1.lower() == 'branch': 316 | BRANCH = str2 317 | elif str1.lower() == 'tag': 318 | if not TAG: 319 | TAG = str2 320 | 321 | if not BRANCH: 322 | BRANCH = 'master' 323 | 324 | BRANCHINFONEW[layer] = BRANCH 325 | COMMITIDSNEW[layer] = COMMIT 326 | TAGSINFONEW[layer] = TAG 327 | 328 | def downloadrepo(layer): 329 | cmd = 'git clone %s %s' % (URLS[layer], LOCATIONS[layer]) 330 | echo_check_call(cmd) 331 | 332 | olddir = os.getcwd() 333 | os.chdir(LOCATIONS[layer]) 334 | newbranch = BRANCHINFONEW[layer] 335 | 336 | if newbranch: 337 | refbranchlist = echo_check_call("git branch") 338 | refbranch = refbranchlist.splitlines() 339 | foundbranch = False 340 | for ibranch in refbranch: 341 | if
newbranch in ibranch: 342 | foundbranch = True 343 | if not foundbranch: 344 | refbranchlist = echo_check_call("git branch -r") 345 | refbranch = refbranchlist.splitlines() 346 | for ibranch in refbranch: 347 | if ibranch == " %s/%s" % (REMOTE, newbranch): 348 | foundbranch = True 349 | logger.info( " found %s " % ibranch ) 350 | cmd ='git checkout -B %s %s' % (newbranch,ibranch) 351 | echo_check_call(cmd) 352 | break 353 | 354 | currentbranch = echo_check_call("git rev-parse --abbrev-ref HEAD").rstrip() 355 | newcommitid = COMMITIDSNEW[layer] 356 | if newcommitid: 357 | if newcommitid.startswith('refs/changes/'): 358 | if newbranch and newbranch != currentbranch: 359 | # older git doesn't allow to update reference on currently checked out branch 360 | cmd ='git fetch %s %s && git checkout -B %s FETCH_HEAD' % (REMOTE, newcommitid, newbranch) 361 | elif newbranch: 362 | # we're already on requested branch 363 | cmd ='git fetch %s %s && git reset --hard FETCH_HEAD' % (REMOTE, newcommitid) 364 | else: 365 | # we don't have any branch preference use detached 366 | cmd ='git fetch %s %s && git checkout FETCH_HEAD' % (REMOTE, newcommitid) 367 | echo_check_call(cmd) 368 | else: 369 | if newbranch and newbranch != currentbranch: 370 | # older git doesn't allow to update reference on currently checked out branch 371 | cmd ='git checkout -B %s %s' % (newbranch,newcommitid) 372 | elif newbranch: 373 | # we're already on requested branch 374 | cmd ='git reset --hard %s' % newcommitid 375 | else: 376 | # we don't have any branch preference use detached 377 | cmd ='git checkout %s' % newcommitid 378 | echo_check_call(cmd) 379 | 380 | newtag = TAGSINFONEW[layer] 381 | if newtag: 382 | if newbranch and newbranch != currentbranch: 383 | # older git doesn't allow to update reference on currently checked out branch 384 | cmd ='git checkout -B %s %s' % (newbranch,newtag) 385 | elif newbranch: 386 | # we're already on requested branch 387 | cmd ='git reset --hard %s' % newtag 388 | 
else: 389 | cmd ='git checkout %s' % newtag 390 | echo_check_call(cmd) 391 | 392 | os.chdir(olddir) 393 | 394 | def parselayerconffile(layer, layerconffile): 395 | with open(layerconffile, 'r') as f: 396 | lines = f.readlines() 397 | for line in lines: 398 | if re.search( 'BBFILE_COLLECTIONS.*=' , line): 399 | (dummy, collectionname) = line.rsplit('=') 400 | collectionname = collectionname.strip() 401 | collectionname = collectionname.strip("\"") 402 | COLLECTION_NAME[layer] = collectionname 403 | logger.debug("parselayerconffile(%s,%s) -> %s" % (layer, layerconffile, COLLECTION_NAME[layer])) 404 | 405 | def traversedir(layer, root): 406 | for path, dirs, files in os.walk(root): 407 | if os.path.basename(os.path.dirname(path)) == layer: 408 | for filename in files: 409 | if filename == 'layer.conf': 410 | COLLECTION_PATH[layer] = os.path.relpath(os.path.dirname(path), os.path.dirname(root)) 411 | logger.debug("traversedir(%s,%s) -> %s" % (layer, root, COLLECTION_PATH[layer])) 412 | 413 | layerconffile = os.path.join(path, filename) 414 | parselayerconffile(layer, layerconffile) 415 | break 416 | 417 | def parse_collections(srcdir): 418 | for layer in WEBOSLAYERS: 419 | if os.path.exists(LOCATIONS[layer]): 420 | traversedir(layer, LOCATIONS[layer]) 421 | else: 422 | raise Exception("Error:", "directory '%s' does not exist, you probably need to call update" % LOCATIONS[layer]) 423 | 424 | def write_bblayers_conf(sourcedir): 425 | f = open(os.path.join(sourcedir, "conf", "bblayers.conf"), 'a') 426 | f.write('\n') 427 | processed_layers = list() 428 | for p in PRIORITYORDER: 429 | for layer in LAYERSPRIORITY: 430 | if LAYERSPRIORITY[layer] == -1: 431 | continue 432 | if layer not in processed_layers: 433 | if LAYERSPRIORITY[layer] == p: 434 | processed_layers.append(layer) 435 | leftside = layer 436 | leftside = leftside.replace('-','_') 437 | leftside = leftside.upper() 438 | if os.path.isabs(LOCATIONS[layer]): 439 | str = "%s_LAYER ?= \"%s/%s\"" % (leftside, 
LOCATIONS[layer], COLLECTION_PATH[layer]) 440 | else: 441 | str = "%s_LAYER ?= \"${TOPDIR}/%s\"" % (leftside, COLLECTION_PATH[layer]) 442 | f.write(str) 443 | f.write('\n') 444 | break 445 | f.write('\n') 446 | f.write('BBFILES ?= ""\n') 447 | f.write('BBLAYERS ?= " \\') 448 | f.write('\n') 449 | processed_layers = list() 450 | for p in PRIORITYORDER: 451 | for layer in LAYERSPRIORITY: 452 | if LAYERSPRIORITY[layer] == -1: 453 | continue 454 | if layer not in processed_layers: 455 | if LAYERSPRIORITY[layer] == p: 456 | processed_layers.append(layer) 457 | f.write(" ${%s_LAYER} \\" % layer.replace('-','_').upper()) 458 | f.write('\n') 459 | break 460 | f.write(' "') 461 | f.write('\n') 462 | for layer in LAYERSPRIORITY: 463 | if LAYERSPRIORITY[layer] <= 0 : 464 | continue 465 | f.write("BBFILE_PRIORITY_%s = \"%s\"" % (COLLECTION_NAME[layer], LAYERSPRIORITY[layer])) 466 | f.write('\n') 467 | f.close() 468 | 469 | def update_layers(sourcedir): 470 | logger.info('MCF-%s: Updating build directory' % __version__) 471 | layers_sanity = list() 472 | update_location = list() 473 | for layer in WEBOSLAYERS: 474 | if LOCATIONS[layer] not in update_location: 475 | update_location.append(LOCATIONS[layer]) 476 | if not os.path.exists(os.path.abspath( LOCATIONS[layer] ) ): 477 | # downloadrepo 478 | downloadrepo(layer) 479 | else: 480 | # run sanity check on repo 481 | if reposanitycheck(layer) != 0: 482 | layers_sanity.append(layer) 483 | 484 | # update layers 485 | updaterepo(layer) 486 | 487 | if layers_sanity: 488 | logger.info('Found local changes for repo(s) %s' % layers_sanity) 489 | 490 | printupdatesummary() 491 | 492 | def printupdatesummary(): 493 | logger.info('Repo Update Summary') 494 | logger.info('===================') 495 | if not len(SUMMARYINFO): 496 | logger.info('No local changes found') 497 | for layer in SUMMARYINFO: 498 | mstatus = SUMMARYINFO[layer] 499 | logger.info('[%s] has the following changes:' % layer) 500 | if int(mstatus) & 1: 501 | logger.info('
*) local uncommitted changes, use \'git stash pop\' to retrieve') 502 | if int(mstatus) & 2: 503 | logger.info(' *) local committed changes, patches are backed up in %s/' % REPOPATCHDIR[layer]) 504 | if int(mstatus) & 4: 505 | logger.info(' *) local untracked changes') 506 | if BRANCHINFONEW[layer] != BRANCHINFOCURRENT[layer]: 507 | logger.info(' *) switched branches from %s to %s' % (BRANCHINFOCURRENT[layer], BRANCHINFONEW[layer])) 508 | 509 | def get_remote_branch(newbranch, second_call = False): 510 | remotebranch = None 511 | refbranchlist = echo_check_call("git branch -r") 512 | refbranch = refbranchlist.splitlines() 513 | for ibranch in refbranch: 514 | if ibranch == " %s/%s" % (REMOTE, newbranch): 515 | remotebranch = ibranch.strip() 516 | break 517 | if remotebranch or second_call: 518 | return remotebranch 519 | else: 520 | # try it again after "git remote update" 521 | echo_check_call("git remote update") 522 | return get_remote_branch(newbranch, True) 523 | 524 | def reposanitycheck(layer): 525 | olddir = os.getcwd() 526 | os.chdir(LOCATIONS[layer]) 527 | 528 | BRANCHINFOCURRENT[layer] = echo_check_call("git rev-parse --abbrev-ref HEAD").rstrip() 529 | 530 | res = False 531 | msgs = 0 532 | 533 | if CLEAN: 534 | if echo_check_call("git status --porcelain -s"): 535 | logger.warn('Removing all local changes and untracked files in [%s]' % layer) 536 | # abort rebase if git pull --rebase from update_layers got stuck on some local commit 537 | try: 538 | echo_check_call("git rebase --abort") 539 | except subprocess.CalledProcessError: 540 | # we can ignore this one 541 | pass 542 | 543 | echo_check_call("git stash clear") 544 | echo_check_call("git clean -fdx") 545 | echo_check_call("git reset --hard") 546 | else: 547 | logger.info('Checking for local changes in [%s]' % layer) 548 | if echo_check_call("git status --porcelain --u=no -s"): 549 | logger.warn('Found local uncommitted changes in [%s]' % layer) 550 | msgs += 1 551 | echo_check_call("git stash") 552
| res = True 553 | 554 | if echo_check_call("git status --porcelain -s | grep -v '^?? MCF-PATCHES_' || true"): 555 | logger.warn('Found local untracked changes in [%s]' % layer) 556 | msgs += 4 557 | res = True 558 | 559 | try: 560 | remote = echo_check_call('git remote | grep "^%s$"' % REMOTE) 561 | except subprocess.CalledProcessError: 562 | remote = '' 563 | 564 | if not remote: 565 | logger.error("Layer %s doesn't have the remote '%s'" % (layer, REMOTE)) 566 | raise Exception("Layer %s doesn't have the remote '%s'" % (layer, REMOTE)) 567 | 568 | try: 569 | urlcurrent = echo_check_call("git config remote.%s.url" % REMOTE) 570 | except subprocess.CalledProcessError: 571 | # git config returns 1 when the option isn't set 572 | urlcurrent = '' 573 | 574 | # there is extra newline at the end 575 | urlcurrent = urlcurrent.strip() 576 | 577 | logger.debug("reposanitycheck(%s) dir %s, branchinfo %s, branchinfonew %s, url %s, urlnew %s" % (layer, LOCATIONS[layer], BRANCHINFOCURRENT[layer], BRANCHINFONEW[layer], URLS[layer], urlcurrent)) 578 | 579 | if urlcurrent != URLS[layer]: 580 | logger.warn("Changing url for remote '%s' from '%s' to '%s'" % (REMOTE, urlcurrent, URLS[layer])) 581 | echo_check_call("git remote set-url %s %s" % (REMOTE, URLS[layer])) 582 | # Sync with new remote repo 583 | try: 584 | echo_check_call('git remote update') 585 | except subprocess.CalledProcessError: 586 | raise Exception('Failed to fetch %s repo' % layer) 587 | 588 | newbranch = BRANCHINFONEW[layer] 589 | if newbranch: 590 | refbranchlist = echo_check_call("git branch") 591 | refbranch = refbranchlist.splitlines() 592 | foundlocalbranch = False 593 | needcheckout = True 594 | for ibranch in refbranch: 595 | if ibranch == " %s" % newbranch: 596 | foundlocalbranch = True 597 | break 598 | if ibranch == "* %s" % newbranch: 599 | foundlocalbranch = True 600 | needcheckout = False 601 | break 602 | 603 | remotebranch = get_remote_branch(newbranch) 604 | 605 | if foundlocalbranch and 
remotebranch: 606 | if needcheckout: 607 | echo_check_call('git checkout %s' % newbranch) 608 | 609 | head = echo_check_call("git rev-parse --abbrev-ref HEAD").rstrip() 610 | patchdir = './MCF-PATCHES_%s-%s' % (head.replace('/','_'), timestamp) 611 | REPOPATCHDIR[layer] = "%s/%s" % (LOCATIONS[layer], patchdir) 612 | cmd ='git format-patch %s..%s -o %s' % (remotebranch,newbranch,patchdir) 613 | rawpatches = echo_check_call(cmd) 614 | patches = rawpatches.splitlines() 615 | num = len(patches) 616 | # logger.info( ' info: number of patches: %s ' % num) 617 | if num > 0: 618 | msgs += 2 619 | res = True 620 | else: 621 | # remove empty dir if there weren't any patches created by format-patch 622 | cmd ='rmdir --ignore-fail-on-non-empty %s' % patchdir 623 | echo_check_call(cmd) 624 | 625 | try: 626 | trackingbranch = echo_check_call("git config --get branch.%s.merge" % newbranch) 627 | except subprocess.CalledProcessError: 628 | # git config returns 1 when the option isn't set 629 | trackingbranch = '' 630 | 631 | try: 632 | trackingremote = echo_check_call("git config --get branch.%s.remote" % newbranch) 633 | except subprocess.CalledProcessError: 634 | # git config returns 1 when the option isn't set 635 | trackingremote = '' 636 | 637 | # there is extra newline at the end 638 | trackingbranch = trackingbranch.strip() 639 | trackingremote = trackingremote.strip() 640 | 641 | if not trackingbranch or not trackingremote or trackingbranch.replace('refs/heads',trackingremote) != remotebranch: 642 | logger.warn("layer %s was tracking '%s/%s' changing it to track '%s'" % (layer, trackingremote, trackingbranch, remotebranch)) 643 | # to ensure we are tracking remote 644 | echo_check_call('git branch --set-upstream %s %s' % (newbranch, remotebranch)) 645 | 646 | elif not foundlocalbranch and remotebranch: 647 | echo_check_call('git checkout -b %s %s' % (newbranch, remotebranch)) 648 | else: 649 | # anything else is failure 650 | raise Exception('Could not find local and 
remote branches for %s' % newbranch) 651 | else: 652 | raise Exception('Undefined branch name') 653 | 654 | if res: 655 | SUMMARYINFO[layer] = msgs 656 | 657 | newdir = os.chdir(olddir) 658 | return res 659 | 660 | # Taken from bitbake/lib/bb/fetch2/git.py with modifications for mcf usage 661 | def contains_ref(tag): 662 | cmd = "git log --pretty=oneline -n 1 %s -- 2>/dev/null | wc -l" % (tag) 663 | output = echo_check_call(cmd) 664 | if len(output.split()) > 1: 665 | raise Exception("Error: '%s' gave output with more than 1 line unexpectedly, output: '%s'" % (cmd, output)) 666 | return output.split()[0] != "0" 667 | 668 | def updaterepo(layer): 669 | olddir = os.getcwd() 670 | os.chdir(LOCATIONS[layer]) 671 | 672 | COMMITIDSCURRENT[layer] = echo_check_call("git log --pretty=format:%h -1") 673 | 674 | newcommitid = COMMITIDSNEW[layer] 675 | currentcommitid = COMMITIDSCURRENT[layer] 676 | newbranch = BRANCHINFONEW[layer] 677 | currentbranch = BRANCHINFOCURRENT[layer] 678 | 679 | logger.debug("updaterepo(%s) dir %s, id %s, newid %s, branch %s, newbranch %s" % (layer, LOCATIONS[layer], currentcommitid, newcommitid, currentbranch, newbranch)) 680 | 681 | if newcommitid != currentcommitid: 682 | logger.info('Updating [%s]' % layer) 683 | if newcommitid: 684 | if newcommitid.startswith('refs/changes/'): 685 | if newbranch and newbranch != currentbranch: 686 | # older git doesn't allow to update reference on currently checked out branch 687 | cmd ='git fetch %s %s && git checkout -B %s FETCH_HEAD' % (REMOTE, newcommitid, newbranch) 688 | elif newbranch: 689 | # we're already on requested branch 690 | cmd ='git fetch %s %s && git reset --hard FETCH_HEAD' % (REMOTE, newcommitid) 691 | else: 692 | # we don't have any branch preference use detached 693 | cmd ='git fetch %s %s && git checkout FETCH_HEAD' % (REMOTE, newcommitid) 694 | echo_check_call(cmd) 695 | else: 696 | if not contains_ref(newcommitid): 697 | echo_check_call('git fetch') 698 | if newbranch and newbranch !=
currentbranch: 699 | # older git doesn't allow to update reference on currently checked out branch 700 | cmd ='git checkout -B %s %s' % (newbranch,newcommitid) 701 | elif newbranch: 702 | # we're already on requested branch 703 | cmd ='git reset --hard %s' % newcommitid 704 | else: 705 | # we don't have any branch preference use detached 706 | cmd ='git checkout %s' % newcommitid 707 | echo_check_call(cmd) 708 | else: 709 | if CLEAN: 710 | echo_check_call("git remote update") 711 | echo_check_call('git reset --hard %s/%s' % (REMOTE, newbranch)) 712 | else: 713 | # current branch always tracks a remote one 714 | echo_check_call('git pull %s' % REMOTE) 715 | logger.info('Done updating [%s]' % layer) 716 | else: 717 | logger.info(('[%s] is up-to-date.' % layer)) 718 | 719 | newdir = os.chdir(olddir) 720 | os.getcwd() 721 | 722 | def set_verbosity(options): 723 | if options.silent and options.silent == 1: 724 | set_log_level('WARNING') 725 | elif options.silent and options.silent == 2: 726 | set_log_level('ERROR') 727 | elif options.silent and options.silent >= 3: 728 | set_log_level('CRITICAL') 729 | elif options.verbose and options.verbose == 1: 730 | set_log_level('DEBUG') 731 | elif options.verbose and options.verbose >= 2: 732 | set_log_level('DEBUG') 733 | # but also run every system command with set -x 734 | enable_trace() 735 | else: 736 | set_log_level('INFO') 737 | 738 | def recover_current_mcf_state(srcdir, origoptions): 739 | mcfstatusfile = os.path.join(srcdir, "mcf.status") 740 | if not os.path.exists(mcfstatusfile): 741 | raise Exception("mcf.status does not exist.") 742 | 743 | commandlinereconstructed = list() 744 | commandlinereconstructed.append('ignored-argv-0') 745 | start = False 746 | with open(mcfstatusfile, 'r') as f: 747 | for line in f.readlines(): 748 | line = line.strip() 749 | if not start: 750 | start = line.startswith("exec") 751 | continue 752 | 753 | if start: 754 | if line.startswith('--command'): 755 | # skip --command configure 756 
| continue 757 | elif line.startswith('--'): 758 | line = line.rstrip('\\') 759 | line = line.strip(' ') 760 | line = line.replace('\"','') 761 | line = line.replace('\'','') 762 | commandlinereconstructed.append(line) 763 | else: 764 | lines = line.rstrip('\\') 765 | lines = lines.lstrip() 766 | lines = lines.rstrip() 767 | lines = lines.split() 768 | for lline in lines: 769 | commandlinereconstructed.append(lline) 770 | 771 | sys.argv = commandlinereconstructed 772 | options = getopts() 773 | # always use clean/verbose/silent flags from origoptions not mcf.status 774 | options.clean = origoptions.clean 775 | options.verbose = origoptions.verbose 776 | options.silent = origoptions.silent 777 | return options 778 | 779 | def checkmirror(name, url): 780 | if url.startswith('file://'): 781 | pathstr = url[7:] 782 | if not os.path.isdir(pathstr): 783 | logger.warn("%s parameter '%s' points to non-existent directory" % (name, url)) 784 | elif not os.listdir(pathstr): 785 | logger.warn("%s parameter '%s' points to empty directory, did you forget to mount it?"
% (name, url)) 786 | 787 | def sanitycheck(options): 788 | try: 789 | mirror = echo_check_call('git config -l | grep "^url\..*insteadof=github.com/"') 790 | except subprocess.CalledProcessError: 791 | # git config returns 1 when the option isn't set 792 | mirror = '' 793 | pass 794 | if not mirror: 795 | logger.warn('No mirror for github.com was detected, please define mirrors in ~/.gitconfig if some are available') 796 | if options.sstatemirror: 797 | for m in options.sstatemirror: 798 | checkmirror('sstatemirror', m) 799 | if options.premirror: 800 | checkmirror('premirror', options.premirror) 801 | 802 | def configure_build(srcdir, options): 803 | files = [ 804 | [os.path.join(srcdir, 'build-templates', 'mcf-status.in'), 'mcf.status' ], 805 | [os.path.join(srcdir, 'build-templates', 'oe-init-build-env.in'), 'oe-init-build-env' ], 806 | [os.path.join(srcdir, 'build-templates', 'Makefile.in'), 'Makefile' ], 807 | [os.path.join(srcdir, 'build-templates', 'bblayers-conf.in'), 'conf/bblayers.conf'], 808 | [os.path.join(srcdir, 'build-templates', 'local-conf.in'), 'conf/local.conf' ], 809 | ] 810 | 811 | replacements = [ 812 | ['@bb_number_threads@', str(options.bb_number_threads)], 813 | ['@parallel_make@', str(options.parallel_make)], 814 | ['@no_network@', '0' if options.network else '1'], 815 | ['@fetchpremirroronly@', '1' if options.fetchpremirroronly else '0'], 816 | ['@generatemirrortarballs@', '1' if options.generatemirrortarballs else '0'], 817 | ['@buildhistory_enabled@', '1' if options.buildhistory else '0'], 818 | ['@buildhistory_class@', 'buildhistory' if options.buildhistory else '' ], 819 | ['@buildhistory_author_assignment@', 'BUILDHISTORY_COMMIT_AUTHOR ?= "%s"' % options.buildhistoryauthor if options.buildhistoryauthor else ''], 820 | ['@premirror_assignment@', 'SOURCE_MIRROR_URL ?= "%s"' % options.premirror if options.premirror else ''], 821 | ['@premirror_inherit@', 'INHERIT += "own-mirrors"' if options.premirror else ''], 822 | 
['@sstatemirror_assignment@', SSTATE_MIRRORS if options.sstatemirror else ''], 823 | ['@premirror@', options.premirror], 824 | ['@sstatemirror@', ' '.join(options.sstatemirror) if options.sstatemirror else ''], 825 | ['@buildhistoryauthor@', options.buildhistoryauthor], 826 | ['@buildhistory@', '--%s-buildhistory' % ('enable' if options.buildhistory else 'disable')], 827 | ['@network@', '--%s-network' % ('enable' if options.network else 'disable')], 828 | ['@fetchpremirroronlyoption@', '--%s-fetch-premirror-only' % ('enable' if options.fetchpremirroronly else 'disable')], 829 | ['@generatemirrortarballsoption@', '--%s-generate-mirror-tarballs' % ('enable' if options.generatemirrortarballs else 'disable')], 830 | ['@machine@', options.MACHINE[0]], 831 | ['@machines@', ' '.join(options.MACHINE)], 832 | ['@distro@', DISTRO], 833 | ['@prog@', progname], 834 | ['@srcdir@', srcdir], 835 | ['@abs_srcdir@', abs_srcdir], 836 | ] 837 | 838 | # if icecc is not installed, or version does not match requirements, then disabling icecc is the correct action. 
839 | icestate = _icecc_installed() 840 | 841 | icecc_replacements = [ 842 | ['@icecc_disable_enable@', '1' if not icestate or options.disable_icecc else ''], 843 | ['@icecc_parallel_make@', '%s' % options.icecc_parallel_make], 844 | ['@alternative_icecc_installation@', ('ICECC_PATH ?= "%s"' % options.icecc_location) if options.icecc_location else ''], 845 | ['@icecc_user_package_blacklist@', ('ICECC_USER_PACKAGE_BL ?= "%s"' % ' '.join(options.icecc_user_package_blacklist)) if options.icecc_user_package_blacklist else ''], 846 | ['@icecc_user_class_blacklist@', ('ICECC_USER_CLASS_BL ?= "%s"' % ' '.join(options.icecc_user_class_blacklist)) if options.icecc_user_class_blacklist else ''], 847 | ['@icecc_user_package_whitelist@', ('ICECC_USER_PACKAGE_WL ?= "%s"' % ' '.join(options.icecc_user_package_whitelist)) if options.icecc_user_package_whitelist else ''], 848 | ['@icecc_environment_script@', 'ICECC_ENV_EXEC ?= "%s"' % options.icecc_env_exec if options.icecc_location else ''], 849 | ['@icecc_disable_enable_mcf@', '--%s-icecc' % ('disable' if not icestate or options.disable_icecc else 'enable')], 850 | ['@alternative_icecc_installation_mcf@', options.icecc_location if options.icecc_location else ''], 851 | ['@icecc_environment_script_mcf@', options.icecc_env_exec if options.icecc_location else ''], 852 | ['@icecc_user_package_blacklist_mcf@', (' '.join(options.icecc_user_package_blacklist)) if options.icecc_user_package_blacklist else ''], 853 | ['@icecc_user_class_blacklist_mcf@', (' '.join(options.icecc_user_class_blacklist)) if options.icecc_user_class_blacklist else ''], 854 | ['@icecc_user_package_whitelist_mcf@', (' '.join(options.icecc_user_package_whitelist)) if options.icecc_user_package_whitelist else ''], 855 | ] 856 | 857 | replacements = replacements + icecc_replacements 858 | 859 | logger.info('MCF-%s: Configuring build directory BUILD' % __version__) 860 | for f in files: 861 | process_file(f, replacements) 862 | parse_collections(srcdir) 863 | 
write_bblayers_conf(srcdir) 864 | logger.info('MCF-%s: Done configuring build directory BUILD' % __version__) 865 | 866 | echo_check_call('/bin/chmod a+x mcf.status', options.verbose) 867 | 868 | if __name__ == '__main__': 869 | # NB. The exec done by mcf.status causes argv[0] to be an absolute pathname 870 | progname = sys.argv[0] 871 | 872 | # Use the same timestamp for everything created by this invocation of mcf 873 | timestamp = strftime("%Y%m%d%H%M%S", gmtime()) 874 | 875 | options = getopts() 876 | 877 | srcdir = os.path.dirname(progname) 878 | abs_srcdir = os.path.abspath(srcdir) 879 | 880 | if options.mcfcommand == 'update': 881 | # recover current mcf state 882 | options = recover_current_mcf_state(srcdir, options) 883 | 884 | set_verbosity(options) 885 | 886 | if options.clean: 887 | enable_clean() 888 | 889 | read_weboslayers(srcdir) 890 | for M in options.MACHINE: 891 | if M not in SUPPORTED_MACHINES: 892 | logger.error("MACHINE argument '%s' isn't supported (does not appear in Machines in weboslayers.py '%s')" % (M, SUPPORTED_MACHINES)) 893 | sys.exit(1) 894 | 895 | if options.mcfcommand != 'configure': 896 | update_layers(srcdir) 897 | 898 | configure_build(srcdir, options) 899 | 900 | sanitycheck(options) 901 | logger.info('Done.') 902 | -------------------------------------------------------------------------------- /scripts/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Copyright (c) 2013-2014 LG Electronics, Inc. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 
7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # Uncomment line below for debugging 18 | #set -x 19 | 20 | # Some constants 21 | SCRIPT_VERSION="5.1.3" 22 | SCRIPT_NAME=`basename $0` 23 | AUTHORITATIVE_OFFICIAL_BUILD_SITE="svl" 24 | 25 | BUILD_REPO="build-webos" 26 | BUILD_LAYERS=("meta-webos" 27 | "meta-webos-backports") 28 | 29 | # Create BOM files, by default disabled 30 | CREATE_BOM= 31 | 32 | # Dump signatures, by default disabled 33 | SIGNATURES= 34 | 35 | # Build site passed to script from outside (Replaces detecting it from JENKINS_URL) 36 | BUILD_SITE= 37 | # Build job passed to script from outside (Replaces detecting it from JOB_NAME) 38 | BUILD_JOB= 39 | # Branch where to push buildhistory, for repositories on gerrit it should start with refs/heads (Replaces detecting it from JOB_NAME and JENKINS_URL) 40 | BUILD_BUILDHISTORY_BRANCH= 41 | 42 | # We assume that script is inside scripts subfolder of build project 43 | # and form paths based on that 44 | CALLDIR=${PWD} 45 | 46 | BUILD_TIMESTAMP_START=`date -u +%s` 47 | BUILD_TIMESTAMP_OLD=$BUILD_TIMESTAMP_START 48 | 49 | TIME_STR="TIME: %e %S %U %P %c %w %R %F %M %x %C" 50 | 51 | # We need absolute path for ARTIFACTS 52 | pushd `dirname $0` > /dev/null 53 | SCRIPTDIR=`pwd -P` 54 | popd > /dev/null 55 | 56 | # Now let's ensure that: 57 | pushd ${SCRIPTDIR} > /dev/null 58 | if [ ! -d "../scripts" ] ; then 59 | echo "Make sure that ${SCRIPT_NAME} is in scripts folder of project" 60 | exit 2 61 | fi 62 | popd > /dev/null 63 | 64 | cd "${SCRIPTDIR}/.." 
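The `pushd`/`pwd -P` sequence above is the standard idiom for resolving the physical (symlink-free) directory of a running script without disturbing the caller's working directory. A minimal standalone sketch of the same idiom (illustrative only, not part of build.sh):

```shell
#!/bin/bash
# Resolve the absolute, symlink-resolved directory of this script.
# pushd/popd keep the caller's working directory untouched;
# `pwd -P` prints the physical path even when $0 contains symlinks.
pushd "$(dirname "$0")" > /dev/null
SCRIPTDIR=$(pwd -P)
popd > /dev/null

echo "script lives in: ${SCRIPTDIR}"
```

This avoids the pitfalls of parsing `$0` directly, since `dirname "$0"` may be relative and may traverse symlinks.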
65 | 66 | BUILD_TOPDIR=`echo "$SCRIPTDIR" | sed 's#/scripts/*##g'` 67 | ARTIFACTS="${BUILD_TOPDIR}/BUILD-ARTIFACTS" 68 | mkdir -p "${ARTIFACTS}" 69 | BUILD_TIME_LOG=${BUILD_TOPDIR}/time.txt 70 | 71 | function print_timestamp { 72 | BUILD_TIMESTAMP=`date -u +%s` 73 | BUILD_TIMESTAMPH=`date -u +%Y%m%dT%TZ` 74 | 75 | local BUILD_TIMEDIFF=`expr ${BUILD_TIMESTAMP} - ${BUILD_TIMESTAMP_OLD}` 76 | local BUILD_TIMEDIFF_START=`expr ${BUILD_TIMESTAMP} - ${BUILD_TIMESTAMP_START}` 77 | BUILD_TIMESTAMP_OLD=${BUILD_TIMESTAMP} 78 | printf "TIME: ${SCRIPT_NAME}-${SCRIPT_VERSION} $1: ${BUILD_TIMESTAMP}, +${BUILD_TIMEDIFF}, +${BUILD_TIMEDIFF_START}, ${BUILD_TIMESTAMPH}\n" | tee -a ${BUILD_TIME_LOG} 79 | } 80 | 81 | print_timestamp "start" 82 | 83 | declare -i RESULT=0 84 | 85 | function showusage { 86 | echo "Usage: ${SCRIPT_NAME} [OPTION...]" 87 | cat </dev/null 115 | if [ "$GERRIT_PROJECT" = "$1" ] ; then 116 | REMOTE=origin 117 | if [ "${layer}" = "meta-webos" -o "${layer}" = "meta-webos-backports" ]; then 118 | # We cannot use origin, because by default it points to 119 | # github.com/openwebos not to g2g and we won't find GERRIT_REFSPEC on github 120 | REMOTE=ssh://g2g.palm.com/${layer} 121 | fi 122 | git fetch $REMOTE $GERRIT_REFSPEC 123 | echo "NOTE: Checking out $layer in $GERRIT_REFSPEC" >&2 124 | git checkout FETCH_HEAD 125 | else 126 | current_branch=`git branch --list|grep ^*\ |awk '{print $2}'` 127 | echo "NOTE: Run 'git remote update && git reset --hard origin/$current_branch' in $layer" >&2 128 | echo "NOTE: Current branch - $current_branch" 129 | git remote update && git reset --hard origin/$current_branch 130 | fi 131 | popd >/dev/null 132 | fi 133 | } 134 | 135 | function check_project_vars { 136 | # Check out appropriate refspec passed in <layer>_commit 137 | # when requested by use_<layer>_commit 138 | layer=`basename $1` 139 | use=$(eval echo \$"use_${layer//-/_}_commit") 140 | ref=$(eval echo "\$${layer//-/_}_commit") 141 | if [ "$use" = "true" ]; then 142 | echo "NOTE:
Checking out $layer in $ref" >&2 143 | ldesc=" $layer:$ref" 144 | if [ -d "${layer}" ] ; then 145 | pushd "${layer}" >/dev/null 146 | if echo $ref | grep -q '^refs/changes/'; then 147 | REMOTE=origin 148 | if [ "${layer}" = "meta-webos" -o "${layer}" = "meta-webos-backports" ]; then 149 | # We cannot use origin, because by default it points to 150 | # github.com/openwebos not to g2g and we won't find GERRIT_REFSPEC on github 151 | REMOTE=ssh://g2g.palm.com/${layer} 152 | fi 153 | git fetch $REMOTE $ref 154 | git checkout FETCH_HEAD 155 | else 156 | # for incremental builds we should add "git fetch" here 157 | git checkout $ref 158 | fi 159 | popd >/dev/null 160 | else 161 | echo "ERROR: Layer $layer does not exist!" >&2 162 | fi 163 | fi 164 | echo "$ldesc" 165 | } 166 | 167 | function generate_bom { 168 | MACHINE=$1 169 | I=$2 170 | BBFLAGS=$3 171 | FILENAME=$4 172 | 173 | mkdir -p "${ARTIFACTS}/${MACHINE}/${I}" || true 174 | /usr/bin/time -f "$TIME_STR" bitbake ${BBFLAGS} -g ${I} 2>&1 | tee /dev/stderr | grep '^TIME:' >> ${BUILD_TIME_LOG} 175 | grep '^"\([^"]*\)" \[label="\([^ ]*\) :\([^\\]*\)\\n\([^"]*\)"\]$' package-depends.dot |\ 176 | grep -v '^"\([^"]*\)" \[label="\([^ (]*([^ ]*)\) :\([^\\]*\)\\n\([^"]*\)"\]$' |\ 177 | sed 's/^"\([^"]*\)" \[label="\([^ ]*\) :\([^\\]*\)\\n\([^"]*\)"\]$/\1;\2;\3;\4/g' |\ 178 | sed "s#;${BUILD_TOPDIR}/#;#g" |\ 179 | sort > ${ARTIFACTS}/${MACHINE}/${I}/${FILENAME} 180 | } 181 | 182 | function set_build_site_and_server { 183 | # JENKINS_URL is set by the Jenkins executor. If it's not set or if it's not 184 | # recognized, then the build is, by definition, unofficial. 
185 | if [ -n "${JENKINS_URL}" ]; then 186 | case "${JENKINS_URL}" in 187 | https://gecko.palm.com/jenkins/) 188 | BUILD_SITE="svl" 189 | BUILD_JENKINS_SERVER="gecko" 190 | ;; 191 | https://anaconda.palm.com/jenkins/) 192 | BUILD_SITE="svl" 193 | BUILD_JENKINS_SERVER="anaconda" 194 | ;; 195 | # Add detection of other sites here 196 | *) 197 | echo "Unrecognized JENKINS_URL: '${JENKINS_URL}'" 198 | exit 1 199 | ;; 200 | esac 201 | fi 202 | } 203 | 204 | function set_build_job { 205 | # JOB_NAME is set by the Jenkins executor 206 | if [ -z "${JOB_NAME}" ] ; then 207 | echo "JENKINS_URL set but JOB_NAME isn't" 208 | exit 1 209 | fi 210 | 211 | # It's not expected that this script would ever be used for Open webOS as is, 212 | # but the tests for it have been added as a guide for creating that edition. 213 | case ${JOB_NAME} in 214 | *-official-*) 215 | BUILD_JOB="official" 216 | ;; 217 | *-official.nonMP*) 218 | BUILD_JOB="official" 219 | ;; 220 | clean-engineering-*) 221 | # it cannot be verf or engr, because clean builds are managing layer checkouts alone 222 | BUILD_JOB="clean" 223 | ;; 224 | *-engineering-*) 225 | BUILD_JOB="engr" 226 | ;; 227 | *-engineering.MP*) 228 | BUILD_JOB="engr" 229 | ;; 230 | *-verify-*) 231 | BUILD_JOB="verf" 232 | ;; 233 | # The *-integrate-* jobs are like the verification builds done right before 234 | # the official builds. They have different names so that they can use a 235 | # separate, special pool of Jenkins slaves. 236 | *-integrate-*) 237 | BUILD_JOB="integ" 238 | ;; 239 | # The *-multilayer-* builds allow developers to trigger a multi-layer build 240 | # from their desktop, without using the Jenkins parameterized build UI. 241 | # 242 | # The 'mlverf' job type is used so that the build-id makes it obvious that 243 | # a multilayer build was performed (useful when evaluating CCC's). 
*-multilayer-*) 245 | BUILD_JOB="mlverf" 246 | ;; 247 | # Legacy job names 248 | build-webos-nightly|build-webos|build-webos-qemu*) 249 | BUILD_JOB="official" 250 | ;; 251 | *-layers-verification) 252 | BUILD_JOB="verf" 253 | ;; 254 | build-webos-*) 255 | BUILD_JOB="${JOB_NAME#build-webos-}" 256 | ;; 257 | # Add detection of other job types here 258 | *) 259 | echo "Unrecognized JOB_NAME: '${JOB_NAME}'" 260 | BUILD_JOB="unrecognized!${JOB_NAME}" 261 | ;; 262 | esac 263 | 264 | # Convert BUILD_JOBs we recognize into abbreviations 265 | case ${BUILD_JOB} in 266 | engineering) 267 | BUILD_JOB="engr" 268 | ;; 269 | esac 270 | } 271 | 272 | function set_buildhistory_branch { 273 | # When we're running with BUILD_JENKINS_SERVER set we assume that buildhistory repo is on gerrit server (needs refs/heads/ prefix) 274 | [ -n "${BUILD_JENKINS_SERVER}" -a -n "${BUILD_BUILDHISTORY_PUSH_REF_PREFIX}" ] && BUILD_BUILDHISTORY_PUSH_REF_PREFIX="refs/heads/" 275 | # We need to prefix branch name, because anaconda and gecko have few jobs with the same name 276 | [ "${BUILD_JENKINS_SERVER}" = "anaconda" -a "${BUILD_BUILDHISTORY_PUSH_REF_PREFIX}" = "refs/heads/" ] && BUILD_BUILDHISTORY_PUSH_REF_PREFIX="${BUILD_BUILDHISTORY_PUSH_REF_PREFIX}anaconda-" 277 | # default is whole job name 278 | BUILD_BUILDHISTORY_BRANCH="${JOB_NAME}-${BUILD_NUMBER}" 279 | 280 | # Checks out master, pushes to master. We assume that there won't be two slaves 281 | # doing an official build at the same time; the second build will fail to push buildhistory 282 | # when this assumption is broken. 283 | [ "${BUILD_JOB}" = "official" ] && BUILD_BUILDHISTORY_BRANCH="master" 284 | 285 | BUILD_BUILDHISTORY_PUSH_REF=${BUILD_BUILDHISTORY_PUSH_REF_PREFIX}${BUILD_BUILDHISTORY_BRANCH} 286 | } 287 | 288 | TEMP=`getopt -o I:T:M:S:j:J:B:u:bshV --long images:,targets:,machines:,scp-url:,site:,jenkins:,job:,buildhistory-ref:,bom,signatures,help,version \ 289 | -n $(basename $0) -- "$@"` 290 | 291 | if [ $? != 0 ] ; then echo "Terminating..."
>&2 ; exit 2 ; fi 292 | 293 | # Note the quotes around `$TEMP': they are essential! 294 | eval set -- "$TEMP" 295 | 296 | while true ; do 297 | case $1 in 298 | -I|--images) IMAGES="$2" ; shift 2 ;; 299 | -T|--targets) TARGETS="$2" ; shift 2 ;; 300 | -M|--machines) BMACHINES="$2" ; shift 2 ;; 301 | -S|--site) BUILD_SITE="$2" ; shift 2 ;; 302 | -j|--jenkins) BUILD_JENKINS_SERVER="$2" ; shift 2 ;; 303 | -J|--job) BUILD_JOB="$2" ; shift 2 ;; 304 | -B|--buildhistory-ref) BUILD_BUILDHISTORY_PUSH_REF="$2" ; shift 2 ;; 305 | -u|--scp-url) URL="$2" ; shift 2 ;; 306 | -b|--bom) CREATE_BOM="Y" ; shift ;; 307 | -s|--signatures) SIGNATURES="Y" ; shift ;; 308 | -h|--help) showusage ; shift ;; 309 | -V|--version) echo ${SCRIPT_NAME} ${SCRIPT_VERSION}; exit ;; 310 | --) shift ; break ;; 311 | *) echo "${SCRIPT_NAME} Unrecognized option '$1'"; 312 | showusage ;; 313 | esac 314 | done 315 | 316 | # Has mcf been run and generated a makefile? 317 | if [ ! -f "Makefile" ] ; then 318 | echo "Make sure that mcf has been run and Makefile has been generated" 319 | exit 2 320 | fi 321 | 322 | [ -n "${BUILD_SITE}" -a -n "${BUILD_JENKINS_SERVER}" ] || set_build_site_and_server 323 | [ -n "${BUILD_JOB}" ] || set_build_job 324 | [ -n "${BUILD_BUILDHISTORY_PUSH_REF}" ] || set_buildhistory_branch 325 | 326 | if [ -z "${BUILD_SITE}" -o "${BUILD_JENKINS_SERVER}" = "anaconda" ]; then 327 | # Let the distro determine the policy on setting WEBOS_DISTRO_BUILD_ID when builds 328 | # are unofficial 329 | unset WEBOS_DISTRO_BUILD_ID 330 | else 331 | # If this is an official build, no BUILD_JOB prefix appears in 332 | # WEBOS_DISTRO_BUILD_ID regardless of the build site. 333 | if [ "${BUILD_JOB}" = "official" ]; then 334 | if [ ${BUILD_SITE} = ${AUTHORITATIVE_OFFICIAL_BUILD_SITE} ]; then 335 | BUILD_SITE="" 336 | fi 337 | BUILD_JOB="" 338 | else 339 | # BUILD_JOB can not contain any hyphens 340 | BUILD_JOB="${BUILD_JOB//-/}" 341 | fi 342 | 343 | # Append the separators to site and build-type. 
344 | # 345 | # Use intermediate variables so that the remainder of the script need not concern 346 | # itself with the separators, which are purely related to formatting the build id. 347 | idsite="${BUILD_SITE}" 348 | idtype="${BUILD_JOB}" 349 | 350 | if [ -n "$idsite" ]; then 351 | idsite="${idsite}-" 352 | fi 353 | 354 | if [ -n "$idtype" ]; then 355 | idtype="${idtype}." 356 | fi 357 | 358 | # BUILD_NUMBER should be set by the Jenkins executor 359 | if [ -z "${BUILD_NUMBER}" ] ; then 360 | echo "BUILD_SITE is set, but BUILD_NUMBER isn't" 361 | exit 1 362 | fi 363 | 364 | # Format WEBOS_DISTRO_BUILD_ID as <type>.<site>-<build number> 365 | export WEBOS_DISTRO_BUILD_ID=${idtype}${idsite}${BUILD_NUMBER} 366 | fi 367 | 368 | # Generate BOM files with metadata checked out by mcf (pinned versions) 369 | if [ -n "${CREATE_BOM}" -a -n "${BMACHINES}" ]; then 370 | print_timestamp "before first bom" 371 | if [ "${BUILD_JOB}" = "verf" -o "${BUILD_JOB}" = "mlverf" -o "${BUILD_JOB}" = "integ" -o "${BUILD_JOB}" = "engr" -o "${BUILD_JOB}" = "clean" ] ; then 372 | # don't use the -before suffix for official builds: they don't need -after and .diff files, 373 | # because there is no logic for using revisions different from weboslayers.py 374 | BOM_FILE_SUFFIX="-before" 375 | fi 376 | .
oe-init-build-env 377 | for MACHINE in ${BMACHINES}; do 378 | for I in ${IMAGES} ${TARGETS}; do 379 | generate_bom "${MACHINE}" "${I}" "${BBFLAGS}" "bom${BOM_FILE_SUFFIX}.txt" 380 | done 381 | done 382 | fi 383 | 384 | print_timestamp "before verf/engr/clean logic" 385 | 386 | if [ "${BUILD_JOB}" = "verf" -o "${BUILD_JOB}" = "mlverf" -o "${BUILD_JOB}" = "integ" -o "${BUILD_JOB}" = "engr" ] ; then 387 | if [ "$GERRIT_PROJECT" != "${BUILD_REPO}" ] ; then 388 | set -e # checkout issues are critical for verification and engineering builds 389 | for project in "${BUILD_LAYERS[@]}" ; do 390 | check_project ${project} 391 | done 392 | set +e 393 | fi 394 | # use -k for verf and engr builds, see [ES-85] 395 | BBFLAGS="${BBFLAGS} -k" 396 | fi 397 | 398 | if [ "${BUILD_JOB}" = "clean" ] ; then 399 | set -e # checkout issues are critical for clean build 400 | desc="[DESC]" 401 | for project in "${BUILD_LAYERS[@]}" ; do 402 | desc="${desc}`check_project_vars ${project}`" 403 | done 404 | # This is picked by regexp in jenkins config as description of the build 405 | echo $desc 406 | set +e 407 | fi 408 | 409 | # Generate BOM files again, this time with metadata possibly different for engineering and verification builds 410 | if [ -n "${CREATE_BOM}" -a -n "${BMACHINES}" ]; then 411 | if [ "${BUILD_JOB}" = "verf" -o "${BUILD_JOB}" = "mlverf" -o "${BUILD_JOB}" = "integ" -o "${BUILD_JOB}" = "engr" -o "${BUILD_JOB}" = "clean" ] ; then 412 | print_timestamp "before 2nd bom" 413 | . oe-init-build-env 414 | for MACHINE in ${BMACHINES}; do 415 | for I in ${IMAGES} ${TARGETS}; do 416 | generate_bom "${MACHINE}" "${I}" "${BBFLAGS}" "bom-after.txt" 417 | diff ${ARTIFACTS}/${MACHINE}/${I}/bom-before.txt \ 418 | ${ARTIFACTS}/${MACHINE}/${I}/bom-after.txt \ 419 | > ${ARTIFACTS}/${MACHINE}/${I}/bom-diff.txt 420 | done 421 | done 422 | fi 423 | fi 424 | 425 | print_timestamp "before signatures" 426 | 427 | if [ -n "${SIGNATURES}" -a -n "${BMACHINES}" ]; then 428 | . 
oe-init-build-env 429 | oe-core/scripts/sstate-diff-machines.sh --tmpdir=. --targets="${IMAGES} ${TARGETS}" --machines="${BMACHINES}" 430 | for MACHINE in ${BMACHINES}; do 431 | mkdir -p "${ARTIFACTS}/${MACHINE}" || true 432 | tar cjf ${ARTIFACTS}/${MACHINE}/sstate-diff.tar.bz2 sstate-diff/*/${MACHINE} --remove-files 433 | done 434 | fi 435 | 436 | # If there is a git checkout in the buildhistory dir and we have BUILD_BUILDHISTORY_PUSH_REF 437 | # add or replace push repo in webos-local 438 | # Write it this way so that BUILDHISTORY_PUSH_REPO is kept in the same place in webos-local.conf 439 | if [ -d "buildhistory/.git" -a -n "${BUILD_BUILDHISTORY_PUSH_REF}" ] ; then 440 | if [ -f webos-local.conf ] && grep -q ^BUILDHISTORY_PUSH_REPO webos-local.conf ; then 441 | sed "s#^BUILDHISTORY_PUSH_REPO.*#BUILDHISTORY_PUSH_REPO ?= \"origin master:${BUILD_BUILDHISTORY_PUSH_REF} 2>/dev/null\"#g" -i webos-local.conf 442 | else 443 | echo "BUILDHISTORY_PUSH_REPO ?= \"origin master:${BUILD_BUILDHISTORY_PUSH_REF} 2>/dev/null\"" >> webos-local.conf 444 | fi 445 | echo "INFO: buildhistory will be pushed to '${BUILD_BUILDHISTORY_PUSH_REF}'" 446 | else 447 | [ -f webos-local.conf ] && sed "/^BUILDHISTORY_PUSH_REPO.*/d" -i webos-local.conf 448 | echo "INFO: buildhistory won't be pushed because buildhistory directory isn't a git repo or BUILD_BUILDHISTORY_PUSH_REF wasn't set" 449 | fi 450 | 451 | print_timestamp "before main '${JOB_NAME}' build" 452 | 453 | FIRST_IMAGE= 454 | if [ -z "${BMACHINES}" ]; then 455 | echo "ERROR: calling build.sh without the -M parameter" 456 | else 457 | . oe-init-build-env 458 | for MACHINE in ${BMACHINES}; do 459 | /usr/bin/time -f "$TIME_STR" bitbake ${BBFLAGS} ${IMAGES} ${TARGETS} 2>&1 | tee /dev/stderr | grep '^TIME:' >> ${BUILD_TIME_LOG} 460 | 461 | # Be aware that non-zero exit code from bitbake doesn't always mean that images weren't created.
462 | # All images were created if it shows "all succeeded" in the "Tasks Summary" line: 463 | # NOTE: Tasks Summary: Attempted 5450 tasks of which 5205 didn't need to be rerun and all succeeded. 464 | 465 | # Sometimes it's followed by: 466 | # Summary: There were 2 ERROR messages shown, returning a non-zero exit code. 467 | # The ERRORs can be from failed setscene tasks or from QA checks, but weren't fatal for the build. 468 | 469 | # Collect exit codes to return them from this script (use PIPESTATUS to read the return code from bitbake, not from the added tee) 470 | RESULT+=${PIPESTATUS[0]} 471 | 472 | for I in ${IMAGES}; do 473 | mkdir -p "${ARTIFACTS}/${MACHINE}/${I}" || true 474 | # we store only tar.gz, vmdk.zip and .epk images 475 | # and we don't publish kernel images anymore 476 | if ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.vmdk >/dev/null 2>/dev/null; then 477 | if type zip >/dev/null 2>/dev/null; then 478 | # zip vmdk images if they exist 479 | find BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.vmdk -exec zip -j {}.zip {} \; || true 480 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.vmdk.zip ${ARTIFACTS}/${MACHINE}/${I}/ || true 481 | else 482 | # report failure and publish the vmdk 483 | RESULT+=1 484 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.vmdk ${ARTIFACTS}/${MACHINE}/${I}/ || true 485 | fi 486 | # copy webosvbox if we've built a vmdk image 487 | cp meta-webos/scripts/webosvbox ${ARTIFACTS}/${MACHINE} || true 488 | # copy a few more files for creating different vmdk files with the same rootfs 489 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.rootfs.ext3 ${ARTIFACTS}/${MACHINE}/${I}/ || true 490 | cp BUILD/sysroots/${MACHINE}/usr/lib/syslinux/mbr.bin ${ARTIFACTS}/${MACHINE}/${I}/ || true 491 | # this won't work in jobs which inherit rm_work, but until we change the image build to stage them, use WORKDIR paths 492 | cp BUILD/work/${MACHINE}*/${I}/*/*/hdd/boot/ldlinux.sys ${ARTIFACTS}/${MACHINE}/${I}/ 2>/dev/null || echo "INFO: ldlinux.sys 
doesn't exist, probably using rm_work" 493 | cp BUILD/work/${MACHINE}*/${I}/*/*/hdd/boot/syslinux.cfg ${ARTIFACTS}/${MACHINE}/${I}/ 2>/dev/null || echo "INFO: syslinux.cfg doesn't exist, probably using rm_work" 494 | cp BUILD/work/${MACHINE}*/${I}/*/*/hdd/boot/vmlinuz ${ARTIFACTS}/${MACHINE}/${I}/ 2>/dev/null || echo "INFO: vmlinuz doesn't exist, probably using rm_work" 495 | elif ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.tar.gz >/dev/null 2>/dev/null \ 496 | || ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.ext4.img >/dev/null 2>/dev/null \ 497 | || ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.epk >/dev/null 2>/dev/null; then 498 | if ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.tar.gz >/dev/null 2>/dev/null; then 499 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.tar.gz ${ARTIFACTS}/${MACHINE}/${I}/ 500 | fi 501 | if ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.ext4.img >/dev/null 2>/dev/null; then 502 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.ext4.img ${ARTIFACTS}/${MACHINE}/${I}/ 503 | fi 504 | if ls BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.epk >/dev/null 2>/dev/null; then 505 | mv BUILD/deploy/images/${MACHINE}/${I}-${MACHINE}-*.epk ${ARTIFACTS}/${MACHINE}/${I}/ 506 | fi 507 | elif ls BUILD/deploy/sdk/${I}-*.sh >/dev/null 2>/dev/null; then 508 | mv BUILD/deploy/sdk/${I}-*.sh ${ARTIFACTS}/${MACHINE}/${I}/ 509 | else 510 | echo "WARN: No recognized IMAGE_FSTYPES to copy to build artifacts" 511 | fi 512 | FOUND_IMAGE="false" 513 | # Add .md5 files for image files, if they are missing or older than image file 514 | for IMG_FILE in ${ARTIFACTS}/${MACHINE}/${I}/*.vmdk* ${ARTIFACTS}/${MACHINE}/${I}/*.tar.gz ${ARTIFACTS}/${MACHINE}/${I}/*.epk ${ARTIFACTS}/${MACHINE}/${I}/*.sh; do 515 | if echo $IMG_FILE | grep -q "\.md5$"; then 516 | continue 517 | fi 518 | if [ -e ${IMG_FILE} -a ! -h ${IMG_FILE} ] ; then 519 | FOUND_IMAGE="true" 520 | if [ ! 
-e ${IMG_FILE}.md5 -o ${IMG_FILE}.md5 -ot ${IMG_FILE} ] ; then 521 | echo MD5: ${IMG_FILE} 522 | md5sum ${IMG_FILE} | sed 's# .*/# #g' > ${IMG_FILE}.md5 523 | fi 524 | fi 525 | done 526 | # include the .fastboot kernel image when available 527 | if ls BUILD/deploy/images/${MACHINE}/*.fastboot >/dev/null 2>/dev/null; then 528 | mv BUILD/deploy/images/${MACHINE}/*.fastboot ${ARTIFACTS}/${MACHINE}/${I}/ 529 | fi 530 | 531 | # copy a few interesting buildhistory reports, but only if the image was really created 532 | # (otherwise an old report from a previous build, checked out from the buildhistory repo, could be used) 533 | if [ "${FOUND_IMAGE}" = "true" ] ; then 534 | # XXX Might there be other subdirectories under buildhistory/sdk that weren't created by this build? 535 | if ls buildhistory/sdk/*/${I} >/dev/null 2>/dev/null; then 536 | # Unfortunately, the subdirectories under buildhistory/sdk are named by the target tune package arch, not by MACHINE 537 | for d in buildhistory/sdk/*; do 538 | target_tunepkgarch=$(basename $d) 539 | mkdir -p ${ARTIFACTS}/$target_tunepkgarch/ 540 | cp -a $d/${I} ${ARTIFACTS}/$target_tunepkgarch/ 541 | done 542 | else 543 | if [ -f buildhistory/images/${MACHINE}/eglibc/${I}/build-id.txt ]; then 544 | cp buildhistory/images/${MACHINE}/eglibc/${I}/build-id.txt ${ARTIFACTS}/${MACHINE}/${I}/build-id.txt 545 | else 546 | cp buildhistory/images/${MACHINE}/eglibc/${I}/build-id ${ARTIFACTS}/${MACHINE}/${I}/build-id.txt 547 | fi 548 | if [ -z "$FIRST_IMAGE" ] ; then 549 | # store build-id.txt from the first IMAGE and first MACHINE as a representative of the whole build for InfoBadge 550 | # instead of requiring the jenkins job to hardcode the MACHINE/IMAGE name in: 551 | # manager.addInfoBadge("${manager.build.getWorkspace().child('buildhistory/images/qemux86/eglibc/webos-image/build-id.txt').readToString()}") 552 | # we should be able to use: 553 | # manager.addInfoBadge("${manager.build.getWorkspace().child('BUILD-ARTIFACTS/build-id.txt').readToString()}") 554 | # in all builds (making BUILD_IMAGES/BUILD_MACHINE changes less 
error-prone) 555 | FIRST_IMAGE="${MACHINE}/${I}" 556 | cp ${ARTIFACTS}/${MACHINE}/${I}/build-id.txt ${ARTIFACTS}/build-id.txt 557 | fi 558 | cp buildhistory/images/${MACHINE}/eglibc/${I}/image-info.txt ${ARTIFACTS}/${MACHINE}/${I}/image-info.txt 559 | cp buildhistory/images/${MACHINE}/eglibc/${I}/files-in-image.txt ${ARTIFACTS}/${MACHINE}/${I}/files-in-image.txt 560 | cp buildhistory/images/${MACHINE}/eglibc/${I}/installed-packages.txt ${ARTIFACTS}/${MACHINE}/${I}/installed-packages.txt 561 | cp buildhistory/images/${MACHINE}/eglibc/${I}/installed-package-sizes.txt ${ARTIFACTS}/${MACHINE}/${I}/installed-package-sizes.txt 562 | if [ -e buildhistory/images/${MACHINE}/eglibc/${I}/installed-package-file-sizes.txt ] ; then 563 | cp buildhistory/images/${MACHINE}/eglibc/${I}/installed-package-file-sizes.txt ${ARTIFACTS}/${MACHINE}/${I}/installed-package-file-sizes.txt 564 | fi 565 | fi 566 | fi 567 | done 568 | done 569 | 570 | grep "Elapsed time" buildstats/*/*/*/* | sed 's/^.*\/\(.*\): Elapsed time: \(.*\)$/\2 \1/g' | sort -n | tail -n 20 | tee -a ${ARTIFACTS}/top20buildstats.txt 571 | tar cjf ${ARTIFACTS}/buildstats.tar.bz2 BUILD/buildstats 572 | if [ -e BUILD/qa.log ]; then 573 | cp BUILD/qa.log ${ARTIFACTS} || true 574 | # show them in the console log so they are easier to spot (without downloading qa.log from the artifacts) 575 | echo "WARN: Following QA issues were found:" 576 | cat BUILD/qa.log 577 | else 578 | echo "NOTE: No QA issues were found." 
579 | fi 580 | cp BUILD/WEBOS_BOM_data.pkl ${ARTIFACTS} || true 581 | if [ -d BUILD/deploy/sources ] ; then 582 | # exclude diff.gz files, because with the old archiver they contain the whole source (nothing creates a .orig directory) 583 | # see http://lists.openembedded.org/pipermail/openembedded-core/2013-December/087729.html 584 | tar czf ${ARTIFACTS}/sources.tar.gz BUILD/deploy/sources --exclude \*.diff.gz 585 | fi 586 | fi 587 | 588 | print_timestamp "before package-src-uris" 589 | 590 | # Generate a list of SRC_URI and SRCREV values for all components 591 | echo "NOTE: generating package-srcuris.txt" 592 | BUILDHISTORY_PACKAGE_SRCURIS="package-srcuris.txt" 593 | ./meta-webos/scripts/buildhistory-collect-srcuris buildhistory >${BUILDHISTORY_PACKAGE_SRCURIS} 594 | ./oe-core/scripts/buildhistory-collect-srcrevs buildhistory >>${BUILDHISTORY_PACKAGE_SRCURIS} 595 | sort -o ${BUILDHISTORY_PACKAGE_SRCURIS} ${BUILDHISTORY_PACKAGE_SRCURIS} 596 | cp ${BUILDHISTORY_PACKAGE_SRCURIS} ${ARTIFACTS} || true 597 | 598 | print_timestamp "before baselines" 599 | 600 | # Don't do these for unofficial builds 601 | if [ -n "${WEBOS_DISTRO_BUILD_ID}" -a "${RESULT}" -eq 0 ]; then 602 | if [ ! -f latest_project_baselines.txt ]; then 603 | # create a dummy, especially useful for verification builds (diff against origin/master) 604 | echo ". 
origin/master" > latest_project_baselines.txt 605 | for project in "${BUILD_LAYERS[@]}" ; do 606 | layer=`basename ${project}` 607 | if [ -d "${layer}" ] ; then 608 | echo "${layer} origin/master" >> latest_project_baselines.txt 609 | fi 610 | done 611 | fi 612 | 613 | command \ 614 | meta-webos/scripts/build-changes/update_build_changes.sh \ 615 | "${BUILD_NUMBER}" \ 616 | "${URL}" 2>&1 || printf "\nChangelog generation failed or script not found.\nPlease check lines above for errors\n" 617 | cp build_changes.log ${ARTIFACTS} || true 618 | fi 619 | 620 | print_timestamp "stop" 621 | 622 | cd "${CALLDIR}" 623 | 624 | # only the result from bitbake/make is important 625 | exit ${RESULT} 626 | 627 | # vim: ts=2 sts=2 sw=2 et 628 | -------------------------------------------------------------------------------- /scripts/prerequisites.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -e 2 | 3 | # Copyright (c) 2008-2013 LG Electronics, Inc. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # This has only been tested on Ubuntu-12.04 amd64. 
18 | 19 | check_sanity=true 20 | usage="$0 [--help|-h] [--version|-V]" 21 | version="2.0.1" 22 | 23 | for i ; do 24 | case "$i" in 25 | --help|-h) echo ${usage}; exit 0 ;; 26 | --version|-V) echo ${version}; exit 0 ;; 27 | *) 28 | echo Unrecognized option: $i 1>&2 29 | echo ${usage} 30 | exit 1 31 | ;; 32 | esac 33 | done 34 | 35 | sane=true 36 | 37 | distributor_id_sane="^((Ubuntu))$" 38 | release_sane="^((12.04)|(12.10))$" 39 | codename_sane="^((precise)|(quantal))$" 40 | arch_sane="^((i386)|(amd64))$" 41 | 42 | case "${check_sanity}" in 43 | true) 44 | if [ ! -x /usr/bin/lsb_release ] ; then 45 | echo 'WARNING: /usr/bin/lsb_release not available, cannot test sanity of this system.' 1>&2 46 | sane=false 47 | else 48 | distributor_id=`/usr/bin/lsb_release -s -i` 49 | release=`/usr/bin/lsb_release -s -r` 50 | codename=`/usr/bin/lsb_release -s -c` 51 | 52 | if ! echo "${distributor_id}" | egrep -q "${distributor_id_sane}"; then 53 | echo "WARNING: Distributor ID reported by lsb_release '${distributor_id}' not in '${distributor_id_sane}'" 1>&2 54 | sane=false 55 | fi 56 | 57 | if ! echo "${release}" | egrep -q "${release_sane}"; then 58 | echo "WARNING: Release reported by lsb_release '${release}' not in '${release_sane}'" 1>&2 59 | sane=false 60 | fi 61 | 62 | if ! echo "${codename}" | egrep -q "${codename_sane}"; then 63 | echo "WARNING: Codename reported by lsb_release '${codename}' not in '${codename_sane}'" 1>&2 64 | sane=false 65 | fi 66 | fi 67 | 68 | if [ ! -x /usr/bin/dpkg ] ; then 69 | echo 'WARNING: /usr/bin/dpkg not available, cannot test architecture of this system.' 1>&2 70 | sane=false 71 | else 72 | arch=`/usr/bin/dpkg --print-architecture` 73 | if ! echo "${arch}" | egrep -q "${arch_sane}"; then 74 | echo "WARNING: Architecture reported by dpkg --print-architecture '${arch}' not in '${arch_sane}'" 1>&2 75 | sane=false 76 | fi 77 | fi 78 | 79 | case "${sane}" in 80 | true) ;; 81 | false) 82 | echo 'WARNING: This system configuration is untested. 
Let us know if it works.' 1>&2 83 | ;; 84 | esac 85 | ;; 86 | 87 | false) ;; 88 | esac 89 | 90 | apt-get update 91 | 92 | # These are essential on ubuntu 93 | essential="\ 94 | bzip2 \ 95 | gzip \ 96 | tar \ 97 | wget \ 98 | " 99 | 100 | # And we need these when on 64-bit Ubuntu ... 101 | # gcc-multilib is needed to build 32bit version of pseudo 102 | # zlib1g:i386 is needed for 32bit prebuilt toolchain 103 | amd64_specific="\ 104 | gcc-multilib \ 105 | zlib1g:i386 \ 106 | " 107 | 108 | [ "${arch}" = amd64 ] && essential="${essential} ${amd64_specific}" 109 | 110 | apt-get install --yes \ 111 | ${essential} \ 112 | bison \ 113 | build-essential \ 114 | chrpath \ 115 | diffstat \ 116 | gawk \ 117 | git \ 118 | language-pack-en \ 119 | python3 \ 120 | python3-jinja2 \ 121 | texi2html \ 122 | texinfo \ 123 | 124 | -------------------------------------------------------------------------------- /weboslayers.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) 2008-2014 LG Electronics, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | # 15 | # This implementation introduces next generation build environment for 16 | # Open webOS. The change introduces a mechanism to add additional layers to the 17 | # base ones: oe-core, meta-oe, and meta-webos, and to specify the commits to be 18 | # used for each. 
mcf now expects the layers to be defined in this file 19 | # (weboslayers.py in the same directory as mcf) as a list of Python data tuples: 20 | # 21 | # webos_layers = [ 22 | # ('layer-name', priority, 'URL', 'submission', 'working-dir'), 23 | # ... 24 | # ] 25 | # 26 | # where: 27 | # 28 | # layer-name = Unique identifier; it represents the layer directory containing 29 | # conf/layer.conf. 30 | # 31 | # priority = Integer layer priority as defined by OpenEmbedded. It also 32 | # specifies the order in which layers are searched for files. 33 | # Larger values have higher priority. A value of -1 indicates 34 | # that the entry is not a layer; for example, bitbake. 35 | # 36 | # URL = The Git repository address for the layer from which to clone. 37 | # A value of '' skips the cloning. 38 | # 39 | # submission = The information used by Git to check out and identify the precise 40 | # content. Submission values combine comma-separated "branch=", 41 | # "commit=", and "tag=" fields. If the branch is omitted, 42 | # "master" is used as the name of the local 43 | # branch. If the commit or tag is omitted, origin/HEAD is checked 44 | # out (which might NOT be origin/master, although 45 | # it usually is). 46 | # 47 | # working-dir = Alternative directory for the layer. 
48 | # 49 | # The name of the distribution is also defined in this file 50 | # along with a list of the valid MACHINE-s 51 | # 52 | 53 | Distribution = "webos" 54 | 55 | # Supported MACHINE-s 56 | Machines = ['qemux86', 'qemuarm'] 57 | 58 | # github.com/openembedded repositories are read-only mirrors of the authoritative 59 | # repositories on git.openembedded.org 60 | webos_layers = [ 61 | ('bitbake', -1, 'git://github.com/openembedded/bitbake.git', 'branch=1.18,commit=0f7b6a0', ''), 62 | ('meta', 5, 'git://github.com/openembedded/openembedded-core.git', 'branch=dylan,commit=bf2d538', ''), 63 | ('meta-oe', 6, 'git://github.com/openembedded/meta-openembedded.git', 'branch=dylan,commit=70ebe86', ''), 64 | ('meta-networking', 6, 'git://github.com/openembedded/meta-openembedded.git', '', ''), 65 | 66 | ('meta-webos-backports', 9, 'git://github.com/openwebos/meta-webos-backports.git', 'commit=ed80399', ''), 67 | ('meta-webos', 10, 'git://github.com/openwebos/meta-webos.git', 'commit=f43220d', ''), 68 | ] 69 | --------------------------------------------------------------------------------
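The `submission` strings in `webos_layers` (e.g. `branch=dylan,commit=bf2d538`) follow the comma-separated `key=value` format documented in the header comment of weboslayers.py. As a rough illustration of how such a string decomposes, here is a small, hypothetical helper — it is not part of mcf, and mcf's actual parsing may differ:

```python
def parse_submission(submission):
    """Split a submission string such as 'branch=dylan,commit=bf2d538'
    into its fields, applying the documented defaults: an omitted branch
    falls back to 'master'; an omitted commit/tag stays None (meaning
    origin/HEAD would be checked out)."""
    info = {'branch': 'master', 'commit': None, 'tag': None}
    if submission:
        for field in submission.split(','):
            key, _, value = field.partition('=')
            if key in info and value:
                info[key] = value
    return info

# Example, using the bitbake entry from webos_layers above:
# parse_submission('branch=1.18,commit=0f7b6a0')
# -> {'branch': '1.18', 'commit': '0f7b6a0', 'tag': None}
```

An empty submission (as in the meta-networking entry) yields only the defaults, matching the documented fallback behavior.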