11 |
12 | Estuary is a complete open source project for the ARM64 ecosystem. It provides a total solution for general-purpose computing based on the aarch64 architecture. With Estuary, anyone can quickly bring up an ARM64 platform, both software and hardware!
13 |
14 | The goal of the Estuary project is to support, enable and speed up the maturity of the ARM64 ecosystem.
15 |
16 | For more detailed information about Estuary, please refer to http://open-estuary.org/estuary.
17 |
18 |
Latest release version
19 |
20 | Details of the latest version can be found at: http://open-estuary.org/releases-2/
21 |
22 | To get the difference between the latest version and the previous one, please refer to [changelist.md](https://github.com/open-estuary/estuary/blob/master/changelist.md).
23 |
24 |
Documentation
25 |
26 | We provide technical documents about UEFI, grub, methods to bring up the system, armor, application integration and so on. You can find more detailed documentation at https://github.com/open-estuary/estuary/tree/master/doc
27 |
28 |
How to get
29 |
30 | All released binary files and the corresponding documents are placed in the **Download location**, where you can browse and fetch the correct files according to the release version and your board type.
31 |
32 | All prebuilt kernel image/uefi/grub and other binaries can be obtained in **Download location**`/release//linux/common//`.
33 |
34 | All validated distributions can be obtained in **Download location**`/release//linux//common`.
35 |
36 | Each folder contains a "ReadMe" file which describes the details of the files in each main folder.
37 |
38 | More detailed documentation is available in **Download location**`/release//linux/common//`documentation.
39 |
40 | **Download Location**
41 |
42 | Accessing from China: ftp://117.78.41.188
43 |
44 | Accessing from outside-China: http://download.open-estuary.org/
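
For example, the Ubuntu tarball of the v2.2-rc1 pre-release can be fetched directly with wget (this path is the example given in the Distributions Guide; adjust the version, distro and file name to the release you want):

```bash
# Example download of a validated distribution tarball from the outside-China mirror
wget -c http://download.open-estuary.org/AllDownloads/DownloadsEstuary/pre-releases/2.2/rc1/linux/Ubuntu/Common/Ubuntu_ARM64.tar.gz
```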
45 |
46 |
Development
47 |
48 | To get started, please refer to http://open-estuary.org/getting-started/.
49 |
50 | To deploy the system, please refer to the Deploy_Manual and Quick_Deployment documents, which are available in the `/build/doc/` directory after you have built the project. You can also find more details at https://github.com/open-estuary/estuary/tree/master/doc
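
As a reference for a single-distro build, build_all.sh in this repository invokes the build script as follows (the `./workspace` build directory is simply the one build_all.sh uses; pick any distro supported by your checkout):

```bash
# Build the Ubuntu distribution into ./workspace, mirroring what build_all.sh does per distro
sudo ./build.sh --build_dir=./workspace -d ubuntu
```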
51 |
52 |
Copyright and License
57 |
58 | COPYRIGHT for the released/pre-released binary files of Estuary, which mainly include: kernel image/uefi/grub, distributions like CentOS/Ubuntu, and integrated applications like OpenJDK/MySQL/Redis.
59 |
60 | **Copyright**: (c) 2017-, Open Estuary.
61 |
62 | **License**: GPLv2
63 |
64 | This program is free software; you can redistribute it and/or modify
65 | it under the terms of the GNU General Public License as published by
66 | the Free Software Foundation; either version 2 of the License, or
67 | (at your option) any later version.
68 |
69 | This program is distributed in the hope that it will be useful,
70 | but WITHOUT ANY WARRANTY; without even the implied warranty of
71 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
72 | GNU General Public License for more details.
73 |
74 |
Contact us
75 |
76 | To post or search any information about Estuary, please refer to [Discussion Forum](http://open-estuary.org/forums/)
77 |
78 | For technical support or to raise new issues about Estuary, please refer to the [Support center](http://open-estuary.org/supportcenter/)
79 |
80 | To collaborate with Estuary, please refer to http://open-estuary.org/estuary-new-collaboration/
81 |
82 | And… if you have any ARM-based platform and you want to enable the Estuary solution on it,
83 | please contact us! Let's work together to make the ARM server ecosystem rich and mature!
84 | Yes, together we can make it happen!
85 |
86 |
87 |
--------------------------------------------------------------------------------
/build_all.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -ex
3 | user=`whoami`
4 | if [ x"$user" != x"root" ]; then
5 | echo -e "\033[31mInsufficient permissions! Please use sudo to run this script!\033[0m"
6 | exit 1
7 | fi
8 | ALL_SHELL_DISTRO="centos fedora ubuntu opensuse debian"
9 | BUILD_DIR="./workspace"
10 | for DISTRO in $ALL_SHELL_DISTRO;do
11 | ./build.sh --build_dir=${BUILD_DIR} -d "${DISTRO,,}" > ${DISTRO}.log 2>&1 &
12 | done
13 | ./build.sh --build_dir=${BUILD_DIR} -d common > common.log 2>&1 &
14 | wait
15 |
--------------------------------------------------------------------------------
/changelist.md:
--------------------------------------------------------------------------------
1 | # Change list for Estuary v3.1:
2 | 1. UEFI
3 | - Added HiKey support in UEFI
4 | 2. OS
5 | - Upgraded Linux kernel version to v4.9.20
6 | - Supported HiKey with v4.9.20 kernel
7 | 3. Distros
8 | - Fixed minirootfs devramfs conflict with mdev
9 | - Added support for OpenEmbedded
10 | - Added support for RancherOS
11 | 4. Applications
12 | - Added OpenStack Newton initial support
13 | - Enabled HHVM for ARM64
14 | - Enabled MongoDB docker image
15 | 5. Deployment
16 | - Fixed various BMC load ISO bugs (distro selection, waiting time)
17 | - Sort hard disk list in alphabetic order
18 | - Improved distribution generation speed when building
19 | 6. Document
20 | - Updated project documentation (Readme, Grub, etc)
21 | - Updated applications user manual (Redis, PostgreSQL, MySQL, MongoDB, etc)
22 | 7. CI/Automation
23 | - Supported basic CI/Automation for D03 (Build, NFS/Hard disk Deployment, Some tests)
24 | - Supported basic CI/Automation for D05 (Build, NFS/Hard disk Deployment, Some tests)
25 |
26 | # Remaining issues:
27 | 1. Armor utilities are not fully supported
28 |
--------------------------------------------------------------------------------
/configs/auto-install/centos/auto-iso/grub.cfg:
--------------------------------------------------------------------------------
1 | set default="0"
2 |
3 | function load_video {
4 | if [ x$feature_all_video_module = xy ]; then
5 | insmod all_video
6 | else
7 | insmod efi_gop
8 | insmod efi_uga
9 | insmod ieee1275_fb
10 | insmod vbe
11 | insmod vga
12 | insmod video_bochs
13 | insmod video_cirrus
14 | fi
15 | }
16 |
17 | load_video
18 | set gfxpayload=keep
19 | insmod gzio
20 | insmod part_gpt
21 | insmod ext2
22 |
23 | set timeout=60
24 | ### END /etc/grub.d/00_header ###
25 |
26 | search --no-floppy --set=root -l 'CentOS 7 aarch64'
27 |
28 | ### BEGIN /etc/grub.d/10_linux ###
29 | menuentry 'Install CentOS 7 (text mode)' --class red --class gnu-linux --class gnu --class os {
30 | linux /images/pxeboot/vmlinuz inst.ks=file:/ks-iso.cfg ip=dhcp inst.text
31 | initrd /images/pxeboot/initrd.img
32 | }
33 | menuentry 'Install CentOS 7 (graphical mode)' --class red --class gnu-linux --class gnu --class os {
34 | linux /images/pxeboot/vmlinuz inst.ks=file:/ks-iso.cfg ip=dhcp
35 | initrd /images/pxeboot/initrd.img
36 | }
37 |
--------------------------------------------------------------------------------
/configs/auto-install/centos/auto-iso/ks-iso.cfg:
--------------------------------------------------------------------------------
1 | network --bootproto=dhcp --onboot=yes --ipv6=auto --activate
2 |
3 | %packages
4 | bash-completion
5 | epel-release
6 | wget
7 | %end
8 |
9 | %post --interpreter=/bin/bash
10 | estuary_release=ftp://repoftp:repopushez7411@117.78.41.188/releases
11 | cat > /etc/yum.repos.d/estuary.repo << EOF
12 | [Estuary]
13 | name=Estuary
14 | baseurl=${estuary_release}/5.0/centos/
15 | enabled=1
16 | gpgcheck=1
17 | gpgkey=${estuary_release}/ESTUARY-GPG-KEY
18 |
19 | [estuary-kernel]
20 | name=estuary-kernel
21 | baseurl=http://114.119.4.74/kernel-5.3/centos/
22 | enabled=1
23 | gpgcheck=0
24 | gpgkey=${estuary_release}/ESTUARY-GPG-KEY
25 | EOF
26 | chmod +r /etc/yum.repos.d/estuary.repo
27 | yum clean dbcache
28 | sed -i "s/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=\"crashkernel=auto iommu.strict=0\"/g" /etc/default/grub
29 | grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
30 | %end
31 |
--------------------------------------------------------------------------------
/configs/auto-install/centos/auto-pxe/grub.cfg:
--------------------------------------------------------------------------------
1 | set default="0"
2 |
3 | function load_video {
4 | if [ x$feature_all_video_module = xy ]; then
5 | insmod all_video
6 | else
7 | insmod efi_gop
8 | insmod efi_uga
9 | insmod ieee1275_fb
10 | insmod vbe
11 | insmod vga
12 | insmod video_bochs
13 | insmod video_cirrus
14 | fi
15 | }
16 |
17 | load_video
18 | set gfxpayload=keep
19 | insmod gzio
20 | insmod part_gpt
21 | insmod ext2
22 |
23 | set timeout=60
24 | ### END /etc/grub.d/00_header ###
25 |
26 | search --no-floppy --set=root -l 'CentOS 7 aarch64'
27 |
28 | ### BEGIN /etc/grub.d/10_linux ###
29 | menuentry 'Install CentOS Linux 7 (text mode)' --class red --class gnu-linux --class gnu --class os {
30 | linux /images/pxeboot/vmlinuz inst.stage2=cdrom inst.ks=file:/ks.cfg ip=dhcp inst.text
31 | initrd /images/pxeboot/initrd.img
32 | }
33 | menuentry 'Install CentOS Linux 7 (graphical mode)' --class red --class gnu-linux --class gnu --class os {
34 | linux /images/pxeboot/vmlinuz inst.stage2=cdrom inst.ks=file:/ks.cfg ip=dhcp
35 | initrd /images/pxeboot/initrd.img
36 | }
37 |
--------------------------------------------------------------------------------
/configs/auto-install/centos/auto-pxe/ks.cfg:
--------------------------------------------------------------------------------
1 | url --url="http://mirror.centos.org/altarch/7/os/aarch64/"
2 | repo --name="estuary" --baseurl=http://114.119.4.74/kernel-5.3/centos
3 | repo --name="extras" --baseurl="http://mirror.centos.org/altarch/7/extras/aarch64/"
4 | network --bootproto=dhcp --onboot=yes --ipv6=auto --activate
5 |
6 | %packages
7 | bash-completion
8 | epel-release
9 | wget
10 | %end
11 |
12 | %post --interpreter=/bin/bash
13 | estuary_release=ftp://repoftp:repopushez7411@117.78.41.188/releases
14 | cat > /etc/yum.repos.d/estuary.repo << EOF
15 | [Estuary]
16 | name=Estuary
17 | baseurl=${estuary_release}/5.0/centos/
18 | enabled=1
19 | gpgcheck=1
20 | gpgkey=${estuary_release}/ESTUARY-GPG-KEY
21 |
22 | [estuary-kernel]
23 | name=estuary-kernel
24 | baseurl=http://114.119.4.74/kernel-5.3/centos/
25 | enabled=1
26 | gpgcheck=0
27 | gpgkey=${estuary_release}/ESTUARY-GPG-KEY
28 | EOF
29 | chmod +r /etc/yum.repos.d/estuary.repo
30 | yum clean dbcache
31 | sed -i "s/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX=\"crashkernel=auto iommu.strict=0\"/g" /etc/default/grub
32 | grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
33 | %end
34 |
--------------------------------------------------------------------------------
/configs/auto-install/fedora/auto-iso/grub.cfg:
--------------------------------------------------------------------------------
1 | set default="0"
2 |
3 | function load_video {
4 | if [ x$feature_all_video_module = xy ]; then
5 | insmod all_video
6 | else
7 | insmod efi_gop
8 | insmod efi_uga
9 | insmod ieee1275_fb
10 | insmod vbe
11 | insmod vga
12 | insmod video_bochs
13 | insmod video_cirrus
14 | fi
15 | }
16 |
17 | load_video
18 | set gfxpayload=keep
19 | insmod gzio
20 | insmod part_gpt
21 | insmod ext2
22 |
23 | set timeout=60
24 | ### END /etc/grub.d/00_header ###
25 |
26 | search --no-floppy --set=root -l 'Fedora-S-dvd-aarch64-29'
27 |
28 | ### BEGIN /etc/grub.d/10_linux ###
29 | menuentry 'Install Fedora 29 (text mode)' --class red --class gnu-linux --class gnu --class os {
30 | linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Fedora-S-dvd-aarch64-29 ro inst.ks=file:/ks-iso.cfg inst.text
31 | initrd /images/pxeboot/initrd.img
32 | }
33 | menuentry 'Install Fedora 29 (graphical mode)' --class red --class gnu-linux --class gnu --class os {
34 | linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Fedora-S-dvd-aarch64-29 ro inst.ks=file:/ks-iso.cfg
35 | initrd /images/pxeboot/initrd.img
36 | }
37 |
--------------------------------------------------------------------------------
/configs/auto-install/fedora/auto-iso/ks-iso.cfg:
--------------------------------------------------------------------------------
1 | %packages
2 | bash-completion
3 | wget
4 | %end
5 |
6 | %post --interpreter=/bin/bash
7 | sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config || true
8 | %end
9 |
10 |
--------------------------------------------------------------------------------
/configs/auto-install/fedora/auto-pxe/grub.cfg:
--------------------------------------------------------------------------------
1 | set default="0"
2 |
3 | function load_video {
4 | if [ x$feature_all_video_module = xy ]; then
5 | insmod all_video
6 | else
7 | insmod efi_gop
8 | insmod efi_uga
9 | insmod ieee1275_fb
10 | insmod vbe
11 | insmod vga
12 | insmod video_bochs
13 | insmod video_cirrus
14 | fi
15 | }
16 |
17 | load_video
18 | set gfxpayload=keep
19 | insmod gzio
20 | insmod part_gpt
21 | insmod ext2
22 |
23 | set timeout=60
24 | ### END /etc/grub.d/00_header ###
25 |
26 | search --no-floppy --set=root -l 'Fedora-S-dvd-aarch64-29'
27 |
28 | ### BEGIN /etc/grub.d/10_linux ###
29 | menuentry 'Install Fedora 29 (text mode)' --class red --class gnu-linux --class gnu --class os {
30 | linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Fedora-S-dvd-aarch64-29 ro inst.ks=file:/ks.cfg ip=dhcp inst.text
31 | initrd /images/pxeboot/initrd.img
32 | }
33 | menuentry 'Install Fedora 29 (graphical mode)' --class red --class gnu-linux --class gnu --class os {
34 | linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Fedora-S-dvd-aarch64-29 ro inst.ks=file:/ks.cfg ip=dhcp
35 | initrd /images/pxeboot/initrd.img
36 | }
37 |
--------------------------------------------------------------------------------
/configs/auto-install/fedora/auto-pxe/ks.cfg:
--------------------------------------------------------------------------------
1 | url --url="https://dl.fedoraproject.org/pub/fedora/linux/releases/test/29_Beta/Everything/aarch64/os/"
2 | repo --name="estuary" --baseurl=ftp://repoftp:repopushez7411@117.78.41.188/releases/5.2/fedora
3 |
4 | %packages
5 | kernel-4.16.0
6 | kernel-modules-extra-4.16.0
7 | bash-completion
8 | wget
9 | %end
10 |
11 | %post --interpreter=/bin/bash
12 | sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config || true
13 | %end
14 |
15 |
--------------------------------------------------------------------------------
/default.xml:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/deploy/pxe-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_pxe_tftproot <__tftproot>
5 | ###################################################################################
6 | get_pxe_tftproot()
7 | {
8 | local __tftproot=$1
9 | local tftproot=""
10 | echo "--------------------------------------------------------------------------------"
11 | echo "Which directory do you want to be your tftp root directory? (if this directory does not exist it will be created for you)"
12 | echo "--------------------------------------------------------------------------------"
13 | read -p "[ /var/lib/tftpboot ] " tftproot
14 | if [ x"$tftproot" = x"" ]; then
15 | tftproot="/var/lib/tftpboot"
16 | fi
17 |
18 | eval $__tftproot="$tftproot"
19 | return 0
20 | }
21 |
22 | ###################################################################################
23 | # get_pxe_nfsroot <__nfsroot>
24 | ###################################################################################
25 | get_pxe_nfsroot()
26 | {
27 | local __nfsroot=$1
28 | local nfsroot=""
29 | echo "--------------------------------------------------------------------------------"
30 | echo "Which directory do you want to be your nfs root directory? (if this directory does not exist it will be created for you)"
31 | echo "--------------------------------------------------------------------------------"
32 | read -p "[ /var/lib/nfsroot ] " nfsroot
33 | if [ x"$nfsroot" = x"" ]; then
34 | nfsroot="/var/lib/nfsroot"
35 | fi
36 |
37 | eval $__nfsroot="'$nfsroot'"
38 | return 0
39 | }
40 |
41 | ###################################################################################
42 | # get_pxe_interface <__interface>
43 | ###################################################################################
44 | get_pxe_interface() {
45 | local __interface=$1
46 | local netcard_name=
47 | local netcard_idx=1
48 | local netcard_count=`ifconfig -a | grep -A 1 eth | grep -B 1 "inet addr" | \
49 | grep -v "Bcast:0.0.0.0" | grep -P "^(eth).*" | wc -l`
50 | if [ $netcard_count -eq 0 ]; then
51 | echo "Please setup netcard at first!" ; exit 1
52 | elif [ $netcard_count -gt 1 ]; then
53 | echo "--------------------------------------------------------------------------------"
54 | echo "Which network card do you want to bind to the PXE (the board will connect into the same local area network)"
55 | echo "--------------------------------------------------------------------------------"
56 | while true; do
57 | echo -e "\nPlease choose the network card needed: \n"
58 | ifconfig -a | grep -A 1 eth | grep -B 1 "inet addr" | grep -v "Bcast:0.0.0.0" | awk 'BEGIN{FS="\n";OFS=") ";RS="--\n"} {print NR,$0}'
59 | echo " "
60 | read -p 'Enter Device Number or 'q' to exit: ' netcard_idx
61 | echo " "
62 | if expr $netcard_idx + 0 &>/dev/null; then
63 | if [ $netcard_idx -ge 1 ] && [ $netcard_idx -le $netcard_count ]; then
64 | break
65 | fi
66 | elif [ x"$netcard_count" = x"q" ]; then
67 | return 1
68 | fi
69 | done
70 | fi
71 |
72 | netcard_name=`ifconfig -a | grep -A 1 eth | grep -B 1 "inet addr" | grep -v "Bcast:0.0.0.0" | awk 'BEGIN{FS="\n";OFS=") ";RS="--\n"} {print NR,$0}' | \
73 | grep -Po "(?<=${netcard_idx}\) )(eth[^ ]*)"`
74 | if [ x"$netcard_name" = x"" ]; then
75 | return 1
76 | else
77 | eval $__interface="'$netcard_name'"
78 | return 0
79 | fi
80 | }
81 |
--------------------------------------------------------------------------------
/deploy/quick-deploy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | TOPDIR=$(cd `dirname $0` ; pwd)
4 | export PATH=$TOPDIR:$PATH
5 |
6 | ###################################################################################
7 | # Global args
8 | ###################################################################################
9 | TARGET=
10 | BOARDS_MAC=
11 | PLATFORMS=
12 | DISTROS=
13 | CAPACITY=
14 | BIN_DIR=
15 |
16 | DEPLOY_TYPE=
17 | TARGET_DEVICE=
18 |
19 | ###################################################################################
20 | # Global vars
21 | ###################################################################################
22 | ESTUARY_LABEL="Estuary"
23 |
24 | ###################################################################################
25 | # quick_deploy_usage
26 | ###################################################################################
27 | quick_deploy_usage()
28 | {
29 | cat << EOF
30 | Usage: quick-deploy.sh --target=xxx --boardmac=xxx,xxx --platform=xxx,xxx --distros=xxx,xxx --capacity=xxx,xxx --output=xxx
31 | --target: deploy type and device (usb, iso, pxe)
32 | for usb, you can use "--target=usb:/dev/sdb" to install deploy files into /dev/sdb;
33 | for iso, you can use "--target=iso:Estuary.iso" to create Estuary.iso deploy media;
34 | for pxe, you can use "--target=pxe" to setup the pxe on the host.
35 | if the usb device is not specified, the first usb storage device will be used by default.
36 | if the iso file name is not specified, "Estuary_.iso" will be used by default.
37 | --boardmac: if you use the pxe deploy type, use "--boardmac" to specify the target board mac addresses.
38 | --platform: which platforms to deploy
39 | --distros: which distros to deploy
40 | --capacity: capacity for distros on install disk, unit GB (suggest 50GB)
41 | --binary: target binary directory
42 |
43 | Example:
44 | quick-deploy.sh --target=usb --platform=D03,D05 --distros=Ubuntu,CentOS --binary=./workspace
45 | quick-deploy.sh --target=usb:/dev/sdb --platform=D03 --distros=Ubuntu,CentOS --binary=./workspace
46 | quick-deploy.sh --target=iso --platform=D03 --distros=Ubuntu,CentOS --binary=./workspace/binary
47 | quick-deploy.sh --target=iso:Estuary_D03.iso --platform=D03 --distros=Ubuntu,CentOS --binary=./workspace
48 | quick-deploy.sh --target=pxe --boardmac=01-00-18-82-05-00-7f,01-00-18-82-05-00-68 --platform=D03 --distros=Ubuntu,CentOS --binary=./workspace
49 |
50 | EOF
51 | }
52 |
53 | ###################################################################################
54 | # get args
55 | ###################################################################################
56 | while test $# != 0
57 | do
58 | case $1 in
59 | --*=*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ; ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ;;
60 | *) ac_option=$1 ;;
61 | esac
62 |
63 | case $ac_option in
64 | --target) TARGET=$ac_optarg ;;
65 | --boardmac) BOARDS_MAC=$ac_optarg ;;
66 | --platform) PLATFORMS=$ac_optarg ;;
67 | --distros) DISTROS=$ac_optarg ;;
68 | --capacity) CAPACITY=$ac_optarg ;;
69 | --binary) BIN_DIR=$ac_optarg ;;
70 | *) echo "Error! Unknown option $ac_option!"
71 | quick_deploy_usage ; exit 1 ;;
72 | esac
73 |
74 | shift
75 | done
76 |
77 | ###################################################################################
78 | # Check args
79 | ###################################################################################
80 | if [ x"$TARGET" = x"" ] || [ x"$PLATFORMS" = x"" ] || [ x"$DISTROS" = x"" ] || [ x"$BIN_DIR" = x"" ]; then
81 | quick_deploy_usage ; exit 1
82 | fi
83 |
84 | ###################################################################################
85 | # Deploy
86 | ###################################################################################
87 | deploy_type=`echo "$TARGET" | awk -F ':' '{print $1}'`
88 | deploy_device=`echo "$TARGET" | awk -F ':' '{print $2}'`
89 | PLATFORMS=`echo $PLATFORMS |sed 's/HiKey//g'|sed 's/QEMU//g'`
90 |
91 | if [ x"$deploy_type" = x"usb" ]; then
92 | mkusbinstall.sh --target=$deploy_device --platforms=$PLATFORMS --distros=$DISTROS --capacity=$CAPACITY --bindir=$BIN_DIR || exit 1
93 | elif [ x"$deploy_type" = x"iso" ]; then
94 | mkisoimg.sh --platforms=$PLATFORMS --distros=$DISTROS --capacity=$CAPACITY --disklabel="Estuary" --bindir=$BIN_DIR || exit 1
95 | elif [ x"$deploy_type" = x"pxe" ]; then
96 | mkpxe.sh --platforms=$PLATFORMS --distros=$DISTROS --capacity=$CAPACITY --boardmac=$BOARDS_MAC --bindir=$BIN_DIR || exit 1
97 | else
98 | echo "Unknow deploy type!" >&2 ; exit 1
99 | fi
100 |
101 | ###################################################################################
102 | # Report result
103 | ###################################################################################
104 | echo "/*---------------------------------------------------------------"
105 | echo "- quick deploy, deploy type: $deploy_type, platforms: $PLATFORMS, distros: $DISTROS"
106 | echo "- target device: $deploy_device, boards mac: $BOARDS_MAC"
107 | echo "- done!"
108 | echo "---------------------------------------------------------------*/"
109 | echo ""
110 | exit 0
111 |
--------------------------------------------------------------------------------
/deploy/usb-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_usb_devices
5 | ###################################################################################
6 | get_usb_devices()
7 | {
8 | (
9 | root=$(mount | grep " / " | grep -Po "(/dev/sd[^ ])")
10 | if [ $? -ne 0 ]; then
11 | root="/dev/sdx"
12 | fi
13 |
14 | index=0
15 | usb_devices=()
16 | disk_devices=(`ls -l --color=auto /dev/disk/by-id/usb* 2>/dev/null| awk '{print $NF}' | grep -Pv "[0-9]"`)
17 | for disk in ${disk_devices[@]}; do
18 | name=${disk##*/}
19 | usb_devices[$index]=/dev/$name
20 | let index++
21 | done
22 |
23 | echo ${usb_devices[*]}
24 | )
25 | }
26 |
27 | ###################################################################################
28 | # check_usb_device
29 | ###################################################################################
30 | check_usb_device()
31 | {
32 | (
33 | usb_device=$1
34 | if [ x"$usb_device" = x"" ] || [ ! -b $usb_device ]; then
35 | echo "Device $usb_device is not exist!" ; return 1
36 | else
37 | if sudo lshw 2>/dev/null | grep "bus info: usb" -A 12 | grep "logical name: /dev/sd" | grep $usb_device >/dev/null; then
38 | return 0
39 | else
40 | echo "Device $usb_device is not an usb device!" ; return 1
41 | fi
42 | fi
43 | )
44 | }
45 |
46 | ###################################################################################
47 | # get_default_usb <__usb_device>
48 | ###################################################################################
49 | get_default_usb()
50 | {
51 | local __usb_device=$1
52 | local usb_devices=(`get_usb_devices`)
53 | local first_usb=${usb_devices[0]}
54 | if [ x"$first_usb" != x"" ]; then
55 | eval $__usb_device="'$first_usb'" ; return 0
56 | else
57 | return 1
58 | fi
59 | }
60 |
--------------------------------------------------------------------------------
/doc/Armor_Manual.4All.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Information of Supported Armor Tools ](#2)
3 | * [Distributions](#3)
4 | * [Installation](#4)
5 | * [How to run Tool's test scripts](#5)
6 |
7 | ## Introduction
8 |
9 | Armor provides a set of platform tools for debugging, diagnostics and monitoring; these tools are available in the Open-Estuary solution.
10 |
11 | Current Release Version- `armor-v1.1`
12 |
13 | ## Information of Supported Armor Tools
14 |
15 | For the supported tools in Armor on different distributions, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Armor_Tools_Supported.4All.md
16 |
17 | For the basic information on all the supported Armor tools, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Armor_Tools_Basic_Info.4All.md
18 |
19 | For documentation on L3 cache event counting support in perf, see https://github.com/open-estuary/estuary/blob/master/doc/README.armor.perf.md
20 |
21 | For documentation on using KGDB and KDB, please refer to https://github.com/open-estuary/estuary/blob/master/doc/README.armor.kgdb.kdb.md
22 |
23 | For documentation on LTTng user space tracing and kernel tracing, please refer to https://github.com/open-estuary/estuary/blob/master/doc/README.armor.lttng.md
24 |
25 | For documentation on how to verify the iptables tool, please refer to https://github.com/open-estuary/estuary/blob/master/doc/README.armor.iptables.md
26 |
27 | ## Distributions
28 |
29 | Presently Armor tools are supported on the following distributions.
30 |
31 | * Ubuntu 15.04 ARM64
32 | * Fedora 22 ARM64
33 | * OpenSuse 20150813 Tumbleweed ARM64
34 | * Debian Jessie 8.2 ARM64
35 | * CentOS Linux release 7.2.1511 (AltArch)
36 |
37 | ## Installation
38 |
39 | 1. By default, Armor tools are installed onto the rootfs during the open-estuary build process.
40 | 2. On first bootup, the user must run the update commands below before trying to install or run any Armor tool.
41 |
42 | Ubuntu: run `apt-get -y update` command.
43 | Fedora: run `dnf -y update` command.
44 | OpenSuse: run `zypper -y update` command.
45 | Debian: run `apt-get -y update`and `apt-get install -f -y` commands.
46 | CentOS: run `yum -y update` command.
47 | 3. On the target board, run `armor_utility`, which provides information about the supported Armor tools, their installation status, and how to install them on the distribution if they are not already present.
48 |
49 | ## How to run Tool's test scripts
50 |
51 | 1. Go to `/usr/local/armor/test_scripts` on the target terminal.
52 | 2. To test an individual tool, please run the following command in the shell terminal: `sh test_<tool_name>.sh` runs the test script of that armor tool.
53 | For example, 'sh test_strace.sh' runs the tests for strace. The test results can be seen on the console.
54 |
--------------------------------------------------------------------------------
/doc/Caliper_Manual.4All.md:
--------------------------------------------------------------------------------
1 | * [What is Caliper?](#1)
2 | * [Caliper Organization](#2)
3 | * [System Requirements](#3)
4 | * [Software Requirements](#3.1)
5 | * [Host Dependency](#3.1.1)
6 | * [Server Dependency](#3.1.2)
7 | * [Target Dependency](#3.1.3)
8 | * [Host OS requirements](#3.2)
9 | * [Toolchain](#3.3)
10 | * [Caliper Setup](#4)
11 | * [Download Caliper](#4.1)
12 | * [Install Caliper(optional)](#4.2)
13 | * [Caliper Configuration](#4.3)
14 | * [Target Configuration](#4.3.1)
15 | * [Mail List Configuration](#4.3.2)
16 | * [Execution Preference Configuration](#4.3.3)
17 | * [Benchmark Selection](#4.3.4)
18 | * [Benchmark Test Case Configuration](#4.3.5)
19 | * [Caliper Execution](#5)
20 | * [Caliper Output](#6)
21 | * [Caliper Architecture](#7)
22 | * [Benchmark Addition](#8)
23 | * [Test Case Definition Structure](#8.1)
24 | * [Benchmark Definition](#8.2)
25 | * [Benchmark Building](#8.3)
26 | * [Benchmark Execution Configuration](#8.4)
27 | * [Benchmark Output Parsing](#8.5)
28 | * [Computing The Score](#8.6)
29 | * [Generating Yaml File](#8.7)
30 | * [Caliper Report Generation](#9)
31 | * [List of Benchmarks Supported in Caliper](#10)
32 |
33 |
34 |
35 | For the details, please refer to http://open-estuary.org/caliper-benchmarking/
36 |
--------------------------------------------------------------------------------
/doc/Deploy_Manual.4All.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Preparation](#2)
3 | * [Prerequisite](#2.1)
4 | * [Check the hardware board](#2.2)
5 | * [Upgrade UEFI and trust firmware](#2.3)
6 | * [Bring up System](#3)
7 | * [Boot via Network](#3.1)
8 | * [Boot via ISO](#3.2)
9 |
10 |
Introduction
11 |
12 | This documentation describes how to get, build, deploy and bring up a target system based on the Estuary project; it will help you set up your Estuary environment from scratch.
13 |
14 | All of the following sections take the D05 board as an example; other boards follow similar steps. For more details on the differences between them, please refer to the Hardware Boards sections at http://open-estuary.org/hardware-boards/.
15 |
16 |
Preparation
17 |
18 |
Prerequisite
19 | ```
20 | Notice : Only serial port output
21 | ```
22 | Local network: To connect the hardware boards and the host machine, so that they can communicate with each other.
23 |
24 | Serial cable: To connect the hardware board's serial port to the host machine, so that you can access the target board's UART from the host machine.
25 |
26 | Two methods are provided to **connect the board's UART port to a host machine**:
27 |
28 | **Method 1** : connect the board's UART in openlab environment
29 |
30 | Use the `board_connect` command. (For details, please refer to `board_connect --help`.)
31 |
32 | **Method 2** : directly connect the board by UART cable
33 |
34 | a. Connect the board's UART port to a host machine with a serial cable.
35 | b. Install a serial port application on the host machine, e.g. kermit or minicom.
36 | c. Configure the serial port settings (115200/8/N/1) on the host machine, e.g. as shown below.
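
For instance, with minicom the settings above can be applied from the command line (assuming the serial adapter appears as /dev/ttyUSB0 on the host; adjust the device name to your setup):

```bash
# Open the board's UART at 115200 baud (8N1 is minicom's default framing)
sudo minicom -D /dev/ttyUSB0 -b 115200
```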
37 |
38 | We can also connect to the board using the ipmitool command:
39 |
40 | /usr/bin/ipmitool -I lanplus -H hostip -U username -P password sol activate
41 |
42 | For more details, please refer to [UEFI_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/UEFI_Manual.4D05.md)
43 | "Upgrade UEFI" chapter.
44 |
45 |
Check the hardware board
46 |
47 | The hardware board should be ready and checked carefully to make sure it is available. For more detailed information about the different hardware boards, please refer to http://open-estuary.org/d05/.
48 |
49 |
Upgrade UEFI and trust firmware
50 |
51 | You can upgrade UEFI and trust firmware yourself based on FTP service, but this is not necessary. If you really want to do it, please refer to [UEFI_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/UEFI_Manual.4D05.md).
52 |
53 |
Booting The Installer
54 |
55 |
Boot via Network
56 |
57 | a. If you are booting the installer from the network, simply select PXE boot when presented by UEFI.
58 | For PXE setup, please refer to [Setup_PXE_Env_on_Host.md](https://github.com/open-estuary/estuary/blob/master/doc/Setup_PXE_Env_on_Host.4All.md).
59 |
60 | Modify the grub config file (please refer to [Grub_Manual.4All.md](https://github.com/open-estuary/estuary/blob/master/doc/Grub_Manual.4All.md)).
61 |
62 | e.g. the grub.cfg file for official versions is modified as follows:
63 | ```
64 | # Sample GRUB configuration file
65 | # For booting GNU/Linux
66 | menuentry 'D05 Install' {
67 | set background_color=black
68 | linux /netboot/boot/aarch64/linux --- quiet
69 | initrd /netboot/boot/aarch64/initrd
70 | }
71 | ```
72 |
73 | b. In NFS boot mode, the root parameter in the grub.cfg menuentry will be set to /dev/nfs and nfsroot will be set to the path of the rootfs on the NFS server. You can use `showmount -e <nfs_server_ip>` to list the exported NFS directories on the NFS server.
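
As a hedged illustration only, an NFS-boot menuentry could look like the sketch below; the server IP, export path and kernel/initrd paths are placeholders (the /var/lib/nfsroot default is the one suggested by the PXE setup scripts), so adapt them to your own TFTP/NFS layout:

```
menuentry 'D05 NFS boot' {
    linux /netboot/boot/aarch64/linux root=/dev/nfs nfsroot=192.168.1.10:/var/lib/nfsroot/ubuntu rw ip=dhcp
    initrd /netboot/boot/aarch64/initrd
}
```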
74 |
75 | D05 supports booting via NFS; you can try it with the following steps:
76 |
77 | 1. Enable DHCP, TFTP and NFS service according to [Setup_PXE_Env_on_Host.md](https://github.com/open-estuary/estuary/blob/master/doc/Setup_PXE_Env_on_Host.4All.md).
78 |
79 | 2. Get and config grub file to support NFS boot according to [Grub_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Grub_Manual.4All.md).
80 |
81 | Note: D05 only supports booting via ACPI mode with the CentOS distribution, so please refer to Grub_Manual.md to get the correct configuration.
82 |
83 | 3. Reboot D05 and press "F2" or "Esc" to enter UEFI Boot Menu
84 |
85 | 4. Select the "Boot Manager"->"UEFI PXEv4 ``" boot option to enter.
86 |
87 | Note:
88 | If you are connecting the D05 board of openlab, please select "UEFI PXEv4 (MAC:000108001540)". The value of `` depends on which D05 GE port is connected.
89 |
90 |
Boot via ISO
91 | In case you are booting with the minimal ISO via SATA / SAS / SSD, simply select the right boot option in UEFI.
92 |
93 | At this stage you should be able to see the grub menu of Debian's installer, like:
94 |
95 | ```
96 | Install
97 | Advanced options ...
98 | Install with speech synthesis
99 | .
100 |
101 | Use the down and up keys to change the selection.
102 | Press 'e' to edit the selected item, or 'c' for a command prompt.
103 | ```
104 | Now just hit enter and wait for the kernel and initrd to load, which automatically loads the installer and provides you the installer console menu, so you can finally install.
105 |
106 | After finishing the installation:
107 | a. Reboot and press "F2" or "Esc" to enter the UEFI main menu.
108 | b. Select "Boot Manager" -> "UEFI Misc Device 1" -> to enter the grub selection menu.
109 | c. Press the up or down arrow keys to select the grub boot option that decides which distribution to boot.
110 |
111 |
112 |
--------------------------------------------------------------------------------
/doc/Distributions_Guide.4All.md:
--------------------------------------------------------------------------------
1 | This is the guide for distributions
2 |
3 | A distribution here means a complete rootfs for the Linux kernel.
4 | After you run `./estuary/build.sh -d <distro> -p <platform>`, the corresponding distribution tarball will be created in `/build//distro`.
5 | By default, it is named `<distro>_<arch>.tar.gz`, and the default username and password are "root,root".
6 |
7 | You can use the following commands to uncompress the tarball onto any particular block device partition, e.g. a SATA, SAS or USB disk.
8 | ```bash
9 | mkdir tempdir
10 | sudo mount /dev/<partition> tempdir
11 | sudo tar -xzf <distro>_<arch>.tar.gz -C tempdir
12 | ```
13 | You can also simply uncompress the tarball into a specific directory as follows, and use that directory as the NFS rootfs.
14 | ```bash
15 | mkdir nfsdir
16 | sudo tar -xzf <distro>_<arch>.tar.gz -C nfsdir
17 | ```
18 | If you want to produce a rootfs image file for QEMU, you can try as below:
19 | ```bash
20 | mkdir tempdir
21 | sudo tar -xzf <distro>_<arch>.tar.gz -C tempdir
22 |
23 | pushd tempdir
24 | dd if=/dev/zero of=../rootfs.img bs=1M count=10240
25 | mkfs.ext4 ../rootfs.img -F
26 | mkdir -p ../mntdir 2>/dev/null
27 | sudo mount ../rootfs.img ../mntdir
28 | sudo cp -a * ../mntdir/
29 | sudo umount ../mntdir
30 | rm -rf ../mntdir
31 | popd
32 | ```
33 | Then you will get the rootfs image file as rootfs.img
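
As a hedged illustration of using the image (the kernel file name, CPU model and memory size here are assumptions, not values defined by this guide):

```bash
# Boot rootfs.img under QEMU; adjust -kernel, -cpu and -m to your setup
qemu-system-aarch64 -machine virt -cpu cortex-a57 -m 2048 -nographic \
    -kernel Image \
    -drive if=none,file=rootfs.img,format=raw,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -append "root=/dev/vda rw console=ttyAMA0"
```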
34 |
35 | **You can also download the above distributions manually with the following commands**.
36 |
37 | All validated distributions can be obtained in [HTTP Download](http://download.open-estuary.org/)/[release|pre-releases]/``/linux/``/.
38 |
39 | e.g.:
40 | You can get the Ubuntu distribution of the Estuary v2.2-rc1 pre-release as follows:
41 | `wget -c http://download.open-estuary.org/AllDownloads/DownloadsEstuary/pre-releases/2.2/rc1/linux/Ubuntu/Common/Ubuntu_ARM64.tar.gz`
42 |
43 | For other distributions, please refer to the description above.
44 |
45 | **And all original distributions can be obtained with the following commands**:
46 |
47 | Ubuntu:
48 | `wget -c https://cloud-images.ubuntu.com/vivid/current/vivid-server-cloudimg-arm64.tar.gz`
49 |
50 | OpenSUSE:
51 | `wget -c http://download.opensuse.org/ports/aarch64/distribution/13.2/appliances/openSUSE-13.2-ARM-JeOS.aarch64-rootfs.aarch64-Current.tbz`
52 |
53 | Fedora:
54 | `wget -c http://dmarlin.fedorapeople.org/fedora-arm/aarch64/F21-20140407-foundation-v8.tar.xz`
55 |
56 | Redhat: TBD
57 |
58 | Debian:
59 | `wget -c http://people.debian.org/~wookey/bootstrap/rootfs/debian-unstable-arm64.tar.gz`
60 |
61 | OpenEmbedded:
62 | `wget -c http://releases.linaro.org/14.06/openembedded/aarch64/vexpress64-openembedded_minimal-armv8-gcc-4.8_20140623-668.img.gz`
63 |
64 | For more details about how to deploy the target system onto a target board, please refer to the Deploy_Manual document.
65 |
66 |
--------------------------------------------------------------------------------
/doc/Install_Guide_Malluma_linux.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-estuary/estuary/e80503cb1b2dd76c5292add4288a10b04919da65/doc/Install_Guide_Malluma_linux.docx
--------------------------------------------------------------------------------
/doc/Introduction_for_Docker.md:
--------------------------------------------------------------------------------
1 | What is Docker?
2 |
3 | Docker is an open-source project that automates the deployment of applications inside software containers. In detail, Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries - anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. Click on the link https://www.docker.com for more information.
4 |
5 | The Docker organization offers an official registry called Docker Hub for users to pull and push images. For details please visit the link https://hub.docker.com. The Docker images from open-estuary are all stored there; you can visit the link https://hub.docker.com/u/openestuary.
6 |
--------------------------------------------------------------------------------
/doc/Introduction_for_LAMP_with_Docker.md:
--------------------------------------------------------------------------------
1 | How to Build LAMP Service with Docker Images?
2 |
3 | This doc describes how to build a LAMP service by using Docker images.
4 |
5 | LAMP is an archetypal model of web service solution stacks, which contains four parts: the Linux operating system, the Apache HTTP Server, MySQL, and the PHP programming language. Docker helps developers focus on the web project, not the running environment.
6 |
7 | Two Docker images, openestuary/mysql and openestuary/apache, have been developed and are stored in Docker Hub. How to build the LAMP service is shown as follows:
8 |
9 | You are supposed to have installed Docker successfully and enabled the Docker service. Then you should pull the related Docker images, which will take some time; please be patient. Just type as follows:
10 | ```bash
11 | $ docker pull openestuary/apache
12 | $ docker pull openestuary/mysql
13 | ```
14 |
15 | After finishing the above operations, you can start Docker containers by using the pulled images. Just type as follows:
16 | ```bash
17 | $ docker run -d -p 32775:80 --name apache -v /x/xx:/var/www/html openestuary/apache
18 | $ docker run -d -p 32776:3306 --name mysql openestuary/mysql
19 | ```
20 | Using the two containers named apache and mysql, you only need the source code of the web project. Suppose the web project is stored under /x/xx; we can use the "-v" option to mount the local files into the specified path inside the container. The "-p" option maps a host port (check that it is free first) to the default port 80 of the apache service. The same applies to mysql when mapping a local port to the default port 3306 of the mysql service.
21 |
22 | The mysql container uses the default username "mysql" and password "123456". Of course, you can change the configuration of mysql with the following command:
23 | ```bash
24 | $ docker run -d -P --name mysql \
25 | -e MYSQL_USER=xxx \
26 | -e MYSQL_PASSWORD=xxx \
27 | -e MYSQL_DATABASE=xxx \
28 | openestuary/mysql
29 | ```
30 | To make it more concrete, the use of a PHP page with a mysql connection will be demonstrated. If the PHP page displays normally, the two images are proven to work well. Suppose the local IP is 192.168.1.220. A minimal index.php that connects to mysql and prints "hello world!" might look like the following:
31 | ```php
<?php
// Connect to the mysql container through the host port mapped to 3306 (see the docker run command above)
$conn = new mysqli("192.168.1.220", "mysql", "123456", "", 32776);
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "hello world!";
$conn->close();
?>
42 | ```
43 | Everything goes well if you see the output "hello world!" at http://192.168.1.220:32775/index.php. Please enjoy the LAMP service offered by the two containers named mysql and apache.
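
For a quick check from the command line, the same URL can be queried with curl:

```bash
# Expect "hello world!" if both containers are wired up correctly
curl http://192.168.1.220:32775/index.php
```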
44 |
--------------------------------------------------------------------------------
/doc/Malluma_UserGuide.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-estuary/estuary/e80503cb1b2dd76c5292add4288a10b04919da65/doc/Malluma_UserGuide.pdf
--------------------------------------------------------------------------------
/doc/Malluma_manual.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Install guide](#2)
3 | * [User guide](#3)
4 | * [Uninstall guide](#4)
5 |
6 |
Introduction
7 |
8 | - The malluma software is a performance tuning tool that can sample CPU data, including hardware and software events via the PMU, as well as scheduling, I/O, network, and LLC/DDR data.
9 | - It can only sample data on ARM environments. It helps you find your system's bottleneck when you run your program, whether that bottleneck is memory, I/O, LLC/DDR or scheduling.
10 |
11 |
Install guide
12 |
13 | First, it is necessary to set up the repository. Please refer to the CentOS repo section of [https://github.com/open-estuary/distro-repo/blob/master/README.md](https://github.com/open-estuary/distro-repo/blob/master/README.md).
14 |
15 | Second, you should install malluma's rpm package. You can use the following command:
16 | ```
17 | yum install Malluma.aarch64
18 | ```
19 |
20 | Third, you should install the malluma software; the specific steps are as follows:
21 |
22 |
23 | cd /opt/Malluma-2.0
24 | yum install -y strace # install strace as a dependency; this step will not be needed in the next malluma version
25 | sh ./install.sh --environment # install the external environment
26 | sh ./install.sh --check # check whether the environment is ready
27 | sh ./install.sh --all # the actual installation
28 | Note: you should guarantee that the installation directory has more than 50G of free space.
29 |
- Question 1:
34 | Error info:
35 | cc:error: ../deps/hiredis/libhiredis.a: no such file or directory
36 | cc:error: ../deps/lua/src/liblua.a: no such file or directory
37 | cc:error: ../deps/geohash-int/geohash.o: no such file or directory
38 | ...
39 | Solution:
40 | cd /usr/local/src/redias/deps
41 | make geohash-int hiredis jemalloc linenoise lua
42 |
43 | - Question 2:
44 |
45 | When you run the third step "sh ./install.sh --check", you may see errors such as "./install.sh:line 475: 4*100+16*10+:syntax error:operand expected (error token is '+')".
46 | You can ignore it; this type of error does not affect the normal functioning of Malluma.
47 |
48 |
49 |
User guide
50 |
51 | For example, suppose you have installed malluma on a machine whose IP is 192.168.1.100.
52 | You can visit 192.168.1.100 with a browser (***only Google Chrome***) to use malluma's functions. When you visit it, you will be asked for a user/password. The default username is "admin", and the default password is "Admin12#$". [Detailed info is here.](https://github.com/open-estuary/estuary/blob/master/doc/Malluma_UserGuide.pdf)
53 |
54 |
Uninstall guide
55 |
56 | First, you should enter malluma's installation directory, then execute the following command.
57 |
58 | sh uninstall.sh
59 |
60 | Second, you should remove malluma's rpm package, for example as shown below.
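
Assuming the same package name used in the install step above, this would be something like:

```bash
yum remove Malluma.aarch64
```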
61 |
8 |
9 | According to the [EstuaryCfg.json](https://github.com/open-estuary/estuary/blob/master/estuarycfg.json) configuration,
10 | packages can be integrated into Estuary accordingly.
11 |
12 |
Packages Installation
13 |
14 | Typically, packages can be installed in two ways:
15 | - RPM/Deb Repositories:
16 | - As for how to install rpm/deb packages, please refer to [Open-Estuary Repository README](https://github.com/open-estuary/distro-repo/blob/master/README.md)
17 | - Docker Images:
18 | - Use `docker pull openestuary/` to install the corresponding docker images (see the example after this list). For more information, please refer to the corresponding manuals mentioned below.
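
For instance, pulling and starting the MySQL image named in the LAMP guide would look like this (image name and port mapping taken from that guide):

```bash
# Pull the openestuary MySQL image and run it, mapping host port 32776 to the container's 3306
docker pull openestuary/mysql
docker run -d -p 32776:3306 --name mysql openestuary/mysql
```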
19 |
20 |
Application Integration and Optimization
21 |
22 |
23 | For details, please refer to [Estuary Application Integration and Optimization](https://github.com/open-estuary/packages).
24 |
25 |
26 |
Others
27 |
28 | - The yum/deb repositories will be officially supported from the Estuary V500 release onwards
29 | - If you come across any issue (such as supporting new packages on the ARM64 platform, enhancing package performance on ARM64 platforms, and so on) while using the above packages, please feel free to contact us in any of the following ways:
30 | - Visit the www.open-estuary.com website and submit a bug
31 | - Report an issue on this GitHub issue page
32 | - Email sjtuhjh@hotmail.com
33 |
--------------------------------------------------------------------------------
/doc/Quick_Deployment.4D03.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Quick Deploy System](#2)
3 | * [Deploy system via USB install Disk](#2.1)
4 | * [Deploy system via DVD/BMC-website](#2.2)
5 | * [Deploy system via PXE](#2.3)
6 | * [View serial info via VGA/ipmitool/BMC-website](#3)
7 |
8 | ## Introduction
9 |
10 | Above all, prepare hardware boards with a SCSI disk and download the Estuary source code from GitHub.
11 | To learn more about how to do this, please visit this web site: .
12 | If you just want to quickly try the binary files, please refer to our binary Download Page to get the latest binaries and documentation for each corresponding board.
13 |
14 | Accessing from China:
15 |
16 | Accessing from outside-China:
17 | ## Quick Deploy System
18 |
19 | ### Deploy system via USB install Disk
20 |
21 | 1. Connect a usb disk to your pc to prepare the usb install disk. We provide two methods to make a USB install disk:
22 | **Method 1:**
23 | * Modify `estuary/estuarycfg.json`. Make sure the platform, distros are all right.
24 | * Change the value of "install" to "yes" in object "setup" for usb and the value "device" to your USB install disk.
25 | (Notice: if the specified usb device does not exist, the first usb device will be selected by default.)
26 | * Use `build.sh` to create the usb install disk.
27 | eg: `./estuary/build.sh -f estuary/estuarycfg.json`
28 |
29 | **Method 2:**
30 | * Download mkdeploydisk.sh from website:
31 | * Execute the following command with sudo permission to make usb installing disk `sudo ./mkdeploydisk.sh`. Please specify your disk with `--target=/dev/sdx` if more than one USB disk connected to your computer. If not specified, the first detected usb device will be used.
32 | * Follow the prompts to make the usb install disk.
33 | 2. After you have made the usb install disk, please connect it to the target board.
34 | 3. Reboot the board.
35 | 4. Select "EFI USB Device" at UEFI menu.
36 | 5. Follow the prompts to deploy the system.
37 | 6. Reboot the board to enter the system you deployed by default.
38 |
39 | ### Deploy system via DVD/BMC-website
40 |
41 | Download the ISO file from the download website (see the Introduction above).
42 | **Deploy system via DVD**
43 | * Burn the iso image file to DVD disk if you use the physical DVD driver.
44 | * Connect the physical DVD driver to the board, plug in the DVD install disk.
45 | * Reboot the board.
46 | * Select the uefi menu to boot from the DVD device.
47 | * Follow the prompts to deploy the system.
48 | * Reboot the boards to enter the system you deployed by default.
49 |
50 | **Deploy the system via BMC-website**
51 | * Log in to the board's BMC web interface (e.g. https://192.168.2.100) with a browser (the IE browser is suggested). The `username` & `password` are `root` & `Huawei12#$`.
52 | * Click "Remote" at the top of the BMC website. Select "Remote Virtual Console (Shared Mode)" to enter the KVM interface. Click "Image File" and choose the iso image, then click the "Connect" button.
53 | * Click "Config" on the top of BMC website, click "Boot Option" to select "DVD-ROM drive", then click "Save" button.
54 | * Reboot the board from the BMC website.
55 | * Follow the prompts on the BMC website to deploy the system.
56 | * Reboot the boards to enter the system you deployed by default.
57 |
58 | ### Deploy system via PXE
59 |
60 | 1. Connect Ubuntu PC and hardware boards into the same local area network. (Make sure the PC can connect to the internet and no other PXE servers exist.)
61 | 2. Modify the configuration file of `estuary/estuarycfg.json` based on your hardware boards. Change the values of mac to physical addresses of the connected network cards on the board. Change the value of "install" to "yes" in object "setup" for PXE.
62 | 3. Backup files under the tftp root directory if necessary. Use `build.sh` to build project and setup the PXE server on Ubuntu PC.
63 | eg: `./estuary/build.sh -f estuary/estuarycfg.json`
64 | 4. After that, connect the hardware boards by using BMC ports.
65 | 5. Reboot the hardware boards and start the boards from the correct EFI Network.
66 | 6. Install the system according to the prompts. After the installation finishes, the boards will restart automatically.
67 | 7. Start the boards from "grub" menu of UEFI by default.
68 |
69 | ### View serial info via VGA/ipmitool/BMC-website
70 |
71 | **View serial info via VGA**
72 | * Connect board's vga port to your local pc and connect a keyboard to the board's usb interface.
73 | * Connect usb install disk to your board and deploy the system
74 | * Reboot the system and you can see the serial info via VGA
75 |
76 | **View serial info via ipmitool**
77 | * Install ipmitool on your local pc by "sudo apt-get install ipmitool"
78 | * Use command `ipmitool -H -I lanplus -U root -P Huawei12#$ sol activate` to see the booting log
79 |
80 | **View serial info via BMC-website**
81 | * When you load the iso from the BMC website, you can see the booting process via the BMC website directly.
82 |
83 |
--------------------------------------------------------------------------------
/doc/Quick_Deployment.4D05.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Quick Deploy System](#2)
3 | * [Deploy system via USB install Disk](#2.1)
4 | * [Deploy system via DVD/BMC-website](#2.2)
5 | * [Deploy system via PXE](#2.3)
6 | * [View serial info via VGA/ipmitool/BMC-website](#3)
7 |
8 | ## Introduction
9 |
10 | Above all, prepare hardware boards with a SCSI disk and download the Estuary source code from GitHub.
11 | To learn more about how to do this, please visit this web site: .
12 | If you just want to quickly try the binary files, please refer to our binary Download Page to get the latest binaries and documentation for each corresponding board.
13 |
14 | Accessing from China:
15 |
16 | Accessing from outside-China:
17 | ## Quick Deploy System
18 |
19 | ### Deploy system via USB install Disk
20 |
21 | 1. Connect a usb disk to your pc to prepare the usb install disk. We provide two methods to make a USB install disk:
22 | **Method 1:**
23 | * Modify `estuary/estuarycfg.json`. Make sure the platform, distros are all right.
24 | * Change the value of "install" to "yes" in object "setup" for usb and the value "device" to your USB install disk.
25 | (Notice: if the specified usb device does not exist, the first usb device will be selected by default.)
26 | * Use `build.sh` to create the usb install disk.
27 | eg: `./estuary/build.sh -f estuary/estuarycfg.json`
28 |
29 | **Method 2:**
30 | * Download mkdeploydisk.sh from website:
31 | * Execute the following command with sudo permission to make usb installing disk `sudo ./mkdeploydisk.sh`. Please specify your disk with `--target=/dev/sdx` if more than one USB disk connected to your computer. If not specified, the first detected usb device will be used.
32 | * Follow the prompts to make the usb install disk.
33 | 2. After you have made the usb install disk, please connect it to the target board.
34 | 3. Reboot the board.
35 | 4. Select "EFI USB Device" at UEFI menu.
36 | 5. Follow the prompts to deploy the system.
37 | 6. Reboot the board to enter the system you deployed by default.
38 |
39 | ### Deploy system via DVD/BMC-website
40 |
41 | Download the ISO file from the download website (see the Introduction above).
42 | **Deploy system via DVD**
43 | * Burn the iso image file to DVD disk if you use the physical DVD driver.
44 | * Connect the physical DVD driver to the board, plug in the DVD install disk.
45 | * Reboot the board.
46 | * Select the uefi menu to boot from the DVD device.
47 | * Follow the prompts to deploy the system.
48 | * Reboot the boards to enter the system you deployed by default.
49 |
50 | **Deploy the system via BMC-website**
51 | * Log in to the board's BMC web interface (e.g. https://192.168.2.100) with a browser (the IE browser is suggested). The `username` & `password` are `root` & `Huawei12#$`.
52 | * Click "Remote" at the top of the BMC website. Select "Remote Virtual Console (Shared Mode)" to enter the KVM interface. Click "Image File" and choose the iso image, then click the "Connect" button.
53 | * Click "Config" on the top of BMC website, click "Boot Option" to select "DVD-ROM drive", then click "Save" button.
54 | * Reboot the board from the BMC website.
55 | * Follow the prompts on the BMC website to deploy the system.
56 | * Reboot the boards to enter the system you deployed by default.
57 |
58 | ### Deploy system via PXE
59 |
60 | 1. Connect Ubuntu PC and hardware boards into the same local area network. (Make sure the PC can connect to the internet and no other PXE servers exist.)
61 | 2. Modify the configuration file of `estuary/estuarycfg.json` based on your hardware boards. Change the values of mac to physical addresses of the connected network cards on the board. Change the value of "install" to "yes" in object "setup" for PXE.
62 | 3. Backup files under the tftp root directory if necessary. Use `build.sh` to build project and setup the PXE server on Ubuntu PC.
63 | eg: `./estuary/build.sh -f estuary/estuarycfg.json`
64 | 4. After that, connect the hardware boards by using BMC ports.
65 | 5. Reboot the hardware boards and start the boards from the correct EFI Network.
66 | 6. Install the system according to the prompts. After the installation finishes, the boards will restart automatically.
67 | 7. Start the boards from "grub" menu of UEFI by default.
68 |
69 | ### View serial info via VGA/ipmitool/BMC-website
70 |
71 | **View serial info via VGA**
72 | * Connect board's vga port to your local pc and connect a keyboard to the board's usb interface.
73 | * Connect usb install disk to your board and deploy the system
74 | * Reboot the system and you can see the serial info via VGA
75 |
76 | **View serial info via ipmitool**
77 | * Install ipmitool on your local PC with `sudo apt-get install ipmitool`.
78 | * Use the command `ipmitool -H <BMC IP> -I lanplus -U root -P Huawei12#$ sol activate` to see the booting log (see the example below).
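For example, with the default BMC address used elsewhere in this document (replace it with your board's BMC IP), quoting the password prevents the shell from interpreting the `$` character:

```bash
ipmitool -H 192.168.2.100 -I lanplus -U root -P 'Huawei12#$' sol activate
# close the serial-over-LAN session when finished
ipmitool -H 192.168.2.100 -I lanplus -U root -P 'Huawei12#$' sol deactivate
```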
79 |
80 | **View serial info via BMC-website**
81 | * When you load the ISO from the BMC website, you can watch the boot process directly on the BMC website.
82 |
83 |
--------------------------------------------------------------------------------
/doc/README.armor.iptables.md:
--------------------------------------------------------------------------------
1 | How to verify iptables tool
2 |
3 | **On D02**
4 |
5 | `iptables -A INPUT -s <172.18.45.13> -j REJECT` -> set the rule to reject the packets
6 |
7 | `iptables -L` -> shows the added rule
8 |
9 | **Where**
10 |
11 | `172.18.45.60` - ip address of D02 board and
12 |
13 | `172.18.45.13` - ip address of the PC.
14 |
15 | **On PC**
16 |
17 | `ping <172.18.45.60>` -> unreachable as packets are rejected based on above iptables rule.
18 |
19 | **On D02**
20 |
21 | `iptables -D INPUT 1` -> delete the rule.
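To confirm which index to pass to `-D`, the standard `--line-numbers` option can be used as an extra check (not part of the original sequence):

`iptables -L INPUT --line-numbers` -> lists the INPUT rules together with their index numbers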
22 |
23 | `iptables -A INPUT -s <172.18.45.13> -j ACCEPT` -> set the rule to accept the packets
24 |
25 | `iptables -L` -> shows the added rule
26 |
27 | **On PC**
28 |
29 | `ping <172.18.45.60>` -> works, as packets are accepted.
30 |
--------------------------------------------------------------------------------
/doc/README.armor.kgdb.kdb.md:
--------------------------------------------------------------------------------
1 | * [Readme for KGDB and KDB Tools](#1)
2 | * [Debugging using KGDB](#2)
3 |
4 | ## Readme for KGDB and KDB Tools
5 |
6 | 1. Enable the following configurations in the open-estuary kernel defconfig file, if not already done.
7 | ```bash
8 | CONFIG_HAVE_ARCH_KGDB=y
9 | CONFIG_KGDB=y
10 | CONFIG_KGDB_SERIAL_CONSOLE=y
11 | CONFIG_KGDB_TESTS=y
12 | CONFIG_KGDB_KDB=y
13 | CONFIG_KDB_DEFAULT_ENABLE=0x1
14 | CONFIG_KDB_KEYBOARD=y
15 | CONFIG_KDB_CONTINUE_CATASTROPHIC=0
16 | CONFIG_MAGIC_SYSRQ=y
17 | ```
18 | 2. Build kernel.
19 |
20 | 3. Set Kernel Debugger Boot Arguments (kgdboc=ttyS0,115200) in the grub.conf file as given in the following example.
21 | ```bash
22 | menuentry "ubuntu64-nfs" --id ubuntu64-nfs {
23 | set root=(tftp,192.168.0.3)
24 | linux /ftp-X/Image rdinit=/init console=ttyS0,115200 kgdboc=ttyS0,115200 earlycon=uart8250,mmio32,0x80300000 ...
25 | devicetree /ftp-X/hip05-d02.dtb
26 | }
27 | ```
28 |
29 | 4. Boot the target D02 board with the above grub.conf and the kernel image built in step `2`.
30 |
31 | 5. On D02, enter the kernel debugger (kdb) manually, or by waiting for an oops or fault.
32 | There are several ways to enter the kernel debugger manually; all involve using sysrq-g. When logged in as root or within a super user session you can run:
33 | `echo g > /proc/sysrq-trigger`
34 | This will enter the kdb terminal; you can then continue kernel debugging using kdb commands.
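If the trigger above has no effect, check that magic SysRq is allowed at runtime; this is a generic kernel knob rather than an Estuary-specific step (the value 1 enables all SysRq functions):
```bash
echo 1 > /proc/sys/kernel/sysrq
```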
35 |
36 | ## Debugging using KGDB
37 |
38 | 6. In the D02 kdb terminal, type the 'kgdb' command. Then attach a debugger from the host machine to debug it remotely using gdb.
39 |
40 | 7. On the host machine, install gdb-multiarch:
41 | `sudo apt-get install gdb-multiarch`
42 |
43 | 8. Run the following command on the host machine: `sudo gdb-multiarch ./vmlinux`,
44 | where vmlinux is the uncompressed kernel image built in step (2).
45 |
46 | 9. On the gdb terminal, run the following commands:
47 | ```bash
48 | (gdb) set remote interrupt-sequence BREAK-g   # optional; accepted values are Ctrl-C, BREAK, BREAK-g
49 | (gdb) set serial baud 115200                  # set the serial baud rate
50 | (gdb) set debug remote 1                      # optional, prints more gdb remote debug messages
51 | (gdb) target remote /dev/ttyUSB1              # connect to kgdb over the serial port /dev/ttyUSB1 on the host machine
52 | ```
54 | Remote debugging using `/dev/ttyUSB1`
55 | Sending packet: `$qSupported:multiprocess+;xmlRegisters=i386;qRelocInsn+#b5...Ack`
56 | ..........
57 | Sending packet: `$g#67...Ack`
58 | Packet received:
59 | ```bash
60 | 801b0901c0ffffff01000000000000000000000000000000881b0901c0ffffff1600000000000000ff01000000000000545d5900c0ffffffa066f200c0ffffff5b01000000000000000000000000000006000000000000000090f400c0ffffff06000000000000003000000000000000ffffffffffffff0f1100000000000000010000000000000000000000000000002f33ad44000000003035f500c0ffffff801b0901c0ffffff801b0901c0ffffff801b0901c0ffffffd0d3fc00c0ffffff000000000000000000a6e800c0ffffff70e2e200c0ffffff00e00501c0ffffffa0a5e800c0ffffff207ddcf6d1ffffff08e11500c0ffffff207ddcf6d1ffffffbccc1500c0ffffff450000600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
61 | ```
62 | Sending packet: `$mffffffc00015ccbc,4#15...Ack`
63 | Packet received: `208020d4`
64 | arch_kgdb_breakpoint () at `./arch/arm64/include/asm/kgdb.h:32`
65 | `32 ./arch/arm64/include/asm/kgdb.h`: No such file or directory.
66 | Sending packet: `$qSymbol::#5b...Ack`
67 | Packet received:
68 | 10. Once gdb has successfully connected to the remote kgdb on D02, the kernel can be debugged like a user-space application using normal gdb commands.
69 |
--------------------------------------------------------------------------------
/doc/README.armor.lttng.md:
--------------------------------------------------------------------------------
1 | * [LTTNG](#1)
2 | * [Ubuntu](#2)
3 | * [Fedora](#3)
4 | * [OpenSuse](#4)
5 |
6 | ## LTTNG
7 |
8 | Installing and building the lttng kernel modules (lttng-modules-dkms) with apt-get does not work on the Estuary Ubuntu platform, so the lttng module is built from source code and installed into the rootfs.
9 | The lttng kernel module source that is known to work is lttng-modules-2.6.4.tar.bz2 (ubuntu version), downloaded from https://lttng.org/download/#build-from-source
10 |
11 | ## Ubuntu
12 |
13 | - LTTNG user space packages are available to install in Ubuntu distribution.
14 |
15 | ```bash
16 | apt-get install -y lttng-tools
17 | apt-get install -y liblttng-ust-dev
18 | ```
19 | - The armor-postinstall.sh script installs the lttng user space packages on first boot-up (see the example session below).
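Once the packages above are installed, a minimal kernel-tracing session looks like the sketch below; the session name and event are illustrative, and kernel tracing requires the lttng kernel modules mentioned earlier plus root permissions:
```bash
lttng create demo-session                   # create a tracing session
lttng enable-event --kernel sched_switch    # trace scheduler context switches
lttng start
sleep 5                                     # let some activity happen
lttng stop
lttng view                                  # print the recorded events
lttng destroy demo-session
```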
20 |
21 |
22 | ## Fedora
23 |
24 | - LTTNG user space packages are available to install in Fedora distribution.
25 | ```bash
26 | dnf install -y lttng-tools.aarch64
27 | dnf install -y lttng-ust.aarch64
28 | dnf install -y babeltrace.aarch64
29 | ```
30 | - The armor-postinstall.sh script installs the lttng user space packages on first boot-up.
31 |
32 | ## OpenSuse
33 |
34 | LTTNG packages are not available in the OpenSuse distribution.
35 | Therefore lttng-tools and lttng-ust have to be built natively on the target board from source code and installed
36 | using the build script `/usr/local/armor/build_scripts/build_lttng_tools_opensuse.sh` in the rootfs.
39 |
40 |
--------------------------------------------------------------------------------
/doc/README.armor.perf.md:
--------------------------------------------------------------------------------
1 | **Hisilicon SoC PMU (Performance Monitoring Unit)**
2 | =================================================
3 | The Hisilicon SoC hip05/06/07 chips consist of various independent system
4 | device PMU's such as L3 cache(L3C), Miscellaneous Nodes(MN) and DDR
5 | controllers. These PMU devices are independent and have hardware logic to
6 | gather statistics and performance information.
7 |
8 | Hip0x chips are composed of multiple CPU and IO dies. The CPU die is
9 | called a Super CPU cluster (SCCL) and includes 16 CPU cores. Every SCCL
10 | is further grouped into CPU clusters (CCLs) of 4 CPU cores each.
11 | Each SCCL has 1 L3 cache and 1 MN unit.
12 |
13 | The L3 cache is shared by all CPU cores in a CPU die. The L3C has four banks
14 | (or instances). Each bank or instance of L3C has Eight 32-bit counter
15 | registers and also event control registers. The hip05/06 chip L3 cache has
16 | 22 statistics events. The hip07 chip has 66 statistics events. These events
17 | are very useful for debugging.
18 |
19 | The MN module is also shared by all CPU cores in a CPU die. It receives
20 | barrier and DVM (Distributed Virtual Memory) messages from the CPU or SMMU,
21 | performs the required actions and returns response messages. These events are
22 | very useful for debugging. The MN has a total of 9 statistics events and supports
23 | four 32-bit counter registers in the hip05/06/07 chips.
24 |
25 | The DDR controller supports various statistics events. Every SCCL has 2 DDR
26 | channels and hence 2 DDR controllers. The hip05/06/07 has support for a
27 | total of 13 statistics events.
28 |
29 | There is no memory mapping for the L3 cache and MN registers; they can be accessed
30 | through the Hisilicon djtag interface. The djtag in a SCCL is an independent
31 | module which connects to other modules in the SoC via the Debug Bus.
32 |
33 | **Hisilicon SoC (hip05/06/07) PMU driver**
34 | --------------------------------------
35 | The hip0x PMU driver shall register perf PMU drivers like L3 cache, MN, DDRC
36 | etc.
37 | The available events and configuration options shall be described in the sysfs.
38 | The "perf list" shall list the available events from sysfs.
39 |
40 | The L3 cache in a SCCL is divided into 4 banks. Each L3 cache bank has separate
41 | PMU registers for event counting and control, so each L3 cache bank is
42 | registered with perf as a separate PMU.
43 | The PMU name will appear in the event listing as hisi_l3c<bank-id>_<sccl-id>,
44 | where "bank-id" is the bank index (0 to 3) and "sccl-id" is the SCCL identifier,
45 | e.g. hisi_l3c0_2/read_hit is the READ_HIT event of L3 cache bank #0 in SCCL #2.
46 |
47 | The MN in a SCCL is registered as a separate PMU with perf.
48 | The PMU name will appear in the event listing as hisi_mn_<sccl-id>,
49 | e.g. hisi_mn_2/read_req is the READ_REQUEST event of the MN of Super CPU cluster #2.
50 |
51 | For the DDR controller, a separate PMU is registered for each channel in a
52 | Super CPU cluster.
53 | The PMU name will appear in the event listing as hisi_ddrc<ch-id>_<sccl-id>,
54 | where "ch-id" is the channel identifier,
55 | e.g. hisi_ddrc0_2/flux_read/ is the FLUX_READ event of DDRC channel #0 in
56 | Super CPU cluster #2.
57 |
58 | The event code is represented by 12 bits:
59 | i) event bits 0-11
60 | The event code is encoded in the 12 least significant bits.
61 |
62 | The driver also provides a "cpumask" sysfs attribute, which shows the CPU core
63 | ID used to count the uncore PMU event.
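For example, the cpumask can be read through the standard perf event_source sysfs tree; the PMU name below follows the naming scheme described above and is only illustrative:
```bash
cat /sys/bus/event_source/devices/hisi_l3c0_2/cpumask
```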
64 |
65 | ```bash
66 | Example usage of perf:
67 | $# perf list
68 | hisi_l3c0_2/read_hit/ [kernel PMU event]
69 | ------------------------------------------
70 | hisi_l3c1_2/write_hit/ [kernel PMU event]
71 | ------------------------------------------
72 | hisi_l3c0_1/read_hit/ [kernel PMU event]
73 | ------------------------------------------
74 | hisi_l3c0_1/write_hit/ [kernel PMU event]
75 | ------------------------------------------
76 | hisi_mn_2/read_req/ [kernel PMU event]
77 | hisi_mn_2/write_req/ [kernel PMU event]
78 | ------------------------------------------
79 | hisi_ddrc0_2/flux_read/ [kernel PMU event]
80 | ------------------------------------------
81 | hisi_ddrc1_2/flux_read/ [kernel PMU event]
82 | ------------------------------------------
83 |
84 | `$# perf stat -a -e "hisi_l3c0_2/read_allocate/" sleep 5`
85 |
86 | `$# perf stat -A -C 0 -e "hisi_l3c0_2/read_allocate/" sleep 5`
87 | ```
88 |
89 | The current driver does not support sampling, so "perf record" is unsupported.
90 | Attaching to a task is also unsupported, as the events are all uncore.
91 |
92 | Note: Please contact the maintainer for a complete list of events supported for
93 | the PMU devices in the SoC and its information if needed.
94 |
95 | Known Issues:
96 |
97 | 1. As the Hisilicon hardware counters are not CPU core specific, the counter values may not be accurate.
98 |
99 | 2. As the counter registers in the Hisilicon SoC are configured and accessed via the djtag interface, the
100 | event counter readings can be affected because the access is not atomic.
101 |
102 | 3. Counters are 32-bit, and overflow handling is currently not supported.
103 |
--------------------------------------------------------------------------------
/doc/Readme.4D03.md:
--------------------------------------------------------------------------------
1 | This is the readme file for D03 platform
2 |
3 | After you executed
4 | ```bash
5 | ./estuary/build.sh --file=./estuary/estuarycfg.json --builddir=./workspace
6 | ```
7 | for D03, all target files will be produced. They are:
8 |
9 | ### UEFI_D03.fd
10 |
11 | **description**: `UEFI_D03.fd` is the UEFI bios for D03 platform.
12 | **target**: `/workspace/binary/D03/UEFI_D03.fd`
13 | **source**: `/uefi`
14 |
15 | build commands (assuming you are currently in ``):
16 | ```bash
17 | ./estuary/submodules/build-uefi.sh --platform=D03 --output=workspace
18 | ```
19 |
20 | ### grubaa64.efi
21 |
22 | **description**: `grubaa64.efi` is used to load kernel image and dtb files from SATA, SAS, USB Disk, or NFS into RAM and start the kernel.
23 | **target**: `/workspace/binary/arm64/grubaa64.efi`
24 | **source**: `/grub`
25 |
26 | build commands (assuming you are currently in ``):
27 | ```bash
28 | ./estuary/submodules/build-grub.sh --output=./workspace
29 | ```
30 | If your host is not an ARM architecture machine, please execute
31 | ```bash
32 | ./estuary/submodules/build-grub.sh --output=./workspace --cross=aarch64-linux-gnu-
33 | ```
34 | Note: for more details about how to install `gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux`, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
35 |
36 | ### Image & hip06-d03.dtb
37 |
38 | **descriptions**: `Image` is the kernel executable program, and `hip06-d03.dtb` is the device tree binary.
39 | **target**:
40 | `Image` in `/workspace/binary/arm64/Image`
41 | `hip06-d03.dtb` in `/workspace/binary/D03/hip06-d03.dtb`
42 | **source**: `/kernel`
43 |
44 | build commands (assuming you are currently in ``):
45 | ```bash
46 | ./estuary/submodules/build-kernel.sh --platform=D03 --output=workspace
47 | ```
48 | If your host is not an ARM architecture machine, please execute
49 | ```bash
50 | ./estuary/submodules/build-kernel.sh --platform=D03 --output=workspace --cross=aarch64-linux-gnu-
51 | ```
52 | Note: for more details about how to install `gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux`, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
53 |
54 | For more details about distributions, please refer to [Distributions_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Distributions_Guide.4All.md).
55 |
56 | For more details about toolchains, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
57 |
58 | For more details about how to deploy the target system onto the D03 board, please refer to [Deployment_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Deploy_Manual.4D03.md).
59 |
60 | For more details about how to debug, analyse and diagnose the system, please refer to [Armor_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Armor_Manual.4All.md).
61 |
62 | For more details about how to benchmark the system, please refer to [Caliper_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Caliper_Manual.4All.md).
63 |
64 | For more details about how to access remote boards in OpenLab, please refer to [Boards_in_OpenLab](http://open-estuary.org/accessing-boards-in-open-lab/).
65 |
--------------------------------------------------------------------------------
/doc/Readme.4D05.md:
--------------------------------------------------------------------------------
1 | This is the readme file for D05 platform
2 |
3 | After you executed `./estuary/build.sh --file=./estuary/estuarycfg.json --builddir=./workspace` for D05, all target files will be produced. They are:
4 |
5 | ### UEFI_D05.fd
6 |
7 | **description**: UEFI_D05.fd is the UEFI bios for D05 platform.
8 |
9 | **target**: `/workspace/binary/D05/UEFI_D05.fd`
10 |
11 | **source**: `/uefi`
12 |
13 | build commands (assuming you are currently in ``):
14 | ```shell
15 | ./estuary/submodules/build-uefi.sh --platform=D05 --output=workspace
16 | ```
17 |
18 | ### grubaa64.efi
19 |
20 | **description**:
21 |
22 | grubaa64.efi is used to load kernel image and dtb files from SATA, SAS, USB Disk, or NFS into RAM and start the kernel.
23 |
24 | **target**: `/workspace/binary/arm64/grubaa64.efi`
25 |
26 | **source**: `/grub`
27 |
28 | build commands (assuming you are currently in ``):
29 |
30 | `./estuary/submodules/build-grub.sh --output=./workspace`; if your host is not an ARM architecture machine, please execute `./estuary/submodules/build-grub.sh --output=./workspace --cross=aarch64-linux-gnu-`
31 |
32 | Note: for more details about how to install gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md.
33 |
34 | ### Image
35 |
36 | **descriptions**: Image is the kernel executable program.
37 |
38 | **target**:
39 |
40 | Image in `/workspace/binary/arm64/Image`
41 |
42 | **source**: `/kernel`
43 |
44 | build commands (assuming you are currently in ``):
45 |
46 | `./estuary/submodules/build-kernel.sh --platform=D05 --output=workspace`; if your host is not an ARM architecture machine, please execute `./estuary/submodules/build-kernel.sh --platform=D05 --output=workspace --cross=aarch64-linux-gnu-`.
47 |
48 | Note: for more details about how to install gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md.
49 |
50 | For more details about distributions, please refer to [Distributions_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Distributions_Guide.4All.md).
51 |
52 | For more details about toolchains, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
53 |
54 | For more details about how to deploy the target system onto the D05 board, please refer to [Deployment_Manual.md](https://github.com/open-estuary/estuary/blob/estuary-d05-3.0b/doc/Deploy_Manual.4D05.md).
55 |
56 | For more details about how to debug, analyse and diagnose the system, please refer to [Armor_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Armor_Manual.4All.md).
57 |
58 | For more details about how to benchmark the system, please refer to [Caliper_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Caliper_Manual.4All.md).
59 |
60 | For more details about how to access remote boards in OpenLab, please refer to [Boards_in_OpenLab](http://open-estuary.org/accessing-boards-in-open-lab/).
61 |
--------------------------------------------------------------------------------
/doc/Readme.4HiKey.md:
--------------------------------------------------------------------------------
1 | This is the readme file for HiKey platform
2 |
3 | First of all, you need to install some packages as follows:
4 | ```bash
5 | sudo apt-get install -y wget automake1.11 make bc libncurses5-dev libtool libc6:i386 \
6 |     libncurses5:i386 libstdc++6:i386 lib32z1 bison flex uuid-dev build-essential \
7 |     iasl gcc zlib1g-dev libperl-dev libgtk2.0-dev libfdt-dev
8 | ```
9 | After you executed `./estuary/build.sh --file=./estuary/estuarycfg.json --builddir=./workspace` for HiKey, all target files will be produced. They are:
10 |
11 | ### l-loader.bin
12 | ### ptable-linux.img
13 | ### AndroidFastbootApp.efi
14 | ### UEFI_HiKey.fd
15 |
16 | **description**: l-loader.bin is used to switch from aarch32 to aarch64 and boot; UEFI_HiKey.fd is the UEFI bios for HiKey; ptable-linux.img contains the partition tables for Linux images.
17 | **target**:
18 | ```bash
19 | /workspace/binary/HiKey/l-loader.bin
20 |
21 | /workspace/binary/HiKey/ptable-linux.img
22 |
23 | /workspace/binary/HiKey/AndroidFastbootApp.efi
24 |
25 | /workspace/binary/HiKey/UEFI_HiKey.fd.
26 | ```
27 |
28 | **source**: `/uefi`
29 |
30 | build commands (assuming you are currently in ``):
31 |
32 | `./estuary/submodules/build-uefi.sh --platform=HiKey --output=workspace`
33 |
34 | ### grubaa64.efi
35 |
36 | **description**: grubaa64.efi is used to load the kernel image and dtb files from the SD card or NAND flash into RAM and start the kernel.
37 | **target**: `/workspace/binary/arm64/grubaa64.efi`
38 | **source**: `/grub`
39 |
40 | build commands (assuming you are currently in ``):
41 | `./estuary/submodules/build-grub.sh --output=./workspace`; if your host is not an ARM architecture machine, please execute `./estuary/submodules/build-grub.sh --output=./workspace --cross=aarch64-linux-gnu-`
42 |
43 | Note: for more details about how to install gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md.
44 |
45 | ### Image
46 | ### hi6220-hikey.dtb
47 |
48 | **descriptions**: Image is the kernel executable program, and hi6220-hikey.dtb is the device tree binary.
49 | **target**: Image in `/workspace/binary/arm64/Image`
50 | hi6220-hikey.dtb in `/workspace/binary/HiKey/hi6220-hikey.dtb`
51 | **source**: `/kernel`
52 |
53 | build commands (assuming you are currently in ``):
54 | `./estuary/submodules/build-kernel.sh --platform=HiKey --output=workspace`; if your host is not an ARM architecture machine, please execute `./estuary/submodules/build-kernel.sh --platform=HiKey --output=workspace --cross=aarch64-linux-gnu-`.
55 |
56 | Note: for more details about how to install gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux, please refer to https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md.
57 |
58 | To get more information about UEFI, please visit https://github.com/96boards/documentation/wiki/HiKeyUEFI
59 | For more details about how to deploy the target system onto the HiKey board, please refer to [Deploy_Manual.4HiKey.md](https://github.com/open-estuary/estuary/blob/master/doc/Deploy_Manual.4HiKey.md).
60 | For more details about how to configure the WiFi function on the HiKey board, please refer to [Setup_HiKey_Wifi_Env.md](https://github.com/open-estuary/estuary/blob/master/doc/Setup_HiKey_WiFi_Env.4HiKey.md).
61 | For more details about distributions, please refer to [Distributions_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Distributions_Guide.4All.md).
62 | For more details about toolchains, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
63 | For more details about how to benchmark the system, please refer to [Caliper_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Caliper_Manual.4All.md).
64 | For more details about how to access remote boards in OpenLab, please refer to [Boards_in_OpenLab](http://open-estuary.org/accessing-boards-in-open-lab/).
65 |
--------------------------------------------------------------------------------
/doc/Readme.4QEMU.md:
--------------------------------------------------------------------------------
1 | This is the readme file for QEMU platform
2 |
3 | After you run `./estuary/build.sh -p QEMU -d Ubuntu`, all target files will be produced in the `/build/QEMU` directory. They are listed below; UEFI, grub and dtb files are not necessary for the QEMU platform.
4 |
5 | ### Image
6 |
7 | **descriptions**: Image is the kernel executable program.
8 | **target**: `/build/QEMU/kernel/arch/arm64/boot/Image`
9 | **source**: `/kernel`
10 | **Note**: Before compiling the kernel, gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux (https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md) and libssl-dev should be installed first.
11 |
12 | build commands (assuming you are currently in ``):
13 | ```bash
14 | build_dir=build
15 | KERNEL_DIR=kernel
16 | mkdir -p $build_dir/QEMU/$KERNEL_DIR 2>/dev/null
17 | kernel_dir=$build_dir/QEMU/$KERNEL_DIR
18 | KERNEL_BIN=$kernel_dir/arch/arm64/boot/Image
19 | CFG_FILE=defconfig
20 | DTB_BIN=
21 | export ARCH=arm64
22 | export CROSS_COMPILE=aarch64-linux-gnu-
23 |
24 | pushd $KERNEL_DIR/
25 |
26 | git clean -fdx
27 | git reset --hard
28 | sudo rm -rf ../$kernel_dir/*
29 | make O=../$kernel_dir mrproper
30 |
31 | ./scripts/kconfig/merge_config.sh -O ../$kernel_dir -m arch/arm64/configs/defconfig \
32 | arch/arm64/configs/distro.config arch/arm64/configs/estuary_defconfig arch/arm64/configs/qemu_defconfig
33 | mv -f ../$kernel_dir/.config ../$kernel_dir/.merged.config
34 | make O=../$kernel_dir KCONFIG_ALLCONFIG=../$kernel_dir/.merged.config alldefconfig
35 |
36 | corenum=$(nproc)    # number of parallel build jobs
37 | make O=../$kernel_dir -j${corenum} ${KERNEL_BIN##*/}
38 |
39 | # DTB_BIN is empty for the QEMU platform, so the dtb steps below are skipped.
40 | if [ -n "$DTB_BIN" ]; then
41 |     dtb_dir=${DTB_BIN#*arch/}
42 |     dtb_dir=${dtb_dir%/*}
43 |     dtb_dir=../${kernel_dir}/arch/${dtb_dir}
44 |     mkdir -p $dtb_dir 2>/dev/null
45 |     make O=../$kernel_dir ${DTB_BIN#*/boot/dts/}
46 | fi
45 |
46 | ```
47 |
48 | ### qemu-system-aarch64
49 |
50 | **descriptions**: qemu-system-aarch64 is the QEMU executable program.
51 | **target**: `/build/qemu/bin/qemu-system-aarch64`
52 | **source**: `/qemu`
53 | build commands (assuming you are currently in ``):
54 | ```bash
55 | sudo apt-get install -y gcc zlib1g-dev libperl-dev libgtk2.0-dev libfdt-dev
56 |
57 | pushd qemu
58 | ./configure --prefix="/home/user/qemubuild" --target-list=aarch64-softmmu
59 | make -j14
60 | make install
61 | popd
62 |
63 | /home/user/qemubuild/bin/qemu-system-aarch64 -machine virt -cpu cortex-a57 \
64 | -kernel /build/QEMU/binary/Image_QEMU \
65 | -drive if=none,file=,id=fs \
66 | -device virtio-blk-device,drive=fs \
67 | -append "console=ttyAMA0 root=/dev/vda rw" \
68 | -nographic
69 | ```
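If you only want to smoke-test the kernel without preparing a root disk image, the mini rootfs cpio from the release binaries can be passed as an initrd instead of a block device. This is a sketch, and the file paths are illustrative:
```bash
/home/user/qemubuild/bin/qemu-system-aarch64 -machine virt -cpu cortex-a57 \
    -kernel build/QEMU/kernel/arch/arm64/boot/Image \
    -initrd mini-rootfs-arm64.cpio.gz \
    -append "console=ttyAMA0" \
    -nographic
```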
70 |
71 | For more details about distributions, please refer to [Distributions_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Distributions_Guide.4All.md).
72 | For more details about toolchains, please refer to [Toolchains_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Toolchains_Guide.4All.md).
73 | For more details about how to debug, analyse and diagnose the system, please refer to [Armor_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Armor_Manual.4All.md).
74 | For more details about how to benchmark the system, please refer to [Caliper_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Caliper_Manual.4All.md).
75 |
--------------------------------------------------------------------------------
/doc/Setup_PXE_Env_on_Host.4All.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Setup DHCP server on Ubuntu](#2)
3 | * [Setup TFTP server on Ubuntu](#3)
4 | * [Put files in the TFTP root path](#4)
5 | * [Setup NFS server on Ubuntu](#5)
6 |
7 | This is a guide to setup a PXE environment on host machine.
8 |
9 | ### Introduction
10 |
11 | PXE boot depends on the DHCP, TFTP and NFS services. So before verifying PXE, you need to set up working DHCP, TFTP and NFS servers on one of the host machines in your local network. In this case, the host OS is Ubuntu 12.04.
12 |
13 | ### Setup DHCP server on Ubuntu
14 |
15 | Refer to https://help.ubuntu.com/community/isc-dhcp-server . For a simplified direction, try these steps:
16 |
17 | * Install DHCP server package
18 |
19 | `sudo apt-get install -y isc-dhcp-server syslinux`
20 |
21 | * Edit /etc/dhcp/dhcpd.conf to suit your needs and particular configuration.
22 | Make sure "filename" is consistent with the file in the tftp root directory.
23 | Here is an example; it makes the board load "grubaa64.efi" from the TFTP root and run it when you boot from PXE in the UEFI Boot Menu.
24 | ```bash
25 | $ cat /etc/dhcp/dhcpd.conf
26 | # Sample /etc/dhcpd.conf
27 | # (add your comments here)
28 | default-lease-time 600;
29 | max-lease-time 7200;
30 | subnet 192.168.1.0 netmask 255.255.255.0 {
31 | range 192.168.1.210 192.168.1.250;
32 | option subnet-mask 255.255.255.0;
33 | option domain-name-servers 192.168.1.1;
34 | option routers 192.168.1.1;
35 | option subnet-mask 255.255.255.0;
36 | option broadcast-address 192.168.1.255;
37 | # Change the filename according to your real local environment and target board type.
38 | # And make sure the file has been put in tftp root directory.
39 | # grubaa64.efi is for ARM64 architecture.
40 | # grubarm32.efi is for ARM32 architecture.
41 | filename "grubaa64.efi";
42 | #filename "grubarm32.efi";
43 | #next-server 192.168.1.107
44 | }
45 | #
46 | ```
47 | * Edit /etc/default/isc-dhcp-server to specify the interfaces dhcpd should listen to. By default it listens to eth0, e.g.:
48 | `INTERFACES="eth0"`
49 |
50 | * Use these commands to start and check DHCP service
51 | `sudo service isc-dhcp-server restart`
52 | Check status with `netstat -lu`
53 | Expected output:
54 | ```
55 | Proto Recv-Q Send-Q Local Address Foreign Address State
56 | udp            0      0 *:bootps                *:*
57 | ```
58 | ### Setup TFTP server on Ubuntu
59 |
60 | * Install TFTP server and TFTP client(optional, tftp-hpa is the client package)
61 |
62 | `sudo apt-get install -y openbsd-inetd tftpd-hpa tftp-hpa lftp`
63 | * Edit /etc/inetd.conf
64 |
65 | Remove the "#" from the beginning of the tftp line, or add the line under the "#:BOOT:" comment if it's not there, as follows:
66 | `tftp dgram udp wait root /usr/sbin/in.tftpd /usr/sbin/in.tftpd -s /var/lib/tftpboot`
67 | * Enable boot service for inetd
68 | `sudo update-inetd --enable BOOT`
69 | * Configure the TFTP server, update /etc/default/tftpd-hpa like follows:
70 |
71 | ```bash
72 | TFTP_USERNAME="tftp"
73 | TFTP_ADDRESS="0.0.0.0:69"
74 | TFTP_DIRECTORY="/var/lib/tftpboot"
75 | TFTP_OPTIONS="-l -c -s"
76 | ```
77 |
78 | * Set up TFTP server directory
79 | ```bash
80 | sudo mkdir /var/lib/tftpboot
81 | sudo chmod -R 777 /var/lib/tftpboot/
82 | ```
83 |
84 | * Restart inet & TFTP server
85 | ```bash
86 | sudo service openbsd-inetd restart
87 | sudo service tftpd-hpa restart
88 | ```
89 | Check status with `netstat -lu`
90 | Expected output:
91 | ```
92 | Proto Recv-Q Send-Q Local Address Foreign Address State
93 | udp 0 0 *:tftp *:*
94 | ```
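Optionally, once a file (e.g. grubaa64.efi) has been copied into /var/lib/tftpboot, the transfer can be verified from another machine with the tftp-hpa client installed above; the server IP is illustrative:
```bash
tftp 192.168.1.107 -c get grubaa64.efi
```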
95 |
96 | ### Put files in the TFTP root path
97 |
98 | Put the corresponding files into the TFTP root directory. They are: the grub binary file, the grub configuration file and the kernel Image.
99 | In my case, they are grubaa64.efi, grub.cfg-01-xx-xx-xx-xx-xx-xx and Image.
100 |
101 | Note:
102 | 1. The name of the grub binary, "grubaa64.efi" or "grubarm32.efi", must be the same as in the DHCP configuration file `/etc/dhcp/dhcpd.conf`.
103 | 2. The grub configuration file's name must comply with a special format, e.g. grub.cfg-01-xx-xx-xx-xx-xx-xx: it starts with "grub.cfg-01-" and ends with the board's MAC address.
104 | 3. The grub binary and the grub.cfg-01-xx-xx-xx-xx-xx-xx file must be placed in the TFTP root directory.
105 | 4. The name and location of the kernel image must be consistent with the corresponding grub configuration file.
106 |
107 | To get and config grub and grub config files, please refer to [Grub_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/Grub_Manual.4All.md).
108 | To get kernel, please refer to Readme.md.
109 |
110 | ### Setup NFS server on Ubuntu
111 |
112 | * Install NFS server package
113 | `sudo apt-get install nfs-kernel-server nfs-common portmap`
114 | * Modify the configuration file `/etc/exports` for the NFS server.
115 | Add the following content at the end of this file:
116 | ```bash
117 | *(rw,sync,no_root_squash)
118 | ```
119 | Note: `` is your real shared directory containing the distribution rootfs exported by the NFS server (see the example after this list).
120 |
121 | * Uncompress a distribution into ``.
122 | To get the distributions, please refer to [Distributions_Guide.md](https://github.com/open-estuary/estuary/blob/master/doc/Distributions_Guide.4All.md)
123 | * Restart NFS service
124 | `sudo service nfs-kernel-server restart`
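For illustration, assuming the distribution rootfs was unpacked to /home/ftp/user/rootfs_ubuntu64 (a hypothetical path, matching the one used in the sample grub.cfg elsewhere in this repository), the /etc/exports entry and the reload command would look like this:
```bash
# /etc/exports
/home/ftp/user/rootfs_ubuntu64 *(rw,sync,no_root_squash)

# re-export the shares without restarting the service
sudo exportfs -ra
```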
125 |
126 |
--------------------------------------------------------------------------------
/doc/Toolchains_Guide.4All.md:
--------------------------------------------------------------------------------
1 | This is the guide for toolchains
2 |
3 | If your host machine and target machine have different architectures, you have to prepare toolchains before you build any target binaries.
4 |
5 | E.g. if you are building an ARM-based target binary on an Intel machine, you must use a cross toolchain on your host machine.
6 | By default, after you run `./estuary/build.sh -i toolchain`, the toolchain will be installed into your host machine's `/opt` directory.
7 |
8 | The original toolchain files will also be downloaded into the `/toolchain` directory:
9 | ```bash
10 | gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux.tar.xz # this is for aarch64 architecture.
11 | gcc-linaro-arm-linux-gnueabihf-4.9-2014.09_linux.tar.xz # this is for arm32 architecture.
12 | ```
13 |
14 | Of course, you can install the toolchains yourself with following commands (take the aarch64 as example):
15 | ```bash
16 | pushd toolchain
17 | sudo mkdir -p /opt 2>/dev/null
18 | sudo tar -xvf gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux.tar.xz -C /opt
19 | str='export PATH=$PATH:/opt/gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux/bin'
20 | grep "$str" ~/.bashrc >/dev/null
21 | if [ x"$?" != x"0" ]; then echo "$str">> ~/.bashrc; fi
22 | source ~/.bashrc
23 | ```
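After opening a new shell (or sourcing ~/.bashrc as above), you can verify that the cross compiler is on the PATH:
```bash
aarch64-linux-gnu-gcc --version
```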
24 |
--------------------------------------------------------------------------------
/doc/UEFI_Manual.4D03.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Upgrade UEFI](#2)
3 | * [Recover the UEFI when it broke](#3)
4 |
5 | ## Introduction
6 |
7 | UEFI is a kind of BIOS used to boot the system and provide runtime services to the OS, which can perform some basic IO operations through these runtime services, e.g. reboot, power off, etc.
8 | Normally, the UEFI build also produces some trusted firmware binaries, which are responsible for the trusted boot/reprogram flow; they include:
9 | ```
10 | UEFI_D03.fd //UEFI executable binary file.
11 | ```
12 | Where to get them, please refer to [Readme.md](https://github.com/open-estuary/estuary/blob/master/doc/Readme.4D03.md).
13 |
14 | ## Upgrade UEFI
15 |
16 | Note: This step is only necessary if you really want to upgrade the UEFI.
17 |
18 | 1. Prepare UEFI_D03.fd on a computer with an FTP service installed.
19 | The FTP service is used to download files from the FTP server to the hardware boards. Please first prepare a computer with an FTP service on the local network, so that the boards can fetch the needed files from the FTP server via the FTP protocol. Then put the UEFI file mentioned above into the root directory of the FTP service.
20 | 2. Connect the board's UART port to a host machine.
21 | Please refer to the "Prerequisite" chapter of [Deploy_Manual.4D03.md](https://github.com/open-estuary/estuary/blob/master/doc/Deploy_Manual.4D03.md).
22 |
23 | If you chose Method 1 in Deploy_Manual.4D03.md, open another console window and use the `board_reboot` command to reset the board.
24 | If you chose Method 2 in Deploy_Manual.4D03.md, press the reset key on the board to reset it.
25 |
26 | When the system shows "Press Any key in 10 seconds to stop automatical booting...", press any key except the "Enter" key to enter the UEFI main menu.
27 |
28 | ### UEFI menu introduction
29 |
30 | UEFI main menu option is showed as follow:
31 | ```bash
32 | continue
33 | select Language
34 | >Boot Manager
35 | >Device Manager
36 | >Boot Maintenance Manager
37 | ```
38 | Choose "Boot Manager" and enter into Boot option menu:
39 | ```
40 | EFI Misc Device
41 | EFI Network
42 | EFI Network 1
43 | EFI Network 2
44 | EFI Network 3
45 | EFI Internal Shell
46 | ESL Start OS
47 | Embedded Boot Loader(EBL)
48 | ```
49 | Please select "EFI Network 2" when booting boards via PXE in the openlab environment.
50 | The D03 board supports 4 network ports, including two GE ports and two 10GE ports, which correspond to the UEFI boot entries EFI Network 2, EFI Network 3, EFI Network 0 and EFI Network 1 respectively.
51 |
52 | *EFI Internal Shell mode* is a standard command shell in UEFI.
53 | *Embedded Boot Loader(EBL) mode* is an embedded command shell based on boot loader specially for developers.
54 | You can switch between the two modes by typing "exit" in one mode to return to the UEFI main menu and then choosing the other mode.
55 |
56 | ### Update UEFI files
57 |
58 | * IP address configuration in "EFI Internal Shell" mode (optional; you can skip this step if DHCP works well)
59 |
60 | Press any key except "enter" key to enter UEFI main menu. Select "Boot Manager"->"EFI Internal Shell".
61 | ```bash
62 | ifconfig -s eth0 static <ip address> <netmask> <gateway>
63 | ```
64 | e.g.:
65 | ```
66 | ifconfig -s eth0 static 192.168.1.4 255.255.255.0 192.168.1.1
67 | ```
68 | * Burn BIOS file at "Embedded Boot Loader(EBL)" mode
69 |
70 | After setting the IP address, enter "exit" from "EFI Internal Shell" mode to return to the UEFI main menu and choose "Boot Manager" -> "Embedded Boot Loader(EBL)".
71 | ```bash
72 | # Download the file from the FTP server to the board's RAM
73 | provision <ftp server ip> -u <username> -p <password> -f <filename> -a <ram address>
74 | # Write the data into NORFLASH
75 | spiwfmem <ram address> <flash address> <length>
76 | ```
77 | e.g.:
78 | ```bash
79 | provision 192.168.1.107 -u sch -p aaa -f UEFI_D03.fd -a 0x100000
80 | ```
81 | The D03 board supports 4 network ports, including 2 GE ports and 2 10GE ports. Please select "Interface 3" when downloading the bios file in the openlab environment.
82 | ```bash
83 | spiwfmem 0x100000 0x0000000 0x300000
84 | ```
85 |
86 | * Power off and reboot board again.
87 |
88 | ## Recover the UEFI when it broke
89 |
90 | 1. Connect the board's BMC port to a network port of your Ubuntu host.
91 | 2. Configure the board's BMC IP and your Ubuntu host's IP on the same network segment.
92 | 3. Log in to the BMC website; the `username/passwd` is `root/Huawei12#$`. Click "System", then "Firmware upgrade", then "Browse" to choose the UEFI file in hpm format. (Please contact support@open-estuary.org to get the hpm file.)
93 |
94 | Note: Usually the BMC website can be visited at https://192.168.2.100 by default. If the BMC IP has been modified by somebody, please take the following steps to find the modified BMC IP:
95 |
96 | * Pull out the power cable to power off the board. Find the pin named "`COM_SW`" at `J44`, then connect it with a jumper cap.
97 | * Power on the board and connect the board's serial port to your Ubuntu host's serial port. When the screen displays the message "You are trying to access a restricted zone. Only Authorized Users allowed.", press the "Enter" key and input the `username/passwd`; the `username/passwd` is `root/Huawei12#$`.
98 | * After you log in to the BMC interface, whose prompt starts with "`iBMC:/->`", use the command "`ifconfig`" to see the modified BMC IP.
99 | * When you have the board's BMC IP, visit the BMC website at `https://<BMC IP>`
100 | 4. Click "Start update" (do not power off during this period).
101 | 5. After the UEFI file has been updated, reboot the board to enter the UEFI menu.
102 |
--------------------------------------------------------------------------------
/doc/UEFI_Manual.4D05.md:
--------------------------------------------------------------------------------
1 | * [Introduction](#1)
2 | * [Upgrade UEFI](#2)
3 | * [Recover the UEFI when it broke](#3)
4 |
5 | ## Introduction
6 |
7 | UEFI is a kind of BIOS used to boot the system and provide runtime services to the OS, which can perform some basic IO operations through these runtime services, e.g. reboot, power off, etc.
8 | Normally, the UEFI build also produces some trusted firmware binaries, which are responsible for the trusted boot/reprogram flow; they include:
9 |
10 | UEFI_D05.fd //UEFI executable binary file.
11 |
12 | Where to get them, please refer to [Readme.md](https://github.com/open-estuary/estuary/blob/master/doc/Readme.4D05.md).
13 |
14 | ## Upgrade UEFI
15 |
16 | Note: This step is only necessary if you really want to upgrade the UEFI.
17 |
18 | 1. Prepare UEFI_D05.fd on a computer with an FTP service installed.
19 | The FTP service is used to download files from the FTP server to the hardware boards. Please first prepare a computer with an FTP service on the local network, so that the boards can fetch the needed files from the FTP server via the FTP protocol. Then put the UEFI file mentioned above into the root directory of the FTP service.
20 | 2. Connect the board's UART port to a host machine.
21 | Please refer to the "Prerequisite" chapter of [Deploy_Manual.4D05.md](https://github.com/open-estuary/estuary/blob/master/doc/Deploy_Manual.4D05.md).
22 |
23 | If you chose Method 1 in Deploy_Manual.4D05.md, open another console window and use the `board_reboot` command to reset the board.
24 | If you chose Method 2 in Deploy_Manual.4D05.md, press the reset key on the board to reset it.
25 |
26 | When the system shows "Press Any key in 10 seconds to stop automatical booting...", press any key except the "Enter" key to enter the UEFI main menu.
27 |
28 |
29 | ### UEFI menu introduction
30 |
31 | UEFI main menu option is showed as follow:
32 | ```bash
33 | continue
34 | select Language
35 | >Boot Manager
36 | >Device Manager
37 | >Boot Maintenance Manager
38 | ```
39 | Choose "Boot Manager" and enter into Boot option menu:
40 | ```bash
41 | EFI Misc Device
42 | EFI Network
43 | EFI Network 1
44 | EFI Network 2
45 | EFI Network 3
46 | EFI Internal Shell
47 | ESL Start OS
48 | Embedded Boot Loader(EBL)
49 | ```
50 | Please select "EFI Network 2" when booting boards via PXE in the openlab environment.
51 | The D05 board supports up to 4 on-board network ports; enable any one of them by connecting a network cable or optical fiber. From left to right they are two GE ports followed by two 10GE ports, which correspond to the UEFI boot entries EFI Network 2, EFI Network 3, EFI Network 0 and EFI Network 1 respectively.
52 |
53 | EFI Internal Shell mode is a standard command shell in UEFI.
54 |
55 | Embedded Boot Loader(EBL) mode is an embedded command shell based on boot loader specially for developers.
56 |
57 | You can switch between the two modes by typing "exit" in one mode to return to the UEFI main menu and then choosing the other mode.
58 |
59 | ### Update UEFI files
60 |
61 | * IP address configuration in "EFI Internal Shell" mode (optional; you can skip this step if DHCP works well)
62 |
63 | Press any key except "enter" key to enter UEFI main menu. Select "Boot Manager"->"EFI Internal Shell".
64 |
65 | `ifconfig -s eth0 static <ip address> <netmask> <gateway>`
66 |
67 | e.g.:
68 |
69 | `ifconfig -s eth0 static 192.168.1.4 255.255.255.0 192.168.1.1`
70 |
71 | * Burn BIOS file at "Embedded Boot Loader(EBL)" mode
72 |
73 | After setting the IP address, enter "exit" from "EFI Internal Shell" mode to return to the UEFI main menu and choose "Boot Manager" -> "Embedded Boot Loader(EBL)".
74 | ```bash
75 | # Download the file from the FTP server to the board's RAM
76 | provision <ftp server ip> -u <username> -p <password> -f <filename> -a <ram address>
77 | # Write the data into NORFLASH
78 | spiwfmem <ram address> <flash address> <length>
79 | ```
80 | e.g.:
81 | ```bash
82 | provision 192.168.1.107 -u sch -p aaa -f UEFI_D05.fd -a 0x100000
83 | ```
84 | The D05 board supports 4 network ports, including 2 GE ports and 2 10GE ports. Please select "Interface 3" when downloading the bios file in the openlab environment.
85 | ```bash
86 | spiwfmem 0x100000 0x0000000 0x300000
87 | ```
88 | * Power off and reboot board again.
89 |
90 | ## Recover the UEFI when it broke
91 |
92 | 1. Connect the board's BMC port to a network port of your Ubuntu host.
93 |
94 | 2. Configure the board's BMC IP and your Ubuntu host's IP on the same network segment.
95 |
96 | 3. Log in to the BMC website; the username/passwd is root/Huawei12#$. Click "System", then "Firmware upgrade", then "Browse" to choose the UEFI file in hpm format. (Please contact support@open-estuary.org to get the hpm-format UEFI file.)
97 |
98 | Note: Usually the BMC website can be visited at https://192.168.2.100 by default. If the BMC IP has been modified by somebody, please take the following steps to find the modified BMC IP:
99 |
100 | * Pull out the power cable. Find the pin named "COM_SW" at J44, then connect it with a jumper cap.
101 |
102 | * Power on the board and connect the board's serial port to your Ubuntu host's serial port. When the screen displays the message "You are trying to access a restricted zone. Only Authorized Users allowed.", press the "Enter" key and input the username/passwd; the username/passwd is root/Huawei12#$.
103 |
104 | * After you log in to the BMC interface, whose prompt starts with "iBMC:/->", use the command "ifconfig" to see the modified BMC IP.
105 |
106 | * When you have the board's BMC IP, visit the BMC website at https://<BMC IP>
107 | 4. Click "Start update" (do not power off during this period).
108 |
109 | 5. After the UEFI file has been updated, reboot the board to enter the UEFI menu.
110 |
--------------------------------------------------------------------------------
/doc/release-files/CentOS/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the ISO and PXE installer binaries for CentOS. For how to deploy D03 & D05, please refer to [Deploy_Manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md)
3 | ```
4 | CentOS-7-aarch64-Everything.iso : CentOS iso installer
5 | CentOS-7-aarch64-Everything.iso.MD5SUM : the md5sum of CentOS-7-aarch64-Everything.iso
6 | boot.iso : network installer iso
7 | boot.iso.MD5SUM : the md5sum of boot.iso
8 | netboot.tar.gz : PXE boot network installer
9 | netboot.tar.gz.MD5SUM : the md5sum of netboot.tar.gz
10 | ```
11 |
--------------------------------------------------------------------------------
/doc/release-files/Debian/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the ISO and PXE installer binaries for Debian. For how to deploy D03 & D05, please refer to [Deploy_Manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md).
3 | ```
4 | estuary-v5.0-debian-8.7-arm64-CD-1.iso : Debian iso installer
5 | estuary-v5.0-debian-8.7-arm64-CD-1.iso.MD5SUM : the md5sum of estuary-v5.0-debian-8.7-arm64-CD-1.iso
6 | mini.iso : network installer iso
7 | mini.iso.MD5SUM : the md5sum of mini.iso
8 | netboot.tar.gz : PXE boot network installer
9 | netboot.tar.gz.MD5SUM : the md5sum of netboot.tar.gz
10 | default-preseed.cfg : the mini requried preseed config file
11 | default-preseed.cfg.MD5SUM : the md5sum of default-preseed.cfg
12 | ```
13 |
--------------------------------------------------------------------------------
/doc/release-files/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains binaries for the D03 & D05 platforms. For how to deploy D03 & D05, please refer to [deploy_manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md)
3 | ```
4 | binary : contain binaries(uefi,grub,mini-rootfs,*_ARM64.tar.gz) for platform of D03&D05
5 | CentOS : contain centos iso,boot iso,netboot.tar.gz and md5sum of them
6 | Debian : contain debian iso,mini iso,netboot and md5sum of them
7 | Ubuntu : contain ubuntu iso,mini iso,netboot and md5sum of them
8 | Fedora : contain fedora iso,boot iso,netboot and md5sum of them
9 | OpenSuse : contain opensuse iso,boot iso,netboot and md5sum of them
10 | ```
11 |
--------------------------------------------------------------------------------
/doc/release-files/ReleaseNotes.md:
--------------------------------------------------------------------------------
1 | # Estuary v5.2 Release Information:
2 | [Please click here to go to the download page of this release](http://open-estuary.org/estuary-download/)
3 |
4 | ```
5 | Release Version : 5.2
6 | Release Date : 29-May-2018
7 | QEMU : v2.7.0
8 | OpenJDK : v1.8
9 | Docker : v1.12.6
10 | MySQL : percona-5.7.18
11 | CI : Support NFS/PXE boot testing on D03/D05 board(OS is Ubuntu,CentOS,Debian)
12 | Armor tools : include perf, gdb, strace... (totally more than 40 tools for system debug\analyses\diagnosis)
13 | Distributions Supported : Ubuntu 16.04.4,CentOS 7.5,Debian 9,Fedora 26,OpenSuse 42.3,mini-rootfs 1.1
14 | Kernel Version : 4.16.3
15 | Bootloader Info : UEFI 3.0 + Grub 2.02-beta3
16 | Boot mode : PXE, NFS, iBMC Load ISO
17 | Boards Supported : D03(ARM64), D05(ARM64)
18 | Deployment Methods : Auto ISO file load, PXE
19 | ```
20 |
21 | # Introduction:
22 |
23 | Estuary is a development version of a complete software solution targeting the ICT market. It is a long-term solution focused on the combination of high-level components, and it is expected to be re-based onto the latest kernel, distribution and application versions at the earliest opportunity.
24 |
25 | # Changelog:
26 |
27 | ```
28 | 1. UEFI
29 | - Added support for RAID cards (type 3008, type 3108)
30 | 2. OS
31 | - Upgraded Linux kernel version to v4.16.3
32 | 3. Distros
33 | - Added support for Fedora
34 | - Added support for OpenSuse
35 | 4. Applications
36 | - Integrated new tool : Malluma
37 | 5. Deployment
38 | - Support standard network installation, ISO installation, and compatible with the original NFS deployment
39 | - Build scripts support parallel compiling of CentOS, Ubuntu, Debian, Fedora, OpenSuse and common modules
40 | 6. Document
41 | - Updated project documentation (Readme, Grub, deploy_manual,etc)
42 | 7. CI/Automation
43 | - Supported basic CI/Automation for D03 (Build, NFS/PXE Deployment, Some tests)
44 | - Supported basic CI/Automation for D05 (Build, NFS/PXE Deployment, Some tests)
45 | ```
46 | # Known issues:
47 |
48 | ```
49 | 1. Malluma cannot be installed to the default path
50 | 2. After upgrading the BIOS, the boot options for OpenSuse and Fedora cannot be found
51 | 3. The OS cannot be installed via BMC
52 | ```
53 |
--------------------------------------------------------------------------------
/doc/release-files/Ubuntu/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the ISO and PXE installer binaries for Ubuntu. For how to deploy D03 & D05, please refer to [Deploy_Manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md)
3 | ```
4 | estuary-v5.0-ubuntu.iso : Ubuntu iso installer
5 | estuary-v5.0-ubuntu.iso.MD5SUM : the md5sum of estuary-v5.0-ubuntu.iso
6 | mini.iso : network installer iso
7 | mini.iso.MD5SUM : the md5sum of mini.iso
8 | netboot.tar.gz : PXE boot network installer
9 | netboot.tar.gz.MD5SUM : the md5sum of netboot.tar.gz
10 | default-preseed.cfg : the mini required preseed config file for network installer
11 | default-preseed.cfg.MD5SUM : the md5sum of default-preseed.cfg
12 |
13 | ```
14 |
--------------------------------------------------------------------------------
/doc/release-files/binary/D03/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the BIOS files for the D03 platform.
3 | ```
4 | UEFI_D03.hpm : the UEFI bios for the D03 platform in hpm format
5 | UEFI_D03.fd : the UEFI bios for the D03 platform in fd format; for how to upgrade UEFI, please refer to [UEFI_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/UEFI_Manual.4D03.md)
6 | ```
7 |
--------------------------------------------------------------------------------
/doc/release-files/binary/D05/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the BIOS files for the D05 platform.
3 | ```
4 | UEFI_D05.hpm : the *.hpm type bios for the D05 platform
5 | UEFI_D05.fd : the *.fd type bios for the D05 platform; for how to upgrade UEFI, please refer to [UEFI_Manual.md](https://github.com/open-estuary/estuary/blob/master/doc/UEFI_Manual.4D05.md)
6 | ```
7 |
--------------------------------------------------------------------------------
/doc/release-files/binary/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the binaries which are used to build the D03 & D05 platforms. For how to deploy D03 & D05, please refer to [Deploy_Manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md)
3 | ```
4 | D03 : contain uefi for D03 platform
5 | D05 : contain uefi for D05 platform
6 | arm64 : contain the binaries(Image,grub,mini-rootfs,*_ARM64.tar.gz) to build the platform of D03&D05
7 | ```
8 |
--------------------------------------------------------------------------------
/doc/release-files/binary/arm64/ReadMe.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This folder contains the binaries which are used to build the D03 & D05 platforms. For how to deploy D03 & D05, please refer to [Deploy_Manual.md](https://github.com/open-estuary/estuary/tree/master/doc/Deploy_Manual.4All.md)
3 | ```
4 | Image : the kernel Image file for D03 & D05
5 | *_ARM64.tar.gz : the rootfs tarball of CentOS,Debian,Ubuntu
6 | mini-rootfs.cpio.gz : the mini-rootfs file
7 | grub.cfg : the grub config file
8 | ```
9 |
10 |
--------------------------------------------------------------------------------
/doc/release-files/binary/arm64/grub.cfg:
--------------------------------------------------------------------------------
1 | # NOTE: Please remove the unused boot items according to your real condition.
2 | # Sample GRUB configuration file
3 | #
4 |
5 | # Boot automatically after 3 secs.
6 | set timeout=3
7 |
8 | # By default, boot the Linux
9 | set default=pxe_console
10 |
11 | menuentry "PXE_CONSOLE" --id pxe_console {
12 | set root=(tftp,192.168.1.107)
13 | linux /Image pcie_aspm=off pci=pcie_bus_perf
14 | initrd /mini-rootfs-arm64.cpio.gz
15 | }
16 |
17 | menuentry "PXE_VGA" --id pxe_vga {
18 | set root=(tftp,192.168.1.107)
19 | linux /Image pcie_aspm=off pci=pcie_bus_perf console=tty0
20 | initrd /mini-rootfs-arm64.cpio.gz
21 | }
22 |
23 | menuentry "NFS_CONSOLE" --id nfs_console {
24 | set root=(tftp,192.168.1.107)
25 | linux /Image pcie_aspm=off pci=pcie_bus_perf rootwait root=/dev/nfs rw nfsroot=192.168.1.107:/home/ftp/user/rootfs_ubuntu64,nfsvers=3 ip=dhcp
26 | }
27 |
28 | menuentry "NFS_VGA" --id nfs_vga {
29 | set root=(tftp,192.168.1.107)
30 | linux /Image pcie_aspm=off pci=pcie_bus_perf rootwait root=/dev/nfs rw nfsroot=192.168.1.107:/home/ftp/user/rootfs_ubuntu64,nfsvers=3 ip=dhcp console=tty0
31 | }
32 |
33 | menuentry "SATA_CONSOLE" --id sata_console {
34 | set root=(hd1,gpt1)
35 | linux /Image pcie_aspm=off pci=pcie_bus_perf rootwait root=/dev/sda2 rw
36 | }
37 |
38 | menuentry "SATA_VGA" --id sata_vga {
39 | set root=(hd1,gpt1)
40 | linux /Image pcie_aspm=off pci=pcie_bus_perf rootwait root=/dev/sda2 rw console=tty0
41 | }
42 |
43 | menuentry "D05_RancherOS" --id d05_sata_rancheros {
44 | set root=(hd1,gpt1)
45 | linux /Image \
46 | pcie_aspm=off pci=pcie_bus_perf init=/init console=ttyAMA0,115200 \
47 | rootwait root=/dev/sda2 rw \
48 | rancher.autologin=ttyAMA0 rancher.password=rancher
49 | }
50 |
51 | menuentry "D03_RancherOS" --id d03_sata_rancheros {
52 | set root=(hd1,gpt1)
53 | linux /Image \
54 | pcie_aspm=off console=ttyS0,115200 init=/init \
55 | rootwait root=/dev/sda2 rw \
56 | rancher.autologin=ttyS0 rancher.password=rancher
57 | }
58 |
59 |
--------------------------------------------------------------------------------
/estuary.txt:
--------------------------------------------------------------------------------
1 | estuary_interal_ftp: http://114.119.4.74/FolderNotVisibleOnWebsite/EstuaryInternalConfig
2 | china_interal_ftp: ftp://117.78.41.188/FolderNotVisibleOnWebsite/EstuaryInternalConfig
3 |
--------------------------------------------------------------------------------
/estuarycfg.json:
--------------------------------------------------------------------------------
1 | {
2 | "system":[
3 | {"platform":"D03","install":"yes"},
4 | {"platform":"D05","install":"yes"}
5 | ],
6 |
7 | "distros":[
8 | {"name":"common","install":"yes"},
9 | {"name":"centos","install":"yes"},
10 | {"name":"debian","install":"yes"},
11 | {"name":"fedora","install":"yes"},
12 | {"name":"opensuse","install":"yes"},
13 | {"name":"ubuntu","install":"yes"}
14 | ],
15 |
16 | "env": [
17 | {"ESTUARY_FTP": "http://repo.estuary.cloud/FolderNotVisibleOnWebsite/EstuaryInternalConfig/"},
18 | {"ESTUARY_REPO": "ftp://repoftp:repopushez7411@117.78.41.188/releases/"},
19 | {"CENTOS_ESTUARY_REPO": "http://192.168.1.107/estuary-repo/kernel-5.30/centos"},
20 | {"CENTOS_MIRROR": "http://repo.estuary.cloud/centos/7/os/aarch64/"},
21 | {"CENTOS_ISO_MIRROR": "http://repo.estuary.cloud/centos/7/isos/aarch64/"},
22 | {"FEDORA_ESTUARY_REPO": "http://repo.estuary.cloud/releases/5.2/fedora"},
23 | {"FEDORA_MIRROR": "http://repo.estuary.cloud/fedora"},
24 | {"FEDORA_ISO_MIRROR": "http://repo.estuary.cloud/FolderNotVisibleOnWebsite/EstuaryInternalConfig/linux/Fedora"},
25 | {"DEBIAN_ESTUARY_REPO": "http://192.168.1.107/estuary-repo/kernel-5.30/debian"},
26 | {"DEBIAN_MIRROR": "http://mirrors.tuna.tsinghua.edu.cn/debian/"},
27 | {"DEBIAN_SECURITY_MIRROR": "http://mirrors.tuna.tsinghua.edu.cn/debian-security/"},
28 | {"OPENSUSE_MIRROR": "http://repo.estuary.cloud/opensuse"},
29 | {"OPENSUSE_ISO_MIRROR": "http://repo.estuary.cloud/FolderNotVisibleOnWebsite/EstuaryInternalConfig/linux/OpenSuse"},
30 | {"UBUNTU_ESTUARY_REPO": "http://repo.estuary.cloud/releases/5.2/ubuntu"},
31 | {"UBUNTU_MIRROR": "http://repo.estuary.cloud/ubuntu-ports/"},
32 | {"UBUNTU_ISO_MIRROR": "http://repo.estuary.cloud/ubuntu/releases/18.04/release/"}
33 | ]
34 |
35 | }
36 |
--------------------------------------------------------------------------------
/estuarycfg_x86_64.json:
--------------------------------------------------------------------------------
1 | {
2 | "system":[
3 | {"platform":"D03","install":"yes"},
4 | {"platform":"D05","install":"yes"}
5 | ],
6 |
7 | "distros":[
8 | {"name":"common","install":"yes"},
9 | {"name":"ubuntu","install":"yes"},
10 | {"name":"debian","install":"yes"}
11 | ],
12 |
13 | "env": [
14 | {"ESTUARY_FTP": "http://repo.estuary.cloud/FolderNotVisibleOnWebsite/EstuaryInternalConfig/"},
15 | {"DEBIAN_ESTUARY_REPO": "ftp://repoftp:repopushez7411@117.78.41.188/releases/5.2/debian"},
16 | {"DEBIAN_MIRROR": "http://repo.estuary.cloud/debian/"},
17 | {"DEBIAN_SECURITY_MIRROR": "http://repo.estuary.cloud/debian-security/"},
18 | {"UBUNTU_ESTUARY_REPO": "ftp://repoftp:repopushez7411@117.78.41.188/releases/5.2/ubuntu"},
19 | {"UBUNTU_MIRROR": "http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/"},
20 | {"UBUNTU_ISO_MIRROR": "http://repo.estuary.cloud/ubuntu/releases/18.04/release/"}
21 | ]
22 |
23 | }
24 |
--------------------------------------------------------------------------------
/include/checksum-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # check_sum
5 | ###################################################################################
6 | check_sum()
7 | {
8 | (
9 | target_dir=$1
10 | checksum_source=$2
11 | checksum_dir=$(cd `dirname $checksum_source` ; pwd)
12 | checksum_file=`basename $checksum_source`
13 |
14 | pushd $target_dir >/dev/null
15 | if [ -f .$checksum_file ]; then
16 | if diff .$checksum_file $checksum_file >/dev/null 2>&1; then
17 | return 0
18 | fi
19 | rm -f .$checksum_file 2>/dev/null
20 | fi
21 |
22 | if ! md5sum --quiet --check $checksum_dir/$checksum_file >/dev/null 2>&1; then
23 | return 1
24 | fi
25 |
26 | cp $checksum_file .$checksum_file
27 |
28 | popd >/dev/null
29 | return 0
30 | )
31 | }
32 |
33 | cal_md5sum()
34 | {
35 | root_dir=$1
36 | cd ${root_dir}
37 | file_list=$(find . -type f)
38 | for file in ${file_list}; do
39 | dir=$(dirname ${file})
40 | name=$(basename ${file})
41 | (cd $dir && md5sum $name > ${name}.MD5SUM)
42 | done
43 | }
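# Illustrative usage of the two helpers above (the paths are hypothetical):
#   cal_md5sum ./binary/arm64      # writes a <name>.MD5SUM beside every file
#   check_sum ./toolchain gcc-aarch64.tar.xz.sum || echo "checksum mismatch"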
44 |
--------------------------------------------------------------------------------
/include/compile-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_cross_compile
5 | ###################################################################################
6 | get_cross_compile()
7 | {
8 | (
9 | arch=$1
10 | toolchain_dir=$2
11 |
12 | if [ x"$arch" != x"x86_64" ]; then
13 | return 0
14 | fi
15 |
16 | cross_compile=$(basename `find $toolchain_dir/bin -name "*-gcc" 2>/dev/null` | grep -Po "(.*-)(?=gcc$)")
17 | echo $cross_compile
18 | )
19 | }
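# Illustrative usage (the toolchain path is hypothetical): on an x86_64 host
# this echoes the cross prefix found under <toolchain_dir>/bin, e.g.
# "aarch64-linux-gnu-"; on any other host architecture it echoes nothing.
#   CROSS_COMPILE=$(get_cross_compile `uname -m` ./toolchain/gcc-aarch64)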
20 |
21 |
--------------------------------------------------------------------------------
/include/deploy-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_usb_device
5 | ###################################################################################
6 | get_usb_device()
7 | {
8 | sudo lshw | grep "bus info: usb" -A 12 | grep "logical name: /dev/sd" | grep -Po "(/dev/sd.*)" | sort
9 | }
10 |
11 | ###################################################################################
12 | # get_1st_usb_storage
13 | ###################################################################################
14 | get_1st_usb_storage()
15 | {
16 | (
17 | root_dev=$(mount | grep " / " | grep -Po "(/dev/sd[^ ]*)")
18 | if [ x"" = x"$root_dev" ]; then
19 | root_dev="/dev/sdx"
20 | fi
21 |
22 | usb_devs=($(get_usb_device | grep -v $root_dev))
23 | echo ${usb_devs[0]}
24 | )
25 | }
26 |
27 | ###################################################################################
28 | # get_deploy_type
29 | ###################################################################################
30 | get_deploy_type()
31 | {
32 | expr "X$1" : 'X\([^:]*\):*.*'
33 | }
34 |
35 | ###################################################################################
36 | # get_deploy_device
37 | ###################################################################################
38 | get_deploy_device()
39 | {
40 | expr "X$1" : 'X[^:]*:\(.*\)'
41 | }
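# Illustrative behaviour of the two parsers above for a deploy spec of the
# form "<type>:<device>" (the spec string itself is hypothetical):
#   get_deploy_type   "usb:/dev/sdb"    # -> usb
#   get_deploy_device "usb:/dev/sdb"    # -> /dev/sdb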
42 |
43 |
--------------------------------------------------------------------------------
/include/distro-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # int download_distros
5 | ###################################################################################
6 | download_distros()
7 | {
8 | (
9 | ftp_cfgfile=$1
10 | ftp_addr=$2
11 | target_dir=$3
12 | distros=($(echo $4 | tr ',' ' '))
13 |
14 | distro_files=(`get_field_content $ftp_cfgfile distro`)
15 | mkdir -p $target_dir
16 | pushd $target_dir >/dev/null
17 | for distro in ${distros[@]}; do
18 | ftp_file=`echo ${distro_files[*]} | tr ' ' '\n' | grep -Po "(?<=${distro}_ARM64.tar.gz:)(.*)"`
19 | distro_file=`basename $ftp_file`
20 |
21 | if [ ! -f ${distro_file}.sum ]; then
22 | wget $ftp_addr/${ftp_file}.sum || return 1
23 | fi
24 |
25 | if [ ! -f $distro_file ] || ! check_sum . ${distro_file}.sum; then
26 | rm -f $distro_file 2>/dev/null
27 | wget ${WGET_OPTS} $ftp_addr/$ftp_file || return 1
28 | check_sum . ${distro_file}.sum || return 1
29 | fi
30 |
31 | if [ x"$distro_file" != x"${distro}_ARM64.tar.gz" ]; then
32 | rm -f ${distro}_ARM64.tar.gz ${distro}_ARM64.tar.gz.sum 2>/dev/null
33 | ln -s $distro_file ${distro}_ARM64.tar.gz
34 | ln -s ${distro_file}.sum ${distro}_ARM64.tar.gz.sum
35 | fi
36 | done
37 | popd >/dev/null
38 |
39 | return 0
40 | )
41 | }
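# Illustrative call (the config file name and target directory are
# hypothetical): fetch the centos and ubuntu rootfs tarballs listed in the
# <distro> section of the ftp config file into ./distro, verifying each
# download against its .sum file:
#   download_distros estuaryftp.xml ftp://117.78.41.188 ./distro "centos,ubuntu"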
42 |
43 | ###################################################################################
44 | # uncompress_distros
45 | ###################################################################################
46 | uncompress_distros()
47 | {
48 | (
49 | distros=($(echo $1 | tr ',' ' '))
50 | src_dir=$2
51 | target_dir=$3
52 | for distro in ${distros[*]}; do
53 | if ! check_file_update $target_dir/${distro} $src_dir/${distro}_ARM64.tar.gz; then
54 | sudo rm -rf $target_dir/$distro
55 | rm -f $target_dir/${distro}_ARM64.tar.gz 2>/dev/null
56 | mkdir -p $target_dir/$distro
57 | if ! uncompress_file_with_sudo $src_dir/${distro}_ARM64.tar.gz $target_dir/$distro; then
58 | echo -e "\033[31mError! Failed to uncompress ${distro}_ARM64.tar.gz!\033[0m" >&2
59 | sudo rm -rf $target_dir/$distro
60 | return 1
61 | else
62 | sudo rm -rf $target_dir/$distro/lib/modules/*
63 | fi
64 | fi
65 | done
66 |
67 | return 0
68 | )
69 | }
70 |
71 | ###################################################################################
72 | # create_distros
73 | ###################################################################################
74 | create_distros()
75 | {
76 | (
77 | distros=($(echo $1 | tr ',' ' '))
78 | distro_dir=$2
79 | target_dir="${distro_dir}/../binary/arm64"
80 | mkdir -p ${target_dir}
81 | for distro in ${distros[*]}; do
82 | target_file="${target_dir}/${distro}_arm64.tar.gz"
83 | if [ ! -d $distro_dir/$distro ]; then
84 | echo "Error! $distro_dir/$distro does not exist!" >&2 ; return 1
85 | fi
86 |
87 | if [ -f ${target_file} ]; then
88 | echo "Checking whether ${target_file} needs to be updated ......"
89 | last_modify=`sudo find $distro_dir/$distro 2>/dev/null -exec stat -c %Y {} \+ | sort -n -r | head -n1`
90 | distro_last_modify=`stat -c %Y ${target_file} 2>/dev/null`
91 | if [[ "$last_modify" -gt "$distro_last_modify" ]]; then
92 | rm -f ${target_file}
93 | else
94 | echo "File ${target_file} does not need to be updated."
95 | continue
96 | fi
97 | fi
98 |
99 | pushd $distro_dir/$distro
100 | if ! (sudo tar cf - . | pigz > ${target_file} ); then
101 | echo "Error! Create ${target_file} failed!" >&2
102 | rm -f ${target_file}
103 | return 1
104 | fi
105 | md5sum ${target_file} > ${target_file}.sum
106 | popd
107 | done
108 |
109 | return 0
110 | )
111 | }
112 |
113 |
--------------------------------------------------------------------------------
/include/doc-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # copy_plat_doc
5 | ###################################################################################
6 | copy_plat_doc()
7 | {
8 | (
9 | plat=$1
10 | src_dir=$2
11 | target_dir=$3
12 | for doc in `find $src_dir -type f -name "*.4$plat.md" 2>/dev/null`; do
13 | target_doc=`basename $doc | sed "s/\(.*\)\(.4$plat\)\(.md\)$/\1\3/"`
14 | cp $doc $target_dir/$target_doc || return 1
15 | done
16 |
17 | return 0
18 | )
19 | }
20 |
21 | ###################################################################################
22 | # copy_all_docs
23 | ###################################################################################
24 | copy_all_docs()
25 | {
26 | (
27 | platforms=`echo $1 | tr ',' ' '`
28 | src_dir=$2
29 | target_dir=$3
30 |
31 | mkdir -p $target_dir/arm64
32 | copy_plat_doc All $src_dir $target_dir/arm64 || return 1
33 |
34 | for plat in ${platforms[*]}; do
35 | mkdir -p $target_dir/$plat
36 | copy_plat_doc $plat $src_dir $target_dir/$plat || return 1
37 | pushd $target_dir/$plat >/dev/null
38 | find . -maxdepth 1 -type l -print | xargs rm -f
39 | find ../arm64/ -maxdepth 1 -type f | xargs -i ln -s {}
40 | popd >/dev/null
41 | done
42 |
43 | return 0
44 | )
45 | }
46 |
47 |
--------------------------------------------------------------------------------
/include/docker-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -x
3 |
4 | docker_run_sh() {
5 | distro=$1
6 | sh_dir=$2
7 | home_dir=$3
8 | envlist_file=$4
9 | script=$5
10 |
11 | usage()
12 | {
13 | echo "Usage: docker_run_sh script_running_disro script_dir home_dir envlist script_name script_options"
14 | }
15 |
16 | if [ $# -lt 5 ]; then
17 | usage
18 | exit 1
19 | fi
20 |
21 | shift 5
22 | script_options=$@
23 | name=$(echo $script| awk -F '.' '{print $1}')
24 | name=$(echo ${name}${top_dir} | sed -e 's#/#-#g' -e 's#@##g' )
25 | eval image="$"${distro}"_image"
26 | localarch=`uname -m`
27 | if [ x"$localarch" = x"x86_64" ]; then
28 | qemu_cmd="-v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static"
29 | fi
30 |
31 | echo "Start container to build."
32 | if [ x"$(docker ps -a|grep ${name})" != x"" ]; then
33 | docker stop ${name}
34 | docker rm ${name}
35 | fi
36 |
37 | mkdir -p log/${distro}
38 | echo "Start container to build."
39 | docker_flag=$(docker network inspect bridge|grep ${name})
40 | if [ x"$docker_flag" != x"" ]; then
41 | docker network disconnect -f bridge ${name}
42 | fi
43 | docker run --privileged=true -i --rm --env-file ${envlist_file} -v ${home_dir}:/root/ \
44 | ${qemu_cmd} --name ${name} ${image} \
45 | bash /root/${sh_dir}/${script} ${script_options} | tee > log/${distro}/${name}
46 |
47 | }
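# Illustrative call (all values hypothetical), mirroring how build-distro.sh
# invokes this helper: run ubuntu-build-kernel.sh inside the ubuntu build
# container, with /home/user mounted at /root and the environment taken from
# the given env.list:
#   docker_run_sh ubuntu estuary/submodules /home/user build/tmp/ubuntu/env.list \
#       ubuntu-build-kernel.sh v5.2 build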
48 |
--------------------------------------------------------------------------------
/include/download-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # download_files_witchout_path
5 | ###################################################################################
6 | download_files_witchout_path()
7 | {
8 | (
9 | target_dir=$1
10 | checksum_source=$2
11 | checksum_dir=$(cd `dirname $checksum_source` ; pwd)
12 | checksum_file=`basename $checksum_source`
13 | file_source=$3
14 |
15 | mkdir -p $target_dir
16 | if ! check_sum_without_path $target_dir $checksum_dir/$checksum_file; then
17 | pushd $target_dir
18 | download_files=($(md5sum --quiet --check $checksum_dir/$checksum_file 2>/dev/null | grep FAILED | cut -d : -f 1))
19 | for download_file in ${download_files[@]}; do
20 | wget ${WGET_OPTS} $file_source/$download_file || return 1
21 | done
22 | popd
23 |
24 | check_sum_without_path $target_dir $checksum_source || return 1
25 | fi
26 |
27 | return 0
28 | )
29 | }
30 |
31 | ###################################################################################
32 | # download_files_with_path
33 | ###################################################################################
34 | download_files_with_path()
35 | {
36 | (
37 | all_files=($(echo $1 | tr ',' ' '))
38 | file_source=$2
39 |
40 | for download_file in ${all_files[@]}; do
41 | target_dir=`dirname $download_file`
42 | mkdir -p $target_dir ; rm -f $download_file 2>/dev/null
43 | wget ${WGET_OPTS} $file_source/$download_file -O $download_file || return 1
44 | done
45 |
46 | return 0
47 | )
48 | }
49 |
50 |
--------------------------------------------------------------------------------
/include/estuary-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # string get_estuary_version
5 | ###################################################################################
6 | get_estuary_version()
7 | {
8 | local version_regexp="(?<=ref: ).*"
9 | local estuary_dir=$1
10 | local current_version=`grep -Po "$version_regexp" $estuary_dir/.git/HEAD 2>/dev/null | sed 's/.*\///g' 2>/dev/null`
11 | current_version=${current_version:-master}
12 | echo $current_version
13 | }
14 |
15 | ###################################################################################
16 | # print_version
17 | ###################################################################################
18 | print_version()
19 | {
20 | local estuary_dir=$1
21 | local current_version=`get_estuary_version $estuary_dir`
22 | if [ x"$current_version" = x"master" ]; then
23 | echo "This is a developing version."
24 | else
25 | echo "Estuary version is $current_version."
26 | fi
27 | }
28 |
29 |
--------------------------------------------------------------------------------
/include/file-check.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # int check_file_update ...
5 | ###################################################################################
6 | check_file_update()
7 | {
8 | (
9 | target_file=$1
10 | shift
11 | target_file_modify=`stat -L -c %Y "$target_file" 2>/dev/null` || return 1
12 | for f in $@; do
13 | dep_file_modify=`stat -L -c %Y "$f" 2>/dev/null` || return 1
14 | [[ "$dep_file_modify" -gt "$target_file_modify" ]] && return 1
15 | done
16 | return 0
17 | )
18 | }
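# Illustrative usage (hypothetical files): returns 0 only if the target file
# exists and none of the listed dependencies is newer than it, otherwise 1:
#   check_file_update build/rootfs.tar.gz distro/centos_ARM64.tar.gz || echo "rebuild needed"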
19 |
--------------------------------------------------------------------------------
/include/fileop-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_compress_file_prefix
5 | ###################################################################################
6 | get_compress_file_prefix()
7 | {
8 | (
9 | src_file=`basename $1`
10 | postfix=$(echo $src_file | grep -Po "((\.tar)*\.(tar|bz2|gz|xz)$)" 2>/dev/null)
11 | prefix=${src_file%$postfix}
12 | echo $prefix
13 | )
14 | }
15 |
16 | ###################################################################################
17 | # get_compress_file_postfix
18 | ###################################################################################
19 | get_compress_file_postfix()
20 | {
21 | (
22 | src_file=`basename $1`
23 | postfix=$(echo $src_file | grep -Po "((\.tar)*\.(tar|bz2|gz|xz)$)" 2>/dev/null)
24 | echo $postfix
25 | )
26 | }
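# Illustrative behaviour (the file name is hypothetical):
#   get_compress_file_prefix  gcc-aarch64.tar.xz    # -> gcc-aarch64
#   get_compress_file_postfix gcc-aarch64.tar.xz    # -> .tar.xz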
27 |
28 | ###################################################################################
29 | # uncompress_file
30 | ###################################################################################
31 | uncompress_file()
32 | {
33 | (
34 | src_file=$1
35 | target_dir=$2
36 | if [ x"$target_dir" = x"" ]; then
37 | target_dir="./"
38 | fi
39 |
40 | postfix=`get_compress_file_postfix $src_file`
41 | case $postfix in
42 | .tar.bz2 | .tar.gz | .tar.xz | .xz | .tbz)
43 | if ! tar xf $src_file -C $target_dir >/dev/null 2>&1; then
44 | return 1
45 | fi
46 | ;;
47 | .gz)
48 | if ! pigz -d $src_file >/dev/null 2>&1; then
49 | return 1
50 | fi
51 | ;;
52 | *)
53 | echo -e "\033[31mCannot find a suitable root filesystem!\033[0m" ; return 1
54 | ;;
55 | esac
56 |
57 | return 0
58 | )
59 | }
60 |
61 | ###################################################################################
62 | # uncompress_file_with_sudo
63 | ###################################################################################
64 | uncompress_file_with_sudo()
65 | {
66 | (
67 | src_file=$1
68 | target_dir=$2
69 | if [ x"$target_dir" = x"" ]; then
70 | target_dir="./"
71 | fi
72 |
73 | postfix=`get_compress_file_postfix $src_file`
74 | case $postfix in
75 | .tar.bz2 | .tar.gz | .tar.xz | .xz | .tbz)
76 | if ! sudo tar --use-compress-program=pigz -xf $src_file -C $target_dir >/dev/null 2>&1; then
77 | return 1
78 | fi
79 | ;;
80 | .gz)
81 | if ! sudo pigz -d $src_file >/dev/null 2>&1; then
82 | return 1
83 | fi
84 | ;;
85 | *)
86 | echo -e "\033[31mCannot find a suitable root filesystem!\033[0m" ; return 1
87 | ;;
88 | esac
89 |
90 | return 0
91 | )
92 | }
93 |
94 |
--------------------------------------------------------------------------------
/include/ftp-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # check_ftp_update
5 | ###################################################################################
6 | check_ftp_update()
7 | {
8 | local estuary_version=$1
9 | local estuary_dir=$2
10 | local local_update_date=`cat .${estuary_version}.initialize 2>/dev/null`
11 | local last_commit_date=`get_last_commit_date $estuary_dir`
12 | if [ ! -f ${estuary_version}.xml ] || [ x"$local_update_date" != x"$last_commit_date" ]; then
13 | return 1
14 | fi
15 |
16 | return 0
17 | }
18 |
19 | ###################################################################################
20 | # int update_ftp_cfgfile
21 | ###################################################################################
22 | update_ftp_cfgfile()
23 | {
24 | local estuary_version=$1
25 | local ftp_addr=$2
26 | local estuary_dir=$3
27 |
28 | local last_commit_date=`get_last_commit_date $estuary_dir`
29 | rm -f ${estuary_version}.xml .${estuary_version}.initialize 2>/dev/null
30 |
31 | wget -c $ftp_addr/config/${estuary_version}.xml || return 1
32 | echo $last_commit_date > .${estuary_version}.initialize
33 |
34 | return 0
35 | }
36 |
37 |
--------------------------------------------------------------------------------
/include/git-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_last_commit_date
5 | ###################################################################################
6 | get_last_commit_date()
7 | {
8 | (
9 | target_dir=$1
10 | pushd $target_dir >/dev/null
11 | version=`git log -n 1 2>/dev/null | grep -Po "^(Date:).*" | sed 's/Date: *//g'`
12 | version=${version:-master}
13 | echo $version
14 | popd >/dev/null
15 | )
16 | }
17 |
18 | ###################################################################################
19 | # get_last_commit
20 | ###################################################################################
21 | get_last_commit()
22 | {
23 | (
24 | target_dir=$1
25 | pushd $target_dir >/dev/null
26 | git log -n 1 2>/dev/null | grep -P "^(commit ).*" | awk '{print $2}'
27 | popd >/dev/null
28 | )
29 | }
30 |
31 |
--------------------------------------------------------------------------------
/include/initialize-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | check_build_permission()
4 | {
5 | if [ "$dname" != "/root" ] && [ "$dname" != "/home" ]; then
6 | echo -e "\033[31mERROR: please move estuary to user's HOME directory!!!\033[0m"
7 | exit 1
8 | fi
9 | }
10 |
11 | check_docker_running_permission()
12 | {
13 | if [ "$USER" != "root" ] && [ -z "$(groups|grep docker)" ]; then
14 | sudo groupadd docker || true
15 | sudo usermod -aG docker $USER
16 | sudo systemctl start docker
17 | echo -e "\033[31m warning: user just add into docker group, please re-login!!!\033[0m"
18 | exit 1
19 | fi
20 |
21 | }
22 |
23 | install_dev_tools_debian()
24 | {
25 | sudo apt-get install -y git jq bc libssl-dev unzip make build-essential \
26 | qemu qemu-user-static qemu-user binfmt-support flex bison gcc pigz
27 | check_docker_running_permission
28 | }
29 |
30 | install_dev_tools_ubuntu()
31 | {
32 | sudo apt-get install -y git jq docker.io bc libssl-dev unzip make build-essential \
33 | qemu qemu-user-static qemu-user binfmt-support flex bison gcc pigz
34 | check_docker_running_permission
35 | }
36 |
37 | ###################################################################################
38 | # int install_dev_tools_centos
39 | ###################################################################################
40 | install_dev_tools_centos()
41 | {
42 | pkglist="autoconf automake libtool python git docker bc openssl-devel unzip gcc jq pigz make bison flex"
43 | install_available=`yum info epel-release $pkglist |grep "Available Packages"`
44 | if [ x"$install_available" != x"" ]; then
45 | yum install -y epel-release
46 | yum install -y $pkglist
47 | fi
48 |
49 | check_docker_running_permission
50 |
51 | return 0
52 | }
53 |
54 | ###################################################################################
55 | # install_dev_tools
56 | ###################################################################################
57 | install_dev_tools()
58 | {
59 | local host_distro=$(cat /etc/os-release |grep ^ID=|awk -F '=' '{print $2}')
60 |
61 | # strip the surrounding quotes from values like ID="centos"
62 | if [ -n "$(echo ${host_distro}|grep \")" ]; then
63 | host_distro=$(echo ${host_distro}|awk -F \" '{print $2}')
64 | fi
65 |
66 | if ! declare -F install_dev_tools_${host_distro} >/dev/null; then
67 | echo "Unspported distro!" >&2; return 1
68 | fi
69 |
70 | install_dev_tools_${host_distro}
71 |
72 | docker_status=`service docker status|grep "running"`
73 | if [ x"$docker_status" = x"" ]; then
74 | service docker start
75 | fi
76 | }
77 |
78 | ###################################################################################
79 | # update_acpica_tools
80 | ###################################################################################
81 | update_acpica_tools()
82 | {
83 | if [ ! -d acpica ]; then
84 | git clone https://github.com/acpica/acpica.git
85 | fi
86 |
87 | corenum=`cat /proc/cpuinfo | grep "processor" | wc -l`
88 | (cd acpica/generate/unix && make -j${corenum} && sudo make install)
89 | }
90 |
91 | check_arch()
92 | {
93 | if [ "$(uname -m)" != "aarch64" ]; then
94 | echo -e "\033[31mError: build.sh script only run on arm64 server!!\033[0m"
95 | exit 1
96 | fi
97 | }
98 |
99 | check_running_not_in_container()
100 | {
101 | if [ -f /.dockerenv ]; then
102 | echo -e "\033[31mError: build.sh script can't run inside container!!\033[0m"
103 | exit 1
104 | fi
105 | }
106 |
--------------------------------------------------------------------------------
/include/mirror-func.sh:
--------------------------------------------------------------------------------
1 | set_debian_mirror()
2 | {
3 | if [ -n "${DEBIAN_MIRROR}" ]; then
4 | local default_mirror="http://deb.debian.org/debian"
5 | sed -i "s#${default_mirror}#${DEBIAN_MIRROR}#" \
6 | /etc/apt/sources.list
7 | fi
8 | debian_region="${estuary_repo} ${estuary_dist}"
9 | echo -e "deb ${debian_region} main\ndeb-src ${debian_region} main" > /etc/apt/sources.list.d/estuary.list
10 | }
11 |
12 | set_ubuntu_mirror()
13 | {
14 |
15 | if [ -n "${UBUNTU_MIRROR}" ]; then
16 | local default_mirror="http://ports.ubuntu.com/ubuntu-ports"
17 | sed -i "s#${default_mirror}#${UBUNTU_MIRROR}#" \
18 | /etc/apt/sources.list
19 | fi
20 | debian_region="${estuary_repo} ${estuary_dist}"
21 | echo -e "deb ${debian_region} main\ndeb-src ${debian_region} main" > /etc/apt/sources.list.d/estuary.list
22 |
23 | }
24 | set_fedora_mirror()
25 | {
26 | if [ -n "${FEDORA_MIRROR}" ]; then
27 | local mirror=${FEDORA_MIRROR}
28 | sed -i "s#http://download.fedoraproject.org/pub/fedora/linux#${mirror}#g" /etc/yum.repos.d/fedora.repo /etc/yum.repos.d/fedora-updates.repo
29 | sed -i '1,/metalink/{s/metalink/#metalink/}' /etc/yum.repos.d/fedora.repo /etc/yum.repos.d/fedora-updates.repo
30 | sed -i '1,/#baseurl/{s/#baseurl/baseurl/}' /etc/yum.repos.d/fedora.repo /etc/yum.repos.d/fedora-updates.repo
31 | fi
32 | if [ -n "${FEDORA_ESTUARY_REPO}" ]; then
33 | local mirror=${FEDORA_ESTUARY_REPO}
34 | docker_mirror="ftp://repoftp:repopushez7411@117.78.41.188/releases/.*/fedora"
35 | sed -i "s#${docker_mirror}#${mirror}#g" /etc/yum.repos.d/estuary.repo
36 | fi
37 | }
38 | set_centos_mirror()
39 | {
40 | if [ -n "${CENTOS_MIRROR}" ]; then
41 | local mirror=${CENTOS_MIRROR}
42 | docker_mirror="http://mirror.centos.org/altarch/\$releasever/os/\$basearch/"
43 | sed -i "s#${docker_mirror}#${mirror}#g" /etc/yum.repos.d/CentOS-Base.repo
44 | fi
45 | if [ -n "${estuary_repo}" ]; then
46 | local mirror=${estuary_repo}
47 | docker_mirror="ftp://repoftp:repopushez7411@117.78.41.188/releases/.*/centos"
48 | sed -i "s#${docker_mirror}#${mirror}#g" /etc/yum.repos.d/estuary.repo
49 | fi
50 | rpm --import /root/estuary/ESTUARY-GPG-KEY
51 | }
52 | set_docker_loop()
53 | {
54 | seq 0 7 | xargs -I {} mknod -m 660 /dev/loop{} b 7 {} || true
55 | chgrp disk /dev/loop[0-7]
56 | }
57 |
--------------------------------------------------------------------------------
/include/toolchain-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # get_toolchain
5 | ###################################################################################
6 | get_toolchain()
7 | {
8 | (
9 | ftp_cfgfile=$1
10 | arch=$2
11 | get_field_content $ftp_cfgfile toolchain | tr ' ' '\n' | grep $arch | awk -F ':' '{print $1}'
12 | )
13 | }
14 |
15 | ###################################################################################
16 | # int download_toolchains
17 | ###################################################################################
18 | download_toolchains()
19 | {
20 | (
21 | ftp_cfgfile=$1
22 | ftp_addr=$2
23 | target_dir=$3
24 | toolchain_files=(`get_field_content $ftp_cfgfile toolchain`)
25 |
26 | pushd $target_dir >/dev/null
27 | for toolchain in ${toolchain_files[*]}; do
28 | target_file=`expr "X$toolchain" : 'X\([^:]*\):.*' | sed 's/ //g'`
29 | target_addr=`expr "X$toolchain" : 'X[^:]*:\(.*\)' | sed 's/ //g'`
30 | toolchain_file=`basename $target_addr`
31 |
32 | if [ ! -f ${toolchain_file}.sum ]; then
33 | rm -f .${toolchain_file}.sum 2>/dev/null
34 | wget $ftp_addr/${target_addr}.sum || return 1
35 | fi
36 |
37 | if [ ! -f $toolchain_file ] || ! check_sum . ${toolchain_file}.sum; then
38 | rm -f $toolchain_file 2>/dev/null
39 | wget ${WGET_OPTS} $ftp_addr/$target_addr || return 1
40 | check_sum . ${toolchain_file}.sum || return 1
41 | fi
42 |
43 | if [ x"$target_file" != x"$toolchain_file" ]; then
44 | rm -f $target_file 2>/dev/null
45 | ln -s $toolchain_file $target_file
46 | fi
47 | done
48 | popd >/dev/null
49 |
50 | return 0
51 | )
52 | }
53 |
54 | ###################################################################################
55 | # int copy_toolchains
56 | ###################################################################################
57 | copy_toolchains()
58 | {
59 | (
60 | ftp_cfgfile=$1
61 | src_dir=$2
62 | target_dir=$3
63 | toolchain_files=(`get_field_content $ftp_cfgfile toolchain | tr ' ' '\n' | awk -F ':' '{print $1}'`)
64 | for toolchain_file in ${toolchain_files[*]}; do
65 | if ! (diff $src_dir/${toolchain_file}.sum $target_dir/.${toolchain_file}.sum >/dev/null 2>&1) \
66 | || [ ! -f $target_dir/$toolchain_file ]; then
67 | rm -f $target_dir/$toolchain_file 2>/dev/null
68 | cp $src_dir/$toolchain_file $target_dir/$toolchain_file || return 1
69 | rm -f $target_dir/.${toolchain_file}.sum 2>/dev/null
70 | cp $src_dir/${toolchain_file}.sum $target_dir/.${toolchain_file}.sum || return 1
71 | fi
72 | done
73 |
74 | return 0
75 | )
76 | }
77 |
78 | ###################################################################################
79 | # int uncompress_toolchains
80 | ###################################################################################
81 | uncompress_toolchains()
82 | {
83 | (
84 | ftp_cfgfile=$1
85 | src_dir=$2
86 |
87 | toolchain_files=(`get_field_content $ftp_cfgfile toolchain | tr ' ' '\n' | awk -F ':' '{print $1}'`)
88 |
89 | for toolchain_file in ${toolchain_files[*]}; do
90 | toolchain_dir=`get_compress_file_prefix $toolchain_file`
91 | if [ ! -d $src_dir/$toolchain_dir ]; then
92 | if ! uncompress_file $src_dir/$toolchain_file $src_dir; then
93 | rm -rf $src_dir/$toolchain_dir 2>/dev/null ; return 1
94 | fi
95 | fi
96 | done
97 |
98 | return 0
99 | )
100 | }
101 |
102 | ###################################################################################
103 | # int install_toolchain
104 | ###################################################################################
105 | install_toolchain()
106 | {
107 | (
108 | ftp_cfgfile=$1
109 | src_dir=$2
110 | toolchain_files=(`get_field_content $ftp_cfgfile toolchain | tr ' ' '\n' | awk -F ':' '{print $1}'`)
111 |
112 | for toolchain_file in ${toolchain_files[*]}; do
113 | toolchain_dir=`get_compress_file_prefix $toolchain_file`
114 | if [ ! -d /opt/$toolchain_dir ]; then
115 | if ! sudo cp -r $src_dir/$toolchain_dir/ /opt/; then
116 | return 1
117 | fi
118 |
119 | str='export PATH=$PATH:/opt/'$toolchain_dir'/bin'
120 | if ! grep "$str" ~/.bashrc >/dev/null; then
121 | echo "$str">> ~/.bashrc
122 | fi
123 | fi
124 | done
125 |
126 | return 0
127 | )
128 | }
129 |
130 |
--------------------------------------------------------------------------------
/include/xml-func.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###################################################################################
4 | # string[] get_field_content
5 | ###################################################################################
6 | get_field_content()
7 | {
8 | local xml_file=$1
9 | local field=$2
10 | local xml_content=(`sed -n "/<$field>/,/<\/$field>/p" $xml_file 2>/dev/null | sed -e '/^$/d' | sed 's/ //g'`)
11 | unset xml_content[0]
12 | unset xml_content[${#xml_content[@]}]
13 | echo ${xml_content[*]}
14 | }
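# Illustrative usage (the file name and its content are hypothetical): for an
# xml file containing
#   <toolchain>
#   gcc-aarch64.tar.xz:toolchain/gcc-aarch64.tar.xz
#   </toolchain>
# the call below echoes the "name:path" lines between the tags, with the
# opening and closing tag lines stripped:
#   get_field_content estuaryftp.xml toolchain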
15 |
16 |
--------------------------------------------------------------------------------
/submodules/build-distro.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -x
3 |
4 | top_dir=$(cd `dirname $0`; cd .. ; pwd)
5 | sh_dir=${top_dir}/submodules
6 | . ${top_dir}/Include.sh
7 | home_dir=$(cd ${top_dir}/.. ; pwd)
8 |
9 |
10 | usage()
11 | {
12 | echo "Usage: ./build_distro.sh --distro=DISTRO --version=VERSION --envlist=ENVLIST --build_dir=BUILD_DIR --build_kernel=FALSE"
13 | }
14 |
15 | if [ $# -ne 5 ]; then
16 | usage
17 | exit 1
18 | fi
19 |
20 | ###################################################################################
21 | # get args
22 | ###################################################################################
23 | while test $# != 0
24 | do
25 | case $1 in
26 | --*=*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ; ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)' || true` ;;
27 | *) ac_option=$1 ;;
28 | esac
29 |
30 | case $ac_option in
31 | --distro) distro=$ac_optarg ;;
32 | --version) version=$ac_optarg ;;
33 | --envlist) envlist=$ac_optarg ;;
34 | --build_dir) build_dir=$ac_optarg ;;
35 | --build_kernel) build_kernel=$ac_optarg ;;
36 | *) echo "Unknown option $ac_option!"
37 | usage ; exit 1 ;;
38 | esac
39 |
40 | shift
41 | done
42 |
43 |
44 | envlist_dir=${build_dir}/tmp/${distro}
45 | envlist_file=${envlist_dir}/env.list
46 | build_absolute_dir=${build_dir}
47 |
48 | # get relative path
49 | sh_dir=$(echo $sh_dir| sed "s#$home_dir/##")
50 | build_dir=$(echo $build_dir| sed "s#$home_dir/##")
51 |
52 | # generate env.list
53 | mkdir -p ${envlist_dir}
54 | rm -f ${envlist_file}
55 | touch ${envlist_file}
56 | for var in ${envlist}; do
57 | echo ${var} >> ${envlist_file}
58 | done
59 | sort -n ${envlist_file} | uniq > test.txt
60 | cat test.txt > ${envlist_file}
61 | rm -f test.txt
62 | rm -f ${build_absolute_dir}/build-${distro}-kernel
63 |
64 | # Notice:
65 | # Build the kernel packages and the ISO separately.
66 | # The build process is:
67 | # 1) build the kernel packages and upload them to the estuary repo.
68 | # This stage should be moved to
69 | # https://github.com/open-estuary/distro-repo in the future.
70 | # 2) build the installer and the ISO; this stage fetches the kernel
71 | # packages from the estuary repo.
72 | if [ "${distro}" == "common" ];then
73 | ./submodules/${distro}-build-kernel.sh ${version} ${build_absolute_dir}
74 | if [ $? -ne 0 ]; then
75 | exit 1
76 | else
77 | exit 0
78 | fi
79 | fi
80 |
81 | if [ x"$build_kernel" != x"false" ]; then
82 | # 1) build kernel
83 | docker_run_sh ${distro} ${sh_dir} ${home_dir} ${envlist_file} \
84 | ${distro}-build-kernel.sh ${version} ${build_dir}
85 | touch ${build_absolute_dir}/build-${distro}-kernel
86 | fi
87 | # 2) build installer
88 | docker_run_sh ${distro} ${sh_dir} ${home_dir} ${envlist_file} \
89 | ${distro}-build-installer.sh ${version} ${build_dir}
90 |
91 | # 3) build iso
92 | docker_run_sh ${distro} ${sh_dir} ${home_dir} ${envlist_file} \
93 | ${distro}-build-iso.sh ${version} ${build_dir}
94 |
95 | # 4) build rootfs tar
96 | # 5) calculate md5sum
97 | ./submodules/${distro}-calculate-md5sum.sh ${version} ${build_absolute_dir}
98 |
--------------------------------------------------------------------------------
/submodules/build-grub.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | GRUB_DIR=grub
4 | TOPDIR=$(cd `dirname $0` ; pwd)
5 |
6 | ###################################################################################
7 | # build arguments
8 | ###################################################################################
9 | CLEAN=
10 | PLATFORM=
11 | OUTPUT_DIR=
12 | CROSS_COMPILE=
13 |
14 | ###################################################################################
15 | # Include
16 | ###################################################################################
17 | . $TOPDIR/submodules-common.sh
18 |
19 | ###################################################################################
20 | # build_grub_usage
21 | ###################################################################################
22 | build_grub_usage()
23 | {
24 | cat << EOF
25 | Usage: build-grub.sh [clean] --cross=xxx --output=xxx
26 | clean: clean the grub binary files
27 | --cross: cross compile prefix (must be specified if the host is not an ARM architecture.)
28 | --output: target binary output directory
29 |
30 | Example:
31 | build-grub.sh --output=./workspace
32 | build-grub.sh --output=./workspace --cross=aarch64-linux-gnu-
33 | build-grub.sh clean --output=./workspace
34 |
35 | EOF
36 | }
37 |
38 | ###################################################################################
39 | # build_check